Compiler Tuning for Simulator Speedup. Innovation in Verification
by Bernard Murphy on 11-27-2024 at 6:00 am


Modern simulators map logic designs into software that is compiled for native execution on the target hardware. Can this compile step be further optimized? Paul Cunningham (GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO and now Silvaco CTO) and I continue our series on research ideas. As always, feedback welcome.

The Innovation

This month’s pick is Efficient Compiler Autotuning via Bayesian Optimization. This was published at the 2021 IEEE/ACM International Conference on Software Engineering (ICSE) and has 22 citations. The authors are from Tianjin University, China and Newcastle University, Australia.

Compiled code simulation is the standard for meeting performance needs in software-based simulation, so it should benefit from advances in compiler technology from the software world. GCC and LLVM compilers already support many optimization options. For ease of use, best-case sequences of options are bundled into the -O1/-O2/-O3 flags to improve application runtime, with the defaults determined by averaging over large codebases and workloads. An obvious question is whether a different sequence delivering even better performance might be possible for a specific application.

This is an active area of research in software engineering, looking not only at which compiler options to select (eg function inlining) but also in what order these options should appear in the sequence, since options are not necessarily independent (A before B might deliver different performance versus B before A).
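To make this concrete, below is a minimal sketch of a flag-tuning harness, assuming a local GCC install and a hypothetical single-file benchmark bench.c (neither is from the paper). It compiles the benchmark with different orderings of a few real GCC switches and times the result; whether a particular reordering actually changes the generated code depends on the compiler version.

```python
# Minimal flag-sequence timing harness (illustrative only, not the paper's tooling).
# Assumes gcc is on PATH and a hypothetical benchmark source file bench.c exists.
import itertools
import subprocess
import time

BASE = ["gcc", "bench.c", "-O2", "-o", "bench"]
# A few real GCC optimization switches whose interaction we want to probe.
FLAGS = ["-finline-functions", "-funroll-loops", "-ftree-vectorize"]

def compile_and_time(flag_sequence):
    subprocess.run(BASE + list(flag_sequence), check=True)  # build with this sequence
    start = time.perf_counter()
    subprocess.run(["./bench"], check=True)                 # run the workload once
    return time.perf_counter() - start

# Try every ordering of the chosen switches and report the fastest one.
results = {seq: compile_and_time(seq) for seq in itertools.permutations(FLAGS)}
best_seq, best_time = min(results.items(), key=lambda kv: kv[1])
print("fastest sequence:", " ".join(best_seq), f"({best_time:.3f} s)")
```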

Paul’s view

Using machine learning to pick tool switches in Place & Route to improve PPA is one of the best examples of commercially deployed AI in the EDA industry today. A similar concept can be applied to picking compiler switches in logic simulation to try and improve performance. Here, there are clear similarities to picking C/C++ compiler switches, known as “compiler autotuning” in academic circles.

In this month’s paper, the authors use a modified Bayesian algorithm to try to beat -O3 in GCC. They use a benchmark suite of 20 small ~1k-line C programs (matrix math operations, image processing, file compression, hashing) and consider about 70 different low-level GCC switches. The key innovation in the paper is to use tree-based models (random forests) as the Bayesian predictor rather than a Gaussian process, and, during training, to quickly narrow down to 8 “important” switches and heavily explore permutations of those 8.

Overall, their method achieves an average 20% speed-up over -O3. Compared to other state-of-the-art methods, this 20% speed-up is achieved with about 2.5x less training compute. Unfortunately, all their results use a version of GCC that is 12 years old, which the authors acknowledge at the end of their paper, along with a comment that they did try a more recent version of GCC and were able to achieve only a 5% speed-up over -O3. Still, a nice paper, and I do think the general area of compiler autotuning can be applied to improve logic simulation performance.
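As a rough illustration of how such a loop could be wired up, here is a sketch of Bayesian optimization with a random-forest surrogate over binary flag vectors. It is not the authors' code: measure_runtime is a hypothetical stand-in for "compile and time a benchmark", and the flag count and iteration budget are arbitrary.

```python
# Sketch of a random-forest-surrogate Bayesian optimization loop over compiler flags.
import numpy as np
from scipy.stats import norm
from sklearn.ensemble import RandomForestRegressor

N_FLAGS = 70
rng = np.random.default_rng(0)

def measure_runtime(cfg):
    # Hypothetical placeholder objective (lower is better); replace with compile-and-time.
    return float(cfg.sum()) + rng.normal(scale=0.5)

X = rng.integers(0, 2, size=(10, N_FLAGS))            # initial random flag settings
y = np.array([measure_runtime(c) for c in X])

for _ in range(30):                                    # tuning iterations
    surrogate = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
    cand = rng.integers(0, 2, size=(500, N_FLAGS))     # random candidate settings
    per_tree = np.stack([t.predict(cand) for t in surrogate.estimators_])
    mu, sigma = per_tree.mean(axis=0), per_tree.std(axis=0) + 1e-9
    z = (y.min() - mu) / sigma                         # expected improvement (minimization)
    ei = (y.min() - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    nxt = cand[np.argmax(ei)]
    X = np.vstack([X, nxt])
    y = np.append(y, measure_runtime(nxt))

print("best runtime found:", y.min())
```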

Raúl’s view

Our penultimate paper for 2024 addresses setting optimization flags in compilers to achieve the fastest code execution (presumably other objective functions, such as code size or energy expended during computation, could have been used). The compilers studied, GCC and LLVM, expose 71 and 64 optimization flags respectively, so the optimization spaces are vast at 2^71 and 2^64. Previous approaches use random iterative optimization, genetic algorithms, and Irace (which tunes parameters by finding the most appropriate settings for a given set of instances of an optimization problem, a form of “learning”). The authors’ system is called BOCA.

This paper uses Bayesian optimization, an iterative method that optimizes an objective function by using the knowledge accumulated over the explored part of the search space to guide sampling in the remaining area, in order to find the optimal sample. It builds a surrogate model that can be evaluated quickly, typically a Gaussian Process (GP, not explained in the paper), which does not scale to high dimensionality (here, the number of flags). BOCA uses a Random Forest instead (RF, also not explained in the paper). To further improve the search, optimizations are ranked as “impactful” or “less impactful” using Gini importance to measure the impact of each optimization. Less impactful optimizations are considered only in a limited number of iterations, i.e., they “decay”.
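A minimal sketch of that impactful/less-impactful split might look like the following, assuming a fitted random-forest surrogate over binary flag vectors as in the previous snippet; top_k=8 and the decay rate are illustrative choices, not values from the paper.

```python
# Rank flags by Gini importance, explore the top-k exhaustively, and let the
# remaining ("less impactful") flags decay over iterations. Illustrative only.
from itertools import product
import numpy as np

def propose_candidates(surrogate, n_flags, iteration, rng, top_k=8, decay=0.5):
    order = np.argsort(surrogate.feature_importances_)[::-1]   # Gini-importance ranking
    impactful, rest = order[:top_k], order[top_k:]
    n_rest = int(len(rest) * decay ** iteration)                # fewer random flips later on
    candidates = []
    for combo in product([0, 1], repeat=top_k):                 # all 2^top_k impactful settings
        cfg = np.zeros(n_flags, dtype=int)
        cfg[impactful] = combo
        if n_rest:
            cfg[rng.choice(rest, size=n_rest, replace=False)] = 1
        candidates.append(cfg)
    return np.array(candidates)
```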

The authors benchmark the two compilers on 20 benchmarks against other state-of-the-art approaches, listing the results for 30 to 60 iterations. BOCA reaches given target speedups in 43%-78% less tuning time. Against the compilers’ highest optimization setting (-O3), BOCA achieves a speedup of 1.25x for GCC and 1.13x for LLVM. Notably, using around 8 impactful optimizations works best, as more can slow BOCA down. The speedup is limited when using more recent GCC versions, at 1.04-1.06x.

These techniques yield incremental improvements. They would certainly be significant in hardware design, where they can be used for simulation and perhaps for setting optimization flags during synthesis and layout, where AI approaches are now being adopted by EDA vendors. Time will tell.

Also Read:

Cadence Paints a Broad Canvas in Automotive

Analog IC Migration using AI

The Next LLM Architecture? Innovation in Verification

Emerging Growth Opportunity for Women in AI

 


Scaling AI Data Centers: The Role of Chiplets and Connectivity
by Kalar Rajendiran on 11-26-2024 at 6:00 am


Artificial intelligence (AI) has revolutionized data center infrastructure, requiring a reimagining of computational, memory, and connectivity technologies. Meeting the increasing demand for high performance and efficiency in AI workloads has led to the emergence of innovative solutions, including chiplets, advanced interconnects, and optical communication systems. These technologies are transforming data centers into scalable, flexible ecosystems optimized for AI-driven tasks.

Alphawave Semi is actively advancing this ecosystem, offering a portfolio of chiplets, high-speed interconnect IP, and design solutions that power next-generation AI systems.

Custom Silicon Solutions Through Chiplets

Chiplet technology is at the forefront of creating custom silicon solutions that are specifically optimized for AI workloads. Unlike traditional monolithic chips, chiplets are modular, enabling manufacturers to combine different components—compute, memory, and input/output functions—into a single package. This approach allows for greater customization, faster development cycles, and more cost-effective designs. The Universal Chiplet Interconnect Express (UCIe) is a critical enabler of this innovation, providing a standardized die-to-die interface that supports high bandwidth, energy efficiency, and seamless communication between chiplets. This ecosystem paves the way for tailored silicon solutions that deliver the performance AI workloads demand, while also addressing power efficiency and affordability.

Scaling AI Clusters Through Advanced Connectivity

Connectivity technologies are the backbone of scaling AI clusters and geographically distributed data centers. The deployment of AI workloads in these infrastructures requires high-bandwidth, low-latency communication between thousands of interconnected processors, memory modules, and storage units. While traditional Ethernet-based front-end networks remain critical for server-to-server communication, AI workloads place unprecedented demands on back-end networks. These back-end networks facilitate the seamless exchange of data between AI accelerators, such as GPUs and TPUs, which is essential for large-scale training and inference tasks. Any inefficiency, such as packet loss or high latency, can lead to significant compute resource wastage, underlining the importance of robust connectivity solutions. Optical connectivity, including silicon photonics and co-packaged optics (CPO), is increasingly replacing copper-based connections, delivering the bandwidth density and energy efficiency required for scaling AI infrastructure. These technologies enable AI clusters to grow from hundreds to tens of thousands of nodes while maintaining performance and reliability.

Memory Disaggregation for Resource Optimization

AI workloads also demand innovative approaches to memory and storage connectivity. Traditional data center architectures often suffer from underutilized memory resources, leading to inefficiencies. Memory disaggregation, enabled by Compute Express Link (CXL), is a transformative solution. By centralizing memory into shared pools, disaggregated architectures ensure better utilization of resources, reduce overall costs, and improve power efficiency. CXL extends connectivity beyond individual servers and racks, requiring advanced optical solutions to maintain low-latency access over longer distances. This approach ensures that memory can be allocated dynamically, optimizing performance for demanding AI applications while providing significant savings in operational costs.

The Emergence of the Chiplet Ecosystem

A thriving chiplet ecosystem is emerging, fueled by advances in die-to-die interfaces like UCIe. This ecosystem allows for a wide variety of chiplet use cases, enabling modular and flexible design architectures that support the scalability and customization needs of AI workloads. This modular approach is not limited to high-performance computing; it also has implications for distributed AI systems and edge computing. Chiplets are enabling the creation of custom compute-hardware for edge AI applications, ensuring that AI models can operate closer to users for faster response times. Similarly, distributed learning architectures—where data privacy is a concern—rely on chiplet-based solutions to train AI models efficiently without sharing sensitive information.

Summary

AI is redefining data center infrastructure, necessitating solutions that balance performance, scalability, and efficiency. Chiplets, advanced connectivity technologies, and memory disaggregation are critical enablers of this transformation. Together, they offer the means to scale AI workloads affordably while maintaining energy efficiency and reducing time-to-market for new solutions. By harnessing these innovations, data centers are better equipped to handle the demands of AI, paving the way for more powerful, efficient, and scalable computing solutions.

  • Chiplet technology enables tailored silicon solutions optimized for AI workloads, offering affordability, lower power consumption, and faster deployment cycles.
  • Optical communication technologies, such as silicon photonics and co-packaged optics, are vital to scaling AI clusters and distributed data centers.
  • Memory disaggregation via CXL maximizes resource utilization while reducing costs and energy consumption.

Learn more at https://awavesemi.com/

Also Read:

How AI is Redefining Data Center Infrastructure: Key Innovations for the Future

Elevating AI with Cutting-Edge HBM4 Technology

Alphawave Semi Unlocks 1.2 TBps Connectivity for HPC and AI Infrastructure with 9.2 Gbps HBM3E Subsystem


One Thousand Production Licenses Means Silicon Creations PLL IP is Everywhere
by Mike Gianfagna on 11-25-2024 at 10:00 am

Spread Spectrum Modulator RTL IP provides industry standard and custom modulation patterns for Silicon Creations fractional N PLLs

If you sell sneakers, 1,000 pairs is called a humble beginning. On the other hand, selling 1,000 licenses for specialized analog IP is a home run. Silicon Creations celebrated a home run for a critical piece of analog IP that finds its way into a diverse array of applications. Succeeding in so many markets is noteworthy, and I want to share some significant facts around this achievement. You will see how 1,000 production licenses mean Silicon Creations PLL IP is everywhere.

About Silicon Creations

Silicon Creations is a self-funded, leading silicon IP provider. The company provides high-quality IP for precision and general-purpose timing (PLLs), oscillators, low-power, high-performance multi-protocol and targeted SerDes, and high-speed differential I/Os.  The IP finds diverse applications including smart phones, wearables, consumer devices, processors, network devices, automotive, IoT, and medical devices. As you can see, the company provides a lot of IP to multiple markets.

The just-announced milestone centers on its Fractional-N PLL IP. This IP delivers a multi-function, general-purpose frequency synthesizer. Unlike an integer-N PLL, the output frequency of a fractional-N PLL is not limited to integer multiples of the reference frequency. This significantly expands the scope of where the IP can be used. The complexity of the circuit also significantly increases, making it challenging to deliver reliable performance across all applications.
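As a quick numeric illustration of why this matters (generic fractional-N math, not a description of Silicon Creations' implementation), the output frequency follows f_out = f_ref / R × (N + F / 2^k), so the fractional word F lets the PLL land between integer multiples of the reference:

```python
# Generic fractional-N frequency calculation with hypothetical register values.
f_ref = 25e6            # reference clock, Hz
R, N, k = 1, 48, 24     # reference divider, integer divide value, fractional word width
F = 0x19999A            # fractional word, roughly 0.1 in 24-bit fixed point

f_out = f_ref / R * (N + F / 2**k)
print(f"f_out = {f_out / 1e6:.3f} MHz")   # ~1202.5 MHz; an integer-N PLL could only hit 25 MHz steps
```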

Silicon Creations has tackled this problem to deliver ultra-wide input and output ranges along with excellent jitter performance, modest area, and application-appropriate power. The result is a PLL that can be configured for almost any clocking application in complex SoC environments.

About the Achievement


In the press announcement, Silicon Creations’ principal and co-founder, Randy Caplan, expanded on this achievement. He explained the company’s Fractional-N PLLs have been deployed on more than 6 million wafers, translating to billions of chips in the market today.

Those are some very impressive numbers. Randy went on to highlight that the IP is available in a wide range of process nodes, from 2nm to 180nm. This is a testament to the robustness and adaptability of the technology, as it continues to meet the demands of the most advanced applications.

The Silicon Creations Fractional-N PLL IP is successfully deployed in a wide range of application areas, including:

  • High-performance digital clocking
  • PHY reference clock generation (e.g., DDR, PCIe, Ethernet, USB)
  • Fast frequency hopping
  • Spread-spectrum modulation (see the sketch after this list)
  • Micro-degree resolution phase stepping
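For the spread-spectrum item above, a minimal sketch of the idea (hypothetical numbers, not a Silicon Creations modulation pattern) is to sweep the output frequency with a triangular profile so clock energy spreads over a band instead of a single EMI tone:

```python
# Triangular down-spread profile: the instantaneous frequency never exceeds nominal.
import numpy as np

f_nominal = 1.2e9     # nominal output clock, Hz (hypothetical)
spread = 0.005        # 0.5% down-spread
f_mod = 33e3          # modulation rate, Hz (typical SSC rates are around 30-33 kHz)

t = np.linspace(0, 2 / f_mod, 1000)
tri = 2 * np.abs((t * f_mod) % 1.0 - 0.5)          # triangle wave in [0, 1]
f_inst = f_nominal * (1 - spread * tri)            # swept instantaneous frequency
print(f"swings between {f_inst.min()/1e6:.1f} and {f_inst.max()/1e6:.1f} MHz")
```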

Thanks to its broad feature set, robustness, and reliability, the IP finds application in many markets, including AI, automotive, consumer electronics, IoT, and high-performance computing (HPC).

Customer Testimonials


Key members of Silicon Creations’ customer base also weighed in with their perspectives. Kiran Burli, vice president of Technology Management, Solutions Engineering, Arm, commented, “Arm has successfully utilized Silicon Creations’ Fractional-N PLL to clock our prototype chips across leading edge process nodes for more than a decade. Our collaboration with Silicon Creations ensures optimized performance of the Arm compute platform across a wide range of markets and use cases.”

 


Shakeel Peera, vice president of marketing and strategy for Microchip’s FPGA business unit, commented, “Silicon Creations’ PLL technology is used throughout the PolarFire® FPGA family, including our RT line for space and high-reliability applications. The flexibility and performance of these PLLs support a wide range of use cases, allowing our customers to tailor their designs to meet specific demands across various applications.” Peera added, “We look forward to continuing our collaboration with Silicon Creations as we advance FPGA technology.”

To Learn More

You can read the full text of the press release from Silicon Creations here and listen to Randy Caplan explain the unique business model of Silicon Creations on the Semiconductor Insiders podcast here.  The full library of IP available from Silicon Creations can be found here. And that’s how Silicon Creations PLL IP is everywhere.

Also Read:

Silicon Creations is Fueling Next Generation Chips

Silicon Creations at the 2024 Design Automation Conference

Silicon Creations is Enabling the Chiplet Revolution


Cadence Paints a Broad Canvas in Automotive
by Bernard Murphy on 11-25-2024 at 6:00 am


Cadence recently launched a webinar series on trends and challenges in automotive design. They contribute through IP from their Silicon Solutions Group, through a comprehensive spectrum of design tooling, and through collaborative development within a wide partner ecosystem. This collaboration aims to support and advance progress through reference architectures and platforms co-developed with auto OEMs, AI solution builders, foundries and others. I’m writing here on the overview webinar. Check in with Cadence on upcoming webinars in the series, which will dive more deeply into these topics.

Market trends

Automotive options today can be overwhelming: BEVs versus HEVs, ADAS versus different flavors of autonomous driving, in-cabin features galore, often at eye-popping prices. No wonder auto OEMs are juggling architectures, production priorities and radical ideas to monetize mobility. Yet all this churn also represents expanding opportunities in the supply chain, for Tier1s, semiconductor suppliers, foundries, and beyond. Automotive semiconductors are expected to deliver a CAGR of 11% through 2029, and the silicon carbide market (critical to support EV trends) is expected to grow even faster, at a CAGR of 24% through the same period.

Growth is driven in part by electrification and in part by increased sensing and AI content to add more intelligence to ADAS, autonomous driving (AD), and driver/passenger safety and experience. All this new capability adds cost in hardware, creating pressure to mitigate costs by consolidating electronics content into a smaller number of devices through zonal architectures. It also adds complexity in software/AI modeling, further contributing to cost and amplifying safety concerns through disaggregated software and AI model development and support, in turn pushing for more vertical integration in the supply chain.

Robert Schweiger (Group Director, Automotive Solutions at Cadence) observed that, as a key supplier in this design chain, Cadence sees a clear trend to vertical integration among auto OEMs and Tier1s now wanting to be hands-on in critical semiconductor and systems design. This isn’t necessarily bad news for automotive semiconductor companies; they too will participate but markets for advanced systems are becoming more competitive.

Robert recapped some sensor trends from this year’s AutoSens conference, some of which I have talked about elsewhere but I think are worth repeating here. Hi-res (8MP) cameras will become mainstream in support of AI. Low-cost, “unintelligent” cameras will also play a role in transferring raw video streams to the central processor, on which AI-based inferences can be overlaid. 4D imaging radar (4DR) is catching up fast versus Lidar (except so far in China) thanks to lower pricing. In-cabin sensing for driver monitoring systems (attention, alertness) is now a requirement for a top safety rating under Euro NCAP. Similarly, occupancy detection (did I leave a child in the backseat when I locked the car?) is becoming more popular. Both systems use in-cabin cameras or radar.

Cadence automotive technology update

In the interest of brevity I will just call out recent updates. The Tensilica group has been very active, introducing new vision cores (3xx series) and a 4DR hardware accelerator that can be used for vision and radar applications, to which you can add a Neo (or other) NPU for higher-performance AI tasks. I found a zonal controller graphic very interesting here: a single controller connecting to multiple radar and vision sensors, processing vision and radar streams before handing off to a fusion accelerator for enhanced point cloud generation. Clearly, the zonal controller must be close enough to the sensors, with a high-speed link to manage bandwidth/latency between the sensors and that controller.

On connectivity, Robert anticipates Automotive Ethernet will play a big role between central and zonal ECUs. At the edges between sensors and zonal ECUs, options are not yet quite so standardized, trending to SERDES-based interfaces to provide necessary bandwidth or MIPI in cases that aren’t quite as demanding. Cadence SSG has connectivity solutions to support all these options.

3D-IC is another important objective in total system cost reduction. Notable recent additions here are Integrity 3D-IC to guide planning, co-design, and cross-die optimization; Allegro for package layout co-design; and Virtuoso with 3D analysis, together with UCIe controller, PHY, and verification IP.

In verification, there have been a variety of verification IP updates. The Helium platform, plus integration with the Palladium and Protium platforms, enables hybrid virtual prototyping design flows, allowing software development in the cloud while hardware is under development. The MIDAS safety platform drives verification of safety requirements through the Unified Safety Format (USF) for both digital and analog design, to ensure compliance with ISO 26262/ASIL requirements. Palladium emulation platforms are also now fault-simulation-capable, making full-SoC analyses with software stacks practical for system-level safety validation.

In system design and analysis I didn’t see recent updates but of course Cadence hosts a full suite of thermal, RF, EM, SI/PI and CFD solutions, applicable from chip, to board, to rack, even to datacenter.

Finally, Robert also introduced a new Power Module Flow for the design of silicon carbide-based power electronic systems for advanced EV powertrain applications. This flow targets power module design considering thermal, EMI and mechanical stress factors plus die and package co-optimization.

Partnerships / collaborative development

Getting to convergence in this massive re-imagination of a modern car is only possible through major collaborations, prototyping, building reference designs, and integrating with cloud-native software development platforms.

One example is the ZuKiMo government-funded project, taped out on GF22nm and demoed at Embedded World 2024, featuring DreamChip’s latest automotive SoC, hosting Automotive Ethernet, Tensilica AI accelerator IP, and BMW AI image recognition.

At Chiplet Summit 2024 Cadence demonstrated a 7-chiplet system connected through their UCIe in a standard package, running at up to 16GT/s.

Cadence is also collaborating with Arm in support of the SOAFEE initiative, supporting cloud-native design starting with Helium-based virtual prototyping, while allowing subsystems to progressively transition to hardware-based modeling for more precise validation as a design stabilizes.

As one last telling example of collaboration, Tesla has partnered with Cadence to develop their DOJO AI platform, their next step in a full self-driving solution.

In summary, Cadence is plugged into automotive whichever way markets go. You can sign up for the next webinar in the series HERE.

Also Read:

Analog IC Migration using AI

The Next LLM Architecture? Innovation in Verification

Emerging Growth Opportunity for Women in AI

Addressing Reliability and Safety of Power Modules for Electric Vehicles


Podcast EP262: How AI is Changing Semiconductor Design with Rob Knoth
by Daniel Nenni on 11-22-2024 at 10:00 am

Dan is joined by Rob Knoth, Solutions Architect in the Strategy and New Ventures group at Cadence. He is a technologist focusing on corporate strategy and the interfaces between domain specific solutions. A key area of expertise is the digital implementation of safety critical and high reliability systems. He has extensive experience in both semiconductor design and EDA.

In this wide-ranging and informative discussion, Dan explores the impact AI is having on semiconductors and semiconductor design with Rob. AI in the EDA flow is discussed, including areas such as harvesting past data to improve future designs, impacts to analog design, verification, and packaging/3DIC.

Rob describes the three layers of EDA within Cadence. These areas include AI-enhanced design engines, AI-assisted optimization and the use of generative AI as a “co-pilot” to assist with tasks such as optimization, verification and generation of new designs.

Rob reviews many examples and use cases for this technology across diverse applications. He also discusses the future of AI-assisted design and the positive impact he expects to see on designer productivity and innovation.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Sandeep Kumar of L&T Semiconductor Technologies Ltd.
by Daniel Nenni on 11-22-2024 at 6:00 am


Sandeep Kumar is the Chief Executive of LTSCT (L&T Semiconductor Technologies Ltd). A seasoned technology executive with over 35 years of experience, Kumar has a proven track record in building and scaling technology businesses globally.

As a pioneer in the Indian semiconductor industry, he has co-founded multiple technology ventures and holds leadership positions in several high-growth companies. His expertise spans technology development, finance, M&A, and global partnerships.

Currently, Kumar also serves as the Chief Mentor for the Indian government’s AI CoE, Neuron. He also chairs the boards of Qritrim (enterprise AI), Bhartel (communications, networking, and energy systems), and Mekumi (EdTech). He is also a Partner at Kizki Japan Fund, TMPC Japan, and Global Ascent Partners.

An active investor and advisor, Kumar is involved with numerous startups, MSMEs, and MNCs worldwide as a board member. His deep industry knowledge and extensive network make him a sought-after leader in the technology ecosystem. His global reach extends to the US, India, Japan, China, Taiwan, Chile, Mexico, Israel, and Russia.

Prior to his entrepreneurial pursuits, Kumar held key positions at established corporations. He was a Managing Partner at TechRanch incubator and held leadership roles at Texas Instruments, where he headed the Broadband Business Division and the Global Microcontroller R&D Division. He has also been a Partner at venture capital firms JVP and Crimson.

Tell us about your company?
L&T Semiconductor Technologies (LTSCT) is India’s first major fabless product semiconductor company, pioneering the next generation of semiconductor solutions. Our focus is on designing and delivering smart semiconductor devices for global markets, offering a wide range of semiconductor technologies that enable energy-efficient and high-performance systems. These solutions are critical in sectors like mobility, industrial, and energy.

Our product portfolio spans across smart analog, MEMS & sensors, RF and power electronics, helping sectors with competitive technology solutions. To achieve this, we are leveraging key trends such as electrification, data-driven technologies, and advanced processors. We are committed to supporting decarbonization and digitalization efforts globally, and our mission is to empower our customers with market-leading power, sense, and control integrated circuits.

With a strong presence across India, US, Europe, and Japan, we are positioned to deliver innovative semiconductor technologies and solutions to a diverse set of customers around the world. Our goal is to be a key driver of smart, efficient applications that address the critical needs of a rapidly evolving, tech-driven world.

What problems are you solving?

At L&T Semiconductor Technologies, we’ve chosen the path of chip design over chip manufacturing because we see a tremendous opportunity to address the technology-oriented transformation happening in critical sectors such as mobility, industrial, and energy. These sectors are evolving rapidly, moving from traditional hardware systems to more advanced solutions, creating demand for cutting-edge semiconductor technologies that can support innovations like smarter, more efficient vehicles, smart grids, and Industry 4.0. By focusing on chip design, we can lead the development of these technologies and meet the future needs of these industries.

By specializing in design, we can be agile, respond faster to market needs, and deliver cutting-edge semiconductor technologies that drive efficiency and future readiness. While India is ramping up its manufacturing, our long-term vision is to integrate both design and manufacturing, positioning ourselves as a global semiconductor leader with deep roots in India’s growth story.

What application areas are your strongest?

Our innovative product offerings focus on areas such as mobility, industrial, and energy infrastructure to usher in the next-generation applications.

In mobility, we are working on high-end silicon chips to support the shift toward software-defined vehicles and electric mobility. In the industrial space, our focus is on designing chips that drive automation and smart manufacturing, aligning with the Industry 4.0 movement. The energy sector is evolving with the rise of smart grids and renewable energy integration, and our chips enable real-time data flow and optimized power management in these environments.

In the long term, our goal is also to establish three distinct fabs, each requiring different levels of investment to cater to different technologies, ranging from high-end silicon to SiC and GaN chips. This ensures we address the specific needs of these sectors while leading the market. Investment decisions will depend on returns and the availability of financial incentives from the government at that time.

What keeps your customers up at night?

We are focusing on customers from mobility, industrial, and energy sectors, as these sectors are strategically important for India and are seeing an increasing push by the government to achieve localization and reduce dependency on global supply chains.  Therefore, our customers need to achieve supply chain resiliency. In addition, global efforts on net-zero and decarbonization are pushing our customers to achieve aggressive targets in a very short period.

LTSCT is a global company with a strong development footprint in India. India offers a stable source of design expertise, a large resource pool in semiconductors, and a nation that is looked upon by others as one that follows strong IP protection, and a non-monopolistic policy. Indian companies have transparency levels that are as advanced as any other free economy. LTSCT provides globally competitive semiconductor solutions to the world that help them diversify and secure a stable supply chain.

Our core vision is to develop ethical and secure solutions that help achieve decarbonization for a globally interconnected digital economy. We combine advanced technologies, creative system design, and strong IP to develop solutions that help our customers meet their net-zero goals while realising energy efficient, high-performance systems to benefit from data, electrification, and the latest technology trends.

What does the competitive landscape look like and how do you differentiate?

The global semiconductor design landscape is highly competitive, with companies vying for market share through mergers and acquisitions. Companies are also investing heavily in R&D, especially in areas like artificial intelligence, machine learning, the Internet of Things (IoT), autonomous vehicles, and high-performance computing. The industry is witnessing a geographic shift, with Asia emerging as a major player. The growing demand for specialized chips and the rapid pace of technological advancements further intensify competition.

We are committed to driving the shift towards software-defined systems across focus domains. Our focus on chip design for innovations like advanced tech-driven vehicles, smart grids, and Industry 4.0 positions us as leaders in these rapidly evolving markets. LTSCT’s fabless model allows the company to stay agile and responsive to market demands, enabling faster development of cutting-edge technologies. We are leveraging our country’s strength in engineering and innovation while avoiding the massive capital costs associated with semiconductor fabrication. This strategy positions India to succeed in a complex and competitive market, even as it continues to scale its operations and workforce.

What sets us apart is our holistic, system-level approach—we work closely with clients to understand the entire system and deliver a differentiated solution. This includes ensuring software upgradability and adaptability, crucial across sectors/domains where over-the-air updates are becoming essential. Our global reach and local expertise ensure we bring the right innovation and differentiation to the market.

What new features/technology are you working on?

This is one of the most exciting times in the history of technology, specifically India’s semiconductor tech ecosystem. Semiconductors are changing how all industries operate and reimagining how conventional architecture is being designed and used. We are working towards creating innovative solutions to support India’s ambition to become a global semiconductor manufacturing hub by significantly elevating India’s technological capabilities.

Our recent partnership with IBM aims to harness our cutting-edge semiconductor design technology and IBM’s advanced processors to forge next-gen technology products through R&D. We are elevating India’s landscape of surveillance technology by developing a comprehensive range of indigenous Indian IP SoCs (Systems on Chips) for advanced AI IP CCTV products, through one of our collaborations.

Apart from corporate collaborations, we have partnered with C-DAC to create a powerful commercialisation programme for advanced technologies created by C-DAC in semiconductor design and development, embedded software, open-source OS, HPC, and power systems. We are transforming their portfolio of indigenous IPs into global products. Additionally, we have entered a partnership with one of the prominent institutions in the country, IIT Gandhinagar, to develop semiconductor solutions for projects of national importance.

We are thus strategically collaborating with organisations and academia to create innovative semiconductor solutions with improved functionality and performance.

Also Read:

CEO Interview: Dr. Sakyasingha Dasgupta of EdgeCortix

CEO Interview: Bijan Kiani of Mach42

CEO Interview: Dr. Adam Carter of OpenLight

CEO Interview: Sean Park of Point2 Technology


Semiconductors Slowing in 2025
by Bill Jewell on 11-21-2024 at 2:00 pm


WSTS reported a third-quarter 2024 semiconductor market of $166 billion, up 10.7 percent from second-quarter 2024. 3Q 2024 growth was the highest quarter-to-quarter growth since 11.6% in 3Q 2016, eight years ago. 3Q 2024 growth versus a year ago was 23.2%, the highest year-to-year growth since 28.3% in 4Q 2021.

Nvidia remained the largest semiconductor company in 3Q 2024 with $35.1 billion in revenue due to its strength in AI GPUs. Nvidia sells its AI GPUs as modules which include memory supplied by SK Hynix, Micron Technology, and Samsung as well as other components supplied by outside vendors. Thus, Nvidia’s semiconductor revenue from its own devices is less than its total revenue. However, Nvidia would still be the largest semiconductor company even if externally purchased components were excluded. Samsung Semiconductor was second at $22.0 billion with memory for AI servers cited as a major revenue driver. Broadcom remained third with its guidance for 3Q 2024 at $14.0 billion. Broadcom highlighted its AI semiconductors as a growth driver. Intel and SK Hynix rounded out the top five.

The third quarter of 2024 was robust for most major semiconductor companies. The memory companies SK Hynix, Micron Technology, and Kioxia all reported double-digit revenue growth in 3Q 2024 versus 2Q 2024. Nvidia and AMD each reported 17% growth due to AI data center demand. The only company showing declining revenue was Renesas Electronics, down 3.8% due to a weak automotive market and inventory reductions. The weighted average revenue growth for 3Q 2024 versus 2Q 2024 for the sixteen companies was 10%.

The outlook for 4Q 2024 shows diverging trends. The data center market, driven by AI, is expected to lead to substantial revenue growth for Nvidia, Micron, and AMD. Samsung Semiconductor and SK Hynix did not provide specific 4Q 2024 revenue guidance, but both cited AI server demand as strong. Companies which are dependent on the automotive industry expect a weak 4Q 2024. Infineon Technologies, Texas Instruments, NXP Semiconductors, and Renesas Electronics all guided for revenue declines in 4Q 2024 based on a weak automotive market and inventory reductions. STMicroelectronics also cited these factors but expects a 2.1% revenue increase. The companies heavily dependent on smartphones have differing revenue expectations, with Qualcomm guiding up 7.2% and MediaTek guiding down 1.0%. The weighted average guidance for 4Q 2024 versus 3Q 2024 is a 3% increase. However, the individual company guidance varies widely, from plus 12% from Micron to an 18% decline from Infineon and a 19% decline from Renesas.
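For readers who want to reproduce this kind of figure, a revenue-weighted average of guidance is straightforward to compute; the numbers below are purely hypothetical and are not the inputs behind the 3% estimate.

```python
# Revenue-weighted average of quarter-over-quarter guidance (hypothetical data).
guidance = {                 # company: (prior-quarter revenue in $B, guided growth)
    "Company A": (35.0, 0.12),
    "Company B": (22.0, 0.05),
    "Company C": (14.0, -0.18),
    "Company D": (4.0, -0.19),
}
total_revenue = sum(rev for rev, _ in guidance.values())
weighted_growth = sum(rev * g for rev, g in guidance.values()) / total_revenue
print(f"weighted average guidance: {weighted_growth:+.1%}")
```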

As is the case with 4Q 2024 guidance, the outlook for the year 2025 is mixed. AI will drive 2024 server growth in dollars to 42%, according to IDC. 2025 server growth will still be a strong 11%, but a significant deceleration from 2024. Smartphones and PCs each recovered to growth in 2024 from 2023 declines. IDC expects modest 2025 growth for smartphones and PCs in the low single digits. Light vehicle production grew a robust 10% in 2023 due to the post-pandemic recovery. S&P Global Mobility shows a 2.1% decline in production in 2024. Production is expected to recover slightly to 1.8% growth in 2025.

The impact of the memory market also needs to be considered. WSTS’ June 2024 forecast called for 16% semiconductor market growth, with memory growing 77% and non-memory only growing 2.6%. Much of the memory growth in 2024 has been driven by price increases which are certain to slow in 2025.

The chart below shows recent forecasts for the semiconductor market in 2024 and 2025. 2024 forecasts are converging in the range of 15% from Future Horizons to 19% from us at Semiconductor Intelligence (SC-IQ). 2025 forecasts are in two camps. RCD Strategic Advisors and Gartner project slightly slower growth than 2024 at 16% and 13.8%, respectively. We at Semiconductor Intelligence expect significantly slower growth at only 6% in 2025. Future Horizons also sees slower growth at 8%. Our 2025 assumptions are:

  • Continued growth in AI, though decelerating
  • Healthy memory demand due to AI, but prices moderating
  • Mediocre growth for PCs and smartphones
  • Relatively weak automotive market
  • Potential higher tariffs (particularly in the U.S.) affecting consumer demand

Semiconductor Intelligence is a consulting firm providing market analysis, market insights and company analysis for anyone involved in the semiconductor industry – manufacturers, designers, foundries, suppliers, users or investors. Please contact me if you would like further information.

Bill Jewell
Semiconductor Intelligence, LLC
billjewell@sc-iq.com

Also Read:

AI Semiconductor Market

Asia Driving Electronics Growth

Robust Semiconductor Market in 2024

Semiconductor CapEx Down in 2024, Up Strongly in 2025


Relationships with IP Vendors
by Daniel Nenni on 11-21-2024 at 10:00 am


An animated panel discussion at the Design Automation Conference in June offered up a view of the state of RISC-V and open-source functional verification, and a wealth of good material for a three-part blog post series.

Parts One and Two covered a range of topics: microcontroller versus more general-purpose processor versus running a full stack with an operating system, buying from a major vendor versus picking open source, and adding instructions versus keeping the ISA pristine.

In Part Three, Ron Wilson, Contributing Editor to the Ojo-Yoshida Report, tidies up the discussion by asking how to determine the right questions for an IP vendor and whether the functional models are in place. Answering those questions and more were Jean-Marie Brunet, Vice President and General Manager of Hardware-Assisted Verification at Siemens; Ty Garibay, President of Condor Computing; Darren Jones, Distinguished Engineer and Solution Architect with Andes Technology; and Josh Scheid, Head of Design Verification at Ventana Microsystems.

Wilson: Is the gold standard to have asked all the right questions of the vendor? Or is it to see the RTL performing functionally correctly with the software stack?

Scheid: Designers must focus on both at different stages of the project. There is vendor selection, and then there is what’s needed from the vendor once selected. The designer will need to commit to that at some point. The amount of effort it takes to go from evaluating IP to a hardware prototyping platform where the designer can see all that working is already a significant investment.

Garibay: Designers would be happy to have a vendor selection problem with the Arm core, but they don’t. It’s a little unfair to focus on that for RISC-V. Because many of the players in the RISC-V market are relatively new, they have to establish a track record and earn user trust. Arm has been doing that for years and has done a good job. I have no problem staying to the current gold standard and what we all hope to be the challenge.

How can we compare each of the potential players in RISC-V today? It’s knowing the people, their track records, their companies and business profiles. As we move forward and IP becomes silicon, we’ll find out who the real viable players are. Go ahead, disagree.

Jones: CPU IP are complex designs, especially the higher performance ones with so many transistors dedicated to eking out that last quarter of a percent of performance. It’s difficult to verify. That’s why it still matters that at first silicon, a designer can see Linux is booting. Booting Linux is huge because Linux seems simple when turning on the computer. It comes to a prompt. But it just ran billions of instructions and probably touched 90%-plus of the system to get to that prompt.

Scheid: I agree. Designers need a greater relationship with their IP vendor and a good understanding of what has been done and what has been verified. They also need to verify even more themselves. In our space, providing solutions to assist the verification of an SoC or IP integration is good because there are greater challenges associated with it. Ultimately, they’re going to measure up to the point where they boot Linux and see what’s happening. To do this, they need to have an entire SoC where each component talks correctly to each other. That’s a huge task.

Jones: We are talking about the CPU instructions and that’s not the hardest part. When a designer is trying to verify an SoC, the hardest part is the interconnect and the buses and that’s not RISC-V versus Arm. These processors use AXI, an industry standard. That’s where the verification IP is needed. Otherwise, how does a designer find bus exercisers and monitors? It’s more about the large SoC than it is the pieces.

Wilson: Because you did your job, it can be about integration. It sounds like you agree that we’re at that stage, if licensing from an experienced, reputable vendor.

Garibay: If I look at what I expect to be the total challenge for verification of our CPU, the percentage that is ISA-specific is maybe 10%. Yes, designers must check all the boxes, make sure all the bits are in the right place, and that they flip and twiddle correctly. At the level of performance we’re shooting for, with hundreds and hundreds of instructions in flight at any one time, it doesn’t matter which ISA. Making sure that the execution is correct and that the performance is there is 90% of verification.

I’ve done 68000, x86, MIPS, Arm, PowerPC, a couple of proprietary ISAs, and now RISC-V. The difference is not substantial, other than the fact that the theoretical ownership of the specification lies with an external body that is a committee. That’s the main difference. Typically, the ownership has always lived with MIPS or PowerPC, Arm, or the x86 duo. Yes, the difference is that the ownership is in a committee. Once the committee makes a call, the industry lives with it.

Scheid: We can contribute as well, whether it’s systemic concerns about software availability or specification extension capabilities that have multiple vendors on stage to talk about these things.

That’s the power of RISC-V. When looking at these solutions, we’re all interested in addressing concerns, making sure everyone’s communicating about what the CPU implements, what the software expects, what user software may run. Systemic risks with other solutions are put aside with the way that RISC-V is set up.

Garibay: It’s not our interests to put out a bad product. We all are cheering for each other.

Brunet: The RISC-V community, ecosystem, and market already have mature chips using RISC-V cores, and everything works well. It’s a good sign.

Wilson: You mentioned earlier virtual models. The same question with best functional models. Are those in place? Are they coming?

Jones: Models of RISC-V are publicly available from Imperas. RISC-V is not immature.

Garibay: Designers can get the models. What they run on them may be a challenge.

Also Read:

Changing RISC-V Verification Requirements, Standardization, Infrastructure

The RISC-V and Open-Source Functional Verification Challenge

Andes Technology is Expanding RISC-V’s Horizons in High-Performance Computing Applications


Silicon Creations is Fueling Next Generation Chips
by Mike Gianfagna on 11-21-2024 at 6:00 am


Next generation semiconductor design puts new stress on traditionally low-key parts of the design process. One example is packaging, which used to be the clean-up spot at the end of the design. Thanks to chiplet-based design, package engineers are now rock stars. Analog design is another one of those disciplines.

Not long ago, analog design had a niche place in semiconductor design for the small but important part of the system that interfaced to the real world. Thanks to the performance demands of things like AI, analog design is now a critical enabling technology for just about every system. High-speed, low-latency data communication and ubiquitous sensor arrays are prevalent in all advanced designs, and these technologies now rely on analog design. At a recent Siemens event, substantial details were presented about this change in system design. Let’s dig into how Silicon Creations is fueling next generation chips.

The Event

Siemens EDA held its Custom IC Verification Forum recently on August 27 in Austin. The event aimed to present a comprehensive view of leading-edge AI-enabled design and verification solutions for custom ICs.  Topics such as circuit simulation, mixed signal verification, nominal to high-sigma variation analysis, library characterization, and IP quality assurance were all covered.

You can check out more details about this event here. What you will see is that most of the presentations were given by Siemens folks, with one exception: the keynote address was delivered by Silicon Creations. I’d like to cover the eye-opening insights presented in that keynote.

The Keynote


Randy Caplan, Principal and Co-Founder of Silicon Creations, presented the keynote, entitled Increasing Analog Complexity is a Reality for Next Generation Chips. For nearly 18 years, Randy has helped grow Silicon Creations into a leading semiconductor IP company with more than 450 customers in over 20 countries, and a remarkable statistic of nearly 100% employee retention.

Silicon Creations’ designs have been included in over 1,500 mass-produced chips from 2nm to 350nm, from which more than ten million wafers were shipped. The company is known for award-winning analog mixed-signal silicon intellectual property that lowers risk. You can learn about this unique company on SemiWiki here.

Randy’s keynote presented a great deal of information about how chip/system design is changing. He presented many trends, backed up with detailed data, to illustrate how growing complexity is enabled by high-performance analog IP. Below is one example that surprised me. We are all aware of the growth in MOS logic and memory. However, shipments of analog chips are actually growing much faster, which indicates how ubiquitous analog IP is becoming across many markets.

Unit forecasts

Randy covered many other relevant and interesting topics in his keynote, including the evolution from monolithic to multi-die design, process technology evolution, multi-core trends, the growth of analog functions required for advanced designs, reliability considerations, and a broad overview of design and process challenges. The unique demands of SerDes and PLL design are also explored.

There are examples presented, along with several architectural options to consider. The collaboration between Silicon Creations and Siemens is also discussed, along with details around the use of Siemens Solido, Calibre, and Questa solutions.

To Learn More

I have provided a high-level overview of the topics covered in Randy’s keynote. There is much more detail and many more insights available in the slides. If analog content is becoming more important to your chip projects, you should get your own copy of this important keynote address. You can download it here and learn more about how Silicon Creations is fueling next generation chips.


I will see you at the Substrate Vision Summit in Santa Clara
by Daniel Nenni on 11-20-2024 at 10:00 am


With packaging being one of the top sources of traffic on SemiWiki, I am expecting a big crowd at this event. A semiconductor substrate is a foundational material used in the fabrication of semiconductor devices. Substrates are a critical part of the manufacturing process and directly affect the performance, reliability, and efficiency of 3D IC-based semiconductor devices.

I will be moderating a panel at this event and you certainly do not want to miss that!

AI’s impact on the semiconductor value chain is profound, enabling faster, more efficient, and more innovative approaches across all stages, from material discovery to design, fabrication, and testing. By leveraging AI technologies, the semiconductor industry can address the challenges of scaling, complexity, and demand for high-performance chips in emerging fields like AI, IoT, 5G, and beyond. Panelists will discuss this challenge. 

Panel discussion (Intel, Meta Reality Lab, Samsung Foundry, Soitec; moderated by Daniel Nenni).

Register Here

Substrate Vision Summit, organized by Soitec, gathers leading engineers and industry professionals to explore the cutting-edge developments in semiconductor materials that are shaping the future of technology. This event provides a platform for the exchange of ideas, research findings, and technological advancements that are driving the evolution of the semiconductor industry.

As the demand for faster, smaller, and more efficient electronic devices continues to surge, the role of advanced semiconductor materials becomes increasingly critical. This conference will delve into topics such as the latest breakthroughs in silicon-based technologies and the rise of alternative materials like gallium nitride (GaN) and silicon carbide (SiC).

Keynote speakers, including prominent industry leaders, will share their insights on the future directions of semiconductor research and development. Technical sessions will cover a range of themes, from material synthesis and characterization to device fabrication and application. Attendees will have the opportunity to engage in in-depth discussions on challenges and solutions related to material performance, scalability, and sustainability.

Networking sessions and panel discussions will provide additional opportunities for collaboration and knowledge exchange, fostering connections that can lead to innovative partnerships and advancements in the field.

Join us at the Substrate Vision Summit to stay at the forefront of this dynamic and rapidly evolving industry. Together, we will explore the materials that are set to revolutionize electronics and enable the next generation of technological innovation.

Agenda
[ Registration Required ]
Wednesday, December 4
8:00 AM – 9:00 AM
9:00 AM – 9:20 AM
Pierre Barnabé – CEO
9:20 AM – 9:40 AM
Ajit Manocha – CEO & President
9:40 AM – 10:00 AM
Barbara De Salvo – Director of Research
10:00 AM – 10:15 AM
10:15 AM – 10:35 AM
David Thompson – Vice President, Technology Research Processing Engineering
10:35 AM – 11:05 AM
Christophe Maleville – CTO & SEVP of Innovation
11:05 AM – 12:00 PM
12:00 PM – 1:10 PM
1:10 PM – 1:20 PM
Afternoon Track
1:20 PM – 1:45 PM
Panel Discussion – Afternoon Track
1:45 PM – 2:15 PM
Panel Discussion – Afternoon Track
2:15 PM – 2:25 PM
Afternoon Track
2:25 PM – 2:50 PM
Panel Discussion – Afternoon Track
2:50 PM – 3:05 PM
3:05 PM – 3:35 PM
Panel Discussion – Afternoon Track
3:35 PM – 3:45 PM
Afternoon Track
3:45 PM – 4:30 PM
Panel Discussion – Afternoon Track
4:30 PM – 4:55 PM
Panel Discussion – Afternoon Track
4:55 PM – 5:00 PM
5:00 PM – 6:15 PM
SOITEC IN BRIEF

Soitec plays a key role in the microelectronics industry. It designs and manufactures innovative semiconductor materials. These substrates are then patterned and cut into chips to make circuits for electronic components. Soitec offers unique and competitive solutions for miniaturizing chips, improving their performance and reducing their energy usage. 

Register Here