
Breker Verification Systems at the 2025 Design Automation Conference #62DAC

by Daniel Nenni on 06-02-2025 at 10:00 am


Breker Verification Systems Plans Demonstrations of its Complete Synthesis and SystemVIP Library and Solutions Portfolio

Attendees who step into the Breker Verification Systems booth during DAC (Booth #2520—second floor) will see demonstrations of its Trek Test Suite Synthesis and SystemVIP libraries and solutions portfolio.

They will learn about complex application processor projects across the SoC and RISC-V core verification stack, including data center, automotive, AI accelerator and consumer device applications, where Breker’s Trek Test Suite Synthesis and Cache Coherency SystemVIP are deployed.

Breker’s SystemVIP library with Test Suite Synthesis enhances verification coverage while significantly reducing test development time for complex scenarios. It incorporates debug and coverage analysis and can be ported across simulation, emulation, prototyping, post-silicon and virtual platform environments.

SystemVIP includes prepackaged, automated, self-checking scenario verification libraries, while Test Suite Synthesis is AI-driven, offering high-coverage, corner case bug hunting, test content generation and abstract reusable portability across verification platforms.

Starting with randomized instruction generation and microarchitectural scenarios, SystemVIP includes unique tests that check all integrity levels, ensuring smooth integration of the core into an SoC, regardless of architecture, and enabling the evaluation of possible performance and power bottlenecks and functional issues.

The SystemVIP Scenario Library enables high-coverage test generation using AI planning algorithms, test cross combination and concurrent test scheduling. The scenario library includes tests for system coherency in multicore SoCs, Arm integration, RISC-V core integrity, power domain switching, hardware security access rules, automated packet generation and performance profiling.
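To make the cross-combination and concurrent-scheduling idea concrete, here is a minimal Python sketch. The scenario names and the `run_scenario` placeholder are illustrative assumptions, not Breker's actual library; the point is only that crossing scenario dimensions and scheduling the products concurrently multiplies coverage per unit of wall-clock time.

```python
from itertools import product
from concurrent.futures import ThreadPoolExecutor

# Hypothetical scenario dimensions for illustration; a real SystemVIP
# library ships its own prepackaged, self-checking scenarios.
coherency_ops = ["read_shared", "write_invalidate", "atomic_swap"]
power_states = ["all_on", "domain2_off", "retention"]

def run_scenario(op, power):
    # Placeholder for a synthesized test executing on a target platform.
    return f"PASS: {op} under {power}"

# Cross-combine the dimensions, then schedule the combinations
# concurrently, mimicking test cross combination and concurrent
# test scheduling as described above.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(lambda t: run_scenario(*t),
                            product(coherency_ops, power_states)))

print(len(results))  # 9 combinations from a 3 x 3 cross
```

Even this toy cross yields nine distinct scenarios from six definitions; real libraries cross many more dimensions.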

Engineers developing complex RISC-V cores or leveraging them in their SoCs must take on new verification scenarios that require different techniques. Breker’s SystemVIP can be extended so that custom RISC-V instructions are fully incorporated into the complete test suite, crossed with other tests and used for a variety of complex RISC-V core designs. Those include system coherency tests for multicore SoC integration, high-coverage core tests, power domain switching, hardware security access rules and automated packet generation.

The verification of processor cores that leverage the RISC-V Open Instruction Set Architecture (ISA) requires testing specialized, unique scenarios, making Breker’s RISC-V SystemVIP libraries ideal scenario platforms. The libraries use AI Planning Algorithms, cross-test multiplication and concurrent, multi-threaded scheduling to provide rigorous testing from randomized instructions to unique coherency, paging and other complex system integration validation.

SemiWiki readers are invited to arrange demonstrations or private meetings by sending email to info@brekersystems.com or stopping by Booth #2520.

DAC Registration is Open

About Breker Verification Systems

Breker Verification Systems solves complex semiconductor challenges across the functional verification process, from streamlining UVM-based testbench composition and execution for IP block verification to significantly enhancing SoC integration and firmware verification with automated solutions that provide test content portability and reuse. Breker solutions layer easily into existing environments and operate across simulation, emulation, prototyping and post-silicon execution platforms.

Its Trek family is production-proven at leading semiconductor companies worldwide and enables design managers and verification engineers to realize measurable productivity gains, speed coverage closure and easily reuse verification knowledge. As a leader in the development of the Accellera Portable Stimulus Standard (PSS), privately held Breker has a reputation for dramatically reducing verification schedules in advanced development environments. Case studies featuring Altera (now Intel), Analog Devices, Broadcom, IBM and other companies leveraging Breker’s solutions are available on the Breker website.


Engage with Breker at:
Website: www.brekersystems.com
Twitter: @BrekerSystems
LinkedIn: https://www.linkedin.com/company/breker-verification-systems/
Facebook: https://www.facebook.com/BrekerSystems/


Also Read:

RISC-V Virtualization and the Complexity of MMUs

How Breker is Helping to Solve the RISC-V Certification Problem

Breker Brings RISC-V Verification to the Next Level #61DAC


The SemiWiki 62nd DAC Preview

by Daniel Nenni on 06-02-2025 at 6:00 am


After being held in San Francisco since the pandemic, the beloved Design Automation Conference will be on the move again. In 2026, DAC will be held in Huntington Beach. For you non-California natives, Huntington Beach is a California city southeast of Los Angeles. It’s known for surf beaches and its long Huntington Beach Pier. I spent time there surfing, fishing, and sailing when I was growing up and have very fond memories.

DAC Registration is Open

This being my 41st DAC, I also have fond memories of San Francisco and look forward to this year’s conference. In my opinion it will be a big DAC since next year it will be down south. Additionally, all of the conferences have been bigger thus far this year and I expect no less for #62DAC.

SemiWiki will have a handful of bloggers covering the conference live so stay tuned for updates. SemiWiki will also be highlighting some of the standout companies exhibiting and supporting DAC. It takes an ecosystem to make these chips and SemiWiki is the largest semiconductor forum in the world, absolutely.

There are some SemiWiki moderated panels this year. I am moderating a lunch panel on Tuesday: Can AI Cut Costs in Electronic Design & Verification While Accelerating Time-To-Market? Did I mention a FREE LUNCH and a chance to relax and learn? I hope to see you there.

Another panel is “Breaking the Design Automation Mold: Wild and Crazy Ideas for Global Optimization”. Bernard Murphy will be the moderator for that one.

DAC has a Chips and Systems theme which is very appropriate since EDA and IP are a critical part of the chip and systems supply chain:

“Join 6,000+ designers, researchers, tool developers, and executives at the premiere global design automation event. DAC is where the electronic design ecosystem assembles each year to find the latest solutions and methodologies in AI, EDA, Chip Verification, and more.”

A new feature this year is the Chiplet Pavilion, which is sponsored by EE Times. I’m expecting high traffic for this 2nd level section since chiplets are the future of semiconductor design:

“The EE Times Chiplets in-person conference & Chiplet Pavilion exhibition at DAC 2025 will discuss the progress of chiplet technologies and the chiplet supply chain in all their complexity. The agenda will examine the entire value chain and ecosystem, spanning from initial concept and design exploration to packaging and testing. It will also explore the emergence of initiatives aiming to establish relevant technical standards and a chiplet marketplace.”

The keynotes are always good at DAC. This year’s lineup includes AI related topics of course. It seems to be high level stuff but interesting just the same.

Even more interesting are the SkyTalks and the TechTalks where deep semiconductor experience comes out to play. There is also an Analyst section, meaning people from outside the semiconductor industry looking in, which is always fun. At some point these analysts will be replaced by ChatGPT but for now they are live and in person.

The TechTalks have an AI theme and look the most interesting. Amit Gupta will talk about Unlocking the Power of AI in EDA. I worked for Amit at Solido Design and saw firsthand the early integration of AI into EDA. Amit is followed by Dr. John Linford from Nvidia, an early Solido customer. John is followed by William Wang from ChipAgents. I met William at DVCON and have had the pleasure to work with his team this year. William introduced SemiWiki to agentic AI and how it will revolutionize chip design and verification.

That is just a quick update. Expect a lot more DAC content leading up to the event and live coverage during it. Register here for I Love 62nd DAC.

Quick video: learn how DAC has influenced the semiconductor ecosystem over the last 60 years:

About DAC

DAC is recognized as the global event for chips to systems. DAC offers outstanding training, education, exhibits and superb networking opportunities for designers, researchers, tool developers and vendors. The conference is sponsored by the Association for Computing Machinery (ACM) and the Institute of Electrical and Electronics Engineers (IEEE) and is supported by ACM’s Special Interest Group on Design Automation (SIGDA) and IEEE’s Council on Electronic Design Automation (CEDA).

Also Read:

WEBINAR: PCIe 7.0? Understanding Why Now Is the Time to Transition

Intel Foundry is a Low Risk Alternative to TSMC

The Road to Innovation with Synopsys 224G PHY IP

From Silicon to Scale: Synopsys 224G PHY Enables Next Gen Scaling Networks


CEO Interview with Kit Merker of Plainsight

by Daniel Nenni on 06-01-2025 at 11:00 am


Kit Merker is a technology industry leader with over 20 years of experience building software products. He serves as CEO of Plainsight Technologies and previously held senior positions at Nobl9, JFrog, Google and Microsoft.

Tell us about your company.

Plainsight is focused on making computer vision accessible and scalable for everyone. Our core technology is OpenFilter, an open-source framework that lets you build, deploy, and manage computer vision applications using modular components we call “filters.” A filter is essentially an abstraction that combines code and models, packaged as an app. You can string these filters together to create pipelines, and because they’re containerized, you can deploy them pretty much anywhere Docker runs. The idea is to provide a universal way to describe, manage, and scale vision workloads, moving from prototyping to production seamlessly. Our team comes from a background in distributed and cloud systems. My CTO was an early engineer on Google Dataflow, and I was an early product manager on Kubernetes, so we’ve brought that operational rigor to vision workloads. We’ve battle-tested this technology internally and with customers, and now we’re open sourcing it to benefit the broader community.
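The filter abstraction described above can be sketched in a few lines of Python. To be clear, the class and method names here (`Filter`, `Pipeline`, `process`) are illustrative assumptions, not OpenFilter's actual API; the sketch only shows the idea of wrapping code (and, in practice, models) behind one interface so stages can be composed into a pipeline.

```python
# Hypothetical sketch of the "filter" idea, not OpenFilter's real API.
class Filter:
    """One composable stage: takes a frame, returns a transformed frame."""
    def process(self, frame):
        raise NotImplementedError

class Grayscale(Filter):
    def process(self, frame):
        # Toy stand-in for image processing: average the RGB channels.
        return [sum(px) / len(px) for px in frame]

class Threshold(Filter):
    def __init__(self, cutoff):
        self.cutoff = cutoff
    def process(self, frame):
        return [1 if v > self.cutoff else 0 for v in frame]

class Pipeline:
    """Strings filters together; each stage feeds the next."""
    def __init__(self, filters):
        self.filters = filters
    def process(self, frame):
        for f in self.filters:
            frame = f.process(frame)
        return frame

pipe = Pipeline([Grayscale(), Threshold(cutoff=100)])
print(pipe.process([(200, 180, 220), (10, 20, 30)]))  # [1, 0]
```

In the real system each such stage would be containerized, which is what lets the same pipeline description run anywhere Docker runs.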

What problems are you solving?

The biggest challenge in computer vision is the gap between prototyping and production. Many vision projects get stuck after the prototype phase because scaling and maintaining them is incredibly difficult. There aren’t enough vision engineers, and the infrastructure is complex and expensive. OpenFilter addresses this by providing a scale-agnostic way to describe and deploy vision applications. Developers can go from a working prototype to production without a complete rewrite, and the modular approach means updates, maintenance, and scaling are much simpler. We also help reduce infrastructure and inference costs by allowing smarter resource allocation and workload pooling. Ultimately, we’re unlocking latent demand for vision by making it easier and cheaper to build and deploy real-world applications.

What application areas are your strongest?

OpenFilter shines in large-scale, complex deployments. If you have hundreds or thousands of cameras, large amounts of data, or distributed environments, think retail chains, logistics, or manufacturing, our platform really stands out. It’s also great for building complex vision pipelines involving object detection, tracking, segmentation, and classification. The system integrates with a wide range of data sources, including RTSP streams, webcams (except on Mac), and IoT frameworks like MQTT. While you can use it for small projects, its real value comes when you need to scale, manage costs, and handle continuous updates across many locations or devices.

What keeps your customers up at night?

Our customers are concerned about how to take vision solutions from prototype to production, manage costs, especially GPU and inference costs, and keep everything updated as requirements evolve. Integration with existing data sources and business logic is another big pain point, as is the shortage of skilled vision engineers. They need to be able to scale quickly, manage infrastructure efficiently, and ensure their systems are maintainable over time. The complexity and cost of building and maintaining these systems is what keeps them up at night.

What does the competitive landscape look like and how do you differentiate?

The main competition we see is from homegrown solutions, where teams stitch together open-source libraries like OpenCV or YOLO with custom code. These systems are often brittle and hard to maintain, especially at scale. There are some commercial products out there, but few offer the open-source, modular, and scalable approach that OpenFilter does. Our differentiation comes from the filter abstraction, which lets you combine code and models into reusable, composable units. This makes it easy to move from prototype to production without rework, and the same abstractions work at any scale. We also offer both open-source and commercial support, with the commercial version adding features like supply chain security, telemetry, and proprietary model training. Our approach dramatically improves developer productivity and makes maintenance and scaling much easier.

What new features or technology are you working on?

We’re actively expanding model support. Right now we support PyTorch, but we plan to add other architectures. We’re also working on community edition Docker images to simplify deployment, and adding more downstream data connectors like Kafka, Postgres, and MongoDB. The commercial offering includes enhanced telemetry, supply chain security, and proprietary model training for advanced use cases. Looking ahead, we see potential to extend OpenFilter to other data modalities like audio, text, and geospatial data, and to integrate with agentic and generative AI systems for pre- and post-processing.

How do customers normally engage with your company?

Customers engage with us in several ways. Developers and vision engineers can download and use OpenFilter directly as open source, experimenting with their own models and data. Organizations that need enterprise features or support can license our commercial offering, VisionStack. We also work with services partners to deliver custom solutions and support for complex deployments. Community contributions are encouraged, and we’re building a community around reusable filters and best practices. For those moving from prototype to production, we provide expertise, patching, and support to help them succeed.

Is there anything else you want readers to know?

The biggest “aha” moment for me in computer vision was realizing the gap between supply and demand. There’s enormous latent demand for vision solutions, but the cost and complexity have limited adoption to only the highest-ROI projects. We believe the filter abstraction is the innovation that will unlock this value and democratize computer vision. By making it easier, cheaper, and more consistent to build and deploy vision applications, we hope to see much broader adoption and innovation in the field.

Also Read:

CEO Interview with Bjorn Kolbeck of Quobyte

Executive Interview with Mohan Iyer – Vice President and General Manager, Semiconductor Business Unit, Thermo Fisher Scientific

CEO Interview with Jason Lynch of Equal1


CEO Interview with Bjorn Kolbeck of Quobyte

by Daniel Nenni on 06-01-2025 at 10:00 am


Bjorn Kolbeck received a PhD in Computer Science from Humboldt University in Berlin. Bjorn had previously worked at HPC centers and at Google. His experience with hyperscale architectures led him to co-found Quobyte in 2013.

Tell us about your company?

Quobyte is scale-out storage, designed for massive scalability and extreme availability on hyperscaler principles. It’s designed to handle thousands of nodes using commodity hardware, and the fault-resilient architecture is designed to handle failures gracefully. For example, you can lose an entire server, a rack, or even an entire datacenter, and the cluster will continue to operate with no data loss. At the same time, the software was designed to be simple: it runs completely in user space with no kernel modules, custom drivers or custom networks, and can be run by very small teams that need only basic Linux skills. It’s well suited for semiconductor design, as it excels at managing very large datasets while meeting performance criteria in simulation, design verification, and tape-out among others. The single namespace handles both files and objects, eliminating data silos and facilitating collaboration.
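The "lose a whole datacenter with no data loss" property rests on a general principle: copies of the data must land in distinct failure domains. The following Python sketch is an illustrative assumption about how any such placement policy works, not Quobyte's implementation; the `place_replicas` helper and domain names are invented for the example.

```python
# Illustrative sketch (not Quobyte's implementation): spreading replicas
# across failure domains is what lets a cluster survive the loss of a
# whole server, rack, or datacenter without losing data.
from itertools import cycle

def place_replicas(num_replicas, domains):
    """Round-robin replicas across failure domains (e.g., datacenters)."""
    picks = []
    for _, domain in zip(range(num_replicas), cycle(domains)):
        picks.append(domain)
    return picks

placement = place_replicas(3, ["dc-east", "dc-west", "dc-central"])

# Losing any single datacenter still leaves two copies of the data.
surviving = [d for d in placement if d != "dc-east"]
print(placement, len(surviving))
```

With three replicas in three domains, any single-domain failure leaves two live copies, so reads and writes can continue while the lost copy is rebuilt.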

What problems are you solving?

Quobyte addresses a customer’s ever increasing need for performance and capacity while they face static or even declining budgets, as well as staffing shortages. We do this by being extremely simple to operate and by running on cost-effective commodity hardware. This scalable performance helps maximize resource utilization for compute, GPUs and software licenses. Quobyte can be run on-premises on commodity x86 or Arm servers, in the public cloud, or in hybrid cloud environments, depending on the customer’s needs.

What application areas are your strongest?

Our extreme simplicity, from initial download to scaling huge clusters with hundreds of petabytes, running on inexpensive commodity hardware as opposed to expensive appliances, plus our ability to operate in hybrid environments on premises and in the cloud, offers a very compelling value proposition. We are particularly well suited for AI applications, as well as traditional HPC applications in EDA, life sciences, financial services, and oil and gas.

What keeps your customers up at night?

We realize that our customers are demanding more performance to run larger jobs, delivering on tight deadlines, and needing more capacity and better availability with no increase in budget. We satisfy those needs.

What does the competitive landscape look like and how do you differentiate?

The market is dominated by expensive appliances and very complex software. These products are complex to administer and require a large staff, which can be cost prohibitive. We have a completely different approach. Our software is so easy to use and operate that you can download our free edition and be in production in less than an hour. We haven’t seen that from anyone else.

What new features/technology are you working on?

We are continuing to develop our product by making it even easier to use, making large-scale data storage even easier to manage, adding intelligence to our software, and automating optimizations.

How do customers normally engage with your company?

It’s very easy for a customer to engage us. Depending on how you like to learn about new technologies you can download our free edition and install it on your servers or in the cloud, or alternatively you can contact us and get a demo and a customized solution tailored to your use case, environment and needs.

Also Read:

Executive Interview with Mohan Iyer – Vice President and General Manager, Semiconductor Business Unit, Thermo Fisher Scientific

CEO Interview with Jason Lynch of Equal1

CEO Interview with Sébastien Dauvé of CEA-Leti


Video EP7: The impact of Undo’s Time Travel Debugging with Greg Law

by Daniel Nenni on 05-30-2025 at 10:00 am

In this episode of the Semiconductor Insiders video series, Dan is joined by Dr. Greg Law, CEO of Undo. Greg is a C++ debugging expert, well-known conference speaker, and the founder of Undo. He explains the history of Undo, initially as a provider of software development and debugging tools for software vendors, and that due to the complex nature of the models driving chip design, Undo also supports the validation and debug of chip designs with a shift-left methodology.

He also describes the benefits of Undo’s time travel debugging on large chip designs, both to quickly identify root causes and collaborate across the team to build confidence in the entire design.

Contact Undo

The views, thoughts, and opinions expressed in these videos belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Podcast EP289: An Overview of How Highwire Helps to Deliver Advanced Fabs with Less Cost, Time and Risk with David Tibbetts

by Daniel Nenni on 05-30-2025 at 6:00 am

Dan is joined by David Tibbetts, Chief Safety Officer at Highwire. David is a Certified Safety Professional with 20+ years of occupational safety experience in both general industry and construction settings. He is currently supporting Highwire’s hiring partners utilizing the Highwire suite of software solutions to identify, manage, and mitigate risks presented by contracting partners on construction projects and in existing operational facilities.

Dan explores the unique model and technology that Highwire provides with David. Highwire delivers a platform that helps business owners manage the risks associated with the contractors they hire to support capital projects and on-going maintenance and management of existing facilities. The company serves many industries, including general construction, data centers, health, life sciences, manufacturing, property development, renewable energy, universities and semiconductors.

In the semiconductor area, Highwire works with the three largest semiconductor manufacturers in the world. Dan explores the technology and processes used by Highwire to optimize large-scale projects such as semiconductor fab and large data center construction. David describes how contractor assessment and risk management can have a significant impact on the time and cost associated with these mega projects. Beyond cost and time optimization, David also describes work to reduce injuries and fatalities in large projects. You can learn more about this unique company and how it impacts the semiconductor industry here.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Synopsys Addresses the Test Barrier for Heterogeneous Integration

by Mike Gianfagna on 05-29-2025 at 10:00 am


The trend is clear: AI and HPC are moving to chiplet-based, or heterogeneous, design to achieve the highest levels of performance, while traditional monolithic system-on-chip (SoC) designs struggle to scale. What is also clear is that the road to this new design style is not a smooth one. There are many challenges to overcome. Some are bigger versions of what came before and others are new, driven by a new way of designing and assembling semiconductor systems. Synopsys has been at the forefront of innovation to address these challenges and pave the way for future heterogeneous chip designs. The company recently published an informative article on a way to tame yet another challenge for this new design style. Let’s examine how Synopsys addresses the test barrier for heterogeneous integration.

An Overview of the Heterogeneous Design Landscape

There are many dimensions to the challenges of heterogeneous chip design. In a recent post I took a close look at several issues. What was clear is the need for a holistic approach to address these design challenges. All of the items discussed in that post were tied together. A change in one aspect of the design affected others, and so the only way to success was to balance everything in a holistic manner.

Synopsys has developed a strong approach here with a comprehensive suite of solutions to address and balance various design parameters. The list includes architectural design, verification, implementation, and device health. My colleague Kalar Rajendiran also discussed the importance of next generation interconnects in this post. You can learn more about how multi-die packaging is enabling the next generation of AI SoCs and how Synopsys is helping to make this happen.

The Problem with Test for Heterogeneous Chip Design

It seems that this design style increases complexity across many vectors. In the new Synopsys article, advanced testing methodologies are discussed through the lens of needed improvements in automatic test equipment (ATE) to maintain signal integrity, accuracy, and performance.

The article points out that, in the heterogeneous design world, structural testing of devices requires high-bandwidth test data interfaces for at-speed testing, to confirm truly known-good devices (KGDs) and to achieve high test coverage and a low DPPM number in a reasonable timeframe. The piece points out that ensuring the highest test coverage for individual chiplets is crucial, before integrating them into complex 2.5D or 3D packages, to prevent yield fallout once they are combined with other chiplets in a complete package.

The article goes on to discuss the number of patterns required to test complex new devices. The patterns have increased significantly, and this is coupled with the fact that there are a limited number of general-purpose IO (GPIO) pins to perform the tests. Furthermore, GPIO speed restricts test data throughput, reducing overall coverage to test advanced designs efficiently. Though conventional high-speed I/O protocols (PCIe/USB) satisfy the bandwidth requirements, they require expensive hardware to set up.

Digging a bit deeper, in scenarios where the number of IO pins is limited, the bottleneck often lies in validation time, which extends the product development cycle and significantly increases test costs. The limited availability of high-bandwidth test access ports, especially in multi-die designs, highlights the need for a new kind of IO: one that can operate at much higher speeds than GPIO but adds no additional hardware components or complex protocol support in the initialization/calibration sequence, while maintaining signal integrity for the latest manufacturing processes.

How Synopsys Addresses the Test Barrier

The Synopsys article describes another well designed, holistic approach to the problem. Synopsys High-Speed Test GPIOs (HSGPIO) are optimally designed to meet high-speed test requirements. This versatile offering allows single IOs to be multiplexed based on usage: as test ports during manufacturing test, for high-speed clock observation during debug, and as configurable GPIO during production, making them unique in the industry in supporting comprehensive test needs.

The article provides a comprehensive overview of the Synopsys solution that includes a detailed discussion of:

  • The benefits of high-speed test IO for simplified and reliable testing
  • How to enhance IO performance and optimize power with multiple modes

The graphic at the top of this post illustrates where the Synopsys HSGPIO fits.

If you are planning to utilize heterogeneous design for your next project, the new article from Synopsys is a must-read. Don’t get caught at the end of a complex design process with test headaches. You can access your copy of the article, “Synopsys Test IO to Address the High-Performance, Efficient Data Transmission and Testing Requirements for HPC & AI Applications,” here. And that’s how Synopsys addresses the test barrier for heterogeneous integration.

Also Read:

Design-Technology Co-Optimization (DTCO) Accelerates Market Readiness of Angstrom-Scale Process Technologies

SNUG 2025: A Watershed Moment for EDA – Part 2

Automotive Functional Safety (FuSa) Challenges


Anirudh Keynote at CadenceLIVE 2025 Reveals Millennium M2000

by Bernard Murphy on 05-29-2025 at 6:00 am


Another content-rich kickoff covering a lot of bases under three main themes: the new Millennium AI supercomputer release, a moonshot towards full autonomy in chip design exploiting agentic AI, and a growing emphasis on digital twins. Cadence President and CEO Anirudh Devgan touched on what is new today, and also market directions beyond EDA and systems design, into physical AI (robots, drones) and sciences AI (molecular design). Jensen Huang (NVIDIA) joined Anirudh for a fireside chat preceding this keynote and Satya Nadella (Microsoft) provided a video endorsement, as did Charlie Kawwas (President of Semiconductor Solutions at Broadcom), reinforcing that Cadence is both serving and partnering with the world leaders in tech.

Millennium M2000 Release

Millennium M2000 is the next generation of the Cadence AI hardware acceleration platform, built on Nvidia Blackwell. For those keeping careful track, Anirudh and Jensen announced this platform in the fireside chat immediately before this keynote but I’m covering it here. Jensen also announced that he is going to buy 10 systems. Quite an endorsement.

I wrote about the first-generation Millennium Enterprise Multiphysics Platform last year, when it was clear it would immediately benefit computational fluid dynamics (CFD). Given how pervasive AI has become throughout the Cadence EDA product line, it is now apparent that Millennium will have an increasing role in chip design.

Hardware acceleration is a fundamental tier in the Cadence strategy, initially for (Boolean) logic simulation but now also in support of numeric simulation, which is where Millennium shines. Accelerating CFD is an obvious application for aerodynamic modeling, datacenter cooling, and hydrodynamic modeling for ships. In biosciences, molecular similarity screening can greatly reduce an initial pool of potential therapies, weeding out candidates with possibly strange behaviors or toxicities, before advancing to more detailed lab testing.

In semiconductor design, Cadence’s Cerebrus Intelligent Chip Explorer has been driving significant improvements in PPA through AI, from chip level to system level using numeric simulation methods, a natural partner with M2000. Other EDA applications in the Cadence tool suite, from 3D-IC and packaging design, thermal and signal integrity modeling to analog design and analysis, all benefit from AI, which can be further accelerated and scaled on the M2000 system.

A Moonshot to Full Autonomy in Chip Design

Anirudh positions AI advances in EDA following the automotive autonomy SAE model, from levels 1 (basic autonomy) to 5 (full autonomy). Nice analogy and useful to grade progress. Cadence started building their JedAI platform more than 5 years ago, to centralize data from spec through to manufacturing as a mechanism to support generative AI throughout the design cycle and across designs. Now AI is a daily reality in design flows they support, to the point that he feels much of what they offer is already at levels 2 or 3. Advances he announced in this talk stretch to levels 3 and 4, thanks in part to a big investment in agentic AI which makes more complex chains of reasoning possible.

Level 5 – full autonomy – he acknowledges is a moonshot, but like all moonshots it is worth attempting to see how far they can get. They can advance on multiple fronts: RTL generation, perhaps through a Copilot-style approach (assisted generation with RAG); leveraging proven IP, where Cadence now has a rapidly growing IP catalog to support this direction; AI-generated C, for which Cadence can use its proven C-to-RTL technologies; and generating testbenches, both UVM and Perspec. The following keynote from Uri Frank at Google touched on joint work between Cadence and Google in this area.

On leveraging proven IP, Cadence continues to invest significantly in growing their own IP catalog. Anirudh mentioned this area now has one of the biggest R&D teams in the company and an expanding portfolio across protocol IP and compute IP, including their Tensilica family and NPUs and their recent NeuroEdge introduction (on which I will write more in a following blog). They also recently announced their intent to acquire Secure-IC, a well-respected company in all areas of hardware security, from design to deployment and eventual decommissioning (I wrote about them recently).

Agentic AI is now available in Integrity 3D-IC, Cerebrus AI Studio and Virtuoso Studio. I’m looking forward to seeing applications in functional verification – maybe next year?

Continued Focus on Digital Twins

In semiconductor design, digital twins are a way of life. A critical component here is hardware-assisted logic verification. Last year customers added almost 460 billion gates in new Palladium/Protium capacity, easy to understand when you consider the sizes of designs, particularly AI designs, being built today around the world. It’s also not surprising to learn that the Palladium emulation platform, built on a Cadence-designed custom chip, is itself pacing those design sizes at 120 billion transistors per chip, 16 trillion transistors in a rack. A super-sized chip to verify super-sized chips!

Beyond semiconductor design, in the physical and bio world digital twins are less widely used, in part because it is more difficult to capture all the complexity of the mechanical, chemical and ambient constraints that must go along with that modeling. Difficult but becoming more approachable thanks to AI, agentic AI, and AI hardware accelerators.

One area that is advancing quite rapidly is in digital twins for datacenters. The Cadence Reality Digital Twin Platform is becoming a reality (😀) for datacenter design and upgrades, where the cost of power and cooling is an everyday headline topic given the growing volume of AI accelerator hardware. Thermal modeling depends critically on very detailed analysis and recommendations to manage thermal hot spots and cooling flows, whether ambient, forced air, or liquid. Rack placements, cooling unit placements, vent placements all depend on optimized modeling. AI to mitigate the impact of AI on power – physician heal thyself indeed. Cadence is closely partnered here with Nvidia.

Digital twins are becoming equally important for aircraft design, drone design (now pervasive in many applications), and robot design for automated factories, warehouses, and hospital logistics support. And of course they continue to matter for the design of cars, trucks, and other transportation options. Semiconductors and systems supporting these use cases will have much tighter power envelopes and significantly more mixed-signal content to support all the sensors these systems require, playing nicely to Cadence strengths from chip design up through system design.

Big picture views, hot AI investment, focus on growing markets in tech and partnering with tech leaders on system-level applications. Looks promising to me!


Semiconductor Market Uncertainty

by Bill Jewell on 05-28-2025 at 2:00 pm

2025 Semiconductor Market Forecast

WSTS reported 1st quarter 2025 semiconductor market revenues of $167.7 billion, up 18.8% from a year earlier and down 2.8% from the prior quarter. The first quarter of 2025 was weak for most major semiconductor companies. Ten of the sixteen companies in the table below had declines in revenue versus 4Q 2024, ranging from -0.1% for Broadcom to declines of over 20% for STMicroelectronics and Kioxia. Six companies reported revenue increases, ranging from 1.5% for Texas Instruments to 12% for Nvidia. The outlook for 2Q 2025 is mixed. Nine of the fourteen companies providing guidance expect revenue growth in 2Q 2025 versus 1Q 2025, with the highest from SK Hynix at 14.6%. MediaTek expects flat revenue. Four companies expect revenue declines, the largest being Kioxia at -10.7%.

In their conference calls with analysts, most of the companies cited economic uncertainty due to tariffs as a factor in their outlook. Companies dependent on the automotive and industrial markets are seeing recoveries. Strong AI demand is driving growth at Nvidia and the memory companies.

Key end equipment drivers of the semiconductor market are projected to slow in 2025 versus 2024. The server market was a major driver in 2024 with 73% growth in dollar value, according to IDC. 2025 is expected to show healthy growth, but at a much slower rate of 26% growth in dollars. IDC forecasts smartphone units will only grow 2.3% in 2025, down from 6.1% in 2024. PCs are the only major driver expected to show an increase in growth rate in 2025 at 4.3%, up from 1.0% in 2024, according to IDC. The end of support for Windows 10 and increased AI computing should drive PC growth despite tariff uncertainty. Worldwide production of light vehicles is projected to see a decline of 1.7% in 2025 following a 1.6% decline in 2024, according to S&P Global Mobility. Again, tariffs were cited as the major reason for the decline.

The International Monetary Fund (IMF) reduced its outlook for global GDP in April, citing the uncertainty around tariffs as the primary reason. The IMF expects world GDP growth to decelerate by half a percentage point, from 3.3% in 2024 to 2.8% in 2025. Both advanced economies and emerging/developing economies will see an overall slowing of growth. The biggest growth decelerations in terms of percentage-point change are expected in the U.S. (down 1.0), China (down 1.0) and Mexico (down 1.8).
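As a quick sanity check on the IMF figures above, the "half a percentage point" deceleration is simply the difference between the two growth rates; a minimal sketch using the world GDP numbers cited in the text:

```python
def pp_change(prev: float, curr: float) -> float:
    """Percentage-point change between two growth rates."""
    return round(curr - prev, 1)

# IMF world GDP growth forecasts cited above: 3.3% (2024) vs. 2.8% (2025)
world_2024, world_2025 = 3.3, 2.8
print(pp_change(world_2024, world_2025))  # -0.5, i.e. half a percentage point
```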

Recent forecasts for global semiconductor market growth in 2025 range from our 7% at Semiconductor Intelligence to 14% at TechInsights. Although TechInsights has the highest forecast, they project a moderate tariff impact would lower growth to 8% and a severe tariff impact would lower it to 2%.

Our 7% forecast for 2025 is primarily based on the uncertainty about tariffs. Tariffs may not affect semiconductors directly but could have a significant impact on key drivers such as automotive and smartphones. The weakness in the semiconductor market could carry into 2026, resulting in low single-digit growth.

Semiconductor Intelligence is a consulting firm providing market analysis, market insights and company analysis for anyone involved in the semiconductor industry – manufacturers, designers, foundries, suppliers, users or investors. Please contact me if you would like further information.

Also Read:

Semiconductor Tariff Impact

Weak Semiconductor Start to 2025

Thanks for the Memories


Design-Technology Co-Optimization (DTCO) Accelerates Market Readiness of Angstrom-Scale Process Technologies

by Kalar Rajendiran on 05-28-2025 at 10:00 am

Sassine Holding an 18A Test chip

Design-Technology Co-Optimization (DTCO) has been a foundational concept in semiconductor engineering for years. So, when Synopsys referenced DTCO in their April 2025 press release about enabling Angstrom-scale chip designs on Intel’s 18A and 18A-P process technologies, it may have sounded familiar—almost expected. But to dismiss it as “more of the same” would be to overlook just how far DTCO has come, and how dramatically Synopsys has elevated it. To gain deeper insights, I spoke with Prasad Saggurti, Executive Director of Product Management for Foundation and Security IP, and Ashish Khurana, Executive Director of R&D for Foundry Ecosystem at Synopsys.

DTCO: From Tactical Method to Strategic Enabler

In its earliest form, DTCO focused on adapting design techniques to meet the constraints of shrinking nodes. It was often a reactive, back-end effort to align standard cells and process rules with emerging technology limits. But as Moore’s Law encountered physical and economic headwinds, DTCO evolved into something far more comprehensive—an integrated, predictive approach to co-developing process and design in parallel.

Today, DTCO plays a central role in defining not just how chips are built, but what technologies are viable. And Synopsys, through its close collaboration with Intel, has taken it to a level where it’s shaping the future of Angstrom-era silicon.

DTCO Delivers Early on Intel 18A

The evolution of DTCO was in the spotlight during the 2025 Intel Foundry Direct Connect event. In an on-stage appearance alongside Intel CEO Lip-Bu Tan, Synopsys CEO Sassine Ghazi reached into his pocket, pulled out a chip, and held it up for the audience. That chip, he explained, was a Synopsys test chip built on Intel’s 18A process and had been produced a year earlier. It was proof that deep DTCO integration delivers real, early silicon results.

Such early silicon readiness would have been unthinkable in the traditional flow. It was made possible only because of a close, continuous DTCO collaboration between Synopsys and Intel—spanning process definition, tool enablement, IP development, and design methodology refinement.

DTCO in Action: RibbonFET, PowerVia, and the Intel 18A Breakthrough

This transformation is best illustrated through the tangible gains achieved during the Intel 18A development. Synopsys worked closely with Intel to align its design tools with Intel’s RibbonFET transistor architecture, enabling a reduction in timing closure cycles. This streamlined convergence and boosted productivity for design teams using the 18A platform.

At the same time, DTCO was instrumental in optimizing PowerVia, Intel’s backside power delivery system. By leveraging PowerVia-aware floorplanning within Synopsys’ place-and-route tools, the collaboration delivered an improvement in power efficiency—a result of co-optimized IR drop management and floorplan restructuring enabled by early-stage modeling.

PICO: DTCO’s Evolution into Full-Stack Optimization

To manage this expanded scope, Synopsys introduced PICO—short for Process-IP-Co-Optimization. PICO represents a structured, pre-silicon flow that spans process assumptions, cell library development, IP integration, toolchain validation, and even 3DIC packaging studies. It ensures that all components, from transistors to tools, are developed in tandem under real-world constraints.

With PICO:

  • TCAD simulations inform device models early in the cycle.
  • Design rule validation occurs before masks are built.
  • IP is co-architected with process and performance trade-offs in mind.
  • CAD tools are aligned with structures like RibbonFET from the outset.

The Enablement Readiness Cycle: Getting to Market Faster

This all feeds into the enablement readiness cycle—a core strategy for delivering validated design flows, certified IP, and process-aligned methodologies in sync with foundry technology ramps. For Intel 18A, Synopsys’ tools and libraries were ready before silicon. This closed-loop cycle is central to achieving fast, low-risk product development at Angstrom-scale nodes.

Summary

Modern-day DTCO is a competitive strategy for the Angstrom-scale era. Through strategic, collaborative partnerships between foundries and design-enablement ecosystem partners such as Synopsys, DTCO has become a full-stack, front-loaded discipline capable of delivering real silicon on cutting-edge process nodes well ahead of schedule.

To learn more, visit Synopsys’ DTCO Solutions page.

Also Read:

Intel Foundry is a Low Risk Alternative to TSMC

Intel’s Foundry Transformation: Technology, Culture, and Collaboration

Intel’s Path to Technological Leadership: Transforming Foundry Services and Embracing AI