

Podcast EP302: How MathWorks Tools Are Used in Semiconductor and IP Design with Cristian Macario
by Daniel Nenni on 08-08-2025 at 10:00 am

Dan is joined by Cristian Macario, senior technical professional at MathWorks, where he leads global strategy for the semiconductor segment. With a background in electronics engineering and over 15 years of experience spanning semiconductor design, verification, and strategic marketing, Cristian bridges engineering and business to help customers innovate using MathWorks tools.

Dan explores with Cristian how the popular MathWorks portfolio of tools, including Simulink, is used in semiconductor and IP design. Cristian describes how these tools are applied across the complete design process, from architecture to pre-silicon to post-silicon, and walks through several use cases such as AI/datacenter design and the integration of analog/digital design with real-world data.

MathWorks can help develop architectural strategies to optimize analog and mixed-signal designs for demanding applications. The early architectural models developed with MathWorks tools can be refined as the design progresses, and those models can be reused in later phases of design validation to ensure the final silicon implementation follows the original architectural specifications. Cristian also describes use models where semiconductor and IP providers use MathWorks models as executable specifications for their products to ensure effective and optimal use of those products.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.



Making Intel Great Again!
by Daniel Nenni on 08-08-2025 at 6:00 am

Intel 3.0 Logo SemiWiki

Lip-Bu Tan made it very clear on his most recent call that Intel will not continue to invest in leading edge semiconductor manufacturing solo. Lip-Bu is intimately familiar with TSMC and that is the collaborative business model he envisions for Intel Foundry. I support this 100%. Intel and Samsung have tried to compete head-to-head with TSMC in the past using the IDM mentality and have failed so there is no need to keep banging one’s head against that reinforced concrete wall.

Lip-Bu Tan is clearly throwing down the gauntlet like no other Intel CEO has done before. If we want leading-edge semiconductor manufacturing to continue to be developed in the United States, we all need to pitch in and help. Are you listening, politicians? Are you listening, Apple, Qualcomm, Broadcom, Marvell, MediaTek, Amazon, Google, Microsoft, et al.?

I’m not sure the media understands this. That, and the fact that Lip-Bu under promises and over delivers.

There was some pretty funny speculation after the Intel investor call, including some dire predictions and ridiculous comments from so-called “sources”. This has all been discussed in the SemiWiki Experts Forum, but let me recap:

First the absolutely most ridiculous one:

“An industry source told the publication that President Donald Trump has mandated TSMC fulfill two conditions if Taiwan is to see any tariff reduction:

  • Buy a 49% stake in Intel
  • Invest a further $400 billion in the US”

To be clear, TSMC investing in Intel will not help Intel. TSMC investing another $400B in the US will not help Intel. This is complete nonsense. The best comment came from my favorite analyst, Stacy Rasgon (Bernstein & Co.). He estimated that Intel has no more than 18 months to “land a hero customer on 14A,” which I agree with completely, and so does Elon Musk.

Samsung to Produce Tesla Chips in $16.5 Billion Multiyear Deal

“This is a critical point, as I will walk the line personally to accelerate the pace of progress … the fab is conveniently located not far from my house.” – Elon Musk

Of course, everyone wanted to know why Intel missed this mega deal since it is exactly what Intel needs, a hero customer. Personally, I think it is a huge distraction having Elon Musk intimately involved in your business which could end tragically. That is not a risk I would take as the CEO of Intel unless it was THE absolute last resort, which it probably is for Samsung Foundry. Samsung also has plenty of other things to sell Tesla (Memory, Display Tech, Sensors, etc…) so this is a better fit than TSMC or Intel Foundry.

I do hope this deal is successful for all. The foundry race needs three fast horses. The semiconductor industry thrives on innovation and innovation thrives when there is competition, absolutely.

On the positive side of this mega announcement, hopefully other companies will step up and make similar multi-billion-dollar partnerships with Intel Foundry if only to butt egos with Elon Musk. Are you listening Jeff Bezos? How about investing in the industry that helped you afford a $500M yacht? The same for Bill Gates, where would Microsoft be without Intel? How about you Mark Zuckerberg? Where would we all be without leading edge semiconductor manufacturing? And where will we be without access to it in the future because that could certainly happen.

If we want the US to continue to lead semiconductor manufacturing like we have for the past 70+ years we need support from politicians, billionaires, the top fabless semiconductor companies, and most certainly Intel employees.

What should Intel executives do? Simple, just follow Lip-Bu’s leadership and be transparent, play the cards you are dealt, deliver on your commitments, and make Intel great again. Just my opinion of course.

Just a final comment on the most recent CEO turmoil:

Lip-Bu Tan is known all over the world. He was on the Intel Board of Directors before becoming CEO so the Intel Board certainly knows him. The CEO offer letter specifically allowed Lip-Bu to continue his work with Walden International. Lip-Bu founded Walden 38 years ago and it is no secret as to what they do. Walden has invested in hundreds of companies around the world, and yes some of them are in China, but the majority are here in the United States.

What happens next? It will be interesting to see if the semiconductor industry allows political interference in choosing our leadership. Hopefully that is not the case because if it is we are in for a very bumpy ride. Intel has no cause to remove Lip-Bu Tan so if there is a separation it will be on Lip-Bu’s terms. I for one hope that is not the case.

My commitment to you and our company. A message from Intel CEO Lip-Bu Tan to all company employees.

The following note from Lip-Bu Tan was sent to all Intel Corporation employees on August 7, 2025:

Dear Team, 

I know there has been a lot in the news today, and I want to take a moment to address it directly with you.  

Let me start by saying this: The United States has been my home for more than 40 years. I love this country and am profoundly grateful for the opportunities it has given me. I also love this company. Leading Intel at this critical moment is not just a job – it’s a privilege. This industry has given me so much, our company has played such a pivotal role, and it’s the honor of my career to work with you all to restore Intel’s strength and create the innovations of the future. Intel’s success is essential to U.S. technology and manufacturing leadership, national security, and economic strength. This is what fuels our business around the world. It’s what motivated me to join this team, and it’s what drives me every day to advance the important work we’re doing together to build a stronger future.

There has been a lot of misinformation circulating about my past roles at Walden International and Cadence Design Systems. I want to be absolutely clear: Over 40+ years in the industry, I’ve built relationships around the world and across our diverse ecosystem – and I have always operated within the highest legal and ethical standards. My reputation has been built on trust – on doing what I say I’ll do, and doing it the right way. This is the same way I am leading Intel. 

We are engaging with the Administration to address the matters that have been raised and ensure they have the facts. I fully share the President’s commitment to advancing U.S. national and economic security, I appreciate his leadership to advance these priorities, and I’m proud to lead a company that is so central to these goals. 

The Board is fully supportive of the work we are doing to transform our company, innovate for our customers, and execute with discipline – and we are making progress. It’s especially exciting to see us ramping toward high-volume manufacturing using the most advanced semiconductor process technology in the country later this year. It will be a major milestone that’s a testament to your work and the important role Intel plays in the U.S. technology ecosystem.  

Looking ahead, our mission is clear, and our opportunity is enormous. I’m proud to be on this journey with you. 

Thank you for everything you’re doing to strengthen our company for the future.  

Lip-Bu 

https://newsroom.intel.com/corporate/my-commitment-to-you-and-our-company

Also Read:

Why I Think Intel 3.0 Will Succeed

Should Intel be Split in Half?

Should the US Government Invest in Intel?



Agentic AI and the EDA Revolution: Why Data Mobility, Security, and Availability Matter More Than Ever
by Michael Johnson on 08-07-2025 at 10:00 am

NetApp Agentic AI

The EDA (Electronic Design Automation) and semiconductor industries are experiencing a transformative shift—one that’s being powered by the rise of Agentic AI. If you attended this year’s SNUG, CDNLive, and/or DAC 2025, you couldn’t miss it: agentic AI was the hot topic, dominating keynotes, demos, and booth conversations from start-ups to the “Big 3” (Synopsys, Cadence, Siemens EDA).

But beyond the buzz, there’s a real, urgent need driving this adoption. Chip designs are growing exponentially in complexity, and the pool of skilled engineers isn’t keeping pace. The only way to bridge this productivity gap is with smarter automation—enter agentic AI. But for agentic AI to deliver on its promise, the underlying data infrastructure must be up to the task. That’s where NetApp, with ONTAP and FlexCache, comes in.

What Is Agentic AI?
What is agentic AI? In short, it’s the next step in AI evolution—systems that don’t just automate tasks but act as reasoning agents. Agentic AI uses specialized AI agents that reason and iteratively plan to autonomously solve complex, multi-step problems, following a four-step process: Perceive, Reason, Act, and Learn.
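
To make that four-step loop concrete, here is a minimal Python sketch of an agentic loop. Every object and method in it (the llm, tools, and memory objects and their calls) is a hypothetical placeholder for illustration, not any vendor's actual API.

```python
# Minimal sketch of the Perceive -> Reason -> Act -> Learn loop described above.
# All objects and methods (llm, tools, memory, plan_next_step, ...) are hypothetical
# placeholders, not any vendor's actual API.

def run_agent(goal, llm, tools, memory, max_steps=10):
    for step in range(max_steps):
        # Perceive: gather the current state (prior results, design data, logs, ...)
        observation = {"goal": goal, "history": memory.recent(), "step": step}

        # Reason: have the LLM plan the next action toward the goal
        plan = llm.plan_next_step(observation)   # e.g. {"tool": "run_lint", "args": {...}}
        if plan.get("done"):
            return plan.get("result")

        # Act: execute the chosen tool with the planned arguments
        result = tools[plan["tool"]](**plan.get("args", {}))

        # Learn: record the outcome so later iterations can build on it
        memory.store(step=step, plan=plan, result=result)

    return None  # step budget exhausted before the goal was reached
```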

Agentic AI: More Than Just Hype

For example, one new EDA startup, ChipAgents.ai, demonstrated a live demo in which agentic AI read a 300-page ARM processor spec and, in real time, generated a detailed test plan and verification suite. As someone who’s been in the trenches of chip verification, I can say: this is not an incremental improvement. This is game-changing productivity. The benefits are clear:

  • Automates the most tedious engineering tasks
  • Bridges the engineering talent gap
  • Enables faster, more reliable chip design cycles

These benefits are only realized if your data is where it needs to be, when it needs to be there, and always secure.

Microsoft kicked off DAC with a talk by William Chappell, who presented reasoning agents in the EDA design flow and introduced Microsoft’s Discovery platform. The Discovery platform for agentic AI is an advanced hybrid-cloud environment designed to accelerate the development and deployment of agentic AI workflows. It uses NetApp’s ONTAP FlexCache technology to continuously and securely keep on-prem design data in sync with Microsoft’s Azure NetApp Files volumes in the cloud.

Why Data Mobility, Security, and Availability Are Critical for Agentic AI

1. Data Mobility: The Heart of Hybrid Cloud AI

Agentic AI requires massive GPU resources—resources that are often impractical to build or scale in existing datacenters due to the power demands of H100, H200, or newer GPU systems. Requirements for high-power racks, water cooling, and rack space make adoption challenging, and that’s before considering the shift from traditional networking to InfiniBand. That’s why most early experimentation and deployment of agentic AI for EDA will happen in the cloud.

But here’s the challenge: EDA workflows generate and process huge volumes of data that need to move seamlessly between on-prem and cloud environments. Bottlenecks or delays can kill productivity and erode the benefits of AI.

NetApp ONTAP and FlexCache are uniquely positioned to solve this. With ONTAP’s unified data management and FlexCache’s ability to cache active datasets wherever the compute is, engineers get instant, secure access to the data they need, whether they’re running workloads on-prem, in the cloud, or both.

FlexCache in Action:
FlexCache can securely, continuously, and instantly keep all design data in sync between on-prem and cloud. This gives cloud-based AI agents real-time, secure access to design data from the active design work running on-prem. In the Act stage, agents can then automatically run EDA tools either on-prem or in the cloud, based on the plan the AI agent generated.

2. Data Security: Protecting Your IP in a Distributed World

EDA data is among the most sensitive in the world. Intellectual property, proprietary designs, and verification strategies are the crown jewels of any semiconductor company. Moving this data between environments introduces risk—unless you have robust, enterprise-grade security.

ONTAP’s security features, from encryption at rest and in transit to advanced access controls and audit logging, ensure that your data is always protected, no matter where it lives or moves. FlexCache maintains these security policies everywhere you need your data, so you never compromise on protection, even as you accelerate workflows.

3. Data Availability: No Downtime, No Delays

Agentic AI thrives on data availability. If an AI agent can’t access the latest design files or verification results, productivity grinds to a halt. In a world where chip tape-outs are measured in millions of dollars per day, downtime is not an option.

ONTAP’s legendary reliability and FlexCache’s always-in-sync architecture ensure that your data is available whenever and wherever it’s needed. Whether you’re bursting workloads to the cloud or collaborating across continents, your AI agents—and your engineers—can count on NetApp.

NetApp: The Foundation for Agentic AI in EDA

Agentic AI is set to reshape EDA and semiconductor design, closing the productivity gap and enabling new levels of automation and innovation. But none of this is possible without the right data infrastructure.

Let’s face it: most EDA datacenters today aren’t ready to become “AI Factories,” as NVIDIA’s Jensen Huang and industry experts predict will be required. Customers are unlikely to invest in new on-prem infrastructure until agentic AI solutions mature and requirements are clear. That’s why hybrid cloud is the go-to strategy—and why NetApp is uniquely positioned to help.

  • ONTAP is the only data management platform integrated across all three major clouds’ EDA reference architectures.
  • FlexCache is the most widely adopted hybrid cloud solution for high-performance, always-in-sync data.
  • No other vendor offers this level of hybrid cloud readiness, flexibility, and security.

Even if your organization isn’t ready for the cloud today, why invest in legacy storage that can’t support your hybrid future? The next wave of EDA innovation will be powered by agentic AI, and it will demand data mobility, security, and availability at unprecedented scale. NetApp is ready—are you?

Choose NetApp—and be ready for the future of EDA.

Ready to accelerate your agentic AI journey? Learn more about NetApp ONTAP and FlexCache for EDA design workflows at NetApp.com.

Also Read:

Software-defined Systems at #62DAC

What is Vibe Coding and Should You Care?

Unlocking Efficiency and Performance with Simultaneous Multi-Threading



WEBINAR: What It Really Takes to Build a Future-Proof AI Architecture?
by Don Dingee on 08-07-2025 at 6:00 am

AI Inference Use Cases from Edge to Cloud

Keeping up with competitors in many computing applications today means incorporating AI capability. At the edge, where devices are smaller and consume less power, the option of using software-powered GPU architectures becomes unviable due to size, power consumption, and cooling constraints. Purpose-built AI inference chips, tuned to meet specific embedded requirements, have become a not-so-secret weapon for edge device designers. Still, some teams are just awakening to the reality of designing AI-capable chips and have questions on suitable AI architectures. Ceva recently hosted a webinar featuring two of its semiconductor IP experts, who discussed ideas for creating a future-proof AI architecture that can meet today’s requirements while remaining flexible to accommodate rapid evolution.

A broader look at a wide-ranging AI landscape

AI is an enabling technology that powers many different applications. The amount of chip energy consumption and area designers have to work with to achieve the necessary performance for an application can vary widely, and, as with previous eras of compute technology, the roadmap continues to trend toward the upper right as time progresses. Ronny Vatelmacher, Ceva’s Director of Product Marketing, Vision and AI, suggests the landscape may ultimately include tens of billions of AI-enabled devices for various applications at different performance levels. “The cloud still plays a role for training and large-scale inference, but real-time AI happens at the edge, where NPUs (neural processing units) deliver the required performance and energy efficiency,” he says.

At the highest performance levels in the cloud, a practical AI software framework speeds development. “Developers today don’t have to manage the complexity of [cloud] hardware,” Vatelmacher continues. “All of this compute power is abstracted into AI services, fully managed, scalable, and easy to deploy.” Edge devices with a moderate but growing performance focus prioritize the efficient inferencing of models, utilizing techniques such as NPUs with distributed memory blocks, high-bandwidth interconnects, sparsity, and coefficient quantization to achieve this goal. “[Generative AI] models are accelerating edge deployment, with smaller size and lower memory use,” he observes. Intelligent AI-enabled edge devices offer reduced inference latency while maintaining low power consumption and size, and can also enhance data privacy since less raw data moves across the network. Vatelmacher also sees agentic AI entering the scene, systems that go beyond recognizing patterns to planning and executing tasks without human intervention.
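
As a generic illustration of the coefficient quantization mentioned above (a textbook symmetric int8 scheme, not Ceva's implementation), a few lines of Python show the idea:

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: w ~= scale * q, with q in [-127, 127]."""
    scale = np.max(np.abs(weights)) / 127.0                      # largest weight maps to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(256).astype(np.float32)                      # example FP32 coefficients
q, s = quantize_int8(w)
print("worst-case reconstruction error:", np.max(np.abs(w - dequantize_int8(q, s))))
```

Storing the int8 values plus one scale factor cuts coefficient memory by roughly 4x versus FP32, which is the kind of saving that helps generative models fit edge-sized memories.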

How do chip designers plan an AI architecture to handle current performance but not become obsolete in a matter of 12 to 18 months? “When we talk about future-proofing AI architectures, we’re really talking about preparing for change,” Vatelmacher says.

A deep dive into an NPU architecture

The trick lies in creating embedded-friendly NPU designs, smaller in area and lower in power, that aren’t overly optimized for a specific model (which may fall out of favor as technology evolves) but instead rest on a resilient architecture. Assaf Ganor, Ceva’s AI Architecture Director, cites three pillars: scalability, extendability, and efficiency. “Resource imbalance occurs when an architecture optimized for high compute workloads is forced to run lightweight tasks,” says Ganor. “A scalable architecture allows tuning the resolution of processing elements, enabling efficient workload-specific optimization across a product portfolio.” He presents a conceptual architecture created for the Ceva-NeuPro-M High Performance AI Processor, delving deeper into each of the three pillars and highlighting blocks in the NPU and their contributions.

Ganor raises interesting points about misleading metrics. For instance, low power does not necessarily equate to efficiency; it might instead mean low utilization. Inferences per second (IPS) by itself can also be deceptive, without normalization for silicon area or energy used. He also emphasizes the critical role of the software toolchain in achieving extensibility and discusses how NeuPro-M handles quantization and sparsity. Some of the ideas are familiar, but his detailed discussion reveals Ceva’s unique combination of architectural elements.
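
To see why raw inferences per second can mislead, here is a trivial normalization with made-up numbers (placeholders only, not Ceva benchmark data):

```python
# Hypothetical accelerator datapoints, purely for illustration -- not Ceva benchmarks.
candidates = {
    "NPU_A": {"ips": 1200, "power_w": 3.0, "area_mm2": 6.0},
    "NPU_B": {"ips": 1500, "power_w": 5.0, "area_mm2": 10.0},
}

for name, c in candidates.items():
    print(f"{name}: {c['ips']:>5} IPS, "
          f"{c['ips'] / c['power_w']:.0f} IPS/W, "
          f"{c['ips'] / c['area_mm2']:.0f} IPS/mm^2")
# NPU_B wins on raw IPS, yet NPU_A is the more efficient design per watt and per mm^2.
```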

The webinar strikes a good balance between a market overview and a technical discussion of future-proof AI architecture. It is a refreshing approach, taking a step back to see a broader picture and detailed reasoning about design choices. There’s also a Q&A segment captured during the live webinar session. Follow the link to register and view the on-demand webinar.

Ceva Webinar: What it really takes to build a future-proof AI architecture?

Also Read:

WEBINAR: Edge AI Optimization: How to Design Future-Proof Architectures for Next-Gen Intelligent Devices

Podcast EP291: The Journey From One Micron to Edge AI at One Nanometer with Ceva’s Moshe Sheier

Turnkey Multi-Protocol Wireless for Everyone



Software-defined Systems at #62DAC
by Daniel Payne on 08-06-2025 at 10:00 am

Siemens panel on software-defined systems

Modern EVs are prime examples of software-defined systems, so I attended a #62DAC panel session hosted by Siemens to learn more from experts at Collins Aerospace, Arm, AMD, and Siemens. The panelists span several domains, and what follows is my paraphrase of the discussion topics.

Panel Discussion

Q: How does software-defined differ from HW/SW co-design?

Matthew – It’s really a system process vs HW/SW integration and shift left.

Suraj – For automotive using SW updates it is a SW-defined architecture, co-design is ensuring HW is aware of what SW needs to run making HW purpose-built.

Alex – The old way was to design the HW first, then the SW second. It’s more flexible to have a SW-defined approach. SW flexibility creates more verification issues.

Q: Is this different from HW/SW codesign?

David – Yes, the 15-20 year old HW/SW codesign technology is different from SW-defined, because infrastructure needs to be built for SW-defined. No more using spreadsheets and making lots of iterations; systems must be designed first and are more advanced now.

Suraj – In the automotive and aerospace domains the designs are far more complex than a laptop, so SW-defined is a different approach.

Q: How does this approach affect HW design?

Alex – For pre-silicon validation we’re using emulation and virtual modeling now.  For SW-defined we’re working on SW much earlier, and doing more what-if analysis.

David – That what-if scenario is critical. Over time all systems degrade, so validating during the lifetime of the system is important. Using a SW-defined environment enables that validation.

Q: Aerospace requires critical safety, so how do they design?

Matthew – Most real-time systems use a SW-defined design approach.

Suraj – Our ARM compute IP can go into a car today, but we cannot predict how it will be used five years from now. We model the system and run much SW on that model to know if it meets requirements.

Q: How does using a SW-defined system compare on development time?

David – Using PAVE 360 shows how to run systems in real time, like a vehicle that can be virtually tested, saving millions of driving miles.

Alex – SW-defined allows us to validate silicon early for automotive as the trend moves toward centralized compute versus distributed ECUs, using pre-validated silicon solutions.

Suraj – We’ve been supplying automotive IP since 2018; in 2021 the IP was used in developer platforms, then in 2022 for SW developers. In 2024 the IP launch also had virtual platforms and developer tools at the same time, so there is no more 18-month delay. All SW developers could start with new ARM IP announcements, making early validation possible much sooner.

Matthew – The costs of designing new devices are only going up and re-spins are prohibitively expensive, so virtual platforms prove out our SW workloads much earlier.

David – You can buy ARM IP, then configure it to meet the workload needs of customers you don’t yet know. With SW-defined systems you can configure them, try workloads, then determine the ideal configuration the first time, saving time and lowering risk.

Q: Does SW-defined affect the lifespan of new devices?

Matthew – We ask questions, like how do you provision for future workloads in 10 years from now? How many programmable accelerators do I have?

Suraj – Allowing SW upgrades in the HW increases the product lifespan and creates a stickiness to that HW platform.

Q: What kind of HW monitors are you using?

Alex – Yes, we can stream data over the system’s lifespan to inform us of their use. Pre-silicon analysis helps us guide our SW engineers on how to make things run smoothly in the field. In-silicon telemetry is important.

Q: How do you verify SW-defined changes?

David – We’ve implemented a CI/CD approach with continuous validation. SW-defined requires unit testing and running emulation, and on top of that a system-level approach is put in place pre-silicon to verify the output. The PAVE 360 scenario was partially created manually and virtually, as shown in the booth. If the car is driving in the rain, does the car stop in time for safety? SW-defined products are a new way of system-level verification and validation, allowing testing against requirements.

Q: Can we keep up with new developments, like in AI?

Alex – Our AI strategy is general purpose AI acceleration and bespoke edge solutions too. There’s always a newer, faster generative AI coming out in the future.

Suraj – We did an ARM release where new applications were driven by LLM requirements, so we can replace the user manual in a car with a chatbot that responds to questions. The SDV infrastructure has to be in place first, then AI apps run efficiently on some NPU. The question is: how optimally can the workloads be run?

Alex – Our pre-silicon work is done on platforms, where data centers are built up to run new product models, even simulating a million car miles. We want that workload run in an accelerated platform in the data center for automotive market cases.

Q: How do you make backward compatibility work?

Suraj – Yes, standardization is needed to ensure backward compatibility of features. Using virtual platforms enables compatibility testing. Standardization is a must have.

David – In the traditional flow both SW and HW are bonded together, but in SW-defined they are disjoint but leveraged. Trends are shifting towards a software-defined vehicle with OTA updates in automotive. With simulation you know if the OTA update works before shipping it.

Q: Matthew, the aerospace industry has what requirements?

Matthew – For OTA updates they need to be valid and secure for aerospace use. Ensuring that the intended behavior is maintained.

Q: How does security impact systems design?

Matthew – Quick reaction to security threats is enabled by SW-defined products.

Suraj – Using a SW-defined infrastructure has security requirements, so I partition my HW and SW into a system, knowing how secure workloads are to be safely run.

Q: What does SW enabled design change?

Alex – With new products we use virtualization where components are run in separate environments, at AMD we have done this with secure workloads for a while, like in banking cases. We’re trading off how fast we can release new products vs security concerns.

Q: David, how does shift-left work with Software defined?

David – Our engineers break the product lifecycle down into components, so safety and security are no longer afterthoughts but are considered from the very start of a new product.

Suraj – A new car app can make payments without using your phone. OK, I gave it a try for 8 months, but it never worked, because the security standards were never met.

Q: We update phone apps, but what about your vehicle updates and multi-die systems?

Suraj – In cars the real time operations used microcontrollers that were distributed, but with new architectures they tend to have fewer and bigger compute processors, so that industry is progressing.

David – In automotive and aerospace they are bringing in new competition from overseas, so features become a factor and that new feature has to actually work in context and in the field. New business models are emerging, but errors will be made along the way, so if you can model the system in advance, that’s a benefit.

Q: What opportunities are now available?

Matthew – We can now add new features to platforms where we don’t need to pull the HW box from the airplane.

Alex – There are dramatic shifts in new features over time, enabled by modeling.

Suraj – Yes, new features and abilities are enabled, plus we can start to solve more complex problems like ADAS in automotive.

Alex – The emergence of system level modeling has benefits as complexity of vehicles increases. Using digital twin capabilities is fundamental for new systems definition.

Q: What about the challenges of new data centers?

Alex – Energy is the big problem to supply the new data centers.

David – Just using just simulation isn’t enough for data centers, so the models have to be higher level to be viable.

Q: How about aerospace challenges?

Matthew – We’re working on SW approaches ahead of real silicon being available where we can partition the architecture, then optimize for workloads.

Q: What new skills are required?

David – Smartphones have been designed with agile methods for a while, but automotive is just starting to use modern SW methodologies. Many industries do require new skill sets for systems design.

Suraj – Cross-pollination across divisions within a company is required to be successful.

Alex – AMD has both SW and HW teams collaborating differently for AI projects, requiring more integration for new CI/CD cycles.

Q: With industry standards, where are we at?

Alex – Natural boundaries emerge between HW and SW standards; chiplets with high-speed SerDes require verifying and validating the HW.

David – There’s an assumption of silicon flexibility, but not all companies are ready for that kind of design with low unit volumes. Using chiplets in combination for your new product is attractive for many industries. Using a SW-defined environment is natural for Chiplets and 3DIC usage.

Matthew – It’s a HW problem and the standards identify how the ecosystem gets built, plus you also want SW stack reuse too.

Suraj – SOAFEE is a case in point where we standardize SW foundations, and all ARM-based SoCs are consistent for security. Virtual platforms are enabled with standard SW frameworks, and re-use happens more quickly with standardization.

Open Q&A

Q: The automotive design cycle used to be 7 years, so what is the new design cycle?

David – One German OEM asked us to show them how to architect a system. In 18 months a new design was done, not in 7 years.

Suraj – China is massively changing the design scene. One customer did a first-generation SoC in just 12 months. They hired 375 new people, did HW and SW design, ran pre-silicon validation, and it was 18 months from concept to early production. China is not cutting corners in its design cycles.

David – We’ve visited Chinese customers and they all wrote virtual simulations. Autos are like putting wheels on a smartphone, just another smartphone environment, so the SW-defined approach produced great results.

Suraj – In China the pace is go, go, go. We see short development cycles for Chinese automotive as the new drivers.

Alex – Progress in system design has gone from single CPUs, to multi-core, to 256 cores, bigger caches, added AI engines, and laptops with many CPUs. The system complexity in laptops is insane. With SDV and data center modeling, expanding systems from single chips to whole systems, we see new methodologies with ever-shorter design times under a year.

Summary

To watch the complete one-hour video, view it on YouTube.




From Two Dimensional Growth to Three Dimensional DRAM
by Admin on 08-06-2025 at 10:00 am


Epitaxial stacks of silicon and silicon germanium are emerging as a key materials platform for three dimensional dynamic random access memory. Future DRAM will likely migrate from vertical channels to horizontally stacked channels that resemble the gate all around concept in logic. That shift demands a starter material made of many repeating silicon and silicon germanium layers with sharp interfaces, precise thickness control, and a strain state that survives hundreds of repetitions. Meeting all three requirements at production scale on 300 millimeter wafers is the central challenge.

A representative target is a multi stack with at least one hundred bilayers. One practical recipe uses about sixty five nanometers of silicon and ten to fifteen nanometers of silicon germanium per period, with a germanium fraction near twenty percent. The higher germanium level is not arbitrary. It is chosen to enable a highly selective lateral removal of the silicon germanium during device patterning, which in turn opens space for the final silicon channels. The problem is that silicon and silicon germanium do not share the same lattice constant. The mismatch creates a constant drive to relax strain through misfit dislocations or through three dimensional island growth at elevated temperatures. Theory and prior experiments suggest that the total thickness of the silicon germanium far exceeds the critical limit for full relaxation, so success depends on suppressing every pathway that would let the lattice give way.
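
For a rough sense of that mismatch, here is a back-of-the-envelope estimate using textbook lattice constants and a linear Vegard's-law interpolation (my own illustration, not a calculation from the article):

```python
# Back-of-the-envelope lattice mismatch of Si(1-x)Ge(x) on Si via Vegard's law.
a_si, a_ge = 5.431, 5.658            # bulk lattice constants in angstroms (textbook values)
x = 0.20                             # germanium fraction near twenty percent

a_sige = (1 - x) * a_si + x * a_ge   # linear interpolation between the two lattice constants
mismatch = (a_sige - a_si) / a_si
print(f"a(SiGe) ~ {a_sige:.3f} A, mismatch ~ {mismatch * 100:.2f} %")   # roughly 0.8 percent
```

Even at well under one percent per layer, repeating the silicon germanium film a hundred or more times pushes the cumulative strained thickness far past the equilibrium critical limit, which is why every relaxation pathway has to be suppressed.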

Material studies show that the inner portion of the wafer can remain fully strained even for stacks as large as one hundred twenty bilayers with ten nanometers of silicon germanium. Cross sectional transmission electron microscopy reveals two dimensional layer growth and smooth interfaces. Reciprocal space maps show vertically aligned satellite peaks, a signature of coherent strain, and energy dispersive x ray spectroscopy confirms composition uniformity from top to bottom. Achieving this result requires low growth temperature, extremely clean gases, and an oxide free starting surface so that the energy barrier to breaking bonds at the interface remains high.

Near the wafer rim the picture changes. The geometry at the bevel lowers the barrier for misfit formation, so dislocations appear at the edge even when the interior is perfect. The density of these edge defects grows with the number of bilayers and with the lattice mismatch. Two practical mitigations exist. One can reduce the germanium content, which directly lowers the mismatch. Alternatively one can co alloy the silicon germanium with a small amount of carbon, which produces a similar effect on the lattice parameter. Experiments show that both approaches reduce edge relaxation. Adding carbon also sharpens interfaces at the silicon on silicon germanium transition. However carbon has a limited solubility in the alloy, so there is a concentration beyond which crystalline quality begins to suffer.

Uniformity over time and across the wafer is the second pillar. Thick stacks exhibit a gradual drift in period thickness from bottom to top and a worsening of within wafer uniformity toward the edge. A key culprit is the thermal environment of the reactor. During long runs the quartz tube accumulates a film that absorbs lamp energy, warms the tube, and perturbs the gas temperature field. The result is a slow change in growth rate that maps into both layer to layer and lateral non uniformities. Tools that actively control the tube temperature reduce these drifts. Independent tuning of hydrogen carrier flow and the temperature profile can push the relative standard deviation of layer thickness to near one percent.

Chemistry choices matter as well. To keep interfaces sharp at moderate temperature, silicon germanium growth benefits from dichlorosilane and germane while the silicon layers use silane. The lower temperature suppresses silicon and germanium intermixing, as confirmed by secondary ion mass spectrometry, and reduces the risk of three dimensional islanding. At the same time the process must avoid amorphous or polycrystalline defects that can seed dislocations. Maintaining a chlorine passivated surface during step transitions helps remove early stage imperfections and keeps the surface ready for two dimensional growth.

The overall lesson is clear. Three dimensional DRAM is feasible with epitaxial silicon on silicon germanium stacks that reach one hundred or more periods, but only when growth conditions, reactor thermals, and alloy choices are co designed. The interior can be kept fully strained, the edge can be tamed by lowering mismatch, and the interfaces can remain sharp enough for selective etching. When these threads are woven together, materials science delivers a realistic foundation for the next DRAM architecture.

You can read the source article here.

Also Read:

Breaking out of the ivory tower: 3D IC thermal analysis for all

PDF Solutions and the Value of Fearless Creativity

Streamlining Functional Verification for Multi-Die and Chiplet Designs



What is Vibe Coding and Should You Care?
by Bernard Murphy on 08-06-2025 at 6:00 am

vibe coding

This isn’t a deep article. I only want to help head off possible confusion over this term. I have recently seen “vibe coding” pop up in discussions around AI for code generation. The name is media-friendly, giving it some stickiness in the larger non-technical world, always a concern when it comes to anything AI. The original intent is a fast way to prototype web-based apps, particularly the look and feel of the interface. But given a widespread tendency, even among those of us who should know better, to conflate any specific AI-related idea with everything AI, I offer my take here to save you the trouble of diving down this particular rabbit hole.

A brief summary

This wasn’t a marketing invention. Andrej Karpathy, a guy with serious AI credibility (OpenAI and Tesla), came up with the term to describe a method of using an LLM-based code generator (voice activated) to describe and change what he wants without worrying about the underlying code generation. It’s a fast way to blow past all the usual distractions: letting the LLM fix bugs, not even having to type LLM prompts, surrendering to the vibe. He will know what he wants when he sees it.

Karpathy stresses that this is a quick and fun way to build throwaway projects, not a way to build serious code. But see above. AI already carries an aura of magic for some. Even a hint that it might let non-experts play the startup lottery must be difficult to resist. There has been abundant criticism of the concept. If some college students are already tempted to let AI write their essays, imagine what would happen if that mindset is let loose on production software (or hardware) projects.

I wrote recently on Agentic/GenAI adoption in software engineering, though in that case still assuming disciplined usage. Vibe coding in that context would be a step too far, indifferent to safety and security, even casual about detailed functionality. Overall – a clever and entertaining idea, in the hands of an expert a quick way to play with prototype ideas. For anyone looking for a shortcut, just a faster way to generate garbage.

In the fast-evolving AI market the websphere is already refining the initial vibe coding proposition. These efforts aim to redirect focus following a public faceplant (unfortunately I can no longer find the link) by a self-admitted non-coding user who launched and announced a vibe-coded app. Said app was immediately bombarded by hackers. Natural selection at work. The (current) new direction presents vibe coding as an adjunct to more conventional LLM-assisted development, not bypassing careful planning and verification/safety/security/reliability testing. These directions still seem to be very webapp-centric, unsurprisingly given the prototyping/human-interface focus of vibe-coding. Even here disastrous mistakes are possible for inexperienced users.

I’m still not convinced. The weakness I see is more in ourselves than in LLMs. Will we always be aware when we are crossing from casual experimentation to disciplined product development? These blending approaches seem designed to make those boundaries more difficult to spot. There are already practical and disciplined ways to exploit AI in development, why not stick to those until we better understand our own limits?

Also Read:

DAC TechTalk – A Siemens and NVIDIA Perspective on Unlocking the Power of AI in EDA

Architecting Your Next SoC: Join the Live Discussion on Tradeoffs, IP, and Ecosystem Realities

cHBM for AI: Capabilities, Challenges, and Opportunities



Unlocking Efficiency and Performance with Simultaneous Multi-Threading
by Daniel Nenni on 08-05-2025 at 10:00 am

Akeana webinar on Simultaneous Multi Threading

An Akeana hosted webinar on Simultaneous Multi-Threading (SMT) provided a comprehensive deep dive into the technical, commercial, and strategic significance of SMT in the evolving compute landscape. Presented by Graham Wilson and Itai Yarom, the session was not only an informative overview of SMT architecture and use cases, but also a strong endorsement of Akeana’s unique position as a leader in high performance RISC-V-based SMT-capable IP cores.

Akeana is a venture-funded RISC-V startup founded in early 2021 by industry leaders. Its semiconductor IP offerings include low-end microcontroller cores, mid-range embedded cores, and high-end laptop/server cores, along with coherent and non-coherent interconnects and accelerators.

Akeana frames SMT as a solution born out of necessity—driven by increasing compute density demands in edge AI, data center inference, automotive processing, and more. As heterogeneous SoC architectures become the standard, efficient compute resource management is essential. SMT addresses this by allowing multiple independent threads to run concurrently on the same core, enabling higher resource utilization and reducing idle cycles, particularly when latency or memory fetch bottlenecks arise.

The presenters made a compelling case for SMT’s relevance beyond its traditional niche in networking. They emphasized how companies like NVIDIA and Tesla are now openly embracing SMT in their in-house SoCs for AI workloads, citing improvements in performance-per-watt and latency management. This shift signals broader industry validation for SMT, especially as AI systems grow more complex and thread-aware execution becomes essential. Historically SMT has been available on x86 processors but not those from ARM, as general-purpose compute instances in the cloud seek to optimize for sellable cores per Watt. As the infrastructure landscape shifts towards accelerated computing that requires heterogeneous compute elements, SMT is now back in demand as a valuable capability.

A highlight of the webinar was Akeana’s multi-tiered product lineup: the 100 series (32-bit embedded), the 1000 series (consumer/automotive performance), and the 5000 series (high-performance out-of-order designs), all offering SMT as a configuration option. Notably, their SMT implementation supports up to four threads per core, across both in-order and out-of-order microarchitectures. This flexibility is crucial for customers balancing power, area, and throughput requirements across diverse markets.

Graham and Itai reinforced that SMT is more than just a performance booster—it is a key enabler of system-level efficiency. In multi-threaded SoC configurations, SMT allows a CPU to manage not only main application workloads but also real-time housekeeping tasks such as system management, interrupt handling, and accelerator coordination. The example of networking applications combining USB and Ethernet threads illustrated how SMT reduces the need for separate CPUs, lowering BOM and energy use.

Akeana’s team also emphasized how SMT contributes to safety and redundancy. In automotive contexts, running dual software instances on separate threads allows for fault detection and safe-state recovery. Similarly, in AI training clusters, redundancy via SMT enhances system resilience without duplicating silicon.

From a technical perspective, the discussion on out-of-order vs. in-order SMT implementations was informative. Itai clarified that out-of-order SMT enables further instruction-level parallelism by optimizing across threads, while in-order SMT is more deterministic and lightweight—making it well-suited for real-time embedded applications.

Another key insight was the secure threading implementation Akeana provides. Features such as isolated register files, secure context switching, and telemetry support indicate a mature approach to protecting multi-threaded workloads—a necessity in safety-critical and edge environments.

Performance benchmarks presented by Akeana were particularly impressive. A 20–30% uplift in SPEC scores using SMT (even in out-of-the-box configurations) and over 2x performance boosts in data-movement-intensive tasks underscore SMT’s real-world benefits, and this for cores with good single-thread performance to begin with.
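
Taking the quoted range at face value, a quick sanity check shows what that uplift buys relative to simply adding cores (illustrative arithmetic only; the only figures taken from the webinar are the 20–30% numbers):

```python
# Illustrative arithmetic using the 20-30% SMT uplift range quoted in the webinar.
single_thread = 1.00                           # normalized throughput of one core, one thread
for uplift in (0.20, 0.30):
    smt_core = single_thread * (1 + uplift)    # same core running extra threads via SMT
    two_cores = 2 * single_thread              # the alternative: double the core count
    print(f"{uplift:.0%} uplift -> {smt_core:.2f}x throughput from one SMT core "
          f"vs {two_cores:.1f}x from two cores at roughly twice the silicon")
```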

It is clear from the webinar content that SMT is not just about higher CPU compute performance; it also enables more efficient data movement and connectivity, which is very important for advanced AI SoCs that use a range of heterogeneous cores and hardware accelerators. SMT is therefore a solution that must be considered when orchestrating the interplay between compute and data movement from memory or other IO ports.

The Q&A section highlighted growing market interest: over half of Akeana’s customers now request SMT, particularly in automotive, edge inference, and data center workloads. Moreover, the firm currently stands alone among independent RISC-V IP vendors in offering configurable SMT as licensable soft IP—underscoring its first-mover advantage.

Bottom line: the webinar succeeded not only in demystifying SMT, but also in positioning Akeana as a pioneering force in bringing high-performance, secure, and scalable multi-threading capabilities to the RISC-V ecosystem. As compute demands continue to intensify, SMT will likely evolve from a niche capability into a foundational feature—and Akeana appears well-positioned to lead that transition.

You can see a replay of the webinar here.

Also Read:

CAST Webinar About Supercharging Your Systems with Lossless Data Compression IPs

cHBM for AI: Capabilities, Challenges, and Opportunities

Memory Innovation at the Edge: Power Efficiency Meets Green Manufacturing



UCIe 3.0: Doubling Bandwidth and Deepening Manageability for the Chiplet Era
by Daniel Nenni on 08-05-2025 at 10:00 am

Chiplet SemiWiki UCIe

The Universal Chiplet Interconnect Express (UCIe) 3.0 specification marks a decisive step in the industry’s shift from monolithic SoCs to modular, multi-die systems. Released on August 5, 2025, the new standard doubles peak link speed from 32 GT/s in UCIe 2.0 to 48 and 64 GT/s while adding a suite of manageability and efficiency features designed to make chiplet-based systems faster to initialize, more resilient in operation, and easier to scale. The focus is clear: deliver more bandwidth density at lower power to feed AI and other data-hungry workloads without sacrificing interoperability.
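
For a rough sense of scale, the sketch below assumes the 64-lane advanced-package module width defined in earlier UCIe revisions and ignores protocol overhead (my own back-of-the-envelope figures, not numbers from the specification text):

```python
# Rough raw bandwidth per UCIe module at the old and new peak data rates.
# Assumes the x64 advanced-package module width from earlier UCIe revisions; ignores overhead.
lanes = 64
for gt_per_s in (32, 64):                       # UCIe 2.0 peak vs. UCIe 3.0 peak
    gb_per_lane = gt_per_s / 8                  # one transfer carries one bit per lane
    module_bw = lanes * gb_per_lane             # GB/s per direction per module
    print(f"{gt_per_s} GT/s -> ~{module_bw:.0f} GB/s per direction for an x64 module")
```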

What distinguishes UCIe 3.0 is the way raw performance is paired with lifecycle controls. On the physical layer and link management fronts, the spec introduces runtime recalibration so links can retune on the fly by reusing initialization states, cutting the penalty traditionally paid when conditions change. An extended sideband channel—now reaching up to 100 mm—expands how far control signals can travel across a package, enabling more flexible topologies for large, heterogeneous SiPs. Together, these advances let designers push higher speeds while keeping signaling robust across bigger systems.

Equally important, UCIe 3.0 codifies system-level behaviors that vendors used to implement idiosyncratically. Early firmware download is standardized via the Management Transport Protocol (MTP), trimming bring-up time and helping fleets boot predictably at scale. Priority sideband packets create deterministic, low-latency paths for time-critical events, while fast-throttle and emergency-shutdown mechanisms provide a common vocabulary for system-wide safety responses over open-drain I/O. These controls, offered as optional features, give implementers flexibility to adopt only what they need without burdening designs with unnecessary logic—an approach that should widen adoption across market tiers.

The standard also broadens applicability with support for continuous-transmission protocols via Raw-Mode mappings, opening doors for uninterrupted data flows between chiplets—for example, SoC-to-DSP streaming paths in communications and signal-processing systems. That versatility sits alongside a commitment to ecosystem continuity: UCIe 3.0 is fully backward compatible with earlier versions, preserving investment in 1.0/2.0 IP while providing a clean migration path to higher speeds and richer control planes.

Industry reaction underscores both the urgency of the problem and confidence in the solution. Companies across compute, EDA, analog, and optical interconnects are aligning roadmaps to 64 GT/s links, citing needs that range from GPU-class bandwidth for AI scale-up to rack-scale disaggregation. Proponents highlight manageability gains—faster initialization, deterministic control traffic, and standardized safety mechanisms—as the glue that will keep complex multi-die systems serviceable in production. Others see UCIe 3.0 as a foundation for co-packaged optics and vertically integrated 3D chiplets, where power-efficient, high-density edge bandwidth is paramount. This breadth of support—from hyperscalers and CPU/GPU leaders to IP providers and verification vendors—signals that UCIe has moved beyond experimentation into platform status.

“AI compute and switch silicon’s non-stop appetite for increased bandwidth density has resulted in Hyperscalers deploying scalable systems by leveraging chiplets built on UCIe for high-performance, low-latency and low-power electrical as well as optical connectivity. UCIe-enabled chiplets are delivering tailored solutions that are highly scalable, extendable, and accelerating the time to market.

The UCIe Consortium’s work on the 3.0 specification marks a significant leap in bandwidth density – laying the foundation for next-generation technologies like Co-Packaged Optics (CPO) and vertically integrated chiplets using UCIe-3D, which demand ultra-high throughput and tighter integration. Alphawave Semi is dedicated to working closely with the Consortium and to enhancing its UCIe IP and chiplets portfolio, as we continue to collaborate closely with customers and ecosystem partners to enable chiplet connectivity solutions that are ready for the optical I/O era” – Mohit Gupta, Executive Vice President and General Manager, Alphawave Semi

Governance and outreach also matter. The UCIe Consortium is stewarded by leading companies across semiconductors and cloud, representing a membership that now spans well over a hundred organizations. With 3.0, the group continues to balance performance targets with practical deployment aids—documentation, public availability of the spec by request, and participation in industry venues—helping engineering teams translate the text into interoperable silicon. The 3.0 rollout ties into conference programming and resource hubs so developers can track changes and ramp adoption alongside their product cadences.

In strategic terms, UCIe 3.0 addresses three pressures simultaneously. First, it restores a scaling vector—bandwidth density per edge length—at a time when transistor-level gains are harder to translate into system throughput. Second, it treats operational manageability as a first-class design axis, acknowledging that the real bottleneck in multi-die systems is often not peak speed but predictable behavior across temperature, voltage, and workload transients. Third, it reinforces an open, backward-compatible ecosystem so buyers can mix chiplets from multiple vendors without blowing up integration costs. That combination—performance, control, and continuity—makes UCIe 3.0 less a point release than a maturation of the chiplet paradigm.

As AI, networking, and edge systems push past monolithic limits, the winners will be those who can assemble specialized dies into coherent, serviceable products. UCIe 3.0 gives the industry a common playbook for doing exactly that—faster links, smarter control, and wider design latitude—turning chiplets from a promising architecture into an operationally reliable one.

You can read more here.

Also Read:

UCIe Wiki

Chiplet Wiki

3D IC Wiki



DAC TechTalk – A Siemens and NVIDIA Perspective on Unlocking the Power of AI in EDA
by Mike Gianfagna on 08-05-2025 at 6:00 am


AI was everywhere at DAC. Presentations, panel discussions, research papers and poster sessions all had a strong dose of AI. At the DAC Pavilion on Monday, two heavyweights in the industry, Siemens and NVIDIA, took the stage to discuss AI for design, both present and future. What made this event stand out for me was the substantial dose of real results and grounded analysis of where it is all going. Both speakers did a great job, with lots of real data. Let’s look at what was presented in the DAC TechTalk – A Siemens and NVIDIA perspective on unlocking the power of AI in EDA.

The Presenters

Amit Gupta

There were two excellent presenters at this event.

Amit Gupta is a technology executive and serial entrepreneur with over two decades of leadership in semiconductor design automation and AI innovation. At Siemens EDA, Amit leads the Custom IC division and spearheads AI initiatives across the organization. I attended several events where Amit was participating at DAC. He has an easy-going style that is deeply grounded in real-world experience. He’s a pleasure to listen to and his comments are hard to ignore.

 

Dr. John Linford

Dr. John Linford leads NVIDIA’s CAE/EDA product team. John’s experience spans high-performance physical simulation, extreme-scale software optimization, software performance analysis and projection, and pre-silicon simulation and design. Before NVIDIA, John worked at Arm Ltd. where he helped develop the Arm software ecosystem for cloud and HPC. John brought specific, real examples of how NVIDIA is fueling the AI revolution for Siemens and others.

The Siemens Perspective

Amit kicked off the presentation by taking a rather famous quote and updating it with NVIDIA’s perspective, as shown in the image at the top of this post. It was appropriate for NVIDIA to weigh in on this topic as they are enabling a lot of the changes we see around us. More on that in a moment.

Amit began by observing the incredible growth the semiconductor market has experienced, due in large part to the demands of AI. The semiconductor market is projected to hit one trillion dollars by 2030. Amit pointed out that the growth from about one hundred billion to one trillion dollars took several decades, but the growth from one trillion to two trillion dollars is projected to take about 10 years. The acceleration is accelerating if you will.

This fast growth brings to mind another eye-catching statistic. It took several decades, starting in the late 1940s, to sell the first one million TV sets. After the launch of the iPhone in 2007, it took Apple just 74 days to sell one million of them. The adoption of new technology is clearly accelerating. Amit cited statistics that demonstrate how the industry is moving toward a software-defined, AI-powered, silicon-enabled environment of purpose-built silicon to drive the differentiation of AI workloads. This is the new system design paradigm and the new role of semiconductors. These are the tasks that Siemens EDA is focusing on, both for chips and PCBs.

Amit described the dramatic increase in design complexity, demands for decreased time to market and the lack of engineering talent to keep up. It is against this backdrop that AI holds great promise. He described the broad footprint of investment Siemens EDA is making across many AI disciplines, as shown in the figure below.

AI Use Cases

He explained that each discipline has a specific impact, with those on the left generally making tools more efficient with reduced run time and improved quality of results while those on the right make it easier and more efficient to interact with the tools using natural language to set up complex runs. This is where the lack of engineering expertise can be mitigated. Amit referred to a major announcement Siemens made at DAC describing its EDA System to enable advanced AI and UI capabilities across the entire product line. Details of the Siemens EDA AI announcement can be found on SemiWiki here.

Amit then illustrated the special needs of EDA AI. This is an area where he is particularly well versed, as his prior company, Solido, was using AI for over 20 years. It turns out there is a big difference between consumer AI and EDA AI. Amit ran some live examples to illustrate how consumer-grade AI will simply not work for chip design tasks. The figure below illustrates some of the ways these two tasks have very different requirements.

Consumer vs. EDA AI

This is the essence of why Siemens EDA AI is so effective. Amit explained that beyond the image and language processing delivered by consumer AI, there is a need to master elements such as technical documentation, script generation, creating test benches, and specific design formats such as Verilog and GDSII.  He described the Siemens EDA AI system as the “connective tissue” for chip and board design.

Amit described how the Siemens EDA AI System enables generative and agentic AI across the EDA workflow. This is an aggressive program that enables many parts of the Siemens EDA portfolio with new, state-of-the-art capabilities. The diagram below provides an overview of the system’s impact.

Siemens EDA AI System enables generative and agentic AI across the EDA workflow

Amit then introduced Dr. John Linford to describe how NVIDIA and Siemens are partnering at the hardware, AI model and microservices levels.

The NVIDIA Perspective

John began by explaining how NVIDIA builds supercomputing infrastructure that is co-designed with the software, networks, CPUs, and GPUs, all brought together into an integrated platform that is used by NVIDIA to design its own chips. This system is then sold either in parts or completely to others to enable a broad community to also develop advanced solutions. This recursive model represents some of the elements of NVIDIA’s success and informs how NVIDIA collaborates with Siemens to help create its EDA AI framework.

John described how NVIDIA enables innovation across many areas, both software and hardware. He began by describing the breadth of the CUDA-X libraries, which form a foundational element for NVIDIA. These libraries deliver domain-specific accelerated computing across approximately 450 areas. As examples, there are libraries focused on chip lithography, computational acceleration for tasks such as fluid dynamics and thermal/electrical analysis, and even interfaces to allow the creation of new accelerators for tasks such as physical analysis.

He went on to describe agentic AI workflows that help accelerate the designer’s creative work as well as quality control and yield management. John then touched on NVIDIA NeMo, a Python-based framework that allows the development of enterprise-grade agentic AI solutions, from data acquisition and curation, to training and deployment, to guardrail analysis. This is an example of some of the capabilities that Siemens can use to build its AI solutions. He also explained how NVIDIA inference microservices, or NIM, deliver the building blocks for deployment of complete AI solutions, either in the cloud or on premises.

John also described how NVIDIA accelerates physical simulation, replacing massive explicit analysis with inference to find solutions much faster. This application is a good match for digital twin development. These solutions don’t necessarily replace long, first-principles analysis, but the trained models and inference get you to a verified final answer much faster.

To Learn More

This presentation provided excellent detail behind the Siemens EDA AI announcement. This work promises to have a substantial impact on design quality and productivity for complex chip and PCB design. NVIDIA also appears to be well-positioned to provide proven, targeted hardware and software to help realize these innovations.

If complex, AI-fueled design is in your future, you need to take a close look at what Siemens is up to. You can read the full press release announcing the Siemens EDA AI work here.  And you can get a broader view of the Siemens EDA AI work here. And while you’re at it, check out this short but very informative video on how AI is impacting semiconductor design.

And that’s the DAC TechTalk – A Siemens and NVIDIA perspective on unlocking the power of AI in EDA.