
Chiplets and Cadence at #62DAC

by Daniel Payne on 08-12-2025 at 10:00 am


Using chiplets is an emerging trend that was well covered at #62DAC, which even had a dedicated Chiplet Pavilion, so I checked out the presentation from Dan Slocombe, Design Engineering Architect in the Compute Solutions Group at Cadence. In a short 20 minutes Dan managed to cover a lot of ground, so this blog summarizes the key points.

The need for IC design automation and chiplet automation is driven by the steady growth in the number of designs at 5nm, 3nm and smaller nodes, along with the goal of mixing and matching them as part of heterogeneous, package-level, multi-die systems. Overcoming manual design challenges requires automation for:

  • Preventing stale documentation
  • Reducing the number of human errors
  • Abstracting low-level implementation details
  • Preventing duplication of information sources
  • Handling the proliferation of IP blocks used
  • Automating the verification strategy to reduce the number of time-consuming verification cycles
  • Minimizing development times

At Cadence they are using an internal automation flow “SoC Cockpit” to meet these SoC challenges through:

  • Capturing system-level specifications
  • Using Cadence IP and partner IP libraries
  • Configuration of IP library components
  • Taking feedback from downstream flows
  • Automating verification with simulation, emulation and virtual platforms
  • Providing customer chiplet platforms and reference software

This approach aims to improve design efficiency from an executable spec through GDSII production. Here’s the basic correct-by-construction automation flow:

  • Specification: an executable spec
  • Construction: RTL with floor planning and database
  • Software framework: reference framework and drivers
  • Design collateral: testbenches, emulation and models
  • Physical design: RTL to GDS

Design intent captured includes many details:

  • Functional specification
  • Top-level pinout
  • I/O cell selection
  • Pin multiplexing
  • Clock tree definition
  • Reset tree definition
  • Voltage domain definitions
  • Power domain definitions
  • System maps
  • Infrastructure and IP definitions

The SoC Cockpit flowchart encompasses multiple file formats and transformations.

Zooming into the SoC Builder, SoCGen starts with an executable spec, creates intermediate files, then passes the data to IPGen to create instances of IP. AI agents can be used to select which IP blocks meet the specifications. There are both built-in generators for Verilog RTL and plugin generators that work with a variety of formats: IP-XACT, SDC/UPF, virtual platforms, testbenches, and C/C++ header files.
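
To make the spec-driven generation idea concrete, here is a hypothetical miniature of the SoCGen-to-IPGen handoff (the spec fields, module names, and the `generate_top` function are all invented for illustration and are not Cadence's actual format): a single executable spec drives generation of a top-level Verilog stub, so the RTL can never drift from the specification.

```python
# Hypothetical miniature of an executable spec driving RTL generation.
# All names below are invented for illustration.

SPEC = {
    "top": "demo_soc",
    "ip": [
        {"name": "cpu0",  "module": "tensilica_core"},
        {"name": "ddr0",  "module": "ddr_ctrl"},
        {"name": "ucie0", "module": "ucie_phy"},
    ],
}

def generate_top(spec):
    """Emit a Verilog top-level that instantiates each IP in the spec.

    Because the Verilog is derived from the spec, adding an IP block to
    the spec automatically updates the generated RTL -- the essence of
    correct-by-construction.
    """
    insts = "\n".join(
        f"  {ip['module']} {ip['name']} ();" for ip in spec["ip"]
    )
    return f"module {spec['top']};\n{insts}\nendmodule\n"
```

Calling `generate_top(SPEC)` returns a `demo_soc` module containing one instance line per IP entry; a real generator would of course also emit ports, clocks, resets, and interconnect from the captured design intent.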

Users work with a front-end GUI that guides them through the specification capture process, enforcing the correct-by-construction approach. Cadence has been able to create this automated flow by harnessing its own tools and IP:

Front End
  • Conformal Technologies
  • Jasper Apps
  • Joules RTL Solutions
  • Cadence Modus DFT Software Solution

Back End
  • Genus Synthesis Solution
  • Innovus Implementation System
  • Cadence Cerebrus Intelligent Chip Explorer

Verification
  • Palladium Emulation
  • Protium FPGA-Based Prototyping Platform
  • Helium Virtual and Hybrid Studio
  • Xcelium Logic Simulator
  • Verisium AI-Driven Verification Platform
  • Perspec System Verifier

IP
  • Tensilica Processors
  • System IP, Partner IP
  • Interface IP – UCIe, PCIe
  • Memory – DDR, LPDDR, Flash

Summary

Cadence has adopted industry standards, including UCIe, Arm’s Chiplet System Architecture (CSA), and AMBA C2C protocols, to ensure systems are built on known standards. With the SoC Cockpit, the new automation features bring together architecture, design and implementation tasks, resulting in faster availability of correct-by-construction designs. This automation reduces time to market and engineering effort.

Partner with Cadence to help you realize your chiplet and SoC ambitions.

Related Blogs


What XiangShan Got Right—And What It Didn’t Dare Try

by Jonah McLeod on 08-12-2025 at 6:00 am


An Open ISA, a Closed Mindset — Predictive Execution Charts a New Path

The RISC-V revolution was never just about open instruction sets. It was a rare opportunity to break free from the legacy assumptions embedded in every generation of CPU design. For decades, architectural decisions have been constrained by proprietary patents, locked toolchains, and a culture of cautious iteration. RISC-V, born at UC Berkeley, promised a clean-slate foundation: modular, extensible, and unencumbered. A fertile ground where bold new paradigms could thrive.

XiangShan, perhaps the most ambitious open-source RISC-V project to date, delivers impressively on that vision—at least at first glance. Developed by the Institute of Computing Technology (ICT) under the Chinese Academy of Sciences, XiangShan aggressively targets high performance. Its dual-core roadmap (Nanhu and Kunminghu) spans mobile and server-class performance brackets. By integrating AI-focused vector enhancements (e.g., dot-product accelerators), high clock speeds, and deep pipelines, XiangShan has established itself as the most competitive open-source RISC-V core in both versatility and throughput.

But XiangShan achieves this by doubling down on conventional wisdom. It fully embraces speculative, out-of-order microarchitecture—fetching, predicting, and reordering dynamically to maintain high instruction throughput. Rather than forging a new execution model, it meticulously refines well-known techniques familiar from x86 and ARM. Its design decisions reflect performance pragmatism: deliver ARM-class speed using proven playbooks, made interoperable with an open RISC-V framework.

What truly sets XiangShan apart is not its microarchitecture but its tooling. Built in Chisel, a hardware construction language embedded in Scala, XiangShan prioritizes modularity and rapid iteration. Its open-source development model includes integrated simulators, verification flows, testbenches, and performance monitoring. This makes XiangShan not just a core design, but a scalable research platform. The community can reproduce, modify, and build upon each generation—from Nanhu (targeting Cortex-A76 class) to Kunminghu (approaching Neoverse-class capability).

In this sense, XiangShan is a triumph of open hardware collaboration. But it also highlights a deeper inertia in architecture itself.

Speculative execution has dominated CPU design for decades. From Intel and AMD to ARM, Apple, IBM, and NVIDIA, the industry has invested heavily in branch prediction, out-of-order execution, rollback mechanisms, and speculative loads. Speculation once served as the fuel for ever-increasing IPC (Instructions Per Cycle). But it now carries mounting costs: energy waste, security vulnerabilities (Spectre, Meltdown, PACMAN), and ballooning verification complexity.

Since 2018, when Spectre and Meltdown exposed the architectural liabilities of speculative logic, vendors have shifted focus. Patents today emphasize speculative containment rather than acceleration. Techniques like ghost loads, delay-on-miss, and secure predictors aim to obscure speculative side effects rather than boost performance. What was once a tool of speed has become a liability to mitigate. This shift marks a broader digression in CPU innovation—from maximizing performance to patching vulnerabilities.

Most recent patents and innovations now prioritize security mitigation over performance enhancement. While some performance-oriented developments still surface, particularly in cloud and distributed systems, the dominant trend has become defensive. Designs increasingly rely on rollback and verification mechanisms as safeguards. The speculative execution model, once synonymous with speed and efficiency, has been recalibrated into a mechanism of trust and containment.

This is why XiangShan’s adherence to speculation represents a fork in the road. RISC-V’s openness gave the team a chance to rethink not just the ISA, but the core execution model. What if they had walked away from speculation entirely?

Unlike dataflow machines (Groq, Tenstorrent) or the failed promise of VLIW (e.g., Itanium and its successors in niche DSP or embedded markets), Simplex Micro’s predictive execution model breaks from speculative architecture—but with a crucial difference: it aims to preserve general-purpose programmability. Dataflow and VLIW each delivered valuable lessons in deterministic scheduling but struggled to generalize beyond narrow use cases. Each became a developmental cul-de-sac—offering point solutions rather than a unifying compute model.

Simplex’s family of foundational patents eliminates speculative execution entirely. Dr. Thang Tran—whose earlier vector processor was designed into Meta’s original MTIA chip—has patented a suite of techniques centered on time-based dispatch, latency prediction, and deterministic replay. These innovations coordinate instruction execution with precision by forecasting readiness using cycle counters and hardware scoreboards. Rather than relying on a program counter and branch prediction, this architecture replaces both with deterministic, cycle-accurate scheduling—eliminating speculative hazards at the root.
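
To illustrate the general idea of time-based dispatch (this is a minimal conceptual sketch, not Simplex Micro's patented design), a scoreboard can record the cycle at which each register's value becomes ready, and every instruction can then be assigned a deterministic issue cycle from known latencies, with no prediction and no rollback:

```python
# Conceptual sketch of time-based, scoreboard-driven dispatch.
# Latencies and the scheduling policy are assumptions for illustration.

LATENCY = {"add": 1, "mul": 3, "load": 4}  # assumed per-op cycle latencies

def schedule(program):
    """Assign each instruction a deterministic issue cycle.

    Each instruction is (op, dest_reg, src_regs). An instruction issues
    on the first cycle all of its source registers are known to be
    ready, and its destination becomes ready LATENCY[op] cycles later.
    The resulting timeline is cycle-accurate and fully predictable.
    """
    ready = {}          # scoreboard: register -> cycle its value is ready
    issue_cycle = 0     # in-order issue pointer, one issue per cycle
    timeline = []
    for op, dest, srcs in program:
        start = max([issue_cycle] + [ready.get(r, 0) for r in srcs])
        ready[dest] = start + LATENCY[op]
        issue_cycle = start + 1
        timeline.append((op, dest, start))
    return timeline
```

For the sequence load r1; add r2, r1; mul r3, r2, the scheduler stalls the add until the load's known four-cycle latency elapses, then issues the mul the cycle after, so the whole schedule is fixed before execution begins; in hardware, that determinism is what removes the need for misprediction recovery.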

Developers can still write in C or Rust, compiling code through standard RISC-V toolchains with a modified backend scheduler. The complexity shifts to compilation, not programming. This preserves software portability while achieving hardware-level predictability.

XiangShan has proven what open-source hardware can achieve within the boundaries of established paradigms. Simplex Micro challenges us to redraw those boundaries. If the RISC-V movement is to fulfill its original promise—not just to open the ISA, but to reimagine what a CPU can be—then we must explore roads not taken.

And Predictive Execution may be the most compelling of them all: the fast lane no one has yet dared to take.


The Critical Role of Pre-Silicon Security Verification with Secure-IC’s Laboryzr™ Platform

by Kalar Rajendiran on 08-11-2025 at 10:00 am


As embedded systems and System-on-Chip (SoC) designs grow in complexity and integration, the risk of physical attacks has dramatically increased. Modern day adversaries no longer rely solely on software vulnerabilities; instead, they exploit the physical properties of silicon to gain access to sensitive data. Side-channel attacks (SCA) and fault injection attacks (FIA) have emerged as some of the most potent threats, targeting the physical behavior of chips through power analysis, timing discrepancies, or induced faults. While cryptographic algorithms remain mathematically sound, their hardware implementations often betray subtle leakages that attackers can exploit.

To confront these risks proactively, Secure-IC has developed Laboryzr™, a pre-silicon security verification platform that enables hardware and software teams to simulate real-world threats and validate countermeasures during design—long before tape-out.

Why Pre-Silicon Security Matters

The financial and operational impact of discovering a security flaw post-silicon is enormous. Fixes at this stage involve redesign, re-fabrication, and potentially even product recalls. In contrast, pre-silicon verification allows vulnerabilities to be detected and resolved when the cost of change is still low. For industries such as automotive, defense, medical devices, and critical infrastructure, early detection is not only practical—it’s imperative.

Through pre-silicon security verification, organizations can align more easily with demanding security certifications like FIPS 140-3, ISO/IEC 19790, and Common Criteria. Just as importantly, they can ensure that devices are robust against real-world threats like differential power analysis or electromagnetic glitching.

Introducing Laboryzr™: A Platform for Security Sign-Off

Laboryzr™ is Secure-IC’s comprehensive platform for pre-silicon security verification. With Laboryzr, teams can measure and validate the effectiveness of security countermeasures before tape-out, transforming security sign-off from a concept into a measurable reality.

One of Laboryzr’s most powerful attributes is its ability to provide traceability from specification to silicon. By linking threat models directly to RTL and attack simulations, it ensures that security coverage is both complete and verifiable. Laboryzr™ integrates with industry EDA tools used across the SoC design flow, enabling it to catch vulnerabilities early and help reduce the need for costly post-silicon fixes.

Laboryzr’s Pre-Silicon Verification Components

Virtualyzr™ focuses on the hardware layer. It simulates and emulates side-channel and fault injection attacks at various abstraction levels—from RTL to post-synthesis—leveraging existing EDA workflows. Through the use of Value Change Dump (VCD) files, it reconstructs signal activities that mimic power or electromagnetic emissions, enabling leakage detection and exploitation analysis. It also supports fault injection modeling, including clock glitches, electromagnetic interference, and laser attacks. Originally limited to analyzing small IP blocks like AES cores, Virtualyzr™ has evolved to support full-chip and chiplet-scale analysis through advanced parallelization and optimization.

Catalyzr™ addresses the software layer, where it analyzes source code and binaries to detect vulnerabilities such as timing side channels, cache-based leakages, and improper cryptographic API usage. It performs both static and dynamic analysis to evaluate masking countermeasures, cryptographic integration, and execution behavior. With over seven years of field use, Catalyzr™ has matured into a key component of pre-silicon software security assessments.
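
A classic example of the kind of timing side channel such software-layer analysis hunts for (a generic illustration, not Catalyzr's actual checks) is an early-exit secret comparison, whose running time reveals how many leading bytes of a guess are correct; the constant-time variant removes that leak.

```python
# Generic timing side-channel example: early-exit vs constant-time compare.

def leaky_compare(secret, guess):
    # Returns on the first mismatch, so running time depends on how much
    # of the guess is correct -- an attacker can recover the secret
    # byte-by-byte by measuring response times.
    for s, g in zip(secret, guess):
        if s != g:
            return False
    return len(secret) == len(guess)

def constant_time_compare(secret, guess):
    # Examines every byte regardless of mismatches, accumulating the
    # differences with XOR/OR, so execution time is data-independent.
    if len(secret) != len(guess):
        return False
    diff = 0
    for s, g in zip(secret, guess):
        diff |= s ^ g
    return diff == 0
```

Both functions return the same results; only their timing behavior differs, which is exactly why dynamic analysis of execution behavior, and not just functional testing, is needed to catch the leaky version.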

Designed for the Modern SoC Design Flow

Laboryzr™ has been under development for more than a decade, evolving through constant customer feedback. One of the earliest challenges faced by Secure-IC was how to create a user interface that seamlessly fit into the existing SoC design flow. Originally offering only a graphical interface, Laboryzr later added a command line interface (CLI) to support CI/CD workflows and accommodate power users seeking integration into automated verification environments.

As customer demands shifted toward larger and more complex designs—including SoCs and chiplets—Laboryzr™ underwent fundamental architecture changes. Secure-IC optimized the platform for speed and scalability, enabling high-throughput simulations that could handle full-chip assessments. These improvements, along with robust support for Place and Route (PR) phases, positioned Laboryzr™ as a go-to solution for teams that require both depth and breadth in their security analysis.

Built for What’s Next: PQC, Chiplets, and Beyond

Secure-IC continues to future-proof Laboryzr™ by expanding support for post-quantum cryptography (PQC) and emerging chiplet-based architectures. The platform is being extended to validate PQC algorithm implementations and to analyze interactions between chiplets, especially as heterogeneous integration becomes more common in next-generation SoC design.

Secure-IC’s upcoming acquisition by Cadence also positions Laboryzr™ for even deeper integration into mainstream EDA workflows. With Cadence as an internal customer, Laboryzr™ will gain access to more complete design environments, allowing further validation of its capabilities on complex, multi-chip systems.

Market Context and Differentiation

Unlike solutions focused on software security, information-flow analysis, or post-silicon security verification, Secure-IC has long focused exclusively on physical attack emulation at the pre-silicon stage. Laboryzr’s tight integration with EDA flows, real-time emulation capability, and multi-layered approach make it uniquely positioned to address the needs of design teams working from RTL to place and route.

Summary

As hardware security threats continue to evolve, the need for comprehensive, early-stage verification is greater than ever. Security must be engineered with the same rigor and traceability as functional requirements. Secure-IC’s Laboryzr™ platform represents a significant advancement in how security is implemented, validated, and signed off in the silicon lifecycle. It empowers chip developers to simulate threats, validate defenses, and certify hardware security—before silicon is produced.

By enabling early detection of physical vulnerabilities, linking threat models to design data, and providing automation-ready interfaces for hardware and software teams, Laboryzr™ delivers a true shift-left security solution. Its continued development in areas like PQC and chiplet support ensures that it remains at the cutting edge of security verification.

To learn more, you can visit the following pages:

Laboryzr brochure page

Laboryzr product page

Catalyzr product page

Virtualyzr product page


Should Intel be Split in Half?

by Daniel Nenni on 08-11-2025 at 6:00 am

Intel Should Not Be Split!

A recent commentary from four former Intel board members argues that Intel should be split into two separate companies with separate CEOs and separate boards of directors. Charlene Barshefsky, Reed Hundt, James Plummer, and David Yoffie wrote that Intel shareholders should insist on a split, which would create a new, independent manufacturing entity (foundry) with its own CEO and board and would position Intel Foundry as an alternative to TSMC. This is what I call the NOT TSMC market: companies that want 2nd and 3rd source manufacturing to keep competition alive and well in semiconductor manufacturing. This is a very good thing as we all know.

The semiconductor industry had a thriving NOT TSMC market down to 28nm. At 28nm customers could tape out to TSMC then take the design files (GDSII) to SMIC, UMC, Chartered, or Samsung for competitive manufacturing. Qualcomm, for example, routinely used multiple foundries for a given design. At 14nm we switched to FinFETs and customers no longer had the ability to multisource manufacturing due to technical differences between FinFET processes, so chip designers had to choose one foundry for a given design. The other problem with FinFETs is that they were very difficult to manufacture, so we lost GlobalFoundries, UMC and SMIC as alternatives. Even more daunting, Samsung Foundry started having yield problems at 10nm, which continue down to 3/2nm with the new GAA devices.

As a result TSMC has 90%+ market share at 3nm FinFET and will again dominate at 2nm, which is GAA. Clearly this is well deserved, as TSMC has executed in a fashion no other foundry, or semiconductor company for that matter, ever has before, absolutely.

The four people mentioned above did serve on the Intel board. Charlene Barshefsky served for 14 years (2004 to 2018) and is 75 years old. Reed Hundt served for 19 years (2001 to 2020) and is 77 years old. Jim Plummer served 12 years (2005 to 2017) and is 76 years old. David Yoffie served for 28 years (November 1989 until May 2018) and is 71 years old. I certainly respect their service but they come from a different world than what we are dealing with today.

Now let me offer you my opinion on what Intel should do. This comes from a semiconductor professional working in the trenches for the past 40 years. I do not believe Intel should be split. Intel Design needs to be closely integrated with manufacturing. This collaborative recipe has succeeded in the past and can succeed in the future under Lip-Bu Tan.

You can use the AMD split as an example. The design side of AMD is wildly successful while the manufacturing side (GlobalFoundries) has stagnated. What saved AMD is the close relationship it has with TSMC (manufacturing). In fact, I would argue that the relationship between AMD and TSMC is even closer than the one TSMC has with Apple, its top customer. The other close customer relationship TSMC has is with Nvidia, another big Intel competitor.

Unfortunately, Intel will not have this close of a relationship with TSMC anytime soon, even if they split the company. We can argue this in the comment section if you would like but let me tell you it will not happen. Those days have passed. Can Intel effectively compete with AMD and NVIDIA without having a super close relationship with manufacturing? No, they cannot.

The other thing you must know is that TSMC would not be in the dominant position they are in today without close customer collaboration. Intel Foundry needs Intel Design for that collaboration in addition to other customers that are willing to step up and vote for Intel Foundry to be successful.

The other question that needs to be considered: Can the United States stay competitive in the world without homegrown leading edge semiconductor manufacturing?

No, we cannot. We can argue this as well but let me tell you it will not happen and the security of our nation is at risk.

Should it be left up to the Intel Shareholders to decide? Of course it should. The current and former Intel board members got Intel to where they are today so I would definitely not leave it up to them.

Bottom line: I am not currently an Intel shareholder but I have been in the past. If I were a shareholder I would vote to keep Intel whole while lobbying the government and top US fabless semiconductor companies to invest in Intel and make sure the United States maintains our technology leadership and stays secure.

“POTUS and DoC can set the stage, the customers can make the necessary investments, the Intel Board can finally do something positive for the company, and we stop writing opinion pieces on the topic.” Craig Barrett, former CEO of Intel, 8-10-2025.

Also Read:

Making Intel Great Again!

Why I Think Intel 3.0 Will Succeed

Intel Foundry is a Low Risk Alternative to TSMC


CEO Interview with Bob Fung of Owens Design

by Daniel Nenni on 08-10-2025 at 10:00 am


Bob Fung is the CEO of Owens Design, a Silicon Valley company specializing in the design and build of complex equipment that powers high-tech manufacturing. Over his 22-year tenure, Bob has led the development of more than 200 custom systems for world-class companies across the semiconductor, biomedical, energy, and emerging tech sectors, solving their most demanding equipment challenges. Under his leadership, Owens achieved a 10x revenue increase while maintaining a 100% delivery record, reflecting its engineering excellence and unwavering customer commitment.

Tell us about your company

At Owens Design, our story began in 1983 with a bold vision: to build a company where world-class engineers and technicians collaborate seamlessly to design and manufacture the next generation of high-tech equipment. Today, that vision is realized through our legacy of delivering over 3,000 custom tools across various industries, including semiconductors, renewable energy, medical devices, hard disk drives, and emerging technologies. We are proud to maintain a 100% on-time delivery record, a testament to our culture of precision, partnership, and performance.

What sets Owens Design apart is our deep understanding of complex equipment engineering and our ability to design production-ready prototypes and rapidly scale manufacturing to meet our customers’ growth. Our customers don’t just come to us to design equipment; they come to us to co-develop future-proof solutions. We offer turnkey services that span custom design, precision engineering, prototyping, pilot builds, and scalable manufacturing. This integrated approach minimizes risk, compresses development cycles, and enables rapid ramp-up for production in fast-evolving markets.

What problems are you solving?

At Owens Design, we help high-tech innovators turn intellectual property (IP) into fab-ready systems and enable new processes with production-capable equipment that scales. We focus on sectors that require precision, speed, and scalability, where standard solutions often don’t suffice.

Across the semiconductor and electronics manufacturing sectors, the increasing complexity of products is reshaping equipment requirements. Advanced technologies such as chiplet integration, 3D packaging, and heterogeneous system design demand highly customized tools that can meet exact standards for precision, reliability, and integration capability.

At the same time, companies face compressed development timelines and pressure to bring new solutions to market faster. Owens Design addresses these needs by engineering application-specific equipment that enables breakthrough innovations, which are tightly aligned with each customer’s performance and production goals.

As the industry shifts toward more regionalized manufacturing and supply chain resilience, companies are reevaluating their approach to equipment strategy. There is a growing need for agile, scalable platforms that can adapt to rapidly changing product roadmaps and evolving production environments. We support that transition by working closely with our clients’ R&D and operations teams, acting as an extension of their organization to deliver complex tools on accelerated timelines while maintaining the engineering rigor that’s core to their success.

What are your strongest application areas?

Being rooted in Silicon Valley, we’ve grown alongside the semiconductor industry and built deep expertise in the design of complex equipment. Many of our customers are semiconductor OEMs, ranging from early-stage startups to Tier-1 equipment manufacturers, who are looking to bring advanced, high-precision systems to market quickly. They turn to us not just because we understand the technical demands of semiconductor tools but because we consistently deliver on tight timelines with the engineering depth, domain knowledge and execution reliability they need.

As Silicon Valley has evolved into a hub for a broader range of advanced technologies, Owens Design has evolved with it. Today, we apply the same level of rigor and creativity to automation challenges in renewable energy, data storage, medical devices, and emerging tech. These projects often involve highly specialized needs, such as advanced laser processing, precision robotics, or the automated handling of fragile materials. In many cases, standard solutions don’t meet the requirements, and that’s where our broad technical experience becomes essential.

What connects all of our work is the ability to take on complex programs while moving quickly and maintaining high quality. Our development process is designed to compress timelines and give customers confidence from concept through production. For over 40 years, we’ve maintained a 100% delivery record by focusing on areas where we can deliver exceptional results and by committing to meet our customers’ needs. That combination of discipline and engineering versatility is what continues to set Owens Design apart.

What keeps your customers up at night?

For semiconductor OEMs, whether they’re early-stage startups or established Tier-1 OEMs, the pressure to move fast while getting it right the first time is intense. They’re trying to bring highly complex systems to market on aggressive timelines, and every delay or design misstep can have real commercial consequences. What we hear most often is concern about bridging the gap between a promising concept and a reliable, production-ready tool, especially when resources are limited, and there is no room for second chances.

These teams aren’t just looking for a contract manufacturer; they’re seeking a partner who thoroughly understands semiconductor equipment. Someone who can engage early, ask the right questions and design a system that meets both performance specs and production realities. Reputation matters in this space. If you’re a startup, getting into the fab with a tool that doesn’t perform can be a deal breaker. And if you’re a Tier-1 company, quality and consistency are non-negotiable across your entire roadmap.

That’s why we place such a strong emphasis on early alignment. We work closely with customers to de-risk development from day one, bringing decades of domain expertise, a proven process, and a sense of urgency that matches theirs. Ultimately, it’s about giving them the confidence that they’ll reach the market quickly with a system that works, scales, and earns the trust of their end users.

What does the competitive landscape look like, and how do you differentiate?

The equipment development space is becoming increasingly specialized as technologies grow more complex and timelines get tighter. While there are many players offering contract manufacturing or niche engineering services, very few are structured to provide proper end-to-end support from early design definition through to production-ready delivery. That’s where Owens Design stands apart.

What differentiates us is our ability to engage early in the product lifecycle, even when requirements are still evolving, and carry that design through to a production-ready, scalable tool. Many other service providers are either focused on early-stage prototyping or late-stage manufacturing. However, few can bridge both sides with the same level of technical depth and delivery reliability. Owens Design has the ability to close the gap.

What new features/technology are you working on?

We’re focused on adding value to our customers, working on new ways to address what they need most. In the semiconductor capital equipment space, we’re receiving a strong message that customers need to get to market even faster and require assistance with navigating their customers’ expectations and fab requirements, including SEMI spec compliance, particle control, vibration control, and fab interface software. With artificial intelligence accelerating the advanced packaging market, we’re seeing many new technologies being developed to help improve yield. There is a high interest in our experience with developing Inspection and Metrology equipment to help new technologies get into production quickly, which has led us to our most recent initiative, called PR:IME™. This new platform accelerates the commercialization of these technologies.

What is the PR:IME platform, and how does it accelerate the development of semiconductor inspection and metrology tools?

The idea behind PR:IME came from a recurring challenge we’ve observed over the years: the time it takes to bring inspection and metrology tools from concept to something ready for deployment in a fab. For most tool developers, every new system starts as a clean sheet: custom mechanics, software, controls, and wafer handling. That means long lead times and a lot of engineering effort spent on non-differentiating components.

We asked ourselves: What if we could take some of that burden off their plate? PR:IME is our answer to that. It’s a modular platform with standardized mechanical, electrical, and software interfaces. A flexible foundation that lets customers plug in their core IP while we handle the rest. That way, teams can focus on what makes their technology unique, not on reinventing basic infrastructure.

One of the things we’re most excited about is its scalability. For R&D environments, a manual wafer loading option is available to get up and running quickly. Then, as the tool matures and heads toward volume production, there is a clear path to fully automated wafer handling without changing the process hardware. That kind of flexibility makes it easier to iterate early and scale later without having to start over from scratch. It’s really about helping innovators move faster and with more confidence.

How do customers usually engage with your company?

Our engagements typically begin with a collaborative discovery process. We work closely with customers to understand their technical challenges, commercial objectives, and long-term vision for the project. This includes discussing key performance objectives, test methods, cost and schedule constraints, system complexity, and barriers to success. By gaining a clear understanding of both the engineering and business context, we’re able to align early on around what success looks like.

If the opportunity is a strong technical and commercial fit, we partner with customers through a phased development approach. This model offers a structured, low-risk pathway for transitioning from concept to implementation, starting with system architecture and feasibility, then progressing through detailed design, prototyping, and ultimately, scalable production. Each phase is designed to validate assumptions and refine the scope, giving customers confidence in both the technical viability and the business case.

This process enables us to build trust and deliver value at every step, whether we’re designing a new tool from scratch or helping an existing system evolve for the next stage of production.

Contact Owens Design

Also Read:

CEO Interview with Dr. Avi Madisetti of Mixed-Signal Devices

CEO Interview with Andrew Skafel of Edgewater Wireless

CEO Interview with Jutta Meier of IQE


CEO Interview with Karim Beguir of InstaDeep

CEO Interview with Karim Beguir of InstaDeep
by Daniel Nenni on 08-10-2025 at 8:00 am


Karim Beguir is InstaDeep’s Chief Executive Officer. He helps companies get to grips with the latest AI breakthroughs and deploy these in order to improve efficiency and ROI. As a graduate of France’s Ecole Polytechnique and former Program Fellow at NYU’s Courant Institute, Karim has a passion for teaching and using applied mathematics. He is also a mentor at Google for Startups Accelerator and is a steering committee member of Deep Learning Indaba. Karim is on a mission to democratise AI and make it accessible to a wide audience.

Tell us about your company?

InstaDeep delivers AI-powered decision-making systems for enterprises, at scale, thanks to expertise in Artificial Intelligence research, software development, and high-performance computing. InstaDeep has a track record in several industries such as biotechnology, logistics, and electronics. In 2024, InstaDeep launched the Pro version of DeepPCB, a cloud-based AI Place & Route solution.

What problems are you solving?

DeepPCB addresses the time-consuming and complex process of manual placement and routing for printed circuit boards (PCBs). DeepPCB is not an EDA solution; it’s complementary to EDA companies that have PCB design tools. Its AI-powered approach leverages reinforcement learning to deliver high-quality routing that meets modern PCB design requirements beyond the capabilities of traditional auto-routers. DeepPCB accelerates design cycles, produces DRC-clean layouts, and optimizes routing paths to eliminate violations. By overcoming the limitations of manual and traditional PCB design methods, DeepPCB provides engineers and designers with a more efficient, accurate, and scalable solution, ultimately reducing time-to-market and overall design effort and enabling faster product deployment.
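To make the idea of automated placement optimization concrete, here is a deliberately tiny hill-climbing sketch that swaps component positions to reduce total Manhattan wirelength. This is only an illustration of the general optimization problem; it is not DeepPCB’s reinforcement-learning approach, and all component names and coordinates are made up.

```python
import random

def wirelength(placement, nets):
    """Sum of Manhattan distances for each two-pin net."""
    total = 0
    for a, b in nets:
        (x1, y1), (x2, y2) = placement[a], placement[b]
        total += abs(x1 - x2) + abs(y1 - y2)
    return total

def optimize(placement, nets, iterations=1000, seed=0):
    """Greedy hill climbing: keep a random swap only if it shortens the wires."""
    rng = random.Random(seed)
    comps = list(placement)
    best = wirelength(placement, nets)
    for _ in range(iterations):
        a, b = rng.sample(comps, 2)
        placement[a], placement[b] = placement[b], placement[a]  # try a swap
        cost = wirelength(placement, nets)
        if cost < best:
            best = cost                                          # keep the improvement
        else:
            placement[a], placement[b] = placement[b], placement[a]  # revert
    return best

# Three hypothetical components and two two-pin nets:
placement = {"U1": (0, 0), "U2": (3, 3), "U3": (0, 3)}
nets = [("U1", "U2"), ("U2", "U3")]
```

Real auto-routers and learned approaches navigate far richer cost functions (DRC rules, congestion, layer changes), but the core loop of proposing a change and scoring it is the same shape.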

What application areas are your strongest?

DeepPCB is built for companies designing complex PCBs, especially in areas like consumer electronics, automation, and industrial tech, where quality control isn’t optional. If speed, precision, and efficiency matter to your business, our automated PCB placement and routing will save you significant time, deliver higher-quality boards, and make your team more productive.

What keeps your customers up at night?

Engineering teams are focused on meeting aggressive tape-out schedules by working backward from final deadlines through prototyping and the entire PCB design process. They are seeking ways to leverage new technologies to streamline this workflow and manage increasing complexity. With growing demand and limited resources, simply adding headcount may not resolve these challenges, prompting the need for smarter, more automated solutions to ensure timelines are met without compromising quality.

What does the competitive landscape look like and how do you differentiate?

DeepPCB faces competition from established players who are integrating AI into their existing solutions, as well as from specialized AI-focused startups and even community-driven initiatives. DeepPCB distinguishes itself by transforming the PCB design process from a manual, time-consuming endeavor into an efficient, AI-driven, and cloud-based workflow, resulting in faster development cycles, optimized designs, and greater accessibility for engineers and businesses of all sizes.

What new features/technology are you working on?

DeepPCB is actively working on adding new features to its offering, with the goal to support more use-cases and serve more enterprise customers. The team is also focusing on scaling the solution to bigger and more complex boards. The online platform is also constantly evolving, with fresh new interfaces and interactivity features.

How do customers normally engage with your company?

Customers typically engage with DeepPCB in several ways. Companies involved in PCB design often approach DeepPCB to address current design challenges, while many are also planning for future projects and seeking a competitive edge. Organizations with existing PCB design technology turn to DeepPCB to enhance their capabilities beyond traditional tools. Additionally, resellers look to expand their offerings by partnering with DeepPCB, and research groups engage to stay updated on the latest advancements in AI for PCB design.

DeepPCB offers multiple engagement models, including its cloud-based platform and API integration. The company remains active in the PCB design community by publishing blogs, providing free trials, and offering flexible pricing options to meet diverse customer needs.

Also Read:

CEO Interview with Dr. Avi Madisetti of Mixed-Signal Devices

CEO Interview with Andrew Skafel of Edgewater Wireless

CEO Interview with Jutta Meier of IQE


Podcast EP302: How MathWorks Tools Are Used in Semiconductor and IP Design with Cristian Macario

Podcast EP302: How MathWorks Tools Are Used in Semiconductor and IP Design with Cristian Macario
by Daniel Nenni on 08-08-2025 at 10:00 am

Dan is joined by Cristian Macario, senior technical professional at MathWorks, where he leads global strategy for the semiconductor segment. With a background in electronics engineering and over 15 years of experience spanning semiconductor design, verification, and strategic marketing, Cristian bridges engineering and business to help customers innovate using MathWorks tools.

Dan explores how the popular MathWorks portfolio of tools such as Simulink are used in semiconductor and IP design with Cristian, who describes how these tools are used across the complete design process from architecture, to pre-silicon, to post-silicon. Cristian explains several use cases for MathWorks tools in applications such as AI/Datacenter design and the integration of analog/digital design with real-world data.

MathWorks can help develop architectural strategies to optimize analog and mixed signal designs for demanding applications. The early architectural models developed using MathWorks tools can be refined as the design progresses and those models can be used in later phases of design validation to ensure the final silicon implementation follows the original architectural specifications. Cristian also describes use models where semiconductor and IP providers use MathWorks models as executable specifications for products to ensure effective and optimal use of these products.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Making Intel Great Again!

Making Intel Great Again!
by Daniel Nenni on 08-08-2025 at 6:00 am

Intel 3.0 Logo SemiWiki

Lip-Bu Tan made it very clear on his most recent call that Intel will not continue to invest in leading edge semiconductor manufacturing solo. Lip-Bu is intimately familiar with TSMC and that is the collaborative business model he envisions for Intel Foundry. I support this 100%. Intel and Samsung have tried to compete head-to-head with TSMC in the past using the IDM mentality and have failed so there is no need to keep banging one’s head against that reinforced concrete wall.

Lip-Bu Tan is clearly throwing down the gauntlet like no other Intel CEO has done before. If we want leading edge semiconductor manufacturing to continue to be developed in the United States we all need to pitch in and help. Are you listening politicians? Are you listening Apple, Qualcomm, Broadcom, Marvell, MediaTek, Amazon, Google, Microsoft, etc…

I’m not sure the media understands this. That, and the fact that Lip-Bu under promises and over delivers.

There was some pretty funny speculation after the Intel investor call, including some pretty dire predictions and ridiculous comments from so-called “sources”. This has all been discussed in the SemiWiki Experts Forum, but let me recap:

First the absolutely most ridiculous one:

“An industry source told the publication that President Donald Trump has mandated TSMC fulfill two conditions if Taiwan is to see any tariff reduction:

  • Buy a 49% stake in Intel
  • Invest a further $400 billion in the US”

To be clear, TSMC investing in Intel will not help Intel. TSMC investing another $400B in the US will not help Intel. This is complete nonsense. The best comment came from my favorite analyst Stacy Rasgon (Bernstein & Co). He estimated that Intel has no more than 18 months to “land a hero customer on 14A” which I agree with completely and so does Elon Musk.

Samsung to Produce Tesla Chips in $16.5 Billion Multiyear Deal

“This is a critical point, as I will walk the line personally to accelerate the pace of progress … the fab is conveniently located not far from my house.” Elon Musk

Of course, everyone wanted to know why Intel missed this mega deal since it is exactly what Intel needs, a hero customer. Personally, I think it is a huge distraction having Elon Musk intimately involved in your business which could end tragically. That is not a risk I would take as the CEO of Intel unless it was THE absolute last resort, which it probably is for Samsung Foundry. Samsung also has plenty of other things to sell Tesla (Memory, Display Tech, Sensors, etc…) so this is a better fit than TSMC or Intel Foundry.

I do hope this deal is successful for all. The foundry race needs three fast horses. The semiconductor industry thrives on innovation and innovation thrives when there is competition, absolutely.

On the positive side of this mega announcement, hopefully other companies will step up and make similar multi-billion-dollar partnerships with Intel Foundry if only to butt egos with Elon Musk. Are you listening Jeff Bezos? How about investing in the industry that helped you afford a $500M yacht? The same for Bill Gates, where would Microsoft be without Intel? How about you Mark Zuckerberg? Where would we all be without leading edge semiconductor manufacturing? And where will we be without access to it in the future because that could certainly happen.

If we want the US to continue to lead semiconductor manufacturing like we have for the past 70+ years we need support from politicians, billionaires, the top fabless semiconductor companies, and most certainly Intel employees.

What should Intel executives do? Simple, just follow Lip-Bu’s leadership and be transparent, play the cards you are dealt, deliver on your commitments, and make Intel great again. Just my opinion of course.

Just a final comment on the most recent CEO turmoil:

Lip-Bu Tan is known all over the world. He was on the Intel Board of Directors before becoming CEO so the Intel Board certainly knows him. The CEO offer letter specifically allowed Lip-Bu to continue his work with Walden International. Lip-Bu founded Walden 38 years ago and it is no secret as to what they do. Walden has invested in hundreds of companies around the world, and yes some of them are in China, but the majority are here in the United States.

What happens next? It will be interesting to see if the semiconductor industry allows political interference in choosing our leadership. Hopefully that is not the case because if it is we are in for a very bumpy ride. Intel has no cause to remove Lip-Bu Tan so if there is a separation it will be on Lip-Bu’s terms. I for one hope that is not the case.

My commitment to you and our company. A message from Intel CEO Lip-Bu Tan to all company employees.

The following note from Lip-Bu Tan was sent to all Intel Corporation employees on August 7, 2025:

Dear Team, 

I know there has been a lot in the news today, and I want to take a moment to address it directly with you.  

Let me start by saying this: The United States has been my home for more than 40 years. I love this country and am profoundly grateful for the opportunities it has given me. I also love this company. Leading Intel at this critical moment is not just a job – it’s a privilege. This industry has given me so much, our company has played such a pivotal role, and it’s the honor of my career to work with you all to restore Intel’s strength and create the innovations of the future. Intel’s success is essential to U.S. technology and manufacturing leadership, national security, and economic strength. This is what fuels our business around the world. It’s what motivated me to join this team, and it’s what drives me every day to advance the important work we’re doing together to build a stronger future.

There has been a lot of misinformation circulating about my past roles at Walden International and Cadence Design Systems. I want to be absolutely clear: Over 40+ years in the industry, I’ve built relationships around the world and across our diverse ecosystem – and I have always operated within the highest legal and ethical standards. My reputation has been built on trust – on doing what I say I’ll do, and doing it the right way. This is the same way I am leading Intel. 

We are engaging with the Administration to address the matters that have been raised and ensure they have the facts. I fully share the President’s commitment to advancing U.S. national and economic security, I appreciate his leadership to advance these priorities, and I’m proud to lead a company that is so central to these goals. 

The Board is fully supportive of the work we are doing to transform our company, innovate for our customers, and execute with discipline – and we are making progress. It’s especially exciting to see us ramping toward high-volume manufacturing using the most advanced semiconductor process technology in the country later this year. It will be a major milestone that’s a testament to your work and the important role Intel plays in the U.S. technology ecosystem.  

Looking ahead, our mission is clear, and our opportunity is enormous. I’m proud to be on this journey with you. 

Thank you for everything you’re doing to strengthen our company for the future.  

Lip-Bu 

https://newsroom.intel.com/corporate/my-commitment-to-you-and-our-company

Also Read:

Why I Think Intel 3.0 Will Succeed

Design-Technology Co-Optimization (DTCO) Accelerates Market Readiness of Angstrom-Scale Process Technologies

Intel Foundry is a Low Risk Alternative to TSMC


Agentic AI and the EDA Revolution: Why Data Mobility, Security, and Availability Matter More Than Ever

Agentic AI and the EDA Revolution: Why Data Mobility, Security, and Availability Matter More Than Ever
by Michael Johnson on 08-07-2025 at 10:00 am

NetApp Agentic AI

The EDA (Electronic Design Automation) and semiconductor industries are experiencing a transformative shift—one that’s being powered by the rise of Agentic AI. If you attended this year’s SNUG, CDNLive, and/or DAC 2025, you couldn’t miss it: agentic AI was the hot topic, dominating keynotes, demos, and booth conversations from start-ups to the “Big 3” (Synopsys, Cadence, Siemens EDA).

But beyond the buzz, there’s a real, urgent need driving this adoption. Chip designs are growing exponentially in complexity, and the pool of skilled engineers isn’t keeping pace. The only way to bridge this productivity gap is with smarter automation—enter agentic AI. But for agentic AI to deliver on its promise, the underlying data infrastructure must be up to the task. That’s where NetApp, with ONTAP and FlexCache, comes in.

What Is Agentic AI?
In short, it’s the next step in AI evolution: systems that don’t just automate tasks but act as reasoning agents. Agentic AI uses specialized AI agents that reason and iteratively plan to autonomously solve complex, multi-step problems, following a four-step process: Perceive, Reason, Act, and Learn.
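The Perceive-Reason-Act-Learn loop can be sketched in a few lines of Python. Everything here is illustrative: in a real agent framework, the Reason step would call an LLM and the Act step would invoke external tools (for example, EDA jobs), whereas this toy agent just counts toward a goal.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    goal: int                             # toy goal: reach this counter value
    state: int = 0                        # toy environment state
    history: list = field(default_factory=list)

    def perceive(self):
        return self.state

    def reason(self, observation):
        # Plan the next action: stop if the goal is met, else keep working.
        return "stop" if observation >= self.goal else "increment"

    def act(self, action):
        if action == "increment":
            self.state += 1
        return self.state

    def learn(self, action, result):
        # Record what happened so later reasoning can use it.
        self.history.append((action, result))

def run_agent(agent, max_steps=100):
    for _ in range(max_steps):
        obs = agent.perceive()            # Perceive
        action = agent.reason(obs)        # Reason
        if action == "stop":
            break
        result = agent.act(action)        # Act
        agent.learn(action, result)       # Learn
    return agent.state
```

The key difference from plain automation is the loop itself: the agent re-observes after every action and re-plans, rather than executing a fixed script.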

Agentic AI: More Than Just Hype

For example, one new EDA startup, ChipAgents.ai, demonstrated a live demo where agentic AI read a 300-page ARM processor spec and, in real time, generated a detailed test plan and verification suite. As someone who’s been in the trenches of chip verification, I can say: this is not an incremental improvement. This is game-changing productivity. The benefits are clear:

  • Automates the most tedious engineering tasks
  • Bridges the engineering talent gap
  • Enables faster, more reliable chip design cycles

These benefits are only realized if your data is where it needs to be, when it needs to be there, and always secure.

Microsoft kicked off DAC with a talk by William Chappell, who presented reasoning agents in the EDA design flow and introduced Microsoft’s Discovery platform. Microsoft’s Discovery platform for agentic AI is an advanced hybrid cloud-based environment designed to accelerate the development and deployment of agentic AI workflows. The Discovery platform used NetApp’s ONTAP FlexCache technology to continuously and securely keep on-prem design data in sync with Microsoft’s Azure NetApp Files volumes in the cloud.

Why Data Mobility, Security, and Availability Are Critical for Agentic AI

1. Data Mobility: The Heart of Hybrid Cloud AI

Agentic AI requires massive GPU resources, which are often impractical to build or scale in existing datacenters due to the massive power requirements of H100, H200, or newer GPU systems. Requirements for high-power racks, water cooling, and rack space will make adoption challenging, and that is before considering the change from traditional networking to InfiniBand. That’s why most early experimentation and deployment of agentic AI for EDA will happen in the cloud.

But here’s the challenge: EDA workflows generate and process huge volumes of data that need to move seamlessly between on-prem and cloud environments. Bottlenecks or delays can kill productivity and erode the benefits of AI.

NetApp ONTAP and FlexCache are uniquely positioned to solve this. ONTAP’s unified data management, combined with FlexCache’s ability to cache active datasets wherever the compute is, gives engineers instant, secure access to the data they need, whether they’re running workloads on-prem, in the cloud, or both.

FlexCache in Action:
FlexCache can securely, continuously, and instantly keep all design data in sync between on-prem and cloud. This gives cloud-based AI agents real-time, secure access to design data from the active design work running on-prem. In the Act stage, AI agents can then automatically run EDA tools either on-prem or in the cloud, based on the agent-generated plan.
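The general idea of keeping a remote cache in sync with an origin volume can be sketched with a version check on every read. This is a generic concept sketch, not NetApp’s FlexCache implementation; the class names and the versioning scheme are invented for the example.

```python
class Origin:
    """Authoritative store, e.g. the on-prem design volume."""
    def __init__(self):
        self.data = {}
        self.versions = {}

    def write(self, path, contents):
        self.data[path] = contents
        self.versions[path] = self.versions.get(path, 0) + 1

class Cache:
    """Remote cache, e.g. a cloud-side copy that AI agents read from."""
    def __init__(self, origin):
        self.origin = origin
        self.local = {}       # path -> (version, contents)

    def read(self, path):
        latest = self.origin.versions.get(path)
        cached = self.local.get(path)
        if cached is None or cached[0] != latest:
            # Stale or missing: pull the current contents from the origin.
            self.local[path] = (latest, self.origin.data[path])
        return self.local[path][1]
```

The payoff is that a cloud agent reading `design.v` always sees the latest on-prem edit, while unchanged files are served from the local copy without re-transfer.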

2. Data Security: Protecting Your IP in a Distributed World

EDA data is among the most sensitive in the world. Intellectual property, proprietary designs, and verification strategies are the crown jewels of any semiconductor company. Moving this data between environments introduces risk—unless you have robust, enterprise-grade security.

ONTAP’s security features, from encryption at rest and in transit to advanced access controls and audit logging, ensure that your data is always protected, no matter where it lives or moves. FlexCache maintains these security policies everywhere you need your data, so you never compromise on protection, even as you accelerate workflows.

3. Data Availability: No Downtime, No Delays

Agentic AI thrives on data availability. If an AI agent can’t access the latest design files or verification results, productivity grinds to a halt. In a world where chip tape-outs are measured in millions of dollars per day, downtime is not an option.

ONTAP’s legendary reliability and FlexCache’s always-in-sync architecture ensure that your data is available whenever and wherever it’s needed. Whether you’re bursting workloads to the cloud or collaborating across continents, your AI agents—and your engineers—can count on NetApp.

NetApp: The Foundation for Agentic AI in EDA

Agentic AI is set to reshape EDA and semiconductor design, closing the productivity gap and enabling new levels of automation and innovation. But none of this is possible without the right data infrastructure.

Let’s face it: most EDA datacenters today aren’t ready to become “AI Factories,” as NVIDIA’s Jensen Huang and industry experts predict will be required. Customers are unlikely to invest in new on-prem infrastructure until agentic AI solutions mature and requirements are clear. That’s why hybrid cloud is the go-to strategy—and why NetApp is uniquely positioned to help.

  • ONTAP is the only data management platform integrated across all three major clouds’ EDA reference architectures.
  • FlexCache is the most widely adopted hybrid cloud solution for high-performance, always-in-sync data.
  • No other vendor offers this level of hybrid cloud readiness, flexibility, and security.

Even if your organization isn’t ready for the cloud today, why invest in legacy storage that can’t support your hybrid future? The next wave of EDA innovation will be powered by agentic AI, and it will demand data mobility, security, and availability at unprecedented scale. NetApp is ready—are you?

Choose NetApp—and be ready for the future of EDA.

Ready to accelerate your agentic AI journey? Learn more about NetApp ONTAP and FlexCache for EDA design workflows at NetApp.com.

Also Read:

Software-defined Systems at #62DAC

What is Vibe Coding and Should You Care?

Unlocking Efficiency and Performance with Simultaneous Multi-Threading


WEBINAR: What It Really Takes to Build a Future-Proof AI Architecture?

WEBINAR: What It Really Takes to Build a Future-Proof AI Architecture?
by Don Dingee on 08-07-2025 at 6:00 am

AI Inference Use Cases from Edge to Cloud

Keeping up with competitors in many computing applications today means incorporating AI capability. At the edge, where devices are smaller and consume less power, the option of using software-powered GPU architectures becomes unviable due to size, power consumption, and cooling constraints. Purpose-built AI inference chips, tuned to meet specific embedded requirements, have become a not-so-secret weapon for edge device designers. Still, some teams are just awakening to the reality of designing AI-capable chips and have questions on suitable AI architectures. Ceva recently hosted a webinar featuring two of its semiconductor IP experts, who discussed ideas for creating a future-proof AI architecture that can meet today’s requirements while remaining flexible to accommodate rapid evolution.

A broader look at a wide-ranging AI landscape

AI is an enabling technology that powers many different applications. The amount of chip energy consumption and area designers have to work with to achieve the necessary performance for an application can vary widely, and, as with previous eras of compute technology, the roadmap continues to trend toward the upper right as time progresses. Ronny Vatelmacher, Ceva’s Director of Product Marketing, Vision and AI, suggests the landscape may ultimately include tens of billions of AI-enabled devices for various applications at different performance levels. “The cloud still plays a role for training and large-scale inference, but real-time AI happens at the edge, where NPUs (neural processing units) deliver the required performance and energy efficiency,” he says.

At the highest performance levels in the cloud, a practical AI software framework speeds development. “Developers today don’t have to manage the complexity of [cloud] hardware,” Vatelmacher continues. “All of this compute power is abstracted into AI services, fully managed, scalable, and easy to deploy.” Edge devices with a moderate but growing performance focus prioritize the efficient inferencing of models, utilizing techniques such as NPUs with distributed memory blocks, high-bandwidth interconnects, sparsity, and coefficient quantization to achieve this goal. “[Generative AI] models are accelerating edge deployment, with smaller size and lower memory use,” he observes. Intelligent AI-enabled edge devices offer reduced inference latency while maintaining low power consumption and size, and can also enhance data privacy since less raw data moves across the network. Vatelmacher also sees agentic AI entering the scene, systems that go beyond recognizing patterns to planning and executing tasks without human intervention.

How do chip designers plan an AI architecture to handle current performance but not become obsolete in a matter of 12 to 18 months? “When we talk about future-proofing AI architectures, we’re really talking about preparing for change,” Vatelmacher says.

A deep dive into an NPU architecture

The trick lies in creating embedded-friendly NPU designs with a smaller area and lower power consumption that aren’t overly optimized for a specific model, which may fall out of favor as technology evolves, but rather in a resilient architecture. Assaf Ganor, Ceva’s AI Architecture Director, cites three pillars: scalability, extendability, and efficiency. “Resource imbalance occurs when an architecture optimized for high compute workloads is forced to run lightweight tasks,” says Ganor. “A scalable architecture allows tuning the resolution of processing elements, enabling efficient workload-specific optimization across a product portfolio.” He presents a conceptual architecture created for the Ceva-NeuPro-M High Performance AI Processor, delving deeper into each of the three pillars and highlighting blocks in the NPU and their contributions.

Ganor raises interesting points about misleading metrics. For instance, low power does not necessarily equate to efficiency; it might instead mean low utilization. Inferences per second (IPS) by itself can also be deceptive, without normalization for silicon area or energy used. He also emphasizes the critical role of the software toolchain in achieving extensibility and discusses how NeuPro-M handles quantization and sparsity. Some of the ideas are familiar, but his detailed discussion reveals Ceva’s unique combination of architectural elements.
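Ganor’s point about misleading metrics is easy to show with arithmetic. The two hypothetical NPUs below are invented for the example: raw inferences per second (IPS) favors one part, while the per-watt and per-mm² views favor the other.

```python
def normalized_metrics(ips, power_w, area_mm2):
    """Normalize raw IPS by power and silicon area for a fairer comparison."""
    return {
        "ips": ips,
        "ips_per_watt": ips / power_w,    # energy efficiency
        "ips_per_mm2": ips / area_mm2,    # area efficiency
    }

# Hypothetical parts: NPU A wins on raw IPS alone...
npu_a = normalized_metrics(ips=4000, power_w=10.0, area_mm2=20.0)
# ...but NPU B does more work per watt and per mm² of silicon.
npu_b = normalized_metrics(ips=3000, power_w=5.0, area_mm2=10.0)
```

For an embedded design, where the power and area budgets are fixed, the normalized numbers are usually the ones that matter.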

The webinar strikes a good balance between a market overview and a technical discussion of future-proof AI architecture. It is a refreshing approach, taking a step back to see a broader picture and detailed reasoning about design choices. There’s also a Q&A segment captured during the live webinar session. Follow the link to register and view the on-demand webinar.

Ceva Webinar: What it really takes to build a future-proof AI architecture?

Also Read:

WEBINAR: Edge AI Optimization: How to Design Future-Proof Architectures for Next-Gen Intelligent Devices

Podcast EP291: The Journey From One Micron to Edge AI at One Nanometer with Ceva’s Moshe Sheier

Turnkey Multi-Protocol Wireless for Everyone