
Manufacturing Is Strategy: Leadership Lessons from the Semiconductor Front Lines

by Kalar Rajendiran on 01-21-2026 at 10:00 am


This article is an editorial synthesis of a fireside chat between Tom Caulfield, Executive Chairman of GlobalFoundries, and John Kibarian, CEO of PDF Solutions, held on December 3, 2025, during the PDF Solutions Users Conference. John Kibarian led the conversation to draw out Tom Caulfield’s perspectives on leadership lessons forged at the center of semiconductor manufacturing, strategy, and technological change. Topics ranged from factory operations and AI to supply chains, education, and careers.

As you read, a clear throughline emerges: deliberate leadership choices shape outcomes. Manufacturing is strategy. AI is leverage rather than magic. Accountability defines culture. Sustained success requires leaders willing to embrace discomfort rather than defer it.

The Decision Most Leaders Avoid

When Tom became CEO of GlobalFoundries in 2018, the board expected continued investment in 7nm technology and stable execution across the company’s global footprint, including China. The economics, however, left little room for interpretation. Advanced semiconductor manufacturing had become a scale business dominated by a small number of players investing tens of billions of dollars per node. With approximately $5 billion in annual revenue, GlobalFoundries could not compete in that race without putting the company’s future at risk.

Within weeks, Tom made the decision to exit 7nm. The move was controversial, but it immediately clarified priorities. Rather than chasing prestige, the company chose focus, realism, and a path toward sustainable differentiation.

Manufacturing Is Strategy

Tom’s views on manufacturing were shaped long before GlobalFoundries, during his tenure at IBM. Semiconductor fabs are unforgiving systems where physics enforces discipline. Minor deviations in process control compound quickly into yield loss, missed commitments, and customer dissatisfaction.

What separated high-performing fabs from struggling ones was not simply superior equipment. It was rigor. Engineers were often buried in manual data collection, leaving too little time for analysis or root-cause identification. Accountability was fragmented, and problems persisted longer than they should have.

Improvement required disciplined analytics, automation, and unmistakable ownership. Leaders were expected to absorb pressure rather than transmit it downward, allowing teams to focus on execution instead of self-protection. Manufacturing excellence, Tom learned, is not an aspiration but rather a leadership decision.

Accountability as a Force Multiplier

One of the most counterintuitive insights from the fireside chat was how accountability reduces fear. When responsibility is explicit and leaders are visibly accountable, organizations move faster. Defensive behavior recedes, and problem-solving accelerates.

At GlobalFoundries, ambiguity was unacceptable. The operating rhythm centered on defining the problem and fixing it with urgency.

This mindset extended to careers as well. Tom emphasized that mastery often creates its own trap. When people become highly competent, learning slows, comfort sets in, and growth plateaus. Organizations stagnate for the same reason individuals do. Sustained progress requires leaders and teams to continually place themselves in unfamiliar territory.

Why GlobalFoundries Walked Away from the Leading Edge

Exiting 7nm was not a retreat from relevance. It was an acknowledgment of where demand actually resides. The majority of semiconductor volume serves markets such as automotive, industrial systems, RF, and power management: segments that value reliability, longevity, and integration over transistor density.

GlobalFoundries’ Singapore operations illustrated what disciplined execution could deliver. Years of sustained reinvestment, operational control, and focus on differentiated technologies produced a profitable and resilient manufacturing base. The strategic mandate became clear: replicate that model across the organization.

By aligning ambition with economic reality, GlobalFoundries positioned itself to compete where it could win rather than where scale dictated the terms.

Global Doesn’t Mean Everywhere; It Means Repeatable

For decades, global manufacturing was equated with geographic reach. In practice, excessive dispersion often created fragility rather than resilience. The fireside chat reframed global manufacturing as repeatability rather than footprint.

True global capability comes from a common manufacturing platform that can be qualified, transferred, and scaled across multiple fabs. Customers care less about the specific location of production than about confidence that supply can shift reliably when disruptions occur.

Repeatability is what converts manufacturing from an operational necessity into a strategic asset.

AI in the Real World: Leverage, Not Magic

Artificial intelligence featured prominently in the discussion, but without exaggeration. AI has demonstrated real value in digital domains such as design verification, predictive maintenance, forecasting, and equipment utilization. In these areas, pattern recognition and optimization deliver measurable returns.

Manufacturing, however, remains grounded in physical reality. Materials, mechanics, and human judgment still govern outcomes. AI can enhance decision-making, but it does not replace accountability or operational discipline.

Leaders who succeed with AI deploy it selectively, prioritizing applications that deliver meaningful, order-of-magnitude improvements rather than incremental gains.

The Semiconductor Supply Chain Is a Leadership Failure

The concentration of advanced semiconductor manufacturing in a single region of the world represents a systemic vulnerability. This is not an ideological concern. It is a failure of governance and risk management.

While initiatives such as the CHIPS Act have begun addressing supply-side economics, demand-side commitments remain insufficient. Building fabs requires long-term certainty. Manufacturing transitions unfold over years, not quarters, and leadership must plan accordingly.

Supply chain resilience ultimately reflects foresight and responsibility, not nationalism.

AI Is Rewriting Who Captures Value

AI is not only changing how chips are built; it is reshaping industry economics. As design costs fall and software increasingly defines system functionality, system companies are finding it more attractive to develop custom silicon tailored to their products.

This shift places pressure on traditional boundaries between system companies, fabless designers, and foundries. In the next phase of the industry, differentiation and focus will matter more than scale alone.

Leadership Must Institutionalize Global Talent

Modern semiconductor manufacturing and design depend on global talent. Remote engineering, once viewed as a compromise, has become a competitive advantage. Distributed teams enable continuous progress across time zones, broaden perspectives, and expand access to scarce expertise.

The pandemic accelerated adoption, but the deeper lesson endures. Organizations should not wait for crises to modernize how they work. Leadership must institutionalize what works rather than revert to familiar constraints.

Why Liberal Arts Still Matter in an AI World

The discussion concluded with a reflection on education and leadership. Engineering teaches how to build systems. Liberal arts cultivate judgment, context, and critical thinking.

As AI accelerates execution and optimization, human value increasingly lies in framing problems, weighing tradeoffs, and making decisions under uncertainty. These capabilities are not automated. They are developed.

Also Read:

PDF Solutions’ AI-Driven Collaboration & Smarter Decisions

PDF Solutions Charts a Course for the Future at Its User Conference and Analyst Day

PDF Solutions Calls for a Revolution in Semiconductor Collaboration at SEMICON West


2026 Outlook with Kamal Khan of Perforce

by Daniel Nenni on 01-21-2026 at 8:00 am


Tell us a little bit about yourself and your company.

Kamal Khan, Vice President North America Automotive/Semiconductor at Perforce. Perforce is trusted by the world’s leading brands to drive quality, security, compliance, collaboration, and speed across the technology lifecycle. Our global footprint spans more than 80 countries and includes over 75% of the Fortune 100. Perforce’s semiconductor solutions, Perforce P4 and IPLM, provide an integrated, highly scalable platform for IP and design data management.

What was the most exciting high point of 2025 for your company?

In May 2025, we announced our partnership with Siemens Digital Industries Software to transform how smart, connected products are designed and developed. This was certainly a high point of the year, generating strong interest from industry media and customers alike. Together with Siemens, we’re delivering a unified development platform that brings real-time decision-making, end-to-end traceability, and AI-driven insight across the product development lifecycle.

What was the biggest challenge your company faced in 2025?

Honestly, the biggest challenge we faced in 2025 was keeping up with the rapid pace of change in semiconductor design. The industry is moving fast—AI chips, automotive systems, IoT devices—all requiring massive amounts of reusable IP and tighter compliance. Managing thousands of IP blocks while maintaining quality and compliance—without slowing down innovation—is tough. We have to make sure our solutions, like IPLM and P4, not only centralize IP management but also incorporate greater automation and intelligence to help teams move faster with confidence. That balance between speed and reliability was the hardest part, but it pushed us to innovate in ways that really matter to our customers.

How is your company’s work addressing this challenge?

We’re tackling these challenges head-on by doubling down on integration and automation through our partnership with Siemens. Semiconductor teams need seamless workflows, and that means connecting IP and data management with the design and verification tools they already rely on. By integrating IPLM and P4 with Siemens EDA solutions, we’re creating a unified environment where engineers can manage IP, track dependencies, and run compliance checks without jumping between systems. We’re also introducing AI-driven capabilities to reduce manual effort and catch issues earlier. This partnership isn’t just about technology; it’s about giving design teams confidence that they can innovate faster without sacrificing quality or compliance.

What do you think the biggest growth area for 2026 will be, and why?

In 2026, the biggest growth opportunity for Perforce in the semiconductor industry will come from helping teams work smarter, not harder. Chip design is getting more complex as companies race to build AI-driven processors for everything from cars to smart devices. That complexity means more IP to manage, more dependencies, more compliance checks, and more chances for costly mistakes.

How is your company’s work addressing this growth?

IPLM can address this growing complexity by giving teams a single source of truth for IP, automating the release process and BOM management, and using AI to predict dependencies and flag risks early. Combined with P4’s trusted, scalable version control, this approach helps engineers focus on innovation instead of chasing files or fixing errors—speeding up development and reducing respins.

What conferences did you attend in 2025 and how was the traffic?

Our team attended and exhibited at Embedded World and DAC in 2025. Booth traffic was strong at both conferences, particularly at DAC, as we announced our partnership with Siemens at the event and there was a lot of excitement around that. We also attended the Chiplet Summit, Siemens’ User2User conference, and the GSA Executive Forum in Silicon Valley.

Will you participate in conferences in 2026? Same or more as 2025?

We will be attending the same conferences in 2026, and possibly a few more.

How do customers normally engage with your company?

Our customer engagements with the largest semiconductor companies typically start with a technical evaluation, followed by a POC, before moving to production. We maintain a high cadence of communication with customers to ensure close collaboration and the highest level of support throughout the evaluation, POC, and beyond.

We welcome all semiconductor leaders to join our IPLM Monthly User Group (MUG) sessions. It’s a great opportunity to hear from product experts, industry peers, and IPLM users about topics like IP governance, quality, and security, as well as IPLM best practices and the latest product features. To register, visit https://www.perforce.com/products/helix-iplm/user-group.

Also Read:

What’s New with IP Lifecycle Management (IPLM)

5 Lessons the Semiconductor Industry Can Learn from Gaming

Why IP Quality and Governance Are Essential in Modern Chip Design


Curbing Soaring Power Demand Through Foundation IP

by Bernard Murphy on 01-21-2026 at 6:00 am


Power has become a very hot (ha-ha) topic. The media has latched onto the emergence of massive AI datacenters disrupting energy pricing for consumers. Both as consumers and in industry we welcome faster and better features in our hand-held computing devices, cars, homes, industrial processes and businesses. But without further advances in the enabling technologies those new features and higher performance will burn more and more power, putting further strain on stretched personal and business budgets and demanding we more frequently recharge our mobile devices. What more can be done to rein in this relentless thirst for power?

Image courtesy Synopsys

Power is a uniquely challenging metric to manage because it is impacted by all aspects of system design and implementation, from application software down to detailed circuit implementation. Applications, architecture, and design teams each work to minimize power in their respective domains, but they all depend on tight power optimization in the underlying enabling technologies. Part of that enablement is through power-optimized foundation IP (embedded memories, logic cells, I/Os, and NVM), which must be carefully designed to deliver the best power without sacrificing challenging performance goals or compromising on cost/area expectations. The Synopsys Foundation IP teams, in collaboration with their System and EDA colleagues, leverage capabilities honed over 20+ years across many technology nodes to address these goals. Synopsys has released a digest of six articles on foundation IP enablement for power optimization, highlighting some of the focus areas their organization is working on to achieve this goal. Lots of good material. In a short blog I can only call out a few notable points which caught my eye.

Low voltage operation

An important way to reduce dynamic power is to reduce voltage, since dynamic power is proportional to voltage squared. In DVFS or ultra-low voltage systems, voltages can drop to 0.7V or lower to minimize power for relatively performance-insensitive circuitry. Many IoT devices, such as implantable medical devices, must run for years before battery replacement and operate at very low voltage to meet this need. Energy-harvesting IoT devices can go even lower, down to 0.4 volts.
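The quadratic voltage dependence follows directly from the classic switching-power model. The sketch below is a toy illustration; the activity factor, capacitance, and frequency values are placeholders I chose for the example, not figures from the article:

```python
# Dynamic CMOS power follows P = alpha * C * V^2 * f.
# All constants below are illustrative placeholders.

def dynamic_power(alpha, c_farads, v_volts, f_hz):
    """Classic switching-power model: P = alpha * C * V^2 * f."""
    return alpha * c_farads * v_volts**2 * f_hz

nominal = dynamic_power(0.1, 1e-9, 0.9, 1e9)  # 0.9 V supply
low_v = dynamic_power(0.1, 1e-9, 0.7, 1e9)    # 0.7 V supply

# Voltage scaling alone yields roughly a 40% dynamic power reduction here,
# since (0.7/0.9)^2 is about 0.60.
print(f"saving: {1 - low_v / nominal:.1%}")
```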

Conventional foundation IP is not optimal in this regime. Embedded memories must be designed with multiple assist techniques to deliver target power metrics without compromising performance or area. Since these voltages are closer to threshold levels, reliability concerns around switching errors and delay variances must be managed much more carefully. Here Synopsys Foundation IP shows 10-30% area savings and 19-37% power savings from compiler generated memories optimized for low voltage operation. To also support architectural optimizations these compilers offer multiple levels of power control from light sleep to full power off.

Logic cell libraries must equally be redesigned for this operating regime and much more carefully characterized for on-chip variation. For deep low voltage operation, modeling methods go even further, considering higher order moments in timing distributions.
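As a toy illustration of why higher-order moments matter: a long-tailed delay distribution can have an unremarkable mean and sigma while its skewness (the third standardized moment) exposes the tail. The delay samples below are invented for illustration, not characterization data:

```python
# Near-threshold delay distributions become skewed, so characterization
# looks beyond mean/sigma. Sample values here are made up for illustration.
from statistics import mean, pstdev

delays = [1.0, 1.1, 1.0, 1.2, 1.1, 1.0, 2.4]  # ns; note the long-tail sample

mu = mean(delays)
sigma = pstdev(delays)
# Third standardized moment (skewness): zero for a symmetric distribution,
# strongly positive when slow outliers dominate the tail.
skew = mean([((d - mu) / sigma) ** 3 for d in delays])

print(f"mean={mu:.3f} sigma={sigma:.3f} skew={skew:.2f}")
```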

Managing power for HPC and AI

Even with all the improvements mentioned above, one size doesn’t fit all in aggressive, demanding applications, especially at 2-3nm processes. Different design teams may choose to add custom characterization corners, use shrinks from slightly larger feature size processes, or use cells specifically optimized for low power and area.

The Synopsys High-Performance Core (HPC) Design Kit allows designers to adapt libraries, providing tools to tune them to their unique needs, in this case for HPC and AI goals. The kit supports a wide range of Vt options, allowing, for example, super-high-performance processors to support boost modes in DVFS where a core can be overdriven (for a short time) at higher clock frequency. Balancing out power and thermal concerns, other logic blocks can be scaled back to lower voltages and clock frequencies. For cache memories, the kit also provides highly tuned instances to meet tight access times and setup and hold requirements.

Accelerator cores, big arrays of multiply-accumulate (MAC) blocks with supporting memory, are at the heart of AI. Packing these blocks efficiently is essential to managing area and power. Pitch matching specialized logic cells to memories is important in these closely packed repetitive structures, to minimize interconnect power.

Low power AI processors

This paper has a particular focus on AI hardware in datacenters. Here big GPUs support training AI models and are notorious power hogs. But training is an infrequent activity for most AI service providers. These businesses are most concerned with inferencing, invoked when you or I ask ChatGPT or a similar model a question. Inferences are the primary and high-volume AI revenue generators (or cost centers) for service providers. Engagements today start free but very quickly switch to subscription models, calibrated to the complexity of each request and generated response, the time it takes to deliver that response, and importantly how much power is burned in the process.

Power is a hugely important metric in datacenters, governing not only the performance, reliability and useful life of servers and ancillary equipment but also the cost of cooling methods to keep the datacenter running. Power expense in support of cooling is as significant as IT power expense. Many of the same techniques used in power saving in mobile devices such as power switching and DVFS are already commonplace in datacenter designs.

AI chips excel at simultaneous multi-threading. This is what makes them so effective for matrix-intensive AI, but it also results in higher average activity per unit area than you would commonly see in a CPU. Limiting power demand and heating therefore requires lower core voltages, perhaps 0.7 volts. However, communication with external devices must be handled by I/Os that bridge the low internal and higher external voltage domains. Synopsys Foundation IP libraries provide special I/O cells to support these needs.

One Synopsys feature that caught my eye for AI support is their Word All Zero (and half word) memory. Optimized AI inference models are sparse, especially at the edge, containing many zero weights. Avoiding multiply operations when one input is zero can be a big win for both power and performance. Another cool idea, new to me, is that they provide compact latch-based memories to support activation and pooling operations.
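The zero-skipping payoff can be sketched in software, with the caveat that real hardware detects all-zero words via a flag in the memory/datapath rather than in a loop; the function below is an illustrative stand-in, not how the Synopsys memory works internally:

```python
# Software sketch of zero-skipping multiply-accumulate for sparse weights.
# In hardware, an all-zero flag read alongside the weight word lets the
# MAC for that element be gated off entirely.

def sparse_dot(weights, activations):
    """Dot product that skips the multiply whenever the weight is zero."""
    acc = 0
    skipped = 0
    for w, a in zip(weights, activations):
        if w == 0:        # hardware: all-zero flag set, no multiply issued
            skipped += 1
            continue
        acc += w * a
    return acc, skipped

result, skipped = sparse_dot([0, 3, 0, 0, 2], [5, 1, 7, 9, 4])
print(result, skipped)  # 3*1 + 2*4 = 11, with 3 of 5 multiplies skipped
```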

Customizing for ultra-aggressive requirements

As extensive as these foundation IP offerings are, some design teams always want to push further. One such team, in the process of designing optical network infrastructure for Edge AI needed their logic to run at 0.4 volts, demanding memory compilers and logic libraries to match on a very aggressive schedule. Synopsys designed specially optimized memory compiler and logic library IP to meet these needs, on a timeline that helped that customer meet their targets.

Coming soon

Flash technologies have become essential in many applications, but standard implementations were never designed for embedded use below 28nm or in demanding IoT or AI applications. Magneto-Resistive RAM (MRAM) and Resistive RAM (RRAM) have become the go-to solutions for such use-cases, MRAM for high reliability and performance, RRAM for low cost and high density.

Synopsys already provides compiler-based options to deliver either class of memory instance. The MRAM option supports up to 128Mb with multiple feature options and low area and power footprints. RRAM compilers are currently in development.

As process advances approach the Angstrom level, the next technology challenge is foundation IP built on Gate-All-Around (GAA) transistors. There are plenty of interesting challenges here, yet Synopsys is already sampling 2nm GAA libraries with customers.

Very interesting papers, lots of good details which I couldn’t reasonably compress into this blog. Check it out.

Also Read:

Acceleration of Complex RISC-V Processor Verification Using Test Generation Integrated with Hardware Emulation

TSMC based 3D Chips: Socionext Achieves Two Successful Tape-Outs in Just Seven Months!

CISCO ASIC Success with Synopsys SLM IPs


2026 Outlook with Badru Agarwala of Rise Design Automation (RDA)

by Daniel Nenni on 01-20-2026 at 10:00 am


Badru Agarwala is the CEO and Co-Founder of Rise Design Automation (RDA). With a strong track record of 40 years in EDA, he was previously the General Manager of the Calypto Systems Division at Mentor Graphics, now Siemens EDA. He advanced High-Level Synthesis with Catapult and drove innovations in high-level verification and power optimization.

Tell us a little bit about your company. 

RDA is an EDA startup founded by industry veterans with more than 30 years of experience each in hardware design and EDA. We are focused on fundamentally changing how hardware is designed today—supercharging productivity and design quality by raising the level of hardware abstraction, closing the gap between systems and silicon, and deploying agentic AI that operates in real, production design workflows now.

What was the most exciting high point of 2025 for your company? 

The most exciting high point of 2025 was our direct engagement with customers and the feedback we received from real production designs at Tier-1 semiconductor companies. While we believed we had built a differentiated product and architecture, seeing it validated on real silicon by practicing engineers—and delivering measurable results—was a defining moment for the company.

What was the biggest challenge your company faced in 2025? 

The biggest challenge we faced in 2025 was overcoming the long-standing barriers and perceptions that have limited industry-wide adoption of higher-level hardware abstraction and High-Level Synthesis. In practice, this meant supporting diverse design styles, delivering predictable out-of-the-box QoR comparable to hand-coded RTL, establishing robust verification and debug workflows, and providing a clear path for new users to become productive quickly.

How is your company’s work addressing this challenge?  

RDA takes a platform-first approach to High-Level Synthesis: not an incremental point tool, but an integrated system that combines core HLS technology with a production ecosystem of IP and automation. A key part of this is an agent-based, tool-using workflow that helps engineers generate and refine designs by iterating on synthesis and verification results, guided by measured QoR metrics. Customer engagements in 2025 have demonstrated that these barriers can be overcome on real designs, with predictable QoR and workflows that scale beyond expert users.

Are you incorporating AI into your products?

Yes. We treat the LLM as a modular component. Correctness and progress are driven by tool feedback grounded in compile or elaboration, synthesis and verification, and by measured QoR and constraint metrics, rather than being tied to any single LLM or orchestration stack. Engineers describe intent and constraints, AI proposes candidates, Rise tools execute and measure, and AI selects and refines based on the results. We also use reinforcement learning for architectural exploration, optimizing choices against the same QoR and constraint metrics. Once design intent is raised above RTL, today’s natural-language models become practical and effective for design and IP generation and architectural tradeoff exploration.
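A minimal, self-contained sketch of that propose/execute-and-measure/select loop, using a toy scoring function in place of real synthesis and verification; nothing here models RDA's actual tools or APIs:

```python
# Toy sketch of a tool-grounded generate/measure/select loop.
# Candidate "designs" are just integers and "QoR" is a toy score;
# real flows would run synthesis/verification to score each candidate.

def toy_qor(candidate):
    """Stand-in for running the tools and measuring QoR (peak at 7)."""
    return -(candidate - 7) ** 2

def refine(candidates=range(21)):
    """Propose candidates, measure each, keep the best by measured score."""
    best, best_score = None, float("-inf")
    for c in candidates:          # "AI proposes candidates"
        score = toy_qor(c)        # "tools execute and measure"
        if score > best_score:    # "AI selects based on the results"
            best, best_score = c, score
    return best

print(refine())  # 7, the toy optimum
```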

What do you think the biggest growth area for 2026 will be, and why?

In 2026, we expect our primary growth to come from expanding customer deployments of our core technology, building on the validation achieved in 2025. With proven quality and productivity on real designs, we see clear opportunities to extend the platform’s value in critical directions that our customers are asking for. This includes enhancing the power and deployment flexibility of our agentic AI capabilities, as well as leveraging our architecture to more tightly bridge system-level design and silicon through architectural exploration and virtual platform workflows.

What conferences did you attend in 2025 and how was the traffic?

In 2025, we attended both DVCon and DAC, which were our first conferences after emerging from stealth. Traffic at both events was strong, and we had many productive conversations with attendees.

Will you participate in conferences in 2026? Same or more as 2025?

Our current plan is to attend both DVCon and DAC again in 2026, and we are evaluating the possibility of adding additional conferences as we continue to grow.

How do customers normally engage with your company?

Customers typically engage with us directly to explore next steps. In addition, starting in January 2026, customers can also work with our new North America distributor, AI Tech Sales, which we are excited to partner with as we expand our reach.

Contact RDA

Also Read:

Reimagining Architectural Exploration in the Age of AI

Rise Design Automation Webinar: SystemVerilog at the Core: Scalable Verification and Debug in HLS

Moving Beyond RTL at #62DAC


2026 Outlook with Cristian Amitroaie, Founder and CEO of AMIQ EDA

by Daniel Nenni on 01-20-2026 at 8:00 am


Cristian Amitroaie is the Founder and CEO of AMIQ EDA, a company specializing in software tools for the semiconductor design and verification industry. He co-founded AMIQ in 2003 as a consulting services company and established the AMIQ EDA business unit in 2008 to productize internal tools as commercial solutions. These tools, including DVT IDE and Verissimo SystemVerilog Linter, are used by engineers to improve coding quality and efficiency in hardware design and verification.

Can you tell us a little bit about your company?

Since 2008, we have been providing innovative software tools to benefit both designers and verification engineers. We help our users increase the speed and quality of new code development while implementing best practices. We make it easier to maintain legacy code, accelerate language and methodology learning, simplify debugging, and automatically generate accurate documentation. Our users say they can’t imagine coding without our tools.

What was the most exciting high point of 2025 for your company?

When we introduced our AI Assistant in our DVT IDE family back in 2024, we embarked on a journey to find appropriate ways to leverage AI for the benefit of our users. This past year has been quite exciting on this front. We’ve added new AI-based features to DVT IDE, leveraging all the design and testbench knowledge we’ve compiled from the user code. For example, AI Assistant can now insert context-aware code completion generated from any large language model (LLM) as the user types. AI Assistant also has the ability to auto-correct and explain compilation problems, saving even more time for users.

What was the biggest challenge your company faced in 2025?

The AI space is huge, and it’s challenging to identify the technologies that best apply to our products and can provide the most benefit to our users. It takes significant time and resources to add these new AI features, and we always wonder whether they will be used enough and provide sufficient value to be worth our investment. In addition, the AI space is evolving and expanding rapidly, so today’s state of the art will be outdated within a few months.

How is your company’s work addressing this biggest challenge?

We cast a wide net as we look for potentially useful AI technologies, and we bring in experts to guide us when appropriate. We also work closely with our most advanced users to validate AI features before we roll them out to our complete customer base. The good news is that this approach is working. We’ve seen rapid uptake of AI features among our users and a lot of enthusiasm for more capabilities going forward.

What do you think the biggest growth area for 2026 will be, and why?

Maybe it’s obvious at this point, but we certainly expect AI to become a bigger part of our product suite and to drive our business. AI Assistant will do more in DVT IDE, and we’re integrating more AI-enabled features to further help our users create and debug design and verification code. We’re also working on ways to provide access to our internal models for both IDE users and AI developers. In addition, we’re adding AI features to both Verissimo SystemVerilog Linter and Specador Documentation Generator as we speak.

How is your company’s work addressing this growth?

We just need to keep doing what’s working now: keep a close eye on AI technologies, call in experts as needed, and work with users to validate the value of new features. In addition, we use our DVT compilers and Verissimo to ensure that all code generated by all AI agents is correct.

Is AI affecting the way you develop your products?

Part of our constant monitoring of AI technologies includes looking for ways to help our internal development process. Like many programming teams, we’re finding that AI can suggest solutions to real problems and help us develop code more quickly. Using AI ourselves helps us to understand its strengths and limitations, plus think of more ways to help our users develop their own code.

What conferences did you attend in 2025 and how was the traffic?

Traffic at conferences and conventions remains strong. We attended and exhibited at our usual three in-person events: the Design Automation Conference (DAC) and the Design and Verification Conference (DVCon) in the U.S., as well as DVCon Europe. Our international distributors also represent us at local events, including SemIsrael and workshops in Japan.

Will you attend conferences in 2026? Same or more?

We love it when our users stop by our booth to say hello and of course we always meet some new potential users as well. So we will be attending all the same events again this year.

Additional questions or final comments?

2025 was another year of growth and success for AMIQ EDA, and we anticipate the same for this year. Thank you for the chance to discuss the status of our company and our industry. We’ll continue posting on SemiWiki to keep you up to date on our progress.

Contact AMIQ EDA

Also Read:

Runtime Elaboration of UVM Verification Code

Better Automatic Generation of Documentation from RTL Code

AMIQ EDA at the 2025 Design Automation Conference #62DAC


Pushing the Packed SIMD Extension Over the Line: An Update on the Progress of Key RISC-V Extension

by Daniel Nenni on 01-20-2026 at 6:00 am


The rapid growth of signal processing workloads in embedded, mobile, and edge computing systems has intensified the need for efficient, low-latency computation. Rich Fuhler’s update on the RISC-V Packed SIMD extension highlights why scalar SIMD digital signal processing (DSP) instructions are becoming a critical architectural feature and how the RISC-V ecosystem is moving closer to standardizing and deploying them at scale.

Packed SIMD, sometimes referred to as scalar SIMD, occupies a middle ground between purely scalar execution and full vector or GPU-style parallelism. Rather than operating on long vectors, packed SIMD instructions perform the same operation on multiple narrow data elements packed into a single scalar register. This approach is particularly effective for DSP-heavy workloads such as audio codecs, image processing, and communications algorithms, where operations like saturated arithmetic, multiply-accumulate (MAC), and bit manipulation dominate execution profiles.
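As a rough software model (plain Python, not actual RISC-V P-extension code; the function name is ours), here is what a packed, lane-wise saturating add on four 8-bit elements inside one 32-bit scalar register does. Saturating arithmetic of exactly this kind is one of the DSP operations the extension targets:

```python
def packed_add_u8(a: int, b: int) -> int:
    """Lane-wise saturating add of four unsigned 8-bit elements packed
    into 32-bit scalar values -- a software model of a packed-SIMD add."""
    result = 0
    for lane in range(4):
        shift = lane * 8
        x = (a >> shift) & 0xFF
        y = (b >> shift) & 0xFF
        result |= min(x + y, 0xFF) << shift   # saturate instead of wrapping
    return result
```

In hardware, one packed-SIMD instruction performs all four lane additions at once, which is where the instruction-count and cycle savings come from.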

One of the primary motivations for packed SIMD instructions is their suitability for latency-sensitive and deterministic workloads. Many DSP applications must meet strict real-time deadlines and cannot tolerate the overhead or nondeterminism associated with offloading computation to GPUs or wide vector units. Scalar SIMD instructions reduce instruction count and execution cycles while remaining tightly integrated into the scalar pipeline, enabling predictable timing behavior that is essential for real-time systems such as audio processing chains or control loops in industrial applications.

Power and silicon area efficiency are equally important drivers. In embedded and IoT devices, full SIMD or vector units often impose prohibitive costs in terms of energy consumption and die area. The presentation highlights a striking comparison from Andes Technology: a vector extension with two vector processing units can require roughly 850K logic gates, whereas the packed SIMD extension can be implemented in approximately 80K gates. This order-of-magnitude difference makes packed SIMD an attractive solution for designers who need higher performance than scalar code can deliver but cannot afford the overhead of full vector hardware.

As a result, a wide range of markets stand to benefit from the standardization of packed SIMD in RISC-V. These include mobile and edge AI, automotive and industrial IoT, consumer electronics, communications infrastructure such as 5G and satellite systems, and even microcontroller-class devices. In all of these domains, workloads frequently involve fixed-point arithmetic and repetitive DSP kernels that map naturally to packed SIMD operations.

From a standardization perspective, the Packed SIMD extension has reached an important consolidation phase. Instruction definitions that were previously scattered across multiple documents are being combined, with the majority now captured in the v0.92 draft of the specification, albeit with some renaming. New architectural tests have been written, and discussions are ongoing with the Architecture Review Committee to finalize instruction layout and formatting before formal review. An AsciiDoc version of the specification is expected to be published to GitHub, signaling increasing maturity and openness of the extension.

Toolchain support is also progressing rapidly. Updates for GCC, LLVM, and binutils-gdb have already been pushed upstream, ensuring that compiler and debugger ecosystems can take advantage of packed SIMD instructions. Work on C and C++ intrinsic functions is underway, which will make it easier for application developers to explicitly leverage the extension without resorting to hand-written assembly. In addition, architectural models and compliance tools such as SAIL, ACTs, and RISCOF are being prepared for public availability, alongside simulators like QEMU and Spike.

Bottom line: Benchmarking results presented using the Andes D23 core demonstrate substantial performance gains across a wide range of audio codecs and DSP workloads when packed SIMD is enabled, compared to configurations without DSP support. These results reinforce the extension’s practical value and underline why pushing the Packed SIMD extension “over the line” is a key milestone for the RISC-V ecosystem.

Also Read:

RISC-V: Powering the Era of Intelligent General Computing

Navigating SoC Tradeoffs from IP to Ecosystem

S2C, MachineWare, and Andes Introduce RISC-V Co-Emulation Solution to Accelerate Chip Development


2026 Outlook with Richard Hegberg of Caspia Technologies

by Daniel Nenni on 01-19-2026 at 10:00 am

Richard Hegberg

Tell us a little bit about yourself and your company


I’m Rick Hegberg and I’ve been CEO of Caspia Technologies since 2024. I have a deep semiconductor background, including CEO roles at three semiconductor start-ups and executive roles at SanDisk/WD, Qualcomm, Atheros, Numonyx/Micron, ATI/AMD, and VLSI Technology.

Throughout my career I’ve worked with the global semiconductor supply chain to solve many challenges, both technical and otherwise. The recent rise in hardware-focused cyberattacks, fueled by the growing use of AI, caught my attention.

Something needed to be done to protect the security of the hardware root of trust and I felt this could only be accomplished with a ground-up approach to chip design that incorporated rigorous hardware security verification and validation. This is the mission of Caspia Technologies. I felt the company was developing a truly holistic and effective approach to ensure superior security for future chip designs and so I joined them to help take the vision to the next level.

What was the most exciting high point of 2025 for your company? 

We had many breakthrough events over the course of the year. Part of that was discovering real security flaws in popular open-source designs. There is a lot of work ahead for the entire industry to tighten hardware security.

I would say the most memorable event for me was the introduction of our first product, CODAx, which performs static security analysis on early RTL designs. The idea is to find and fix weak security practices early, before they lead to potentially catastrophic security breaches in the field. Think of CODAx as a static “linting” tool that is specifically focused on secure design practices. This is the tool we used to find the weaknesses in the open-source designs I mentioned.

What was the biggest challenge your company faced in 2025?

In a word, education. We have found a wide range of awareness of the risks and pervasiveness of hardware attacks across the semiconductor supply chain. Some companies are at the forefront of addressing these problems, but many have yet to see the breadth of the problem and prioritize an approach to address it.

Related to this is a discussion we’ve had regarding the difference between security IP and secure IP. While using a commercially available hardware root of trust is a good idea, this by itself does not assure your design will be secure. The attacks that are developing are very sophisticated, and protecting against them requires a broader perspective on the problem.

You can check out a recent blog post from Caspia on this topic.

How is your company’s work addressing this challenge?  

Caspia simplifies the adoption of robust security for the enterprise. This includes tools that easily fit into existing EDA flows, well-developed physical assurance methodologies, and training and curriculum development for hardware security.

Regarding tools, Caspia is currently working on a platform that contains three products:

  • CODAx is the static security checker I mentioned. It contains over 150 security rules that are constantly updated with trained security large language models (LLMs).
  • SVx tunes formal verification to look for security robustness with AI-generated security-focused assertions.
  • PFx facilitates dynamic security validation of completed designs using AI to harness existing co-simulation and emulation technologies.

What do you think the biggest growth area for 2026 will be, and why?

The semiconductor industry is undergoing a major shift to harden the security of chip designs. Caspia will play a key role in that shift with our security platform, methodologies, and training.

All three of the products I mentioned will move to mainstream deployment in 2026 and this will fuel significant growth for the company.

How is your company’s work addressing this growth?

I’ve already described the product pipeline we have and the expected impact that will have on our growth. Beyond that, we are working with several large players in the industry to facilitate easier adoption of our robust security technology.

Expect more information about this work in the months ahead.

Are you incorporating AI into your products? / Is AI affecting the way you develop your products?

The answer is Yes to both. Cyberattacks are continually enhanced with new AI approaches. That demands security enhancements that are also AI-driven, and Caspia has made extensive use of AI technologies to meet this challenge.

We have access to the world’s largest security threat databases. Caspia has helped to build some of them. We use the previously mentioned LLMs to continually analyze these threats to enhance our tools. Caspia is also developing a growing array of AI agents to identify threats and design weaknesses and take corrective action.  

Agentic security verification/hardening is clearly the way forward. One of our founders, Dr. Mark Tehranipoor, recently did a podcast with the DAC folks that provides some good perspectives on these topics. You can listen to it here.

How do customers normally engage with your company?

We will be attending a growing number of events in 2026. For example, we are a Corporate Sponsor at the IEEE International Symposium on Hardware Oriented Security and Trust (HOST), and we will likely be presenting at conferences such as GOMAC and DAC. You can find us at these events, and you can also reach out to us at our website here. We’d be happy to explore how we can help.

Additional comments? 

Security hardening for chip design is no longer an option; it is a requirement for continued success and growth in the market. Caspia is ready to show you the way forward with well-developed methodologies, training, and easy-to-adopt tools. Let us help you secure the future of your next design.

WEBINAR: Why AI-Assisted Security Verification For Chip Design is So Important

Also Read:

A Six-Minute Journey to Secure Chip Design with Caspia

Caspia Focuses Security Requirements at DAC

CEO Interview with Richard Hegberg of Caspia Technologies


Siemens EDA Illuminates the Complexity of PCB Design

by Mike Gianfagna on 01-19-2026 at 6:00 am


As heterogeneous multi-die design becomes more prevalent, the focus on advanced analysis has predictably shifted in that direction. While these challenges are important to overcome, we shouldn’t lose sight of how complete systems are built. Short and long reach communication channels, system-level power management and the all-important PCB are still fundamental building blocks for every complex system.

Siemens Digital Industries Software takes a broad and holistic view of system design, and a recent white paper is a great example of that perspective at work. The paper is titled How long is that trace? and it illustrates the complexity of PCB analysis and why it’s so important to get it right. If you are engaged in delivering complex systems, this white paper provides important information to ensure a successful project. A download link is coming but first let’s examine some of the topics covered when Siemens EDA illuminates the complexity of PCB design.

Getting it Right – Signal Analysis

Measuring and matching propagation delay for complex signal traces is both critical for performance and quite challenging to accomplish.  The white paper points out that:

To match the signal propagation time of two traces, PCB designers make the length of the two traces match down to a few thousandths of an inch (mils). While this is a good place to start, other factors influence the delay of the signal.

The impact of high-frequency signals and vias on propagation delay is discussed in some detail. The piece explains, for example, how to use phase angle to calculate trace delay. The question is posed:

Since different frequencies propagate at different speeds, how does that speed difference affect a digital signal that is not a sine wave?

Fourier analysis is used to show how digital signals containing high frequency components are affected by the interconnect. The relationship of magnitude and phase is discussed across a spectrum of the harmonic frequencies of the signals involved.  The figure below is an example of a plot to examine the composition of a digital signal. There is a lot more to getting this right than you may think. This white paper does a great job explaining what’s involved.

A digital signal and the harmonics that create the edge rate
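The two calculations described above can be sketched in plain Python. This is an illustration of the general math, not code from the white paper, and the function names are ours: a phase shift at a known frequency converts directly to a time delay, and summing odd harmonics with the right amplitude and phase reconstructs a digital edge.

```python
import math

def delay_from_phase(phase_deg: float, freq_hz: float) -> float:
    """Convert a measured phase shift at one frequency into a time delay:
    a full 360 degrees corresponds to one period of that frequency."""
    return (phase_deg / 360.0) / freq_hz

def square_wave_sample(t: float, f0: float, n_harmonics: int) -> float:
    """Fourier-series approximation of an ideal square wave: the odd
    harmonics must combine with the right amplitude (1/k) and phase
    to reproduce the sharp edge."""
    s = 0.0
    for k in range(1, 2 * n_harmonics, 2):   # odd harmonics 1, 3, 5, ...
        s += math.sin(2 * math.pi * k * f0 * t) / k
    return 4.0 / math.pi * s
```

For instance, a 90-degree phase shift at 1 GHz corresponds to 250 ps of delay, and keeping more harmonics in the sum produces a sharper reconstructed edge — which is why an interconnect that attenuates or shifts those harmonics distorts the edge rate.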

Getting it Right – Via Design

The white paper also discusses how vias impact the edge rate and thus the trace delay. The piece explains an important point related to this issue:

If vias passed all frequencies of a signal equally, their impact would not be as significant. But vias impact some frequencies more than others, so via characteristics also affect signal delay.

There is a lot of rich and relevant detail presented regarding how via design impacts trace delay. Slightly different via geometries are analyzed in detail. It turns out that via geometry can have a significant and non-intuitive impact on overall trace delay and thus overall system performance.

Again, frequency analysis and harmonics play a role in finding the right answers. The impact of various via return paths is also examined. The detail presented will get your attention.

To Learn More

After reading this white paper you will realize that copper length is not the only factor impacting the performance of a PCB trace. It is pointed out that vias have an inherent delay due to their span, but other characteristics add delay and distortion to the signal. The bottom line is the time from when a signal rises above the switching threshold at the driver to when it crosses the switching threshold at the receiver.

Edge rate is key. The piece points out that the signal edge is composed of a fundamental sine wave and multiple higher-frequency harmonics, all of which must have a certain amplitude and phase to reproduce the signal. When you want to know what the final performance will be, a simulator is the best tool for the job. To learn more about this important system-level analysis and optimization process, download your copy of the Siemens Digital Industries white paper here. And that’s how Siemens EDA illuminates the complexity of PCB design.

Also Read:

Siemens and NVIDIA Expand Partnership to Build the Industrial AI Operating System

Automotive Digital Twins Out of The Box and Real Time with PAVE360

Addressing Silent Data Corruption (SDC) with In-System Embedded Deterministic Testing


Accelerating Advanced FPGA-Based SoC Prototyping With S2C

by Daniel Nenni on 01-18-2026 at 12:00 pm

Prodigy S8 100 Logic Systems

Having spent a significant amount of my career in EDA and IP, I can tell you firsthand how important picking the right prototyping partner is. I have known S2C since my interview with CEO Toshio Nakama in 2017. It has been a pleasure working with them, and I look forward to seeing an S2C update at DVCon the first week of March here in Silicon Valley. Specifically, I am looking forward to seeing the new Prodigy Logic System.

The Prodigy S8-100 Logic System-VP1902 is a high-performance FPGA-based prototyping platform designed to accelerate advanced System-on-Chip (SoC) and ASIC development in demanding applications such as AI, high-performance computing (HPC), networking, and RISC-V processor design. Built and marketed by S2C Inc., a leader in FPGA prototyping solutions, the S8-100 series harnesses the latest AMD Versal™ Premium VP1902 adaptive SoC as its core building block. This integration enables developers to prototype large-scale digital logic designs with unprecedented gate capacity and I/O flexibility compared to previous generations of prototyping systems.

At its core, the Prodigy S8-100 platform is about bridging the gap between RTL design and full hardware validation before production silicon is available. FPGA prototyping has become essential because software development, validation, and system-level debugging often cannot wait for the final ASIC to be manufactured. The S8-100’s modular architecture enables hardware teams to test, refine, and validate entire SoCs—right down to peripheral interfaces—on reconfigurable hardware. Unlike simple simulation environments, this FPGA-based approach allows real execution of logic under real timing conditions, enabling much earlier detection of integration bugs and performance bottlenecks.

A defining feature of the Prodigy S8-100 is its massive logic capacity, with each VP1902 FPGA supporting up to 100 million ASIC equivalent gates.

Systems can be configured in three variants.

In multi-FPGA configurations, the total effective capacity scales up to 400 million gates, providing headroom for extremely complex designs that incorporate multiple cores, accelerators, memory hierarchies, and communication fabrics.

Resource-wise, the S8-100 offers rich internal capabilities including tens of thousands of logic cells, megabits of on-chip RAM, and thousands of DSP slices per FPGA. It also boasts advanced I/O support with high-speed transceivers and contemporary interface standards such as PCIe Gen5, enabling real-world connectivity with host systems and other devices. The result is a prototyping system capable of both high throughput and real-world system integration testing.

Beyond raw hardware, the Prodigy S8-100 ecosystem includes a suite of productivity tools to streamline prototyping workflows. S2C’s software, including PlayerPro-CT for partitioning and ProtoBridge for co-simulation, helps automate complex multi-FPGA design partitioning and bitstream generation. An extensive catalog of “Prototype Ready IP” daughter cards further expands the platform’s usability, offering pre-validated interface modules (for memory, Ethernet, GPIO, and more) that plug into the system without consuming valuable FPGA logic. These tools together reduce setup time, simplify board bring-up, and allow teams to concentrate on verification and software development instead of hardware plumbing.

The Prodigy S8-100 is also gaining traction in emerging markets such as RISC-V SoC development, where developers need to validate not just CPU cores but entire subsystems that include custom extensions and accelerators. In a recent collaboration with Andes Technology, the S8-100 platform has been used to prototype advanced RISC-V designs with custom instructions and high-bandwidth interfaces, demonstrating its value in next-generation CPU and SoC workflows.

Bottom line: The Prodigy S8-100 Logic System-VP1902 represents a state-of-the-art prototyping solution that addresses the challenges of modern digital design: huge logic capacity, flexible I/O, scalable configurations, and robust toolchains. For semiconductor developers working on cutting-edge chips—from AI accelerators to custom processors—platforms like the S8-100 make it possible to validate complex designs thoroughly, accelerate software readiness, and reduce the risk associated with first-silicon prototypes. As design complexity continues to grow, FPGA-based prototyping systems like the Prodigy S8-100 will remain essential tools in the semiconductor development cycle.

 

REQUEST A QUOTE

Also Read:

S2C, MachineWare, and Andes Introduce RISC-V Co-Emulation Solution to Accelerate Chip Development

FPGA Prototyping in Practice: Addressing Peripheral Connectivity Challenges

S2C Advances RISC-V Ecosystem, Accelerating Innovation at 2025 Summit China


CEO Interview with Moshe Tanach of NeuReality

by Daniel Nenni on 01-16-2026 at 2:00 pm

Moshe Tanach

Moshe Tanach is co-founder and CEO of NeuReality. Prior to founding the company, he held senior engineering leadership roles at Marvell and Intel, where he led complex wireless and networking products from architecture through mass production. He also served as AVP of R&D at DesignArt Networks (later acquired by Qualcomm), where he led development of 4G base station technologies.

Tell us about your company. What problems are you solving?

NeuReality was established by industry veterans from Nvidia-Mellanox, Intel, and Marvell, united by a vision to transform datacenter infrastructure for the AI era. As computational focus shifts from CPUs to GPUs and specialized AI processors, we recognized that general-purpose legacy CPU and NIC architectures had become bottlenecks, limiting high-end GPU performance and efficiency. Our mission is to redefine these system components, prioritizing efficiency and cost-effectiveness for next-generation AI infrastructure.

We address a critical challenge in today’s AI datacenters — underutilized GPUs idling while waiting for data. Whether in distributed training of large language models or in disaggregated inference pipelines, the network connecting these GPUs is increasingly vital both in bandwidth and latency. The core challenge is to move large volumes of data between GPUs instantly, enabling continuous computation. Failure to do so results in significant cost inefficiencies and undermines the profitability of AI applications.

Our purpose-built heterogeneous compute architecture, advanced AI networking, and software-first philosophy led to the launch of our NR1 product. NR1 integrates an embedded AI-NIC and is delivered with comprehensive inference-serving and networking software stacks. These are natively integrated with MLOps, orchestration tools, AI frameworks, and xCCL libraries, ensuring rapid innovation and optimal GPU utilization. We are now developing our second generation of products, starting with the NR2 AI-SuperNIC, which focuses exclusively on GPU-direct, east-west communication for large-scale AI factories.

What is the biggest pain point that you are solving today for customers?

The paradigm in datacenter design has shifted from optimizing individual server nodes to architecting entire server racks and clusters, scaling up to hundreds or thousands of GPUs. The interconnect between these GPU nodes and racks must match the performance of in-node connectivity, delivering maximum bandwidth and minimal latency.

Our customers’ primary pain point is that current networking solutions, such as those from Nvidia and Broadcom, are neither wide nor fast enough, resulting in wasted GPU resources and increased operational costs due to power inefficiencies. To address this, we developed the NR2 AI-SuperNIC, purpose-built for scale-out AI systems. Free from legacy constraints, NR2 offers 1.6Tbps bandwidth, sub-500ns latency, and native support for GPU-direct interfaces over RoCE and UET. A flexible control plane and full hardware offload to the data plane supports all distributed collective libraries, orchestration, and MLOps protocols. By eliminating unnecessary overhead, NR2 achieves industry-leading power efficiency, a critical advantage as the number of NIC ports and wire speeds continue to rise.

Once you secure the best GPUs and XPUs for AI, network performance and integration into AI workflows becomes the ultimate differentiator for AI datacenters and multi-site “AI brains.”

What keeps your customers up at night?

Our customers are focused on three core challenges:

  • Maximizing the ROI of their GPU investments
  • Managing AI infrastructure growth in a cost-effective, sustainable manner
  • Avoiding lock-in to proprietary, closed solutions

From the outset, we addressed these concerns with a software-first, open-standards approach. This gives customers the flexibility to mix accelerators, adapt architectures, and scale without overhauling their entire system, while leveraging the power of developer communities. Customers recognize that superior hardware alone is insufficient. Robust, open software that leverages community-driven innovation and supports new algorithms and deployment models is essential to unlocking the full value of their infrastructure. Our software-first strategy has earned significant trust and respect from customers using our NR1 AI-NIC with our Inference Serving Stack (NR-ISS) and our Scale-out Networking Stack (NR-SONS), and from those preparing to adopt the NR2 AI-SuperNIC.

What does the competitive landscape look like and how do you differentiate? What new features/technology are you working on?

The competitive landscape is dominated by general-purpose NIC products: Nvidia’s ConnectX and Broadcom’s Thor. While these solutions are advancing in bandwidth, their latency remains above 1 microsecond, which becomes a significant bottleneck as speeds increase to 800G and 1.6T. Hyperscalers and other leading customers are demanding faster, more efficient networking to pair with Nvidia GPUs and their own custom XPUs. Without such solutions, they are compelled to develop their own NICs to overcome current limitations, a long and complex undertaking.

NeuReality differentiates itself by delivering double the bandwidth and less than half the latency of competing products. We then deliver exclusive AI features in the core network engines, such as the packet processors and the hardened transport layers, and in the integrated system functions, such as the PCIe switch and peripheral interfaces.

We defined and designed the NR2 AI-SuperNIC die, package, and board in collaboration with market leaders to accommodate diverse system topologies. Features include:

  • Integrated UALink for high-performance in-node connectivity between CPUs and GPUs, bridging scale-up and scale-out networks
  • Embedded PCIe switch for flexible system architectures
  • xCCL acceleration for both mathematical and non-mathematical collectives, a unique capability
  • Exceptional power efficiency—2.5W per 100G, setting a new industry benchmark
  • Comprehensive, open-source software stack with native support for all major AI frameworks and libraries.
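Since the stated 2.5 W per 100G figure scales linearly with port bandwidth, a quick back-of-the-envelope check shows what it implies for a full NR2 port. This is our own illustrative arithmetic, not a NeuReality calculation, and the function name is hypothetical:

```python
def nic_power_watts(bandwidth_gbps: float, watts_per_100g: float = 2.5) -> float:
    """Scale a per-100G power-efficiency figure to a full port bandwidth."""
    return (bandwidth_gbps / 100.0) * watts_per_100g

# At 1.6 Tb/s (1600 Gb/s), 2.5 W per 100G works out to 40 W for the port.
```

As port counts and wire speeds climb across a rack, this per-100G figure is the number that gets multiplied hundreds of times over, which is why it matters as a differentiator.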

Looking at this table, you can clearly see the advantage of NR2 AI-SuperNIC compared to today’s solutions and to future roadmap solutions from our competition:

How do customers normally engage with your company?

We work directly with hyperscalers, neocloud customers, and enterprises, providing support both directly and through system integrators and OEMs. Our engineering team invests in understanding each customer’s unique needs, collaborating closely to deliver tailored solutions. Most customers approach us not simply seeking a new networking solution but aiming to maximize the value of their GPU investments.

Engagements often begin with proof-of-concept (POC) projects. With our NR1 AI-CPU product, we established a robust ecosystem of partners, channels, and lead customers to ensure early product validation and customer satisfaction. For NR2, we are inviting partners to join the AI-SuperNIC Partnership and validate interoperability with their hardware, software stacks, and communication libraries well before full-scale deployment.

What is next in the evolution of AI infrastructure?

Looking ahead, we anticipate two key trends will shape customer focus and industry direction.

First, as AI workloads become increasingly dynamic and distributed, customers will demand even greater flexibility and automation in their infrastructure. This will drive the adoption of intelligent orchestration platforms that can optimize resource allocation in real time, ensuring maximum efficiency and responsiveness across diverse environments. To me, it’s crystal clear that rack-scale design is not enough. Scale-out must evolve together with scale-up to support ease of deployment that is less dependent on the location of GPUs in the node, server, rack, or cluster of racks.

Second, we expect sustainability and energy efficiency to become central decision factors for enterprises building or using large-scale AI infrastructure. Organizations will seek solutions that not only deliver top tier performance but also minimize environmental impact and operational costs. As a result, power-efficient networking and hardware offload will become critical differentiators in the market.

CONTACT NEUREALITY

Also Read:

2026 Outlook with Paul Neil of Mach42

CEO Interview with Scott Bibaud of Atomera

CEO Interview with Rabin Sugumar of Akeana