
PQShield on Preparing for Q-Day

by Bernard Murphy on 01-22-2026 at 8:00 am


Following my series on quantum computing (QC), it is timely to look again at what is still the most prominent real-world concern around this technology: its ability to break classical methods for encryption and related security tasks. Given what I have written on the topic, an understandable counter would be that QC is still in development with long time horizons (2030-2040) before production use, so who cares? One challenge is that dates for Q-day (the popular term for when quantum hacking becomes real) are projections; we don’t really know how secret programs and innovations might accelerate the arrival of Q-day, either through brute-force attacks or through new quantum algorithms that work at lower qubit counts. It is clear, however, that the day will come.

Another challenge is that long-lifetime applications (cars, planes, finance, utilities, defense, …) built today without quantum defenses may still be in use past Q-day. For this reason, the NSA, supported by NIST and by European and Chinese regulatory bodies, is putting in place requirements that systems vulnerable to QC attack be phased out around 2030. At that point it really doesn’t matter how far out we think Q-day might be: non-compliant products will be shut out of major markets.

Post-quantum security

There are immediate concerns even before Q-day, which suggest we should pay urgent attention to post-quantum security. Hacker initiatives such as ‘Harvest Now, Decrypt Later’ are an immediate threat. A related threat, “Trust Now, Forge Later”, applies to trusted signature mechanisms for over-the-air updates. Bad actors are already collecting and storing encrypted data and signatures for later decryption. We can’t depend on a public announcement of Q-day. We’ll realize it has arrived only when multiple keys and signatures have already been compromised. A determined-enough adversary with deep enough pockets (maybe a nation-state) might pull this off even sooner than regulatory timelines.

Classical security techniques used in encryption, key exchange, and authorization depend on the difficulty of math problems such as factoring an integer formed as the product of two very large primes. We already know such techniques can be cracked easily on a sufficiently large quantum computer using Shor’s algorithm or related techniques. Post-quantum cryptography (PQC) offers a variety of options for quantum-resistant security. One I have looked at a bit more closely is lattice-based cryptography, built on a very cool bit of math on lattices. Importantly, these algorithms are generally more complex than their classical counterparts, requiring hardware assistance in performance-sensitive applications. (By the way, such algorithms are labeled quantum-resistant rather than invulnerable, since no one knows what future quantum algorithms might be invented.)
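To make the lattice idea a little more concrete, here is a toy Python sketch of encryption based on the Learning With Errors (LWE) problem, the hardness assumption behind lattice schemes such as the NIST-standardized ML-KEM (Kyber). The parameters are deliberately tiny and insecure; this illustrates the mechanics only, and is nothing like a production scheme.

```python
import random

# Toy Learning With Errors (LWE) encryption -- illustrative only.
# Real schemes (e.g. ML-KEM) use n=256 polynomials and carefully chosen noise.
q, n = 3329, 8
s = [random.randrange(q) for _ in range(n)]          # secret vector

def lwe_sample():
    """Public sample: (a, <a, s> + small error) mod q."""
    a = [random.randrange(q) for _ in range(n)]
    e = random.randint(-2, 2)                        # small noise term
    return a, (sum(x * y for x, y in zip(a, s)) + e) % q

public_key = [lwe_sample() for _ in range(16)]

def encrypt(bit):
    """Sum a random subset of samples; encode the bit in the high half of q."""
    subset = random.sample(public_key, 4)
    ca = [sum(a[i] for a, _ in subset) % q for i in range(n)]
    cb = (sum(b for _, b in subset) + bit * (q // 2)) % q
    return ca, cb

def decrypt(ct):
    ca, cb = ct
    v = (cb - sum(x * y for x, y in zip(ca, s))) % q
    # Accumulated noise is at most 8, far below q/4, so v sits near 0
    # for bit 0 and near q/2 for bit 1.
    return 1 if q // 4 < v < 3 * q // 4 else 0

assert all(decrypt(encrypt(b)) == b for b in (0, 1, 1, 0))
```

Security rests on the belief that recovering s from the noisy samples, or distinguishing ciphertexts from random, is hard even for a quantum computer. Note also why hardware assistance matters: the real schemes replace these small vectors with large polynomial arithmetic that benefits greatly from acceleration.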

Post-quantum security support must provide secure boot, secure device authentication, and secure channels. The reference standard in the US is now the NSA’s Commercial National Security Algorithm suite (CNSA) 2.0. This defines a set of algorithms, standardized through NIST, to address different use cases. (NIST doesn’t develop the algorithms itself; it stages bake-offs between algorithms proposed by commercial and other providers. PQShield is one of the contributors to these contests.)

Sebastien Riou (Fellow, Product Security Architecture at PQShield) has hosted a webinar digging into options for securing each aspect: secure boot with fault-injection protection, device authentication with side-channel protection, and secure channels (again considering side-channel protection). There is a lot of good information here for weighing application-specific tradeoffs.

Partnerships with Microchip and Collins Aerospace

PQShield has partnered with Microchip on their PolarFire SoC FPGAs. The webinar highlights a range of products applicable in this context. PQShield’s MicroLib Core library is a bare-metal (software-only) PQC library with an option to support side-channel protection. A second level provides platform security with hardware IP, including an AES accelerator, side-channel protection, and configurable hardware-based PQC acceleration. The third level offers maximum hardware-accelerated performance with AES and lattice-based PQC, all configurable as a high-throughput peripheral to serve high-bandwidth and/or multi-tasking objectives.

Another partner, Collins Aerospace, is collaborating with PQShield on a proof-of-concept integration of post-quantum cryptography solutions. As evidence of PQShield’s credibility in this space, their hybrid cryptographic library is undergoing validation for FIPS 140-3, the mandatory standard for protection of sensitive data within U.S. and Canadian federal systems. The hybrid library supports classical cryptography alongside PQC, beneficial for OEMs who want to manage a smooth transition between the two.
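As a sketch of why a hybrid library is attractive during the transition: one common pattern (an illustrative construction, not necessarily the one PQShield implements) is to run a classical key exchange and a PQC key encapsulation in parallel, then bind both shared secrets into the session key with a KDF, so an attacker must break both mechanisms to recover the key.

```python
import hashlib

def combine_shared_secrets(classical_ss: bytes, pqc_ss: bytes,
                           context: bytes = b"hybrid-kdf-demo") -> bytes:
    """Derive one session key from a classical (e.g. ECDH) shared secret
    and a PQC (e.g. ML-KEM) shared secret. Compromise of either mechanism
    alone does not reveal the derived key. (Hypothetical helper; real
    protocols use a standardized KDF such as HKDF with proper labels.)"""
    return hashlib.sha256(context + classical_ss + pqc_ss).digest()

# Both parties compute the same key from the two agreed secrets.
key = combine_shared_secrets(b"\x01" * 32, b"\x02" * 32)
assert len(key) == 32
```

The appeal for OEMs is that products can ship with this belt-and-braces construction today and later drop the classical leg once PQC confidence and certification mature.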

What stands out for me is that semiconductor and system enterprises working in defense, space, automotive, and avionics are already preparing for post-quantum readiness, as are credit card companies (which face enormous liabilities if they are hacked; see an earlier post of mine). It’s looking like wait-and-see on post-quantum will be a difficult position to defend, in markets and in boardrooms.

Very informative webinar. You can register to watch HERE.

Also Read:

The Quantum Threat: Why Industrial Control Systems Must Be Ready and How PQShield Is Leading the Defense

Think Quantum Computing is Hype? Mastercard Begs to Disagree

Podcast EP304: PQC Standards One Year On: The Semiconductor Industry’s Next Move


2026 Outlook with Ying J Chen of S2C

by Daniel Nenni on 01-22-2026 at 6:00 am

Ying J Chen of S2C

I’m Ying J Chen, VP of S2C. S2C is a leading global supplier of FPGA prototyping solutions for advanced SoC and ASIC designs, holding the second largest share of the global prototyping market. Founded in 2003, the company has supported more than 600 customers, including 11 of the top 25 semiconductor companies worldwide, with teams and operations across the U.S., Asia, Europe, and ANZ.

What was the most exciting high point of 2025 for your company? 

One of the most exciting highlights of 2025 was seeing our long-term RISC-V investments translate into concrete, system-level results through close collaboration with ecosystem partners.

Working with the Beijing Open Source Chip Research Institute (BOSC), we completed key system validation of multiple generations of the Kunminghu RISC-V processors on our Prodigy S8-100 Logic System. The dual-core Kunminghu V2 RISC-V processor successfully booted GUI-based OpenEuler 24.03 at 50 MHz on S2C’s Prodigy S8-100 Logic System, running real applications such as LibreOffice and the classic DOOM game. We also validated BOSC’s third-generation 16-core Kunminghu processor with a NoC interconnect on two S8-100Q Logic Systems (each with four VP1902 FPGAs), achieving stable timing closure at 13.3 MHz and demonstrating scalability for more complex designs.

In parallel, we demonstrated Andes Technology’s AX45MPV vector processor IP running a live large language model on our Prodigy S8-100 Logic System. Together, these milestones highlighted our ability to support high-performance, multi-core RISC-V systems with real software workloads, reinforcing S2C’s role as a trusted prototyping partner in the RISC-V ecosystem.

What was the biggest challenge your company faced in 2025?

The biggest challenge in 2025 was managing rapid growth in design scale and system complexity, especially for AI-focused SoCs. Customers increasingly need high capacity, fast execution, and deep debug visibility at the same time. Verification has moved well beyond RTL correctness—teams are bringing up full systems with complex software stacks. This puts pressure on infrastructure while increasing sensitivity to cost, deployment effort, and workflow continuity. Tighter schedules and market uncertainty further pushed customers to look for platforms that can adapt across different development phases.

How is your company’s work addressing this challenge? 

A key issue we see is the trade-off between execution speed and debug depth. Traditionally, prototyping and emulation are handled by separate systems, which adds cost and slows iteration.

Our response has been to rethink how these needs are supported across a project’s lifecycle. With OmniDrive, we’re exploring a dual-mode approach built on a shared hardware foundation, allowing teams to use the same platform for fast software bring-up or deeper debug, depending on the stage of development. While the modes aren’t interchangeable at runtime, this approach helps reduce duplicated infrastructure and improve overall price-performance.

This direction is being refined through close collaboration with early customers, where we’re validating that it holds up in real engineering environments, both technically and economically.

What do you think the biggest growth area for 2026 will be, and why? How is your company’s work addressing this growth? 

We see RISC‑V and AI‑silicon as the key growth drivers in 2026. RISC‑V is moving into broader, more customized deployments, while AI workloads are drastically increasing design scale and complexity.

To support this, we provide a verification infrastructure built for scale. Our RTL Compile Flow (RCF) and Incremental Compile Flow (ICF) efficiently handle very large designs and are proven in multi‑FPGA deployments, shifting verification left.

In 2026, we will promote OmniDrive, our next‑generation emulation system. Based on the latest FPGA architecture, its dual‑mode design supports both emulation and prototyping, enabling high‑speed verification as well as fine‑grained debugging. This significantly reduces customers’ hardware investment and total cost.

Together, RCF, ICF, and OmniDrive offer a smooth progression from bring‑up to full‑system validation, helping customers scale while controlling both technical risk and infrastructure cost.

What conferences did you attend in 2025 and how was the traffic?

In 2025, we were selective about which conferences we attended, based on how much value each delivered. We’ve participated in DAC for many years, but this was the first year we chose not to attend. For us, DAC has become less effective in terms of qualified traffic and meaningful technical engagement, especially given how our customer base and focus areas have evolved.

In contrast, RISC-V–focused conferences, including Andes’ RISC-V CON and related ecosystem events, generated much more relevant and application-specific interaction. The audience there was closely aligned with what we’re working on—real systems, software bring-up, and system-level verification—so the conversations were more relevant and more actionable.

We also saw solid results from DVCon events across different regions. While traffic varied by location, the overall quality was strong, particularly among verification engineers and technical decision-makers. These events continue to be valuable for in-depth technical discussions rather than broad marketing exposure.

Overall, in 2025 we saw a clear shift away from large, general-purpose conferences toward more focused, domain-specific events. For us, relevance and engagement mattered far more than raw foot traffic, and that’s increasingly guiding where we invest our time.

Will you participate in conferences in 2026? Same or more as 2025?

We expect to remain active in industry conferences in 2026, at a similar level or slightly higher than in 2025. Our focus will continue to be on technically driven events that support deeper engineering discussion.

As SoC designs continue to scale up in size and complexity, we’ll place more emphasis on demonstrating how OmniDrive, together with our partitioning and RTL compile technologies, helps teams handle larger designs more efficiently. Rather than broad visibility, we prioritize venues where we can show practical system capability and engage with customers facing real scale and integration challenges.

How do customers normally engage with your company?

Customers usually first connect with us through events, press announcements, media coverage, advertising, or organic search. These initial touchpoints often spark technical conversations, such as system capacity, partitioning, or full-system bring-up challenges. From there, we work closely with engineering teams through evaluations, demos, and proof-of-concept projects. Given the scale and complexity of modern SoCs, these relationships tend to be long-term and collaborative, often spanning multiple design iterations. This approach ensures our engagements are practical, technically meaningful, and directly tied to customer needs.

Are you incorporating AI into your products? Is AI affecting the way you develop your products?

Yes, AI is starting to influence both our products and how we engage with customers. We’ve begun applying AI-driven techniques within our tool flows to help shorten compile time, reduce verification cycles, and improve performance efficiency in customer deployments.

At the same time, we’re preparing an AI-enabled knowledge base based on large models to support faster, more consistent technical guidance. While this capability is still being refined, the goal is to improve support efficiency without overcomplicating the engineering workflow.

Overall, we see AI as a practical enabler—used selectively to accelerate development and improve the customer experience in complex SoC and AI-driven designs.

Also Read:

S2C, MachineWare, and Andes Introduce RISC-V Co-Emulation Solution to Accelerate Chip Development

FPGA Prototyping in Practice: Addressing Peripheral Connectivity Challenges

S2C Advances RISC-V Ecosystem, Accelerating Innovation at 2025 Summit China

Double SoC prototyping performance with S2C’s VP1902-based S8-100


Arteris Smart NoC Automation: Accelerating AI-Ready SoC Design in the Era of Chiplets

by Daniel Nenni on 01-22-2026 at 6:00 am

Presentation from the 2025 SemiIsrael Expo

As semiconductor design pushes into increasingly complex territory, driven by AI, ML, HPC, and heterogeneous system architectures, designers are challenged to balance performance, power, and time-to-market pressures. In this landscape, network-on-chip (NoC) architectures have emerged as a foundational building block for modern SoC interconnects, replacing traditional bus-based approaches to support scalable, high-bandwidth communication among numerous IP blocks. But designing an efficient NoC manually for an AI-ready SoC, especially with chiplet-based partitioning, quickly becomes a bottleneck. This is where Arteris Smart NoC Automation plays a transformative role.

Arteris Smart NoC Automation is a suite of tools and methodologies that automates the creation, optimization, and integration of NoC fabrics into SoC designs. Unlike conventional interconnect design flows that require extensive manual intervention, Smart NoC Automation uses intelligent, algorithm-driven processes to generate an interconnect tailored to the specific performance, throughput, latency, and area requirements of a given design. The result is a highly-optimized NoC that accelerates the path from specification to silicon, while ensuring that even the most demanding workloads, such as those driven by AI accelerators, are supported effectively.

At its core, Smart NoC Automation addresses the complexity of heterogeneous data flows within AI-ready SoCs. Modern AI workloads typically involve a diverse set of processing elements: CPUs, GPUs, AI accelerators, ISPs, and various memory subsystems (e.g., DDR, HBM). The communication patterns among these blocks are non-uniform and dynamic, requiring a flexible interconnect that can adapt to high-bandwidth data paths and low-latency control flows. Manual NoC design often results in over-provisioning (leading to wasted silicon) or under-provisioning (causing performance bottlenecks). Smart NoC Automation takes a data-driven approach, analyzing the specific traffic requirements of each block and producing a balanced network topology that meets performance goals with minimal overhead.

The transition to chiplet-based architectures further underscores the need for automated NoC design. Chiplets, modular pre-validated silicon blocks integrated into a system package using advanced interconnects (e.g., silicon interposers, organic substrates), enable designers to mix and match IP from different process nodes and vendors. While chiplets provide benefits such as improved yield and shorter development cycles, they complicate system integration: each chiplet may have distinct interface protocols, clock domains, and bandwidth profiles. Coordinating communication across chiplets demands a NoC fabric that can seamlessly bridge intra- and inter-chiplet domains, while maintaining coherence and meeting stringent latency constraints. Smart NoC Automation accelerates this by generating networks that are chiplet-aware, capable of mapping internal NoC segments to external interconnects with minimal designer effort.

Another key advantage of Smart NoC Automation is its ability to incorporate quality-of-service policies directly into the network fabric. AI workloads often have mixed-criticality traffic: real-time sensor data, high-volume tensor data streams, and control-plane signals all sharing the same network. Without QoS mechanisms, critical traffic can be delayed by bulk transfers, degrading performance and predictability. Automated NoC synthesis can embed traffic shaping, priority scheduling, and bandwidth reservation into the fabric, ensuring that performance-critical AI functions maintain determinism even under heavy load.
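The scheduling idea at the heart of QoS is simple to illustrate. Below is a hypothetical, much-simplified Python model of a strict-priority arbiter of the kind a NoC fabric might embed at a router output port (real fabrics add virtual channels, weighted arbitration, and bandwidth regulators on top of this to prevent starvation of bulk traffic):

```python
from collections import deque

class PriorityArbiter:
    """Toy strict-priority NoC arbiter.

    Class 0 = latency-critical control/real-time traffic,
    higher classes = bulk transfers (e.g. tensor streams).
    """
    def __init__(self, num_classes: int = 3):
        self.queues = [deque() for _ in range(num_classes)]

    def submit(self, traffic_class: int, packet: str) -> None:
        self.queues[traffic_class].append(packet)

    def grant(self):
        # Each cycle, grant the highest-priority non-empty queue.
        for q in self.queues:
            if q:
                return q.popleft()
        return None  # no packets pending

arb = PriorityArbiter()
arb.submit(2, "tensor-burst")    # bulk AI data arrives first...
arb.submit(0, "irq-ack")         # ...then a latency-critical control packet
assert arb.grant() == "irq-ack"  # control traffic is still served first
```

In hardware this loop becomes a priority encoder over per-class virtual-channel FIFOs; adding rate counters per class turns strict priority into the bandwidth-reservation schemes the text describes.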

From a productivity standpoint, Smart NoC Automation drastically reduces design iterations. Traditional NoC design involves multiple manual tuning passes: adjusting topologies, re-evaluating performance models, and iterating physical design constraints. Automation compresses these cycles by generating optimized NoC proposals rapidly and enabling rapid what-if analyses. Designers can explore architectural alternatives and trade-offs interactively, without the lengthy turnaround times typically associated with manual RTL tweaking.

Finally, Smart NoC Automation supports hardware/software co-optimization, a crucial factor for AI-ready SoCs. By providing accurate performance models and exposing architectural parameters early in the design flow, software developers can optimize driver stacks, communication libraries, and scheduling algorithms in parallel with hardware development. This co-design approach ensures that both hardware and software are aligned for peak AI performance at launch.

Bottom line: Arteris Smart NoC Automation is a pivotal enabler for modern SoC design, especially in the era of AI and chiplets. By automating NoC generation and optimization, it removes one of the most time-consuming and error-prone steps in the SoC design flow, supports heterogeneous and chiplet-based architectures, ensures performance and QoS requirements are met, and accelerates overall time to silicon. As SoCs continue to scale in complexity to meet the demands of AI and next-generation compute workloads, Smart NoC Automation will remain essential for delivering high-performance, power-efficient designs on schedule.

Watch the full video here!

Contact Arteris

Also Read:

The IO Hub: An Emerging Pattern for System Connectivity in Chiplet-Based Designs

Arteris Simplifies Design Reuse with Magillem Packaging

Arteris Expands Their Multi-Die Support

 


Manufacturing Is Strategy: Leadership Lessons from the Semiconductor Front Lines

by Kalar Rajendiran on 01-21-2026 at 10:00 am


This article is an editorial synthesis of a fireside chat between Tom Caulfield, Executive Chairman of GlobalFoundries, and John Kibarian, CEO of PDF Solutions, that took place on December 3, 2025, during the PDF Solutions Users Conference. John Kibarian led the conversation to get Tom Caulfield’s perspectives on leadership lessons forged at the center of semiconductor manufacturing, strategy, and technological change. Topics ranged from factory operations and AI to supply chains, education, and careers.

As you read, a clear throughline emerges: deliberate leadership choices shape outcomes. Manufacturing is strategy. AI is leverage rather than magic. Accountability defines culture. Sustained success requires leaders willing to embrace discomfort rather than defer it.

The Decision Most Leaders Avoid

When Tom became CEO of GlobalFoundries in 2018, the board expected continued investment in 7nm technology and stable execution across the company’s global footprint, including China. The economics, however, left little room for interpretation. Advanced semiconductor manufacturing had become a scale business dominated by a small number of players investing tens of billions of dollars per node. With approximately $5 billion in annual revenue, GlobalFoundries could not compete in that race without putting the company’s future at risk.

Within weeks, Tom made the decision to exit 7nm. The move was controversial, but it immediately clarified priorities. Rather than chasing prestige, the company chose focus, realism, and a path toward sustainable differentiation.

Manufacturing Is Strategy

Tom’s views on manufacturing were shaped long before GlobalFoundries, during his tenure at IBM. Semiconductor fabs are unforgiving systems where physics enforces discipline. Minor deviations in process control compound quickly into yield loss, missed commitments, and customer dissatisfaction.

What separated high-performing fabs from struggling ones was not simply superior equipment. It was rigor. Engineers were often buried in manual data collection, leaving too little time for analysis or root-cause identification. Accountability was fragmented, and problems persisted longer than they should have.

Improvement required disciplined analytics, automation, and unmistakable ownership. Leaders were expected to absorb pressure rather than transmit it downward, allowing teams to focus on execution instead of self-protection. Manufacturing excellence, Tom learned, is not an aspiration but rather a leadership decision.

Accountability as a Force Multiplier

One of the most counterintuitive insights from the fireside chat was how accountability reduces fear. When responsibility is explicit and leaders are visibly accountable, organizations move faster. Defensive behavior recedes, and problem-solving accelerates.

At GlobalFoundries, ambiguity was unacceptable. The operating rhythm centered on defining the problem and fixing it with urgency.

This mindset extended to careers as well. Tom emphasized that mastery often creates its own trap. When people become highly competent, learning slows, comfort sets in, and growth plateaus. Organizations stagnate for the same reason individuals do. Sustained progress requires leaders and teams to continually place themselves in unfamiliar territory.

Why GlobalFoundries Walked Away from the Leading Edge

Exiting 7nm was not a retreat from relevance. It was an acknowledgment of where demand actually resides. The majority of semiconductor volume serves markets such as automotive, industrial systems, RF, and power management: segments that value reliability, longevity, and integration over transistor density.

GlobalFoundries’ Singapore operations illustrated what disciplined execution could deliver. Years of sustained reinvestment, operational control, and focus on differentiated technologies produced a profitable and resilient manufacturing base. The strategic mandate became clear: replicate that model across the organization.

By aligning ambition with economic reality, GlobalFoundries positioned itself to compete where it could win rather than where scale dictated the terms.

Global Doesn’t Mean Everywhere; It Means Repeatable

For decades, global manufacturing was equated with geographic reach. In practice, excessive dispersion often created fragility rather than resilience. The fireside chat reframed global manufacturing as repeatability rather than footprint.

True global capability comes from a common manufacturing platform that can be qualified, transferred, and scaled across multiple fabs. Customers care less about the specific location of production than about confidence that supply can shift reliably when disruptions occur.

Repeatability is what converts manufacturing from an operational necessity into a strategic asset.

AI in the Real World: Leverage, Not Magic

Artificial intelligence featured prominently in the discussion, but without exaggeration. AI has demonstrated real value in digital domains such as design verification, predictive maintenance, forecasting, and equipment utilization. In these areas, pattern recognition and optimization deliver measurable returns.

Manufacturing, however, remains grounded in physical reality. Materials, mechanics, and human judgment still govern outcomes. AI can enhance decision-making, but it does not replace accountability or operational discipline.

Leaders who succeed with AI deploy it selectively, prioritizing applications that deliver meaningful, order-of-magnitude improvements rather than incremental gains.

The Semiconductor Supply Chain Is a Leadership Failure

The concentration of advanced semiconductor manufacturing in a single region of the world represents a systemic vulnerability. This is not an ideological concern. It is a failure of governance and risk management.

While initiatives such as the CHIPS Act have begun addressing supply-side economics, demand-side commitments remain insufficient. Building fabs requires long-term certainty. Manufacturing transitions unfold over years, not quarters, and leadership must plan accordingly.

Supply chain resilience ultimately reflects foresight and responsibility, not nationalism.

AI Is Rewriting Who Captures Value

AI is not only changing how chips are built; it is reshaping industry economics. As design costs fall and software increasingly defines system functionality, system companies are finding it more attractive to develop custom silicon tailored to their products.

This shift places pressure on traditional boundaries between system companies, fabless designers, and foundries. In the next phase of the industry, differentiation and focus will matter more than scale alone.

Leadership Must Institutionalize Global Talent

Modern semiconductor manufacturing and design depend on global talent. Remote engineering, once viewed as a compromise, has become a competitive advantage. Distributed teams enable continuous progress across time zones, broaden perspectives, and expand access to scarce expertise.

The pandemic accelerated adoption, but the deeper lesson endures. Organizations should not wait for crises to modernize how they work. Leadership must institutionalize what works rather than revert to familiar constraints.

Why Liberal Arts Still Matter in an AI World

The discussion concluded with a reflection on education and leadership. Engineering teaches how to build systems. Liberal arts cultivate judgment, context, and critical thinking.

As AI accelerates execution and optimization, human value increasingly lies in framing problems, weighing tradeoffs, and making decisions under uncertainty. These capabilities are not automated. They are developed.

Also Read:

PDF Solutions’ AI-Driven Collaboration & Smarter Decisions

PDF Solutions Charts a Course for the Future at Its User Conference and Analyst Day

PDF Solutions Calls for a Revolution in Semiconductor Collaboration at SEMICON West


2026 Outlook with Kamal Khan of Perforce

by Daniel Nenni on 01-21-2026 at 8:00 am


Tell us a little bit about yourself and your company.

I’m Kamal Khan, Vice President of North America Automotive/Semiconductor at Perforce. Perforce is trusted by the world’s leading brands to drive quality, security, compliance, collaboration, and speed across the technology lifecycle. Our global footprint spans more than 80 countries and includes over 75% of the Fortune 100. Perforce’s semiconductor solutions—Perforce P4 and IPLM—provide an integrated, highly scalable platform for IP and design data management.

What was the most exciting high point of 2025 for your company?

In May 2025, we announced our partnership with Siemens Digital Industries Software to transform how smart, connected products are designed and developed. This was certainly a high point of the year, generating strong interest from industry media and customers alike. Together with Siemens, we’re delivering a unified development platform that brings real-time decision-making, end-to-end traceability, and AI-driven insight across the product development lifecycle.

What was the biggest challenge your company faced in 2025?

Honestly, the biggest challenge we faced in 2025 was keeping up with the rapid pace of change in semiconductor design. The industry is moving fast—AI chips, automotive systems, IoT devices—all requiring massive amounts of reusable IP and tighter compliance. Managing thousands of IP blocks while maintaining quality and compliance—without slowing down innovation—is tough. We have to make sure our solutions, like IPLM and P4, not only centralize IP management but also incorporate greater automation and intelligence to help teams move faster with confidence. That balance between speed and reliability was the hardest part, but it pushed us to innovate in ways that really matter to our customers.

How is your company’s work addressing this challenge?

We’re tackling these challenges head-on by doubling down on integration and automation through our partnership with Siemens. Semiconductor teams need seamless workflows, and that means connecting IP and data management with the design and verification tools they already rely on. By integrating IPLM and P4 with Siemens EDA solutions, we’re creating a unified environment where engineers can manage IP, track dependencies, and run compliance checks without jumping between systems. We’re also introducing AI-driven capabilities to reduce manual effort and catch issues earlier. This partnership isn’t just about technology; it’s about giving design teams confidence that they can innovate faster without sacrificing quality or compliance.

What do you think the biggest growth area for 2026 will be, and why?

In 2026, the biggest growth opportunity for Perforce in the semiconductor industry will come from helping teams work smarter, not harder. Chip design is getting more complex as companies race to build AI-driven processors for everything from cars to smart devices. That complexity means more IP to manage, more dependencies, more compliance checks, and more chances for costly mistakes.

How is your company’s work addressing this growth?

IPLM can address this growing complexity by giving teams a single source of truth for IP, automating the release process and BOM management, and using AI to predict dependencies and flag risks early. Combined with P4’s trusted, scalable version control, this approach helps engineers focus on innovation instead of chasing files or fixing errors—speeding up development and reducing respins.

What conferences did you attend in 2025 and how was the traffic?

Our team attended and exhibited at Embedded World and DAC in 2025. Booth traffic was strong at both conferences, particularly at DAC, as we announced our partnership with Siemens at the event and there was a lot of excitement around that. We also attended the Chiplet Summit, Siemens’ User2User conference, and the GSA Executive Forum in Silicon Valley.

Will you participate in conferences in 2026? Same or more than 2025?

We will be attending the same conferences in 2026, and possibly a few more.

How do customers normally engage with your company?

Our customer engagements with the largest semiconductor companies typically start with a technical evaluation, followed by a POC, before moving to production. We maintain a high cadence of communication with customers to ensure close collaboration and the highest level of support throughout the evaluation, POC, and beyond.

We welcome all semiconductor leaders to join our IPLM Monthly User Group (MUG) sessions. It’s a great opportunity to hear from product experts, industry peers, and IPLM users about topics like IP governance, quality, and security, as well as IPLM best practices and the latest product features. To register, visit https://www.perforce.com/products/helix-iplm/user-group.

Also Read:

What’s New with IP Lifecycle Management (IPLM)

5 Lessons the Semiconductor Industry Can Learn from Gaming

Why IP Quality and Governance Are Essential in Modern Chip Design


Curbing Soaring Power Demand Through Foundation IP

Curbing Soaring Power Demand Through Foundation IP
by Bernard Murphy on 01-21-2026 at 6:00 am

Curbing Soaring Power Demand min

Power has become a very hot (ha-ha) topic. The media has latched onto the emergence of massive AI datacenters disrupting energy pricing for consumers. Both as consumers and in industry we welcome faster and better features in our hand-held computing devices, cars, homes, industrial processes and businesses. But without further advances in the enabling technologies those new features and higher performance will burn more and more power, putting further strain on stretched personal and business budgets and demanding we more frequently recharge our mobile devices. What more can be done to rein in this relentless thirst for power?

Image courtesy Synopsys

Power is a uniquely challenging metric to manage because it is impacted by all aspects of system design and implementation, from application software down to detailed circuit implementation. Applications, architecture and design teams each work to minimize power in their respective domains, but they all depend on tight power optimization in the underlying enabling technologies. Part of that enablement is through power optimized foundation IP (embedded memories, logic cells, I/Os and NVM), which must be carefully designed to deliver the best power without sacrificing challenging performance goals or compromising on cost/area expectations. The Synopsys Foundation IP teams, in collaboration with their System and EDA colleagues, are able to leverage capabilities they have honed on many technology nodes over 20+ years to address these goals. Synopsys has released a digest of six articles on foundation IP enablement for power optimization, highlighting some of the focus areas their organization is pursuing toward this goal. Lots of good material. In a short blog I can only call out a few notable points which caught my eye.

Low voltage operation

An important way to reduce dynamic power is to reduce voltage, since this power is proportional to voltage squared. In DVFS or ultra-low voltage systems, voltages can drop to 0.7V or lower to minimize power for relatively performance-insensitive circuitry. Many IoT devices, such as implantable medical devices, must run for years before battery replacement and operate at very low voltage to meet this need. Energy-harvesting IoT devices can go even lower, down to 0.4 volts.

Conventional foundation IP is not optimal in this regime. Embedded memories must be designed with multiple assist techniques to deliver target power metrics without compromising performance or area. Since these voltages are closer to threshold levels, reliability concerns around switching errors and delay variances must be managed much more carefully. Here Synopsys Foundation IP shows 10-30% area savings and 19-37% power savings from compiler generated memories optimized for low voltage operation. To also support architectural optimizations these compilers offer multiple levels of power control from light sleep to full power off.

Logic cell libraries must equally be redesigned for this operating regime and much more carefully characterized for on-chip variation. For deep low voltage operation, modeling methods go even further, considering higher order moments in timing distributions.

Managing power for HPC and AI

Even with all the improvements mentioned above, one size doesn't fit all in the most demanding applications, especially at 2-3nm processes. Different design teams may choose to add custom characterization corners, use shrinks from slightly larger feature-size processes, or use cells specifically optimized for low power and area.

The Synopsys High-Performance Core (HPC) Design Kit allows designers to adapt libraries, providing tools to tune libraries to their unique needs, in this paper for HPC and AI goals. The kit supports a wide range of Vt options, allowing for example super-high-performance processors to support boost modes in DVFS where a core can be overdriven (for a short time) at higher clock frequency. Balancing out power and thermal concerns, other logic blocks can be scaled back to lower voltages and clock frequencies. For cache memories, the kit also provides highly tuned instances to meet tight access times and setup and hold requirements.

Accelerator cores, big arrays of multiply-accumulate (MAC) blocks with supporting memory, are at the heart of AI. Packing these blocks efficiently is essential to managing area and power. Pitch matching specialized logic cells to memories is important in these closely packed repetitive structures, to minimize interconnect power.

Low power AI processors

This paper has a particular focus on AI hardware in datacenters. Here big GPUs support training AI models and are notorious power hogs. But training is an infrequent activity for most AI service providers. These businesses are most concerned with inferencing, invoked when you or I ask a question to ChatGPT or a similar model. Inferences are the primary and high-volume AI revenue generators (or cost center) for service providers. Engagements today start free but very quickly switch to subscription models, calibrated to the complexity of each request and generated response, the time it takes to deliver that response, and importantly how much power is burned in the process.

Power is a hugely important metric in datacenters, governing not only the performance, reliability and useful life of servers and ancillary equipment but also the cost of cooling methods to keep the datacenter running. Power expense in support of cooling is as significant as IT power expense. Many of the same techniques used in power saving in mobile devices such as power switching and DVFS are already commonplace in datacenter designs.

AI chips excel at simultaneous multi-threading. This is what makes them so effective for matrix-intensive AI, but it also results in higher average activity per unit area than you would commonly see in a CPU. Limiting power demand and heating therefore requires lower core voltages, perhaps 0.7 volts. However, communicating with external devices must be handled by I/Os that can bridge low internal voltages and higher external voltages. Synopsys Foundation IP libraries provide special I/O cells to support these needs.

One Synopsys feature that caught my eye for AI support is their Word All Zero (and half word) memory. Optimized AI inference models are sparse, especially at the edge, containing many zero weights. Avoiding multiply operations for cases where one input is zero can be a big win for both power and performance. Another cool idea, new to me, is that they provide compact latch-based memories to support activation and pooling operations.

Customizing for ultra-aggressive requirements

As extensive as these foundation IP offerings are, some design teams always want to push further. One such team, in the process of designing optical network infrastructure for Edge AI, needed their logic to run at 0.4 volts, demanding memory compilers and logic libraries to match on a very aggressive schedule. Synopsys designed specially optimized memory compiler and logic library IP to meet these needs, on a timeline that helped that customer meet their targets.

Coming soon

Flash technologies have become essential in many applications, but standard implementations were never designed for embedded use below 28nm or in demanding IoT or AI applications. Magneto-Resistive RAM (MRAM) and Resistive RAM (RRAM) have become the go-to solutions for such use-cases, MRAM for high reliability and performance, RRAM for low cost and high density.

Synopsys already provides compiler-based options to deliver either class of memory instance. The MRAM option supports up to 128Mb with multiple feature options and low area and power footprints. RRAM compilers are currently in development.

In process advances approaching the Angstrom level, the next technology challenge is foundation IP built on Gate-All-Around (GAA) transistors. There are plenty of interesting challenges here, yet Synopsys is already sampling 2nm GAA libraries with customers.

Very interesting papers, lots of good details which I couldn’t reasonably compress into this blog. Check it out.

Also Read:

Acceleration of Complex RISC-V Processor Verification Using Test Generation Integrated with Hardware Emulation

TSMC based 3D Chips: Socionext Achieves Two Successful Tape-Outs in Just Seven Months!

CISCO ASIC Success with Synopsys SLM IPs


2026 Outlook with Badru Agarwala of Rise Design Automation (RDA)

2026 Outlook with Badru Agarwala of Rise Design Automation (RDA)
by Daniel Nenni on 01-20-2026 at 10:00 am

Badru Agarwala headshot

Badru Agarwala is the CEO and Co-Founder of Rise Design Automation (RDA). With a strong track record of 40 years in EDA, he was previously the General Manager of the Calypto Systems Division at Mentor Graphics, now Siemens EDA.  He advanced High-Level Synthesis with Catapult and drove innovations in high-level verification and power optimization.

Tell us a little bit about your company. 

RDA is an EDA startup founded by industry veterans with more than 30 years of experience each in hardware design and EDA. We are focused on fundamentally changing how hardware is designed today—supercharging productivity and design quality by raising the level of hardware abstraction, closing the gap between systems and silicon, and deploying agentic AI that operates in real, production design workflows now.

What was the most exciting high point of 2025 for your company? 

The most exciting high point of 2025 was our direct engagement with customers and the feedback we received from real production designs at Tier-1 semiconductor companies. While we believed we had built a differentiated product and architecture, seeing it validated on real silicon by practicing engineers—and delivering measurable results—was a defining moment for the company.

What was the biggest challenge your company faced in 2025? 

The biggest challenge we faced in 2025 was overcoming the long-standing barriers and perceptions that have limited industry-wide adoption of higher-level hardware abstraction and High-Level Synthesis. In practice, this meant supporting diverse design styles, delivering predictable out-of-the-box QoR comparable to hand-coded RTL, establishing robust verification and debug workflows, and providing a clear path for new users to become productive quickly.

How is your company’s work addressing this challenge?  

RDA takes a platform-first approach to High-Level Synthesis: not an incremental point tool, but an integrated system that combines core HLS technology with a production ecosystem of IP and automation. A key part of this is an agent-based, tool-using workflow that helps engineers generate and refine designs by iterating on synthesis and verification results, guided by measured QoR metrics. Customer engagements in 2025 have demonstrated that these barriers can be overcome on real designs, with predictable QoR and workflows that scale beyond expert users.

Are you incorporating AI into your products?

Yes. We treat the LLM as a modular component. Correctness and progress are driven by tool feedback grounded in compile or elaboration, synthesis and verification, and by measured QoR and constraint metrics, rather than being tied to any single LLM or orchestration stack. Engineers describe intent and constraints, AI proposes candidates, Rise tools execute and measure, and AI selects and refines based on the results. We also use reinforcement learning for architectural exploration, optimizing choices against the same QoR and constraint metrics. Once design intent is raised above RTL, today's natural-language models become practical and effective for design and IP generation and architectural tradeoff exploration.

What do you think the biggest growth area for 2026 will be, and why?

In 2026, we expect our primary growth to come from expanding customer deployments of our core technology, building on the validation achieved in 2025. With proven quality and productivity on real designs, we see clear opportunities to extend the platform’s value in critical directions that our customers are asking for. This includes enhancing the power and deployment flexibility of our agentic AI capabilities, as well as leveraging our architecture to more tightly bridge system-level design and silicon through architectural exploration and virtual platform workflows.

What conferences did you attend in 2025 and how was the traffic?

In 2025, we attended both DVCon and DAC, which were our first conferences after emerging from stealth. Traffic at both events was strong, and we had many productive conversations with attendees.

Will you participate in conferences in 2026? Same or more than 2025?

Our current plan is to attend both DVCon and DAC again in 2026, and we are evaluating the possibility of adding additional conferences as we continue to grow.

How do customers normally engage with your company?

Customers typically engage with us directly to explore next steps. In addition, starting in January 2026, customers can also work with our new North America distributor, AI Tech Sales, which we are excited to partner with as we expand our reach.

Contact RDA

Also Read:

Reimagining Architectural Exploration in the Age of AI

Rise Design Automation Webinar: SystemVerilog at the Core: Scalable Verification and Debug in HLS

Moving Beyond RTL at #62DAC


2026 Outlook with Cristian Amitroaie, Founder and CEO of AMIQ EDA

2026 Outlook with Cristian Amitroaie, Founder and CEO of AMIQ EDA
by Daniel Nenni on 01-20-2026 at 8:00 am

AMIQ DAC Cropped

Cristian Amitroaie is the Founder and CEO of AMIQ EDA, which specializes in software tools for the semiconductor design and verification industry. He co-founded AMIQ in 2003 as a consulting services company and established the AMIQ EDA business unit in 2008 to productize internal tools as commercial solutions. These tools, including DVT IDE and Verissimo SystemVerilog Linter, are used by engineers to improve coding quality and efficiency in hardware design and verification.

Can you tell us a little bit about your company?

Since 2008, we have been providing innovative software tools to benefit both designers and verification engineers. We help our users increase the speed and quality of new code development while implementing best practices. We make it easier to maintain legacy code, accelerate language and methodology learning, simplify debugging, and automatically generate accurate documentation. Our users say they can’t imagine coding without our tools.

What was the most exciting high point of 2025 for your company?

When we introduced our AI Assistant in our DVT IDE family back in 2024, we embarked on a journey to find appropriate ways to leverage AI for the benefit of our users. This past year has been quite exciting on this front. We’ve added new AI-based features to DVT IDE, leveraging all the design and testbench knowledge we’ve compiled from the user code. For example, AI Assistant can now insert context-aware code completion generated from any large language model (LLM) as the user types. AI Assistant also has the ability to auto-correct and explain compilation problems, saving even more time for users.

What was the biggest challenge your company faced in 2025?

The AI space is huge, and it’s challenging to identify the technologies that best apply to our products and can provide the most benefit to our users. It takes significant time and resources to add these new AI features, and we always wonder whether they will be used enough and provide sufficient value to be worth our investment. In addition, the AI space is evolving and expanding rapidly, so today’s state of the art will be outdated within a few months.

How is your company’s work addressing this biggest challenge?

We cast a wide net as we look for potentially useful AI technologies, and we bring in experts to guide us when appropriate. We also work closely with our most advanced users to validate AI features before we roll them out to our complete customer base. The good news is that this approach is working. We’ve seen rapid uptake of AI features among our users and a lot of enthusiasm for more capabilities going forward.

What do you think the biggest growth area for 2026 will be, and why?

Maybe it’s obvious at this point, but we certainly expect AI to become a bigger part of our product suite and to drive our business. AI Assistant will do more in DVT IDE, and we’re integrating more AI-enabled features to further help our users create and debug design and verification code. We’re also working on ways to provide access to our internal models for both IDE users and AI developers. In addition, we’re adding AI features to both Verissimo SystemVerilog Linter and Specador Documentation Generator as we speak.

How is your company’s work addressing this growth?

We just need to keep doing what’s working now: keep a close eye on AI technologies, call in experts as needed, and work with users to validate the value of new features. In addition, we use our DVT compilers and Verissimo to ensure that all code generated by all AI agents is correct.

Is AI affecting the way you develop your products?

Part of our constant monitoring of AI technologies includes looking for ways to help our internal development process. Like many programming teams, we’re finding that AI can suggest solutions to real problems and help us develop code more quickly. Using AI ourselves helps us to understand its strengths and limitations, plus think of more ways to help our users develop their own code.

What conferences did you attend in 2025 and how was the traffic?

Traffic at conferences and conventions remains strong. We attended and exhibited at our usual three in-person events: the Design Automation Conference (DAC) and the Design and Verification Conference (DVCon) in the U.S., as well as DVCon Europe. Our international distributors also represent us at local events, including SemIsrael and workshops in Japan.

Will you attend conferences in 2026? Same or more?

We love it when our users stop by our booth to say hello and of course we always meet some new potential users as well. So we will be attending all the same events again this year.

Additional questions or final comments?

2025 was another year of growth and success for AMIQ EDA, and we anticipate the same for this year. Thank you for the chance to discuss the status of our company and our industry. We’ll continue posting on SemiWiki to keep you up to date on our progress.

Contact Amiq EDA

Also Read:

Runtime Elaboration of UVM Verification Code

Better Automatic Generation of Documentation from RTL Code

AMIQ EDA at the 2025 Design Automation Conference #62DAC


Pushing the Packed SIMD Extension Over the Line: An Update on the Progress of Key RISC-V Extension

Pushing the Packed SIMD Extension Over the Line: An Update on the Progress of Key RISC-V Extension
by Daniel Nenni on 01-20-2026 at 6:00 am

Pushing the Packed SIMD Extension Over the Line Andes RISCV Summit

The rapid growth of signal processing workloads in embedded, mobile, and edge computing systems has intensified the need for efficient, low-latency computation. Rich Fuhler’s update on the RISC-V Packed SIMD extension highlights why scalar SIMD digital signal processing (DSP) instructions are becoming a critical architectural feature and how the RISC-V ecosystem is moving closer to standardizing and deploying them at scale.

Packed SIMD, sometimes referred to as scalar SIMD, occupies a middle ground between purely scalar execution and full vector or GPU-style parallelism. Rather than operating on long vectors, packed SIMD instructions perform the same operation on multiple narrow data elements packed into a single scalar register. This approach is particularly effective for DSP-heavy workloads such as audio codecs, image processing, and communications algorithms, where operations like saturated arithmetic, multiply-accumulate (MAC), and bit manipulation dominate execution profiles.

One of the primary motivations for packed SIMD instructions is their suitability for latency-sensitive and deterministic workloads. Many DSP applications must meet strict real-time deadlines and cannot tolerate the overhead or nondeterminism associated with offloading computation to GPUs or wide vector units. Scalar SIMD instructions reduce instruction count and execution cycles while remaining tightly integrated into the scalar pipeline, enabling predictable timing behavior that is essential for real-time systems such as audio processing chains or control loops in industrial applications.

Power and silicon area efficiency are equally important drivers. In embedded and IoT devices, full SIMD or vector units often impose prohibitive costs in terms of energy consumption and die area. The presentation highlights a striking comparison from Andes Technology: a vector extension with two vector processing units can require roughly 850K logic gates, whereas the packed SIMD extension can be implemented in approximately 80K gates. This order-of-magnitude difference makes packed SIMD an attractive solution for designers who need higher performance than scalar code can deliver but cannot afford the overhead of full vector hardware.

As a result, a wide range of markets stand to benefit from the standardization of packed SIMD in RISC-V. These include mobile and edge AI, automotive and industrial IoT, consumer electronics, communications infrastructure such as 5G and satellite systems, and even microcontroller-class devices. In all of these domains, workloads frequently involve fixed-point arithmetic and repetitive DSP kernels that map naturally to packed SIMD operations.

From a standardization perspective, the Packed SIMD extension has reached an important consolidation phase. Instruction definitions that were previously scattered across multiple documents are being combined, with the majority now captured in the v0.92 draft of the specification, albeit with some renaming. New architectural tests have been written, and discussions are ongoing with the Architecture Review Committee to finalize instruction layout and formatting before formal review. An asciidoc version of the specification is expected to be published to GitHub, signaling increasing maturity and openness of the extension.

Toolchain support is also progressing rapidly. Updates for GCC, LLVM, and binutils-gdb have already been pushed upstream, ensuring that compiler and debugger ecosystems can take advantage of packed SIMD instructions. Work on C and C++ intrinsic functions is underway, which will make it easier for application developers to explicitly leverage the extension without resorting to hand-written assembly. In addition, architectural models and compliance tools such as SAIL, ACTs, and RISCOF are being prepared for public availability, alongside simulators like QEMU and Spike.

Bottom line: Benchmarking results presented using the Andes D23 core demonstrate substantial performance gains across a wide range of audio codecs and DSP workloads when packed SIMD is enabled, compared to configurations without DSP support. These results reinforce the extension’s practical value and underline why pushing the Packed SIMD extension “over the line” is a key milestone for the RISC-V ecosystem.

Also Read:

RISC-V: Powering the Era of Intelligent General Computing

Navigating SoC Tradeoffs from IP to Ecosystem

S2C, MachineWare, and Andes Introduce RISC-V Co-Emulation Solution to Accelerate Chip Development


2026 Outlook with Richard Hegberg of Caspia Technologies

2026 Outlook with Richard Hegberg of Caspia Technologies
by Daniel Nenni on 01-19-2026 at 10:00 am

Richard Hegberg

Tell us a little bit about yourself and your company

Richard Hegberg

I’m Rick Hegberg and I’ve been CEO of Caspia Technologies since 2024. I have a deep semiconductor background, including CEO roles at three semiconductor start-ups and executive roles at SanDisk/WD, Qualcomm, Atheros, Numonyx/Micron, ATI/AMD, and VLSI Technology.

Throughout my career I’ve worked with the global semiconductor supply chain to solve many challenges, both technical and otherwise. With the growing use of AI, the recent rise in hardware-focused cyberattacks caught my attention.

Something needed to be done to protect the security of the hardware root of trust and I felt this could only be accomplished with a ground-up approach to chip design that incorporated rigorous hardware security verification and validation. This is the mission of Caspia Technologies. I felt the company was developing a truly holistic and effective approach to ensure superior security for future chip designs and so I joined them to help take the vision to the next level.

What was the most exciting high point of 2025 for your company? 

We had many breakthrough events over the course of the year. Part of that was discovering real security flaws in popular open-source designs. There is a lot of work ahead for the entire industry to tighten hardware security.

I would say the most memorable event for me was the introduction of our first product, CODAx, which performs static security analysis on early RTL designs. The idea is to find and fix weak security practices early, before they lead to potential catastrophic security breaches in the field. Think of CODAx as a static “linting” tool that is specifically focused on secure design practices. This is the tool we used to find the weaknesses in the open-source designs I mentioned.

What was the biggest challenge your company faced in 2025?

In a word, education. We have found a wide range of awareness of the risks and pervasiveness of hardware attacks across the semiconductor supply chain. Some companies are at the forefront of addressing these problems, but many have yet to see the breadth of the problem and prioritize an approach to address it.

Related to this is a discussion we’ve had regarding the difference between security IP and secure IP. While using a commercially available hardware root of trust is a good idea, this by itself does not assure your design will be secure. The attacks that are developing are very sophisticated and protecting against them requires a larger perspective of the problem.

You can check out a recent blog post from Caspia on this topic.

How is your company’s work addressing this challenge?  

Caspia simplifies the adoption of robust security for the enterprise. This includes tools that easily fit into existing EDA flows, well-developed physical assurance methodologies, and training and curriculum development for hardware security.

Regarding tools, Caspia is currently working on a platform that contains three products:

CODAx is the static security checker I mentioned. It contains over 150 security rules that are constantly updated with trained security large language models (LLMs).

SVx tunes formal verification to look for security robustness with AI-generated security-focused assertions.

PFx facilitates dynamic security validation of completed designs using AI to harness existing co-simulation and emulation technologies.

What do you think the biggest growth area for 2026 will be, and why?

The semiconductor industry is undergoing a major shift to enhance the security hardness of chip designs. Caspia will play a key role in that shift with our security platform, methodologies, and training.

All three of the products I mentioned will move to mainstream deployment in 2026 and this will fuel significant growth for the company.

How is your company’s work addressing this growth?

I’ve already described the product pipeline we have and the expected impact that will have on our growth. Beyond that, we are working with several large players in the industry to facilitate easier adoption of our robust security technology.

Expect more information about this work in the months ahead.

Are you incorporating AI into your products? / Is AI affecting the way you develop your products?

The answer is Yes to both. Cyberattacks are continually enhanced with new AI approaches. That demands security enhancements that are also AI-driven, and Caspia has made extensive use of AI technologies to meet this challenge.

We have access to the world’s largest security threat databases. Caspia has helped to build some of them. We use the previously mentioned LLMs to continually analyze these threats to enhance our tools. Caspia is also developing a growing array of AI agents to identify threats and design weaknesses and take corrective action.  

Agentic security verification/hardening is clearly the way forward. One of our founders, Dr. Mark Tehranipoor recently did a podcast with the DAC folks that provides some good perspectives on these topics. You can listen to it here.

How do customers normally engage with your company?

We will be attending a growing number of events in 2026. For example, we are a Corporate Sponsor at the IEEE International Symposium on Hardware Oriented Security and Trust (HOST), and we will likely be presenting at conferences such as GOMAC and DAC. You can find us at these events, and you can also reach out to us at our website here. We’d be happy to explore how we can help.

Additional comments? 

Security hardening for chip design is no longer an option; it is a requirement for continued success and growth in the market.  Caspia is ready to show you the way forward with well-developed methodologies, training and easy-to-adopt tools. Let us help you secure the future of your next design.

WEBINAR: Why AI-Assisted Security Verification For Chip Design is So Important

Also Read:

A Six-Minute Journey to Secure Chip Design with Caspia

Caspia Focuses Security Requirements at DAC

CEO Interview with Richard Hegberg of Caspia Technologies