
WEBINAR: How PCIe Multistream Architecture is Enabling AI Connectivity
by Daniel Nenni on 11-11-2025 at 8:00 am


In the race to power ever-larger AI models, raw compute is only half the battle. The real challenge lies in moving massive datasets between processors, accelerators, and memory at speeds that keep up with trillion-parameter workloads. Synopsys tackles this head-on with its webinar, How PCIe Multistream Architecture is Enabling AI Connectivity at 64 GT/s and 128 GT/s, set for November 18, 2025, at 9:00 AM PST. This 60-minute session, led by Diwakar Kumaraswamy, a veteran SoC architect with over 15 years in high-speed interconnects, targets engineers and system designers building the next wave of AI infrastructure.

REGISTER NOW

At the heart of the discussion is PCIe Multistream Architecture, a fundamental redesign that breaks away from the single-stream limitations of earlier PCIe generations. In traditional PCIe, all data packets, whether from storage, networking, or GPU memory, share a single serialized path. This creates bottlenecks during bursty AI traffic, such as gradient updates in distributed training or real-time inference across multiple streams. Multistream changes the game by allowing multiple independent data flows to travel in parallel over the same physical link. Each stream gets its own error handling, ordering rules, and quality-of-service controls, dramatically improving throughput and reducing latency.

The webinar will contrast this with legacy designs and show how Multistream unlocks the full potential of PCIe 6.0 (64 GT/s) and PCIe 7.0 (128 GT/s). At 64 GT/s, a single x16 link delivers 256 GB/s bidirectional bandwidth, enough to feed an entire rack of GPUs without throttling. Double that to 128 GT/s in PCIe 7.0, and you’re looking at 512 GB/s per link, a leap that makes disaggregated AI architectures viable. Think GPU clusters spread across racks, NVMe storage pools serving petabytes to language models, or 800G Ethernet backhauls, all connected with microsecond-level coherence.

Diwakar will dive into the mechanics: how 1024-bit datapaths at 1 GHz clocks enable 2x performance across transfer sizes, how PAM4 signaling and FLIT-based error correction tame signal loss over long traces, and how adaptive equalization in the PHY layer keeps power in check. He’ll also cover practical integration, linking user logic to Multistream controllers, managing timing closure in complex SoCs, and verifying compliance with advanced test environments.
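For readers who want to sanity-check those numbers, here is a minimal back-of-the-envelope Python sketch of the bandwidth arithmetic. It ignores FLIT, encoding, and protocol overhead, and the function and figures are illustrative rather than anything taken from the Synopsys material.

```python
# Rough PCIe link-bandwidth arithmetic; real throughput is lower once FLIT,
# encoding, and protocol overheads are accounted for.
def per_direction_gbytes(gt_per_s: float, lanes: int = 16) -> float:
    """Raw per-direction bandwidth in GB/s for one PCIe link."""
    return gt_per_s * lanes / 8          # ~1 Gb/s per lane per GT/s with PAM4 + FLIT

for gen, rate in (("PCIe 6.0", 64), ("PCIe 7.0", 128)):
    one_way = per_direction_gbytes(rate)
    print(f"{gen}: {one_way:.0f} GB/s per direction, {2 * one_way:.0f} GB/s bidirectional")

# The 1024-bit internal datapath at 1 GHz mentioned above carries the same
# 128 GB/s that a x16 link delivers per direction at 64 GT/s.
print(1024 * 1 / 8, "GB/s")              # 1024 bits x 1 GHz = 128 GB/s
```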

For AI system designers, the implications are profound. Multistream minimizes jitter in all-to-all GPU communication, accelerates model convergence, and supports secure multi-tenancy through per-stream encryption. It enables chip-to-chip links in SuperNICs, smart SSDs, and AI switches, all while cutting idle power by 20–30% through efficient low-power states. This isn’t just about speed—it’s about building scalable, sustainable, and secure AI platforms.

Bottom line: As PCIe 8.0 begins to take shape on the horizon, this webinar positions Multistream as the cornerstone of future AI connectivity. Whether you’re designing edge inference engines or exascale training clusters, understanding this architecture is no longer optional; it’s essential. The session promises not just theory, but actionable insights to future-proof your designs in an AI-driven world.

REGISTER NOW

Also Read:

TCAD Update from Synopsys

Synopsys and NVIDIA Forge AI Powered Future for Chip Design and Multiphysics Simulation

Podcast EP315: The Journey to Multi-Die and Chiplet Design with Robert Kruger of Synopsys


A Six-Minute Journey to Secure Chip Design with Caspia
by Mike Gianfagna on 11-11-2025 at 6:00 am


Hardware-level chip security has become an important topic across the semiconductor ecosystem. Thanks to sophisticated AI-fueled attacks, the hardware root of trust and its firmware are now vulnerable. And unlike a software flaw, a weakness instantiated in silicon cannot be patched. The consequences of such vulnerabilities are vast and expensive to remediate. How all this happened, what is known about the resultant weaknesses, how AI fits into the picture, and how to add expert-level security hardening to your existing chip design flow are all big questions to ponder.

A coherent view of the big picture with a clear path to secure chip design has been hard to find, until now. Caspia Technologies recently released a series of three short videos that explain what’s happening to chip security, why it’s happening and what can be done about it. Links to the videos are provided below. You can watch all three of them in under six minutes, and your investment in time will show you why more secure chip design is so important and how to achieve it. So, let’s take a six-minute journey to secure chip design with Caspia.

Chapter 1 – The Elephant in the Room: Weak Chip Security

The first video frames the problem from a big-picture point of view. Why the hardware root of trust has become vulnerable and what it means for chip design are explored. The graphic at the top of this post is from this first video. What we see here is the changing landscape of DevSecOps.

For many years this high-growth industry focused on software security. Code was analyzed, weaknesses were identified, and software updates were developed and tested to increase the robustness of the code. Underlying this entire segment was the assumption that the hardware root of trust was secure and immutable. And for a long time, this was true. Some of the forces that made hardware vulnerable to attack are discussed in this segment. The result is an emerging segment of DevSecOps that focuses on fortifying the security of the hardware.

This is the sole domain of Caspia Technologies.

Chapter 2 – Identify Security Threats Before They Harm You

The second video takes a deeper dive into the chip security problem. Specific examples of hardware weaknesses and the resultant impact are taken from recent headlines. The applications cited will be familiar to all. Security risks are closer to home than you may realize.

This video also showcases the substantial progress being made across the semiconductor ecosystem to understand these new security risks. Two examples of how government, industry, and academia collaborate to track and categorize security risks are presented. These efforts form the foundation for finding and fixing security risks. The diagram below illustrates some of the details discussed.

Collaborative Efforts to Track Security Risks

Caspia Technologies and its founding team at the University of Florida in Gainesville have been pioneering catalysts for this work. The details of these efforts and their impact are also touched on.

Chapter 3 – How GenAI Adds Expert Security to Existing Design Flows 

In this final installment, the video presents approaches that apply GenAI technology in novel ways to create breakthrough security verification. It explains how GenAI capabilities can be harnessed to deliver expert-level security verification to existing design teams and flows. The graphic below summarizes some of the relevant qualities that pave the way to new approaches.

GenAI Enhanced Verification Breakthrough

The specific ways Caspia Technologies uses GenAI are detailed, with examples of how Caspia’s AI agents work together to ensure new chip designs are robust and secure against a growing threat profile.

This is the future of chip design.

To Learn More

If chip-level security is a concern (and it should be), I highly recommend investing six minutes to allow Caspia to show you the path to a more secure future. The insights are valuable and actionable.

Here is where you can view each chapter of the story:

Chapter 1

Chapter 2

Chapter 3

You can also find out more about Caspia and its impact on the industry on SemiWiki here.  And that’s a six-minute journey to secure chip design with Caspia.

Also Read:

Large Language Models: A New Frontier for SoC Security on DACtv

Caspia Focuses Security Requirements at DAC

CEO Interview with Richard Hegberg of Caspia Technologies


Lessons from the DeepChip Wars: What a Decade-old Debate Teaches Us About Tech Evolution
by Lauro Rizzatti on 11-10-2025 at 10:00 am

Lessons from the DeepChip wars Table

The competitive landscape of hardware-assisted verification (HAV) has evolved dramatically over the past decade. The strategic drivers that once defined the market have shifted in step with the rapidly changing dynamics of semiconductor design.

Design complexity has soared, with modern SoCs now integrating tens of billions of transistors, multiple dies, and an ever-expanding mix of IP blocks and communication protocols. The exponential growth of embedded software has further transformed verification, making early software validation and system-level testing on emulation and prototyping platforms essential to achieving time-to-market goals.

Meanwhile, extreme performance, power efficiency, reliability, and security/safety have emerged as central design imperatives across processors, GPUs, networking devices, and mobile applications. The rise of AI has pushed each of these parameters to new extremes, redefining what hardware-assisted verification must deliver to keep pace with next-generation semiconductor innovation.

Evolution of HAV Over the Past Decade

My mind goes back to the spirited debates that played out on the pages of DeepChip in the mid-2010s. At the time, I was consulting for Mentor Graphics, proudly waving the flag for the Veloce hardware-assisted verification (HAV) platforms. My face-to-face counterpart in those discussions was Frank Schirrmeister, then marketing manager at Cadence, standing his ground in defense of the Palladium line.

Revisiting those exchanges today underscores just how profoundly user expectations for HAV platforms have evolved. Three key aspects of deployment, once defining points of contention, have since flipped entirely: runtime versus compile time, multi-user operation, and DUT debugging. Let’s take a closer look at how each has transformed.

Compile Time versus Runtime: A Reversal of Priorities

A decade ago, shorter compile times were prized over faster runtime performance. The prevailing rationale, championed by Cadence in the processor-based HAV camp, was that rapid compilation improved engineering productivity by enabling more iterative RTL debug cycles per day. In contrast, the longer compile times typical of FPGA-based systems often negated their faster execution speeds, creating a significant workflow bottleneck.

Over the past decade, however, the dominant use case for high-end emulation has shifted dramatically. While iterative RTL debug remains relevant, today’s most demanding and value-critical tasks involve validating extremely long software workloads: booting full operating systems, running complete software stacks, executing complex application benchmarks, and, increasingly, deploying entire AI/ML models. These workloads no longer run for minutes or hours; instead, they run for days or even weeks, completely inverting the equation and rendering compile-time differences largely irrelevant.

This fundamental shift in usage has decisively tilted the value proposition for high-end applications toward high-performance, FPGA-based systems.

The Capacity Driver: From Many Small Jobs to One Long Run

Back in 2015, one of the central debates revolved around multi-user support and job granularity. Advocates of processor-based emulation systems argued that the best way to maximize the value of a large, expensive platform was to let as many engineers as possible run small, independent jobs in parallel. The key metric was system utilization: how many 10-million-gate blocks could be debugged simultaneously on a billion-gate system?

While the ability to run multiple smaller jobs remains valuable, the driver for large-capacity emulation has shifted entirely. The focus has moved from maximizing user parallelism to enabling a single, system-critical pre-silicon validation run. This change is fueled by the rise of monolithic AI accelerators and complex multi-die architectures that must be verified as cohesive systems.

Meeting this challenge demands new scaling technologies—such as high-speed, asynchronous interconnects between racks—that enable vendors to build ever-larger virtual emulation environments capable of hosting these massive designs.

The economic rationale has evolved as well: emulation is no longer justified merely by boosting daily engineering productivity, but by mitigating the catastrophic risk of a system-level bug escaping into silicon in a multi-billion-dollar project.

The Evolution of Debug: From Waveforms to Workloads

Historically, the quality of an emulator’s debug environment was defined by its ability to support waveform visibility of all internal nets. Processor-based systems excelled in this domain, offering native, simulation-like access to every signal in the design. FPGA-based systems, by contrast, were often criticized for the compromises they imposed, such as the performance and capacity overhead of inserting probes, and the need to recompile whenever those probes were relocated.

That paradigm has been fundamentally reshaped by the rise of software-centric workloads. For an engineer investigating why an operating system crashed after running for three days, dumping terabytes of low-level waveforms is not only impractical but largely irrelevant. Debug has moved up the abstraction stack—from tracing individual signals to observing entire systems. The emphasis today is on system-level visibility through software debuggers, protocol analyzers, and assertion-based verification, approaches that are less intrusive and far better suited to diagnosing the behavior of complex systems over billions of cycles.

At the same time, waveform capture technology in FPGA-based platforms has advanced dramatically. Modern instrumentation techniques have reduced traditional overhead from roughly 30% to as little as 5%, making deep signal visibility available when needed, without imposing a prohibitive cost.

Debug is no longer a monolithic task. It has become a multi-layered discipline where effectiveness depends on choosing the right level of visibility for the problem at hand.

In Summary

Recently, I had the chance to reconnect with Frank—now Marketing Director at Synopsys—who, a decade ago, was my counterpart in those spirited face-to-face debates on hardware-assisted verification. This time, however, the discussion took a very different tone. Both of us, now veterans of this ever-evolving field, found ourselves in full agreement on the dramatic metamorphosis of the semiconductor design landscape—and how it has redefined the architecture, capabilities, and deployment methodologies of HAV platforms.

What once divided the “processor-based” and “FPGA-based” camps has largely converged around a shared reality: design complexity, software dominance, and AI-driven workloads have reshaped the fundamental priorities of verification. The focus has shifted from compilation speed and multi-user utilization toward system-level validation, scalability, and long-run stability. Table I summarizes how the key attributes of HAV systems have evolved over the past decade.

Table: The Evolution of the HAV Main Attributes from ~2015 to ~2025

More importantly, the role of HAV itself has expanded far beyond its original purpose. Once considered a late-stage verification tool—used primarily for system validation and pre-silicon software bring-up—it has now become an indispensable pillar of the entire semiconductor design flow. Modern emulation platforms span nearly the full lifecycle: from early RTL verification and hardware/software co-design to complex system integration and even first-silicon debug.

Also Read:

TCAD Update from Synopsys

Synopsys and NVIDIA Forge AI Powered Future for Chip Design and Multiphysics Simulation

Podcast EP315: The Journey to Multi-Die and Chiplet Design with Robert Kruger of Synopsys


Think Quantum Computing is Hype? Mastercard Begs to Disagree
by Bernard Murphy on 11-10-2025 at 6:00 am

Just got an opportunity to write a blog on PQShield, and I’m delighted for several reasons. Happy to work with a company based in Oxford and happy to work on a quantum computing-related topic, which you’ll find I will be getting into more deeply over coming months. (Need a little relief from a constant stream of AI topics.) Also important, I enjoy connecting technology to real world needs, things everyday people care about. Mastercard has something to say here.

Security is visceral when it comes to our money

I find in talking about and writing about security in tech that, while we all understand the importance of security, our understanding is primarily intellectual. Yes, security is important, but we don’t know how to monetize it. We do what we can, but we don’t need to get ahead of the game if end-user concern remains relatively low. As an end-user I share that view – until I’m hacked. As happened to my credit card a few weeks ago – I saw a charge at a Panda Express in San Francisco – a 3-hour drive from where I live. The card company removed the charge and sent me a new card. Happily, it’s been years since I was last hacked. But what if hacks could happen every year, or month, or week?

In its paper, Mastercard talks about malicious actors using a “Harvest Now, Decrypt Later” attack paradigm: building a giant pool of encrypted communications and keeping it in storage until strong enough decryption mechanisms become available. We’re not aware that data we care about is in the hands of bad actors because nothing bad has happened – yet. This is not a theoretical idea. The possibility already exists for systems using DES, RSA-1024 or weaker mechanisms, which is why most, though maybe not all, weak systems have been upgraded.

The stronger threat comes from quantum computing (QC). You might think that QCs are just toys, that small qubit counts can’t handle real jobs. Your view may be outdated. IBM already has a computer with over a thousand qubits, Google is planning for a one-million-qubit system, and who knows what governments around the world can now reach, especially in hacking hotbeds.

OK, you counter, but these are very specialized systems. Governments don’t want to hack my credit cards (though I’m not sure I’d trust that assertion). But it doesn’t matter. To build demand, QC centers provide free or moderate-cost access to their systems. All you have to do is download an implementation of Shor’s or a similar factoring algorithm, maybe from the dark web, and use it to factor the large integers that protect RSA-encrypted messages.
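To see why factoring is the whole game, here is a toy Python sketch of the classical reduction at the heart of Shor’s algorithm. The order-finding loop that a quantum computer would accelerate is simply brute-forced here, so it only works for tiny moduli such as 15; it is purely illustrative, not an attack tool.

```python
from math import gcd
from random import randrange

def shor_toy(n):
    """Factor a small odd composite n by brute-forcing the order-finding
    step that Shor's algorithm would hand to a quantum computer."""
    while True:
        a = randrange(2, n)
        g = gcd(a, n)
        if g > 1:
            return g, n // g            # lucky: a already shares a factor
        # classically find the order r of a mod n (exponential in general)
        r, x = 1, a % n
        while x != 1:
            x = (x * a) % n
            r += 1
        if r % 2:
            continue                    # need an even order
        y = pow(a, r // 2, n)
        if y == n - 1:
            continue                    # trivial square root, try another a
        p = gcd(y - 1, n)
        if 1 < p < n:
            return p, n // p

print(shor_toy(15))                     # e.g. (3, 5)
```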

In fairness, recent estimates suggest that RSA-2048 may not be broken before 2031. But improvements in quantum error correction are already pulling that date closer. We really can’t be certain when that barrier will be breached. Once it is, the floodgates will open thanks to all that harvested encrypted data. That breach will affect not only credit cards but all electronic payment systems and, more generally, financial systems. Our intellectual concern will very rapidly become a visceral concern if we are not prepared.

PQShield and quantum-resistant encryption

Mastercard mentions two major mechanisms to defend against quantum attacks: post-quantum cryptography (PQC) and quantum key distribution (QKD). QKD offers theoretically strong guarantees but is currently viewed as a future solution, not yet ready for mass deployment. The Mastercard paper reinforces this position, citing views from multiple defense agencies and the NSA. More immediate defenses are based on PQC, for which PQShield offers solutions today.

Several algorithms have been proposed, which NIST is supporting with standards. Importantly, National Security System owners, operators and vendors will be required to replace legacy security mechanisms with CNSA 2.0 for encryption in classified and mission-critical networks. CNSA 2.0 defines a suite of standards for encryption, hashing and other objectives.

The CNSA 2.0 transition plan projects urgency. New software, firmware and public-facing systems should be upgraded in 2025. Starting in 2027, all new NSS acquisitions must be CNSA 2.0 compliant by default. By 2030, all deployed software and firmware must use CNSA 2.0 signatures, and any networking equipment that cannot be upgraded with PQC must be phased out. The Mastercard paper talks about plans in other regions which seem not quite as far ahead, though I expect EU enthusiasm for tech regulation will quickly address that shortfall.

PQShield is already well established in PQC. This is a field where customer deals are unlikely to be announced, but other indicators are promising. Their PQCCryptoLib-Core is listed as “Implementation Under Test” at NIST. They are in the EU-funded Fortress project. They have partnered with Carahsoft Technologies to make quantum-safe technology available to US public sector organizations. And they have published multiple research papers, so you can dig into their technology claims in detail.

Fascinating company. You can learn more HERE.

Also Read:

Podcast EP304: PQC Standards One Year On: The Semiconductor Industry’s Next Move

Formal Verification: Why It Matters for Post-Quantum Cryptography

Podcast EP290: Navigating the Shift to Quantum Safe Security with PQShield’s Graeme Hickey


CEO Interview with Roy Barnes of TPC
by Daniel Nenni on 11-08-2025 at 10:00 am

Roy Barnes Headshot

Roy Barnes is the group president of The Partner Companies (TPC), a specialty manufacturer, overseeing TPC’s photochemical etching companies: Elcon, E-Fab, Fotofab, Microphoto and PEI.

Roy is an experienced leader known for building strong teams, driving change and inspiring people to succeed. Throughout his career, he has cultivated a leadership style grounded in open communication and collaboration—ensuring teams understand one another’s strengths and work together to achieve lasting results. Roy’s focus on connection and development has consistently helped TPC grow stronger and perform at its best.

Tell us about your company?

The Partner Companies began in 2010 with a straightforward idea: bring together specialty manufacturers who excel at solving precision manufacturing problems. Over the past fifteen years, we’ve built an integrated group of eleven companies that share something fundamental — each delivers mission-critical solutions to industries where failure isn’t an option.

What ties these companies together is deep expertise in specialized processes that can complement each other to best serve our partners’ manufacturing needs. The five companies I oversee, Elcon, E-Fab, Fotofab, Microphoto and PEI, have mastered photochemical etching for ultra-precise thin metal parts, ceramic metallization and more.

The semiconductor industry has been core to our business from the beginning. Elcon Precision, which dates back to 1967, was delivering high-precision photochemical etching solutions to semiconductor manufacturers more than forty years ago. E-Fab has been doing the same since 1982. That foundation has allowed us to grow alongside the industry as it’s evolved.

Today, TPC’s 11 specialty manufacturers operate across facilities in the United States, Asia, the United Kingdom, and Mexico. Companies like Elcon, E-Fab, Fotofab, Lattice Materials, Optiforms, and PEI each bring distinct technical capabilities, whether that’s crystal growth for precision optics, high-volume production of complex metal parts, sheet metal fabrication and assembly, or plastic injection molding.

What problems are you solving?

The semiconductor sector is undergoing rapid transformation, fueled by record-breaking sales and breakthroughs in AI, 5G, and autonomous technologies that are redefining how chips power modern life. To adapt to new technological developments, we use photochemical etching, a precision manufacturing process that uses light-sensitive coatings and chemical etchants to remove metal and create detailed components, to iterate quickly on design changes and stay closely involved in the design-change process. Once designs are validated, we are able to rapidly scale production volumes.

This type of collaborative problem-solving is representative of how TPC helps customers overcome complex engineering challenges, bringing both process expertise and measurable results.

For example, a leading semiconductor equipment manufacturer recently shared they faced intermittent reliability failures in an aluminum nitride (AlN) heater assembly and turned to Elcon, a TPC company, for support. Elcon conducted a comprehensive failure analysis of the AlN braze stack, including the substrate, metallization, and nickel layers. Cross-sectional and adhesion testing revealed that insufficient metallization adhesion — caused by low surface roughness — was the root issue.

Elcon then conducted a design of experiments (DOE) to define the optimal AlN surface roughness range, confirming a direct correlation between roughness and metallization integrity consistent with published adhesion mechanisms. Based on these findings, Elcon provided quantitative updates to the surface finish specification, resulting in a substantial improvement in adhesion reliability and overall heater performance. While niche, these are the kind of engineering challenges — and solutions — that help enable the next gen equipment to make next gen chips.

What application areas are your strongest?

Our primary focus is partnering with semiconductor equipment manufacturers, with special emphasis on innovative and scalable domestic solutions. By working closely with these companies, we help address challenges related to equipment design and production, enhancing their competitiveness and supply chain security.

What keeps your customers up at night?

Many of our customers face intense supply chain pressures, tariff impacts, rising costs from Asia, and intellectual property concerns about designs being replicated or stolen. TPC’s domestic manufacturing capabilities, engineering support and dedication to secure, IP-focused partnerships help mitigate these risks, offering our customers peace of mind.

What does the competitive landscape look like and how do you differentiate?

TPC stands apart in the rapidly evolving semiconductor industry through our integrated manufacturing approach, offering more than just photochemical capabilities. As demand for faster and more reliable chips grows, our ability to leverage adjacent technologies enables us to provide customers with complete designs and vertical integration. While most of our competitors are small, family-owned businesses, TPC’s frequent investments allow us to expand our engineering and scaling capabilities. Our diverse range of materials, from copper and stainless steel to titanium and niobium, enables new manufacturing solutions such as titanium etching.

What new features/technology are you working on?

In the heart of Silicon Valley, where we have multiple facilities, we have implemented advanced process control techniques for semiconductor processes including statistical process control (SPC) charts, to bring greater consistency and quality to photochemical etching.

This brings semiconductor-grade precision to a traditionally less-regulated manufacturing environment, improving repeatability and yields for our customers. We also continually invest in new personnel and technologies to further enhance our offerings in specialty manufacturing, photochemical etching and assembly.
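As a rough illustration of what SPC charting involves, the sketch below computes X-bar control limits for some hypothetical etch-depth subgroups; the data and units are invented for the example and are not TPC process figures. The constant A2 = 0.577 is the textbook value for subgroups of five samples.

```python
import statistics

# A2 = 0.577 is the standard X-bar chart constant for subgroups of 5 samples.
A2 = 0.577

def xbar_limits(subgroup_means, subgroup_ranges):
    """Return (LCL, center line, UCL) for an X-bar control chart."""
    xbar = statistics.mean(subgroup_means)
    rbar = statistics.mean(subgroup_ranges)
    return xbar - A2 * rbar, xbar, xbar + A2 * rbar

# Hypothetical etch-depth subgroup data in micrometers (5 parts per subgroup)
means  = [50.1, 49.8, 50.3, 50.0, 49.9]
ranges = [0.6, 0.5, 0.7, 0.4, 0.6]

lcl, center, ucl = xbar_limits(means, ranges)
print(f"LCL = {lcl:.2f} um, center = {center:.2f} um, UCL = {ucl:.2f} um")
```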

How do customers normally engage with your company?

We value collaborative partnerships, helping customers develop their designs and solutions with the support of our engineering team. Our customers often come in with a print, and our role is to guide them through the manufacturing process. The photochemical etching process in semiconductor manufacturing is complex and precision-driven, especially as U.S. companies work to build domestic supply chains. TPC’s experts are here to help customers navigate it with speed, accuracy and reliability.

For more information and to get in touch with TPC, visit https://www.thepartnercos.com/.

Also Read:

CEO Interview with Wilfred Gomes of Mueon Corporation

CEO Interview with Rodrigo Jaramillo of Circuify Semiconductors

CEO Interview with Sanjive Agarwala of EuQlid Inc.


CEO Interview with Mr. Shoichi Teshiba of Macnica ATD
by Daniel Nenni on 11-08-2025 at 6:00 am

Shoichi Teshiba

With over 30 years of experience in sales and marketing of semiconductors and electronic components, Mr. Shoichi Teshiba has worked across a broad range of industries including storage, servers, networking, consumer electronics, industrial equipment, and automotive. Combining deep insight into both domestic and global markets with strong technical expertise, he has recently focused on facilitating the social implementation of emerging startup technologies. Currently, he plays a key leadership role in global technology scouting, particularly in deep tech, and oversees the management of European subsidiaries. Mr. Shoichi Teshiba has a proven track record of introducing cutting-edge technologies worldwide and driving open innovation initiatives.

Tell us about your company.

Macnica ATD Europe is the European subsidiary of Macnica Inc., one of the world’s largest technology distributors and solution providers. At our core, we are more than a distributor; we are a catalyst for innovation. Our mission is to connect pioneering semiconductor, AI, and imaging technologies with the EMEA market, providing not only cutting-edge products but also deep technical support, integration capabilities, and go-to-market strategies. We work closely with both our global technology partners and EMEA customers to co-create solutions that solve the pressing needs of today and anticipate the challenges of tomorrow.

What problems are you solving?

We help our customers navigate the complexity of emerging technologies. Whether it’s optimizing AI deployment at the edge or accelerating time-to-market for embedded systems, Macnica ATD Europe is here to bridge the gap between innovation and implementation. One of our core strengths lies in making advanced technology accessible, adaptable, and scalable. This means demystifying AI for industrial use cases, streamlining hardware-software integration, and simplifying supply chains through strong vendor partnerships and local support.

What application areas are your strongest?

Our most active and mature application domains are:

  • Edge AI and Computer Vision for Industry 4.0, smart cities, and healthcare
  • Semiconductor Solutions for automotive and industrial applications
  • AI Acceleration and Data Processing in embedded and real-time systems

We offer vertically integrated expertise that combines domain knowledge with component-level mastery, enabling us to serve OEMs and system integrators with equal effectiveness.

What keeps your customers up at night?

In one word: complexity. Our customers are innovating rapidly, but they face significant technical, regulatory, and operational hurdles:

  • How to scale AI at the edge without compromising performance or cost
  • How to navigate supply chain uncertainty while staying ahead of the technology curve

We address these challenges by acting as a proactive partner, providing not just technology, but also consulting, design support, and local engineering expertise.

What does the competitive landscape look like and how do you differentiate?

While the technology distribution space is crowded, we stand out by offering a hybrid model of distribution and consulting. Our differentiation comes from:

  • Deep technical support and in-house engineering
  • Strong, trusted relationships with best-in-class suppliers
  • A collaborative ecosystem model that integrates customers and partners
  • Local expertise backed by the global strength of the Macnica Group

We don’t just sell components, we co-develop solutions. This high-touch, value-added approach is what makes our customers return project after project.

How do customers normally engage with your company?

Our engagement model is highly collaborative. Customers typically come to us at various stages:

  • Early stage for design consultation or technology scouting
  • Mid-stage for component sourcing and prototyping support
  • Late-stage for deployment optimization, integration, and long-term lifecycle support

They engage through our technical and sales engineers, or through co-innovation initiatives with our partners. We also host webinars and tradeshows with live demos to help de-risk complex technology decisions. Whether it’s a multinational or a startup, our goal is to act as an extension of their technical and innovation teams.

Contact Macnica ATD

Also Read:

CEO Interview with Sanjive Agarwala of EuQlid Inc.

CEO Interview with Rodrigo Jaramillo of Circuify Semiconductors

CEO Interview with Wilfred Gomes of Mueon Corporation


Video EP11: Meeting the Challenges of Superconducting Quantum System Design with Mohamed Hassan

Video EP11: Meeting the Challenges of Superconducting Quantum System Design with Mohamed Hassan
by Daniel Nenni on 11-07-2025 at 10:00 am
In this episode of the Semiconductor Insiders video series,  Dan is joined by Mohamed Hassan, who leads the Quantum EDA segment at Keysight. Mohamed provides a broad overview of superconducting quantum system design. He discusses the challenges for this design style and how EDA requirements for quantum design differ from traditional chip design. 
 
Mohamed describes the Keysight offerings to address this design style, along with details of Keysight’s QuantumPro EM Solution. He describes its workflow, along with a description of a real-world application.

The views, thoughts, and opinions expressed in these videos belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.

TSMC Kumamoto: Pioneering Japan’s Semiconductor Revival
by Daniel Nenni on 11-07-2025 at 6:00 am

TSMC Kumamoto Fab 2

In the lush landscapes of Kumamoto Prefecture, on Japan’s Kyushu Island, TSMC is etching a new chapter in global chip production. The TSMC Kumamoto facility, operationalized through its wholly-owned subsidiary Japan Advanced Semiconductor Manufacturing (JASM), represents the Taiwanese giant’s bold foray into Japan, the first dedicated wafer fab outside Taiwan, the U.S., and Europe.

Announced in 2021, this project embodies a strategic pivot toward diversified, resilient manufacturing. With an initial investment exceeding $8.6 billion, Kumamoto underscores TSMC’s commitment to serving key Asian clients while bolstering Japan’s domestic semiconductor ecosystem.

The journey began with groundbreaking in 2022, transforming a sprawling industrial site in Kikuyo Town into a state-of-the-art cleanroom. Construction progressed swiftly, with Taiwanese engineers relocating en masse to oversee integration. By late 2024, JASM’s first fab commenced mass production, focusing on mature yet vital process nodes: 22/28nm and 12/16nm. These nodes are ideal for automotive semiconductors, image sensors, and microcontrollers, sectors where Japanese powerhouses like Sony and Denso dominate.

The facility’s monthly capacity targets 55,000 wafers, powered entirely by renewable energy sources, aligning with Japan’s green manufacturing mandates. As of April 2025, JASM’s workforce has swelled to around 2,400, including 527 new local hires, fostering a blend of Taiwanese expertise and Japanese precision. This infusion has not only created jobs but also sparked a skills renaissance in Kyushu, a region long synonymous with electronics but starved of cutting-edge fabs. Locals dubbed the fab the “nightless castle” because construction ran 24 hours a day.

Kumamoto’s strategic imperative is twofold: Geopolitically, it mitigates risks from Taiwan’s exposure to cross-strait tensions, diversifying TSMC’s footprint amid U.S.-China frictions. Economically, it taps into Japan’s resurgence under the “Semiconductor Revival Plan,” backed by subsidies from the Ministry of Economy, Trade and Industry.

This funding is part of a national initiative which aims to reclaim 10% of global chip production by 2030. For TSMC, Kumamoto secures proximity to loyal customers: Sony’s image sensors for smartphones and cameras, Denso’s automotive chips for electric vehicles, and emerging AI peripherals. In an era where automotive semis face slumps, exacerbated by delayed EV adoption, the fab’s specialization offers stability. Industry watchers project profitability by late 2025, with yields already performing robustly.

Yet, expansion hasn’t been seamless. The second fab, earmarked for advanced 6/7nm processes on a 321,000-square-meter plot adjacent to the first, has encountered headwinds. Initially slated for a Q1 2025 groundbreaking, construction was deferred to mid-year due to severe traffic congestion from the first site’s operations, with commutes ballooning from 15 minutes to an hour and irking residents. TSMC Chairman C.C. Wei cited these local pains during the June 2025 shareholders’ meeting, emphasizing dialogues with Japanese authorities for infrastructure upgrades. Further delays in 2025 stemmed from softer automotive demand and a pivot toward U.S. investments, pushing mass production to late 2027. TSMC builds fabs closely tied to customer demand, so this is a good example of intelligent semiconductor business decision-making.

Despite these challenges, TSMC reaffirmed its commitment in August 2025 with board member Paul Liu quashing rumors of diminished Japanese focus. The second fab promises elevated capabilities, including 40nm variants for industrial applications, potentially doubling output and attracting more clients.

Beyond wafers, Kumamoto catalyzes regional transformation. Kyushu’s IC production value hit ¥1 trillion in 2024 for the first time in 16 years, fueled by TSMC’s ripple effects. Local suppliers, from equipment makers to materials firms, now furnish 60% of needs, nurturing a self-sustaining cluster. Governor Takashi Kimura has championed community buy-in, securing promises for green spaces and training programs, alongside wastewater monitoring that began in January 2025.

Bottom line: Kumamoto could spawn a third fab post-2030, embedding TSMC deeper in the “semiconductor triangle” of Taiwan, Japan, and the U.S. As AI and EVs propel chip demand, this outpost fortifies supply chains, blending Eastern innovation with Western resilience. In Kumamoto, silicon flows not just as commerce, but as a bridge across borders, proving that in the chip wars, collaboration outpaces isolation. For TSMC, it’s a testament to enduring partnerships; for Japan, it is a revival etched in silicon, absolutely.

Also Read:

AI-Driven DRC Productivity Optimization: Revolutionizing Semiconductor Design

Exploring TSMC’s OIP Ecosystem Benefits

Breaking the Thermal Wall: TSMC Demonstrates Direct-to-Silicon Liquid Cooling on CoWoS®

TSMC’s Push for Energy-Efficient AI: Innovations in Logic and Packaging


AI RTL Generation versus AI RTL Verification
by Bernard Murphy on 11-06-2025 at 10:00 am

RTL generation vs RTL Verification

I should admit up front that I don’t have a scientific answer to this comparison, but I do have a reasonably informed gut feel, at least for the near-term. The reason I ask the question is that automated RTL generation grabs headlines with visions of designing chips through natural language prompts, making design widely accessible. No doubt this has a lot of media appeal, an attractive place to invest AI $$. But the realities of chip design today don’t support that bet and are more aligned with investment in AI-assisted verification. Verification may not be as glamorous as design but looks like a much more compelling target for AI investment today.

Where are the real costs in design?

Years of analysis in the semiconductor industry (e.g. this Wilson report) confirm that on average 50% or more of IC/ASIC project time is spent in verification. Projects require as many verification engineers at peak as design engineers (the people building RTL) and even design engineers spend half their time in verification. Verifying RTL is a major time- and resource-consuming sink in chip design.

That verification far outweighs RTL design should not be surprising in an age where IP reuse and design reuse dominate. Few organizations have the luxury to start from a clean sheet on each design. Even startups will use commercial IP. Silicon Catalyst portfolio companies can tap into a wide range of IP, from Arm for example, at no up-front cost. There will always be some differentiating content requiring from-scratch development or extensive redesign but the hard part there is the innovation and making the idea practical in a competitive PPA envelope, not in creating the RTL.

Challenges for RTL creation

I can’t find a definitive “first” paper on using GenAI to create RTL but there are plenty of recent papers. These continue to show progress, at about the same rate as parallel efforts in software generation, intriguing but not up to hands-free usage.

More common is usage as an assistant – think of an extended auto-completion operation. A designer might describe in a comment what a following always block should do and ask the auto-completer to create that block. An emerging measure of how successful that operation is in practice, whether for RTL or software, is the rate at which the designer/programmer accepts the generated logic. Fairly consistently this seems to be around 25%.

25% is not bad for an autocompleter. How often do you accept an MS-Word or text message autocompletion? I don’t most of the time, but sometimes it’s useful. RTL auto-completion is more complex than sentence completion, so hats off to RTL generators for getting this far.

Why isn’t the acceptance rate better? There are multiple reasons. Lack of a sufficiently broad training corpus and ambiguities in the comment prompt are two obvious examples. Another revealing example is that, without additional coaching, a bot will assume the timing depth of an expression depends on the number of terms in the expression, not recognizing that arithmetic expressions are more costly than logic expressions. For some other examples see this paper.

The core problem is that generating quality RTL is a lot more complex than current systems can rise to, even for an always block. Does it support PPA targets? Does it create security or safety holes? Is the implementation intuitively reasonable and is it readily maintainable by an RTL designer?

Could more of these problems be solved in time? Quite possibly, which is why I think RTL creation moonshots are worth supporting. But we shouldn’t confuse those efforts with near-term ROI.

Opportunities in RTL verification

A big opportunity for return should be in debug, representing 47% of verification effort according to the same Wilson report. Despite years of study, in debug we still haven’t advanced far beyond debugger IDEs, which certainly help visualize behavior but do little to triage or root-cause bugs in aid of compressing debug time. (there are some point solutions, e.g. for triaging CDC violations.)

Promising signs can be seen in agentic approaches to debug, from startups like ChipAgents and Bronco AI. Triage – reducing and assigning a pile of bugs from a regression to sub-teams for further analysis – might be the biggest contribution here. As engineers we tend to obsess about how to automate root cause analysis for hard corner cases, but more effort is likely consumed in triage around the bulk of less complex bugs than in a few anomalous cases. Initial root-cause isolation (did this error come from module A or module B) can drill down to approximate fault locations, to avoid wasting engineer time on trying to debug a problem that isn’t in their code after all.
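To make the triage idea concrete, here is a deliberately simple Python sketch that buckets regression failures by error signature and top-level block before handing each pile to a sub-team. The failure records, field names, and module paths are invented for illustration and do not represent any particular vendor’s agentic tooling.

```python
from collections import defaultdict

# Hypothetical regression-failure records; a real flow would parse these
# from simulation logs rather than hard-code them.
failures = [
    {"test": "t_dma_burst",   "error": "AXI SLVERR",    "path": "top.dma.ch0"},
    {"test": "t_dma_wrap",    "error": "AXI SLVERR",    "path": "top.dma.ch1"},
    {"test": "t_cache_evict", "error": "data mismatch", "path": "top.l2.bank3"},
]

# Bucket failures by (error signature, top-level block) so each sub-team
# receives one pile instead of the whole regression.
buckets = defaultdict(list)
for f in failures:
    block = f["path"].split(".")[1]      # crude module-level isolation
    buckets[(f["error"], block)].append(f["test"])

for (error, block), tests in sorted(buckets.items()):
    print(f"{block}: {error} -> {len(tests)} failing test(s): {', '.join(tests)}")
```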

Agentic methods should be ideal for this sort of analysis for two reasons. First, they can learn from expert engineers how they would approach triage and rough root-causing. Second, because they are agentic, they should be able to run additional trial simulations to confirm or rule out preliminary guesses.

This approach to debug builds on verification know-how and methods that have been in refinement for years. Success here, even partial success, could show significant ROI. Contrast that with an as-yet unquantified level of investment to improve RTL generation quality, where we also don’t know what value we can assign to partial success.

Food for thought.

Also Read:

Scaling Debug Wisdom with Bronco AI

CEO Interview with David Zhi LuoZhang of Bronco AI

ChipAgents Tackles Debug. This is Important

CEO Interview with Dr. William Wang of Alpha Design AI


Memory Matters: The State of Embedded NVM (eNVM) 2025
by Daniel Nenni on 11-06-2025 at 6:00 am


Make a difference and take this short survey. It asks about your experience with embedded non-volatile memory technologies. The survey is anonymous, and the results will be shared in aggregate to help the industry better understand trends: 2025 Embedded Non-Volatile Memory Survey.

We are now in the AI era where data is the lifeblood of innovation, and embedded NVM stands as a cornerstone technology, retaining information without using power and enabling everything from MCUs and IoT SoCs to automotive controllers and secure elements.

As of November 2025, embedded NVM is moving fast. Edge data is surging, AI features are landing on MCUs and SoCs, and power budgets are tighter than ever. Memory is central to the devices we build. This survey looks at where eNVM stands today in terms of markets, technology, and adoption, and where it’s heading next.

Market Overview and Growth

Embedded emerging NVM, including MRAM, RRAM/ReRAM, and PCM, is entering a broader adoption phase across MCUs, connectivity, and edge-AI devices, with momentum building in automotive and industrial markets. Research firm Yole Group indicates that the embedded emerging segment should exceed $3B by 2030, reflecting wider availability in mainstream process nodes and stronger pull where eFlash is no longer a good fit at ≤28 nm.

Technological Advancements

Embedded flash remains foundational, but scaling limits at advanced nodes have pushed MRAM, ReRAM, and embedded PCM to the foreground. Foundries and IDMs are extending embedded options beyond 28/22 nm planar CMOS toward 10–12 nm-class platforms, including FinFET. Yole highlights aggressive foundry roadmaps: TSMC has established high-volume MRAM/ReRAM and is preparing 12nm FinFET ReRAM/MRAM for 2025 and beyond. Samsung, GlobalFoundries, UMC, and SMIC are accelerating embedded MRAM/ReRAM/PCM across general-purpose MCUs and high-performance automotive designs. STMicroelectronics stands out as the IDM fully committed to embedded PCM, ramping xMemory solutions for industrial and automotive MCUs, with 18nm FD-SOI extending reach after 2025.

In parallel, BCD and HV-CMOS flows are incorporating embedded NVM as practical replacements for EEPROM/OTP in analog, power management, and mixed-signal designs. On the IP side, suppliers are qualifying embedded NVM technologies for these platforms, giving designers more options where cost, endurance, and retention outweigh legacy choices. Beyond code and data storage, in-/near-memory compute concepts using eNVM are gaining interest for low-power edge AI inference.

Drivers, Challenges, and Use Cases

Automotive remains the center of gravity for embedded emerging NVM, and 2025 brings a noticeable uptick in secure ICs and industrial MCUs as more products reach production. In practice, ReRAM, MRAM, and PCM each have a role: ReRAM is gaining traction in several high-volume categories; MRAM and PCM are attractive where speed and endurance dominate. The mix varies by node, application, and vendor roadmap.

The challenges are familiar: integrating eNVM at advanced logic nodes, trading off endurance and retention, qualifying to automotive-grade reliability, and achieving cost-effective density as embedded code and AI parameters grow. The trend line is positive, with PDK/IP availability growing and capacity ramping, so these issues are being addressed rather than deferred.

Outlook

By 2030, embedded NVM will underpin more on-chip AI features and practical in-/near-memory compute blocks, with broader use in neuromorphic-inspired accelerators at the edge. Yole’s projections indicate that the embedded emerging segment is now the primary engine of growth, led by ReRAM in high-volume MCUs and analog ICs, while MRAM and embedded PCM consolidate in performance-critical niches. As edge data grows, eNVM’s role expands from “just storage” to part of the computing fabric, redefining efficiency and making embedded memory more central than ever to device intelligence.

Bottom line: In 2025, embedded NVM isn’t just memory, it’s the enabler of intelligent, persistent systems on chip. With accelerating adoption across MCUs and edge SoCs, and clear roadmaps from leading foundries and IDMs, the trajectory is set: embedded memory matters more than ever. Let us know your opinion by taking the short survey.

Take the 2025 Embedded Non-Volatile Memory Survey Here.

Also Read:

Chiplets: Powering the Next Generation of AI Systems

Podcast EP312: Approaches to Advance the Use of Non-Volatile Embedded Memory with Dave Eggleston

Podcast EP311: An Overview of how Keysom Optimizes Embedded Applications with Dr. Luca TESTA