
The Importance of Productizing AI. Everyday Examples

by Bernard Murphy on 09-10-2025 at 6:00 am

image generation fail

Keeping up with the furious pace of AI innovation probably doesn’t leave much time for deep analysis across many use cases. However, I can’t help feeling we’re sacrificing quality, and ultimately end-user acceptance of AI, by prioritizing new capabilities over rigorous productization. I am certain that product companies do rigorous testing and quality control, but modeling the human element in AI interaction (through prompts or voice commands, for example) still seems to be more art than science. We don’t yet understand the safe boundaries between how we communicate with AI and how AI responds. To illustrate, I’m sharing a couple of my own experiences: one in image generation and one in voice control for music in my car.

Image generation

I regularly use AI to generate images for my blogs and more recently for my website, because I can generate exactly what I want and fine-tune as needed through refined prompts. At least that’s my expectation as an average non-expert user. Building a trial website image revealed a gap between expectation and practice, as should be clear in the image above.

I didn’t set out to create an image of a guy with three hands. My prompt was something like “show an excited storyteller standing on a pile of ideas”. A little abstract, but nothing odd about the concept. The first image the generator built was OK but not quite what I wanted, so I added a couple of modest prompt tweaks. The image I’m showing came from the second tweak. The generator hallucinated; I have no idea why, nor did I know how to correct this monster.

I switched to a different starting prompt and then to a different image generator (I tried GPT-4o and DALL-E 3) but was still shooting in the dark. Without any understanding of safe boundaries there was no guarantee I wouldn’t run into a different hallucination. The obvious concern here is how this affected my productivity. My goal was to quickly generate a decent image conveying my concept, then move on with the rest of my website-building task. Instead I spent the best part of a day experimenting blindly with the image generation tool.

Voice control for Apple Music in my new car

I write frequently about automotive tech so it seemed only right that on a recent car purchase I should go for a model with all the latest automation options, allowing me to comment from experience, not just from theory. I’m not going to name the car model, but this is a premium brand, known for excellent quality.

There are many AI features in the car, including advanced driver assistance, which I haven’t yet started to explore (maybe in later blogs). Here I just want to talk about controlling music selection through CarPlay using voice control, an important safety feature for minimizing driver distraction. The alternative control surface is an inviting center-console touchscreen, which is not where I should be looking when I’m driving; I know because I drifted partly out of my lane a couple of times driving back from the dealer. I’m not going to make that mistake again.

Now I know I should only be using voice controls when driving. But when my voice commands aren’t working it is natural to look at the screen to try to figure out the problem. That was happening to me a lot when driving back from the dealer. I eventually learned what I was doing wrong, which pointed to more opportunities for improved productization.

First, a few of the tracks stored on my phone are corrupt; I have no idea why. When Apple Music hit a corrupt track, it stopped playing. I initially guessed that something about the app was broken when running through CarPlay. Second, and more importantly, I didn’t know what voice commands I could reasonably use with Siri. Chat and voice-control engines encourage the illusion that they can understand anything. In fact they do a very good job of mimicking understanding within a bounded range, but they don’t always make it apparent when they cross from correctly interpreting our intent to guessing, to saying they don’t understand, or to simply ignoring us.

As an infrequent Siri user, the trick I had to learn was, first, to use voice control exclusively when driving, and second, to ask Siri what commands she can understand. I also learned that Apple Music might get stuck on a (presumably corrupt) track without telling me why it had stopped. Now I know that if I give a command and don’t hear anything, I should ask it to skip a track.

On the plus side as a Siri novice, I learned that Siri knows a lot about music genres, so I don’t have to limit a request to a specific song or album. I can ask for baroque music or rock and it will do something intelligent with that request. Pretty cool.

Takeaways

What does any of this have to do with the larger world of AI applications? The systems in these two examples are built on AI platforms. The image generation (diffusion) part is different, but interpreting the prompt is regular LLM technology. Voice-based commands are also built on LLM technology, except the input is audio rather than text. So my experiences with these simple examples could be indicative of what we might encounter in many AI applications.

You might not consider what I found to be major issues, though for me creating confusion while driving is a potentially big problem. My main takeaway is that I had to learn more about Siri’s capabilities before I could use CarPlay effectively while still being a safe driver. Not quite the trouble-free experience I had expected.

When generating images I hadn’t appreciated the productivity hit from trial-and-error exploration through prompts. I will now be a lot more cautious in using image generation, and more skeptical of products that claim to simplify generating exactly the image I want.

What could product builders, from my auto manufacturer to smartphone makers to image-generation tool vendors, do to help? My suggestions would be more guard bands to separate accurate interpretation from guesses, more methods to validate and double-check interpretation, and more active feedback when something goes wrong. Improved productization on top of already impressive AI capabilities could limit negative experiences (and accidents) and help AI transition from neat concepts to full production acceptance.

One more important area where product builders might contribute is helping us refine our own understanding of what AI can and cannot do. Unfortunately too many users still see AI as magical, thinking it can do whatever they want, maybe even better than they can imagine. We need to have it drilled into us that AI is just a technology like any other, very capable within its limitations, and that we must invest in understanding how to operate it correctly. Then we will avoid disappointment, or worse, dismissing everything AI does as hype, when the real problem is our over-inflated expectations.

Also Read:

Musk’s new job as Samsung Fab Manager – Can he disrupt chip making? Intel outside

CEO Interview with Carlos Pardo of KD

Arm Reveals Zena Automotive Compute Subsystem


Rapidus, IBM, and the Billion-Dollar Silicon Sovereignty Bet

by Jonah McLeod on 09-09-2025 at 10:00 am

Rising Wafer

Can cash and IBM collaboration put Japan into premier-league chipmaking? Rapidus is betting billions it can.

When Japan announced the creation of Rapidus in 2022, the news was met with a mix of enthusiasm and skepticism. The company would enter the market at a time of escalating demand for semiconductor fabrication capacity to power the build-out of AI/ML data centers worldwide, and amid international political pressure for each region to secure locally sourced production. Here was a government-backed consortium with a bold mandate: return Japan to the forefront of advanced logic manufacturing, producing 2 nm chips by 2027 and positioning itself for nodes beyond 2 nm with IBM’s support (Atsuyoshi Koike, CEO of Rapidus Corp., speaking on August 26 at the Hot Chips 2025 conference). For an industry dominated by TSMC, Samsung, and Intel, this looked like a geopolitical counterweight.

The model of commercial–government co-investment that catalyzed Taiwan’s semiconductor ascent was exemplified on February 21, 1987, with the founding of Taiwan Semiconductor Manufacturing Company (TSMC) in Hsinchu, Taiwan. In a 2005 speech at MIT Sloan, TSMC founder Dr. Morris Chang described the capital structure that launched the company: “Philips put up about 27 percent, the [Taiwan] government put up 48 percent, and I started a three- to four-month campaign to round up the other 25 percent.” This strategic alignment of public and private capital laid the foundation for what would become the world’s most advanced semiconductor foundry.

TSMC’s emergence coincided with the inflection point of the IBM PC era, as global PC shipments surged from roughly 20 million units in 1988 to over 100 million by 1998—a 5× volume increase. The company’s pure-play foundry model was uniquely positioned to capitalize on this demand, enabling fabless innovators to scale rapidly without the burden of fabrication infrastructure. TSMC didn’t just ride the wave—it became the foundation on which the PC boom was built.

Fast forward to 2025, and Rapidus has achieved several milestones—pilot 2 nm fab activity in Hokkaido (Rapidus, 2025), successful nanosheet transistor demonstrations with IBM (Koike, Hot Chips 2025), and ecosystem partnerships with Siemens, imec, and Tenstorrent (Rapidus, 2025 press release). But behind the scenes, an unusual element of their strategy is drawing attention in the industry: the company is reportedly offering substantial upfront payments to IP vendors to secure early support (SemiWiki, Jul 8, 2025). This approach diverges sharply from the traditional royalty-driven model used by TSMC, and it raises fundamental questions about sustainability, competitiveness, and whether money can substitute for ecosystem momentum.

TSMC Juggernaut

In the conventional foundry model, IP vendors—companies providing essential building blocks like memory controllers, I/O subsystems, or physical libraries—port their IP to a new process node based on expected customer demand. They absorb some upfront engineering cost in exchange for future royalties once customers tape out silicon at volume.

This model thrives at TSMC, where the scale is undeniable. Moving from N3 to N2, for example, was straightforward for many vendors because the ecosystem is already in place. The network effect here is powerful: customers prefer the node with the richest IP catalog, and IP vendors support the node with the largest customer base. It becomes a self-reinforcing cycle—what we might call the TSMC Snowball.

Morris Chang, TSMC’s founder, has emphasized that Taiwan’s long-term ascendancy in semiconductor manufacturing came not just from financial capital, but from structural advantages. As he noted in a recent MIT talk, success required a steady supply of well-trained technicians, low turnover among employees, and the benefits of co-location: “Learning is local. The experience curve works only when you have a common location.” TSMC thrived because it could build an ecosystem in one place, accumulating knowledge and lowering costs over time. This underscores the challenge Rapidus faces in Japan, where it must build not only fabs but also a cohesive ecosystem that can replicate such learning-by-doing effects.

Silicon by Subsidy

Rapidus, by contrast, does not yet have that volume. Analysts estimate the company may only reach 25,000 wafers per month by 2026—a fraction of what TSMC and Samsung move through advanced nodes. From an IP vendor’s perspective, supporting Rapidus’s 2 nm node is a high-risk, low-return proposition under a pure royalty model. To bridge the gap, Rapidus is reportedly providing upfront incentives to IP vendors to ensure critical libraries will be available for early customers. This “pay to play” strategy provides immediate engagement, but it is an expensive way to buy credibility.
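The vendor’s dilemma is easy to see in a toy expected-value model. This is a rough sketch with entirely hypothetical numbers, not actual Rapidus or TSMC figures:

```python
# Toy expected-value model of an IP vendor's decision to port to a new node.
# All figures are hypothetical illustrations, not actual Rapidus or TSMC data.

def porting_npv(port_cost, royalty_per_wafer, wafers_per_month, months, adoption_prob):
    """Probability-weighted royalty income minus the upfront porting cost."""
    expected_royalties = royalty_per_wafer * wafers_per_month * months * adoption_prob
    return expected_royalties - port_cost

# High-volume incumbent: the porting cost is recovered many times over.
incumbent = porting_npv(port_cost=20e6, royalty_per_wafer=25,
                        wafers_per_month=100_000, months=36, adoption_prob=0.9)

# Low-volume entrant: same porting cost, a fraction of the wafer starts,
# and lower confidence that customers will actually tape out.
entrant = porting_npv(port_cost=20e6, royalty_per_wafer=25,
                      wafers_per_month=25_000, months=36, adoption_prob=0.4)

# An upfront inducement from the foundry shifts the entrant's economics positive.
entrant_with_upfront = entrant + 15e6
```

Under assumptions like these, porting is comfortably profitable at a high-volume incumbent and underwater at a low-volume entrant; an upfront payment roughly the size of the shortfall is what flips the vendor’s decision.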

The question is whether this model can scale. If Rapidus spends heavily on upfront inducements while wafer output remains modest, the economics could become unsustainable—particularly given estimates that the company will need 5 trillion yen (~US $34–35 billion) to reach full mass production. IBM, for its part, is contributing R&D expertise, intellectual property, and engineering support. Roughly 150 Rapidus engineers have trained at IBM’s Albany NanoTech Complex, and IBM has provided nanosheet transistor designs and know-how. But IBM is not funding Rapidus directly.

That leaves the burden squarely on Japanese government subsidies (over 1.7 trillion yen committed so far) and corporate investors like Toyota, NTT, Sony, SoftBank, Kioxia, and NEC. With each new expense—construction, equipment, packaging, and now upfront IP agreements—the funding gap looms larger. In July 2025, Rapidus reached a key milestone: obtaining electrical characteristics in 2 nm GAA transistors at its IIM-1 fab, according to Koike in his Hot Chips presentation. This underscores that IBM’s knowledge transfer is yielding tangible technical results, even as financial sustainability remains an open question.

Equipment ≠ Ecosystem

What Rapidus seems to recognize is that lithography scanners, deposition tools, and etchers—though expensive—are not the hardest part of launching an advanced node. The real bottleneck is the design ecosystem. If customers can’t access proven IP libraries, trusted design flows, and reliable EDA tool support, they won’t risk a tapeout at Rapidus. Upfront deals are a way to short-circuit that chicken-and-egg problem.

But it is also an admission that Rapidus lacks the organic pull that makes TSMC’s ecosystem self-sustaining. TSMC doesn’t have to offer inducements; vendors flock to TSMC because their customers demand it. Here Rapidus hopes to differentiate on speed, Koike declared. Its All-Single Wafer Processing concept promises turnaround times as short as 15–50 days, compared to roughly 120 days for conventional batch processing. The company claims this “world’s shortest TAT” will give fabless customers faster iterations, a potential incentive to choose Rapidus despite ecosystem challenges.

The Foundry That Blinked

There is precedent here. GlobalFoundries, during its early 20 nm and 14 nm efforts, offered financial incentives to lure IP providers. The model was workable in the short term but failed to produce a virtuous cycle. Without sufficient customer pull, GlobalFoundries pivoted away from bleeding-edge logic to focus on specialty and trailing-edge nodes. Could Rapidus meet the same fate? Its government backing is stronger, and its partnership with IBM gives it world-class technical foundations. But the structural challenge—convincing IP vendors and fabless customers to bet on a low-volume node—remains.

It is important to distinguish between Rapidus’s ecosystem partners and its IP vendors. Companies like Tenstorrent are fabless design houses and potential customers, developing RISC-V CPU IP and AI accelerators. They may work with Rapidus to manufacture chips, but they are not IP vendors in the sense of providing standard cell libraries, memory compilers, or interface IP. The upfront incentives are aimed at traditional IP providers such as Arm, Synopsys, or Cadence—whose libraries are essential for enabling customer designs on a new node.

Rapidus’s strategy of relying on upfront arrangements introduces a set of structural risks that could undermine its long-term viability. While government subsidies may cushion the early years, the question looms: can the model sustain itself once Rapidus must operate without external funding? The reliance on pre-paid commitments also risks entrenching vendor dependency, potentially locking Rapidus into a narrow ecosystem of IP providers and limiting flexibility for future customers.

Perception is another challenge. If customers begin to view IP support as contingent on inducements rather than intrinsic ecosystem strength, confidence in the node’s reliability—especially for mission-critical tapeouts—may erode. And with every yen spent securing ecosystem buy-in, margin pressure intensifies in a market already defined by razor-thin profitability.

By contrast, TSMC has a more resilient model. It monetizes IP support indirectly through high-volume production that guarantees royalties for vendors, and directly through premium wafer pricing at advanced nodes. This approach reinforces ecosystem strength without compromising long-term margins or customer trust. Engineers, EDA developers, and foundry insiders understand that semiconductor competition isn’t just about transistors per square millimeter. It’s about design enablement, ecosystem health, and business model viability.

For TSMC, the story is straightforward: the ecosystem follows volume. For Rapidus, the story is more precarious: the ecosystem must be bought before volume can materialize. In his Hot Chips presentation, Koike asserted that beyond subsidies and IBM know-how, Rapidus is betting on Design–Manufacturing Co-Optimization (DMCO), integrating AI, advanced sensors, and partnerships with companies like Keysight to improve yield and PDK precision. Combined with its “Rapid and Unified Manufacturing Service” (RUMS), Rapidus is pitching a foundry model built on speed and co-innovation, not just wafer starts.

The Payment Illusion

Rapidus deserves credit for attempting something few nations have dared in decades: re-entering the leading edge of semiconductor manufacturing. Its partnership with IBM has already yielded technical milestones, and its government and corporate backing give it a level of resilience GlobalFoundries never enjoyed. But money alone will not guarantee success. Upfront incentives may secure early IP availability, but they do not guarantee customer adoption or sustainable economics. In the end, Rapidus must find a way to transform these early arrangements into long-term ecosystem momentum.

Morris Chang’s reflections at MIT underscore the gap. Taiwan’s rise was powered by talent pipelines, co-location, and the experience curve—factors that compounded over decades and lowered costs through learning by doing. Rapidus, by contrast, is trying to buy time with subsidies and upfront deals. If it succeeds, it will validate Japan’s gamble. If it fails, it will reinforce why the TSMC juggernaut keeps rolling—and why competing at the bleeding edge requires not just technology and subsidies, but an ecosystem that grows organically from customer demand.

Also Read:

Revolutionizing Processor Design: Intel’s Software Defined Super Cores

TSMC’s 2024 Sustainability Report: Pioneering a Greener Semiconductor Future

Intel’s Commitment to Corporate Responsibility: Driving Innovation and Sustainability


Exploring Cycuity’s Radix-ST: Revolutionizing Semiconductor Security

by Daniel Nenni on 09-09-2025 at 6:00 am



Cycuity’s Radix-ST represents a groundbreaking advancement in semiconductor security, addressing the growing complexity and vulnerability of modern chip designs. Introduced on August 27, 2025, by Cycuity, Inc., Radix-ST leverages static analysis techniques to identify and resolve security weaknesses early in the chip design cycle. As cyber threats targeting semiconductor devices escalate, this tool offers a proactive, efficient approach to safeguarding the backbone of today’s electronic systems, from IoT devices to data centers.

At its core, Radix-ST operates without the need for simulation or emulation, distinguishing it from traditional dynamic methods like Cycuity’s Radix-S and Radix-M. By analyzing Register Transfer Level (RTL) source code as soon as design components are developed, it delivers early detection of potential vulnerabilities. This efficiency is critical in an industry where late-stage security issues can lead to costly redesigns or, worse, exploited weaknesses in deployed systems. Radix-ST goes beyond basic linting tools by performing deep security analysis, pinpointing issue locations in the source code and mapping them to the MITRE-maintained Common Weakness Enumeration (CWE) database. This integration with the broader Radix workflow ensures actionable insights, enhancing design security from the outset.
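Cycuity hasn’t published Radix-ST’s internals, but the general shape of static RTL security analysis can be sketched: scan the source for a risky structural pattern and tag each finding with a CWE identifier. The following is a deliberately simplistic illustration; the regex heuristic and the CWE mapping are my own inventions, not Cycuity’s detection engines, which perform deep semantic analysis rather than pattern matching:

```python
import re

# Toy static check: flag RTL assignments where a debug/test signal gates
# access or unlock logic, loosely in the spirit of CWE-1244 (debug-mode
# exposure of sensitive state). Purely illustrative: real static security
# tools reason about the design semantically, not with regexes.
DEBUG_GATE = re.compile(r"\b(debug|test)_(mode|en)\b.*(grant|unlock|access)")

def scan_rtl(source: str):
    """Return (line_number, line, cwe_id) for each suspicious assignment."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if "<=" in line and DEBUG_GATE.search(line):
            findings.append((lineno, line.strip(), "CWE-1244"))
    return findings

rtl = """\
module lock_ctrl(input clk, input debug_mode, output reg unlock);
  always @(posedge clk)
    unlock <= debug_mode & grant_req;  // debug path can force unlock
endmodule
"""
findings = scan_rtl(rtl)
```

Running `scan_rtl` on the toy module flags the assignment on line 3 and maps it to the corresponding CWE entry, which is the kind of source-location-plus-CWE reporting the article describes.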

The tool’s benefits are evident in its ability to minimize user input while maximizing impact. It automates RTL inspection with proprietary detection engines, reducing false positives and providing integrated reporting within the Radix GUI. Features like cross-view navigation across source, schematic, and cone views empower engineers to explore and address weaknesses comprehensively. Feedback from users such as Mark Labbato, a senior engineer at Booz Allen Hamilton, highlights its practicality: it enables security analysis before full simulation environments are ready, optimizing verification cycles. Mitch Mlinar, Cycuity’s VP of Engineering, emphasizes that this early intervention cuts costs and boosts productivity, a claim supported by its seamless integration with existing Radix tools.

Radix-ST’s relevance is underscored by the evolving threat landscape. Semiconductors, powering everything from automotive systems to defense applications, are increasingly targets for remote cyberattacks. Traditional verification methods often miss subtle vulnerabilities, especially in complex system-on-chip (SoC) designs. While static analysis has limitations, since it detects only certain classes of issues, its speed and early application complement dynamic techniques, offering a balanced security assurance strategy. This approach aligns with industry shifts toward “secure by design” principles, where proactive risk mitigation is paramount.

The tool’s rollout comes at a time when hardware security is no longer optional but a necessity, as noted by Cycuity CEO Andreas Kuehlmann. With features like quantifiable security coverage metrics, Radix-ST provides a data-driven way to assess verification completeness, helping teams identify areas needing further testing. Its adaptability across commercial and defense sectors, supported by a $99 million IDIQ contract for supply chain security, underscores its broad applicability. Use cases range from securing roots of trust in hardware to ensuring compliance with evolving cybersecurity standards like ISO 21434.

Critically, while Radix-ST promises significant advantages, its effectiveness depends on adoption and integration into diverse design workflows. The semiconductor industry’s reliance on global supply chains and multiple partners could challenge uniform implementation. Moreover, the tool’s focus on static analysis might not fully address runtime vulnerabilities, suggesting a need for continued innovation. Nonetheless, Radix-ST positions Cycuity as a pioneer, offering a scalable solution to a pressing challenge.

Bottom line: Cycuity’s Radix-ST is a transformative tool in semiconductor security, enabling early vulnerability detection with minimal overhead. As of August 28, 2025, it stands as a vital asset for engineers navigating the complexities of modern chip design, reinforcing the industry’s shift toward robust, proactive security measures.

About Cycuity

Cycuity is a pioneer in hardware security delivering security assurance for semiconductor devices, a rapidly increasing target for remote cyberattacks. Cycuity’s innovative Radix software products and services specify, integrate and verify security across the hardware development lifecycle to ensure robust protection for the chips powering today’s sophisticated electronic systems. Radix uncovers security weaknesses across all levels, from block and subsystem to full system-on-chip (SoC) and firmware, enabling our customers to identify and resolve risks prior to manufacturing. Serving both commercial and defense industries, Cycuity provides the broadest security assurance across the design supply chain. For more information, please visit https://cycuity.com.

Also Read:

Video EP9: How Cycuity Enables Comprehensive Security Coverage with John Elliott

Security Coverage: Assuring Comprehensive Security in Hardware Design

Podcast EP287: Advancing Hardware Security Verification and Assurance with Andreas Kuehlmann

Leveraging Common Weakness Enumeration (CWEs) for Enhanced RISC-V CPU Security


Smart Verification for Complex UCIe Multi-Die Architectures

by Admin on 09-08-2025 at 10:00 am


By Ujjwal Negi – Siemens EDA

Multi-die architectures are redefining the limits of chip performance and scalability through the integration of multiple dies into a single package to deliver unprecedented computing power, flexibility, and efficiency. At the heart of this transformation is the Universal Chiplet Interconnect Express (UCIe) protocol, which enables high-bandwidth, low-latency communication between dies. But with innovation comes complexity: inter-die interactions, evolving protocol features, and the need to verify behavior across multiple abstraction layers create significant verification challenges.

This article examines the core verification hurdles in UCIe-based multi-die systems and explains how Questa™ One Avery™ Verification IP (Avery VIP) provides a protocol-aware, automated, and layered verification framework. It supports block-level to full system-level validation—combining automation, advanced debug tools, and AI-driven coverage analysis to accelerate verification closure.

Verification Hurdles in Multi-Die Systems

In traditional verification, individual design blocks are validated in isolation. Multi-die architectures require a shift toward verifying the entire system, ensuring that inter-die communication, synchronization, and interoperability between protocols, such as PCIe, CXL, CHI, and UCIe, function seamlessly. The challenge grows as the UCIe specification evolves—UCIe 2.0 and later versions introduce features such as the management transport protocol, lane repair, autonomous link training, and advanced sideband handling—all of which require verification environments to remain adaptable and up to date.

The test environment must also offer flexibility to simulate both standard operating conditions (such as steady-state data transfers and control signaling) and edge cases (like misaligned lanes or failed link training). Fine-grained controllability is essential, allowing errors to be injected and traffic patterns manipulated to test robustness—especially since faults can propagate across dies and protocol layers.

In addition, system-level performance metrics like cross-die latency, throughput, and bandwidth must be monitored in real time to ensure the design meets its performance targets under diverse workloads. This performance validation needs to be continuous and consistent across various traffic patterns and system states.
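As a rough illustration of what such continuous performance monitoring computes, here is a minimal sketch deriving latency and throughput from timestamped transactions. The `Txn` record and both functions are invented for illustration, not any vendor API:

```python
# Minimal sketch of the bookkeeping behind a performance logger: derive
# cross-die latency and throughput from timestamped transactions.
from dataclasses import dataclass

@dataclass
class Txn:
    start_ns: float   # time the packet entered the link
    end_ns: float     # time it was accepted at the far die
    bytes_moved: int  # payload size

def latency_ns(txns):
    """Mean cross-die latency over the observed window."""
    return sum(t.end_ns - t.start_ns for t in txns) / len(txns)

def throughput_gbps(txns):
    """Aggregate throughput over the window; bits per ns equals Gbit/s."""
    span_ns = max(t.end_ns for t in txns) - min(t.start_ns for t in txns)
    return sum(t.bytes_moved for t in txns) * 8 / span_ns

txns = [Txn(0, 10, 64), Txn(5, 18, 64), Txn(12, 25, 128)]
```

A real logger would additionally window these measurements per traffic pattern or simulation phase, as the article notes, rather than over a single global span.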

Configuration space management presents another challenge, as multi-die systems require synchronized register updates across dies, including real-time error reporting and runtime reconfiguration. Finally, verification must be able to scale from simulation, where deep debug visibility is possible, to emulation and hardware prototyping, for speed and real-world validation.

Accelerating Multi-Die Verification with Questa One Avery UCIe VIP

The Questa One Avery UCIe VIP is built to handle the full spectrum of multi-die verification needs through a layered, configurable framework.

Figure 1. Layered UCIe verification framework.

It supports multiple bus functional models (BFMs) across diverse DUT types. At the block level, Avery VIP can verify standalone components such as the Logical PHY (LogPHY), D2D adapter, and protocol layers. At the die or system level, it operates in two modes: a full-stack mode for end-to-end testing of the D2D adapter, LogPHY, mainband, and sideband (with or without the raw die-to-die interface, RDI), and a No-LogPHY/RDI2RDI mode for direct, higher-level protocol testing without physical link dependencies.

Figure 2. Avery VIP use models.

To accelerate customer testbench deployment, the Questa One VIP Configurator can generate a UVM-compliant testbench in just a few clicks, enabling verification to begin almost immediately. It allows visual DUT–VIP connectivity mapping with automatic signal binding, intuitive GUI-based BFM configuration, and selection of pre-built sequences aligned with the DUT’s use model. This eliminates extensive manual setup and provides a ready-to-run environment from the outset.

Figure 3: GUI-based Avery VIP Configurator.

Avery VIP also provides layer-specific and feature-specific callbacks, enabling on-the-fly traffic manipulation and precise error injection, whether that means corrupting FLITs, altering parameter exchanges, or testing abnormal traffic patterns. Real-time monitoring and dynamic scoreboarding help track system reactions to these injected conditions.
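The callback pattern described above is common across verification IP. A minimal sketch of the idea, in Python rather than SystemVerilog and with invented names (not the actual Avery VIP interface), might look like this:

```python
import random

# Sketch of callback-based error injection in a flit driver. Names and
# structure are invented for illustration, not the Avery VIP interface.
class FlitDriver:
    def __init__(self):
        self.pre_send_callbacks = []  # hooks invoked before each flit is driven
        self.sent = []

    def register(self, cb):
        self.pre_send_callbacks.append(cb)

    def send(self, flit: bytearray):
        for cb in self.pre_send_callbacks:
            cb(flit)                  # callbacks may mutate the flit in place
        self.sent.append(bytes(flit))

def corrupt_one_bit(rng):
    """Return a callback that flips one random bit, e.g. to stress CRC checks."""
    def cb(flit):
        i = rng.randrange(len(flit) * 8)
        flit[i // 8] ^= 1 << (i % 8)
    return cb

driver = FlitDriver()
driver.register(corrupt_one_bit(random.Random(42)))  # seeded for reproducibility
clean = bytes(16)                                    # a 16-byte all-zero "flit"
driver.send(bytearray(clean))
```

A monitor or scoreboard on the receiving side would then be expected to flag the mismatch, which is exactly the system reaction the injected error is meant to provoke.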

For compliance and coverage, Avery VIP includes a comprehensive Compliance Test Suites (CTS) framework. Questa™ One Avery CTS is organized by DUT type, protocol mode, and direct specification references. The protocol test suite contains more than 500 checks for UCIe core layers and approximately 3,000 checks for PCIe, CXL, and AXI interactions. Runtime configurability allows protocol features, FLIT formats, or topology to be adjusted on the fly without rebuilding the environment.

The Avery CTS smart API layer further simplifies test creation, offering high-level functions to monitor link training, access configuration or memory spaces, control link state transitions, generate placeholder protocol traffic, and inject packets at precise protocol states.

Debugging and performance tracking are built in. Avery VIP features protocol-aware transaction recording, correlating transactions across layers and highlighting errors with associated signal activity. FSM recording logs internal state transitions such as LTSSM, FDI, and RDI events, complete with timestamps and annotations. Layer-specific debug trackers focus on LogPHY, the D2D adapter, FDI/RDI drivers, parity checks, and configuration space accesses. The performance logger provides real-time throughput, latency, and bandwidth measurements, and can focus on specific simulation phases for targeted analysis.

Figure 4: Protocol-aware transaction association.

Finally, an AI-driven verification co-pilot is integrated through Questa One Verification IQ, acting as an intelligent assistant throughout the verification cycle. The Questa One Verification IQ Coverage Analyzer leverages machine learning to automatically identify and prioritize coverage gaps, rank tests based on their contribution to overall coverage, and detect recurring failure signatures for faster root-cause analysis. The system visualizes coverage data through intuitive heatmaps and bin distribution charts, enabling teams to make data-driven decisions, focus effort on high-impact areas, and continuously refine their verification strategy for maximum efficiency.

Figure 5: Questa One VIQ Coverage Analyzer.

Driving the Future of Multi-Die with UCIe Verification

As an open standard, UCIe is rapidly becoming the cornerstone of heterogeneous integration, allowing chipmakers to combine chiplets from different vendors into a single package with seamless interoperability. By standardizing the interconnect layer, UCIe unlocks a new wave of innovation in data centers, AI accelerators, and high-performance computing, paving the way for designs that are more scalable, energy-efficient, and cost-optimized than traditional monolithic approaches.

The Questa One Avery UCIe VIP is built to handle the demands of this rapidly evolving landscape. It scales effortlessly from block-level to full system-level verification, streamlines environment setup, allows precise error injection, and brings in AI-driven coverage analysis to speed up closure. With its mix of compliance suites, flexible runtime configuration, and powerful debug tools, it gives verification teams the confidence to hit performance, compliance, and reliability goals, helping the industry move toward a future where multi-die systems aren’t the exception but the standard.

For a deeper dive into the full verification architecture, download the complete whitepaper: Accelerating UCIe Multi-Die Verification with a Scalable, Smart Framework.

Ujjwal Negi is a senior member of the technical staff at Siemens EDA based in Noida, specializing in storage and interconnect technologies. With over two years of hands-on experience, she has contributed extensively to NVMe, NVMe over Fabrics, and UCIe Verification IP solutions. She graduated with a Bachelor of Technology degree from Bharati Vidyapeeth College of Engineering in 2023.

Also Read:

Orchestrating IC verification: Harmonize complexity for faster time-to-market

Perforce and Siemens at #62DAC

Breaking out of the ivory tower: 3D IC thermal analysis for all

 


PDF Solutions Adds Security and Scalability to Manufacturing and Test

by Mike Gianfagna on 09-08-2025 at 6:00 am


Everyone knows design complexity is exploding. What used to be difficult is now bordering on impossible. While design and verification challenges occupy a lot of the conversation, the problem is much bigger than this. The new design and manufacturing challenges of 3D innovations and the need to coordinate a much more complex global supply chain are examples of the breadth of what lies ahead. The opportunity of ubiquitous use of AI creates even more challenges. I’m not referring to designing AI chips and systems, but rather how to use AI to increase the chances of design success for those systems.

Data is the new oil, and the foundation of AI. That statement is quite relevant here. What is needed to tame these challenges is a way to collect data upstream in that complex global supply chain and find ways to extract insights that will be used to drive decisions and actions further down the supply chain. Doing that across a complex supply chain requires a highly scalable and secure platform. It turns out PDF Solutions has been quietly building what’s needed to make all this happen. Let’s examine what they’ve been doing and what comes next as PDF Solutions adds security and scalability to manufacturing and test solutions.

Building the Foundation

There are two key products from PDF Solutions that build an infrastructure for the future. A bit of background will set the stage.

First is the Exensio® Analytics Platform, which automatically collects, analyzes and detects statistical changes in manufacturing test, assembly, and packaging operations that negatively affect product yield, quality, or reliability, wherever those operations occur.

The suite of modules in this product enables fabless companies, IDMs, foundries, and OSATs to securely collect and manage test data, assure high outgoing quality, deploy machine learning to the edge, establish controls around the test process, and improve efficiency across the manufacturing supply chain.

The machine learning capabilities will be key to what comes next. More on that in a moment.  The product has quite a large footprint across manufacturing and test operations worldwide.

Of course, a massive data analytics platform like this needs many sources of data across the worldwide supply chain to power it. Enabling this part of the platform is DEX, PDF’s data exchange network. DEX is an infrastructure deployed globally at OSAT locations that connects to potentially every test cell or piece of test and assembly equipment across the worldwide supply chain. It automatically delivers test rules, models, and recipes, and brings all of that test data back to Exensio in real time. There, the data is normalized using PDF Solutions’ semantic model, which ensures it is always complete, consistent, and ready for analysis. DEX is, if you will, the plumbing that carries test rules, models, and data for Exensio.

I began my career in test data analytics. Back then, the number and type of testers that were part of that process was extremely small compared to what exists today. In spite of that, I can tell you that interfacing to each source of test data, acquiring the data and ensuring it was accurate and timely was a huge challenge. The problem has gotten much, much bigger. DEX is a combination of hardware and software developed by PDF Solutions that interfaces to all data sources, validates the information and transports it to the required location for timely analysis. This is a very complex process.
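The normalization step is the crux of that process. PDF Solutions' actual semantic model is proprietary, so the sketch below is only a toy illustration of the general technique: per-source adapters translate each tester's native field names and units into one common schema, after which records from different equipment are directly comparable.

```python
# Toy illustration of normalizing heterogeneous test records into one
# schema before analysis. Field names, units, and the mapping are
# hypothetical; the real semantic model is proprietary.
from dataclasses import dataclass

@dataclass(frozen=True)
class TestRecord:  # common schema all sources normalize to
    lot_id: str
    unit_id: str
    test_name: str
    value: float
    unit: str

# Per-source adapters translate native field names and units.
def from_tester_a(raw: dict) -> TestRecord:
    return TestRecord(raw["LOT"], raw["SN"], raw["TEST"],
                      float(raw["VAL_MV"]) / 1000.0, "V")  # mV -> V

def from_tester_b(raw: dict) -> TestRecord:
    return TestRecord(raw["lot"], raw["serial"], raw["param"],
                      float(raw["volts"]), "V")

a = from_tester_a({"LOT": "L1", "SN": "U7", "TEST": "vdd", "VAL_MV": "905"})
b = from_tester_b({"lot": "L1", "serial": "U8", "param": "vdd", "volts": "0.91"})
assert a.unit == b.unit  # records are now directly comparable
```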

The system is optimized to handle the unique challenges of semiconductor data acquisition, enabling PDF Solutions to manage petabytes of information in the cloud. The diagram below illustrates how DEX feeds critical data to Exensio.

How DEX feeds critical data to Exensio

What’s Next

Systems the size and scope of Exensio and DEX have already created a substantial impact on the semiconductor industry. The graphic at the top of this post illustrates some of the areas that have benefited from these systems. The next chapter of this story will focus on a concept called Data Feed Forward or DFF. PDF sees DFF providing an enterprise solution to collect, transform and distribute data across a customer’s supply chain to enable advanced test methodologies utilizing feed-forward data.

Using PDF’s global infrastructure, test data can be captured upstream and transported to the edge to be used to inform optimized strategies for downstream testing. The concept of feed forward data enablement isn’t new. It has been used successfully to optimize physical implementation by using early data to inform estimates for late-stage implementation.  In the case of manufacturing and test, a globally available, secure and scalable data infrastructure can be used to optimize test in the chiplet and advanced packaging era.

We are seeing the dawn of AI-driven test from PDF Solutions based on this technology. The goal of AI-driven test is to save time, reduce cost and improve quality. The diagram below summarizes the current three focus areas for this capability.

Digging Deeper

Dr. Ming Zhang

I had the opportunity recently to speak with Ming Zhang, vice president of Fabless Solutions at PDF. I’ve worked with Ming in the past and I’ve always found him to be insightful with a strong view of the big picture and what it all means. So, getting his explanation of AI for test was particularly valuable. Here’s what I learned about the three AI-driven test solutions summarized above.

Ming explained that Predictive Test has an optimization focus. As we all know, test insertions are expensive in terms of manufacturing time and tester cost. Most test strategies apply the same test program to all chips. But what if that could be optimized? Using feed-forward data, the parts of a design that are robust and the parts that are potentially weak can both be identified. Using this kind of unit-specific data, a test program can be developed that is optimized for the specific part. Ming explained that this means testing for parts of the design demonstrated to be robust can be minimized or even skipped.

On the other hand, parts of the design that showed potential weakness could be tested more rigorously. In the end, the actual test time might be the same as before, but the tests applied would be more effective at finding good and bad die, so quality would improve for the same, or lower test cost.

We then discussed Predictive Binning. Ming described this approach as a straight cost-saving measure. He explained that, based on early data, chips that are likely to fail a test step can be removed, or binned out, before that step. This avoids spending tester time on parts that would fail anyway, reducing overall test cost.

We then discussed Predictive Burn-in. This is another cost saving measure. In this case, the devices that have exhibited robust behavior can be identified as not requiring burn-in. Since this process requires hundreds to thousands of hours using expensive measurement and ambient control equipment, a significant cost saving can be realized by avoiding burn-in where it’s not necessary.
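The three predictive techniques Ming described share one shape: use upstream data to choose a downstream action per unit. A toy sketch of that decision, with a made-up health score and thresholds chosen purely for illustration (real models are trained on production data):

```python
# Toy per-unit test-flow decision driven by an upstream "health" score,
# e.g. derived from wafer-sort parametrics. Thresholds are invented for
# illustration only.
def plan_test_flow(health: float) -> dict:
    if health < 0.2:
        # Predictive binning: likely failure, avoid downstream test cost.
        return {"bin_out": True, "test_profile": None, "burn_in": False}
    if health > 0.9:
        # Predictive test + burn-in: robust unit, reduced test, skip burn-in.
        return {"bin_out": False, "test_profile": "reduced", "burn_in": False}
    # Marginal unit: full test program plus burn-in.
    return {"bin_out": False, "test_profile": "full", "burn_in": True}

assert plan_test_flow(0.95)["burn_in"] is False
assert plan_test_flow(0.1)["bin_out"] is True
assert plan_test_flow(0.5)["test_profile"] == "full"
```

The savings come from the two extremes: strong units skip expensive steps, weak units exit early, and only marginal units get the full-cost flow.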

Ming pointed out that all of these technologies apply advanced AI algorithms to the massive worldwide manufacturing database managed by PDF Solutions. The initial AI models are developed by PDF. Ming went on to explain that some of its customers want to build proprietary models to drive test decisions as well. PDF supports this process too, creating an environment where customers can build their own AI-for-test models and algorithms to enhance competitiveness.

I was quite energized after my discussion. Ming painted a rather upbeat and exciting view of what’s ahead. By the way, if you happen to be in Taiwan on September 12, Ming will be presenting this work at the SEMICON event there.

The Last Word

Dr. John Kibarian

PDF Solutions’ president, CEO and co-founder Dr. John Kibarian made a comment recently that steps back a bit and takes a bigger picture view of where this can all go. He said:

“Collaboration from the system company, all the way back to the equipment vendor, is important today in order to get out new products. Then once those products are launched, the collaboration for the ongoing maintenance of that production flow is requiring a lot more effort. The collaboration required is going to go up. It will be doable at scale only because the industry is going to require it to be doable.”

The vision painted by John is quite exciting as PDF Solutions adds security and scalability to manufacturing and test.

Also Read:

PDF Solutions and the Value of Fearless Creativity

Podcast EP259: A View of the History and Future of Semiconductor Manufacturing From PDF Solution’s John Kibarian


Revolutionizing Processor Design: Intel’s Software Defined Super Cores

by Admin on 09-07-2025 at 2:00 pm

Intel European CPU Patent Application

In the ever-evolving landscape of computing, Intel’s patent application for “Software Defined Super Cores” (EP 4 579 444 A1) represents a groundbreaking approach to enhancing processor performance without relying solely on hardware scaling. Filed in November 2024 with priority from a U.S. application in December 2023, this innovation addresses the inefficiencies of traditional high-performance cores, which often sacrifice energy efficiency for speed through frequency turbo boosts. By virtually fusing multiple cores into a “super core,” Intel proposes a hybrid software-hardware solution that aggregates instructions-per-cycle (IPC) capabilities, enabling energy-efficient, high-performance computing. This essay explores the concept, mechanisms, benefits, and implications of Software Defined Super Cores (SDC), highlighting how they could transform modern processors.

The background of this patent underscores persistent challenges in processor design. High-IPC cores, while powerful, depend heavily on process technology node scaling, which is becoming increasingly difficult and costly. Larger cores also reduce overall core count, limiting multithreaded performance. Hybrid architectures, like those blending performance and efficiency cores, attempt to balance single-threaded (ST) and multithreaded (MT) needs but require designing and validating multiple core types with fixed ratios. Intel’s SDC circumvents these issues by creating virtual super cores from neighboring physical cores—typically of the same class, such as efficiency or performance cores—that execute portions of a single-threaded program in parallel while maintaining original program order at retirement. This gives the operating system (OS) and applications the illusion of a single, larger core, decoupling performance gains from physical hardware expansions.

At its core, SDC operates through a synergistic software and hardware framework. The software component—potentially integrated into just-in-time (JIT) compilers, static compilers, or even legacy binaries—splits a single-threaded program into instruction segments, typically around 200 instructions each. Flow control instructions, such as conditional jumps checking a “wormhole address” (a reserved memory space for inter-core communication), steer execution: one core processes odd segments, the other even ones. Synchronization operations ensure in-order retirement, with “sync loads” and “sync stores” enforcing global order. Live-in loads and live-out stores handle register dependencies, transferring necessary data via special memory locations without excessive overhead (estimated at under 5%). For non-linear code, like branches or loops, indirect branches or wormhole loop instructions dynamically re-steer cores, using predicted targets or stored program counters to maintain parallelism.
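The odd/even segment steering can be modeled in a few lines. The sketch below is a toy, not the patented mechanism: it omits the wormhole synchronization, live-in/live-out transfers, and branch re-steering entirely, and just shows how a single-threaded stream splits into alternating segments across two cores while retirement preserves the original program order.

```python
# Toy model of the patent's odd/even segment split: a single-threaded
# instruction stream is cut into fixed-size segments, alternating
# segments are assigned to two cores, and results are retired in the
# original program order. The real mechanism synchronizes via wormhole
# loads/stores; none of that is modeled here.
SEGMENT = 4  # the patent suggests ~200 instructions; small here for clarity

def split_segments(program: list[str], size: int = SEGMENT) -> list[list[str]]:
    return [program[i:i + size] for i in range(0, len(program), size)]

def run_super_core(program: list[str]) -> list[str]:
    segments = split_segments(program)
    core0 = [s for i, s in enumerate(segments) if i % 2 == 0]  # even segments
    core1 = [s for i, s in enumerate(segments) if i % 2 == 1]  # odd segments
    # Each core executes its segments independently (modeled as identity);
    # retirement then interleaves segments back into program order.
    retired: list[str] = []
    for i in range(len(segments)):
        src = core0 if i % 2 == 0 else core1
        retired.extend(src[i // 2])
    return retired

prog = [f"insn{i}" for i in range(10)]
assert run_super_core(prog) == prog  # program order preserved at retirement
```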

Hardware support is minimal yet crucial, primarily enhancing the memory execution unit (MEU) with SDC interfaces. These interfaces manage load-store ordering, inter-core forwarding, and snoops, using a shared “wormhole” address space for fast data transfers. Cores may share caches or operate independently, but the system guarantees memory ordering and architectural integrity. The OS plays a pivotal role, provisioning cores based on hardware-guided scheduling (HGS) recommendations, migrating threads to SDC mode when beneficial (e.g., for high-IPC phases), and reverting if conditions change, such as increased branch mispredictions or system load demanding more independent cores.

The benefits of SDC are multifaceted. Energy efficiency improves by allowing longer turbo bursts or operation at lower voltages, as aggregated IPC reduces the need for frequency scaling. Flexibility is a key advantage: platforms can dynamically adjust between high-ST performance (via super cores) and high-MT throughput (via individual cores), adapting to workloads without fixed hardware ratios. Unlike prior multi-threading decompositions, which incurred 25-40% instruction overheads from replication, SDC minimizes redundancy, focusing on explicit dependencies. This could democratize high-performance computing, reducing reliance on advanced process nodes and enabling scalable designs in data centers, mobile devices, and AI accelerators.

However, challenges remain. Implementation requires precise software splitting to minimize communication overhead, and hardware additions, though small, must be validated for reliability. Compatibility with diverse instruction set architectures (ISAs) via binary translation is mentioned, but real-world deployment may face OS integration hurdles.

In conclusion, Intel’s Software Defined Super Cores patent heralds a paradigm shift toward software-centric processor evolution. By blending virtual fusion with efficient inter-core communication, SDC promises to bridge the gap between performance demands and hardware limitations, fostering more adaptable, efficient computing systems. As technology nodes plateau, innovations like this could define the next era of processors, empowering applications from AI to everyday computing with unprecedented dynamism.

You can see the full patent application here.

Also Read:

Intel’s Commitment to Corporate Responsibility: Driving Innovation and Sustainability

Intel Unveils Clearwater Forest: Power-Efficient Xeon for the Next Generation of Data Centers

Intel’s IPU E2200: Redefining Data Center Infrastructure

Revolutionizing Chip Packaging: The Impact of Intel’s Embedded Multi-Die Interconnect Bridge (EMIB)


TSMC’s 2024 Sustainability Report: Pioneering a Greener Semiconductor Future

by Admin on 09-07-2025 at 8:00 am

TSMC Sustainability Report 2024-2025

TSMC, the world’s most trusted semiconductor foundry, released its 2024 Sustainability Report, underscoring its commitment to embedding environmental, social, and governance principles into its operations. Founded in 1987 and headquartered in Hsinchu Science Park, TSMC employs 84,512 people globally and operates facilities across Taiwan, China, the U.S., Japan, and Europe. The report, spanning 278 pages, highlights TSMC’s role as an “innovation pioneer, responsible purchaser, practitioner of green power, admired employer, and power to change society.” Amid rising global risks like extreme weather, as noted in the World Economic Forum’s Global Risks Report, TSMC emphasizes multilateral cooperation to advance sustainability, aligning with UN Sustainable Development Goals (SDGs).

In letters from ESG Steering Committee Chairperson C.C. Wei and ESG Committee Chairperson Lora Ho, TSMC reaffirms sustainability as core to its resilience and competitiveness. Wei stresses that ESG is embedded in every decision, driving net zero emissions by 2050 and carbon neutrality. The company saved 104.2 billion kWh globally in 2024 through efficient chips, equivalent to 44 million tons of reduced carbon emissions. By 2030, each kWh used in production is projected to save 6.39 kWh worldwide. Ho highlights collaborations across five ESG directions: green manufacturing, responsible supply chains, inclusive workplaces, talent development, and care for the underprivileged.

Environmentally, as a “practitioner of green power,” TSMC focuses on climate and energy (pages 108-123), water stewardship (pages 124-134), circular resources (pages 135-146), and air pollution control (pages 147-153). It deployed 1,177 energy-saving measures, achieving 810 GWh in annual savings and 13% renewable energy usage, targeting 60% by 2030 and RE100 by 2040. Scope 1-3 emissions reductions follow SBTi standards, with 2025 as the baseline for absolute cuts by 2035. A new carbon reduction subsidy for Taiwanese tier-1 suppliers and the GREEN Agreement for 90% of raw material emitters aim to slash Scope 3 emissions. Water-positive goals by 2040 include a 2.7% reduction in unit consumption and 100% reclaimed water systems. Circular efforts recycled 97% of waste globally, transforming 9,400 metric tons into resources, while volatile organic compounds and fluorinated GHGs saw 99% and 96% reductions, respectively.

Socially, TSMC positions itself as an “admired employer” (pages 155-202), fostering an inclusive workplace with a Global Inclusive Workplace Statement and campaigns on action, equity, and allyship. It conducted a global Workplace Human Rights Climate Survey and expanded human rights due diligence to suppliers, incorporating metrics into long-term goals. Women comprise 40% of employees, with targets for over 20% in management. Talent development averaged 90 learning hours per employee, with programs like the Senior Manager Learning and Development achieving 90-point satisfaction. Occupational safety maintained an incident rate below 0.2 per 1,000 employees, enhanced by 24/7 ambulances and diverse protective gear. As a force for societal change (pages 204-232), TSMC’s foundations benefited 1,391,674 people through 171 initiatives, investing NT$2.441 billion. Social impact assessments using IMP and IRIS+ frameworks supported STEM education, elderly care, and SDG 17 partnerships.

Governance-wise (pages 234-251), TSMC reported NT$2.95 trillion in revenue and NT$1.17 trillion in net income, with 69% from advanced 7nm-and-below processes. R&D spending hit US$6.361 billion, up 3.1-fold in a decade. The ESG Performance Summary (pages 263-271) details metrics like 100% supplier audits and top rankings in DJSI and MSCI ESG.

Bottom line: The report showcases TSMC’s 2024 achievements: 11,878 customer innovations, 96% customer satisfaction, and NT$2.45 trillion in Taiwanese economic output, creating 358,000 jobs. Despite challenges like geopolitical tensions, TSMC’s net zero roadmap and inclusive strategies position it as a sustainability leader, driving shared value for stakeholders and a resilient future.

Also Read:

TSMC 2025 Update: Riding the AI Wave Amid Global Expansion

TSMC Describes Technology Innovation Beyond A14

TSMC Brings Packaging Center Stage with Silicon


Intel’s Commitment to Corporate Responsibility: Driving Innovation and Sustainability

by Admin on 09-07-2025 at 6:00 am

Intel Corporate Responsibility Report 2025

Introduction by Lip-BU Tan:

I’m an engineer at heart. Nothing motivates me more than solving hard problems. Our teams across Intel are driven by this same mindset—inspired by the power of technology to enable new solutions to our customers’ toughest challenges.

I fundamentally believe this will be a catalyst for innovation throughout our company for years to come—and every day brings new opportunities for us to improve.

In our 2024-2025 Corporate Responsibility Report, you will see we have made important progress in many areas. We are driving greater compute performance in our products while improving their power efficiency. Our water conservation and use of renewable energy is supporting a resilient and sustainable manufacturing footprint. And our close collaboration with partners across our value chain is helping customers to achieve their own sustainability goals.

But this work is never done—and we have a lot of hard work ahead as we take actions to reshape our company, strengthen our culture, and empower our engineers to do what they do best.

Underpinning this work is a consistent focus on technology, sustainability, and talent investments aligned with our long-term goals. At the end of the day, our work in these areas drives innovation and growth—because when great people engineer great products to delight our customers, we strengthen our business and help meet the needs of a changing world.

I am looking forward to the work ahead as we build a new Intel. Thank you for your feedback and your partnership.

Lip-Bu Tan

Chief Executive Officer

In the 2024-25 Corporate Responsibility Report, Intel outlines a refreshed strategy under new CEO Lip-Bu Tan, emphasizing transformation amid global challenges. Founded on principles of transparency, ethics, and human rights, the report highlights Intel’s integrated approach to corporate responsibility, aligning with frameworks like the UN Sustainable Development Goals and Global Reporting Initiative. With a targeted workforce of 75,000 employees worldwide, Intel’s efforts span people, sustainability, and technology, demonstrating how a tech giant can balance business growth with societal impact.

Central to Intel’s people-focused initiatives is fostering an inclusive, safe culture. The company invests in talent development, aiming to retain top performers through consistent indicators across talent systems. In 2024, Intel’s undesired turnover rate was 5.9%, reflecting ongoing economic pressures but also progress in employee engagement. Programs like Inclusive Leaders delivered 120 workshops to 2,271 participants, promoting dignity and respect, with 91% of employees reporting respectful treatment. Safety remains paramount, with a recordable injury rate of 0.71 per 100 employees, below the industry average. Intel’s global volunteer program, Intel Involved, saw employees contribute over 830,000 hours, amplified by $7.8 million in Foundation matches. Philanthropy totaled $79.5 million, supporting STEM education and humanitarian relief. Respecting human rights, Intel conducted 252 supplier audits, addressing forced labor and returning over $200,000 in fees to workers. Responsible minerals sourcing achieved 99% conformance, expanding to include aluminum and copper.

Sustainability efforts underscore Intel’s environmental stewardship. Achieving 98% renewable electricity globally, Intel reduced Scope 1 and 2 GHG emissions by 24% from 2019, avoiding 84% of cumulative emissions over the decade. The Climate Transition Action Plan targets net-zero Scope 1 and 2 by 2040. Energy conservation saved 2.4 billion kWh since 2020, with $104 million invested yielding $150 million in savings. Water stewardship restored net positive water in the US, India, Costa Rica, and Mexico, conserving 10.5 billion gallons and restoring 2.9 billion through projects like Oregon’s McKenzie River enhancement. Waste management upcycled 66% of manufacturing streams, diverting 74,000 tons from landfills via circular economy practices. Supply chain sustainability engaged 140 suppliers on GHG reductions, with 99% responding to CDP questionnaires. Responsible chemistry addressed PFAS through collaborations like the NSTC’s PRISM program, promoting safer alternatives.

Technology initiatives leverage Intel’s expertise for societal good. Product energy efficiency improved 4.0X for clients and 2.7X/3.0X for servers from 2019 baselines, reducing Scope 3 emissions. Responsible AI evolved with a new “Protect the Environment” principle, operationalizing governance for risks like bias. Intel’s Digital Readiness Programs trained 8 million in AI skills across 29 countries, emphasizing inclusion. The Intel Responsible Technology Initiative funded 465 projects in 42 countries, addressing health, education, and climate via solutions like AI for veterans’ transitions and water monitoring in mining.

Intel’s report reflects resilience amid challenges like economic pressures and geopolitical risks. By integrating responsibility into operations, Intel not only mitigates risks but also drives innovation, as seen in CHIPS Act awards bolstering US manufacturing. Looking ahead, Intel’s ambitions—net-zero emissions, inclusive AI, and resilient supply chains—position it as a leader in sustainable tech. This holistic approach ensures Intel enriches lives while building a stronger future for all stakeholders.

See the full report here.

Also Read:

Revolutionizing Chip Packaging: The Impact of Intel’s Embedded Multi-Die Interconnect Bridge (EMIB)

Intel’s Pearl Harbor Moment

Should the US Government Invest in Intel?


CEO Interview with Rabin Sugumar of Akeana

by Daniel Nenni on 09-06-2025 at 10:00 am


Rabin Sugumar was Distinguished Engineer and Chief Architect at Marvell/Cavium, where he built and led the architecture group for the ThunderX Arm server processor line. Most recently he led the architecture of the ThunderX3 processor, which had industry-leading single-thread and socket-level performance at the time of silicon.

Prior to Marvell/Cavium, he was at Broadcom where he was one of the lead architects on the server processor that became ThunderX2. ThunderX2 was the first Arm server to achieve single thread and socket level performance comparable to high end Intel Xeon servers and paved the way for Arm servers in the data center. During Rabin’s career, he has also worked on architecture and design of vector processors at Cray Research, early multi-threaded and out-of-order SPARC processors at Sun Microsystems, and InfiniBand adapters at Sun Microsystems/Oracle.

Rabin obtained his PhD in Computer Science and Engineering from the University of Michigan. The cache simulator he wrote during his PhD is still widely used in academia. Rabin has over 25 years of experience in CPU architecture and design, and 28 granted patents.

Tell us about your company?

Akeana was founded over 4 years ago and came out of stealth in August 2024. We were formed with an engineering team that has been together for over 20 years, with a proven record of pioneering processor development. If you ever wondered where the Cavium/Marvell ThunderX2 processor team went, well, they are at Akeana now.

Akeana is a provider of RISC-V ISA based processors and subsystems IP. We offer a broad range of cores: the ultra-small 100-series, the compute/data-movement-optimized 1000-series, and the ultra-high-performance 5000-series. On top of core products, we offer interconnect fabrics (coherent and non-coherent), system IP blocks, and application-specific hardware accelerators. We are unique in the RISC-V world in offering the same breadth of processor and interconnect IP as ARM.

What problems are you solving?

We solve the performance problem, which falls into two main categories: sheer performance, and optimal performance per area/power. For sheer performance we offer industry-leading RISC-V processors, as well as scaling to large multi-core systems. Optimal performance comes from efficient compute per area/power, but also from data-movement optimizations and customizations. This is provided with highly efficient, customizable RISC-V cores and accelerators as well as optimized connectivity fabrics in multi-core systems.

Customers may further configure and customize the processors and systems to meet their compute and data PPA (performance, power and area) points.

What application areas are your strongest?

We thrive in a broad range of applications that require high-performance, customized compute subsystems. For example, AI chips or chiplets used in datacenter and edge AI, where compute, accelerators, and interconnect fabrics need to be combined in different ways to optimize execution of relevant AI models. We have also seen great traction for Akeana in automotive; the ‘server on wheels’ trend in that market intersects very well with Akeana’s strengths. Physical AI is another area of focus with our customers and partners, where our ability to configure a core for real-time, low-latency compute sets us apart. And the traditional high-performance compute use cases in datacenter and mobile are becoming increasingly RISC-V friendly; we play well in this area given our efficient performance and strong focus on compliance with standards.

What keeps your customers up at night?

Computation and algorithm needs are changing so quickly, especially with the usage of AI algorithms in a broad range of applications. Customers see their competitors constantly evolving and supporting newer AI models, use cases and fear losing ground in this evolving landscape. With Akeana customizable RISC-V cores and multi-core fabrics we can provide customers with the performance points they need for compute and data movement, as well as the flexibility to respond to market and algorithm changes. RISC-V is emerging as an AI native heterogeneous hardware-software co-design platform, and this is what Akeana is enabling for our AI chip customers.

Some other customers have different worries: their SoCs are battery powered, so they are power sensitive. These customers need software flexibility and high-performance compute and data movement, but at the lowest power possible. With our highly efficient compute and data-movement systems, we enable customers to operate at very low power with maximum efficiency. Other customers need customized AI solutions for their use cases but have not yet built the in-house expertise to architect and design them. For such customers, Akeana functions as a design expert resource, helping them through architecture and performance decisions and providing design assistance to pull together AI subsystems.

What does the competitive landscape look like, and how do you differentiate?

The competitive landscape is changing with many recent acquisitions of RISC-V based solution providers. These acquisitions contribute to an increase in RISC-V adoption and validate the overall RISC-V approach to compute.

We differentiate on performance and the completeness of our IP solutions where we are able to pull together entire compute subsystems all developed and designed to work well together. We also have a strong design team that is able to work with customers to architect, design, and deliver fully validated subsystems.

What new features/technology are you working on?

We have just integrated Simultaneous Multithreading (SMT) into our cores as an option for customers who want the latency hiding and concurrency that SMT enables. We exhibited this functionality in multiple demos at the recent RISC-V Summit in Santa Clara.

I am really excited to announce that one of our partners has just taped out a server-class SoC using our Akeana 5000 and Akeana 1000 series cores with multithreading, along with our multi-core coherent mesh interconnect fabrics. We believe this SoC will be a pioneering RVA23-compliant multi-core RISC-V server chip. Once the silicon is available on boards, our software partners are keen to begin porting and running code on it.

We are also working on Automotive Functional Safety compliance, taking our products through certification for our automotive customers.

How do customers normally engage with your company?

SoC development customers looking to use programmable cores reach out to us via our local sales interface or via the contact request form on our webpage (www.akeana.com). Our engagement team reviews the most suitable products for the customer's needs and jointly develops a solution with them. Customers can work with the processor IPs prior to committing to use them. If needed, the processor cores and interconnect fabrics can be configured and customized to their specific requirements.
Once the customer decides to move forward, a license agreement is completed for full delivery of our soft IP products so the customer can complete their SoC design and tape out.

RANiX Employs CAST’s TSN IP Core in Revolutionary Automotive Antenna System

RANiX Employs CAST’s TSN IP Core in Revolutionary Automotive Antenna System
by Daniel Nenni on 09-06-2025 at 8:00 am

ranix TSN SW antenna array figure

This press release from CAST announces a significant collaboration with RANiX Inc., highlighting the integration of CAST’s TSN Switch IP core into RANiX’s new Integrated Micro Flat Antenna System (IMFAS) SoC. This development underscores the growing adoption of Time-Sensitive Networking (TSN) in the automotive sector, particularly for enhancing in-vehicle communication efficiency. As someone tracking advancements in automotive electronics and IP cores, I find this release both timely and insightful, though it leans heavily on promotional language typical of industry announcements.

At its core, the release details how RANiX, a South Korean leader in automotive and IoT communication chips, has leveraged CAST’s TSN technology to synchronize and route signals from a multi-protocol antenna array. The IMFAS SoC handles diverse protocols like 5G, WiFi, GNSS/GPS, BLE, and UWB, funneling them through an Ethernet backbone to the vehicle’s Telematics Control Unit (TCU). By replacing lengthy RF cable runs with TSN-enabled Ethernet, the system promises reduced complexity, lower costs, improved signal integrity, and seamless integration into Software-Defined Vehicles (SDVs). This is a smart evolution, aligning with the industry’s shift toward zonal architectures where centralized processing dominates.

CAST’s TSN-SW Multiport Ethernet Switch IP core is positioned as the enabler here, boasting ultra-low latency, standards compliance (e.g., IEEE 802.1Q), and configurability for applications beyond antennas, such as sensor fusion in automated parking or environmental sensing. The release quotes RANiX’s CTO, No Hyoung Lee, praising CAST’s forward-thinking approach to evolving TSN standards and their Functional Safety features, crucial for ISO 26262 compliance in automotive designs. Alexander Mozgovenko, CAST’s TSN Product Manager, reciprocates by lauding RANiX’s innovative use of the core, noting their long-standing partnership since 2011. This mutual endorsement adds credibility, but it also feels somewhat scripted, as press releases often do.

From a technical standpoint, the announcement is compelling. TSN’s ability to ensure deterministic timing and prioritization in Ethernet networks addresses a key pain point in modern vehicles, where real-time data from multiple sources must coexist without interference. RANiX’s IMFAS exemplifies this by creating a “flat” antenna system that minimizes physical cabling, potentially slashing weight and assembly costs—vital for electric vehicles aiming for efficiency. Moreover, CAST’s broader IP portfolio, including CAN-FD, LIN, and ASIL-D ready cores, positions them as a one-stop shop for automotive bus controllers, which could appeal to designers seeking integrated solutions.
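Deterministic timing in TSN comes largely from time-aware shaping (IEEE 802.1Qbv), in which per-queue transmission gates open and close on a repeating schedule. As a rough illustration only, here is how such a schedule might be expressed on a Linux host using the kernel's taprio qdisc; the interface name, queue layout, and gate intervals below are hypothetical and are not drawn from the RANiX/CAST design:

```shell
# Assumed setup: a TSN-capable NIC "eth0" with multiple hardware
# queues; commands require root. Map the 16 traffic priorities onto
# 3 traffic classes (TC0 = highest-priority control traffic, TC1 =
# audio/video, TC2 = best effort), then define a 1 ms cycle in which
# each class gets an exclusive transmission window.
tc qdisc replace dev eth0 parent root handle 100 taprio \
    num_tc 3 \
    map 2 2 1 0 2 2 2 2 2 2 2 2 2 2 2 2 \
    queues 1@0 1@1 2@2 \
    base-time 1000000000 \
    sched-entry S 01 300000 \
    sched-entry S 02 300000 \
    sched-entry S 04 400000 \
    clockid CLOCK_TAI
# Each sched-entry is "S <gate-mask> <interval-ns>": mask 01 opens only
# TC0's gate for 300 us, 02 opens TC1 for 300 us, 04 opens best-effort
# traffic for 400 us, then the cycle repeats.
```

In a real deployment this kind of schedule is paired with network-wide clock synchronization (IEEE 802.1AS/gPTP) so that every switch and endpoint agrees on when each window starts, which is what makes the latency deterministic end to end.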

However, the release has limitations. It lacks quantitative data, such as specific latency figures, cost savings percentages, or performance benchmarks, which would strengthen its claims. While it mentions “proven reliability” and “cost-effectiveness,” these are vague without metrics or third-party validations. Additionally, the focus on RANiX’s 80% market share in South Korean tolling chipsets feels tangential, perhaps included to bolster their credentials but not directly tied to IMFAS. In a broader context, with TSN still emerging in automotive (as the release notes some firms are “still contemplating” adoption), this could be a bellwether for wider implementation, especially amid the push for autonomous driving.

Overall, this press release showcases a practical TSN application, signaling progress in Ethernet-based vehicle networks. It’s well-structured, with clear sections on the technology, quotes, and company backgrounds, making it accessible to both technical audiences and investors. For CAST, it reinforces their expertise in IP cores since 1993; for RANiX, it highlights their innovation in a competitive field. If executed as described, the IMFAS could indeed simplify in-vehicle communications, paving the way for smarter, more efficient cars. That said, I’d love to see follow-up data on real-world deployments to gauge its impact. In an era of rapid automotive electrification and connectivity, announcements like this are exciting harbingers of what’s next.

Link to Press Release

Also Read:

CAST Webinar About Supercharging Your Systems with Lossless Data Compression IPs

WEBINAR Unpacking System Performance: Supercharge Your Systems with Lossless Compression IPs

Podcast EP273: An Overview of the RISC-V Market and CAST’s unique Abilities to Grow the Market with Evan Price

CAST Advances Lossless Data Compression Speed with a New IP Core