Radio Frequency Integrated Circuits (RFICs) Generated by AI Based Design Automation
by Admin on 12-10-2025 at 10:00 am

Fig. 1 Conventional (top) and AI-Enabled Automated (bottom) Design Flows of RFICs.

By Jason Liu, RFIC-GPT Inc.

Radio frequency integrated circuits (RFICs) have become increasingly critical in modern electronic systems, driven by the rapid growth of wireless communication technologies (5G/6G), the Internet of Things (IoT), and advanced radar systems. With the desire for lower power consumption, higher integration, and enhanced performance, the complexity of RFICs has escalated correspondingly.

The design of RFICs is considered one of the most challenging areas in IC design due to frequency-dependent parasitic effects and time-consuming simulations, particularly electromagnetic (EM) simulations. To date, RFIC design remains heavily reliant on the expertise and intuition of experienced designers, requiring numerous iterations of tuning and manual optimization because of the nonlinear interactions between active and passive circuits. Conventional design flows, illustrated in the top half of Fig. 1, tend to be time-consuming and inefficient. Exploring efficient, automated methodologies that streamline RFIC design while ensuring optimal performance has therefore become a key focus of research and industry. Here we introduce an AI-enabled, end-to-end automated RFIC synthesis framework that integrates multiple precise modeling and optimization algorithms. As shown in the bottom half of Fig. 1, this flow automatically synthesizes a DRC/LVS-clean layout, including placement and routing. Compared with traditional manual design flows, which require repeated iterations between circuit design, layout, and EM simulation, the proposed approach enables efficient exploration of the extensive design space, one of the most significant challenges in design automation.

The overall framework of the proposed automated RFIC design flow is depicted in Fig. 2. The methodology is organized into three tightly integrated stages: circuit topology selection and specification definition, parameter optimization, and layout synthesis. The flow begins with the selection of an appropriate circuit topology and the definition of key performance specifications that meet the functional requirements. For automation, the specifications are formalized into quantifiable targets and boundaries, which systematically guide the parameter optimization process and enable thorough exploration of the solution space, ensuring that the circuit satisfies all required standards.

Fig. 2 Automated Design Flow of RFICs.

The second stage of the automated flow is circuit parameter optimization based on the collaboration of multiple optimization algorithms, including various black-box optimization approaches. A black-box problem refers to an optimization scenario in which the internal structure of the objective function is unknown and only its output for given inputs can be observed. Black-box optimization algorithms are designed to optimize such functions efficiently, especially when evaluations are costly, by adaptively selecting evaluation points. RFIC design inherently involves strongly coupled, nonlinear, multi-objective trade-offs (e.g., NF, gain, matching, linearity, power, and area) over a high-dimensional design space. These characteristics make RFIC design a typical black-box optimization problem, well suited to advanced algorithms such as Bayesian optimization (BO), genetic algorithms (GA), and particle swarm optimization (PSO).

The final stage of the automated design flow focuses on layout synthesis. Once the circuit parameters are optimized, the corresponding schematic is automatically translated into a physical layout using parameterized cells in conjunction with the optimization results. Placement is then performed within an RL-based Actor-Critic proximal policy optimization (PPO) framework, where the state is defined by the position and orientation of each device, the action corresponds to the movement direction and distance for the next placement step, and the reward function is designed to optimize key layout metrics such as area utilization and density. Once placement is finished, routing is performed by an algorithm that efficiently determines the shortest path for signal wires while avoiding layout-rule violations. The detailed placement and routing algorithms will be presented in future work.
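
To make the black-box formulation behind the parameter-optimization stage concrete, below is a minimal sketch built around an off-the-shelf evolutionary optimizer (SciPy's differential evolution). The evaluate_lna stub, the specification targets, and the variable bounds are illustrative assumptions standing in for a real simulator interface; they are not the framework described above.

  import numpy as np
  from scipy.optimize import differential_evolution

  def evaluate_lna(params):
      """Stand-in for a costly circuit/EM simulation of one parameter set."""
      # A real flow would launch SPICE/EM runs here and parse the results.
      gain_db = 20 * np.tanh(params[0])
      nf_db = 1.0 + abs(params[1])
      s11_db = -10 * abs(params[2])
      return {"gain_db": gain_db, "nf_db": nf_db, "s11_db": s11_db}

  def objective(params):
      """Scalarize the multi-objective specs into one penalty (lower is better)."""
      m = evaluate_lna(params)
      penalty = 0.0
      penalty += max(0.0, 15.0 - m["gain_db"])   # want gain >= 15 dB
      penalty += max(0.0, m["nf_db"] - 2.0)      # want NF <= 2 dB
      penalty += max(0.0, m["s11_db"] + 10.0)    # want S11 <= -10 dB
      return penalty

  # Bounds for each design variable (e.g., device widths, bias, inductor turns).
  bounds = [(0.1, 1.0), (0.0, 0.5), (0.5, 2.0)]
  result = differential_evolution(objective, bounds, maxiter=50, seed=0)
  print("best parameters:", result.x, "penalty:", result.fun)

In practice the objective is evaluated by the simulator, and the optimizer (BO, GA, or PSO) only sees parameter vectors in and metrics out, which is exactly the black-box interface described above.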

To demonstrate the viability and effectiveness of the proposed automated design flow, it is applied to two different LNAs in 40-nm CMOS technology: a 2.4 GHz differential cascode LNA and a 5.5 GHz two-stage differential common-source (CS) LNA. For the first case, the automatically generated schematic and layout are presented in Fig. 3, where two transformers are used for the input and output matching networks.

A cross-coupled capacitor structure is introduced to neutralize Cgd, enhance gate-drain isolation, and reduce nonlinear distortion. This example features a design space of 18 design variables and 7 optimization objectives. By applying the proposed automated design flow, circuit and layout (DRC/LVS-clean) synthesis is accomplished within minutes. Fig. 3 shows the synthesized layout of the proposed differential cascode LNA, which occupies a die area of 0.38 × 0.94 mm². Fig. 4 illustrates the post-layout-simulated NF and S-parameters alongside the pre-layout results; all specifications are satisfied. The post-layout S21 of the proposed LNA shows a 3-dB bandwidth of 2 GHz to 2.7 GHz. The detailed pre-layout and post-layout simulation results are compared and reveal only slight differences.

A 5.5 GHz two-stage differential CS LNA is also generatively designed within a couple of minutes. The generated schematic and layout are shown in Fig. 5, in which three transformers implement the input, interstage, and output matching networks, while the cross-coupled capacitor structure is applied to each CS stage. This architecture introduces approximately ten additional design variables (26 in total), substantially expanding the design space and increasing optimization complexity. As shown in Fig. 6, both the pre- and post-layout-simulated NF and S-parameters meet the targets. The table in Fig. 6 shows close agreement between the pre- and post-layout simulations.

Finally, an automated design flow for RFICs based on various AI models and algorithms has been presented. This design flow has been implemented in RFIC-GPT, a tool that can be tested online at https://rfic-gpt.com/.

Also Read:

Propelling DFT to New Levels of Coverage

AI-Driven DRC Productivity Optimization: Insights from Siemens EDA’s 2025 TSMC OIP Presentation

How PCIe Multistream Architecture Enables AI Connectivity at 64 GT/s and 128 GT/s


Ceva-XC21 Crowned “Best IP/Processor of the Year”
by Daniel Nenni on 12-10-2025 at 8:00 am


In a resounding affirmation of innovation in semiconductor intellectual property (IP), Ceva, Inc. (NASDAQ: CEVA) has been honored with the prestigious “Best IP/Processor of the Year” award at the 2025 EE Awards Asia, held in Taipei on December 4. The accolade went to the Ceva-XC21, a groundbreaking vector digital signal processor (DSP) core that redefines efficiency in 5G and 5G-Advanced communications. This victory underscores Ceva’s unwavering commitment to delivering high-performance, low-power solutions that propel the next era of connected devices, from cellular IoT modems to non-terrestrial network VSAT terminals.

The EE Awards Asia, organized by EE Times Asia and now in its 12th year, stands as Asia’s premier recognition for excellence in electronics engineering. Attracting nominations from global industry leaders, the awards celebrate breakthroughs in categories spanning IP cores, power management, AI accelerators, and more. This year’s event, coinciding with the EE Tech Summit, drew over 500 engineers, executives, and innovators, highlighting Asia’s pivotal role in shaping global semiconductor trends. Ceva’s win in the IP/Processor category, amid stiff competition from giants like Arm and Synopsys, signals the XC21’s transformative potential in an era where power efficiency and scalability are non-negotiable for 5G deployment.

At the heart of the Ceva-XC21 is its advanced architecture, building on the proven Ceva-XC20 foundation while introducing true dual-threaded hardware for contention-free multithreading. The design features dual processing elements and dual instruction and data memory subsystems, enabling seamless parallel execution of complex workloads. The processor supports a versatile 9-issue Very Long Instruction Word (VLIW) instruction set, accommodating integer formats (INT8/16/32) alongside half-precision, single-precision, and double-precision floating-point operations. A dedicated instruction set architecture (ISA) accelerates 5G New Radio (NR) functions, making it ideal for enhanced Mobile Broadband, ultra-Reliable Low-Latency Communications, and massive Machine-Type Communications.

What sets the XC21 apart is its scalability and efficiency. It is available in three variants (Ceva-XC210, XC211, and XC212), each offering configurable single- or dual-thread options with 32 or 64 16-bit x 16-bit multiply-accumulate (MAC) units. This modularity allows designers to tailor the core to specific needs, from compact RedCap devices to high-throughput industrial terminals. Compared to its predecessor, the widely adopted Ceva-XC4500, the XC21 delivers up to 1.8x performance uplift in the XC212 variant while slashing core area by 48% in the XC210 model. The XC211 maintains equivalent performance at just 63% of the previous die size, achieving a CoreMark/MHz score of 5.14 for superior control code execution.

These metrics translate to tangible benefits: unprecedented power savings for battery-constrained IoT endpoints, reduced bill-of-materials costs for consumer gadgets, and enhanced AI/ML integration for smarter edge processing. Interconnectivity is equally robust, with up to six AXI4 bus interfaces via an AMBA matrix for high-bandwidth data flows, ensuring effortless SoC integration. Software support further eases adoption, including a unified programming model compatible with the Ceva-XC4500 ecosystem, an optimizing LLVM C compiler, and comprehensive debug tools like JTAG and real-time trace.

“We are thrilled and deeply honored by this recognition from the EE Awards Asia jury,” said Amir Panush, CEO of Ceva. “The Ceva-XC21 embodies our vision of democratizing 5G-Advanced connectivity, making it accessible, efficient, and future-proof. In a market projected to see 5G connections surpass 2 billion by 2026, this DSP empowers our licensees to innovate without compromise, from smart wearables to satellite backhaul systems.”

This isn’t Ceva’s first triumph at EE Awards Asia; the company previously clinched the same category in 2023 for the Ceva-XC22 and in 2024 for the NeuPro-Nano NPU, cementing its legacy as a trailblazer in edge IP. The XC21’s success reflects broader industry shifts: as 6G horizons emerge, demand for versatile, energy-efficient processors intensifies. Analysts at Gartner forecast that by 2028, 75% of enterprise-generated data will be processed at the edge, necessitating IP like the XC21 to handle multi-protocol stacks (LTE/5G/NTN) with minimal overhead.

Looking ahead, Ceva’s roadmap hints at even bolder integrations, blending XC21’s vector prowess with AI accelerators for hybrid edge-cloud paradigms. For developers, the implications are profound: shorter time-to-market, lower power envelopes, and scalable designs that future-proof against evolving standards. As Gideon Wertheizer, Executive VP of Research and Development, noted, “Winning ‘Best IP/Processor’ validates our relentless focus on architecting for tomorrow’s challenges today.”

In an industry often dominated by raw compute power, the Ceva-XC21 reminds us that true excellence lies in balance: performance without excess, innovation without waste. This award not only elevates Ceva’s profile but also accelerates the proliferation of intelligent, connected ecosystems across Asia and beyond. As 5G matures into a ubiquitous fabric, the XC21 stands as a cornerstone, weaving efficiency into the wireless future.

Contact Ceva Here.

Also Read:

United Micro Technology and Ceva Collaborate for 5G RedCap SoC and Why it Matters

Ceva Unleashes Wi-Fi 7 Pulse: Awakening Instant AI Brains in IoT and Physical Robots

A Remote Touchscreen-like Control Experience for TVs and More


Propelling DFT to New Levels of Coverage
by Bernard Murphy on 12-10-2025 at 6:00 am


Siemens recently released a white paper on a methodology to enhance test coverage for designs with tight DPPM requirements. I confess that when I first skimmed the paper, I thought this was another spin on fault simulation for ASIL A-D qualification, but I was corrected and now agree that, while there are some conceptual similarities, this proposal serves a quite different purpose. The intent is to extend the coverage achieved through ATPG test vectors with carefully graded functional vectors, strengthening coverage and minimizing DPPM, particularly in areas that are missed by ATPG.

Why is this needed?

Just as large designs are becoming harder to verify, they are also becoming harder to test in manufacturing. SoCs have always needed PLLs for clock generation and now incorporate mixed signal blocks for sensor and actuator interfaces, all functions that are inaccessible to conventional DFT. Just think of all the mixed-signal speech, image, IMU, accelerometer and other real-world sensors we now expect our electronics to support.

Many SoCs must ensure very low failure rates. SSD memory subsystems support warm-data caching in datacenters and must guarantee vanishingly small error rates to support cloud service pricing models. Cars too, beyond meeting ASIL A-D certification levels, also depend on very low error rates to minimize expensive recalls. None of these requirements is directly related to safety, but they are very strongly related to profitability for the chip maker and the OEM.

Enhancing coverage

ATPG will mark anything it couldn’t control or observe as untestable. Strengthening coverage depends on finding alternate vectors to address these untestable faults, where possible. In the Siemens flow this is accomplished by grading vectors produced during design verification against faults injected into the design. Clearly the aim is to find a very compact subset of vectors, since these tests must run efficiently on ATE equipment. The method uses a fault simulator, also used in automotive safety verification (hence my initial confusion), for functional fault grading. Rather than checking the effectiveness of safety mechanisms, here we want to control and observe those “untestable” faults from the ATPG flow.
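
As a rough illustration of how graded functional vectors can be reduced to a compact ATE subset, here is a greedy set-cover sketch. The fault IDs and per-vector detection sets are hypothetical placeholders for the output of fault-simulation grading; the white paper does not prescribe this particular selection algorithm.

  def select_vectors(untestable_faults, detections):
      """Greedy set cover: repeatedly pick the vector detecting the most remaining faults."""
      remaining = set(untestable_faults)
      chosen = []
      while remaining:
          best = max(detections, key=lambda v: len(detections[v] & remaining), default=None)
          if best is None or not detections[best] & remaining:
              break  # nothing left that any functional vector can detect
          chosen.append(best)
          remaining -= detections[best]
      return chosen, remaining

  # Hypothetical grading results: vector name -> faults it detects.
  untestable_faults = {"f1", "f2", "f3", "f4", "f5"}
  detections = {
      "boot_sequence": {"f1", "f2"},
      "adc_loopback": {"f2", "f3", "f4"},
      "pll_lock_check": {"f5"},
  }
  subset, still_uncovered = select_vectors(untestable_faults, detections)
  print("functional vectors to add to the ATE program:", subset)
  print("faults still uncovered:", still_uncovered)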

Importantly, while ATPG can’t access the internals of third-party IP, AMS blocks, and similar cases, functional verification is set up to exercise these functions in multiple ways: through boot sequences, mixed-signal modeling, and other possibilities. Maybe you can’t test internal nodes in these blocks, but you can test port behavior, and that’s just what you need to enhance DFT coverage.

Methods and tools

Fault grading proceeds as you would expect. You inject faults at appropriate points, ideally for a range of fault models, from stuck-at to timing delay faults and more. The paper suggests injecting faults at module ports since these most closely mirror controllable and observable points in testing. They point out that while you could inject faults on (accessible) internal nodes, that approach will likely lead to overestimating how much you improved coverage, since ATE equipment cannot access such points.

They suggest using the usual range of fault coverage enhancement techniques. First use formal to eliminate faults that are truly untestable, even in simulation. Then use simulation for blocks and smaller subsystems. And finally, use emulation for large subsystems and the SoC. Used in that order, I would suggest. Address as many untestable faults as you can at each stage so that you have a smaller number to address in the next stage.

Siemens integrated tool flow and experiment

The paper works with a tool flow of Tessent™ ATPG and Questa One Sim FX. Using this flow, they were able to demonstrate a test coverage improvement of 0.94% and a fault coverage improvement of 0.95%. These might not seem like big improvements, but when coverage is already in the high 90s and you still need to do better, those improvements can spell the difference between product success and failure.

Good paper. You can download it HERE.


AI-Driven DRC Productivity Optimization: Insights from Siemens EDA’s 2025 TSMC OIP Presentation
by Daniel Nenni on 12-09-2025 at 10:00 am


In the rapidly evolving semiconductor industry, Design Rule Checking (DRC) remains a critical bottleneck in chip design workflows. Siemens EDA’s presentation at the 2025 TSMC Open Innovation Platform Forum, titled “AI-Driven DRC Productivity Optimization,” showcases how artificial intelligence is revolutionizing this process. Delivered by David Abercrombie, Sr. Director of Calibre Product Management at Siemens EDA, alongside AMD experts Stafford Yu and GuoQin Low, the talk highlights collaborative advancements with TSMC and AMD to enhance productivity across understanding, fixing, debugging, and collaboration in DRC sign-off.

The presentation opens with an overview of Siemens EDA’s new AI Workflow System, designed to boost the entire EDA ecosystem. This system integrates knowledge capture, next-gen debug platforms, AI debug assistance, and automated fixing, ultimately optimizing DRC sign-off. Central to this is the Siemens EDA AI System, an open, secure platform deployable on-premises or in the cloud. It features a GenAI interface, a knowledge base, and a data lake that amalgamates Siemens EDA data, Calibre-specific data, and customer inputs. Powered by LLMs, ML models, and data query APIs, it enables intelligent solutions across tools like Calibre, Aprisa, and Solido. Key benefits include a single installation process, flexibility for customer integrations, and support for assistants, reasoners, and agents. This infrastructure ensures AI tools are hosted on customer hardware, maintaining data security while accelerating workflows.

A major focus is on boosting user understanding through AI Docs Assistant and Calibre RVE Check Assist. The AI Docs Assistant allows users to query Siemens EDA tool documentation via browser or integrated GUIs, providing instant answers with RAG-generated citations. It supports specific tools and versions, includes company documentation, and collects feedback for continuous improvement. Integrated with Calibre’s Results Viewing Environment (RVE) and Vision AI, it streamlines access to knowledge. Complementing this, Calibre RVE Check Assist leverages TSMC’s Design Rule Manual (DRM) data, embedding precise rule descriptions and specialized images directly into the RVE. This enhances designers’ comprehension of rule checks, improving debugging experiences and productivity. Additionally, RVE Check Assist User Notes facilitate in-house knowledge sharing: designers capture fixing suggestions and images in RVE, storing them in a central database within the EDA AI Datalake. This shared repository allows organization-wide review, enhancing DRC-fixing flows by leveraging collective expertise.

Shifting to automated fixing, the presentation details Calibre DesignEnhancer, an analysis-based tool for sign-off DRC-clean modifications on post-routed designs. It includes modules like DE Via, which maximizes via insertion to reduce IR drop and boost robustness, and DE Pge, which enhances power grids by adding Calibre nmDRC-clean interconnects for better EM and IR performance. The engine supports LEF/DEF formats and outputs incremental, full, or ECO DEF files, integrating seamlessly with Place-and-Route tools. Its infrastructure handles simple and complex metal rules, such as spacing (M.S., V#.S.), enclosure (M.EN., V.EN.), and forbidden patterns (EFP.M., EFP.V.), considering connectivity and rule dependencies. Examples illustrate fixes like expanding or trimming edges to resolve end-to-end spacing violations, demonstrating its precision in layout contexts.

For debugging, Calibre Vision AI addresses full-chip integration challenges, such as handling billions of violations with sluggish navigation and limited perspectives. It enables “shift left” strategies, identifying issues early for Calibre-clean resolutions. Features include intelligent debug via check grouping (e.g., bad via arrays or fill overlaps), full-chip analysis at 20x speed (reducing 71GB databases to 1.4GB with instant loading), and cross-team collaboration through bookmarks, ASCII RDB exports, and HTML reports. Integration with the Siemens EDA AI System adds natural language capabilities for tool operations, data reasoning, and knowledge access.

AMD’s testimonial underscores real-world impact: on a design with 600 million errors across 3400 checks, Vision AI grouped them into 381 signals, enabling 2x faster root-cause analysis. Heatmaps revealed systematic issues like fill overlaps with clock cells or missing CM0 in breaker cells, compressing cycle times.

Bottom line: This collaboration between Siemens EDA, TSMC, and AMD exemplifies AI’s transformative role in DRC. By boosting workflows, understanding, fixing, debugging, and collaboration, these tools promise significant productivity gains, potentially shortening design cycles and improving chip reliability. As semiconductor nodes advance, such innovations are essential for maintaining competitive edges in high-stakes industries.

Also Read:

An Assistant to Ease Your Transition to PSS

Accelerating SRAM Design Cycles: MediaTek’s Adoption of Siemens EDA’s Additive AI Technology at TSMC OIP 2025

Why chip design needs industrial-grade EDA AI


How PCIe Multistream Architecture Enables AI Connectivity at 64 GT/s and 128 GT/s
by Kalar Rajendiran on 12-09-2025 at 8:00 am


As AI and HPC systems scale to thousands of CPUs, GPUs, and accelerators, interconnect performance increasingly determines end-to-end efficiency. Training and inference pipelines rely on low-latency coordination, high-bandwidth memory transfers, and rapid communication across heterogeneous devices. With model sizes expanding and system topologies becoming more complex, the I/O fabric must match the pace of compute. PCIe remains the backbone of server connectivity, and with PCIe 6.0 operating at 64 GT/s and PCIe 7.0 at 128 GT/s, sustaining link utilization is now one of the central design challenges.

PCIe controllers built on a single-stream model cannot keep the link busy under modern AI traffic patterns. This is the motivation for the PCIe Multistream Architecture and was the topic of a recent webinar hosted by Synopsys. Diwakar Kumaraswamy, Senior Staff Technical Product Manager, led the webinar session and also provided insights into Gen7 controller considerations.

Why Multistream Architecture Is Needed

PCIe bandwidth has doubled with each generation, growing from 32 GT/s in Gen5 to 64 GT/s in Gen6 and 128 GT/s in Gen7. Gen8 is already under exploration at 256 GT/s. At these speeds, any idle cycle on the link produces a disproportionate bandwidth loss. AI workloads, with their highly parallel, mixed-size transactions, amplify this issue. They generate a blend of large DMA transfers, cache reads, small control messages, and synchronization traffic, all arriving simultaneously from many agents.

Gen6’s shift to PAM4 signaling, mandatory FLIT encoding, and tighter latency budgets further stresses the controller. FLIT mode requires efficient packing of TLPs into 256-byte units, and the wider datapaths needed for PAM4 increase the amount of data the controller must handle each cycle. With only a single TLP accepted per cycle, the controller becomes the bottleneck long before the physical link is saturated. Multistream Architecture solves this by introducing parallelism in how transactions enter and flow through the controller.

How Multistream Architecture Works

Multistream Architecture allows multiple TLPs to be accepted and serialized concurrently. Instead of one application interface per Flow Control (FC) type (Posted, Non-Posted, and Completions), Gen6 introduces two interfaces per FC type. This means the controller can ingest several independent packet streams at once while still maintaining strict ordering rules within an FC type.

This concurrency aligns naturally with FLIT mode. With multiple packets arriving per cycle, the controller can efficiently pack FLITs and maintain continuous transmission at 64 GT/s and 128 GT/s. In workloads dominated by small packets, Multistream Architecture improves link utilization by more than 80 percent. In short, the controller stays busy, and the link stays full.
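
To see why parallel ingest matters with small packets, here is a deliberately simplified toy model, assuming 256-byte FLITs, one FLIT leaving per cycle, and TLPs of 16 to 64 payload bytes. It only illustrates the single-stream versus multi-stream contrast; it is not a model of the actual Synopsys controller or of PCIe flow control.

  import random

  FLIT_BYTES = 256

  def simulate(tlps_per_cycle, cycles=10_000, seed=0):
      """Return approximate link utilization for a given ingest width."""
      rng = random.Random(seed)
      sent_bytes = 0
      pending = 0  # bytes waiting to be packed into FLITs
      for _ in range(cycles):
          for _ in range(tlps_per_cycle):
              pending += rng.randrange(16, 65)  # small-packet traffic
          sent_bytes += min(pending, FLIT_BYTES)  # one FLIT leaves per cycle
          pending = max(0, pending - FLIT_BYTES)
      return sent_bytes / (cycles * FLIT_BYTES)

  print(f"one TLP accepted per cycle:  {simulate(1):.0%} utilization")
  print(f"two TLPs accepted per cycle: {simulate(2):.0%} utilization")

With these toy numbers the dual-ingest case roughly doubles utilization, which is the qualitative effect Multistream Architecture targets at real 64 GT/s and 128 GT/s rates.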

From Gen5 to Gen6 and Forward to Gen7

The architectural leap from Gen5 to Gen6 is the most significant in PCIe’s history. Gen5 used NRZ signaling at 32 GT/s and relied on single-stream controllers. Gen6 doubles the rate, switches to PAM4, moves entirely to FLIT mode, widens datapaths, and introduces FEC. These changes demanded a controller that can handle far more work per cycle. Multistream Architecture was the response with a resilient design that allows Gen7 to use it without modification, simply doubling the signaling rate to 128 GT/s.

The same fundamental controller architecture scales across two generations of PCIe, streamlining SoC roadmaps and validation flows. This continuity matters a lot for designers.

Gen6 Application Interface Changes

In Gen5, only one TLP per cycle can be driven per FC type. If multiple traffic classes are active, an arbitration stage selects which one proceeds. Receive-side options (store-and-forward, cut-through, bypass) help latency but do not improve concurrency.

Gen6 redefines the model. Each FC type gains two transmit and two receive interfaces, letting applications push multiple TLPs concurrently. The controller attaches order numbers within each FC type to ensure correct sequencing, while allowing other FC types to progress independently. The receive side no longer requires bypass modes because parallel TLP arrival and serialization eliminate the bottleneck altogether. This higher-parallelism interface is essential for feeding a Gen6 or Gen7 controller at full rate.

Gen7 x16 Controller Considerations

A PCIe 7.0 x16 link delivers 2,048 Gb/s per direction—demanding a 1024-bit datapath operating at 2 GHz. Native application interfaces provide maximum bandwidth with minimal overhead. Many SoCs, however, depend on AXI, so Gen7 controllers integrate an AMBA bridge capable of sustaining full throughput without compromising latency. This ensures deployment flexibility across AI accelerators, CPUs, GPUs, and complex multi-die architectures.
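
A quick back-of-the-envelope check of the datapath numbers above (illustrative only; it ignores encoding and protocol overhead):

  lanes = 16
  rate_gt_per_s = 128                    # PCIe 7.0 per-lane signaling rate
  raw_gbps = lanes * rate_gt_per_s       # 2,048 Gb/s per direction
  datapath_bits = 1024
  clock_ghz = raw_gbps / datapath_bits   # 2.0 GHz internal datapath clock
  print(raw_gbps, "Gb/s per direction ->", clock_ghz, "GHz on a 1024-bit datapath")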

Summary

Multistream Architecture is the critical enabler for PCIe operation at 64 GT/s and 128 GT/s. By allowing multiple TLPs to flow in parallel, it keeps FLIT pipelines full, maximizes link utilization, and accommodates the bursty, multichannel traffic typical of AI and HPC workloads. With more than 80 percent utilization gains for small-packet patterns and architectural continuity from Gen6 to Gen7, it forms the backbone of next-generation PCIe connectivity. Synopsys PCIe IP integrates this architecture with proven interoperability, ensuring that designers can fully exploit the performance of PCIe 6.0 and 7.0 in advanced AI systems.

You can watch the recorded webinar here.

Learn more about Synopsys PCIe IP Solutions here.

Also Read:

Synopsys + NVIDIA = The New Moore’s Law

WEBINAR: How PCIe Multistream Architecture is Enabling AI Connectivity

Lessons from the DeepChip Wars: What a Decade-old Debate Teaches Us About Tech Evolution


Revitalizing Semiconductor StartUps
by Daniel Nenni on 12-09-2025 at 6:00 am


Tarun Verma, Managing Partner of Silicon Catalyst, delivered a keynote at Verification Futures Austin titled “Revitalizing Semiconductor StartUps.” Drawing from his role in the world’s only accelerator focused on the global semiconductor industry, Tarun outlined the sector’s resurgence, persistent challenges for startups, and pathways to innovation. Silicon Catalyst, founded in 2015 in Silicon Valley, has screened over 1,500 early-stage companies, admitted more than 150 into its programs, and built a portfolio valued at over $3 billion. With expansions to Israel (2019), the UK (2021), and the EU (2024), it boasts over 400 advisors, 500 partners, and investments exceeding $1 billion in venture capital, plus $200 million each in in-kind partnerships and grants. The accelerator spans chips, chiplets, materials, IP, photonics, MEMS, sensors, life sciences, and quantum technologies.

Tarun’s bottom-line-up-front emphasized semiconductors’ revival: they claim 16 of the top 20 tech market caps and rank as the third most profitable industry. AI’s hardware limitations fuel a “gold rush,” while geopolitics shifts from globalism, positioning chips as essential national assets. Yet startups face hurdles like escalating prototyping costs, declining venture capital, elusive product-market fit, and customers’ reluctance for design wins. A surge in investments, via CHIPS Acts worldwide, sovereign wealth funds, and green shoots in deep tech VC, offers hope. Chiplets and advanced packaging could level the playing field, but commercialization demands an aggressive startup playbook: urgent CHIPS Act implementation, supplemented government funding, and a robust ecosystem for research translation.

Tracing the industry’s history, Verma highlighted milestones: the 1950s transistor invention at Bell Labs; 1960s integrated circuits and VC emergence in Silicon Valley; 1970s microprocessors; 1980s Japanese threats spurring SIA/SRC/SEMATECH; 1990s TSMC’s foundry model; 2000s consolidation and VC decline; 2010s Moore’s Law slowdown, AI rise, and Chinese competition; 2020s pandemic shortages, CHIPS Acts, export curbs, and generative AI. This evolution reflects a virtuous Moore’s Law cycle, lowering costs, expanding applications, boosting R&D and revenue—now fragmented by supply chains.

Future forces include system companies like Apple, Google, and Nvidia becoming “silicon houses” with extensive chip design and manufacturing. Domain-specific architectures, AI/data center buildouts (with hyperscalers investing billions in CapEx), and power-hungry data centers (projected U.S. consumption rising sharply) drive growth. Chiplets enable die disaggregation, optimizing yields and nodes, with standards like UCIe addressing interfaces and known-good-die challenges. TSMC’s packaging roadmap underscores this shift, expanding computing volume through 2.5D/3D integration.

From 2020-2025, AI compute booms, geopolitical tensions, sovereignty pushes (via SWFs), and in-house designs reshaped the landscape. By 2030, revenues could exceed $1 trillion, propelled by edge AI in wearables and IoT, electrification (SiC/GaN for EVs/grids), and packaging bottlenecks. Beyond, quantum, photonic, and neuromorphic chips promise efficiency leaps.

VC trends reveal semis’ struggles: investments dipped to under 2% of total VC, favoring software’s quicker returns. Typical startup timelines span 10 years, with funding from angels/grants early, escalating to VC/CVC later. The VC model—seeking 3-5x returns via hits—disfavors semis’ capital intensity and timelines. U.S. VC surged overall, but semis lagged; AI captures a growing share. Key players include Intel Capital (74 deals) and Celesta (26), per PitchBook (2014-2024).

Silicon Catalyst counters these via its ecosystem: 70+ in-kind partners, 400+ advisors, 450+ investors, accelerators, and universities. Its timeline shows evolution from concept (2014) to ventures (2024). Verma spotlighted a track on emerging hardware, featuring AI/EDA trends, edge inference, and battery sensing.

In conclusion, revitalizing startups requires bridging research to industry. Tarun urged aggressive prototyping funds, ecosystem strengthening, and contrarian VC opportunities. Contact: Silicon Catalyst.

Also Read:

Podcast EP320: The Emerging Field of Quantum Technology and the Upcoming Q2B Event with Peter Olcott

Silicon Catalyst on the Road to $1 Trillion Industry

The 2025 Semi Industry Forum: On the Road to a $1 Trillion Industry


What’s New with Integrated Product Lifecycle Management (IPLM)
by Daniel Payne on 12-08-2025 at 10:00 am


I’ve blogged about Methodics before they were acquired by Perforce back in 2020, so I wanted to get an update on Perforce IPLM (Integrated Product Lifecycle Management) by attending their recent webinar. Hassan Ali Shah, Senior Product Manager, and Rien Gahlsdor, Perforce IPLM Product Owner, were the two webinar presenters. Their IPLM enables end-to-end traceability for semiconductor IP plus metadata across all of your company’s design projects, giving you a unified IP catalog for discovery and reuse, an automated release process, improved design productivity, and better collaboration.

Perforce IPLM

Enhancing end-to-end traceability was presented through five new features. The first new feature discussed was server-side conflict resolution, as conflicts can show up when more than one version of an IP is found in the IPV hierarchy. The old way of resolving conflicts was through the CLI client; now you can resolve conflicts with IPLM Core and even preview the resolved hierarchy using IPLM Web without building the workspace.

Each IP may have users and groups granted read permission on properties and write permission on the IP, or you could hide property values from users, improving your flexibility. Protected properties work on Libraries, IPs and even custom objects, while permissions are set on property sets.

There’s new support for Redis Streams for event handling, ensuring that events are read at least once. Any property change will trigger an event, and you can show the previous value of changed fields.
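
As a rough sketch of how at-least-once consumption with Redis Streams typically works (using the redis-py client), the snippet below reads events through a consumer group and acknowledges them only after handling. The stream name, group name, and event fields are hypothetical; they are not the actual IPLM event schema.

  import redis

  r = redis.Redis()
  STREAM, GROUP, CONSUMER = "iplm-events", "iplm-handlers", "worker-1"

  try:
      r.xgroup_create(STREAM, GROUP, id="$", mkstream=True)
  except redis.ResponseError:
      pass  # the consumer group already exists

  while True:
      # Block up to 5 s for new events this group has not yet acknowledged.
      entries = r.xreadgroup(GROUP, CONSUMER, {STREAM: ">"}, count=10, block=5000)
      for _stream, messages in entries or []:
          for msg_id, fields in messages:
              # Handle the property-change event here. If the worker crashes
              # before xack, the event is redelivered, which is what gives
              # at-least-once semantics.
              print(msg_id, fields)
              r.xack(STREAM, GROUP, msg_id)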

IPLM Core is supporting single sign-on (SSO), which improves system security, helps productivity and makes it easier for users to login.

Keysight ADS users now have features to rollback, retrieve and sync different IP versions, making them more productive by using VersIC ADS.

Hassan talked about how users can visualize using the new Shopping Cart, finding IPs of interest quickly to store them for later use. You can browse from the catalog, add to cart, then analyze and use each IP in your BOM. There’s a quick filter that shows a dynamic count as you search for any IP, and you can set both static and dynamic filters. Searching for an IP can be global, or refined with fuzzy matching. All versions of an IP can now be viewed from a single interface, saving time.

After a live demo, the next topic was how Perforce has modernized the tech stack: IPLM Client supports Python 3, IPLM Web works with Node.js 22.14, and supported OS versions include Red Hat 9, Rocky 9, CentOS 9, and SLES 15.

Coming Next

Looking ahead, Rien talked about how Perforce will be supporting the Model Context Protocol (MCP), an open standard for how AI applications like LLMs connect to tools and data sources. This technology will let you use natural language to learn, query, and run actions and workflows with IPLM. Another AI feature coming is predictive search, where you receive predictive recommendations from IPLM to quickly help you find answers. A live demo was shown where the prompt “What is an IPLM label” was typed.


The next prompt was, “show me libraries that have labels attached”, and the LLM churned out coherent answers rapidly.

Future improvements for end-to-end traceability will include more flexible workspaces that allow multiple lines and versions of an IP in a workspace. Multiple P4 (version control system) servers will be supported for VersIC (design data management tool), instead of just one.

Hassan finished by showing new improvements coming for visualizing your portfolio: seeing resolved trees, dashboard customization, dynamic updates, and deeper analysis of stored objects in your shopping cart.


Summary

It’s not often that EDA vendors actually perform live demos, but in this webinar they were confident enough to run their IPLM tools through their paces, showing how each new feature looked and worked on demo designs. Perforce continues to add new features to IPLM, and the coming attractions look promising with AI technology and visualization improvements.

View the full webinar online.

Related Blogs


Jensen Huang Drops Donald Trump Truth Bomb on Joe Rogan Podcast
by Daniel Nenni on 12-08-2025 at 6:00 am

Jensen Huang and Elon Musk at SpaceX

How’s that for a clickable title? It really should be called Jensen Huang’s origin story but who is going to click on that?

As a podcaster myself I can say without a doubt that this was the best podcast I have listened to all year. During my 30+ year EDA and IP career, Nvidia was a customer on many different occasions. I do know how they got started and some of their trials and tribulations. I also remember seeing Jensen in his leather jacket driving his Ferrari around Silicon Valley. He is very approachable; we have met a few times, and I also met his wife at an event at Stanford University. Jensen is a dedicated family man, which always impresses me. Jensen married his college girlfriend, as did I, and has lived the American dream, absolutely.

I listened to this podcast twice, and while I knew some of his origin story, this was the most detail about Jensen’s life I have ever heard. It is also the origin story of Nvidia, as well as of 3D graphics, gaming, TSMC, and AI.

Jensen’s comments on Donald Trump did not surprise me at all. Social media is the bane of our society. It makes stupid people look smart and smart people look stupid. Hopefully AI can fix that! Jensen certainly has AI confidence as do I.

In this episode of The Joe Rogan Experience, Joe interviews NVIDIA CEO Jensen Huang in a wide-ranging conversation blending politics, technology, and personal anecdotes. They begin reminiscing about their first meeting at SpaceX, where Jensen gifted Elon Musk an advanced AI chip, and a later call involving Donald Trump discussing a UFC event at the White House.

The discussion shifts to Donald Trump: Jensen describes POTUS as a gifted listener with practical, America-first policies on manufacturing and energy. He praises Trump’s pro-growth stance, crediting “drill baby drill” for enabling AI factories and re-industrialization. Rogan notes Trump’s unfiltered style, calling him an “anti-politician” while acknowledging divisive moments. Jensen emphasizes unity, urging support for the president to foster national prosperity, jobs, and technological leadership. I agree with this 100%.

AI dominates the talk: Jensen views the U.S. in a perpetual technology race, from the Industrial Revolution to AI, stressing its role in superpowers like information and military might. He downplays doomsday fears, predicting gradual progress channeled toward safety and accuracy, reducing hallucinations through reflection and research. Rogan probes sentience concerns, but Jensen differentiates AI’s intelligence from undefined consciousness, likening future threats to cybersecurity defended collectively by AI agents. He envisions AI diffusing into daily life, boosting efficiency, closing technology divides via accessible tools like ChatGPT, and creating abundance, potentially enabling universal high income as Elon Musk suggests. However, he warns of job shifts, citing radiology where AI increased demand rather than replacing professionals.

Personally I feel there are decidedly more good people than bad on this earth thus good AI will triumph over evil. I also believe AI is a tidal wave so either you ride it or get crushed by it. If you are not using AI today get ready to be crushed!

Jensen recounts NVIDIA’s tumultuous origins: founded in 1993 to pioneer accelerated computing for games, it nearly failed multiple times. Early wrong technology choices led to layoffs, and a pivotal $5 million plea to Sega’s CEO saved the company. A $500,000 chip emulator gamble and TSMC’s partnership enabled their breakthrough chip, birthing modern 3D graphics from video games. Jensen credits luck, resilience, and first-principles thinking, admitting daily anxiety fuels him more than success. He reveals that inventing CUDA in 2006 tanked NVDA stock but enabled AI, transforming NVIDIA into a $3 trillion powerhouse.

Jensen shares his immigrant journey: born in Taiwan, he moved to Thailand and then, at nine, was sent to a tough Kentucky boarding school amid poverty and violence. His parents followed two years later, starting anew. He attributes success to hard work, vulnerability in leadership, and surrounding himself with top scientists.

The episode closes on success’s realities: not constant joy, but enduring fear, humiliation, and gratitude. Jensen embodies the American dream, with inspiring tales of clawing through poverty and uncertainty to impact the world. It is a GREAT story and one that should be heard by all!

Also Read:

Synopsys + NVIDIA = The New Moore’s Law

Podcast EP318: An Overview of Axelera AI’s Newest Chip with Fabrizio Del Maffeo

An Assistant to Ease Your Transition to PSS

 


Cerebras AI Inference Wins Demo of the Year Award at TSMC North America Technology Symposium
by Daniel Nenni on 12-07-2025 at 2:00 pm


This is a clear reminder of how important the semiconductor ecosystem is and how closely TSMC works with customers. The TSMC Symposium started 30 years ago and I have been a part of it ever since.  This event is attended by TSMC’s top customers and partners and is the #1 semiconductor networking event of the year, absolutely.

Cerebras Systems, the pioneer in wafer-scale AI acceleration, today announced that its live demonstration of the CS-3 AI inference system received the prestigious Demo of the Year award at the 2025 TSMC North America Technology Symposium in Santa Clara.

The winning demonstration showcased the Cerebras CS-3, powered by the industry’s largest chip, the 4-trillion-transistor Wafer-Scale Engine 3 (WSE-3), delivering real-time, multi-modal inference on Meta’s Llama 3.1 405B model at over 1,800 tokens per second for a single user, and sustaining over 1,000 tokens per second even under heavy concurrent multi-user workloads. Running entirely in memory with no external DRAM bottlenecks, the CS-3 processed complex reasoning, vision-language, and long-context tasks with sub-200-millisecond latency, performance previously considered impossible at this scale.

TSMC’s selection committee, composed of senior executives and technical fellows, cited three decisive factors:
  1. Unprecedented single-chip performance on frontier models without multi-node scaling
  2. True real-time interactivity on models larger than 400 billion parameters
  3. Seamless integration of TSMC’s most advanced 5 nm technology with Cerebras’ revolutionary wafer-scale architecture

During the live demo, the CS-3 simultaneously served dozens of concurrent users running Llama 3.1 405B with 128k context windows, answering sophisticated multi-turn questions, generating images from text prompts via integration with Flux.1, and performing real-time document analysis—all while maintaining conversational latency indistinguishable from smaller cloud-based models.

“Wafer-scale computing was considered impossible for fifty years, and together with TSMC we proved it could be done,” said Dhiraj Mallick, COO, Cerebras Systems. “Since that initial milestone, we’ve built an entire technology platform to run today’s most important AI workloads more than 20x faster than GPUs, transforming a semiconductor breakthrough into a product breakthrough used around the world.”

“At TSMC, we support all our customers of all sizes—from pioneering startups to established industry leaders—with industry-leading semiconductor manufacturing technologies and capacities, helping turn their transformative ideas into realities,” said Lucas Tsai, Vice President of Business Management, TSMC North America. “We are glad to work with industry innovators like Cerebras to enable their semiconductor success and drive advancements in AI.”

The CS-3’s memory fabric provides 21 petabytes per second of bandwidth and 44 gigabytes of on-chip SRAM—equivalent to the memory of over 3,000 GPUs—enabling entire 405B-parameter models to reside on a single processor. This eliminates the inter-GPU communication overhead that plagues traditional GPU clusters, resulting in dramatically lower latency and up to 20x higher throughput per dollar on large-model inference.

The recognition comes as enterprises increasingly demand cost-effective, low-latency access to frontier-scale models. Independent benchmarks published last month by Artificial Analysis confirmed the CS-3 as the fastest single-accelerator system for Llama 3.1 70B and 405B inference, outperforming NVIDIA H100 and Blackwell GPU clusters on both tokens-per-second and time-to-first-token metrics.

TSMC’s annual symposium attracts thousands of engineers and executives from across the semiconductor ecosystem. The Demo of the Year award has previously gone to groundbreaking advancements in 3 nm and 2 nm process technology; this year marks the first time an AI systems company has claimed the honor.

Cerebras is now shipping CS-3 systems to customers in healthcare, finance, government, and scientific research. The company also announced general availability of Cerebras Inference Cloud, offering developers instant API access to Llama 3.1 405B at speeds up to 1,800 tokens/second—the fastest publicly available inference for models of this scale.

Bottom line: With this award from TSMC, Cerebras solidifies its position as the performance leader in generative AI inference, proving that wafer-scale computing has moved from bold vision to deployed reality.

Also Read:

TSMC Kumamoto: Pioneering Japan’s Semiconductor Revival

AI-Driven DRC Productivity Optimization: Revolutionizing Semiconductor Design

Exploring TSMC’s OIP Ecosystem Benefits

Breaking the Thermal Wall: TSMC Demonstrates Direct-to-Silicon Liquid Cooling on CoWoS®


CEO Interview with Pere Llimós Muntal of Skycore Semiconductors
by Daniel Nenni on 12-05-2025 at 12:00 pm


Pere Llimós Muntal is the CEO and co-founder of Skycore Semiconductors, driving the strategy, business development, and growth of the company as it delivers next-generation power integrated circuit (IC) solutions for applications with extreme power density, efficiency, and form factor demands, such as data center power delivery.

Pere received the combined B.Sc. and M.Sc. degree in industrial engineering from the Polytechnic University of Catalonia in 2012, and a Ph.D. in Electrical Engineering from the Technical University of Denmark (DTU) in 2016, where he developed high-voltage and analog front-end ICs for portable ultrasound systems. He continued his research at DTU as a postdoctoral researcher and assistant professor, focusing on high-voltage integrated switched-capacitor power conversion.

His technical expertise includes switched-capacitor power conversion, high-voltage integrated circuit design, analog front-ends for ultrasonic transducers, and continuous-time sigma-delta A/D converters.

Today, he leads Skycore’s efforts to deliver advanced power IC solutions for next-generation data center HVDC architectures.

Tell us about your company?

Skycore Semiconductors is a Denmark-based fabless semiconductor company developing advanced power integrated circuit (IC) solutions for applications with extreme power density and efficiency demands, such as the 800V HVDC power architectures of next-generation AI data centers.

Our Power IC technology platform delivers extreme power density and efficiency in compact, flat form factors. This allows system designers to rethink how power is distributed in high-performance compute environments, especially as the industry moves from traditional 54 VDC systems to 800V HVDC architectures.

With roughly €7.5M raised to date, including our recent €5M seed round, we are scaling our team, deepening our partnerships, and preparing our first commercial products for market entry.

What problems are you solving?

AI data centers are hitting a physical limit when it comes to power. Today’s 54 VDC distribution cannot keep up with racks pushing beyond 200 kW. The current in the busbars, power density requirements, thermal constraints, and size limitations have become real bottlenecks.
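
A simple illustration of that busbar bottleneck (ignoring conversion losses and assuming a nominal 200 kW rack):

  rack_power_w = 200_000
  for bus_voltage_v in (54, 800):
      current_a = rack_power_w / bus_voltage_v
      print(f"{bus_voltage_v} V bus -> {current_a:,.0f} A")
  # Roughly 3,700 A at 54 V versus 250 A at 800 V, which is why higher-voltage
  # distribution relieves busbar sizing, connector ratings, and I^2*R losses.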

Scaling today’s AI compute infrastructure requires a fundamental change in how data centers are powered, and 800V HVDC power architectures are the first step on that path.

Our technology enables the transition to 800V HVDC architectures, an industry shift that is now accelerating across hyperscalers and accelerated compute vendors. Our Power IC solutions unlock new architecture possibilities which are the key to scaling the compute and power density for the next generation of AI factories.

What application areas are your strongest?

Our strongest application area is AI compute infrastructure, more specifically the power delivery path inside high-density, high-efficiency data centers moving to HVDC architectures.

That said, the underlying technology platform provides benefits for any application with extreme power demands, such as high-performance computing, EVs and advanced robotics. But our immediate focus is clear: enabling the rapidly growing ecosystem of 800V AI data centers.

What keeps your customers up at night?

Their main question is how to continue to scale compute without running into the limits of physics.

They are facing exponentially growing power demands, insufficient rack-level power density, rising thermal challenges, and pressure to deliver more performance per watt, while maintaining reliability and compute scalability.

The shift to 800V HVDC is happening because customers know the current approach cannot scale. The question is not whether the transition is coming, but how fast they can get there and with what technology. We provide them with the technology to cross that gap.

What does the competitive landscape look like and how do you differentiate?

The competitive landscape is a mix of traditional power semiconductor players and newer efforts focused on dense power conversion. But most existing solutions were never designed for the demands of HVDC power delivery in AI data centers. They are adaptations of legacy solutions.

We design our solutions from the ground up to be inherently scalable and meet the evolving demands of HVDC power architectures. We aim to provide the building blocks for the power architectures that AI infrastructure will rely on for the next decade.

Our differentiation is centered around three pillars:

  1. Power IC technology platform
    Silicon-proven and tailored for applications with extreme power demands and fast development cycles.
  2. Scalable power solutions
    Our power solutions are modular and scale in power, voltage, and conversion ratio to meet the growing demands for power and efficiency and the trend towards higher-voltage architectures.
  3. Architecture alignment with the development of next-generation AI data centers
    Alignment via industry partnerships, strategic providers, and development projects as members of the Open Compute Project (OCP) and Berkeley Power & Energy Center (BPEC), alongside prominent industry members like Nvidia, Google, Intel, Tesla, and Analog Devices.

How do customers normally engage with your company?

Our typical engagement model is collaborative. We work closely with hyperscalers, system vendors, and power architecture teams who are planning or already deploying the transition to HVDC.

This often begins with technical exploration and architecture definition, followed by co-development projects to tailor our Power IC solutions to their system-level requirements.

Because we operate at the intersection of semiconductor technology and data center system design, early engagement allows customers to shape the integration of our ICs into their next-generation racks and compute platforms. Our goal is to be a long-term partner and enabler, not just a component provider.

Also Read:

CEO Interview with Brandon Lucia of Efficient Computer

CEO Interview with Dr. Peng Zou of PowerLattice

CEO Interview with Roy Barnes of TPC