
Soitec’s “Engineering the Future” Event at Semicon West 2025

by Daniel Nenni on 09-23-2025 at 10:00 am


As part of the broader Semicon West ecosystem in Phoenix, Arizona, Soitec, a global leader in engineered substrates for semiconductors, will host an exclusive, invitation-only event titled Engineering the Future: Soitec Substrates Powering Technology Megatrends on Wednesday, October 8, 2025, from 2:30 PM to 6:00 PM MST.

Held at the Residence Inn by Marriott Phoenix Downtown (132 South Central Avenue, Phoenix, AZ 85004), this free, in-person gathering targets a select audience of about 50 semiconductor professionals, analysts, and investors. Spanning 3.5 hours, it blends presentations, expert panels, and networking to spotlight how Soitec’s innovative substrates are addressing critical industry challenges amid megatrends like 5G/6G connectivity, AI proliferation, and data center expansion.

The event underscores Soitec’s pivotal role in substrate engineering, which forms the foundational “canvas” for advanced chips. These materials enable higher performance, lower power consumption, and smaller form factors in next-gen devices. With the semiconductor sector facing supply chain strains, geopolitical tensions, and escalating demands for efficiency, Soitec positions its solutions—such as silicon-on-insulator (SOI) and strained silicon—as enablers for sustainable innovation. Expect deep dives into real-world applications, backed by data on market growth projections (e.g., RF markets exceeding $20B by 2030) and case studies from partners like Qualcomm and IBM.

Detailed Agenda
  • 2:30 PM – 3:00 PM: Welcome & Greetings Kick off with introductory remarks, setting the stage for Soitec’s vision in a rapidly evolving landscape. This casual opener fosters early connections among attendees.
  • 3:00 PM – 3:45 PM: Soitec Executive Insights A high-level presentation and panel featuring members of Soitec’s Executive Committee, led by CEO Pierre Barnabé. The session introduces key technology megatrends, including the shift toward heterogeneous integration and energy-efficient computing. Barnabé, a veteran in the field, will likely highlight Soitec’s R&D investments (over €300M annually) and recent milestones, such as advancements in 300mm wafer production for AI accelerators.
  • 3:45 PM – 5:00 PM: Industry Deep Dives & Panels The core of the event: Three focused sessions with market specialists, each 30-40 minutes followed by Q&A. Topics align with high-growth areas:
    1. RF Technologies for Smartphones: Exploring substrates for next-gen filters (e.g., BAW/TC-SAW) that boost 5G/6G signal integrity, reduce interference, and support mmWave bands. Panelists may discuss Qualcomm’s integration challenges and the $15B+ RF market.
    2. Optical Interconnects in Data Centers: Addressing photonics-enabled substrates for faster, low-latency links amid AI-driven data explosion. Expect talks on silicon photonics reducing power by 50% versus copper, with insights from hyperscalers like Google.
    3. Technologies for Edge AI Devices: Focusing on substrates optimizing on-device inference for wearables, drones, and IoT—tying into themes like ultra-low power (sub-1V operation) and thermal management. This resonates with the Edge AI surge, projected to hit $100B by 2028. A culminating panel synthesizes cross-topic synergies, debating supply chain resilience and U.S. CHIPS Act implications.
  • 5:00 PM – 6:00 PM: Networking Reception Wind down with appetizers and drinks, providing ample time for one-on-one discussions. This informal segment is ideal for forging partnerships, with Soitec execs circulating to address investor queries.

Organized under Soitec’s banner (a company with 2,200+ employees and roughly €0.9B in revenue in fiscal year 2024-2025), the event emphasizes actionable insights over hype. It’s not just a talk shop—attendees gain foresight into how substrates will underpin $1T+ in semiconductor value by 2030, per McKinsey estimates.

For registration, head to the Eventbrite Registration Page; spots are limited, so early RSVP is advised. Whether you’re tracking fab investments or scouting RF/AI plays, this is a prime opportunity to engage with substrate innovators shaping tomorrow’s tech stack.

About Soitec

Soitec (Euronext – Tech Leaders), a world leader in innovative semiconductor materials, has been developing cutting-edge products delivering both technological performance and energy efficiency for over 30 years. From its global headquarters in France, Soitec is expanding internationally with its unique solutions, and generated sales of 0.9 billion Euros in fiscal year 2024-2025. Soitec occupies a key position in the semiconductor value chain, serving three main strategic markets: Mobile Communications, Automotive and Industrial, and Edge and Cloud AI. The company relies on the talent and diversity of more than 2,200 employees, representing 50 different nationalities, working at its sites in Europe, the United States and Asia. Nearly 4,300 patents have been registered by Soitec.

Also Read:

How FD-SOI Powers the Future of AI in Automobiles

Powering the Future: How Engineered Substrates and Material Innovation Drive the Semiconductor Revolution

Soitec: Materializing Future Innovations in Semiconductors


The Impact of AI on Semiconductor Startups

by Kalar Rajendiran on 09-23-2025 at 6:00 am


The AI Infra Summit 2025 featured a panel conversation that captured both the semiconductor industry’s anxieties and its hopes. The session, titled “The Impact of AI on Semiconductor Startups,” examined how artificial intelligence is transforming not just what chips can do, but how we design them.

The backdrop is stark. Developing a leading-edge chip can take three to five years and cost over $100 million, even as the industry faces a projected shortage of one million skilled workers by 2030. Startups, without the vast data sets and large-scale engineering teams of well-established companies, face an especially steep climb. Could AI truly level the playing field?

Moderator Sally Ward-Foxton, senior reporter at EE Times, put that question to a well-rounded panel: Laura Swan, General Partner at Silicon Catalyst Ventures; Arun Venkatachar, Vice President of AI & Central Engineering at Synopsys; and Stelios Diamantidis, Chief Product Officer of CogniChip—an investor, a market leader, and a promising startup, respectively. Over the next 30 minutes, they painted a vivid picture of how AI is accelerating chip development, lowering barriers to entry, and expanding who can participate in the next era of hardware innovation.

Startups, Speed, and the Need for Both Giants and Upstarts

Sally opened with a sobering statistic: U.S. venture capital once funded nearly 200 semiconductor startups each year, but by 2010 that number had fallen to single digits. “Even if you have a brilliant idea and a committed team, you’re looking at three to five years from concept to product,” said Stelios. “Meanwhile, an AI application can scale to millions of users overnight. Investors compare those timelines and often decide hardware is too slow and too risky for a return on their investment.”

Yet, as Laura emphasized, startups remain indispensable. “Innovative ideas, early funding, and sheer speed of execution are the lifeblood of progress,” she said. Laura explained that Silicon Catalyst—a hybrid incubator, accelerator, and venture fund—holds a unique position in nurturing these young companies. “As much as startups can be the bane of established players, the industry needs both,” she added. Healthy competition depends on the creative spark of startups and the scale, resources, and stability of established companies. One cannot thrive without the other.

AI Inside the Design Flow

Arun described how Synopsys began introducing machine learning into its design tools almost a decade ago. “We started replacing decades-old heuristics with AI,” he said. “Today those algorithms optimize power, performance, and area, accelerate verification, and even shorten manufacturing test cycles. In some flows we’ve cut design times by up to 40 percent.”

This is not a minor efficiency tweak. Stelios sees it as an inflection point akin to the arrival of logic synthesis in the 1980s. By connecting architecture, design, verification, and manufacturing into a continuous AI-assisted process, productivity gains can cascade across the entire chip-development cycle.

Cloud as the Great Equalizer

A recurring theme was how cloud-based design amplifies AI’s impact. Instead of buying racks of servers and expensive perpetual EDA licenses, a startup can now log in from a laptop and rent state-of-the-art tools on demand. Stelios and Arun were in agreement on this. “I know the moderator would love for us to disagree,” Stelios said with a grin, “but we’re on exactly the same page. Cloud-based design is essential if we want a healthier semiconductor ecosystem.”

By pushing sophisticated design environments to the cloud, companies can share resources, scale compute power instantly, and give even small teams access to capabilities once reserved for the largest players.

Human Ingenuity Still Matters

Despite all the talk of automation, no one on the panel predicted the death of engineering talent. “AI can remove drudgery and reduce errors,” said Laura, “but human creativity and architectural insight remain essential.”

Stelios invoked an evocative metaphor from Aart de Geus, founder of his former employer Synopsys, comparing great chip architects to the master builders of Europe’s cathedrals—people who understood the properties of every material and could see the entire structure from conception to completion. AI, he argued, will augment that holistic thinking rather than replace it.

Toward “Chips as a Service”

“What if building a chip were as easy as launching an app?” Sally asked the panel. If AI and cloud computing continue their rapid advance, the semiconductor world might soon resemble modern software development.

Laura offered a memorable quip: “We might eventually have something like a TSMC vending machine—not literally, of course, but a world where you feed in an idea, run it through automated flows, and pop out a prototype ready for market testing.”

The joke underscored a serious point. Faster, cheaper design cycles could entice investors back to hardware and open the door for entrepreneurs who today would never consider starting a chip company.

Summary

The AI Infra Summit panel delivered a clear message: artificial intelligence is reshaping semiconductor design from the ground up. AI-driven tools are compressing design and verification times, while cloud platforms are democratizing access to world-class design environments so that a small startup can compete with giants. At the same time, a healthy ecosystem depends on the coexistence of nimble startups and established companies—the former driving innovation and speed, the latter providing scale and resources. Human engineers remain central, guiding system-level decisions and bringing creative architecture to life.

Taken together, these forces could shrink chip-development timelines from years to mere months, making semiconductor ventures far more attractive to investors and far more accessible to entrepreneurs. Whether or not we ever see a “TSMC vending machine,” the vision is unmistakable: a future in which creating custom silicon is as agile, collaborative, and entrepreneurial as writing software—ushering in a true hardware renaissance.

Also Read:

CEO Interview with Andrew Skafel of Edgewater Wireless

Podcast EP278: Details of This Year’s Semiconductor Startups Contest with Silicon Catalyst’s Nick Kepler

Cutting Through the Fog: Hype versus Reality in Emerging Technologies


Yes Intel Should Go Private

by Daniel Nenni on 09-22-2025 at 10:00 am


Lip-Bu Tan started as Intel CEO on March 18th of this year and some very impressive changes have already taken place. Intel started the year with more than 100,000 employees and will finish the year with around 75,000. Reporting structures have been flattened and the Intel culture is being transformed back into an innovation-driven semiconductor manufacturing company.

The most impressive transformation, however, has been on the financial side. SoftBank and Nvidia have together invested $7B, and the US Government made an $8.9B equity investment for a roughly 10% stake. The biggest value here, in my opinion, is the trust placed in Lip-Bu Tan, absolutely.


What will Lip-Bu Tan do next?

Will there be more billion-dollar investments? Yes, I think there could be. Will big customers do business with Intel Foundry? Yes, I think they will. In fact, I know they will, but I totally respect Lip-Bu’s promise to keep wafer agreement negotiations private until the ink is dry, so that is all I will say about that. And for those analysts who keep asking that question, I would suggest they do the same and respect Lip-Bu Tan.

The latest Intel question running through the media: Should Intel go fully private via a government-led buyout of public shares, potentially with private equity or consortium partners? This would remove Intel from public markets, freeing it from quarterly reporting pressures and allowing bold, long-term moves. Based on Intel’s history, current challenges, and ecosystem chatter, yes, I think Intel should go private. Below, I’ll break down the pros, cons, and a possible path forward. Help me out in the comments.

Pros and Cons of Privatization for Intel

Privatization isn’t a silver bullet, but it aligns with Intel’s need for more changes in order to stay on the leading edge of semiconductor manufacturing. Here’s a balanced comparison drawing from recent developments:

  • Strategic Flexibility — Pros: Frees management from short-term Wall Street demands, enabling focus on long-term R&D and a full breakup into specialized units (foundry, design, Mobileye, Altera, Intel Capital, etc.). Experts argue this could create more value than the “conglomerate” model. Cons: Risk of bureaucratic inertia if government influence dominates; state-owned enterprises often prioritize politics over innovation, as seen in global examples like China’s SMIC.
  • Financial Stability — Pros: Access to patient capital (government/consortium) without dilution from public offerings; could fund $20B+ Ohio fabs without bankruptcy fears and unlock more value. Cons: A high buyout cost burdens taxpayers; past bailouts have had mixed returns (solar/auto). Recent stock surges suggest public markets still value the upside.
  • National Security & Competition — Pros: Bolsters U.S. chip independence amid China tensions; a private Intel foundry could serve the top fabless companies without conflicts, reducing reliance on Taiwan. Cons: Distorts markets by favoring Intel politically, harming competitors like AMD; possible foreign retaliation.
  • Talent & Operations — Pros: A long-term focus could stem talent drain; private status might attract top engineers with equity incentives tied to recovery. Cons: Government oversight risks eroding private-sector confidence.
  • Shareholder Value — Pros: A breakup could unlock billions in value; SoftBank and Nvidia’s stakes signal private interest in AI collaboration. Cons: Public investors lose liquidity; if the breakup fails, possible value destruction.

Overall, the pros outweigh the cons if privatization is temporary and mission-driven. Intel’s vertical integration, once a strength, now drags it down as design and manufacturing compete for resources. Public status amplifies scrutiny on past failures, but privatization could mimic Dell’s 2013 turnaround, where Michael Dell and Silver Lake took it private to refocus and list again at a much higher value.

Why Now? The Tipping Point in 2025
  • Government’s Foot in the Door: The 10% stake (no board seats, but voting alignment) blurs public-private lines. Given the hostile political environment, Intel is at risk of becoming a political football when there is an administration change.
  • Market Signals: SoftBank and Nvidia’s investments aren’t charity; they’re strategic bets on AI collaboration and, as I said, there could be more multi-billion-dollar investments.
  • Global Context: With TSMC’s limited US manufacturing and Samsung failing on the leading edge, the U.S. can’t afford Intel’s collapse. Privatization could create a “pure-play” U.S. foundry, echoing GE’s 2021-2024 breakup success with parts of GE now trading at premiums (GE Aerospace, GE HealthCare, and GE Vernova).

Bottom line: Intel’s survival demands escaping public-market and political quicksand. Privatization isn’t “handing over to China” (as some fear) but a U.S.-centric reset to reclaim leadership. Without it, Intel fades into the background; with it, Intel could power the next AI or quantum computing boom, absolutely.

Also Read:

AI Revives Chipmaking as Tech’s Core Engine

Advancing Semiconductor Design: Intel’s Foveros 2.5D Packaging Technology

Revolutionizing Processor Design: Intel’s Software Defined Super Cores


MediaTek Dimensity 9500 Unleashes Best-in-Class Performance, AI Experiences, and Power Efficiency for the Next Generation of Mobile Devices

by Daniel Nenni on 09-22-2025 at 6:00 am


In a bold move to dominate the premium mobile chipset market, MediaTek unveiled the Dimensity 9500 on September 22, 2025, from Shenzhen, China. This flagship SoC promises to elevate 5G smartphones with unparalleled performance, on-device AI capabilities, and energy efficiency, positioning MediaTek as the undisputed leader in gaming, compute, imaging, and artificial intelligence. As the world’s top supplier of smartphone SoCs, MediaTek’s latest innovation arrives at a pivotal moment when consumers demand devices that are not just powerful but intelligently adaptive to daily life.

At the heart of the Dimensity 9500, manufactured on TSMC’s N3P (enhanced performance) process, lies a third-generation All Big Core CPU architecture featuring a blazing-fast 4.21GHz ultra core, three premium cores, and four performance cores. Paired with four-lane UFS 4.1 storage, it delivers a staggering 32% uplift in single-core performance and 17% in multi-core tasks over its predecessor. Yet, the real magic is in efficiency: the ultra core slashes power consumption by up to 55% at peak loads, ensuring longer battery life without compromising speed.

“As AI becomes part of everyday life, consumers want devices that feel smarter, faster, and more personal without sacrificing battery life,” said JC Hsu, corporate senior vice president at MediaTek and general manager of the Wireless Communications Business Unit. “The MediaTek Dimensity 9500 delivers exactly that: Breakthrough on-device AI, top-tier performance and efficiency, and a full suite of premium experiences that our partners can bring to users around the world.”

Enhancing this prowess is a revamped cache and memory system, including the industry’s first 4-channel UFS 4.1 support. This doubles read/write speeds and accelerates large AI model loading by 40%, while the second-generation Dimensity scheduler ensures seamless responsiveness under heavy multitasking. Gamers will rejoice with the integration of the Arm G1-Ultra GPU, boasting 33% higher peak performance and 42% better power efficiency. It introduces double frame rate interpolation up to 120FPS, enabling console-level ray tracing. Through collaborations with top studios, the chipset supports MegaLights in Unreal Engine 5.6 and Nanite in Unreal Engine 5.5, unlocking AAA real-time rendering and immersive lighting for mobile titles.

AI takes center stage with the ninth-generation MediaTek NPU 990, powered by Generative AI Engine 2.0. This doubles compute power and pioneers BitNet 1.58-bit large model processing, cutting energy use by 33%. The ultra-efficient NPU draws over 56% less power at peak performance, enabling 100% faster output from 3-billion-parameter LLMs, 128K-token long-text processing, and the world’s first 4K ultra-high-definition image generation. The result? A truly “agentic” AI user experience—proactive, personalized, collaborative, evolving, and secure—that anticipates user needs in real time.

The Dimensity 9500 is the first to support an integrated compute-in-memory architecture for its newly-added Super Efficient NPU, significantly reducing power consumption and enabling AI models to run continuously. This advancement further enhances end-user experiences with more sophisticated proactive AI.

Imaging enthusiasts aren’t left behind. The Imagiq 1190 ISP supports RAW-domain pre-processing, up to 200MP capture, 30fps continuous focus tracking, a new portrait engine, and cinematic 4K 60FPS portrait videos. It also offers the latest MiraVision Adaptive Display technology, which dynamically adjusts contrast and color saturation based on ambient lighting, panel characteristics, and real-time content analysis. This ensures a clear viewing experience both outdoors in high-brightness scenarios — without overheating during prolonged use — and indoors in extremely dark environments, providing eye protection while maintaining clarity.

Connectivity shines too, with Bluetooth calls, Wi-Fi fast transfer, and multi-network intelligence for uninterrupted 5G/Wi-Fi handoffs. AI-driven communication tech reduces 5G power by 10% and Wi-Fi by 20%, with 5CC carrier aggregation boosting bandwidth 15%. Plus, AI positioning and network selection yield 20% higher accuracy and 50% lower latency than rivals.

MediaTek’s Dimensity 9500 stems from years of R&D and ecosystem partnerships with game studios, OEMs, and software giants. Flagship devices powered by this chipset are slated for Q4 2025 launches, promising to flood the market with smarter, greener flagships. For more on MediaTek’s 5G lineup, visit i.mediatek.com/mediatek-5g.

Press release URL

Also Read:

MediaTek Develops Chip Utilizing TSMC’s 2nm Process, Achieving Milestones in Performance and Power Efficiency

TSMC’s 2024 Sustainability Report: Pioneering a Greener Semiconductor Future

TSMC 2025 Update: Riding the AI Wave Amid Global Expansion


Video EP10: An Overview of Mach42’s AI Platform with Brett Larder

by Daniel Nenni on 09-19-2025 at 10:00 am

In this episode of the Semiconductor Insiders video series, Dan is joined by Brett Larder, co-founder and CTO at Mach42. Brett explains what Mach42’s AI technology can do and the benefits of using the platform to quickly analyze designs to find areas that may be out of spec and require more work. He describes the way Mach42 trains AI models and discusses some of the benefits for tasks such as IP reuse and design iteration.

Contact Mach42

The views, thoughts, and opinions expressed in these videos belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview with Adam Khan of Diamond Quanta

by Daniel Nenni on 09-19-2025 at 6:00 am



Key Takeaways

  • Diamond Quanta is bringing diamond out of the lab and into manufacturable devices today — converting decades of promise into practical impact for semiconductors, optics, and quantum. What was once thought ‘unavailable’ is becoming inevitable.
  • The company’s platform centers on proprietary doping and annealing methods that enable both n- and p-type behavior in diamond, supporting real devices (diodes, FETs, and quantum emitters).
  • Early collaborations with industry and research partners focus on high-temperature, high-voltage operation and reliability, targeting use in aerospace/defense, energy, and next-gen computing.
  • At Diamond Quanta, we call this vision The Physics of Forever — unlocking the enduring properties of diamond to enable a new era of performance and reliability.

Adam Khan is a vanguard in diamond semiconductor technology, celebrated for his foresight and expertise in the industry. As the founder of AKHAN Semiconductor, he was instrumental in innovating lab-grown diamond thin-films for a myriad of applications, from enhancing the durability of smartphone screens and lenses with Miraj Diamond Glass® to bolstering the survivability of aircraft with Miraj Diamond Optics®.

Tell us about your company.

Diamond Quanta is pioneering engineered diamond as a practical semiconductor and quantum material platform.

Our team combines decades of proprietary diamond growth and processing expertise with business development, IP strategy, and financial leadership. Our founders have committed capital and sweat equity, reflecting grit and full-time commitment. The goal: deliver devices that run cooler, last longer, and perform in places silicon, SiC, and GaN struggle—think high temperature, high field, high radiation, and high frequency environments. We’re building a platform that spans power electronics (diodes and FETs), quantum photonic sources, and ruggedized optical/sensor components. This platform embodies The Physics of Forever — our mission to make engineered diamond the foundation for the next era of electronics, optics, and quantum technologies, with physics-informed machine learning (ML) accelerating breakthroughs.

What problems are you solving?

Modern power and sensor systems are hitting thermal and reliability walls. Wide-bandgap incumbents have extended performance, but at the highest voltages, temperatures, and power densities, margins are thin. Diamond’s unique properties — a combination of ultra-wide bandgap, thermal conductivity, breakdown field, and carrier velocity — offer new headroom. Our focus is manufacturable doping and activation, so diamond can move from materials promise to device reality. For customers, this translates into significant economic value: up to 70% BOM savings, better reliability, and reduced cooling/qualification costs. In practice, this means up to 50% fewer cooling components are required in system designs, directly reducing weight, complexity, and cost. This is why leading OEMs are already engaging with Diamond Quanta — the industry cannot afford to wait.

What application areas are your strongest?

Our beachhead is display coatings, proving manufacturability and customer pull with Tier-1 glass suppliers. Beyond this zero-step, three near-term areas are:

  1. Power electronics for aerospace/defense, energy, and mobility where high-temperature, high-voltage switches reduce size, weight, and cooling needs.
  2. Quantum photonics with diamond color centers that enable secure comms, sensing, and computing.
  3. Extreme-environment sensing and optics such as high-temp pressure/current sensors and radiation-hard windows.

What keeps your customers up at night?

Reliability at temperature, efficiency under brutal duty cycles, and qualification risk. Many are boxed in by thermal budgets, derating, and complex cooling. They want devices that survive heat and radiation with predictable lifetime models—and they want a path to volume without a science-project supply chain.

What does the competitive landscape look like and how do you differentiate?

We respect SiC and GaN—they unlocked a generation of power density. Our differentiation is the engineered diamond device stack: co-doping, activation/defect-management anneals, and physics-first modeling. This enables both n- and p-type device functionality at higher breakdown and hotter junction operation while remaining compatible with mainstream fab flows. Compared to SiC and GaN, diamond offers >2x thermal conductivity and 10x higher heat tolerance, which translates into fewer design trade-offs at scale. We also maintain a strong IP portfolio across doping, annealing, and device architectures. Given recent M&A in coatings and semiconductor materials, we see optics as a divestiture option and the broader platform as a strategic acquisition target.

What new features/technology are you working on?

We are advancing on three fronts, with coatings for optics and quantum serving as a zero-step that proves manufacturability from the start:

  • Process integration: Ion-implantation-based co-doping with pulsed-laser and high-temperature anneals designed to minimize defect complexes while activating dopants.
  • Device prototypes: Next-gen Schottky and PiN diodes, followed by FET topologies that exploit diamond’s breakdown and thermal transport.
  • Quantum photonics: Engineered emitters and coupling structures targeting brighter, more uniform sources for integrated photonics.

How do customers normally engage with your company?

We run structured evaluation and co-development programs: NDAs and problem statements → sample/device evaluation or compact model sharing → joint reliability plans → pre-production pilots. For quantum photonics, we offer early-access engagements around emitter performance and packaging. For power, we collaborate on application-specific stress profiles and targets (voltage class, Tj, SOA, RDS(on)/VF, switching loss).

What results can you share today?

We’ve demonstrated device-relevant doping/activation and early diode behavior at temperatures where SiC and GaN derate. Independent labs and partners have validated high electron mobility (>555 cm²/V·s) and reduced defect scattering in co-doped diamond, as published in a peer-reviewed MRS Advances paper (Feb. 2025).[1] Building on this validation, our customer engagements show how these advances translate into system economics: up to 70% BOM savings, improved reliability, and fewer cooling components.

What’s next?

Our focus is converting prototypes to qualified parts in a few focused voltage/current classes, expanding our foundry-friendly process modules, and broadening our partner ecosystem—from epi and substrates to packaging and test. The through-line is the same: engineered diamond devices that simplify thermal design and push performance per watt in regimes that matter.

How can interested teams engage?

If you’re wrestling with heat, reliability, or extreme environments in power or sensing—or need practical quantum photonic sources—let’s compare requirements and agree on a pilot plan. We bring the materials, process, and device stack; you bring the mission profile. We have active customer discovery and development engagements (i.e., 20+ MNDAs, 2 MoUs, and a JTEA / SOW). Join the wave — Diamond Quanta is moving fast from promise to product. Let’s define your pilot plan now and help shape the next era of performance. Be part of The Physics of Forever.

Why did you join Silicon Catalyst and what are your goals in their 24-month program?

We joined Silicon Catalyst because it represents What’s Next in semiconductors — a platform proven to help deep-tech startups move from breakthrough science to market adoption. For Diamond Quanta, it’s not about incubation, it’s about accelerating impact through a network of industry partners, investors, and mentors.

Our goals in the 24-month program are clear: validate our engineered diamond platform in customer systems, secure early design-ins with Tier-1 partners, and build the operational and investor readiness to scale from prototypes into production.

Silicon Catalyst amplifies our mission — The Physics of Forever — making diamond the enduring foundation for electronics, optics, and quantum technologies that last longer, run cooler, and redefine performance.

  1. Khan, A.H., Kim, T.S., “Advanced co-doping techniques for enhanced charge transport in diamond-based materials,” MRS Advances, Feb. 2025. https://doi.org/10.1557/s43580-025-01206-x
Also Read:

CEO Interview with Andrew Skafel of Edgewater Wireless

Cutting Through the Fog: Hype versus Reality in Emerging Technologies

CEO Interview: John Chang of Jmem Technology Co., Ltd.


Rise Design Automation Webinar: SystemVerilog at the Core: Scalable Verification and Debug in HLS

Rise Design Automation Webinar: SystemVerilog at the Core: Scalable Verification and Debug in HLS
by Daniel Nenni on 09-18-2025 at 10:00 am

Rise SemiWiki Webinar October


Key Takeaways

– High-Level Synthesis (HLS) delivers not only design productivity and quality but also dramatic gains in verification speed and debug – and it delivers them today.
– Rise Design Automation uniquely enables SystemVerilog-based HLS and SystemVerilog verification, reusing proven verification infrastructure.
– The webinar features expert insights from verification methodology architect Mark Glasser and HLS expert Mike Fingeroff, who presents the technical content and a live demonstration.
– Attendees will learn how to unify design, verification, and debug across abstraction levels without duplicating effort.

Register Here

High-Level Synthesis (HLS) and raising design abstraction have been proven to deliver significant productivity and value to design teams — faster design entry, improved architectural exploration, and tighter system-level integration. These benefits are real, but experienced users and teams often cite a different advantage as the most valuable: verification.

By enabling earlier testing, running regressions 30×–1000× faster than RTL, and simplifying debug, HLS can dramatically accelerate verification. The challenge, however, is that existing HLS flows rely on C++ or SystemC, often leaving verification disconnected from established SystemVerilog/UVM environments. This gap forces teams to bridge methodologies on their own and uncover problems only after RTL is generated — slowing adoption and raising risk.

Rise Design Automation addresses this directly by making SystemVerilog a first-class citizen in HLS. In collaboration with SemiWiki, Rise will host a webinar that demonstrates how teams can apply familiar SystemVerilog and UVM methodologies consistently from high-level models through RTL, simplify debug, and unify design and verification across abstraction levels. The live event takes place on Wednesday, October 8, 2025, from 9–10 AM Pacific Time.

The Webinar Presenters:

The session begins with Mark Glasser, a distinguished verification architect and methodology expert. Mark co-invented both OVM and UVM and is the author of the recently published book, Next Level Testbenches: Design Patterns in SystemVerilog and UVM (2024). He will provide historical and forward-looking context on how verification methodology has evolved and the needs driving the move to higher abstraction.

The majority of the session will be presented by Mike Fingeroff, Chief of HLS at Rise DA. With over 25 years of experience and as the author of The High-Level Synthesis Blue Book, Mike specializes in HLS, SystemVerilog, SystemC, and performance modeling. He will deliver the technical deep dive and a live demonstration of Rise’s flow.

Key Topics

The webinar will address how Rise enables:

  • SystemVerilog for HLS – untimed and loosely timed modeling and the constructs synthesized into RTL.
  • Verification continuity – applying SystemVerilog methodologies consistently from high-level models through RTL.
  • Mixed-language and mixed-abstraction simulation – automatically generated adapters that bridge high-level and RTL models, and how to mix and match in verification, including UVM.
  • Advanced debug features – HL↔RTL correlation, transaction-level waveforms, RTL signal visibility, and synthesized assertions and coverage.
  • Familiar debug practices – including $display support and line-number annotations for RTL signals.

A highlight of the session will be a live demonstration, where attendees will see a design example progress from high-level verification through RTL, showcasing methodology reuse and debug continuity.

To Learn More

If you’re looking to accelerate verification, reduce duplicated effort, and understand how to apply your existing SystemVerilog/UVM expertise in an HLS context, this webinar will step you through the code.

Learn More and Register Here

Don’t miss the opportunity to see how SystemVerilog at the core of HLS can streamline your design process and verification flow.

About Rise Design Automation

Our mission at Rise Design Automation is to raise the level of abstraction of design and verification beyond RTL and see it adopted at scale across the industry, transforming how designs are done for years to come. In short: Adoption at Scale with Innovation at Scale.

Also Read:

Moving Beyond RTL at #62DAC

Generative AI Comes to High-Level Design

Upcoming Webinar: Accelerating Semiconductor Design with Generative AI and High-Level Abstraction


CEO Interview with Barun Kar of Upscale AI

CEO Interview with Barun Kar of Upscale AI
by Daniel Nenni on 09-18-2025 at 10:00 am

Barun Kar Headshot

Barun Kar is CEO of Upscale AI. He is also the co-founder of Auradine and previously served as COO. Barun has over 25 years of experience leading R&D organizations to deliver disruptive products and solutions, resulting in multi-billion-dollar revenue. Barun was on the founding team at Palo Alto Networks and served as the company’s Senior Vice President of Engineering where he spearheaded two acquisitions and led five post-merger integrations. Prior to that, Barun oversaw the entire Ethernet portfolio at Juniper Networks.

Tell us about your company

Upscale AI is developing open-standard, full-stack turnkey solutions for AI networking infrastructure. We’re redesigning the entire AI networking stack for ultra-low latency, offering next-level performance and scalability for AI training, inference, generative AI, edge computing, and cloud-scale deployments. Upscale AI just raised $100 million in seed funding, and we look forward to using it to bring our solutions to market and define the future of scalable, interoperable AI networking.

What problems are you solving?

It’s becoming more challenging for network infrastructure to keep up with AI/ML model sizes, inferencing workloads, token generation rates, and frequent model tuning with real-time data. To meet today’s networking challenges, the industry needs scalable, high-performance networking infrastructure built on open standards. Upscale AI’s open standard solutions meet the latest bandwidth requirements and low latency needs of AI workloads, while also offering customers more scalability. Plus, Upscale AI is providing companies with much more flexibility and interoperability compared to the closed, proprietary solutions that dominate the market today.

What application areas are your strongest?

Upscale AI’s silicon, systems, and software are specifically optimized to meet AI requirements today and in the future. Our ultra-low latency and high bandwidth networking fabric will not only drive the best xPU performance, but will also offer a huge reduction in total cost of ownership at the data center level. Our unified NOS, which is based on SAI/SONiC open standards, makes it easy for companies to scale their infrastructure as needed and perform in-service network upgrades to maximize uptime. Additionally, our networking and rack scale solutions enable companies to host an array of AI compute without vendor lock-in.

What keeps your customers up at night?

Increasing network bandwidth demands have put a lot of pressure on infrastructure to deliver high bandwidth, low latency, and reliable interconnectivity. While you often hear about how powerful AI applications are now accessible to anyone with an internet connection, AI network infrastructure remains limited to companies with a lot of capital. Furthermore, even hyperscalers and AI neocloud providers with deep pockets are limited to closed, proprietary solutions for AI network infrastructure. These companies don’t like being locked into a closed ecosystem. Upscale AI is giving its customers a new level of flexibility with our portfolio that is built using UALink, Ultra Ethernet, SONiC, SAI, and other cutting-edge open source technologies and open standards.

What does the competitive landscape look like and how do you differentiate?

Today there is no established AI network player offering an alternative to proprietary solutions. AI networking innovation should not be locked into a closed ecosystem. We strongly believe at Upscale AI that open standards are the future, and we’re working to democratize AI network infrastructure by pioneering open-standard networking technology. Our portfolio gives companies bring-your-own-compute flexibility to help realize the full potential of AI. Ours is a truly differentiated full-stack solution: engineered for AI-scale networking, vertically integrated from product to support, diversified for optionality, and built on open standards to power the next wave of AI infrastructure growth.

What new features/technology are you working on?

We’re working to bring to market full stack AI networking infrastructure, including robust silicon, systems, and software. Stay tuned for more updates on what’s coming out next.

How do customers normally engage with your company?

Upscale AI has a direct salesforce working with hyperscalers, neocloud providers, and other companies in the AI networking space. Prospective customers can reach out to our team via our website: https://upscaleai.com/.

Also Read:

CEO Interview with Adam Khan of Diamond Quanta
CEO Interview with Nir Minerbi of Classiq
CEO Interview with Russ Garcia with Menlo Micro


Eric Xu’s Keynote at Huawei Connect 2025: Redefining AI Infrastructure

Eric Xu’s Keynote at Huawei Connect 2025: Redefining AI Infrastructure
by Daniel Nenni on 09-18-2025 at 8:00 am

Eric Xu at Huawei Connect 2025

At Huawei Connect 2025, held in Shanghai, Eric Xu, the Rotating Chairman of Huawei, delivered a keynote speech that laid out the company’s ambitious roadmap for AI infrastructure, computing power, and ecosystem development. His speech reflected Huawei’s growing focus on building high-performance systems that can support the next generation of artificial intelligence while advancing self-reliant technology development.

Setting the Stage

Xu began his keynote by reflecting on the rapid evolution of AI models and how breakthroughs over the past year have pushed the boundaries of computing. He noted that the increasing complexity of large models, particularly in inference and recommendation workloads, demands not just more powerful chips, but fundamentally new computing architectures. According to Xu, AI infrastructure needs to be both scalable and efficient—capable of handling petabyte-scale data and millisecond-level inference.

He also reminded the audience of the five key priorities he had previously outlined, such as the need for sustainable compute power, better interconnect systems, and software-hardware co-optimization. This year’s keynote built upon those principles and introduced Huawei’s vision for its next-generation systems.

New Products and Roadmap

One of the most significant parts of Xu’s speech was the unveiling of Huawei’s updated roadmap for chips and AI computing platforms. Over the next three years, Huawei will roll out several generations of Ascend AI chips and Kunpeng general-purpose processors. Each generation is designed to increase performance and density while supporting the growing needs of training and inference workloads.

Xu introduced the TaiShan 950 SuperPoD, a general-purpose computing cluster based on Kunpeng processors. It offers pooled memory, high-performance storage, and support for mission-critical workloads such as databases, virtualization, and real-time analytics. The design is intended to support diverse computing needs, with significant improvements in memory efficiency and processing speed.

On the AI side, Xu announced the Atlas 950 and Atlas 960 SuperPoDs. These are high-density AI compute systems capable of scaling to tens of thousands of AI processors. The upcoming Atlas 960 SuperCluster will combine over one million NPUs and deliver computing power measured in zettaFLOPS. This marks a shift toward ultra-large-scale AI systems, designed to handle foundation models, search, recommendation, and hybrid workloads.

To enable this, Huawei developed UnifiedBus, a proprietary interconnect that supports high-bandwidth, low-latency communication between nodes. It also supports memory pooling and intelligent task coordination. According to Xu, this interconnect is critical for scaling AI systems efficiently and supporting hybrid PoDs that combine AI, CPU, and specialized compute.

Open Source and Ecosystem Strategy

Another core element of the keynote was Huawei’s strong push toward openness. Xu announced that the company will fully open-source its core AI software stack, including its CANN compiler and virtual instruction set. Toolchains, model kits, and the openPangu foundation models will also become available to developers and partners by the end of the year.

This move toward open-source infrastructure is part of Huawei’s strategy to lower adoption barriers and encourage collaboration across the AI ecosystem. Xu emphasized that AI innovation cannot happen in silos, and by opening up its tools and platforms, Huawei hopes to enable more organizations to build on its technology.

Strategic Implications

Xu’s keynote also carried strategic overtones, reflecting Huawei’s response to geopolitical challenges and technology restrictions. With limited access to advanced semiconductor manufacturing, Huawei is shifting its focus toward system-level innovation—building powerful infrastructure using available nodes while maximizing performance through architecture and software.

The message was clear: Huawei is betting on large-scale infrastructure, hybrid compute systems, and interconnect innovation to maintain competitiveness in AI. The company aims to provide alternatives to traditional U.S.-centric AI platforms and chip providers, especially in markets seeking greater technological independence.

Bottom line: Eric Xu’s keynote at Huawei Connect 2025 outlined a bold vision for the future of AI infrastructure. From SuperPoDs and interconnect breakthroughs to open-source initiatives, Huawei is positioning itself as a central player in the next phase of AI development. If the company can execute its ambitious roadmap and foster a strong ecosystem, it may reshape the global AI landscape—especially in regions looking to build homegrown compute capabilities.

The full transcript is here.

Also Read:

MediaTek Dimensity 9500 Unleashes Best-in-Class Performance, AI Experiences, and Power Efficiency for the Next Generation of Mobile Devices

AI Revives Chipmaking as Tech’s Core Engine

MediaTek Develops Chip Utilizing TSMC’s 2nm Process, Achieving Milestones in Performance and Power Efficiency



SiFive Launches Second-Generation Intelligence Family of RISC-V Cores

SiFive Launches Second-Generation Intelligence Family of RISC-V Cores
by Kalar Rajendiran on 09-18-2025 at 6:00 am

SiFive 2nd Gen Intelligence Family

SiFive, founded by the original creators of the RISC-V instruction set, has become the leading independent supplier of RISC-V processor IP. More than two billion devices already incorporate SiFive designs, ranging from camera controllers and SSDs to smartphones and automotive systems. The company no longer sells its own chips, choosing instead to license CPU IP and collaborate with silicon partners on development boards. This pure-play IP model allows SiFive to focus on innovation across its three core product families: Performance for high-end applications, Essential for embedded control, and Intelligence for AI-driven compute. The company also has an Automotive family of products with auto-grade safety and quality certifications.

The company recently announced the second generation of its Intelligence Family of processor IP cores, a complete update of its AI-focused X-Series. The new portfolio introduces the X100 series alongside upgrades to the X200, X300, and XM lines designed for low power and high performance in a small footprint for applications from the far edge to the data center.

On the eve of the AI Infra Summit 2025, I chatted with SiFive’s Martyn Stroeve, Vice President of Corporate Marketing, and Marisa Ahmad, Product Marketing Director, to gain deeper insights.

Two Popular X-Series Use Cases

While very flexible and versatile, the second-generation X-Series targets two distinct use cases. The first is as a standalone vector CPU, where the cores handle complex AI inference directly without the need for an external accelerator. A leading U.S. semiconductor company has already licensed the new X100 core for its next-generation edge-AI system-on-chips, relying on the core’s high-performance vector engine to process filters, transforms, and convolutions efficiently.

The second and increasingly critical application is as an Accelerator Control Unit. In this role, the X-Series core replaces the discrete DMA controllers and fixed-function state machines that traditionally orchestrate data movement in accelerators. Another top-tier U.S. semiconductor customer has adopted the X100 core to manage its industrial edge-AI accelerator, using the processor’s flexibility to control the customer’s matrix engine accelerator and to handle corner-case processing.

The Rising Importance of Accelerator Control

AI systems are becoming more complex, with vast data sets moving across heterogeneous compute fabrics. Conventional accelerators deliver raw performance but lack flexibility, often suffering from high-latency data transfers and complicated memory access hardware. SiFive’s Accelerator Control Unit concept addresses these pain points by embedding a fully programmable scalar/vector CPU within the accelerator itself. This design simplifies programming, reduces latency, and makes it easier to adapt to new AI models without extensive hardware redesign—an area where competitors such as Arm have scaled back their investment. Here is a link to a video discussing how Google leverages SiFive’s first-generation X280 as an AI compute host, providing flexible programming alongside Google’s MXU accelerator in the data center.

Four Key Innovations in the Second Generation

SiFive’s new Intelligence cores introduce four standout enhancements. First are the SSCI and VCIX co-processing interfaces, high-bandwidth links that provide direct access to scalar and vector registers for extremely low-latency communication with attached accelerators.

Second is a hardware exponential unit, which reduces the common exp() function operation from roughly fifteen instructions to a single instruction, an especially valuable improvement given that exponential function operations are second only to multiply–accumulate in AI compute workloads.
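To see why the exponential function ranks just behind multiply-accumulate in AI workloads, consider softmax, which appears in every attention layer and classifier head. The following is a minimal Python sketch, illustrative only and not SiFive code, showing the one-exp()-per-element pattern that a single-instruction hardware unit accelerates:

```python
import math

def softmax(logits):
    # One exp() call per element: in transformer attention this runs over
    # every (query, key) score, so exp() volume grows with the square of
    # sequence length -- which is why collapsing ~15 instructions into one
    # instruction pays off.
    m = max(logits)                          # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])             # normalized probabilities
```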

Third is a new memory-latency tolerance architecture, featuring deeper configurable vector load data queues and a loosely coupled scalar–vector pipeline to keep data flowing even when memory access is slow. Finally, the family adopts a more efficient memory subsystem, replacing private L2 caches with a customizable hierarchy that delivers higher capacity while using less silicon area.

Performance Compared to Arm Cortex-M85

SiFive highlighted benchmark data showing that the new X160 core delivers roughly twice the inference performance of Arm’s Cortex-M85 at comparable silicon area. Using MLPerf Tiny v1.2 workloads such as keyword spotting, visual wake-word detection, image classification, and anomaly detection, the X160 demonstrated relative performance ranging from about 148% to over 230% of the Cortex-M85 while maintaining the same footprint. This roughly two-times advantage underscores SiFive’s claim that its second-generation Intelligence cores can outpace the best current Arm microcontroller-class AI processors without demanding more die area or power budget.

A Complete AI Software Stack

The hardware is supported by a robust RISC-V AI software ecosystem. The stack includes an MLIR-based compiler toolchain, a SiFive-tuned LLVM backend, and a neural-network graph analyzer. A SiFive Kernel Library optimized for vector and matrix operations integrates with popular frameworks such as TensorFlow Lite, ONNX, and PyTorch. Customers can prototype on QEMU, FPGA, or RTL/SystemC simulators and seamlessly transition to production silicon, allowing rapid deployment of AI algorithms on SiFive’s IP.

Summary

By marrying a mature software platform with cutting-edge vector hardware, SiFive’s second-generation Intelligence Family positions RISC-V as a compelling alternative for next-generation AI processing. These new products all feature enhanced scalar and vector processing and, specifically with XM, matrix processing capabilities designed for modern AI workloads. All of these cores build on the company’s proven fourth-generation Essential architecture, providing the reliability valued by automotive and industrial customers while adding advanced features for AI workloads from edge to data center.

With initial design wins at two leading U.S. semiconductor companies and momentum across industries from automotive to data centers, the Intelligence Gen 2 products stand ready to power everything from tiny edge devices to massive training clusters—while setting a new performance bar by outclassing Arm’s Cortex-M85 in key AI inference tasks.

Access the press announcement here.

To learn more, visit SiFive’s product page.

Also Read:

Podcast EP197: A Tour of the RISC-V Movement and SiFive’s Contributions with Jack Kang

Enhancing RISC-V Vector Extensions to Accelerate Performance on ML Workloads

Enabling Edge AI Vision with RISC-V and a Silicon Platform