
S2C: Empowering Smarter Futures with Arm-Based Solutions

by Daniel Nenni on 03-07-2025 at 8:00 am


The tech world is sprinting toward a future where your fridge orders groceries, your car avoids traffic before you hit it, and security cameras don’t just watch—they understand. But behind these innovations lies a messy truth: building the brains for these smart systems is complicated.

Fresh off the 2024 Arm Tech Symposia circuit in Asia-Pacific, S2C is making waves with ready-to-deploy solutions for Smart Vision IoT and next-gen automotive tech. Let’s unpack how they’re helping developers cut through the noise and ship smarter products faster.

Smart Vision IoT: No More Reinventing the Wheel

Picture this: You’re designing a smart camera for retail stores. It needs to count customers, recognize loyal shoppers, and alert staff about empty shelves—all while sipping battery life. Sounds cool, right? Now imagine doing this from scratch in a market where every device has wildly different needs.

That’s where S2C’s Arm-Based Smart Vision Reference Design swoops in. Built on Arm’s rock-solid IP and S2C’s highly expandable prototyping system, this platform is like a Lego set for IoT innovators. Developers get pre-verified IPs that ditch months of grunt work, plus power-efficient performance for gadgets that never sleep. And here’s the kicker: FPGA prototyping lets teams test software before the chip even exists. No more crossing fingers during crunch time.

At the Arm Tech Symposia, S2C showcased two demos. One turned raw H.264 video into buttery-smooth playback, while another used AI to spot humans in live footage—think “smart surveillance meets Minority Report.” The message? Stop building foundations. Start stacking your genius on top of theirs.

Cars Are Computers Now. Let’s Treat Them That Way.

Your car’s codebase is now longer than War and Peace… times a million. Software-defined vehicles are turning dashboards into app stores and safety systems into AI co-pilots. But with great code comes great complexity. Automakers need tools to prototype, test, and iterate faster than a Tesla hitting Ludicrous Mode.

S2C’s answer? An Arm-Based Hybrid MCU Prototyping Platform that’s part sandbox, part crystal ball. It blends S2C’s Prodigy logic systems with Arm’s Cortex-R52+ processors—think of it as a playground for tomorrow’s car brains. Engineers can migrate from traditional distributed processing architectures, simulate multi-domain controllers, and test new software, all before committing to hardware.

Why This Collab Feels Like Cheat Codes for Developers

S2C and Arm aren’t just selling widgets—they’re handing out shortcuts. With over 600 global customers already using S2C’s platforms, this partnership packs three big punches:

  • Speed: Skip the “plumbing phase” with pre-validated designs.
  • Freedom: Bake in your IP without starting from zero.
  • Trust: Lean on Arm’s certifications (SystemReady™, PSA Certified) and S2C’s street cred.

As Zhao Yongchao from Arm China puts it: “S2C’s partnership with Arm China enables us to address the unique challenges of the IoT and automotive industries. With their leadership in EDA and prototyping, S2C is well-positioned to help clients innovate and meet the growing demands for smarter, more efficient solutions.”

The Takeaway? Future-Proofing Is Now a Plug-in

The race to innovate isn’t slowing down—but S2C just gave developers a jetpack. Whether you’re crafting AI-powered cameras or reimagining how cars think, their platforms slice through complexity like a hot knife through butter.

So here’s the real question: What could you build if someone else handled the heavy lifting?

Ready to turn your “someday” ideas into “shipped yesterday”? Let’s chat about how S2C can fast-track your next breakthrough with Arm-based solutions.

Craving more tech insights? Follow us for updates on IoT, automotive innovation, and the tools rewriting the rules of design.

Speak to an Expert

Also Read:

Accelerating FPGA-Based SoC Prototyping

Unlocking SoC Debugging Challenges: Paving the Way for Efficient Prototyping

Evolution of Prototyping in EDA


CEO Interview with Mike Noonen of Swave Photonics

by Daniel Nenni on 03-07-2025 at 6:00 am


Mike Noonen is CEO of Swave Photonics and has 30 years of experience leading technology businesses, resulting in two IPOs and multiple acquisitions. Most recently he was the CEO of MixComm, acquired by Sivers Semiconductor in early 2022.

Noonen was the Chairman and co-founder of Silicon Catalyst, the world’s first semiconductor incubator and EE Times 2015 Start-up of the Year. He has advised and led turnarounds at numerous innovative private and public companies such as Ambiq Micro, SiFive, Silego (acquired by Dialog Semiconductor), Mythic, Kilopass, Adapteva and Rambus.

In 2013 he was elected to the Global Semiconductor Alliance Board of Directors. Noonen holds multiple patents in the areas of Internet telephony and video communications.

Tell us about your company?

Swave Photonics is a fabless semiconductor company that designs Holographic eXtended Reality (HXR) chipsets with proprietary diffractive photonics technology. Using proven non-volatile Phase Change Materials (PCM) on a standard CMOS semiconductor process, Swave’s HXR technology creates high-resolution 3D images for Augmented Reality (AR) glasses and other applications. The HXR chip, with hundreds of millions to billions of nano-pixels, is illuminated by a low-power laser light source. The resulting images accurately portray an image’s depth in comparison to its surroundings, providing a natural and immersive viewing experience. Swave will deliver a reality-first user experience that integrates seamlessly with the physical world.

What problems are you solving?

Swave addresses the limitations of traditional AR form factors and displays. Its proprietary technology steers and sculpts lightwaves to achieve true holography. This allows the human brain and eyes to visually process the image naturally, solving the Vergence-Accommodation Conflict (VAC) — the phenomenon in existing AR options that causes headaches, nausea and fatigue. Swave’s HXR technology does not require a waveguide, reducing size, weight, and cost, and greatly improving overall efficiency, allowing for all-day use. By using holography, Swave is able to handle prescription-lens compatibility in software, making it easy to provide optical correction for those who need it without requiring lens inserts or other cumbersome solutions.

What application areas are your strongest?

Swave’s first application area is in AR, with a focus on enabling compact, lightweight smartglasses that offer all-day battery life and prescription compatibility. Its HXR chip is the first spatial light modulator specifically designed for digital holography and AI-powered spatial computing. This groundbreaking technology provides a reality-first user experience where digital elements seamlessly interact with and complement the physical world. While the initial focus is on AR glasses, the applications for Swave’s HXR technology extend across a wide range of industries, including healthcare, manufacturing, logistics, retail, communication, gaming, education, automotive and aerospace.

What keeps your customers up at night?

Customers are challenged with creating AR solutions that are functional and stylish and don’t sacrifice performance or comfort. Current spatial computing experiences often rely on bulky and uncomfortable form factors, which isolate users and deliver unnatural or predominantly digital experiences. Customers are concerned with finding a technology that enables lightweight, attractive form factors while maintaining high performance and extended battery life.

What does the competitive landscape look like and how do you differentiate?

The competitive landscape for spatial computing is challenging, with many companies working to overcome the limitations of current AR and XR technologies. Swave’s HXR technology stands out and differentiates itself for several reasons:

  • True holography: Swave achieves true 3D holography with up to 64 gigapixels, providing high quality, immersive images that accurately depict depth and context.

  • All-day use: Swave’s highly efficient holographic technology enables all-day use. Light-steering ensures all the light goes where it’s needed without throwing any away, its bistable pixels don’t need refreshing until the image actually changes, and no inefficient waveguide technology is required.

  • Form factor: Unlike traditional solutions that rely on bulky and complex optics, Swave’s HXR eliminates the need for waveguides, varifocal lenses, and stereoscopy, enabling glasses that look similar to those you already wear.

  • DynamicDepth: Swave’s patented DynamicDepth technology allows images to be portrayed at life-like distances.

What new features/technology are you working on?

Swave is actively working on advancing its HXR technology to bring it closer to commercialization.

Swave’s technology also has the potential to extend beyond smartglasses. Future applications include heads-up displays (HUDs) for automotive use that offer drivers an augmented and immersive visual experience that enhances safety and usability. In the long term, Swave aims to create immersive holographic displays that do not require glasses, paving the way for a revolutionary shift in how we interact with digital information.

How do customers normally engage with your company?

The company is now taking orders for HXR development kits. These development kits include hardware and software that allow companies to design, prototype, and test new AR hardware and form factors using Swave’s cutting-edge chipset, offering a streamlined and efficient pathway for companies to bring their products to the market. The Swave team also engages with new customers and provides demonstrations at industry events.

Also Read:

CEO Interview: With Fabrizio Del Maffeo of Axelera AI

CEO Interview with Pradyumna (Prady) Gupta of Infinita Lab

Executive Interview: Steve Howington of the Protective, Marine & High Performance Flooring Division of Sherwin-Williams


DVCon 2025: AI and the Future of Verification Take Center Stage

by Lauro Rizzatti on 03-06-2025 at 10:00 am


The 2025 Design and Verification Conference (DVCon) was a four-day event packed with insightful discussions, cutting-edge technology showcases, and thought-provoking debates. The conference agenda included a rich mix of tutorial sessions, a keynote presentation, a panel discussion, and an exhibit hall with Electronic Design Automation (EDA) vendors demonstrating their latest tools and engaging with customers.

AI Dominates the Discussion

A dominant theme throughout the event was Artificial Intelligence (AI), which was featured in over 60 technical papers, 18 technical posters, a dedicated keynote, and a high-profile panel discussion. The tutorial sessions included real-world customer case studies and pioneering university research, demonstrating how AI is reshaping verification methodologies and challenging traditional workflows.

Prime Time for Hardware-Assisted Verification

Hardware-assisted verification (HAV) was also a prominent topic across the technical sessions, the keynote, and the panel. As AI drives innovation across virtually every industry, from industrial and agriculture to banking, medicine, automotive, and mobile, verification engineers are grappling with the complexity of increasingly sophisticated AI processing hardware. The surge in AI-specific accelerators, custom chips, and groundbreaking computing architectures has amplified verification challenges, pushing traditional software-based verification methods to their limits.

In response, HAV platforms have become a cornerstone of modern verification strategies. Their ability to manage massive workloads, expedite test cycles, facilitate shift-left methodologies, and provide comprehensive system-level validation and debugging is increasingly vital. The surge in user interest, demonstrated by the strong attendance at conference technical sessions, underscores this trend. With the continued advancement of AI hardware, the necessity for HAV solutions will only intensify, cementing their role in guaranteeing the performance, accuracy, and reliability of next-generation computing systems.

The Rise of Portable Stimulus Technology

Following AI and HAV, portable stimulus technology emerged as another significant subject of interest. This methodology, now adopted by multiple companies, was explored in-depth, with discussions on internally developed frameworks and EDA vendor-driven solutions. Attendees witnessed how the industry is increasingly leveraging portable stimulus to improve test coverage and verification efficiency.

Other Key Topics

Beyond AI, HAV and portable stimulus, DVCon also highlighted significant advancements in:

  • UVM Deployment Case Studies: The industry continues to refine and expand the Universal Verification Methodology (UVM) framework, with companies sharing their successes and lessons learned.
  • A Broad Spectrum of Verification Topics: From formal verification techniques to new methodologies in functional safety and security, DVCon showcased a diverse array of technical advancements in design verification.

Keynote: The AI-Driven Revolution in Chip Design and the Rise of the AI Factory

The much-anticipated keynote, “AI Factories Will Drive the Re-invention of Chip Design, Verification, and Optimization,” delivered by Ravi Subramanian, Chief Product Management Officer and leader of the Product Management & Markets Group (PMG) at Synopsys, and Artour Levin, Vice President of AI Silicon at Microsoft, provided a compelling analysis of how artificial intelligence is fundamentally reshaping the semiconductor industry.

The speakers emphasized that AI is no longer just an enabler of innovation; rather, it has become the driving force behind a radical transformation in chip design and verification. This shift is fueled by the relentless expansion of large language models (LLMs) and their insatiable demand for high-performance AI accelerators. Ravi framed the magnitude of this evolution by comparing Moore’s Law, which historically predicted the doubling of transistor density approximately every 18 months, to the explosive growth in LLM parameters, which now double—or even quadruple—within just three to six months.
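The gap between those two doubling periods compounds dramatically. A back-of-the-envelope calculation makes the comparison concrete; the three-year horizon and the 4.5-month LLM doubling period (the midpoint of the cited three-to-six-month range) are assumptions chosen for illustration:

```python
# Compare two exponential growth curves with different doubling periods.
def growth_factor(months: float, doubling_period: float) -> float:
    """Multiplicative growth after `months`, given one doubling period."""
    return 2.0 ** (months / doubling_period)

horizon = 36  # three years, in months

moore = growth_factor(horizon, 18.0)  # transistor density, Moore's Law pace
llm = growth_factor(horizon, 4.5)     # LLM parameters, assumed 4.5-month doubling

print(f"Transistor density over {horizon} months: {moore:.0f}x")  # 4x
print(f"LLM parameters over {horizon} months: {llm:.0f}x")        # 256x
```

Over the same three years, transistor density grows 4x while model size grows 256x, which is the mismatch driving the verification and infrastructure pressures described below.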

The presentation featured detailed charts and striking data points that underscored the seismic changes underway. These insights illustrated the mounting complexity in verification, design optimization, and system-level architecture, highlighting how the industry is contending with an era where traditional methodologies are losing steam. The increasing demand for processing throughput presents one of the biggest engineering challenges, as AI workloads continue to scale exponentially. At the same time, memory bandwidth and capacity are struggling to keep pace with the ever-growing model sizes that demand faster access and larger storage capabilities. The tsunami of data required for AI training and inference is estimated to double the total amount of data traversing the Internet each year, adding pressure to an already strained infrastructure.

Another critical issue is interconnect bandwidth, which has become a major bottleneck as AI workloads require ultra-high-speed data movement between compute nodes.

The challenge of energy efficiency looms large, as the industry strives to balance performance gains with power constraints for sustainable scaling. Artour emphasized, “Managing power while scaling performance is critical. If power isn’t controlled, deploying these chips in data centers becomes infeasible. The industry must figure out ways to exponentially grow compute, memory bandwidth, and interconnect efficiency while keeping power consumption sustainable.” He further noted, “Historically, software entered the chip development cycle late, but AI accelerators are fundamentally software accelerators. This shift requires software modeling to begin at the architectural phase. Understanding workloads early enables more efficient hardware design, optimizing transistors and silicon resources to maximize performance while minimizing power. Additionally, today’s software stacks are highly complex. Waiting for silicon to develop software is no longer viable; pre-silicon software development is essential, adding another layer of design challenges.”

Compounding these technical challenges is the massive financial burden of developing next-generation AI hardware. The capital expenditure (CapEx) required to sustain innovation in this space is reaching unprecedented levels, forcing companies to make strategic, long-term investments in infrastructure.

Ravi summed up the momentous shift by declaring that we are on the cusp of a new industrial revolution, one defined not by traditional manufacturing but by AI-powered computation at an unprecedented scale. The AI Factory, a paradigm where AI not only designs chips but optimizes, verifies, and accelerates the next generation of semiconductor breakthroughs, is no longer a vision for the future. It is happening now.

With AI taking center stage in the reinvention of chip design, verification engineers, system architects, and semiconductor companies must adapt to a landscape that is evolving faster than ever before. The keynote left attendees with a powerful message: embracing AI is no longer optional, it is essential for those looking to stay ahead in the age of AI-driven silicon innovation.

Panel: Are AI Chips Harder to Verify?

One of the conference highlights was the panel discussion titled “Are AI Chips Harder to Verify?”

Moderated by Moshe Zalcberg, CEO of Veriest Solutions, the discussion brought together a distinguished panel of industry experts: Harry Foster, Chief Scientist, Verification at Siemens EDA; Ahmad Ammar, Technical Lead, AI, Infrastructure, and Methodology (AIM) at AMD; Stuart Lindsay, Principal Hardware EDA Methodology Engineer at Groq; Shuqing Zhao, Formal Verification Lead at Meta; and Shahriar Seyedhosseini, Generalist Engineer at MatX.

The panel unanimously acknowledged the substantial hurdles in verifying AI chips, but their detailed analysis revealed nuanced perspectives.

Harry Foster (Siemens EDA) highlighted a crucial divergence from traditional SoC verification. He emphasized that AI architectures operate on probabilistic principles, contrasting sharply with the deterministic nature of conventional designs. This shift implies that AI chips aim for “approximate correctness” within acceptable thresholds, rather than strict pass/fail outcomes, thereby necessitating a fundamental recalibration of verification methodologies. How, he questioned, do you effectively verify a system that inherently operates on non-deterministic principles?
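The shift Foster describes, from bit-exact pass/fail to correctness within a tolerance, can be made concrete with a toy checker. Everything below (function names, the 1% threshold) is an illustrative assumption, not any panelist's actual methodology:

```python
import math

def check_exact(dut_out: int, golden: int) -> bool:
    """Traditional deterministic check: bit-exact match or fail."""
    return dut_out == golden

def check_approximate(dut_out: float, golden: float,
                      rel_tol: float = 1e-2) -> bool:
    """AI-style check: accept results within an agreed error threshold,
    e.g. to tolerate reduced-precision arithmetic in an accelerator."""
    return math.isclose(dut_out, golden, rel_tol=rel_tol)

# A reduced-precision result that a strict comparison rejects
# but a threshold-based comparison accepts:
assert not check_exact(1005, 1000)
assert check_approximate(1005.0, 1000.0)  # within the 1% threshold
```

The hard part, of course, is the question Foster raises: choosing and justifying that threshold for a system whose behavior is statistical rather than fixed.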

The discussion debated the limitations of existing EDA tools. While these tools have significantly advanced the verification of traditional chips through formal verification, simulation, and emulation, they struggle to adapt to the dynamic and adaptive behavior of AI accelerators. These accelerators, driven by constantly evolving learning models, diverse data distributions, and statistical inference, present a moving target for verification.

Ahmad Ammar brought attention to the scale of deployment, emphasizing that AI chips are typically deployed in massive clusters to handle demanding AI workloads. He pinpointed the difficulty of adapting those massive workloads onto individual chips or small subsets to achieve realistic verification.

Stuart Lindsay focused on the complexities of AI chip data paths, where the flow of data is not static but dynamically changes based on the parameter values being processed. This variability, coupled with mixed-precision operations where precision levels shift throughout the pipeline, adds significant complexity to modeling and prediction. Furthermore, the dynamic evolution of system states and the presence of feedback loops further complicate the verification process.

Shuqing Zhao championed the role of formal verification in AI chip validation, while acknowledging the need for adaptation to handle the probabilistic and approximation-driven nature of AI workloads.

The panel collectively recognized the imperative of adopting a “divide and conquer” strategy to manage the sheer complexity of AI chip verification.

A final, provocative question from DVCon Panel Chair Ambar Sarkar asked the panelists to rate the difficulty of AI chip verification on a scale of 0 to 100% (0 being traditional chips, 100% being twice as hard), and proposed his own estimate at just 5%. The panelists’ responses varied, reflecting their diverse perspectives. While most leaned towards the higher end of the scale, acknowledging the increased difficulty, Shahriar Seyedhosseini offered a contrasting view. He pointed out that, unlike general-purpose processors, AI workloads are statically compiled, which simplifies fine-tuning and coverage. This, he argued, offsets some of the added complexity, limiting the verification challenge to only 5% more than that of a traditional SoC. He also noted that AI chip verification is, in many ways, more enjoyable.

The panel concluded with a resounding acknowledgment that the increasing complexity of AI accelerators necessitates a fundamental rethinking of hardware verification. The industry must adapt and innovate to ensure the performance and correctness of these critical components in the evolving landscape of AI-driven computing.

Conclusion

DVCon 2025 delivered a comprehensive look at the future of design verification, with AI at the forefront of innovation. As verification engineers navigate new challenges in AI hardware, portable stimulus, and hardware-assisted verification, DVCon continues to be the premier platform for knowledge sharing and industry collaboration.

Also Read:

Synopsys Expands Hardware-Assisted Verification Portfolio to Address Growing Chip Complexity

How Synopsys Enables Gen AI on the Edge

What is Different About Synopsys’ Comprehensive, Scalable Solution for Fast Heterogeneous Integration


AlphaDesign AI Experts Wade into Design and Verification

by Bernard Murphy on 03-06-2025 at 6:00 am


I mentioned in an earlier blog that multiple presentations at DVCon 2025 went all-in on AI-assisted design and verification. AlphaDesign’s presentation was one such example, looking very much at top-down, AI-expert application of agentic flows to design and verification. AlphaDesign is a new startup out of UC Santa Barbara headed by William Wang (a Professor of AI with a track record of research engagements with Amazon, Intel, Nvidia, and others).

The role of AI experts in advancing AI for design and verification

The promise of AI in this domain is both exciting and concerning. Exciting because there is potential to revolutionize productivity. Concerning not for loss of jobs (that will never happen), but because AI is still viewed as about approximate, probabilistic answers while engineering is about precision; approximate may be helpful for beginners and quick starts but not for production quality. We will still lean heavily on production tools (synthesis, simulation, etc) to validate and optimize, initially all the way through the flow, likely moving later in the flow as we build confidence in the quality of AI-based design generation.

Yet if we want to seize this opportunity and not talk ourselves out of big advances before we start, we need native AI experts to be involved in this journey as much as native EDA experts. EDA teams with their own AI experts will continue to push from the bottom up, very much with a focus on near-term profitability in optimizing proven flows, because that’s how they can run a healthy business. Top-down AI experts meanwhile can push what could be possible in generation and analysis from natural language prompts/specs, and beyond. That’s where I see AlphaDesign fitting in.

Certainly trust will need to be built along that journey, initially in helping refine verification suites for improved coverage. And perhaps in generating snippets of RTL as designers start to become comfortable with that idea. Later becoming a more accomplished aid in the design and verification process. We’ve been down similar paths before in EDA. I see what AlphaDesign is proposing as yet another improvement in productivity, initially helping tune current flows, gradually switching us over to new ways of thinking about the design task.

Agentic flows

The company calls their solution ChipAgents™, reflecting that the approach uses LLM agents to accomplish a goal. An agent in this world incorporates planning (decomposing a task into subtasks and refining past action), memory (managing context over a long period of time), and tool use (for assessment on a proposed solution and to elaborate designs/tests).
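The planning/memory/tool-use decomposition can be sketched as a simple loop. Everything below (the class, the stubbed planner and tool) is a hypothetical illustration of the general agent pattern, not ChipAgents' actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    # Memory: context retained across steps, informing future actions.
    memory: list = field(default_factory=list)

    def plan(self, goal: str) -> list:
        """Planning: decompose the goal into subtasks.
        A real agent would ask an LLM; here it is stubbed."""
        return [f"{goal}: step {i}" for i in range(1, 4)]

    def use_tool(self, subtask: str) -> bool:
        """Tool use: stand-in for running a simulator or linter
        and scoring the proposed solution."""
        return "step" in subtask

    def run(self, goal: str) -> bool:
        for subtask in self.plan(goal):
            ok = self.use_tool(subtask)
            self.memory.append((subtask, ok))  # record outcome for refinement
            if not ok:
                return False
        return True

agent = Agent()
print(agent.run("improve testbench coverage"))  # True, 3 steps recorded
```

The essential point is the feedback loop: each tool result lands in memory, so the plan for the next step can be refined rather than generated blind, which is what separates an agent from a single LLM call.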

Agentic flows have been making big strides in the software engineering world. For automatically locating and fixing bugs in a software repository, the LLM-only success rate is pretty sad (<3%). Adding basic agent support improved that rate by 8X. The Amazon Q developer agent doubles that rate to 55%. A further refinement gets to 62%+. Not hands-free yet, but an impressive advance.

Of course this is for software which can draw on a massive training corpus. Hardware is much more difficult, not because we have to be more clever but because there is so little of it to use in training (by one estimate, SystemVerilog code amounts to 0.28% of the lines of code accessible in Python+Java+Go+Javascript.) Also toolchains in hardware design compound complexity over software engineering.

Progress

This is an early-stage company, receiving their seed funding round in August 2024. Initial staffing has drawn from UCSB graduates with a heavy emphasis on AI and data science training.

The first step has been to build a serious reference design they call ChipAgentsBench, curated from open-source projects like OpenTitan. Good move, since many GenAI demonstrations for design that I have seen so far have been based on toy examples. This reference has 2.8k SystemVerilog files, amounting to over 600k lines of code. AlphaDesign says it plans to open-source a subset of this design at some point.

Details on demonstrated agent capabilities are thin so far. There is a CoverAgent aiming to help improve coverage in testing. As a general concept, using AI to help improve bottom-up coverage is not new. What looks intriguing here in talking to a couple of the R&D folks is looking at coverage based on reading natural language specs. As an example, finding ways to boost coverage in error-handling logic is always challenging in bottom-up testing but may be easier to spot/exercise based on reading a spec.

Unfortunately I missed their talk, thanks to conflicts, so take my limited understanding with a grain of salt. The company says common use models in early engagements include generating DV documentation and code snippets for utility scripts. They also mention code summarization and RTL/testbench generation predicated on existing files and design verification IPs – all high-value targets if/when proven.

Definitely a company to watch. You can check out the website HERE.

Also Read:

An Imaginative Approach to AI-based Design

Powering the Future: How Engineered Substrates and Material Innovation Drive the Semiconductor Revolution

The Double-Edged Sword of AI Processors: Batch Sizes, Token Rates, and the Hardware Hurdles in Large Language Model Processing


CEO Interview with Pradyumna (Prady) Gupta of Infinita Lab

by Daniel Nenni on 03-05-2025 at 10:00 am


Dr. Pradyumna (Prady) Gupta is the Founder and Chief Scientist at Infinita Lab and Infinita Materials, pioneering advancements in materials testing and specialized chemicals for cutting-edge industries. A passionate advocate for onshoring critical manufacturing industries, such as semiconductors, back to America, Prady is dedicated to fostering innovation that strengthens domestic supply chains and reinforces technological sovereignty.

At Infinita Lab—the “Uber of Materials Testing”—Dr. Gupta leads a network of more than 2,000 labs that deliver solutions in metrology, materials testing, product validation, and ASTM/ISO standardized testing. Through Infinita Materials, he provides specialized chemicals and materials that power critical applications across semiconductors, electric vehicles, aerospace, and emerging technologies.

Prady’s career includes key roles at Saint-Gobain and Corning’s Gorilla Glass commercialization team. He co-founded multiple successful startups. An accomplished scientist, Prady holds several patents and has authored numerous research papers in materials science.

He holds an MBA from INSEAD / Wharton and a PhD in Materials Science, combining deep technical expertise with strategic business acumen. Prady’s entrepreneurial and scientific leadership continues to bridge today’s industrial needs with the innovations required to solve the challenges of tomorrow.

Tell us about your company.

I run two companies: Infinita Lab and Infinita Materials.

Infinita Lab is the “Uber” of materials testing, offering a comprehensive range of testing services. From advanced metrology techniques such as SEM, TEM, RBS, and XPS to environmental, dielectric, mechanical testing, and standardized ASTM or ISO testing, we provide it all through our network of 2,000+ accredited labs.

Our clients include industry leaders like Intel, Tesla, Applied Materials, and Lam Research, who rely on us to equip their engineers with a full spectrum of testing options. We are also the go-to lab for startups and smaller companies that lack in-house testing facilities.

Infinita Materials, on the other hand, specializes in delivering custom inorganic chemicals and sputtering targets for industries such as semiconductors, batteries, fuel cells, and electronics.

Clients value us because you’ll speak directly with a master-level materials scientist who can address your materials-related challenges—not just someone taking your contact information.

Based in Newark, California, we have a national reach thanks to our partnerships with accredited labs across the U.S. Our core services include nondestructive testing, advanced material characterization, chemistry analysis, vibration testing, and root cause failure analysis.

We collaborate with everyone from startups to Fortune 500 companies, tailoring our solutions to meet their unique needs. As an innovation partner, we are as invested in our clients’ success as they are.

What problems are you solving?

Engineers often face the tedious and time-consuming task of finding labs capable of performing the specific materials testing they need. This discovery challenge inspired Infinita Lab, which was designed to streamline and simplify the process.

Infinita Materials addresses the challenge of designing new compositions for the electronics and semiconductor industries. It’s currently costly and difficult for engineers to find specialized composition-making facilities. Many Chinese manufacturers overlook these needs due to low ROI in small-volume compositions. Additionally, confidentiality and communication issues arise. We guarantee confidentiality and provide consultancy from masters-level materials scientists, ensuring specialized composition needs are met efficiently.

What application areas are your strongest?

Infinita Lab’s strength lies in adapting to the unique needs of diverse industries. We’ve built a solid reputation in high-tech sectors like semiconductors, nanotechnology, and energy storage. For instance, in the semiconductor space, we assist companies with thermal and failure analysis to meet rigorous performance and reliability standards.

In the energy sector, we focus on testing advanced battery technologies and solar panel materials, ensuring they’re efficient and durable in extreme conditions. Aerospace is another area of expertise, where we perform vibration and nondestructive testing to ensure components meet safety-critical requirements.

For Infinita Materials, we target semiconductor sputtering targets and specialized inorganic powders used in electronics, batteries, fuel cells, superconductors, and other cutting-edge applications. Additionally, we’re exploring additive manufacturing as a growing field, leveraging our expertise to innovate in 3D-printed materials for high-tech and industrial applications.

What keeps your customers up at night?

If I had to sum it up, I’d say it’s uncertainty—uncertainty about product performance, meeting regulatory standards, or potential failures in the field. For R&D teams, it’s the pressure of innovation—getting their product to market before the competition while ensuring reliability and performance. For manufacturers, it’s the fear of supply chain risks or defective materials. It’s about ensuring consistent quality. A single material defect in a supply chain can lead to catastrophic failures, recalls, or even safety risks. Engineers also face the tedious, time-consuming challenge of finding labs capable of performing the specific material testing they need. Compliance is another challenge: meeting ISO, ASTM, and other standards is equally demanding.

Our role is to make all of this easier. We help clients identify potential risks early, solve complex material-related problems, and ultimately give them confidence that their products will perform as expected. Whether it’s failure analysis for a semiconductor company or environmental durability testing for a solar manufacturer, we’re solving problems that can make or break our clients’ success.

We understand these pressures because we’ve seen them time and again. That’s why we focus on more than just testing—we help our clients see the bigger picture. Our detailed, transparent reports don’t just identify problems; they provide tailored solutions for different industries, giving our clients peace of mind knowing they’re staying ahead of the curve.

What does the competitive landscape look like and how do you differentiate?

The testing industry is diverse, with large players and smaller, specialized labs often dominating specific niches. This fragmented landscape frequently forces clients to juggle multiple providers to meet their varied testing needs.

At Infinita Lab, we stand apart by offering a comprehensive range of services through partnerships with accredited labs, ensuring everything is accessible under one roof.

Our primary competition comes from in-house labs. While convenient and easily accessible, in-house labs are often sub-optimal solutions that can significantly handicap engineers. Here’s why:

Lack of Incentives: Managers in in-house labs typically don’t have the incentive to optimize performance. I’ve seen firsthand how instruments are down half the time or turnaround times stretch to months.

Obsolete Infrastructure: Instruments and technician skills must keep pace with the rapid innovations in materials science. As technology advances at an accelerating rate, in-house labs are often obsolete even before they are set up. This trend is only going to accelerate with the advent of AI.

Convenience vs. Capability: While in-house labs are convenient, they often lack the resources and capacity to provide cutting-edge solutions. We are working to make external labs just as accessible as in-house facilities, without compromising on quality or innovation.

Our differentiation lies in the following:

Expert Access: A master-level materials scientist will personally pick up your call, offering the expertise and guidance you need.

Concierge Service: We provide a seamless and easy concierge service, ensuring your needs are promptly and efficiently met.

Comprehensive Solutions at a Fraction of the Cost: With our expansive network of over 2,000 labs in the US, we provide a complete range of services at a fraction of the cost compared to in-house labs or most external providers. Engineers can request any type of materials testing and receive it quickly and affordably—a powerful proposition that outpaces both in-house and external lab alternatives.

For Infinita Materials, the competitive landscape primarily features small Japanese companies that excel in specialized materials and chemicals. However, we differentiate ourselves through superior communication and a personalized approach. We provide clients with access to high-level experts, ensuring tailored discussions that lead to the creation of high-quality products. This personalized interaction sets us apart, offering both technical expertise and a consultative edge that many competitors lack.

What new features/technology are you working on?

At Infinita Lab and Materials, we’re always looking to push the boundaries. Right now, we’re investing in expanding our capabilities for next-generation materials. These materials hold enormous potential, but testing them requires specialized equipment and expertise—challenges we are stepping up to meet.
The evolving requirements for AI hardware, such as advanced packaging and memory, have introduced new testing challenges. With our unique vantage point of the testing industry as a whole, we are leading the charge to upgrade and prepare the industry for these upcoming demands.

At Infinita Lab, we are working on:

  • A UPS-like tracker to better predict turnaround times (TAT) for samples in the lab.
  • A simplified system for sending samples for analysis.
  • A system to provide more testing options and better match testing needs with appropriate testing methods.

With Infinita Materials, we are addressing the challenge of designing new compositions for the electronics and semiconductor industries. Currently, it is both costly and difficult for engineers to locate specialized composition-making shops. Chinese manufacturers often find small-volume specialized compositions unattractive in terms of ROI. Additionally, issues surrounding confidentiality and communication further complicate the process.
We guarantee confidentiality and provide access to master-level materials scientists who offer expert consultancy on specialized compositions, ensuring high-quality solutions tailored to our clients’ needs.

How do customers normally engage with your company?

We strive to make the process seamless and straightforward. It typically begins with a conversation where clients outline their problem or testing needs. What sets us apart is that when you call Infinita Lab, you are greeted by a master-level materials scientist—not just someone taking your contact information. This expert access ensures that your concerns are addressed immediately, with tailored guidance and actionable insights. Together, we define the project scope. If additional support is needed, our experts are available around the clock to provide guidance.

For Infinita Materials, clients often engage with us for specialized compositions, such as semiconductor sputtering targets or custom inorganic powders. Our master-level experts work closely to understand their unique requirements, guaranteeing confidentiality and precision throughout the process. Our clients value this highly personalized approach, which includes direct access to experts who can discuss and refine their needs in detail. We provide a seamless, efficient concierge experience, ensuring your needs are met promptly and without unnecessary hurdles. With our expansive network of over 2,000 labs in the U.S., we offer a complete range of services quickly and affordably. Engineers can request any type of materials testing, confident they’ll receive high-quality results at a fraction of the cost compared to in-house labs or external providers.

Many of our clients build long-term relationships with us, treating Infinita Lab and Infinita Materials as extensions of their teams. One of the most rewarding aspects of our work is seeing these partnerships empower our clients to achieve their goals and push the boundaries of innovation.

Also Read:

Executive Interview: Steve Howington of the High Performance Flooring Division of Sherwin-Williams

2025 Outlook with Sri Lakshmi Simhadri of MosChip

CEO Interview: Mouna Elkhatib of AONDevices


An Imaginative Approach to AI-based Design

An Imaginative Approach to AI-based Design
by Bernard Murphy on 03-05-2025 at 6:00 am

Rise DA advantages min

DVCon 2025 was unquestionably a forum for pulling out all the stops in AI-based (RTL) design and verification, particularly around generative AI and agentic methods. I heard three product pitches and a keynote and have been told that every AI talk was standing room only. A pitch from Rise-DA particularly appealed to me because they have clearly taken care to strike an intelligent balance between the promise of AI, the pros and cons of abstraction, and the real dynamics of introducing new methods into established, proven flows and training.

Abstraction made easy for designers, and for training

Given the heritage of Rise-DA, it shouldn’t be surprising that abstraction is important to this story. The CEO, Badru Agarwala, was GM of the Calypto System Division at Mentor; high-level design/synthesis is in his DNA. Yet C/C++ HLS is still a barrier to adoption for most RTL designers. Rise-DA simplifies adoption by adding untimed/loosely timed SystemVerilog as a supported behavioral description. Rise also supports mixed language, allowing for reuse across multiple design styles.

The second key idea concerns training. A challenge in applying LLMs to RTL design in any capacity is that the code-corpus on which a tool can train is much smaller than for software, further reduced since no enterprise wants to share their trade secrets. Commonly a design team can train on their own RTL corpus, maybe adding some very generic training from the tool vendor. Hardly an extensive training set for generative AI.

However, a high-level design tool can train on the full software corpus – C, C++, Python and more. There are some restrictions for synthesis that should be recognized, but those can be handled in fine-tuning and in linting to catch any escapes. What about synthesis from SystemVerilog – doesn’t that run into the same RTL corpus problem? According to the Rise folks, the syntax you will use in synthesizable behavioral SV is (modulo some syntactic sugar) little different from what you would use in C/C++. So SV users in this context benefit from the same extensive software-corpus training.

Connecting to production tooling through agents

Remember this is a high-level synthesis system. You’re going to use this flow to design new building blocks or subsystems from scratch. These might be for video/audio/radar/lidar pipelines or custom DNNs (or possibly a multi-layer perceptron). CPUs/GPUs/systolic arrays might be possible in principle but don’t play to the strengths of HLS.

The Rise flow will generate synthesizable RTL from your behavioral input, first through well-known HLS transformations (loop unrolling, pipeline scheduling, parallelism, etc.), then through technology/implementation mapping. Rise takes care of the first part, and they have integrated the Google XLS platform for the second part (in this context XLS is Google’s name for accelerated synthesis).
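As a toy illustration of the first of those transformations, the sketch below (plain Python, purely illustrative and not Rise’s tooling) shows a behavioral dot product and the same loop unrolled by a factor of four, exposing four independent multiply-accumulates per iteration that hardware could execute in parallel:

```python
# A sketch of what HLS loop unrolling does to a behavioral dot product.
# Illustrative only; an HLS tool applies this to the behavioral source
# automatically rather than asking the designer to rewrite loops by hand.

def dot_sequential(a, b):
    """Behavioral reference: one multiply-accumulate per iteration."""
    acc = 0
    for i in range(len(a)):
        acc += a[i] * b[i]
    return acc

def dot_unrolled4(a, b):
    """Unrolled by 4: each iteration exposes four independent MACs."""
    assert len(a) % 4 == 0, "sketch assumes length divisible by the unroll factor"
    acc0 = acc1 = acc2 = acc3 = 0
    for i in range(0, len(a), 4):
        acc0 += a[i] * b[i]
        acc1 += a[i + 1] * b[i + 1]
        acc2 += a[i + 2] * b[i + 2]
        acc3 += a[i + 3] * b[i + 3]
    return acc0 + acc1 + acc2 + acc3
```

The unrolled form computes the same result; what changes is the parallelism the scheduler can exploit when mapping the loop body to hardware.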

This flow is designed to be fast and lightweight, supporting quick-turnaround synthesis/implementation experiments to gauge performance and PPA. The Rise folks provided a couple of interesting insights here. They say the flow is “screaming fast”, allowing a lot of experimentation to find optimal solutions. A designer might counter that ultra-fast synthesis can’t be very optimized; isn’t that a problem? Rise would agree that it was in the early days of HLS, but today that optimization can be left to production synthesis tools, which are much better at handling that level of implementation detail.

To validate the correctness and optimality of generated solutions, the flow must run production tools such as synthesis or RTL simulation. This is handled through agents that launch those tools as and when you require. Rise feeds back estimates such as PPA to give you insight into how to tune the high-level model.

For verification, you will want to validate that generated RTL works the same way as the behavioral source against the behavioral tests you have been using in algorithm development. Rise instruments the generated RTL with transactors so you can plug the generated RTL back into those behavioral sims to check correspondence.
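The idea of checking correspondence against a golden model can be sketched generically. The Python fragment below is an illustrative stand-in, not Rise’s actual transactor-based setup, and both model functions are invented for the example:

```python
# Generic golden-model correspondence check (illustrative sketch, not Rise's
# flow): drive the same stimulus through the behavioral reference and a
# stand-in for the generated RTL, and collect any mismatches.

def golden_model(x):
    """Behavioral reference: a toy 8-bit datapath (invented for the example)."""
    return (x * 3 + 1) & 0xFF

def rtl_under_test(x):
    """Stand-in for the generated RTL: same function, different implementation."""
    return (x * 3 + 1) % 256

def check_correspondence(stimuli):
    """Return a list of (input, golden, rtl) triples where the models disagree."""
    return [(x, golden_model(x), rtl_under_test(x))
            for x in stimuli
            if golden_model(x) != rtl_under_test(x)]
```

An empty mismatch list over the behavioral test suite is the signal that the generated RTL corresponds to the source model; in the real flow the transactors do this comparison inside the behavioral simulation.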

You can also add asserts, cover statements, even display statements to your HLS model, which will be mapped through to the RTL in support of UVM-based testing. Rise will also add SV attributes (if requested) to the RTL to help you trace back and forth when you’re trying to localize a problem. Together these aids help you localize mismatches or unexpected behavior and guide further refinement of the HLS model.

Now add GenAI

With a solid foundation and a training scope that can leverage the full range of learning drawn from software engineering, you might understand why I find this direction appealing. Rise supports the kinds of generative code development you might see in a Copilot-style platform: statement completion, prompt-based code snippet generation, and retrieval-augmented generation (RAG) to find real code examples, documentation, test suggestions, and more. I believe RAG feedback is limited to customer in-house sources for obvious reasons.

I’m impressed. Well thought through, closely coupled to production tools and a way for RTL designers to progress past the C/C++ barrier. (I suspect even that dam will break as more system enterprises demand flows better suited to their ecosystems.) You can learn more HERE.

Also Read:

CEO Interview: Badru Agarwala of Rise Design Automation

SemiWiki Outlook 2025 with yieldHUB Founder & CEO John O’Donnell

TRNG for Automotive achieves ISO 26262 and ISO/SAE 21434 compliance


Executive Interview: Steve Howington of the Protective, Marine & High Performance Flooring Division of Sherwin-Williams

Executive Interview: Steve Howington of the Protective, Marine & High Performance Flooring Division of Sherwin-Williams
by Daniel Nenni on 03-04-2025 at 10:00 am

howington steve

Steve Howington is Global Vice President of Marketing for the Protective, Marine and High Performance Flooring Division of The Sherwin-Williams Company. During his 22 years with the company, Steve has held multiple commercial and business leadership roles in both the architectural and industrial groups within Sherwin-Williams.

Tell us about your company.

Sherwin-Williams is one of the largest paint companies in the world, with a portfolio that includes industrial coatings for advanced manufacturing facilities, like semiconductor fabs. Paint is often associated with aesthetics; however, safeguarding a facility’s investments and substrates requires protective coatings.

Our Protective, Marine & High Performance Flooring division has tailored advanced coating systems designed to meet the unique standards of semiconductor fabs. These more robust coating systems can be seamlessly integrated to create more efficient fab construction processes, compressing construction schedules and ultimately accelerating chip production. We have continually found ways to make the fab construction process more efficient for some of the largest fab projects across the country.

What problems are you solving?

We simplify fab construction which accelerates chip production. Fab construction is a complex, costly, and lengthy process involving many partners, suppliers, and trades. Through our industry and mega project experience, along with our global research and development teams, we’ve found many ways to simplify this process and accelerate construction timelines while improving the lifecycle of each fab, without sacrificing safety and sustainability. Additionally, our coatings and application processes ensure extended maintenance cycles and overall reduced costs over the lifetime of each area and asset.

We do this not only through our products, but also through the preconstruction guidance and expertise we provide to the largest semiconductor companies in the U.S. and worldwide.

What application areas are your strongest?

We work in several critical areas of the semiconductor fab. Our protective coatings are designed to protect both clean and non-clean zones, including walls, ceilings, floors, and structural elements. We are leaders in cleanroom-specific applications, providing low VOC, high-performance coatings that meet outgassing standards and prevent contamination.

Beyond clean zones, we deliver durable coatings for industrial wastewater treatment plants, central utility buildings, gas and chemical storage, and other campus buildings that support the fab’s operation and meet both performance and environmental standards. Our shop-applied steel and concrete protective coating solutions streamline construction by moving application offsite, reducing onsite labor and risk while accelerating timelines.

What keeps your customers up at night?

We’re all aware of the focus that is being placed on semiconductor chip production right now. With the worldwide implications of this technology, those who are responsible for making it must produce with speed and efficiency. That leads to our customers experiencing significant pressure to meet aggressive construction timelines and production targets. Tight deadlines, mixed with labor shortages and rising material costs, create a lot of stress for our customers as they navigate these issues.

What does the competitive landscape look like and how do you differentiate?

Participating in semiconductor fab construction mega projects demands deep knowledge of specific standards and requirements. There are plenty of coatings providers, but where we differentiate ourselves is our mindset: we don’t see ourselves as just a coatings company; we are a partner to the semiconductor industry and its stakeholders. That mindset sets us apart from other coatings companies. We partner with every stakeholder in the construction value chain, we know the technology and the industry, and we’re invested in the future of chip technology.

How do customers normally engage with your company?

Our Semiconductor Construction Solutions experts are available for direct consultation to answer any questions you may have, no matter what phase of the project you’re in. We also have resources on our website, such as whitepapers like Maximizing Cleanroom Performance and Optimizing Facility Management with High-Performance Coatings, which help customers understand how our solutions simplify the most intricate aspects of fab construction. We work to ensure our customers feel supported at every step of their project, ultimately making their fab construction process safer, faster, and simpler.

To learn more about Sherwin-Williams Protective, Marine & High Performance Flooring coatings, visit our website or follow us on LinkedIn.

Also Read:

2025 Outlook with Dr Josep Montanyà of Nanusens

CEO Interview: John Chang of Jmem Technology Co., Ltd.


Unlocking the cloud: A new era for post-tapeout flow for semiconductor manufacturing

Unlocking the cloud: A new era for post-tapeout flow for semiconductor manufacturing
by Bassem Riad on 03-04-2025 at 6:00 am

figure2 FullScale

As semiconductor chips shrink and design complexity skyrockets, managing post-tapeout flow (PTOF) jobs has become one of the most compute-intensive tasks in manufacturing. Advanced computational lithography demands an enormous amount of computing power, putting traditional in-house resources to the test. Enter the cloud—an agile, scalable solution with hundreds of compute options, set to revolutionize how foundries manage PTOF workloads.

The unpredictability problem: Bridging the gap in resources

For years, foundries have relied on powerful in-house resources to handle PTOF tasks. But PTOF workloads aren’t consistent—sometimes demand surges, leading to waiting queues that delay production, while at other times, costly resources sit idle. Expanding on-premises infrastructure to match peak demand is both costly and slow, often taking months to deploy. In an industry where every day counts, finding a flexible solution is essential. This is where the cloud steps in, offering dynamic scaling and the freedom to match resources with demand as needed.

Cloud elasticity: Pay only for what you need

This on-demand scaling means foundries no longer need to overprovision or commit to massive hardware investments upfront. Cloud platforms are transforming PTOF workflows by allowing foundries to pay only for what they use. With infrastructure managed by cloud providers, teams can shift their focus to developing applications and improving customer engagement while resources expand or contract as needed. Cloud services offer semiconductor companies access to a global network of tools, empowering them to adapt quickly and push the boundaries of innovation.

Scaling up seamlessly: Siemens EDA and AWS join forces

This vision of agility and scalability became a reality in July 2023, when Siemens EDA and AWS signed a Strategic Collaboration Agreement to accelerate EDA workloads in the cloud. Out of this partnership came Cloud Flight Plans—automation scripts and best practices that streamline EDA deployment on AWS. Now, semiconductor manufacturers can effortlessly scale up resources, deploying hundreds of thousands of cores on demand. No more waiting months to expand data centers; cloud resources are available instantly, without capital investments or maintenance.

Building the foundation: A reference architecture for PTOF in the cloud

This agility is enhanced by Siemens EDA’s Cloud Reference Environment, an architecture purpose-built to handle PTOF jobs on AWS. Designed with secure principles and optimized for seamless workload management, this setup dynamically scales resources based on current demand. A central management system allocates resources to high-priority jobs and quickly redirects any underutilized capacity. Real-time spending insights empower semiconductor companies to control their cloud costs, ensuring resources are optimized at every step and that budget surprises are a thing of the past.

Real-time cost control with CalCM+: Smart scaling for smarter budgets

But it’s not just about scaling—it’s also about managing those costs smartly. Enter CalCM+, a solution for maximizing the cloud efficiency of Calibre PTOF jobs. Central to CalCM+ is adaptive resource management, which monitors active jobs and allocates resources based on actual demand. This intelligent scaling ensures resources aren’t wasted on overprovisioning, keeping budgets lean.

At the heart of CalCM+ is the cost calculation app, offering real-time spending insights by integrating directly with AWS pricing and the Slurm scheduler. Teams can track job costs in real-time, make informed decisions, and optimize resources based on precise needs. A recent study (see chart below) highlights how CalCM+ delivers measurable cost savings through smart scaling and predictive insights, proving that cloud efficiency is as much about cost control as it is about performance.
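The underlying arithmetic of per-job cost tracking is straightforward. A minimal sketch might look like the following, where the instance names, hourly rates, and job records are invented placeholders, not actual AWS prices or Calibre data:

```python
# Minimal per-job cloud cost accounting sketch. All instance types, prices,
# and job records below are invented for illustration; a real implementation
# would pull live prices from the AWS pricing API and runtimes from the
# Slurm accounting database rather than hard-coding them.

HOURLY_RATE = {"c5.24xlarge": 4.08, "r5.12xlarge": 3.02}  # assumed USD/hour

def job_cost(instance_type, instance_count, runtime_hours):
    """Cost of one job = instances x hours x hourly price."""
    return HOURLY_RATE[instance_type] * instance_count * runtime_hours

jobs = [
    {"name": "opc_block_a", "type": "c5.24xlarge", "count": 10, "hours": 2.5},
    {"name": "mdp_block_b", "type": "r5.12xlarge", "count": 4,  "hours": 6.0},
]
total = sum(job_cost(j["type"], j["count"], j["hours"]) for j in jobs)
```

Even this crude tally, refreshed as jobs run, is enough to surface a runaway job before it becomes a budget surprise.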

Data-driven insights: Predicting the future of resource use

CalCM+ goes a step further with a data analysis module that records usage metrics and job metadata, enabling predictions for future jobs. By studying historical data, this tool provides insights into expected runtime and memory usage, allowing teams to pick the best instance types for each task.
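As a hypothetical sketch of that kind of data-driven selection (the job history, the crude fit, and the instance specifications below are all invented for illustration; the actual module is presumably far more sophisticated):

```python
# Illustrative sketch of history-driven instance selection: predict a new
# job's peak memory from past jobs, then pick the smallest instance type
# that fits with some headroom. All data and instance specs are invented.

history = [  # (layout_gb, peak_mem_gb) pairs from past jobs
    (10, 120), (20, 250), (40, 510), (80, 1000),
]
INSTANCE_MEM_GB = {"r5.4xlarge": 128, "r5.12xlarge": 384, "r5.24xlarge": 768}

def predict_peak_mem(layout_gb):
    """Crude linear fit through the origin: memory ~ k * layout size."""
    k = sum(m for _, m in history) / sum(s for s, _ in history)
    return k * layout_gb

def pick_instance(layout_gb, headroom=1.2):
    """Smallest instance whose memory covers the prediction plus headroom."""
    need = predict_peak_mem(layout_gb) * headroom
    fitting = [(mem, name) for name, mem in INSTANCE_MEM_GB.items() if mem >= need]
    return min(fitting)[1] if fitting else None
```

The payoff of even a rough predictor is avoiding both out-of-memory reruns (undersizing) and paying for memory that sits unused (oversizing).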

Lean Computing 

The AUTOREVOKECYCLE feature dynamically releases underutilized CPUs and reallocates them to high-demand jobs. This lean computing approach doesn’t just keep costs down—it ensures resources are used precisely where they’re needed, avoiding the waste that comes from overprovisioning. Figure 1 shows the effect of using the AUTOREVOKECYCLE feature.

Figure 1. The AUTOREVOKECYCLE feature dynamically releases underutilized CPUs and reallocates them to high-demand jobs.
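Conceptually, such a policy might be sketched as follows; this is an illustrative model of the idea, not the Calibre implementation:

```python
# Conceptual sketch of an AUTOREVOKECYCLE-style policy (not the Calibre
# implementation): periodically revoke CPUs from jobs whose utilization is
# low and grant them to jobs waiting for more workers.

def rebalance(jobs, util_threshold=0.5):
    """jobs: {name: {"cpus": int, "util": float, "wants": int}}, mutated in place.
    Returns the number of CPUs left idle after reallocation."""
    pool = 0
    for j in jobs.values():                 # revoke from underutilized jobs
        if j["util"] < util_threshold and j["cpus"] > 1:
            spare = j["cpus"] // 2          # give back half, keep the job alive
            j["cpus"] -= spare
            pool += spare
    for j in sorted(jobs.values(), key=lambda j: -j["wants"]):
        grant = min(pool, j["wants"])       # grant to the hungriest jobs first
        j["cpus"] += grant
        j["wants"] -= grant
        pool -= grant
    return pool
```

Run on a cycle, a policy like this keeps total CPU count flat while shifting capacity to where it is actually being consumed.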

Cost savings through the power of spot instances

Adding to the cost-saving toolkit is the cloud’s ability to offer dynamic pricing. Foundries can now use spot instances to run high-performance tasks at a fraction of the regular cost. These spot instances, ideal for peak demand, tap into unused cloud capacity at lower rates, helping companies stay within budget without compromising performance.
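A back-of-the-envelope sketch of the trade-off, with invented prices and a simple multiplier for rerun overhead caused by spot interruptions:

```python
# Spot vs. on-demand cost comparison sketch. The per-core-hour prices and
# the interruption-overhead factor are illustrative assumptions, not AWS
# quotes; spot discounts and interruption rates vary by instance and region.

def run_cost(core_hours, price_per_core_hour, overhead=1.0):
    """overhead > 1 models reruns caused by spot interruptions."""
    return core_hours * price_per_core_hour * overhead

on_demand = run_cost(10_000, 0.05)                  # assumed $0.05/core-hour
spot = run_cost(10_000, 0.015, overhead=1.1)        # ~70% discount, 10% rerun cost
savings = 1 - spot / on_demand
```

Even after charging the spot run a 10% rerun penalty, the assumed discount leaves the bill at roughly a third of the on-demand cost, which is why spot capacity is attractive for interruptible batch work like PTOF.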

FullScale processing: Speeding up time-to-tapeout

Cloud elasticity also shines with Calibre FullScale high-throughput processing capabilities, a compelling answer to the compute-intensive demands of PTOF. By enabling parallel lithography simulations, Calibre FullScale slashes job completion times, making faster tapeouts more attainable than ever. With the flexibility to adjust resources based on cost and performance needs, FullScale delivers optimal efficiency, ensuring every task is completed on schedule and with maximum precision (figure 2).

Figure 2. Calibre FullScale speeds time to tapeout.

Tapping into GPU power: Acceleration for compute-intensive tasks

For leading-edge technology nodes, the availability of GPU instances in the cloud is a game-changer. Compute-intensive tasks—like lithography, etch, and e-beam simulations—now run with hardware-accelerated performance, reducing runtimes dramatically. With GPU acceleration, manufacturers can conduct highly detailed simulations that were previously limited by on-premises constraints. The cloud’s GPU capabilities bring precision and scale, redefining what’s possible in PTOF simulations.

Cloud-native orchestration: The Kubernetes advantage

Orchestration systems like Kubernetes are also part of this cloud-driven transformation. Siemens EDA’s solutions leverage container orchestration to enable seamless job distribution across cloud resources. With Kubernetes automating deployment, scaling, and workload management, running complex Calibre PTOF jobs becomes effortless, whether on-premises or in the cloud. This cloud-native execution model maximizes resource use, delivering scalability, efficiency, and flexibility for semiconductor manufacturers.

A new era for semiconductor manufacturing

As semiconductor manufacturing embraces the cloud, a new era is taking shape—one where agility, efficiency, and cost control redefine the way PTOF tasks are managed. With the flexibility to scale on demand, optimize budgets, and orchestrate workloads seamlessly, cloud-based PTOF workflows are setting new standards. By tapping into cloud capabilities, container orchestration, and GPU resources, semiconductor manufacturers gain the edge needed to drive innovation, speed time-to-market, and thrive in an ever-evolving industry.

For a deep dive into this PTOF cloud flow, please see the technical paper, Crush Semi-manufacturing runtimes with Calibre in the cloud.

Bassem is a cloud product engineer specializing in scalable and cost-efficient computing solutions for semiconductor design and manufacturing. With expertise in Kubernetes, high-performance computing, and cloud infrastructure, Bassem focuses on optimizing post-tapeout workflows, EDA tool deployment, and hybrid cloud strategies.

Also Read:

Getting Faster DRC Results with a New Approach

Full Spectrum Transient Noise: A must have sign-off analysis for silicon success

PSS and UVM Work Together for System-Level Verification

Averting Hacks of PCIe® Transport using CMA/SPDM and Advanced Cryptographic Techniques


SemiWiki Outlook 2025 with yieldHUB Founder & CEO John O’Donnell

SemiWiki Outlook 2025 with yieldHUB Founder & CEO John O’Donnell
by Daniel Nenni on 03-03-2025 at 10:00 am

John O’Donnell YieldHUB SemiWiki

What was the most exciting high point of 2024 for your company?

One of the most exciting milestones of 2024 was the further expansion of our data science team, which allowed us to take a bold step toward fully integrating AI into our solutions. This is not only enhancing our offerings but also helping us grow within our existing customer base.

Another highlight for yieldHUB was attracting new and strategic customers, for example those developing AI chips and others involved in onshoring testing in the USA and Europe.

What was the biggest challenge your company faced in 2024?

The biggest challenge in 2024 was how to keep developing yieldHUB’s next-generation platform while meeting the increasing demand for our current platform as we added new customers.

How is your company’s work addressing this biggest challenge?

We expanded our R&D and customer success teams to accelerate the new platform’s progress while ensuring that our customers continued to receive top-tier support and service. Maintaining strong customer relationships and responsiveness remains a top priority.

What do you think the biggest growth area for 2025 will be, and why?

We have a new product coming out soon called yieldHUB Live, our AI-driven, tester-agnostic, real-time monitoring system for test and probe. It speeds up testing by recommending actions to the operator when issues arise. It also allows in-depth remote monitoring of the test/probe floor and tracks key parameters that reflect the integrity of testing and trimming. Demand for real-time insights is increasing, and we believe yieldHUB Live will be a game-changer for test houses: the time lots spend on hold will drop sharply, and fewer testers will need to be bought when volumes increase again.

How is your company’s work addressing this growth?

We’ve worked hard to ensure yieldHUB Live, although complex behind the scenes, is simple to implement on any tester type, yet also scalable and exceptionally reliable. Once set up, it can fan out to hundreds of testers within days, since it requires no additional hardware.

What conferences did you attend in 2024 and how was the traffic?

We participated in several key industry events in 2024, including ITC Test Week, Semicon West, International Microwave Symposium, PSECE, the Annual NMI Conference, IEEE VLSI Test Symposium, and the Semiconductor Wafer Test Expo. Attendance was strong across all these events, and we had great engagement with both existing and potential customers.

Will you attend conferences in 2025? Same or more?

Absolutely! We’ve already confirmed that we’ll be exhibiting at the NMI Annual Conference (UK), Semicon West, ITC, and PSECE (Philippines), with plans to attend additional events throughout the year. We recently became a member of Silicon Saxony so the plan is to expand our presence in Germany and the EU.

How do customers engage with your company?

We like to make sure that all yieldHUB customers receive exceptional support and value at every stage. Our dedicated Customer Success team is committed to providing proactive, personalized assistance, and our exclusive library of tools and resources empowers customers to maximize the benefits of our solutions.

New customers receive comprehensive online training, and all customers have access to our highly efficient ticketing system, ensuring that any inquiries or issues are addressed swiftly. In fact, our median first response time in 2024 was just 5 minutes, meaning customers hear from one of our engineers almost instantly.

https://www.yieldhub.com/request-a-demo/

Beyond reactive support, we prioritize ongoing engagement. Our Director of Customer Success, Michael Clarke, regularly connects with customers via face-to-face video calls to ensure they are fully supported and to gain valuable feedback.

The results speak for themselves: our customer satisfaction rating for closed tickets in 2024 was an impressive 95%, far exceeding the global benchmark of 74%. This level of responsiveness and care is another area that sets yieldHUB apart, and we’re committed to continuing this high standard in 2025 and beyond.

Additional questions or final comments?

We’re excited for what’s to come in the next two years. Our focus remains on delivering cutting-edge AI-driven data analytics that empower semiconductor companies, especially at the test stage, to improve efficiency and maximize profitability. We look forward to continuing our journey with customers, partners, and the industry as a whole!

Talk to a yield expert
Also Read:

yieldHUB Improves Semiconductor Product Quality for All

Podcast EP167: What is Dirty Data and How YieldHUB Helps Fix It With Carl Moore

Podcast EP181: A Tour of yieldHUB’s Operation and Impact with Carl Moore

Podcast EP243: What is Yield Management and Why it is Important for Success with Kevin Robinson

Podcast EP254: How Genealogy Correlation Can Uncover New Design Insights and improvements with yieldHUB’s Kevin Robinson


Powering the Future: How Engineered Substrates and Material Innovation Drive the Semiconductor Revolution

by Kalar Rajendiran on 03-03-2025 at 6:00 am

Substrate Vision Summit Engineered Substrate Panel Session

Engineered substrate technology is driving an evolution within the semiconductor industry. As Moore’s Law reaches its limits, the focus is shifting from traditional planar wafer scaling to innovative material engineering and 3D integration. Companies like Soitec, Intel, and Samsung are pioneering this transition, unlocking new levels of performance, efficiency, and scalability.

The topic of engineered substrates and material innovation was the focus of an interesting panel discussion at the Substrate Vision Summit 2025. Daniel Nenni, Founder of SemiWiki.com, moderated the session. SemiWiki.com is a popular online platform featuring an active discussion forum dedicated to semiconductors. Christophe Maleville, CTO & SEVP of Innovation at Soitec, David Thompson, VP Technology Research at Intel, and Kelvin Low, VP Market Intelligence & Business Development at Samsung Foundry, were the panelists.

Engineered Substrates: Changing the Competitive Landscape

One of the most compelling advantages of engineered substrates is the ability to preinstall critical performance elements into the wafer itself. By embedding functionality at the substrate level, chip designers can achieve significant improvements in efficiency and power savings.

A clear example came several years ago with RF-SOI wafers, where Soitec demonstrated that a 2G design achieved 3G-level performance simply by switching to an RF-SOI wafer. This breakthrough delivered GaAs-like performance without using GaAs technology, proving the potential of engineered wafers across various applications. Such advancements not only enhance performance but also accelerate product development cycles and reduce design complexity.

Addressing Challenges of Engineered Wafers

Semiconductor manufacturers face two major cost components: the cost of processing the wafer (internally or through procurement) and the cost of time (technology development cycles, learning curves, and integration challenges).

If every manufacturer were to independently develop SOI wafer technology, it would be an inefficient process with a steep learning curve. Instead, by relying on specialized providers like Soitec, foundries and chipmakers can source mature, high-performance engineered substrates and focus on differentiation at the chip level. This ecosystem-driven approach accelerates technology readiness and product development while ensuring cost efficiency.

Foundry Adoption and Market Demand

Foundries are recognizing the strategic importance of engineered substrates, particularly for Fully Depleted SOI (FD-SOI) technology. Samsung Foundry, a key player in this space, has already adopted 28FD-SOI in high-volume production at its Austin, TX fab, with customers like NXP and Lattice leveraging its benefits. Furthermore, Samsung is expanding its FD-SOI capacity to meet rising demand, while GlobalFoundries has also joined the ecosystem, reinforcing the technology’s viability. 18FD-SOI is on Samsung Foundry’s roadmap, with STMicroelectronics as the lead customer.

Despite early concerns about cost and supply-chain stability, FD-SOI has proven to be a compelling solution for applications that can exploit body biasing to achieve low power and high efficiency. Soitec has further addressed adoption challenges by investing in design infrastructure, including the acquisition of Dolphin Integration, to enhance support for SOI-based designs.

The 3D Future of Engineered Wafers

Both Soitec and Intel are embracing the 3D way of building engineered wafers. Soitec is advancing Smart Cut™ technology to enable precise layer transfer, facilitating hybrid bonding and wafer stacking for 3D integration. Intel, on the other hand, is developing Foveros 3D stacking, which enables transistors and logic units to be vertically integrated for improved performance and energy efficiency.

Unlike the traditional planar approach, where transistors are arranged side by side, the 3D method stacks layers vertically, reducing interconnect distances and power consumption. This shift is critical for sustaining Moore’s Law and ensuring future generations of semiconductors meet the growing demands of AI, high-performance computing, and edge applications.

Standardization and Scalability: Key to Mass Adoption

The conversation around wafer size standardization is evolving, but the real challenge lies in standardizing die-to-die interconnects for chiplet-based designs. UCIe (Universal Chiplet Interconnect Express) is leading this initiative, enabling interoperability across different foundries and manufacturers.

From an economic standpoint, scaling wafer size yields more dies per wafer, though for engineered materials like SiC or GaN the cost-benefit analysis varies. A 300mm GaN substrate, for example, can achieve a 20X figure-of-merit improvement over a 200mm GaN wafer, demonstrating the potential for engineered substrates to revolutionize power electronics and RF applications.
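To see why larger wafers yield disproportionately more dies, a widely used gross-die approximation subtracts an edge-loss term from the simple area ratio. The sketch below is illustrative only: the formula is a common industry rule of thumb, and the 100 mm² die size is a hypothetical assumption, not a figure from the panel.

```python
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Rough gross-die estimate: usable wafer area divided by die area,
    minus a term for partial dies lost at the wafer edge."""
    area_ratio = math.pi * (wafer_diameter_mm / 2) ** 2 / die_area_mm2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(area_ratio - edge_loss)

# Hypothetical 100 mm^2 die: moving from 200mm to 300mm wafers more than
# doubles gross dies, exceeding the 2.25x raw area ratio because edge
# loss shrinks relative to wafer area.
print(dies_per_wafer(300, 100))  # ~640 gross dies
print(dies_per_wafer(200, 100))  # ~269 gross dies
```

The edge-loss term is why the payoff from larger wafers grows faster than wafer area alone would suggest.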

Value Creation Beyond Die Cost

Ultimately, the value of engineered substrates extends beyond raw die cost. By enhancing performance, reducing power consumption, and enabling new system architectures, these wafers deliver system-wide cost savings and new application possibilities. Without this broader perspective, certain technologies—such as SiC for power electronics—would struggle to establish a strong business case based solely on die cost.

Summary

As the semiconductor industry moves toward a 3D future, engineered substrates are becoming a strategic enabler of next-generation computing. Preinstalling critical performance elements into the wafer itself is helping redefine what’s possible in chip design. Foundries are embracing FD-SOI, and the push for larger, high-performance wafers is opening the door for more efficient, scalable, and cost-effective semiconductor manufacturing.

With increasing demand for AI, 5G, automotive, and high-performance computing, engineered substrates will be at the heart of the semiconductor industry’s next wave of innovation. The companies that leverage this technology early will be the ones shaping the future of computing.

Also Read:

Soitec: Materializing Future Innovations in Semiconductors

I will see you at the Substrate Vision Summit in Santa Clara

EVs, Silicon Carbide & Soitec’s SmartSiC™: The High-Tech Spark Driving the Future (with a Twist!)