Anirudh Fireside Chats with Jensen and Lip-Bu at CadenceLIVE 2025
by Bernard Murphy on 06-04-2025 at 6:00 am


Anirudh (Cadence President and CEO) had two fireside chats during CadenceLIVE 2025, the first with Jensen Huang (Founder and CEO of NVIDIA) to kick off the show, and later in the day with Lip-Bu Tan (CEO of Intel). Of course Jensen and Lip-Bu also turn up at other big vendor shows, but I was reminded that there is something special about Cadence and Anirudh’s relationship with both CEOs. Cadence and NVIDIA are mutual customers and partners: Cadence’s latest hardware accelerator (the Millennium M2000 Supercomputer) is built on NVIDIA Blackwell, and the two collaborate on datacenter design and drug design technologies. On the other hand, as Cadence’s former CEO, Lip-Bu rescued the company from a slump, much as he now aims to do with Intel, before handing over the reins to his protégé (Anirudh). Lip-Bu hugged Anirudh when he came on stage, which says a lot about the close relationship between these two leaders. Cadence is tight with two of the world’s top semiconductor companies, notwithstanding Intel’s current trials. Good for Cadence and for Anirudh!

Fireside chat with Jensen

Too much material here to summarize in a short section, so instead I’ll select a few highlights. First, though officially announced in Anirudh’s following keynote, Cadence’s new AI accelerator, the Millennium M2000 Supercomputer, was revealed in this talk. This accelerator is based on the NVIDIA Blackwell platform, and Jensen sprang a surprise by ordering 10 NVL72 Millennium M2000s on the spot. He’s not just happy that Cadence built their machine on his product; he also wants to build the Cadence product into the NVIDIA datacenter in support of their joint digital twin initiatives.

Now imagine how far digital twins can permeate into every aspect of industrial design. The concept is already big in logic chip design through platforms like Cadence’s Palladium and Protium systems. In the EDA world we don’t call these platforms digital twins but Jensen does – NVIDIA has been using Palladium for years to design GPUs. Now platforms like Millennium can extend design support out to non-electronic domains through AI on top of principled simulation: datacenters, wind tunnels and turbines, factory design and automation, drug discovery and design, the possibilities are endless.

Which leads to a question – if all design is going to move in this direction, what infrastructure will support all this AI and compute? Datacenters certainly, but we’re talking about datacenters at hyperscale. AI factories – Jensen says they are now building gigawatt factories. The capital required for such a factory is $60B – on the same level as Boeing’s revenues for a year. Few enterprises can make investments at that scale; most will be using digital twins as a service to design their products, drugs, and manufacturing systems. Which makes talk of sovereign AI less hyperbolic than it might have appeared. There are moves in this direction already in the US, in Japan and in the UK.

You could argue that this is just fear of missing out (FOMO) but it’s now global FOMO. The success or failure of enterprises, even country GDPs can swing on being ready or being late to the party. AI has indeed made these interesting times to live in.

Fireside Chat with Lip-Bu

I’m far from the first and far from the most qualified to offer an opinion on Lip-Bu taking the CEO position at Intel, but this is my article, so here’s my opinion. That Intel is struggling is not news, and I’m sure a comfortable choice for many would have been a long-time semiconductor or foundry exec, but I’m also guessing that the board decided it was time to be bold. Lip-Bu has served on the Intel board, he turned around Cadence from a not dissimilar slump, and he has an enviable reputation in running his own venture fund (40+ years, over 500 startups, and 145 IPOs). He is also well connected to influential people in tech, whether measured by board memberships or by connections to capital. Not a bad start.

I confess I have a bias toward seeing Intel regain its design mojo (my domain). I’m not qualified to speak to the foundry side – I’ll leave commentary there to others. What I heard in the fireside chat is consistent with what I have heard of Lip-Bu’s management style at Cadence: a big focus on product and on delighting the customer by going the extra mile. He is already trimming layers of management so that he hears directly from R&D and sales. He also intends to instill a culture of humility, both towards customers and towards each other (quite a change for Intel?).

He intends to continue his VC work, not as a sideline but as a very active channel to spot big waves as they approach, and to stay in touch with the startup culture he wants to bring back to the company: an ability to move fast in promising directions with a minimum of approval layers and oversight.

We all see a trend to purpose-built silicon, especially around AI. Lip-Bu believes that Intel must adapt to this need, not only to delight customers but also to build trust. They must embrace opportunities for custom development with big customers. Even with my paltry knowledge of foundry opportunities, I know that Intel is well established in advanced packaging, an area where they have the potential to differentiate.

For EDA/SDA suppliers he suggests there is plenty of opportunity to help. In tooling he wants to see Intel supporting more compatibility with customer preferences, meaning support across the board wherever needed. And I’m sure he will be looking for out-of-the-box thinking from partners, opportunities not just to polish existing solutions but to truly explore multi-way opportunities – customer, Intel, foundry, EDA/IP supplier.

Opportunities to be bold everywhere, not just for Intel but also for their partners.

Also Read:

Anirudh Keynote at CadenceLIVE 2025 Reveals Millennium M2000

Optimizing an IR for Hardware Design. Innovation in Verification

LLMs Raise Game in Assertion Gen. Innovation in Verification


TCAD for 3D Silicon Simulation
by Daniel Payne on 06-03-2025 at 10:00 am


Semiconductor fabs aim to achieve high yields and to provide processes that attract design firms and win new design starts, but how does a fab deliver its process nodes in a timely manner without running lots of expensive silicon through the line? This is where simulation and TCAD tools come into play, and to learn more about this field I attended a Silvaco webinar. Mao Li from Silvaco presented the webinar and had over 50 slides to cover in under 45 minutes, so it was fast-paced.

Silvaco offers TCAD tools for accurate 3D simulation and device optimization, applicable to processes spanning CMOS, memory, RF and power devices. Logic CMOS technology in the past 20 years has gone from 90nm to the Angstrom era, using planar, FinFET, GAA and 3D structures, like CFET. Each generation of fab technology has presented unique technical challenges that required new modeling capabilities for simulation in TCAD tools.

Mr. Li talked about stress and HKMG (High-K Metal Gate) process challenges that required both 2D and 3D simulation approaches. FinFET technology required new transistor-level process simulation for structure, doping and stress effects; this is where Victory Process is used.

Following process simulation comes device simulation, where the transistor characteristics are predicted for NFET and PFET devices using Victory Device.

Going beyond individual transistors, they can simulate standard cell layouts in their 3D structure, followed by parasitic extraction with Victory RCx to enable the most accurate SPICE circuit simulations. Silvaco showed their flow from TCAD to SPICE, enabling Design Technology Co-Optimization (DTCO).

Memory technology was presented, starting with the history of DRAM evolution, and the pursuit of ever-smaller cell sizes. 3D modeling of saddle fin shapes is supported for DRAM cell arrays.

3D NAND process integration was explained using two engines: Victory Cell mode, using an explicit mesh, and Victory Process mode, using a level-set method. Stress simulation results for 3D NAND were presented, along with the cell device electrical characteristics.

High-frequency and high-performance applications like wireless communications, radar, satellite and space communications use RF-SOI process technology, and this is modeled and simulated with the Victory tools. High-voltage power devices in LDMOS technology were accurately modeled using 2D or 3D techniques.

The big picture from Silvaco is that their tools are used by both simulation and fab engineers to enable Fab Technology Co-Optimization (FTCO), from automating Design of Experiments using modeling of process, device and circuits, all the way to building a Digital Twin for fab engineers.

For process simulation each step is modeled: etch/deposit, implantation, diffusion and activation, and stress. Device simulation includes both a basic model and an advanced model. Parasitic extraction uses a 3D structure, then applies a field solver for the most accurate RC values. The Victory Process tool is continually improved and now includes two new diffusion models for best accuracy, especially for 3D FinFET devices. These models are extensively calibrated across doping species, implantation dose ranges, temperature ranges and annealing time ranges.

Development continues for advanced structure generation, along with speed ups in runtime performance. Support of orientation of the silicon lattice has been added, plus new quantization models, and advanced mobility effects.

Instead of developing a process through trial-and-error fab runs, this AI-driven FTCO approach saves engineering time, effort and cost. A case study for FinFET device performance was shared that used machine learning from the Victory DoE and Victory Analytics tools, allowing users to find the optimal input values to satisfy multiple outputs. Monte Carlo simulation was used for both margin analysis and Cp/Cpk characterization.
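For readers new to Cp/Cpk, here is a minimal Python sketch of how a Monte Carlo capability characterization works in principle. The metric (threshold voltage), spec limits, and distribution below are invented for illustration and are not from the Silvaco webinar.

```python
import numpy as np

# Hypothetical example: estimate Cp/Cpk for one device metric (here,
# threshold voltage) from Monte Carlo samples. All numbers are invented.
rng = np.random.default_rng(seed=1)
vth = rng.normal(loc=0.35, scale=0.012, size=10_000)  # simulated Vth samples (V)

LSL, USL = 0.30, 0.40                    # lower/upper spec limits (V)
mu, sigma = vth.mean(), vth.std(ddof=1)  # sample mean and standard deviation

cp = (USL - LSL) / (6 * sigma)               # potential capability (spread only)
cpk = min(USL - mu, mu - LSL) / (3 * sigma)  # actual capability (includes centering)

print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
```

In a flow like the one described above, the DoE/ML layer then searches for input settings that keep such capability metrics within target across multiple outputs at once.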

Summary

Silvaco has a long history in TCAD tools and over time their products have been updated to support fab processes across CMOS, memory, RF and Power devices. Using TCAD for 3D silicon simulation is a proven approach to save time to market. FTCO is really happening.

View the webinar recording online for more details.

Relaxation-Aware Programming in ReRAM: Evaluating and Optimizing Write Termination
by Admin on 06-03-2025 at 6:00 am

By Marcelo Cueto, R&D Engineer, Weebit Nano Ltd

Resistive RAM (ReRAM or RRAM) is the strongest candidate for next-generation non-volatile memory (NVM), combining fast switching speeds with low power consumption. New techniques for managing a memory phenomenon called ‘relaxation’ are making ReRAM more predictable — and easier to specify for real-world applications.

What is the relaxation problem in memory? Short-term conductance drift – known as ‘relaxation’ – presents a challenge for memory stability, especially in neuromorphic computing and multi-bit storage.

At the 2025 International Memory Workshop (IMW), a team from CEA-Leti, CEA-List and Weebit presented a poster session, “Relaxation-Aware Programming in RRAM: Evaluating and Optimizing Write Termination.” The team reported that Write Termination (WT), a widely used energy-saving technique, can make these relaxation effects worse.

So what can be done? Our team proposed a solution: a modest programming voltage overdrive that curbs drift without sacrificing the efficiency advantages of the WT technique.

Energy Savings Versus Stability

Write Termination improves programming efficiency by halting the SET (write) operation once the target current is reached, instead of using a fixed-duration pulse. This reduces both energy use and access times, supporting better endurance across ReRAM arrays.

It’s desirable, but problematic in action.

Tests on a 128kb ReRAM macro showed that unmodified WT increases conductance drift by about 50% compared to constant-duration programming.

In these tests, temperature amplified the effect: at 125°C, the memory window narrowed by 76% under WT, compared to a fixed SET pulse. Even at room temperature, degradation reached 31%.

Such drift risks destabilizing systems that depend on tight resistance margins, including neuromorphic processors and multi-level cell (MLC) storage schemes, where minor shifts can translate into computation errors or data loss.

The experiments used a testchip fabricated on 130nm CMOS, integrating the ReRAM array with a RISC-V subsystem for fine-grained programming control and data capture.

Conductance relaxation was tracked from microseconds to over 10,000 seconds post-programming. A high-speed embedded SRAM buffered short-term readouts, allowing detailed monitoring from 1µs to 1 second, while longer-term behavior was captured with staggered reads.

This statistically robust setup enabled precise analysis of both early and late-stage relaxation dynamics.

To measure stability, the researchers used a metric called the three-sigma memory window (MW₃σ). It looks at how tightly the memory cells hold their high and low resistance states, while ignoring extreme outliers.

When this window gets narrower, the difference between a “0” and a “1” becomes harder to detect — making it easier for errors to creep in during reads.

By focusing on MW₃σ, the team wasn’t just looking at averages — they were measuring how reliably the memory performs under real-world conditions, where even small variations can cause problems.
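As a rough illustration of the idea (not the paper’s exact definition or measured data), a three-sigma window can be computed as the gap between the 3σ tails of the two state distributions:

```python
import numpy as np

# Illustrative sketch of a three-sigma memory window (MW_3sigma): the gap
# between the worst-case 3-sigma tails of the low-resistance (LRS) and
# high-resistance (HRS) state distributions. Distributions are invented.
rng = np.random.default_rng(seed=7)
g_lrs = rng.normal(loc=40e-6, scale=3e-6, size=100_000)  # LRS conductance (S)
g_hrs = rng.normal(loc=5e-6, scale=1e-6, size=100_000)   # HRS conductance (S)

lrs_floor = g_lrs.mean() - 3 * g_lrs.std()    # weakest plausible LRS cell
hrs_ceiling = g_hrs.mean() + 3 * g_hrs.std()  # strongest plausible HRS cell

mw_3sigma = lrs_floor - hrs_ceiling  # positive => states still separable
print(f"MW_3sigma = {mw_3sigma * 1e6:.1f} uS")
```

Relaxation pulls the LRS tail downward over time, so this window shrinks; the measurements above quantify how much faster that happens under unmodified WT.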

Addressing Relaxation with Voltage Overdrive

Voltage overdrive is the practice of applying a slightly higher voltage than the minimum required to trigger a specific operation in a memory cell — in this case, the SET operation in ReRAM.

Write Termination cuts the SET pulse short as soon as the target current is reached. That saves energy, but it also means some memory cells are just barely SET. They’re fragile — sitting near the edge of their intended resistance range. That’s where relaxation drift kicks in: over time, conductance slips back toward its original state.

So, the team asked a logical question:

“What if we give the cell just a bit more voltage — enough to push it more firmly into its new state, but not so much that we burn energy or damage endurance?”

Instead of discarding WT, the team increased the SET voltage by 0.2 Arbitrary Units (AU) above the minimum requirement.
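To make the mechanism concrete, here is a toy Python sketch of a SET operation with write termination plus overdrive. Everything here (the cell model, thresholds, units) is invented for illustration; in the real macro, WT is implemented in analog circuitry, not software.

```python
class ToyCell:
    """Crude stand-in for a ReRAM bit: current ramps up while a SET
    voltage is applied, faster at higher voltage. Purely illustrative."""
    def __init__(self):
        self.current = 0.0
        self.voltage = 0.0

    def apply_voltage(self, v):
        self.voltage = v

    def step_1ns(self):
        self.current += 1e-6 * self.voltage  # toy filament-growth model


def set_with_wt(cell, v_set_min=1.0, overdrive=0.2,
                i_target=20e-6, max_pulse_ns=1000):
    """Terminate the SET pulse as soon as the target current is reached,
    but drive at v_set_min + overdrive so the cell lands firmly in its
    low-resistance state instead of just barely switching."""
    cell.apply_voltage(v_set_min + overdrive)
    t = 0
    while t < max_pulse_ns and cell.current < i_target:
        cell.step_1ns()
        t += 1
    cell.apply_voltage(0.0)  # WT: stop early, saving pulse energy
    return t


print(set_with_wt(ToyCell()), "ns until termination")
```

In the real device the overdrive costs roughly 20% more energy (per the results below) but leaves the cell farther from the fragile, barely-SET region where relaxation bites.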

Key results:

  • Relaxation dropped to levels comparable to constant-duration programming
  • Memory windows remained stable at both room and elevated temperatures
  • WT’s energy efficiency was mostly preserved, with only a ~20% increase in energy compared to unmodified WT

Modeling predicted that without overdrive, 50% of the array would show significant drift within a day. With overdrive, the same drift level would take more than 10 years, a timescale sufficient for most embedded and computing applications.

Balancing Energy and Stability

The modest voltage increase restored conductance stability without negating WT’s energy and speed benefits. Although the overdrive added some energy overhead, overall consumption remained lower than that of fixed-duration programming.

This adjustment offers a practical balance between robustness and efficiency, critical for commercial deployment.

As ReRAM, a prime candidate for neuromorphic and multi-bit storage applications, moves toward wider adoption, conductance drift will become a defining challenge.

The results presented at IMW 2025 show that simple device-level optimizations like voltage overdrive can deliver major gains without requiring disruptive architectural changes.

Check out more details of the research here.

Also Read:

Weebit Nano is at the Epicenter of the ReRAM Revolution

Emerging Memories Overview

Weebit Nano at the 2024 Design Automation Conference


Breker Verification Systems at the 2025 Design Automation Conference #62DAC
by Daniel Nenni on 06-02-2025 at 10:00 am


Breker Verification Systems Plans Demonstrations of its Complete Synthesis and SystemVIP Library and Solutions Portfolio

Attendees who step into the Breker Verification Systems booth during DAC (Booth #2520—second floor) will see demonstrations of its Trek Test Suite Synthesis and SystemVIP libraries and solutions portfolio.

They will learn about complex application processor projects across the SoC and RISC-V core verification stack, including data center, automotive, AI accelerator and consumer device applications where Breker’s Trek Test Suite Synthesis and Cache Coherency SystemVIP are deployed.

Breker’s SystemVIP library with Test Suite Synthesis enhances verification coverage while significantly reducing test development time for complex scenarios. It incorporates debug and coverage analysis and can be ported across simulation, emulation, prototyping, post-silicon and virtual platform environments.

SystemVIP includes prepackaged, automated, self-checking scenario verification libraries, while Test Suite Synthesis is AI-driven, offering high-coverage, corner case bug hunting, test content generation and abstract reusable portability across verification platforms.

Starting with randomized instruction generation and microarchitectural scenarios, SystemVIP includes unique tests that check all integrity levels, ensuring the smooth integration of the core into an SoC, regardless of architecture, and the evaluation of possible performance and power bottlenecks and functional issues.

The SystemVIP Scenario Library enables high-coverage test generation using AI planning algorithms, test cross combination and concurrent test scheduling. The scenario library includes tests for system coherency in multicore SoCs, Arm integration, RISC-V core integrity, power domain switching, hardware security access rules, automated packet generation and performance profiling.

Engineers developing complex RISC-V cores or leveraging them in their SoCs must take on new verification scenarios that require different techniques. Breker’s SystemVIP can be extended so that custom RISC-V instructions are fully incorporated into the complete test suite, crossed with other tests, and used for a variety of complex RISC-V core designs. Those include system coherency in multicore SoC integrity test sets, high-coverage core tests, power domain switching, hardware security access rules and automated packet generation.

The verification of processor cores that leverage the RISC-V Open Instruction Set Architecture (ISA) requires testing specialized, unique scenarios, making Breker’s RISC-V SystemVIP libraries ideal scenario platforms. The libraries use AI Planning Algorithms, cross-test multiplication and concurrent, multi-threaded scheduling to provide rigorous testing from randomized instructions to unique coherency, paging and other complex system integration validation.

SemiWiki readers are invited to arrange demonstrations or private meetings by sending email to info@brekersystems.com or by stopping by Booth #2520.

DAC Registration is Open

About Breker Verification Systems

Breker Verification Systems solves complex semiconductor challenges across the functional verification process from streamlining UVM-based testbench composition to execution for IP block verification, significantly enhancing SoC integration and firmware verification with automated solutions that provide test content portability and reuse. Breker solutions easily layer into existing environments and operate across simulation, emulation and prototyping, and post-silicon execution platforms.

Its Trek family is production-proven at leading semiconductor companies worldwide and enables design managers and verification engineers to realize measurable productivity gains, speed coverage closure and easy verification knowledge reuse. As a leader in the development of the Accellera Portable Stimulus Standard (PSS), privately held Breker has a reputation for dramatically reducing verification schedules in advanced development environments. Case studies that feature Altera (now Intel), Analog Devices, Broadcom, IBM and other companies leveraging Breker’s solutions are available on the Breker website.

 

Engage with Breker at:
Website:
 www.brekersystems.com
Twitter: @BrekerSystems
LinkedIn: https://www.linkedin.com/company/breker-verification-systems/
Facebook: https://www.facebook.com/BrekerSystems/

DAC registration is open.

Also Read:

RISC-V Virtualization and the Complexity of MMUs

How Breker is Helping to Solve the RISC-V Certification Problem

Breker Brings RISC-V Verification to the Next Level #61DAC


The SemiWiki 62nd DAC Preview
by Daniel Nenni on 06-02-2025 at 6:00 am


After being held in San Francisco since the pandemic, the beloved Design Automation Conference will be on the move again. In 2026 DAC will be held in Huntington Beach. For you non-California natives, Huntington Beach is a California city southeast of Los Angeles, known for its surf beaches and its long Huntington Beach Pier. I spent time there surfing, fishing, and sailing when I was growing up and have very fond memories.

DAC Registration is Open

This being my 41st DAC, I also have fond memories of San Francisco and look forward to this year’s conference. In my opinion it will be a big DAC since next year it will be down South. Additionally, all of the conferences have been bigger thus far this year and I expect no less for #DAC62.

SemiWiki will have a handful of bloggers covering the conference live so stay tuned for updates. SemiWiki will also be highlighting some of the standout companies exhibiting and supporting DAC. It takes an ecosystem to make these chips and SemiWiki is the largest semiconductor forum in the world, absolutely.

There are some SemiWiki-moderated panels this year. I am moderating a lunch panel on Tuesday: Can AI Cut Costs in Electronic Design & Verification While Accelerating Time-To-Market? Did I mention a FREE LUNCH and a chance to relax and learn? I hope to see you there.

Another panel is “Breaking the Design Automation Mold: Wild and Crazy Ideas for Global Optimization”. Bernard Murphy will be the moderator for that one.

DAC has a Chips and Systems theme, which is very appropriate since EDA and IP are a critical part of the chip and systems supply chain:

“Join 6,000+ designers, researchers, tool developers, and executives at the premiere global design automation event. DAC is where the electronic design ecosystem assembles each year to find the latest solutions and methodologies in AI, EDA, Chip Verification, and more.”

A new feature this year is the Chiplet Pavilion, which is sponsored by EE Times. I’m expecting high traffic for this second-floor section since chiplets are the future of semiconductor design:

“The EE Times Chiplets in-person conference & Chiplet Pavilion exhibition at DAC 2025 will discuss the progress of chiplet technologies and the chiplet supply chain in all their complexity. The agenda will examine the entire value chain and ecosystem, spanning from initial concept and design exploration to packaging and testing. It will also explore the emergence of initiatives aiming to establish relevant technical standards and a chiplet marketplace.”

The keynotes are always good at DAC. This year’s lineup includes AI related topics of course. It seems to be high level stuff but interesting just the same.

Even more interesting are the SkyTalks and the TechTalks, where deep semiconductor experience comes out to play. There is also an Analyst section, meaning people on the outside of the semiconductor industry looking in, which is always fun. At some point in time these analysts will be replaced by ChatGPT but for now they are live and in person.

The TechTalks have an AI theme and look the most interesting. Amit Gupta will talk about Unlocking the Power of AI in EDA. I worked for Amit at Solido Design and saw firsthand the early integration of AI into EDA. Amit is followed by Dr. John Linford from NVIDIA, an early Solido customer. John is followed by William Wang from ChipAgents. I met William at DVCon and have had the pleasure of working with his team this year. William introduced SemiWiki to agentic AI and how it will revolutionize chip design and verification.

That is just a quick update. Expect a lot more DAC content leading up to the event and live coverage during the conference. Register here for the I Love 62nd DAC.

Quick video: learn how DAC has influenced the semiconductor ecosystem over the last 60 years:

About DAC

DAC is recognized as the global event for chips to systems. DAC offers outstanding training, education, exhibits and superb networking opportunities for designers, researchers, tool developers and vendors. The conference is sponsored by the Association for Computing Machinery (ACM) and the Institute of Electrical and Electronics Engineers (IEEE) and is supported by ACM’s Special Interest Group on Design Automation (SIGDA) and IEEE’s Council on Electronic Design Automation (CEDA).

Also Read:

WEBINAR: PCIe 7.0? Understanding Why Now Is the Time to Transition

Intel Foundry is a Low Risk Alternative to TSMC

The Road to Innovation with Synopsys 224G PHY IP From Silicon to Scale: Synopsys 224G PHY Enables Next Gen Scaling Networks


CEO Interview with Kit Merker of Plainsight
by Daniel Nenni on 06-01-2025 at 11:00 am


Kit Merker is a technology industry leader with over 20 years of experience building software products. He serves as CEO of Plainsight Technologies and previously held senior positions at Nobl9, JFrog, Google and Microsoft.

Tell us about your company.

Plainsight is focused on making computer vision accessible and scalable for everyone. Our core technology is OpenFilter, an open-source framework that lets you build, deploy, and manage computer vision applications using modular components we call “filters.” A filter is essentially an abstraction that combines code and models, packaged as an app. You can string these filters together to create pipelines, and because they’re containerized, you can deploy them pretty much anywhere Docker runs. The idea is to provide a universal way to describe, manage, and scale vision workloads, moving from prototyping to production seamlessly. Our team comes from a background in distributed and cloud systems. My CTO was an early engineer on Google Dataflow, and I was an early product manager on Kubernetes, so we’ve brought that operational rigor to vision workloads. We’ve battle-tested this technology internally and with customers, and now we’re open sourcing it to benefit the broader community.
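As a purely hypothetical sketch of the filter abstraction Kit describes (this is not OpenFilter’s actual API; all names and signatures below are invented), a composable pipeline might look like:

```python
from dataclasses import dataclass
from typing import Callable, List

Frame = dict  # stand-in for an image frame plus metadata

@dataclass
class Filter:
    """One 'filter': code (and possibly a model) behind a single
    interface, so stages can be packaged and deployed independently."""
    name: str
    process: Callable[[Frame], Frame]

def run_pipeline(filters: List[Filter], frame: Frame) -> Frame:
    for f in filters:  # in production each stage could run in its own container
        frame = f.process(frame)
    return frame

# Example: crop -> detect -> count, each an independently deployable unit.
pipeline = [
    Filter("crop", lambda fr: {**fr, "roi": "cropped"}),
    Filter("detect", lambda fr: {**fr, "boxes": ["person", "forklift"]}),
    Filter("count", lambda fr: {**fr, "count": len(fr["boxes"])}),
]
print(run_pipeline(pipeline, {"pixels": "..."}))
```

The point of such an abstraction is that swapping a model, scaling a stage, or redeploying to a new site touches one filter rather than the whole application.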

What problems are you solving?

The biggest challenge in computer vision is the gap between prototyping and production. Many vision projects get stuck after the prototype phase because scaling and maintaining them is incredibly difficult. There aren’t enough vision engineers, and the infrastructure is complex and expensive. OpenFilter addresses this by providing a scale-agnostic way to describe and deploy vision applications. Developers can go from a working prototype to production without a complete rewrite, and the modular approach means updates, maintenance, and scaling are much simpler. We also help reduce infrastructure and inference costs by allowing smarter resource allocation and workload pooling. Ultimately, we’re unlocking latent demand for vision by making it easier and cheaper to build and deploy real-world applications.

What application areas are your strongest?

OpenFilter shines in large-scale, complex deployments. If you have hundreds or thousands of cameras, large amounts of data, or distributed environments, think retail chains, logistics, or manufacturing, our platform really stands out. It’s also great for building complex vision pipelines involving object detection, tracking, segmentation, and classification. The system integrates with a wide range of data sources, including RTSP streams, webcams (except on Mac), and IoT frameworks like MQTT. While you can use it for small projects, its real value comes when you need to scale, manage costs, and handle continuous updates across many locations or devices.

What keeps your customers up at night?

Our customers are concerned about how to take vision solutions from prototype to production, manage costs, especially GPU and inference costs, and keep everything updated as requirements evolve. Integration with existing data sources and business logic is another big pain point, as is the shortage of skilled vision engineers. They need to be able to scale quickly, manage infrastructure efficiently, and ensure their systems are maintainable over time. The complexity and cost of building and maintaining these systems is what keeps them up at night.

What does the competitive landscape look like and how do you differentiate?

The main competition we see is from homegrown solutions, where teams stitch together open-source libraries like OpenCV or YOLO with custom code. These systems are often brittle and hard to maintain, especially at scale. There are some commercial products out there, but few offer the open-source, modular, and scalable approach that OpenFilter does. Our differentiation comes from the filter abstraction, which lets you combine code and models into reusable, composable units. This makes it easy to move from prototype to production without rework, and the same abstractions work at any scale. We also offer both open-source and commercial support, with the commercial version adding features like supply chain security, telemetry, and proprietary model training. Our approach dramatically improves developer productivity and makes maintenance and scaling much easier.

What new features or technology are you working on?

We’re actively expanding model support. Right now we support PyTorch, but we plan to add other architectures. We’re also working on community edition Docker images to simplify deployment, and adding more downstream data connectors like Kafka, Postgres, and MongoDB. The commercial offering includes enhanced telemetry, supply chain security, and proprietary model training for advanced use cases. Looking ahead, we see potential to extend OpenFilter to other data modalities like audio, text, and geospatial data, and to integrate with agentic and generative AI systems for pre- and post-processing.

How do customers normally engage with your company?

Customers engage with us in several ways. Developers and vision engineers can download and use OpenFilter directly as open source, experimenting with their own models and data. Organizations that need enterprise features or support can license our commercial offering, VisionStack. We also work with services partners to deliver custom solutions and support for complex deployments. Community contributions are encouraged, and we’re building a community around reusable filters and best practices. For those moving from prototype to production, we provide expertise, patching, and support to help them succeed.

Is there anything else you want readers to know?

The biggest “aha” moment for me in computer vision was realizing the gap between supply and demand. There’s enormous latent demand for vision solutions, but the cost and complexity have limited adoption to only the highest-ROI projects. We believe the filter abstraction is the innovation that will unlock this value and democratize computer vision. By making it easier, cheaper, and more consistent to build and deploy vision applications, we hope to see much broader adoption and innovation in the field.

Also Read:

CEO Interview with Bjorn Kolbeck of Quobyte

Executive Interview with Mohan Iyer – Vice President and General Manager, Semiconductor Business Unit, Thermo Fisher Scientific

CEO Interview with Jason Lynch of Equal1


CEO Interview with Bjorn Kolbeck of Quobyte
by Daniel Nenni on 06-01-2025 at 10:00 am


Bjorn Kolbeck received a PhD in Computer Science from Humboldt University in Berlin. Bjorn had previously worked at HPC centers and at Google. His experience with hyperscale architectures led him to co-found Quobyte in 2013.

Tell us about your company?

Quobyte is scale-out storage, designed for massive scalability and extreme availability on hyperscaler principles. It’s designed to handle thousands of nodes using commodity hardware, and the fault-resilient architecture handles failures gracefully. For example, you can lose an entire server, a rack, or even an entire datacenter, and the cluster will continue to operate with no data loss. At the same time, the software was designed to be simple: it runs completely in user space with no kernel modules, custom drivers or networks, and can be run by very small teams, which need only basic Linux skills. It’s well suited for semiconductor design, as it excels at managing very large datasets while meeting performance criteria in simulation, design verification, and tape-out, among others. The single namespace handles both files and objects, eliminating data silos and facilitating collaboration.

What problems are you solving?

Quobyte addresses a customer’s ever-increasing need for performance and capacity in the face of static or even declining budgets, as well as staffing shortages. We do this by being extremely simple to operate and by running on cost-effective commodity hardware. This scalable performance helps maximize resource utilization for compute, GPUs and software licenses. Quobyte can run on-premises on commodity x86 or ARM servers, in the public cloud, or in hybrid cloud environments, depending on the customer’s needs.

What application areas are your strongest?

Our extreme simplicity, from initial download to scaling huge clusters with hundreds of petabytes, running on inexpensive commodity hardware as opposed to expensive appliances, together with our ability to operate in hybrid environments on premises and in the cloud, offers a very compelling value proposition. We are particularly well suited for AI applications, as well as traditional HPC applications in EDA, life sciences, financial services, and oil and gas.

What keeps your customers up at night?

We realize that our customers must deliver more performance to run larger jobs, hit tight deadlines, and provide more capacity and better availability with no increase in budget. We satisfy those needs.

What does the competitive landscape look like and how do you differentiate?

The market is dominated by expensive appliances and very complex software. These products are complex to administer and require a large staff, which can be cost prohibitive. We have a completely different approach. Our software is so easy to use and operate that you can download our free edition and be in production in less than an hour. We haven’t seen that from anyone else.

What new features/technology are you working on?

We are continuing to develop our product by making it even easier to use, making large-scale data storage even easier to manage, adding intelligence to our software, and automating optimizations.

How do customers normally engage with your company?

It’s very easy for a customer to engage us. Depending on how you like to learn about new technologies you can download our free edition and install it on your servers or in the cloud, or alternatively you can contact us and get a demo and a customized solution tailored to your use case, environment and needs.

Also Read:

Executive Interview with Mohan Iyer – Vice President and General Manager, Semiconductor Business Unit, Thermo Fisher Scientific

CEO Interview with Jason Lynch of Equal1

CEO Interview with Sébastien Dauvé of CEA-Leti


Video EP7: The impact of Undo’s Time Travel Debugging with Greg Law
by Daniel Nenni on 05-30-2025 at 10:00 am

In this episode of the Semiconductor Insiders video series, Dan is joined by Dr. Greg Law, CEO of Undo. Greg is a C++ debugging expert, well-known conference speaker, and the founder of Undo. He explains the history of Undo, initially a provider of software development and debugging tools for software vendors, and notes that due to the complex nature of the models driving chip design, Undo also supports the validation and debug of chip designs with a shift-left methodology.

He also describes the benefits of Undo’s time travel debugging on large chip designs, both to quickly identify root causes and collaborate across the team to build confidence in the entire design.

Contact Undo

The views, thoughts, and opinions expressed in these videos belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Podcast EP289: An Overview of How Highwire Helps to Deliver Advanced Fabs with Less Cost, Time and Risk with David Tibbetts
by Daniel Nenni on 05-30-2025 at 6:00 am

Dan is joined by David Tibbetts, Chief Safety Officer at Highwire. David is a Certified Safety Professional with 20+ years of occupational safety experience in both general industry and construction settings. He currently supports Highwire’s hiring partners in utilizing the Highwire suite of software solutions to identify, manage, and mitigate risks presented by contracting partners on construction projects and in existing operational facilities.

With David, Dan explores the unique model and technology that Highwire provides. Highwire delivers a platform that helps business owners manage the risks associated with the contractors they hire to support capital projects and the ongoing maintenance and management of existing facilities. The company serves many industries, including general construction, data centers, health, life sciences, manufacturing, property development, renewable energy, universities and semiconductors.

In the semiconductor area, Highwire works with the three largest semiconductor manufacturers in the world. Dan explores the technology and processes used by Highwire to optimize large-scale projects such as semiconductor fab and large data center construction. David describes how contractor assessment and risk management can have a significant impact on the time and cost associated with these mega projects. Beyond cost and time optimization, David also describes work to reduce injuries and fatalities in large projects. You can learn more about this unique company and how it impacts the semiconductor industry here.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Synopsys Addresses the Test Barrier for Heterogeneous Integration
by Mike Gianfagna on 05-29-2025 at 10:00 am


The trend is clear: AI and HPC are moving to chiplet-based, or heterogeneous, design to achieve the highest levels of performance, while traditional monolithic system-on-chip (SoC) designs struggle to scale. What is also clear is that the road to this new design style is not a smooth one. There are many challenges to overcome. Some are bigger versions of what came before; others are new, driven by a new way of designing and assembling semiconductor systems. Synopsys has been at the forefront of innovation to address these challenges and pave the way for future heterogeneous chip designs. The company recently published an informative article on a way to tame yet another challenge for this new design style. Let’s examine how Synopsys addresses the test barrier for heterogeneous integration.

An Overview of the Heterogeneous Design Landscape

There are many dimensions to the challenges of heterogeneous chip design. In a recent post I took a close look at several issues. What was clear is the need for a holistic approach to these design challenges. All of the items discussed in that post were tied together: a change in one aspect of the design affected others, and so the only path to success was to balance everything holistically.

Synopsys has developed a strong approach here with a comprehensive suite of solutions to address and balance various design parameters. The list includes architectural design, verification, implementation, and device health. My colleague Kalar Rajendiran also discussed the importance of next generation interconnects in this post. You can learn more about how multi-die packaging is enabling the next generation of AI SoCs and how Synopsys is helping to make this happen.

The Problem with Test for Heterogeneous Chip Design

It seems that this design style increases complexity across many vectors. In the new Synopsys article, advanced testing methodologies are discussed through the lens of needed improvements in automatic test equipment (ATE) to maintain signal integrity, accuracy, and performance.

The article points out that, in the heterogeneous design world, structural testing of devices requires high-bandwidth test data interfaces for at-speed testing, to confirm truly known-good devices (KGDs) and to achieve high test coverage and a low DPPM number in a reasonable timeframe. The piece notes that ensuring the highest test coverage for individual chiplets, before integrating them into complex 2.5D or 3D packages, is crucial to prevent yield fallout once they are combined with other chiplets in a complete package.

The article goes on to discuss the number of patterns required to test complex new devices. Pattern counts have increased significantly, and this is coupled with the fact that only a limited number of general-purpose IO (GPIO) pins are available to perform the tests. Furthermore, GPIO speed restricts test data throughput, reducing the coverage achievable when testing advanced designs efficiently. Though conventional high-speed I/O protocols (PCIe/USB) satisfy the bandwidth requirements, they require expensive hardware to set up.

Digging a bit deeper: in scenarios where the number of IO pins is limited, the bottleneck often lies in validation time, which extends the product development cycle and significantly increases test costs. The limited availability of high-bandwidth test access ports, especially in multi-die designs, highlights the need for a new kind of IO: one that can operate at much higher speeds than GPIO but adds no additional hardware components or complex protocol support in the initialization/calibration sequence, while maintaining signal integrity for the latest manufacturing processes.

How Synopsys Addresses the Test Barrier

The Synopsys article describes another well-designed, holistic approach to the problem. Synopsys High-Speed Test GPIOs (HSGPIO) are optimally designed to meet high-speed test requirements. This versatile offering ensures that single IOs can be multiplexed based on usage: as test ports during manufacturing test, for high-speed clock observation during debug, and as configurable GPIO during production, making them unique in the industry in supporting comprehensive test needs.

The article provides a comprehensive overview of the Synopsys solution that includes a detailed discussion of:

  • The benefits of high-speed test IO for simplified and reliable testing
  • How to enhance IO performance and optimize power with multiple modes

The graphic at the top of this post illustrates where the Synopsys HSGPIO fits.

If you are planning to utilize heterogeneous design for your next project, the new article from Synopsys is a must-read. Don’t get caught at the end of a complex design process with test headaches. You can access your copy of Synopsys Test IO to address the High-Performance Efficient Data Transmission and Testing requirements for HPC & AI Applications here. And that’s how Synopsys addresses the test barrier for heterogeneous integration.

Also Read:

Design-Technology Co-Optimization (DTCO) Accelerates Market Readiness of Angstrom-Scale Process Technologies

SNUG 2025: A Watershed Moment for EDA – Part 2

Automotive Functional Safety (FuSa) Challenges