
Podcast EP296: How Agentic and Autonomous Systems Make Scientists More Productive with SandboxAQ’s Tiffany Callahan

by Daniel Nenni on 07-09-2025 at 8:00 am

Dan is joined by Dr. Tiffany Callahan from SandboxAQ. As one of the early movers in the evolving sciences of computational biology, machine learning and artificial intelligence, Tiffany serves as the technical lead for agentic and autonomous systems at SandboxAQ. She has authored over 50 peer-reviewed publications, launched several high-impact open-source projects and holds multiple patents.

Dan explores the foundation of the agentic and autonomous systems SandboxAQ is developing with Tiffany. She describes the impact of large quantitative models, or LQMs, particularly in drug discovery and material science research. Unlike LLMs, which are trained on broad-based Internet data for text reasoning, LQMs are trained on first principles of physics, chemistry and engineering. This creates AI that can reason about the physical world. SandboxAQ aims to deploy this technology as an adjunct to existing research experts by simulating and predicting physical outcomes on a massive scale. This provides scientists with tools that are both grounded in physical science and generative, facilitating more targeted and efficient research.

You can learn more about this unique company and the impact it aims to have on advanced research here.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Insider Opinions on AI in EDA: Accellera Panel at DAC

by Bernard Murphy on 07-09-2025 at 6:00 am


In AI it is easy to be distracted by hype and miss the real advances in technology and adoption that are making a difference today. Accellera hosted a panel at DAC on just this topic, moderated by Dan Nenni (Mr. SemiWiki). Panelists were: Chuck Alpert, Cadence’s AI Fellow driving cross-functional Agentic AI solutions throughout Cadence; Dr. Erik Berg, Senior Principal Engineer at Microsoft, leading generative AI strategy for end-to-end silicon development; Dr. Monica Farkash, AMD fellow, creator of ML/AI based solutions to reshape HW development flows; Harry Foster, Chief Scientist for Verification at Siemens Digital Industries Software; Badri Gopalan, R&D Scientist at Synopsys, architect and developer for coverage closure and GenAI related technology; and Syed Suhaib leading CPU Formal Verification at Nvidia.

Where are we really at with AI in EDA?

In 2023 everyone in EDA wanted to climb on the AI hype train. There was some substance behind the stories but in my view the promise outran reality. Two years later in this panel I heard more grounded views, not a reset but practical positions on what is already in production, what is imminent, and what is further out. Along with practical advice for teams eager to take advantage of AI but not sure where to start.

I like Chuck’s view, modeling AI evolution in EDA like the SAE model for automotive autonomy, progressing through a series of levels. Capabilities at level 1 we already see in production use, such as PPA optimization in implementation or regression optimization in verification. Level 2 should be coming soon, providing chat/search help for tools and flows. Level 3 introduces generation for code, assertions, SDCs, testbenches. Level 4 will support workflows and level 5 may provide full autonomy – someday. Just as in automotive autonomy, the higher you go, the more levels become aspirational but still worthy goals to drive advances.

According to Erik, executives at Microsoft see accelerating AI adoption in software engineering and want to know why the hardware folks aren’t there yet. Part of the problem is the tiny size (~1%) of the hardware training corpus versus the software corpus, along with a significantly more complex development flow. Execs get that but want hardware teams to come up with creative workarounds so they don’t keep falling further behind. An especially interesting insight is that Microsoft teams are building more data awareness, learning how to curate and label data to drive AI-based optimizations.

Monica offered another interesting insight. She has been working in AI for quite a long time and is very familiar with the advances that many of us now see as revolutionary. The big change for her is that, after a long period of general disinterest from the design community, suddenly all design teams want these capabilities yesterday. This sudden demand can’t be explained by hype. Hype generates curiosity, urgency comes from results seen in other teams. I know that this is already happening in implementation optimization and in regression suite optimization. Results aren’t always compelling, but they are compelling often enough to command attention.

Harry Foster added an important point. We’ve had forms of AI in point tools for some time now and they have made a difference, but the big gains are going to come from flow/agentic optimizations (Erik suggested between 30% and 50%).

Badri echoed this point and added that progress won’t just be about technical advances, it will also be about building trust. He sees agents as a form of collaboration which should be modeled on our own collaboration. While today we are allergic to the idea of any kind of cross-company collaboration in AI, he thinks we need to find ways to make some level of collaboration more feasible, perhaps in sharing weights or RAG data. It is unclear what methods might be acceptable and when, but more will be possible if we can find a path.

Syed offered some very practical applications of AI: auto-fixing (or at least suggesting fixes) for naming compliance violations. At first glance this application might seem trivial. What’s important about a filename or signal name? A lot, if tools, or AI itself, use those names to guide generation or verification. Equivalence checking, for example, uses names to figure out correspondence points in a design. At Nvidia, among other applications, they use AI to clean up naming sloppiness, saving engineers significant cleanup effort and boosting productivity through improved compliance. AI is also used to bootstrap testbench generation, certainly in the formal group.

Audience Q&A

There were some excellent questions from the audience. I’ll pick just a couple to highlight here. The first was essentially “how do you benchmark/decide on AI and agentic systems?” The consensus answer was to first figure out in detail what problem you want to solve and how you would solve it without AI. Then perhaps you can use an off-the-shelf chatbot augmented with some well-organized in-house RAG content. Maybe you can add some fine-tuning to get close to what you want. Maybe you can use a much simpler model. Or if you have the resources and budget, you can go all the way to a customized LLM, as some companies represented on this panel have done.

Design houses have always built their own differentiated flows around vendor tools, often a mix of tools from different vendors. They build scripting and add in-house tools for all kinds of applications: creating or extracting memory and register maps, defining package pin and IO muxing maps and so on. In-house AI and particularly agentic AI could perhaps over time supersede scripting and even drive new approaches to agents for product team-specific tasks. EDA agents will likely also play a part in this evolution around their own flows. For interoperability in such flows one proposal was increased use of standards like MCP.

Another very good question came from the leader of a formal verification team who is ramping up a few engineers on SVA, while also aiming to ramp them up on machine learning. His question was how to train his team in AI methods, a challenge that I am sure is widely shared. Erik said “ask ChatGPT” and we all laughed, but then he added (I’ll roughly quote here):

“I’m 100% serious. I’ve had people complain, where’s the help menu? I said, just ask it your question. And if you’re having trouble with your prompts, give it your prompt and say, this is the output that I want. What am I doing wrong? It will be very frank with you. Use the tool to learn.”

Now that is a refreshing perspective. A technology that isn’t just useful for individual contributors, but also for their managers!

I’m not always a fan of panels. I often find that they offer few new insights, but this panel was different. Good questions and thought-provoking responses. More of these please Accellera. Benchmarking AI and agentic systems sounds like one topic that would draw a crowd!

See Replay: “Insider Opinions on AI in EDA: Accellera Panel at DAC”

Also Read:

Accellera at DVCon 2025 Updates and Behavioral Coverage

Accellera 2024 End of Year Update

SystemC Update 2024


Revolutionizing Simulation Turnaround: How Siemens’ SmartCompile Transforms SoC Verification

Revolutionizing Simulation Turnaround: How Siemens’ SmartCompile Transforms SoC Verification
by Kalar Rajendiran on 07-08-2025 at 10:00 am


In the race to deliver ever-larger SoCs under shrinking schedules, simulation is becoming a bottleneck. With debug cycles constrained by long iteration times—even for minor code changes—teams are finding traditional flows too rigid and slow. The problem is further magnified in continuous integration and continuous deployment (CI/CD) environments, where each commit may trigger a full simulation cycle, consuming unnecessary time and compute resources.  Siemens EDA’s SmartCompile aims to break this logjam.

SmartCompile: A Paradigm Shift in Simulation Workflows

Siemens EDA addresses this critical challenge with SmartCompile, a feature of its Questa One simulation environment. Rather than iterating on top of the traditional flow, SmartCompile introduces a fundamental redesign of the compile-optimize-simulate pipeline. It adopts a modular and highly parallel approach to managing design verification tasks, enabling faster turnaround times without compromising design integrity.

The foundation of SmartCompile’s innovation lies in its ability to break apart large, monolithic processes into discrete, manageable units. This divide-and-conquer philosophy allows each component—be it compilation, optimization, or test loading—to be performed independently and in parallel, dramatically improving simulation readiness and design iteration velocity.

Enhancing Performance through Incremental Workflows

One of the most significant advantages of SmartCompile is its incremental compilation and optimization strategy. By utilizing timestamp tracking and smart signature analysis, the system identifies precisely which parts of the design have changed and compiles only those. This targeted approach drastically reduces build times across repeated verification cycles and streamlines test and debug cycles for developers.
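Siemens does not publish SmartCompile’s internals, but signature-based change detection is a well-established technique, and a rough sketch conveys the idea. All names below are mine, not Questa One APIs: a content hash serves as the “smart signature,” and only units whose signature differs from the cached manifest are recompiled.

```python
import hashlib

def fingerprint(text: str) -> str:
    """Content signature: a hash of the source text, insensitive to timestamps."""
    return hashlib.sha256(text.encode()).hexdigest()

def units_to_rebuild(manifest: dict, sources: dict) -> set:
    """Return the design units whose content signature differs from the
    cached manifest (units not yet in the manifest count as changed)."""
    return {name for name, text in sources.items()
            if manifest.get(name) != fingerprint(text)}

# First pass: every unit is new, so everything compiles and gets cached.
sources = {"alu.sv": "module alu; endmodule",
           "fpu.sv": "module fpu; endmodule"}
manifest = {name: fingerprint(text) for name, text in sources.items()}

# Second pass: only the edited file needs recompilation.
sources["alu.sv"] = "module alu; /* bugfix */ endmodule"
print(units_to_rebuild(manifest, sources))  # {'alu.sv'}
```

The point of hashing content rather than trusting file timestamps alone is that a `touch` or checkout that rewrites a file without changing it does not trigger a rebuild.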

Furthermore, the introduction of separate test loading revolutionizes how simulation teams manage test scenarios. Instead of recompiling the entire testbench for each new test, SmartCompile allows users to reuse the base compilation and optimization while isolating and processing only the new or modified tests. This capability significantly accelerates the test development process and promotes faster feedback loops during debugging.

Tackling Design Scale with Intelligent Partitioning

As designs increase in complexity, optimization becomes one of the most time-consuming stages of verification. To combat this, SmartCompile introduces the concept of AutoPDU—automatically pre-optimized design units. This feature partitions large designs into smaller, manageable units that can be independently compiled and optimized. When changes are made, only the affected units need to be processed again, leaving the rest untouched. This approach not only reduces the time required for each optimization run but also allows the process to be distributed across multiple grid computing nodes. By enabling parallelism at the design unit level, AutoPDU transforms how large SoCs are handled, dramatically decreasing overall simulation setup time.
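The unit-level invalidation described above can be illustrated generically. This is a minimal sketch under an assumed simple instantiation graph, not the actual AutoPDU implementation: when a unit changes, it and everything that transitively instantiates it must be re-optimized, while sibling units stay untouched.

```python
from collections import deque

def affected_units(deps: dict, changed: set) -> set:
    """Given unit -> set of units it instantiates, return every unit that
    must be re-optimized when `changed` units are edited: the changed
    units plus everything that transitively depends on them."""
    # Invert the graph: unit -> units that instantiate it.
    rdeps = {}
    for unit, uses in deps.items():
        for used in uses:
            rdeps.setdefault(used, set()).add(unit)
    out, frontier = set(changed), deque(changed)
    while frontier:  # breadth-first walk up the reverse dependencies
        for parent in rdeps.get(frontier.popleft(), ()):
            if parent not in out:
                out.add(parent)
                frontier.append(parent)
    return out

# soc instantiates cpu and ddr; cpu instantiates alu.
deps = {"soc": {"cpu", "ddr"}, "cpu": {"alu"}, "ddr": set(), "alu": set()}
print(affected_units(deps, {"alu"}))  # {'alu', 'cpu', 'soc'}
```

Note that editing `alu` leaves `ddr` untouched, which is exactly the saving that makes partitioned, distributable optimization pay off on large SoCs.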

Boosting CI/CD Efficiency with SmartCompile

Questa One’s SmartCompile is uniquely suited to enhance CI/CD (Continuous Integration and Continuous Deployment) pipelines in hardware design. By enabling rapid, incremental builds and leveraging precompiled design caches, SmartCompile allows frequent code check-ins to be verified quickly without reprocessing the entire design. Its intelligent reuse of elaboration and optimization data significantly reduces turnaround times in automated workflows. This capability ensures that regression tests, triggered automatically by CI systems, execute efficiently, allowing development teams to scale their productivity while maintaining robust quality assurance throughout the design lifecycle. This feature is particularly valuable for large teams and distributed projects, where multiple engineers may need to reproduce simulation environments on demand without losing valuable time.

Flexible Configuration for Advanced Use Cases

In many simulation environments, different abstraction levels—such as RTL, gate-level, or behavioral models—are needed for different verification tasks. Traditionally, switching between these configurations requires recompilation and re-optimization. SmartCompile’s dynamic reconfiguration capability removes this barrier by allowing blocks to be swapped in or out at simulation time. This feature lets users pre-compile various block configurations and select the appropriate one during elaboration, enabling greater flexibility and reducing redundant processing.

Additionally, debug data generation in SmartCompile is no longer tightly coupled with optimization. Engineers can generate debug files on demand, rather than each time a build is processed. This not only improves resource efficiency but also empowers teams to target their debugging efforts more precisely.

The Business Value of Smarter Simulation

The cumulative effect of these innovations is substantial. SmartCompile enables design teams to iterate faster, simulate more often, and reduce wasted compute cycles. With its support for incremental workflows, distributed optimization, configuration flexibility, and CI-friendly features, it presents a compelling solution for organizations looking to scale their design verification capabilities without scaling their costs. This means faster time-to-market, reduced operational expenses, and more reliable development pipelines. As competition in the semiconductor market intensifies, the ability to verify designs quickly and efficiently becomes a critical differentiator. By integrating SmartCompile into their verification strategy, companies can better manage complexity while maintaining agility and performance.

Summary

Simulation has always been a cornerstone of digital design verification, but as designs grow more complex and development timelines shrink, traditional flows no longer meet the needs of modern engineering teams. Siemens EDA has recognized this shift and responded with a comprehensive and intelligent approach in SmartCompile. It tackles the fundamental inefficiencies of traditional workflows, enabling faster, smarter, and more scalable verification from the ground up.

Also Read:

Siemens EDA Unveils Groundbreaking Tools to Simplify 3D IC Design and Analysis

Jitter: The Overlooked PDN Quality Metric

DAC News – A New Era of Electronic Design Begins with Siemens EDA AI


Arteris Simplifies Design Reuse with Magillem Packaging

by Mike Gianfagna on 07-08-2025 at 6:00 am


Many know Arteris as the “network-on-chip”, or NoC, company. Through acquisitions and forward-looking development, the footprint for Arteris has grown beyond smart interconnect IP. At DAC this year, Arteris highlighted its latest expansion with a new SoC integration automation product called Magillem Packaging. The announcement focused on substantial new capabilities to simplify and speed up the process of building advanced chips used in everything from AI data centers to edge devices. I had an opportunity to visit Arteris at DAC and to speak with some of the executives there. Let’s examine how Arteris simplifies design reuse with Magillem Packaging.

The Announcement

The announcement made at DAC pointed out that chip design is becoming increasingly complex, with more components, higher performance demands, and tighter timelines. There is no argument there. The release states that Magillem Packaging helps engineering teams work faster and more efficiently by automating one of the most time-consuming parts of the design process: assembling and reusing existing technology.

Going deeper, Magillem Packaging enables IP teams to quickly and reliably package and prepare hundreds or even thousands of components for integration into a single chiplet or chip design, including new, existing, or third-party IP blocks.

Some of the key capabilities of this new product from Arteris are:

  • IP reuse with comprehensive IP, subsystem, and chiplet packaging in a reusable format, including configuration, implementation, and verification for incremental and full packaging with a proven methodology.
  • IEEE 1685-2022 generation is correct-by-construction, without requiring any prerequisite IP-XACT expertise. Standards compliance and data consistency are ensured by construction and assessed with a built-in suite of Magillem checkers.
  • Scalable and fully automated generation of IP packaging for reused and new IP blocks, with support for legacy 2009 and 2014 versions of the IEEE 1685 standard, with intuitive graphical editors enabling fast viewing and editing of IP block descriptions.
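To make the format concrete, here is a toy sketch of programmatically generated IP-XACT, in the spirit of (but far simpler than) what Magillem produces. The component’s VLNV (vendor, library, name, version) identifier shown here is the minimal start of any packaged IP description; the 2022 namespace URI follows Accellera’s published pattern but should be checked against the official schema, and the function name is mine.

```python
import xml.etree.ElementTree as ET

# Assumed namespace URI for IEEE 1685-2022, per Accellera's naming pattern.
NS = "http://www.accellera.org/XMLSchema/IPXACT/1685-2022"
ET.register_namespace("ipxact", NS)

def package_component(vendor, library, name, version):
    """Emit a minimal IP-XACT component carrying its VLNV identifier."""
    comp = ET.Element(f"{{{NS}}}component")
    for tag, val in [("vendor", vendor), ("library", library),
                     ("name", name), ("version", version)]:
        ET.SubElement(comp, f"{{{NS}}}{tag}").text = val
    return ET.tostring(comp, encoding="unicode")

doc = package_component("example.com", "ip", "uart", "1.0")
print(doc)
```

Generating such descriptions from a single tool, rather than hand-editing XML, is what makes the “correct-by-construction” claim plausible: the schema rules live in the generator, not in each engineer’s head.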

Ecosystem Support

Arteris technology is vendor-agnostic and works across the ecosystem to ensure ease of integration for end customers. Among those voicing support for the new capability are:

Andes Technology

“Andes Technology is recognized for our comprehensive family of RISC-V processor IP and customization tools that empower customers to easily differentiate their SoC designs,” said Marc Evans, director of business development & marketing at Andes Technology Corporation.  “The latest IP-XACT 2022 specifications enable structured automation, optimizing IP packaging and integration. Magillem Packaging complements Andes’ commitment to streamlined workflows, enabling faster and more reliable SoC development.”

MIPS

“The MIPS Atlas portfolio is engineered for high-efficiency compute in autonomous, industrial, and embedded AI applications, where rapid integration and design reuse are critical,” said Drew Barbier, VP & GM of the IP Business Unit at MIPS. “Arteris Magillem Packaging, with its automation of IP-XACT 2022-compliant packaging and support for industry standards, aligns with customer needs to accelerate SoC development. Together, we empower customers to streamline IP integration, reduce design complexity, and bring innovative silicon to market faster.”

More From the Show Floor at DAC

While visiting Arteris at DAC, I had the opportunity to discuss this announcement with two key members of the management team in more detail.

Insaf and Andy at the SoC Integration pod in the Arteris booth

Insaf Meliane is a product management and marketing director at Arteris. Before joining the product team, she was a field application manager, supporting customers with complex SoC design integration. She holds an engineering degree in microelectronics, with a specialization in systems-on-chip, from École Nationale Supérieure d’Electronique et de Radioélectricité de Grenoble.

Andy Nightingale is the VP of product marketing at Arteris. Andy is a seasoned global business leader with a diverse engineering and product marketing background. He’s a Chartered Member of the British Computer Society and the Chartered Institute of Marketing and has over 35 years of experience in the high-tech industry.                                            

We began by discussing the overall reaction to Magillem Packaging at DAC. Interest was high, and reactions were quite positive. There has been an increase in momentum for IP-XACT. The features of the latest IP-XACT 2022 version have helped. Arteris has been a major supporter of this standard, and the new capabilities delivered by Magillem Packaging have helped as well.

Insaf explained that Magillem Packaging leverages the Arteris Magillem Platform by integrating parts of Magillem Connectivity and Magillem Registers to create the new product. The figure below provides an overview of the platform and how the pieces fit together. Insaf described the significant benefits this new product delivers. The image at the top of this post includes a summary of the key benefits.

Arteris SoC Integration Automation with the Magillem Platform

She went on to explain the significant automation provided by Magillem Packaging. Keeping track of a complex system’s connectivity and interface requirements is a daunting challenge. With Magillem Packaging, these details are automated and verified as correct. She described how the new version of IP-XACT 2022 delivers substantial new capabilities, and Magillem Packaging leverages all these capabilities in an automated way. There is no need for the user to learn all those details.

She summarized some of the key benefits of the new tool as follows:

  • Effortless, scalable automation: handles both legacy and new IP for smoother assembly and faster scaling of large designs with lower risk, reducing the potential for human error and increasing efficiency.
  • Single source of truth: ensures consistency across various uses, enabling immediate collaboration across the relevant teams and catching errors before they become costly roadblocks.
  • Safe, easy, and quick adaptation to change: a robust, rapid, highly iterative design environment reduces effort and rework, freeing teams to focus on their core business, leverage their technical expertise, and dream up what comes next.

She also pointed out that Arteris is working with various IP providers to ensure full support for IP-XACT 2022 so customers can fully enjoy its benefits.

I then explored the bigger development programs at Arteris with Andy. He described some of the joint efforts between the NoC and Magillem Connectivity teams. This work improves the target system’s overall connectivity management and helps with the complex verification tasks, thanks to the consistent views created across simulation, FPGA, emulation, synthesis, and fault injection.

Andy couldn’t disclose too many details about upcoming enhancements, but this is an area to observe going forward, and Arteris is leading the charge.

We concluded our discussion with a broader view of multi-die design requirements. On SemiWiki, you can learn more about how Arteris responds to these challenges. Some eye-opening statistics about Arteris technology include that over 200 customers have completed 860 design starts and shipped about 3.75 billion units.

To Learn More

Managing all the information associated with the new heterogeneous semiconductor systems under development can be a considerable challenge. One error can jeopardize the entire project. If these issues keep you up at night, you’ll want to learn more about what Arteris is doing with its Magillem technology. You can read the press release announcing Magillem Packaging here, and you can learn more about this new product here. And that’s how Arteris simplifies design reuse with Magillem Packaging.

Also Read:

Arteris Expands Their Multi-Die Support

How Arteris is Revolutionizing SoC Design with Smart NoC IP

Podcast EP277: How Arteris FlexGen Smart NoC IP Democratizes Advanced Chip Design with Rick Bye


MEMS Technology – From Fringe to Mainstream

by Daniel Nenni on 07-07-2025 at 10:00 am


Last month, Lj Ristic delivered an invited talk on MEMS technology as a driving force at the Laser Display and Lighting Conference 2025, held at Trinity College Dublin. His talk included a review of some major successes of the MEMS industry. We used the occasion to discuss with him those achievements and the status of MEMS technology today.

Dr. Lj Ristic is recognized for his pioneering contributions to the field of semiconductors, particularly in the development and commercialization of MEMS technology. He has been instrumental in creating innovative MEMS products, with hundreds of millions of units shipped globally. He is also credited with inventing a microprocessor with integrated sensing capabilities, widely adopted in smart sensors. Among his other notable achievements is the development of a novel method for integrating front-end antenna solutions for RF and wireless systems, which has been widely used by the mobile telecommunications industry. He has also conducted groundbreaking research in magnetic field sensors, advancing one- to three-dimensional sensing using lateral bipolar transistors. Dr. Ristic has also published ‘Sensor Technology and Devices’, the first book to introduce MEMS technology to the general public.

In addition to his technical accomplishments, he has held senior leadership roles at major corporations and startups alike, including Motorola, ON Semiconductor, Alpha Industries, Sirific, Coniun, Crocus, and SensSpree. He currently serves as Chief of Business Development and Strategy at Mirrorcle Technologies, a leader in the development of MEMS mirror technology and products.

What can you tell us about the status of MEMS Technology today?

Let us briefly look at the history of leveraging silicon as a mechanical material. In the early 1980s, Kurt Petersen opened the eyes of the rest of the world by saying, look, silicon is not only a majestic material for integrated circuits, it is equally majestic for its mechanical properties at the micro scale. Why not leverage that? Of course, at the time it was considered exotic and on the fringe. Then in 1983 Roger Howe and Prof. Muller came up with surface micromachining, creating an additional toolbox for making micromechanical structures. And the race was on. Big companies jumped in, including Motorola (I was there), and they focused on leveraging this technology for automotive applications. Where there is a will and funding, there is success. Forty years later, MEMS technology is mainstream, and MEMS products are delivered in the billions per year, serving all possible markets including automotive, commercial, consumer, communications, industrial, biomedical, space, and robotics.

What was the first MEMS product to gain credibility?

It is important to point out that the acronym MEMS was coined by Jacobs and Wood in 1986 in their proposal for a DARPA grant, where it described micro-electro-mechanical systems consisting of micromechanical devices and driving electronics. Since the micromechanical devices described in the proposal were made using surface micromachining, the term MEMS was often associated with surface micromachining at the time. In the years since, MEMS has evolved into an umbrella term covering all aspects of micromachined devices, from bulk micromachining to wafer bonding to surface micromachining.

Going back to the question, and considering the historical context and initial meaning of MEMS, one can say the first product that gave credibility to MEMS technology was the accelerometer developed for the automotive industry. In the late 1980s and early 1990s, Motorola and Analog Devices led the development of accelerometers for airbag applications, neck and neck. While Analog Devices adopted the comb-structure approach invented by Howe and Lee, Motorola pursued its own distinct path, developing a three-polysilicon-layer surface micromachining technology and a differential capacitive device built as a vertical stack, which became the foundation for its accelerometers. Ultimately, both companies succeeded in bringing accelerometers to automotive customers in volume production, and they showed that surface micromachining can take many shapes and flavors. Thus, the MEMS accelerometer has a distinct place in putting MEMS technology on the industrialization map. It should also be pointed out that the success of the MEMS accelerometer would have been impossible without coupling the sensing element with CMOS ICs. This is an excellent example of the fusion of two technologies, MEMS and CMOS, yielding something new that neither could achieve on its own.

Since the 1990s, MEMS accelerometers have significantly expanded their range of applications. Advances in miniaturization, high-yield manufacturing, and cost-effective production have driven their continued growth in the automotive market and facilitated their widespread adoption across new markets, including consumer electronics, industrial automation, robotics, medical devices, the Internet of Things (IoT), and defense. Today, MEMS accelerometers play a crucial role in applications such as screen orientation detection, movement monitoring, step counting, gesture recognition, structural health monitoring, and vibration monitoring of machinery, among many others.

Current leaders in accelerometer products are Bosch, TDK-InvenSense, NXP, STM, ADI, Murata, and others.

Where do pressure sensors stand in acceptance of MEMS technology?

Pressure sensors are the granddaddy of silicon sensors and transducers. They typically leverage the piezoresistive effect, discovered in silicon and germanium in the mid-1950s. It took more than a decade before the first commercial pressure sensors started appearing in the late 1960s. They were made using bulk micromachining to create a thin silicon diaphragm with resistors diffused into it. Kulite was the first company to leverage the piezoresistive effect in silicon; many others followed. I firmly believe that without pressure sensors, MEMS technology would not be where it is today. Pressure sensors were the predecessors of the broader field of MEMS as we know it, the base on which the industry built to reach the mainstream status MEMS enjoys today.
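For readers unfamiliar with the piezoresistive effect mentioned above: to first order, the fractional resistance change of a diffused resistor under mechanical stress is

```latex
\frac{\Delta R}{R} \approx \pi_l \,\sigma_l + \pi_t \,\sigma_t
```

where \(\pi_l\) and \(\pi_t\) are the longitudinal and transverse piezoresistive coefficients and \(\sigma_l\), \(\sigma_t\) the corresponding stress components in the diaphragm. In silicon these coefficients yield a sensitivity roughly two orders of magnitude greater than the purely geometric effect in metal strain gauges, which is what made silicon diaphragms so attractive.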

It should be mentioned that among the numerous pressure sensor products in existence, two categories deserve a special place because of the impact they have had since their introduction: the MAP (Manifold Absolute Pressure) sensor and TPMS (Tire Pressure Monitoring System). MAP is essential in maintaining efficient fuel injection in the engine, while TPMS provides real-time tire pressure monitoring. Motorola/Freescale (today part of NXP), as a leading supplier of semiconductor products to the automotive industry, contributed significantly to the reputation of these two products. Both are crucial in improving the overall efficiency and safety of vehicles, and that is what puts them in a special category of their own.

Current leaders in pressure sensor products are Bosch, STM, NXP, Honeywell, Infineon, and others.

What are the other significant achievements that have contributed to the credibility of MEMS technology?

With the success of accelerometers, progress on other products was relatively fast. The MEMS gyro followed, and then the integration of a MEMS accelerometer and gyro on a single chip. Then others. Today, the list of MEMS products is literally endless, but I will focus here on the exclusive club of products with the unique distinction of being made in billions of units per year. In addition to pressure sensors, accelerometers, and gyros, the other members of this exclusive one-billion-units-per-year club are microphones, speakers, and timing devices.

Microphones have significantly benefited from ongoing advancements in MEMS technology, leading to substantial miniaturization of these devices. As a result of their small size and low power consumption, MEMS microphones are well-suited for applications in compact, battery-powered devices such as smartphones, tablets, smartwatches, earbuds, hearing aids, laptops, smart speakers, etc.

Current leaders in microphone products are Knowles, Goertek, Bosch, Cirrus Logic, Infineon, and others.

MEMS speakers are among the latest product families to leverage MEMS technology, with commercial products emerging only over the last decade. Their development has generally been more challenging compared to MEMS microphones. This challenge is primarily driven by the need for large diaphragm displacement. On one hand, sufficient power is required to generate enough force to move the diaphragm, while on the other hand, the structural integrity of the diaphragm must be preserved despite the large displacement requirement. The advantages of MEMS speakers over traditional non-MEMS technologies (such as electrodynamic and balanced armature speakers) are their small form factor, lower power consumption, and ease of integration with electronics. These benefits make them highly attractive for many applications in consumer electronics and other size- and power-constrained applications.

Current leaders in speaker products are xMEMS, Usound, Bosch, Sonitron, SonicEdge, and others.

MEMS timing devices provide the clock functions required in modern electronic products. From a functional point of view, they can be divided into three categories: resonators, oscillators, and clocks, and all of them can be made as silicon MEMS devices. They are an excellent alternative to classical quartz crystal timing devices and are gaining acceptance in many market segments including automotive, aerospace and defense, telecommunications, IT, consumer, and medical applications. They offer reliable performance, low power consumption, high stability, small form factors, and low cost. Current leaders in the MEMS timing device segment are SiTime, Microchip, Kyocera, Abracon, Rakon, and others.

In the end, a general assertion can be made for all MEMS products: they earned their reputation because they perform reliably and offer low power consumption, small form factors, and low cost, and that is a winning combination.

What is the latest in MEMS Technology?

One of the latest groups of products developed in MEMS technology is silicon MEMS mirrors, which are essential for many applications in optoelectronics. MEMS mirrors are small silicon-based devices capable of tilting along a single axis (one-dimensional mirror) or two axes (two-dimensional mirror). Depending on their design, they can also move along a third axis in a piston action. What makes them special are their small diameters, usually ranging from 0.5 mm to 10 mm, and their thickness of only about 40 µm, which is thinner than an average human hair. The technology for making MEMS mirrors has matured significantly over the last two decades, and these products are on the cusp of tremendous growth in the next decade.

A MEMS mirror’s primary function is to deflect a focused beam of light in different directions. The light beam can be steered along a single axis (one-dimensional mirror) or along two axes (two-dimensional mirror). This ability to receive and redirect a focused beam of light is fundamental to scanning technology.
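
By the law of reflection, a mirror tilted mechanically by an angle θ deflects the reflected beam by 2θ, so a modest mechanical tilt sweeps a much wider optical field. A minimal sketch of that scan geometry (the tilt angle and distance below are assumed example values):

```python
import math

# Illustrative scan geometry for a 1-D MEMS mirror. A mechanical tilt of
# theta deflects the reflected beam by 2*theta, so a +/- theta scan covers
# an optical field of 4*theta total. Example values are assumed.

def scan_width_m(distance_m, mech_tilt_deg):
    """Approximate line-scan width at a given distance for +/- mech_tilt_deg tilt."""
    optical_half_angle = math.radians(2.0 * mech_tilt_deg)  # optical = 2x mechanical
    return 2.0 * distance_m * math.tan(optical_half_angle)

# Assumed example: +/- 10 degrees mechanical tilt, target 1 m away
print(round(scan_width_m(1.0, 10.0), 3))  # ~0.728 m wide scan line
```

This doubling is why even a few degrees of mechanical travel is enough for useful projection and imaging fields.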

The scanning capability of MEMS mirrors is cleverly utilized to enable two basic functions for manipulating laser light: directed light projection and directed light acquisition (imaging). Almost all MEMS mirror applications exploit these two basic functions; the rest are custom additions tailored to specific applications. And customers tell us the applications are numerous, from automotive and transportation to AR/VR, from consumer and commercial to industrial, from biomedical to free-space optical communications, from robotics to smart cities. I firmly believe MEMS mirrors are the next big wave in MEMS technology, poised to reach annual production volumes in the billions over the next decade.

Current leaders in MEMS Mirror products are Mirrorcle, Hamamatsu, Bosch, Sercalo, Ultimems, and others.

Stay tuned!

Products from pressure sensors to speakers have already reached shipments in the billions of units annually. MEMS mirrors are the next big wave in MEMS technology, following in the footsteps of their predecessors.

Also Read:

WEBINAR: Edge AI Optimization: How to Design Future-Proof Architectures for Next-Gen Intelligent Devices

WEBINAR Unpacking System Performance: Supercharge Your Systems with Lossless Compression IPs

Siemens EDA Unveils Groundbreaking Tools to Simplify 3D IC Design and Analysis


Caspia Focuses Security Requirements at DAC

Caspia Focuses Security Requirements at DAC
by Mike Gianfagna on 07-07-2025 at 6:00 am

Caspia Highlights Security Requirements at DAC

As expected, security was a big topic at DAC this year. The growth of AI has demanded complex, purpose-built semiconductors to run ever-increasing workloads. AI has helped to design those complex chips more efficiently and with lower power demands. There was a lot of discussion on these topics. But there is another part of this trend. While sophisticated, generative AI makes it easier to design complex AI chips, it also makes it easier to attack and compromise those same chips. GenAI must also be used to harden designs against these attacks to keep innovation moving ahead.

Caspia is a company that clearly sees this challenge and has developed a comprehensive approach to reduce these risks. At DAC, Caspia co-founders hosted a workshop on Sunday, the company presented a SKYTalk on Tuesday and issued a press release detailing collaboration to add its security technology to Siemens Questa One. Let’s take a closer look at how Caspia focuses security requirements at DAC.

The Workshop

Sunday at DAC is when various workshops and tutorials are held. One of these events was the third AI/CAD for Hardware Security Workshop, or AICAD4Sec 2025. Building on the success of the first two events, this one aimed to embrace the transformative intersection of AI, CAD, and hardware security. The stated vision of AICAD4Sec is to establish a cutting-edge platform that showcases advancements and sets the roadmap for secure, AI-enabled hardware design. Organizations that are involved include Google, Microsoft, Synopsys, and ARM, alongside academia and government agencies such as DARPA and AFRL.

The event was hosted by a small group of researchers, including two co-founders of Caspia Technologies.

Dr. Mark Tehranipoor, Department Chair & Intel Charles E. Young Chair in Cybersecurity at the University of Florida ECE. He is a founding Director of the Florida Institute for Cybersecurity Research and a former Associate Chair and Program Director at the University of Florida. He has authored 16 books, delivered over 230 invited talks, and holds 22 patents. Mark is a fellow of the IEEE, the ACM, and the National Academy of Inventors.

Dr. Farimah Farahmandi, Wally Rhines Endowed Professor in Hardware Security and Assistant Professor at the University of Florida ECE. She is the Founding Director of the Silicon Design and Assurance Laboratory and is Associate Director of the Florida Institute of Cybersecurity and Edaptive Computing Transition Center. She has authored seven textbooks, 120+ journal/conference papers, and holds 12 patents issued/pending.

These are the folks who founded this workshop three years ago. The event on Sunday covered a wide range of topics, including:

  • CAD Tools for Side-Channel Vulnerability Assessment (Power, Timing, and Electromagnetic Leakage)
  • Security-Oriented Equivalency Checking and Property Validation
  • Fault Injection Analysis and Countermeasure Integration in CAD
  • CAD for Secure Packaging and Heterogeneous Integration
  • Assessment of Physical Probing and Reverse Engineering Risks
  • AI-Powered Tools for Pre-Silicon Vulnerability Mitigation and Countermeasure Suggestions
  • Large Language Models for Security-Aware Design Automation
  • ML-Enhanced Threat Detection Across Design Abstractions
  • AI-Augmented Detection of Malicious Functionality in Hardware Designs
  • AI-Enabled Security Verification for Emerging SoC Architectures

This workshop provided a great opportunity for researchers from many organizations to come together to develop a big-picture plan. One attendee was quoted as saying, “I was very energized by the workshop today. It was a great dialogue, and I enjoyed the time with Mark, Farimah, and the rest of the Caspia team.”

The SKYTalk

SKYTalks are keynote-style presentations delivered in the DAC Pavilion on the show floor. Mark Tehranipoor delivered a very well-received presentation entitled New Innovation Frontier with Large Language Models for SoC Security.

There were two parts to his talk. In the first part, he described the problem faced by design teams today. While there is a strong focus on performance, power and functional verification, there exists a significant blind spot regarding security verification. The graphic at the top of this post was used by Mark to illustrate the perils that lurk below the water line.

He cited several examples from recent headlines that show how significant and real these security threats are becoming, and described the platform Caspia is developing to address these risks using GenAI technology. The LLM-powered security agents in this platform continually learn from real-world behaviors so designers can stay ahead of new and emerging threats. The tools are designed to complement, not replace, existing flows, essentially adding GenAI-fueled, expert-level security verification to existing design flows. The figure below summarizes the current capabilities of the Caspia security verification platform.

Caspia’s GenAI Security Verification Platform

In the second part of his talk, Mark described the details of how GenAI can be applied to SoC security verification with real examples. He began by describing the overall architecture of the GenAI security platform. The layers of this platform and how they interact are summarized in the diagram below.

GenAI Security Platform

The functions of each layer can be summarized as follows:

Application Layer

  • Handles user display, query submission, and UI/UX rendering
  • Provides a chat-based interface and structured responses for ease of interaction

Supervisor and Orchestrator Layers

  • Performs LLM-driven user intent detection and input completion
  • Assigns tasks to the appropriate agents
  • Generates and schedules task plans for execution
  • Initiates and confirms execution of the tasks

Agent Layer

  • Verification chat agent
  • Security asset identification
  • Threat modeling and test planning
  • Security property generation
  • Vulnerability detection
  • Bug validation

Data Layer

  • Stores and provides access to datasets of text embeddings

Infrastructure Layer

  • Leverages cloud GPU clusters and APIs
  • Ensures scalable LLM deployment and a secure backend
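
The supervisor/orchestrator-to-agent flow described above can be sketched in a few lines. To be clear, this is a hypothetical toy, not Caspia's implementation: the intent rules, agent names, and stubbed behaviors are all invented for illustration.

```python
# Hypothetical sketch of a supervisor routing user queries to task agents.
# None of these names or behaviors come from Caspia; agents are stubbed.

AGENTS = {
    "identify_assets": lambda design: f"security assets found in {design}",
    "generate_properties": lambda design: f"security properties for {design}",
    "detect_vulnerabilities": lambda design: f"vulnerability report for {design}",
}

def supervisor(query: str, design: str) -> str:
    """Supervisor/orchestrator layer: detect intent, dispatch to an agent."""
    if "asset" in query:
        task = "identify_assets"          # Security asset identification agent
    elif "propert" in query:
        task = "generate_properties"      # Security property generation agent
    else:
        task = "detect_vulnerabilities"   # Default: vulnerability detection agent
    return AGENTS[task](design)           # Agent layer executes the task

print(supervisor("find security assets", "soc_top.v"))
```

In a real system the intent detection would itself be LLM-driven and the agents would query the data layer for security-specific embeddings, but the dispatch structure is the same.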

This system provides a robust environment to facilitate analysis of the design and interaction with the designer, using highly focused security data to drive the overall process. Access to specialized security data is a key element in making the system useful for its intended purpose. Mark provided examples of results using general-purpose LLMs (e.g., ChatGPT) versus the specialized security LLMs and agents in this platform. The results from Caspia's specialized technology were substantially more targeted, accurate, and effective.

The Agent Layer is where specific analysis of a design occurs. Mark provided several examples of how security assets can be identified, analyzed for weaknesses and enhanced to deliver a security hardened design. This architecture will continue to grow and become more specialized and sophisticated over time.

The Press Release

Siemens booth interview

Just before DAC began, Caspia issued a press release describing how Caspia and Siemens are collaborating to bring Caspia's portfolio of security technologies into Siemens' recently announced Questa™ One smart verification software portfolio, expanding its security verification features. The Caspia platform is designed to add expert security verification to existing flows, so this announcement is an example of that strategy.

At DAC, there was a follow-on event at the Siemens booth related to this announcement. Siemens had a soundproof, glass-enclosed recording booth at the show where they recorded discussions with various companies about collaborative efforts. Mark Tehranipoor was interviewed in the Siemens recording booth about the work Caspia is doing with Siemens Digital Industries Software.

Mark covered the challenges design teams are facing and the technology Caspia is developing to address those challenges. The collaboration work is still in the early phase, so there will be more on this work going forward. The final slide in Mark’s presentation brought together several points of view on the work to illustrate the possibilities as shown below.

To Learn More

Security verification is a new and growing area for chip designers. I expect a lot more discussion on this topic at next year's DAC, and Caspia appears to be positioned to lead the discussion. You can read the entire press release announcing the collaboration with Siemens here. There is an excellent interview with Caspia's CEO, Rich Hegberg, on SemiWiki here. And you can learn more about Caspia's products and plans on the company's website here. The interview in the Siemens booth will be added to the Caspia website, so check back to see this discussion. And that's how Caspia focuses security requirements at DAC.

Also Read:

Caspia Technologies at the 2025 Design Automation Conference #62DAC

CEO Interview with Richard Hegberg of Caspia Technologies

Podcast EP245: A Conversation with Dr. Wally Rhines about Hardware Security and Caspia Technologies


CEO Interview with Peter L. Levin of Amida

CEO Interview with Peter L. Levin of Amida
by Daniel Nenni on 07-05-2025 at 10:00 am

Peter L. Levin Headshot 2024

Peter L. Levin has served at senior levels of leadership in the federal government, the private sector, and academe. Immediately prior to co-founding Amida, he was Senior Advisor to the Secretary and Chief Technology Officer of the Department of Veterans Affairs, where he led their health record modernization initiative. His background is in applied math and computer simulations. He has published in peer-reviewed journals as well as distinguished outlets in the popular press. Peter is an adjunct senior fellow at the Center for a New American Security.

Tell us about your company?

Amida Technology Solutions, Inc. is a software company that specializes in solving the most complex challenges in data interoperability, exchange, governance, and security. Founded in 2013, Amida designs, implements, deploys, and administers data service pipelines, based on both custom and open-source solutions. The company is known for its expertise in data architectures and graph-based information infrastructure.

What problems are you solving?

Amida has developed a pre-synthesis tool that exposes, identifies, and mitigates vulnerabilities in semiconductor and programmable devices. Conventional methods based on formal techniques are inherently forensic and retrospective. Our Achilles platform is based on a novel graph transform that is predictive and prospective.

What application areas are your strongest?

We are experts in all aspects of semiconductor design and test. The team’s background includes JTAG, iJTAG, cybersecurity, graph theory, and foundational elements of AI. We needed all these competencies to build the platform. Our solution transforms RTL into a structural graph, using a patented technique that we created, and which illuminates parts of the threat surface that even the most-advanced formal (assertion-based) approaches cannot see.
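
To make the RTL-to-graph idea concrete, here is a deliberately generic toy: representing netlist connectivity as a directed graph so structural paths can be traversed. This is not Amida's patented transform, and the signal names are made up for illustration.

```python
# Toy illustration only: a netlist's driver->load connectivity as a directed
# graph. This is a generic sketch, NOT Amida's patented graph transform.

from collections import defaultdict

def build_graph(connections):
    """connections: iterable of (driver, load) signal pairs from a netlist."""
    graph = defaultdict(list)
    for driver, load in connections:
        graph[driver].append(load)
    return graph

# Made-up signal names for a hypothetical design fragment
netlist = [("rst_sync", "ctrl_fsm"), ("ctrl_fsm", "debug_mux"),
           ("debug_mux", "jtag_tdo")]
g = build_graph(netlist)
print(g["ctrl_fsm"])  # loads fanning out from the control FSM
```

Once design structure lives in a graph, questions like "which logic can reach this output" become reachability queries, which is the general intuition behind graph-based structural security analysis.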

What keeps your customers up at night?

That an adversary can somehow manipulate the design, manufacture, test, or deployment of an advanced semiconductor device. This is an especially pernicious problem in safety-of-life or national security applications, where Trojans or malicious inclusions could impact mission success.

What does your competitive landscape look like and how do you differentiate?

There are, in fact, some really excellent solutions out there, but they are limited in scope. For example, some folks are working at the systems level but have no ability to trace anomalies to root causes and cannot implement countermeasures. Alternatively, there are good tools at the manufacturing level that can monitor in-field behavior, but they don't have pre-synthesis analytical tools that prevent problems before they are permanently baked in. We fit right in the middle: pre-synthesis analysis, tuned GenAI (and preventative) instruments, and remediation in case of trouble.

What new features/technology are you working on?

We launched our vulnerability analysis product at DAC last year. Since then, our focus has been on risk mitigation. Specifically, we can now automatically insert tuned instruments using a GenAI feature we developed. This year we will unveil Tordra, our GenAI-powered security assistant. Next, we will further expand our ability to detect and mitigate even more vulnerabilities and provide support for in-field remediation.

Contact Amida

Also Read:

CEO Interview with John Akkara of Smoothstack

CEO Interview with Dr. Naveen Verma of EnCharge AI

CEO Interview with Faraj Aalaei of Cognichip


CEO Interview with John Akkara of Uptime Crew

CEO Interview with John Akkara of Uptime Crew
by Daniel Nenni on 07-05-2025 at 6:00 am

John (2)

John Akkara is the Founder and CEO of Uptime Crew, where he channels his entrepreneurial spirit to create impactful opportunities in the IT industry. An immigrant from India, John’s journey began with a full-ride tennis scholarship to a Division I university, where he studied finance. Today, as the leader of Uptime Crew, John is dedicated to expanding access to tech careers and fostering an environment where talent from all backgrounds can thrive and create lasting positive impact.

Tell us about your company?

Uptime Crew is a workforce development company dedicated to solving the skilled labor shortage in mission-critical industries like data centers, semiconductor manufacturing, and advanced manufacturing. We specialize in a hire-train-deploy model that rapidly equips high-potential individuals with hands-on, industry-specific training, and the skills they will need to succeed. Our goal is to create a reliable pipeline of skilled technicians who not only meet urgent industry demands but also gain access to stable, high-value careers.

What problems are you solving?

Uptime Crew closes critical tech talent gaps in high-growth fields like semiconductor manufacturing and data centers by training and deploying overlooked individuals, especially those from veteran or lower-income backgrounds, into well-paying, recession-resistant careers. This hire-train-deploy model solves companies' hiring challenges while lifting up communities through merit-based evaluation.

An associate's degree program can take two years. In general, traditional education pipelines lack the fast, industry-specific training that matches the pace of rapidly evolving manufacturing technology. Additionally, educational programs consist of hand-raisers rather than hand-selected individuals who possess the characteristics of a successful technician. Uptime Crew's immersive, custom training prepares technicians in weeks rather than years, aligning precisely with real-world industry needs, with talent that has been rigorously vetted.

Uptime Crew offers paid training and placement in mission-critical fields like semiconductor fabs and data centers, industries that require on-site specialists and thus remain secure, local, and recession-resistant. To put this into perspective, there are currently 3,000 data centers in the US, with another 9,000 planned to be built in the next 5 years. This doesn't account for all of the chip manufacturing facilities planned to be built during this time frame. Where will the operations and maintenance technicians come from to run these facilities?

What application areas are your strongest?

Semiconductor and data center technicians are becoming as indispensable as utility workers because they keep the digital infrastructure—particularly for AI—up and running. Just as the internet evolved from an obscure novelty to an essential tool (remember life before Google?), AI is poised to transform everyday life in ways we can only begin to imagine. These technicians maintain the hardware and systems that power AI’s expansion, making them the backbone of our increasingly digital-dependent world.

Our model creates a direct, accelerated path into these critical roles by identifying high-aptitude talent, providing hands-on, job-specific training, and deploying them in jobs where uptime, safety, and reliability are non-negotiable.

What keeps your customers up at night?

People, Power and Places.

People – Not having enough experienced talent to fill the roles needed to work at the facilities.

Power – Not having enough power to run day-to-day operations, which, especially now with AI, require tremendous amounts of electricity.

Places – In this case, not having facilities to meet the data housing or manufacturing demands.

What does the competitive landscape look like and how do you differentiate?

We don’t view ourselves as having direct competition because our model is fundamentally different from anything else in the market. We’re not a traditional staffing agency, and we’re not offering open-enrollment education like community colleges.

Our approach is highly selective and purpose-built. Using a proprietary screening process and our Mirrored Environment Immersion (MEI™) platform, we identify the right candidates and prepare them through real-world, job-specific training. From how we source talent to how we train and deploy them, every part of our model is designed to deliver a workforce that’s ready to perform on day one. At this level of specialization and scale, we don’t see anyone else doing what we do.

What new features/technology are you working on?

One of the most impactful initiatives we’re working on is expanding career pathways for transitioning veterans. Veterans are exceptionally well-suited for roles in data centers, advanced manufacturing, and infrastructure, due to their discipline, technical experience, and training.

We’re growing our outreach and apprenticeship programs to connect veterans to these high-value careers. As an approved DoD SkillBridge partner and GI Bill-eligible provider, we ensure veterans can use their benefits to support their training and supplement their income—without incurring tuition costs. Today, over 13% of our new hires are protected veterans, more than double the national workforce average of around 5.6%.

We’re especially focused on aligning specific military occupational specialties (MOS) with technician roles. For example, veterans from the Navy’s nuclear submarine program bring deep expertise in mechanical and environmental systems—skills that directly apply to data center infrastructure. Our goal is to continue developing clear, supportive pathways for veterans into long-term, well-paying careers that fully leverage their skills and honor their service.

How do customers normally engage with your company?

We’ve received consistently strong feedback from the companies we serve, which has been a meaningful validation of our model and mission.

Managers consistently note that Uptime Crew technicians demonstrate a higher level of professionalism, readiness, and initiative than they expect from new hires.

One manager shared that our training program enabled technicians to get “in and off the ground” quickly, with a faster progression than most other employees. In fact, he ranked our technicians among the top three performers on his entire team, including full-time, tenured staff.

Another leader described our technicians as “real go-getters” who volunteer for tasks, require minimal supervision, and help onboard newer engineers.

We even had one technician promoted to day shift, a competitive spot that was earned through his proactive attitude and exceptional performance.

Across the board, clients tell us our technicians stand out for their strong communication skills, positive attitudes, eagerness to learn, and reliability. These qualities make them valuable contributors from day one, and watching them grow into leadership roles is one of the most rewarding aspects of what we do.

For more, we invite you to explore our case study: https://uptimecrew.com/case-study/the-data-center-hiring-challenge

Also Read:

CEO Interview with Dr. Naveen Verma of EnCharge AI

CEO Interview with Faraj Aalaei of Cognichip

CEO Interview with Dr. Noah Strucken of Ferric


Podcast EP295: How Nordic Semiconductor Enables World-Class Wireless Products with Sam Presley

Podcast EP295: How Nordic Semiconductor Enables World-Class Wireless Products with Sam Presley
by Daniel Nenni on 07-04-2025 at 10:00 am

Dan is joined by Sam Presley, technical product manager at Nordic Semiconductor. With a background in electronics engineering, embedded firmware development, and consumer product development, his current areas of expertise are hardware and software for IoT applications, with a special focus on enabling product manufacturers to build the next generation of secure connected products. Sam has been instrumental in the launch of Nordic's latest SoCs.

Sam describes Nordic’s product portfolio and how it enables customers to build world-class ultra-low power wireless products. Sam discusses the broad portfolio Nordic offers as well as support for efficient software stacks for applications such as Bluetooth Low Energy, Bluetooth Mesh, and Zigbee.

Dan explores the new nRF54L15 ultra-low-power wireless SoC product from Nordic with Sam. This product is the largest memory option in the nRF54L series, with 1.5 MB NVM and 256 KB RAM. It is targeted at more demanding applications, while still being cost-optimized for high-volume scenarios.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview with Dr. Naveen Verma of EnCharge AI

CEO Interview with Dr. Naveen Verma of EnCharge AI
by Daniel Nenni on 07-04-2025 at 6:00 am

Naveen Verma Headshot

Naveen Verma, Ph.D., is the CEO and Co-founder of EnCharge AI, the only company to have developed robust and scalable analog in-memory computing technology essential for advanced AI deployments, from edge to cloud. Dr. Verma co-founded EnCharge AI in 2022, building on six years of research and five generations of prototypes while serving as a Professor of Electrical and Computer Engineering at Princeton University since 2009. He also directs Princeton’s Keller Center for Innovation in Engineering Education and holds associated faculty positions in the Andlinger Center for Energy and Environment and the Princeton Materials Institute. Dr. Verma earned his B.A.Sc. in Electrical and Computer Engineering from the University of British Columbia and his M.S. and Ph.D. in Electrical Engineering from MIT.

Tell us about your company.

EnCharge AI is the leader in advanced AI inference solutions that fundamentally changes how and where AI computation happens. Our company was spun out of research conducted at Princeton University in 2022 to commercialize breakthrough analog in-memory computing technology, built on nearly a decade of R&D across multiple generations of silicon. We’ve raised over $144 million from leading investors, including Tiger Global, Samsung Ventures, RTX Ventures, and In-Q-Tel, as well as $18.6 million in DARPA funding. Our technology delivers orders-of-magnitude higher compute efficiency and density for AI inference compared to today’s solutions, enabling deployment of advanced, personalized, and secure AI applications from edge to cloud, including in use cases that are power, size, or weight constrained.

What problems are you solving?

Fundamentally, current computing architectures are unable to support the needs of rapidly developing AI models. Because of this, we are experiencing an unsustainable energy consumption and cost crisis in AI computing that threatens to limit AI's potential impact across industries. Data center electricity consumption is projected to double by 2026, to roughly the total consumption of Japan. The centralization of AI inference in massive cloud data centers creates cost, latency, and privacy barriers, while AI-driven GPU demand threatens supply chain stability. Addressing these problems began at Princeton University with research aimed at fundamentally rethinking computing architectures to provide step-change improvements in energy efficiency. The result is a scalable, programmable, and precise analog in-memory computing architecture that delivers 20x higher energy efficiency compared to traditional digital architectures. These efficiency gains enable sophisticated AI to run locally on devices using roughly the power of a light bulb rather than requiring massive data center infrastructure.

What application areas are you strongest in?

Our strongest application areas leverage our core advantages in power- and space-constrained environments, with AI inference for client devices such as laptops, workstations, and phones as our primary focus. We enable sophisticated AI capabilities without compromising battery life, delivering over 200 TOPS in just 8.25W of power. Edge computing represents another key strength, as our technology is capable of bringing advanced AI to industrial automation, automotive systems, and IoT devices where cloud connectivity is limited or low latency is critical.
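
As a quick back-of-envelope check (my arithmetic, not a company figure), the quoted throughput and power imply the following module-level efficiency:

```python
# Dividing the two figures quoted in the interview; a rough sanity check only.
# Architecture-level TOPS/W figures quoted elsewhere may be measured differently.
tops = 200.0   # quoted throughput, "over 200 TOPS"
watts = 8.25   # quoted power envelope
print(round(tops / watts, 1))  # ~24.2 TOPS/W at the module level
```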

What keeps your customers up at night?

Our customers face mounting pressure to meet ambitious roadmaps for integrating advanced AI capabilities into new products while navigating severe technical and economic constraints that threaten their competitive positioning. For our OEM customers, the rapidly growing AI PC market means companies struggle to meet emerging device requirements within laptop constraints while maintaining battery life and competitive pricing. Meanwhile, our independent software vendor customers grapple with cloud dependency costs, latency issues, and privacy concerns preventing local, personalized AI deployment, while enterprise IT teams face skyrocketing infrastructure costs and security risks from cloud data transmission.

What does the competitive landscape look like and how do you differentiate?

While much of the attention in the AI chip space has centered on the data center, we are instead focused on AI PCs and edge devices, where our chip architecture presents uniquely transformative benefits. That said, our technologies possess qualities that make them competitive even against the most established incumbents. Against digital chip leaders, our analog in-memory computing delivers 20x higher energy efficiency (200 vs. 5-10 TOPS/W) and 10x higher compute density, while our switched-capacitor approach overcomes the noise and reliability issues that plagued previous analog attempts. These differentiators are made possible by our unique technology and approach, which leverages analog in-memory computing. In fact, our newly launched EN100 chip is the first commercially available analog in-memory AI accelerator.

What new features/technology are you working on?

We’re actively commercializing our EN100 product family and have just announced the launch of the EN100 chip, delivering over 200 TOPS for edge devices. The chip, available in M.2 form factor for laptops and PCIe for workstations, features up to 128GB high-density memory and 272 GB/s bandwidth with comprehensive software support across frameworks like PyTorch and TensorFlow. Our development roadmap focuses on migrating to advanced semiconductor nodes for even greater efficiency improvements, while expanding our product portfolio from edge devices to data centers with performance requirements tailored to specific markets. We’re simultaneously enhancing our software ecosystem through improved compiler optimization, expanded development tools, and a growing model zoo designed to maximize efficiency across the evolving AI landscape. This enables new categories of always-on AI applications and multimodal experiences that were previously impossible due to power constraints.

How do customers normally engage with your company?

Customers typically first engage with us through our structured Early Access Program, which gives developers and OEMs the opportunity to gain a competitive advantage by being among the first to leverage EN100 capabilities. Due to popular demand, we will soon open a second round of the Early Access Program. Beyond the Early Access Program, we engage through custom solution development, working closely with customer teams to map transformative AI experiences tailored to their requirements, supported by our full-stack approach that combines specialized hardware with optimized software tools and extensive development resources for seamless integration with existing AI applications and frameworks. Finally, we also maintain direct strategic partnerships with major original equipment manufacturers (OEMs) and our semiconductor partners for integration and go-to-market collaboration.

Contact EnCharge AI

Also Read:

CEO Interview with Faraj Aalaei of Cognichip

CEO Interview with Dr. Noah Strucken of Ferric

CEO Interview with Vamshi Kothur of Tuple Technologies