
CEO Interview: Subi Krishnamurthy of PIMIC

by Daniel Nenni on 12-31-2024 at 10:00 am


Subi Krishnamurthy is the Founder and CEO of PIMIC, an AI semiconductor company pioneering processing-in-memory (PiM) technology for ultra-low-power AI solutions. With over 30 years of experience in silicon design and product development, Subi has led the mass production of 12+ silicon projects and holds 30+ patents. He began his leadership journey at Force10 Networks, advancing networking silicon as a lead designer and architect, and later served as Executive Director and CTO of Dell Networking, driving technology strategy, product architecture and technology partnerships.

Subi founded Viveka Systems to innovate in networking software and silicon solutions and later consulted for various companies on Smart NICs, AI pipelines, gaming silicon, and AI inference engines. Subi holds an M.S. in Computer Science from Southern Illinois University, Carbondale, and a Bachelor of Engineering in Computer Science from the National Institute of Technology, Tiruchirappalli.

Tell us about your company?

PIMIC is a groundbreaking AI semiconductor startup delivering highly efficient edge AI solutions with unparalleled performance and energy savings. PIMIC’s proprietary Jetstreme™ Processing-in-Memory (PIM) acceleration architecture brings remarkable gains in AI computing efficiency by addressing the key requirements in edge environments, including low power, compact design, and superior AI model parameter update performance. PIMIC is set to launch two ultra-efficient AI model silicon chips for edge applications at CES 2025, delivering 10x to 20x power savings. We are also advancing our efforts on a breakthrough AI inference silicon platform designed for large-scale models, with a focus on achieving unprecedented efficiency.

What problems are you solving?

By delivering the most efficient and scalable AI inference platform for tiny to large AI models, PIMIC’s solutions meet or exceed the rapidly increasing performance and efficiency demands of agentic AI workflows and large multimodal models. Our solutions also address the need to run AI inference tasks seamlessly and effectively on local (at the edge), battery-powered devices.

What application areas are your strongest?

Initially, PIMIC’s focus is on tiny AI model inference applications such as keyword spotting and single-microphone noise cancellation (running at 20 µA and 150 µA, respectively) for wearables and other battery-operated devices. These solutions deliver 10x to 20x power savings while reducing system costs through a highly integrated design.

What keeps your customers up at night?

Our customers are finding that the rapid increase in AI model size, complex agentic workflows, and multimodal models demands far more inference compute than the architectures of current edge AI silicon can provide. The demand for inference compute performance is set to far exceed what existing hardware can deliver, creating a significant disparity. This challenge necessitates a new generation of silicon with breakthrough improvements in efficiency and performance.

What does the competitive landscape look like and how do you differentiate?

Most AI inference silicon architectures currently on the market were designed over the past six years. These older designs are struggling to meet the performance and efficiency demands of rapidly evolving AI modeling.

PIMIC’s solutions are built on a brand-new architecture that incorporates a number of AI innovations to significantly improve efficiency and scalability, including our proprietary Jetstreme™ Processing-in-Memory (PIM) technology. Our focus is on delivering an efficient, scalable silicon platform capable of handling everything from tiny AI models to large models with billions of parameters. It offers significant PPA (performance, power, area) advantages that we believe can keep up with performance demands, enabling the latest AI models to run seamlessly and effectively on any local edge device. PIMIC’s first two AI inference silicon chips based on this architecture have already demonstrated 10x to 20x improvements in PPA compared to competitors. We are confident that PIMIC holds a distinct edge in addressing the future needs of AI inference.

What new features/technology are you working on?

We are leveraging our Jetstreme Processing-in-Memory (PIM) architecture, together with a number of other critical silicon innovations, to dramatically improve compute efficiency and scalability. We are working on enabling the next generation of AI modeling.

How do customers normally engage with your company?

We have a flexible approach. We provide unpackaged chips, packaged SoCs, or ASIC solutions with specific functional requirements.

What challenges are you solving for edge devices in particular?

Edge devices—devices that act as endpoints between the data center and the real world—encompass a wide range of products, all with challenging performance requirements. Edge devices generally fall into two main categories: tiny edge devices and high-performance edge devices. PIMIC’s solutions address the challenges of both categories of device.

Tiny Edge Devices:

These devices, often located near sensors, must operate with extremely low power and cost constraints to achieve widespread adoption. The primary challenges for this category include energy efficiency, cost optimization, and low latency for real-time response.

High-Performance Edge Devices:

Devices such as smartphones, smart TVs, and AI-powered PCs must run large AI models in real time, ensuring seamless user interactions by balancing computational demands, latency, privacy, and energy efficiency. The key challenges include overcoming hardware limitations in power, memory bandwidth, and computational throughput to enable advanced AI tasks locally, all while scaling to meet the performance demands of the latest AI models mentioned earlier.

About PIMIC

Founded in 2022 and based in Cupertino, California, PIMIC is an AI semiconductor company specializing in ultra-efficient silicon solutions for edge AI applications. The company’s chip products deliver industry-leading performance and power efficiency, enabling advanced AI capabilities in compact, low-power devices. With a focus on empowering devices at the edge, PIMIC aims to redefine how AI is integrated into everyday technology.

For more information, visit www.pimic.ai.

Also Read:

CEO Interview: Dr Josep Montanyà of Nanusens

CEO Interview: Marc Engel of Agileo Automation

CEO Interview with Dr. Dennis Michaelis of GEMESYS


CEO Interview: Dr Josep Montanyà of Nanusens

by Daniel Nenni on 12-31-2024 at 6:00 am


Dr. Josep Montanyà is Chief Executive Officer (UK/Spain) and co-founder, leading the company with over 18 years of experience in MEMS, patents, and the semiconductor industry. He founded Baolab Microsystems prior to Nanusens.

Tell us a little bit about your company?

We have a patented technology that allows us to build chips with nano-mechanisms inside (called NEMS) using the de facto standard manufacturing for solid state chips (called CMOS). This allows us to have better performance, smaller size and lower cost.

There are many applications for this technology, including sensors, RF tunable devices, and even AI processors with higher speeds and lower consumption. Our initial focus is on a particular product called the RF DTC (Digitally Tunable Capacitor). We have a very clear route to market, with a signed LOI, and we will place it into Tier 1 phones (and more) that will hit the market in 4 years.

Beyond this initial product there are more RF devices we can build, and a large variety of sensors. Until now, inside each wearable or portable system, you have a digital processor and one or more external sensor chips. We have the capability to change this by embedding the sensors into the digital processors. Today it is possible to do this by using complex multi-die packages. We don’t need to do that. Instead, at Nanusens, we can build all these sensors monolithically on the same CMOS die where the digital processor is built. And this is done without impacting yield and using minimal additional area. This dramatically reduces the size of the overall system, and it also reduces power consumption to levels that are unseen today.

The company is split between its HQ in Paignton (UK) and a subsidiary in Cerdanyola (Spain).

What was the most exciting high point of 2024 for your company? 

This year we got our RF DTC prototype measured by a large corporation. This was a very important milestone, because it not only showed the industry’s interest in our product, but also validated all our measurements.

These measurements proved the incredible performance that our RF DTC can achieve, and they have helped us better understand our route to market. Being able to increase the antenna efficiency of cell phones by 30% means that we increase talk time by 30%, and we also increase the range from the base antenna by 14%, meaning that many areas of poor reception disappear. And for the smartphone OEM, the size of the PCB can be reduced, given that with our solution there is no need for switched standalone external capacitors. Reducing PCB size inside the phone is a key driver for smartphone OEMs, as this means having more space for battery.
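As a rough sanity check, the 14% range figure follows from the 30% efficiency gain if one assumes simple inverse-square (free-space) propagation — an assumption of this sketch, not something the interview states:

```python
# Under inverse-square (free-space) propagation, received power falls off
# as 1/d^2, so for the same received power the usable range d scales with
# the square root of radiated power. A 30% antenna-efficiency gain thus
# extends range by sqrt(1.3) - 1, roughly 14%.

efficiency_gain = 0.30                      # +30% radiated power
range_gain = (1 + efficiency_gain) ** 0.5 - 1

print(f"Range increase: {range_gain:.1%}")  # ~14.0%
```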

With our chip, we aim to monopolize the $800M+ smartphone aperture antenna market. This is because we have unmatched performance, small size, and low cost. And all this comes from the fact that we use our patented technology to build NEMS devices in CMOS.

What was the biggest challenge your company faced in 2024? 

The main difficulty for a pre-revenue start-up like Nanusens, developing semiconductor and especially MEMS technology, is fundraising. Our goal for 2024 on this front was to raise an £8m Series A to produce prototypes of our inertial sensor and RF DTC devices, so that next year we would be in the market with our first products, for which we already have customers waiting. This has been moved to 2025, as we will have achieved more significant milestones by then that will facilitate this process.

How is your company’s work addressing this biggest challenge?

We have decided to focus on the RF products, leaving sensors and other devices in our future roadmap. This has allowed us to reduce costs and be more efficient.

What do you think the biggest growth area for 2025 will be, and why?

I think AI processors will keep being the dominant area in semiconductors. The incredible success of NVIDIA, plus all the big techs jumping in, forecasts a very interesting year. At the same time, however, the market is starting to adjust itself, and I believe we will start seeing more start-up failures in this field as well. You need something really different to succeed in such a competitive field, dominated by giant players.

How is your company’s work addressing this growth? 

We put a limited effort into studying the possibility of building better AI processors using our NEMS-in-CMOS technology. We discovered that it is possible for us to build vacuum transistors in CMOS. This has the potential to enable AI processors that are 10x faster while consuming half the power.

Vacuum transistors enjoy the terahertz-range bandwidths of vacuum tubes, but without their problems of large size, mechanical fragility, low reliability, and high power consumption. In fact, given the very small, nano-sized gaps of these vacuum transistors, there is no need to heat the metal to high temperatures. Instead, a low voltage across such a small gap generates an electrical field so strong that electrons fly between the cathode and the anode by field emission.

There are research papers on vacuum transistors, which have been built using custom NEMS processes. At Nanusens, we have the capability to build them using standard CMOS processing. This has the potential to build AI processors far beyond the state of the art, and with a process ready to produce them in high volumes. This is a project for after the Series A round is completed.

How do customers engage with your company?

Although technically we can sell IP and have already done so, our business model is to sell product (ICs) directly to our customers or through distributors.

Additional questions or final comments? 

It is always difficult to predict the future. But 2025 will be a very interesting year. I will be especially interested to see what happens in this race to dominate the AI digital processor market. But whoever wins next year, we have a technology that will surpass them in the years after!

Also Read:

CEO Interview: Marc Engel of Agileo Automation

CEO Interview with Dr. Dennis Michaelis of GEMESYS

CEO Interview: Slava Libman of FTD Solutions


Accelerating Simulation. Innovation in Verification

by Bernard Murphy on 12-30-2024 at 6:00 am


Following a similar topic we covered early last year, here we look at updated research on accelerating RTL simulation through domain-specific hardware. Paul Cunningham (GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO and lecturer at Stanford, EE292A) and I continue our series on research ideas. As always, feedback welcome.

The Innovation

This month’s pick is Accelerating RTL Simulation with Hardware-Software Co-Design. This was published in the 2023 IEEE/ACM International Symposium on Microarchitecture and has 2 citations. The authors are from MIT CSAIL (CS/AI Lab).

This work is from the same group that led the earlier paper. Their new approach, ASH, adds dataflow acceleration, not available in the earlier work, which together with speculation provides the large net performance gain in this research.

Paul’s view

An important blog to end our year. This paper is a heavy read, but it’s on a billion-dollar topic for verification EDA: how to get a good speed-up from parallelizing logic simulation. The paper is out of MIT, from the same team that published the Chronos paper we blogged on back in March 2023 (see here). This team is researching hardware accelerators that operate by scheduling timestamped tasks across an array of processing elements (PEs). The event queue semantics of RTL logic simulation map well to this architecture. Their accelerators also include the ability to do speculative execution of tasks to further enhance parallelism.

As we blogged in 2023, while Chronos showed some impressive speed-ups, the only result shared was for the gate-level simulation of a single 32-bit adder. Fast forward to today’s blog and we have some serious results on 4 credible RTL testcases, including an open-source GPU and an open-source RISC-V core. Chronos doesn’t cut it on these more credible testcases – actually it appears to slow down the simulations. However, this month’s paper describes some major improvements on Chronos that look very exciting on these more credible benchmarks – in the range of 50x speed-up over a single-core simulation. The new architecture is called SASH, a Speculative Accelerator for Simulated Hardware.

In Chronos, each task can input and output only one wire/reg value change. This limits it to a low level of abstraction (i.e. gate-level), and also conceptually means that any reconvergence in logic is “unfolded” into cones, causing significant unnecessary replication of tasks. In SASH, each task can input and output multiple reg/wire changes, so tasks can be more like RTL always blocks. Input/output events are passed as “arguments” through an on-chip network and queued at PEs until all arguments for a task are ready. Speculative task execution is also elegantly implemented with some efficient HW. The authors modify Verilator (an open-source RTL simulator) to compile to SASH. Overall, very impressive work.
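As a toy illustration of the selective, event-driven idea behind these accelerators — my own sketch, not the authors’ implementation — each task declares its input signals, and per cycle only the tasks whose inputs actually changed are evaluated, with new output changes propagated in waves:

```python
# Toy sketch of selective event-driven evaluation: tasks (like RTL always
# blocks) declare input signals, and each "cycle" we run only the tasks
# whose inputs changed, propagating any resulting output changes.

def simulate_cycle(tasks, changed):
    """tasks: list of (inputs, outputs, fn); changed: set of signal names."""
    ran = []
    while changed:
        wave, changed = changed, set()
        for inputs, outputs, fn in tasks:
            if wave & set(inputs):          # selective: skip untouched tasks
                ran.append(fn.__name__)
                changed |= set(fn())        # fn returns outputs that changed
    return ran

# Two hypothetical tasks: an adder whose output feeds a comparator.
state = {"a": 1, "b": 2, "sum": 0, "gt": False}

def adder():
    new = state["a"] + state["b"]
    if new != state["sum"]:
        state["sum"] = new
        return ["sum"]
    return []

def comparator():
    new = state["sum"] > 2
    if new != state["gt"]:
        state["gt"] = new
        return ["gt"]
    return []

tasks = [(["a", "b"], ["sum"], adder), (["sum"], ["gt"], comparator)]
print(simulate_cycle(tasks, {"a"}))   # ['adder', 'comparator']
```

A real accelerator does this scheduling in hardware, speculatively and across many PEs; the sketch only shows why low-activity designs benefit — untouched tasks are never evaluated.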

One important thing to note: the authors do not actually implement SASH in an ASIC or on an FPGA. A virtual model of SASH was built using Intel’s Pin utility (a low-level x86 virtual machine utility with just-in-time code instrumentation capabilities). I look forward to seeing a future paper that puts it in silicon!

Raúl’s view

In March of 2023 we reviewed Chronos (published in March 2020), based on the Spatially Located Ordered Tasks (SLOT) execution model. This model is particularly efficient for hardware accelerators that leverage parallelism and speculation, as well as for applications that dynamically generate tasks at runtime. Chronos was implemented on FPGAs and, on a single processing element (PE), outperformed a comparable CPU baseline by 2.45x. It demonstrated the potential for greater scalability, achieving a 15.3x speedup on 32 PEs.

Fast forward roughly three and a half years, and the same research group published the paper we review here, on ASH (Accelerator of Simulated Hardware), a co-designed architecture and compiler specifically for RTL simulation. ASH was benchmarked on 256 cores, achieving a 32.4x acceleration over an AMD Zen2 based system, and a 21.3x speedup compared to a simulated, special-purpose multicore system.

The paper is not easy to read. The initial discussion on why RTL simulation is difficult and needs fine-grained parallelism to handle both dataflow parallelism and selective execution / low activity factors is still easy to follow. The ASH architecture comes in two flavors: DASH (Dataflow ASH) provides novel hardware mechanisms for dataflow execution of small tasks, and SASH (Selective event-driven ASH) extends DASH with selective execution, running only tasks whose inputs change during a given cycle. The latter is obviously the more effective one.

The compiler implementation for these architectures adds 12K lines of code to Verilator, while maintaining Verilator’s fast compilation times (Verilator is a full-featured open-source simulator for Verilog/SystemVerilog). The HW implementation is evaluated “using a simulator based on Swarm’s simulator [2, 27, 76], which is execution-driven using Pin [36, 43]”. The area of a HW implementation of SASH in a 7nm process is estimated to be a modest 115 mm². These descriptions, however, are not self-contained and require additional reading for a full understanding. The paper includes a detailed architectural analysis, covering aspects such as prefetching instructions, prioritized dataflow, queue utilization, etc. It also compares ASH to related work, including of course Chronos and other dataflow / speculative execution architectures, as well as HW emulators and GPU acceleration.

The paper specifically addresses accelerating RTL simulation. It tackles the challenges of RTL simulation through a combination of hardware and software, using dataflow techniques and selective execution. Given the sizable market for emulators in the EDA industry, there is potential for these ideas to be commercially adopted, which could significantly accelerate RTL simulation.


Podcast EP268: A Decade in the Chinese Semiconductor Industry: An American’s Story

by Daniel Nenni on 12-27-2024 at 10:00 am

Dan is joined by Dr. Douglas Sparks, CEO of M2N Technologies LLC, a consulting firm specializing in semiconductors, MEMS, and sensors, including their supply chains. He has just published a new book, A Decade in the Chinese Semiconductor Industry: An American’s Story. Doug was the CTO of Hanking Electronics, which built a high-volume wafer fab in Shenyang, China. In addition to China, he has international semiconductor business experience in Japan and Europe as well as the United States.

In this informative discussion Doug describes his experiences while working on semiconductor infrastructure in China. He provides information on the impact of US export controls in China and the broader impact on the worldwide supply chain. Dan and Doug discuss semiconductor fab buildout around the world and possible broad-based over-capacity going forward. Intellectual property protection in China is also discussed. The details behind security and surveillance in China and potential broader use of this technology are also explored.

You can learn more about Doug here, and explore his new book here.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


The Intel Common Platform Foundry Alliance

by Daniel Nenni on 12-27-2024 at 6:00 am


When I do a root cause analysis of Intel’s problem, it is very simple. If Intel wants to continue to be a leading-edge semiconductor manufacturer, they need to fill their fabs, all of their fabs. Clearly several things need to happen in order to do that, but the one that most interests me is on the foundry side.

I think we can all agree that it is important for Intel to succeed. The United States needs to stay in semiconductor manufacturing and the world needs a second trusted foundry. After the formal Intel Foundry launch the first thing I thought of was for Intel to acquire GlobalFoundries to buy a channel and accelerate the business of filling fabs. GF is stuck at 14nm (which they licensed from Samsung) so it seemed like a good fit. Instead, Intel tried to buy Tower Semiconductor which I felt was a good move as well. The semiconductor foundry business is a marathon not a sprint, but this would have sped things up for sure.

Unfortunately, the sale of Tower Semiconductor to Intel did not go through due to regulatory hurdles. The acquisition was blocked by antitrust regulators in multiple jurisdictions, including the U.S. Federal Trade Commission (FTC) and Chinese authorities. To me this was political nonsense. The sale should have been approved to add a third horse in a two-horse race (TSMC and Samsung Foundry) for the greater good of the semiconductor industry.

Meanwhile, TSMC is racing forward with new fab partnerships that have changed the foundry business. In November of 2021 TSMC announced a partnership with the Japanese government, Sony, and DENSO (automotive) for fabs in Japan. The first one opened in February of 2024, making 28nm wafers. A second FinFET fab will be added for production in 2027.

TSMC also announced a partnership with Robert Bosch GmbH, Infineon Technologies AG, and NXP Semiconductors N.V. to establish the European Semiconductor Manufacturing Company (ESMC) in Dresden, Germany. A 28nm fab is being built with production starting in 2027 and I would guess more fabs will follow.

The other noteworthy foundry partnership is Rapidus in Japan. Rapidus was founded with backing from eight major Japanese companies: Denso, Kioxia, MUFG Bank, NEC, NTT, SoftBank, Sony, and Toyota. It is scheduled to be a 2nm fab, in partnership with IBM, with production starting in 2027. To me this was a missed opportunity for Intel.

I do not have high hopes for this partnership since it involves IBM technology. I remember back in 2004 when the Common Platform Alliance was launched. It was a partnership between IBM, Samsung, and Chartered Semiconductor (later acquired by GF). The goal was to standardize process technologies (PDKs) across their fabs so a customer could take one design to multiple manufacturing sources. This also enabled a shared ecosystem of EDA and IP which as we know is a big deal.

TSMC was rising in prominence with increased market share at 40nm, so this was a “not TSMC” competitive move at 28nm. Unfortunately, it used the IBM process recipe, which historically has not been high-yielding technology. This was the HKMG gate-first versus gate-last controversy. Common Platform used IBM’s gate-first technology and did not yield. TSMC followed Intel with gate-last and won the 28nm node by a large margin, and that was the end of the Common Platform Alliance.

As I have said before, Intel needs to pivot. Fabs are getting more expensive to build, and filling them has never been harder. Independently building the required ecosystem to compete with TSMC is another major financial challenge. Launching an Intel Common Foundry Platform Alliance and providing other foundries (Samsung, UMC, Tower Semiconductor, GlobalFoundries, etc.) unified access to Intel manufacturing can fill the Intel fabs and packaging facilities, absolutely.

Also Read:

What would you do if you were the CEO of Intel?

Intel Presents the Final Frontier of Transistor Architecture at IEDM

Intel – Everyone’s Favourite Second Source?


CEO Interview: Marc Engel of Agileo Automation

by Daniel Nenni on 12-26-2024 at 10:00 am


Marc Engel has served as the CEO of Agileo Automation for the past 15 years. Agileo specializes in software solutions for controlling semiconductor production equipment and connecting tools to MES systems using SECS/GEM and OPC-UA standards. Marc started his extensive 25-year engineering career in software development on the ground setting up production machines in wafer fabs in Germany, Taiwan, and China. He has worked for companies such as RECIF Technologies, Motorola Mobile Services, Akka Technologies, and Atos Origin. Marc holds a degree in engineering and industrial automation from the Institut National des Sciences Appliquées (INSA, National Institutes of Applied Sciences) in Toulouse, France.

Tell us about your company

Founded in Poitiers, France, in 2010, Agileo Automation specializes in software solutions for controlling production equipment in the semiconductor industry and connecting tools to the company’s manufacturing execution system (MES) using widely adopted industrial standards. Our expertise lies in guiding customers—whether start-ups or established manufacturers—through the phases of increasing the production readiness level of their equipment, helping them reduce time-to-market while maintaining high standards of efficiency and reliability. At the heart of Industry 4.0, our A²ECF-SEMI framework provides a robust foundation for developing equipment controller software, leveraging SEMI’s SECS/GEM and GEM300 standard suites and a portfolio of drivers for off-the-shelf semiconductor equipment, such as wafer handlers or load ports from multiple vendors. As a member of SEMI and the OPC Foundation, Agileo Automation is a key contributor to the development and integration of industry standards, such as SEMI standards and OPC Unified Architecture (OPC-UA).

What problems are you solving?

Think of us as the automated version of a Windows operating system for a computer. We develop the brain behind semiconductor production equipment. We help start-ups embrace the world of automation for their fab equipment, a world unfamiliar to them, as they often come from the research lab community. They develop innovative and complex production equipment, but they have limited knowledge of automation technologies needed to integrate it into the operational environment of a real-world manufacturing wafer fab. We help them prepare the machines to connect with commercially available robots, manage the operator interface, integrate with the company’s IT systems like MES, etc., ensuring a much faster time-to-market.

What application areas are your strongest?

The semiconductor industry is our primary application market. Original equipment manufacturers (OEMs) work with a wide variety of clients, wafer sizes, and carrier types found in advanced packaging facilities. We provide flexible software solutions to their complex integration challenges, including specialized advice on selecting robotic systems. We have done this kind of work for a French company, UnitySC, which works with a wide variety of semiconductor materials and diverse wafer formats. Their loading process requires specific equipment front-end modules (EFEMs), traditionally controlled using manufacturer-developed software. With our A²ECF-SEMI framework, we simplified the integration of EFEMs into UnitySC’s process modules, reducing their dependency on subsystem suppliers, automating the loading process and fab host interface, and solving complex integrations. For Soitec, by leveraging the built-in architecture of our A²ECF-SEMI framework, we developed a digital twin for the integration of a new automated SOI wafer loading robot into its production lines based in France and Singapore.

What keeps your customers up at night?

Our clients are often on tight deadlines, which can be easily disrupted by material sourcing delays or unforeseen internal process challenges. Our team is used to developing software in parallel to production machines being designed and built. A key aspect of our value proposition is minimizing the time spent on software development within the critical path of overall equipment planning. We develop our own digital twins and decouple hardware and software. Remember that some of this manufacturing equipment is shipped to the other side of the world where it not only has to be installed but also has to be maintained. Our software solutions take this kind of use case into account from the very beginning to solve issues remotely, ensuring full confidence in the quality of the software modifications made.

What does the competitive landscape look like and how do you differentiate?

There are a handful of competitors on the global scene who provide offerings similar to ours. I think what sets us apart is the overall customer experience we provide – our 20 years of global experience, our combined equipment manufacturing and software development expertise, our deep integration track record, our comprehensive suite of customizable solutions, and our service excellence. The world’s most reputable semiconductor companies trust Agileo Automation to solve their production equipment control and connectivity challenges. Our global installed base includes over 900 controlled/connected pieces of equipment of more than 60 different types, used by two of the top 10 semiconductor OEMs and five of the top 50, across more than 65 wafer fabs worldwide. A large part of our business actually comes from client referrals within the industry.

The make-or-buy decision is a particularly challenging dilemma for many of our customers. Some software solutions can certainly be developed internally, but they will likely end up being more expensive and take significantly longer to implement. Developing software at this level takes a high level of experience and maturity. Outsourcing complex software development will eventually be cheaper and lead to a faster time-to-market.

What new features/technology are you working on?

In November 2024, we launched our E84 PIO Box, a new handheld device that offers a new lightweight interface for fab staff to test semiconductor equipment software for compliance with SEMI’s E84 and GEM300 standards suite for automatic carrier delivery. It improves the readability, identification, and validation of E84 signal exchanges and functional aspects in cleanrooms or workshops. Integrated with our Speech Scenario software that emulates the fab host and validates the SECS/GEM interface with predefined test scenarios, the E84 PIO Box can easily emulate automated carrier delivery systems such as overhead hoist transport (OHT) or automated guided vehicles (AGV). It can detect non-compliance and other functional issues thanks to its close alignment with SEMI’s E84 standard.

Coming up next are new developments on the SEMI Equipment Data Acquisition (EDA) standards front. We are proud to have recently achieved a significant milestone. Our team – one of only four worldwide – successfully conducted its first SEMI EDA Freeze 3 standards interoperability tests focusing on connectivity and data acquisition with our semiconductor industry peers, including software vendors and OEMs. Our team validated key functions such as gRPC metadata usage, security administration, and data collection plan management. EDA Freeze 3 brings a substantial leap in semiconductor equipment performance through the adoption of gRPC instead of SOAP/XML, enabling reduced latency and increased data collection throughput.

What is the best advice you would give to semiconductor OEMs and equipment manufacturers when it comes to their fab automation systems?

Don’t underestimate the crucial role software plays in the deployment of production equipment. Just like the mechanical components, software requires thorough preparation, particularly regarding interfaces with operator and IT systems, to ensure the machine performs as promised within the highly complex and expensive fab manufacturing process. For example, moving a wafer from point A to point B for processing may seem straightforward at first glance; however, it involves integrating several systems from multiple suppliers, ensuring compliance with international standards, accounting for the future maintenance of equipment worldwide, ensuring scalability across the equipment family’s lifecycle, and addressing numerous other factors. These requirements demand a robust software architecture, a deep understanding of the equipment use cases, as well as qualified and experienced staff capable of simplifying complex processes to deliver peace of mind to customers.

How can customers engage with your company?
Agileo Automation – https://www.agileo.com/en
References:
SECS/GEM – https://secsgem.eu/
OPC Foundation – https://opcfoundation.org/
Also Read:

CEO Interview with Dr. Dennis Michaelis of GEMESYS

CEO Interview: Slava Libman of FTD Solutions

CEO Interview: Caroline Guillaume of TrustInSoft


CEO Interview with Dr. Dennis Michaelis of GEMESYS
by Daniel Nenni on 12-26-2024 at 6:00 am

Dr. Dennis Michaelis, CEO of GEMESYS

Dr. Dennis Michaelis is the founder and CEO of the AI chip start-up GEMESYS. With a Ph.D. in Bio-Inspired Computing from Purdue University in Indiana and a background in electrical engineering, he brings a unique blend of technical expertise and social commitment to the company. His previous role as Regional Director for Anonymous for the Voiceless highlights his leadership skills and dedication to ethical causes: he was responsible for over 100 local groups in the DACH region, each with hundreds of members.

His professional expertise is underpinned by numerous scholarships, a cum laude dissertation, and the “VDE Prize for Outstanding Academic Achievement” awarded by the German Association for Electrical, Electronic & Information Technologies (VDE) for his Master’s thesis.

Tell us about your company?

GEMESYS is a cutting-edge technology company based in Bochum, Germany, focused on revolutionizing AI hardware. We’ve developed a novel analog chip architecture using memristive functionality, which enables ultra-efficient, real-time neural network processing directly on edge devices.

Our technology is designed to address some of the biggest challenges in the industry, like energy efficiency, performance at the edge, and data privacy. By processing AI workloads natively in hardware, we achieve dramatically lower power consumption and latency compared to traditional digital solutions. This makes our chips ideal for applications in industries like consumer electronics, automotive, healthcare, and IoT.

We’re backed by a strong combination of investors from Europe (Amadeus APEX Technology Fund, Atlantic Labs, NRW.Bank), Silicon Valley (Plug and Play Tech Center), and Japan (Sony Innovation Fund), and have recently secured a $9.1M pre-seed round, including government support. At our core, GEMESYS is about creating intuitive, sustainable technology that simplifies and enhances everyday life. Our vision is to lead the shift toward smarter, more connected edge devices, enabling our customers to innovate faster and more effectively.

What problems are you solving?

At the moment, training a single complex AI model, such as ChatGPT, consumes as much electricity as a coal-fired power plant produces. This is also reflected in the huge data centers that are constantly in operation. If this trend continues, the cost of training a single neural network could exceed the economic power of entire countries as early as 2026. GEMESYS is developing a new type of hardware solution for decentralized AI applications. At its heart is an analog AI chip that is modeled on the human brain and processes data directly at the source, the so-called edge devices. For the first time, this enables local training of AI models in addition to the local execution of AI, which reduces network loads, improves data protection and promotes scalability. Thanks to its analog approach, the GEMESYS chip offers unparalleled energy efficiency and opens up new possibilities for Internet of Things applications and the networking of numerous devices.

What application areas are your strongest?

Our technology is particularly strong in application areas where energy efficiency, real-time performance, and data privacy are critical. In consumer electronics, for instance, our chips power wearables and smart home devices, enabling real-time AI processing without draining batteries.

In automotive, we support advanced driver-assistance systems and autonomous vehicles, delivering low-latency AI capabilities that enhance safety and performance while meeting strict energy and reliability demands.

In healthcare, our chips enable portable medical devices and diagnostics tools to process data securely on-device, ensuring privacy and reliability, especially in remote environments.

We’re also making a significant impact in industrial IoT, powering factory automation, predictive maintenance, and edge monitoring systems where energy efficiency and local decision-making are essential.

Finally, in smart cities, our technology supports applications like environmental monitoring, reducing cloud dependency while providing fast, localized AI.

These diverse areas highlight how GEMESYS is driving smarter, more sustainable edge solutions across multiple industries.

What keeps your customers up at night?

Many of them are kept up at night by the need to achieve more with less—delivering high-performance AI capabilities while staying within strict power and cost constraints. For those working on edge devices, power efficiency is critical, especially for battery-powered applications where every milliwatt counts. They also need real-time AI processing without relying on cloud connectivity, which introduces latency, security risks, and compliance challenges.

Beyond that, cost pressures are a constant concern. Scaling AI capabilities while managing manufacturing costs and maintaining competitive pricing is a balancing act. Add to that the growing focus on sustainability—companies are under immense pressure to reduce their carbon footprints and meet ESG goals. AI solutions that aren’t energy-efficient simply won’t align with these priorities.

What does the competitive landscape look like and how do you differentiate?

On one side, we face established players in the semiconductor and AI hardware space that focus on both inferencing and training of AI models in data centers, such as NVIDIA, Qualcomm, and Intel. On the other, a wave of emerging startups is exploring alternative approaches, like spiking neural networks or crossbar arrays, to enable inferencing at the edge.

What sets GEMESYS apart is our fundamentally different approach. Most competitors are focused on squeezing incremental gains from digital architectures that were never designed for the inherent demands of AI at the edge. These architectures struggle with energy efficiency and latency while relying heavily on cloud processing.

At GEMESYS, we’re rewriting the rules with our analog chip architecture, leveraging memristive functionality to process neural networks natively in hardware. This allows us to achieve ultra-low power consumption and real-time performance—ideal for edge devices. While many competitors are optimizing traditional solutions, we’re offering a breakthrough technology that aligns perfectly with the industry’s future needs for energy-efficient, scalable, and secure AI processing.

What new features/technology are you working on?

While our core technology—an analog chip architecture with memristive functionality—already sets us apart, we’re working on several advancements to further enhance its capabilities.

One key area of focus is expanding the adaptability of our architecture to support more diverse AI models and applications. This involves optimizing our chips to handle increasingly complex neural networks while maintaining the same ultra-low power and high-speed performance that define our technology.

We’re also developing features that improve on-device learning. Traditionally, AI models are trained in the cloud and then deployed to devices, but this approach has limitations in dynamic environments. With on-device learning, our chips can adapt to new data in real-time (continuous learning), opening up possibilities for smarter, more personalized edge devices.

Another exciting development is in the area of robust and secure AI. As data privacy becomes an ever-greater concern, we’re enhancing our architecture to ensure secure, local data processing without compromising performance. This is especially critical in industries like healthcare, automotive, and industrial IoT.

Finally, we’re exploring ways to integrate our chips into broader ecosystems, ensuring seamless compatibility with existing software frameworks and enabling end-to-end solutions for our customers. These advancements reflect our commitment to not only delivering cutting-edge hardware but also enabling our customers to stay ahead in an increasingly connected and intelligent world.

How do customers normally engage with your company?

We design and develop cutting-edge analog chips with memristive functionality, which we license to device manufacturers and system integrators. This licensing model allows us to scale efficiently, enabling our customers to integrate our technology into their products across industries like consumer electronics, automotive, healthcare, and industrial IoT.

Additionally, we generate revenue by providing tailored solutions and support for our first pilot customers, including co-development partnerships for custom applications and optimization services. This ensures our technology meets the specific needs of our customers’ use cases while also fostering long-term collaboration.

Also Read:

CEO Interview: Slava Libman of FTD Solutions

CEO Interview: Caroline Guillaume of TrustInSoft

CEO Interview: Mikko Utriainen of Chipmetrics


What would you do if you were the CEO of Intel?
by Daniel Nenni on 12-24-2024 at 10:00 am

Intel BSPD Power Via

One of the most enduring threads in the SemiWiki forum is “What would you do if you are the Intel CEO?” It currently has 128 responses and more than 45,000 views. It was originally posted March 13th, 2015, after Brian Krzanich was given the CEO position. A different time for sure, but an interesting read, and the responses keep on coming.

One thing that struck me while watching the keynotes at IEDM is that Intel has made a big change in the last few years with regard to transparency. I remember back when Intel brought FinFETs to market (first at 22nm, then refined at 14nm); we were all pleasantly surprised. Intel kept that secret, like many other technology leaps that made Intel a semiconductor legend. It took the rest of the industry years to catch up, and even today Intel 14nm is one of the best 14nm implementations the industry has to offer.

After the IEDM presentations by Intel and TSMC, the transparency differences were quite obvious. TSMC releases just enough information and Intel releases much more. While we all scream for transparency, in this ultra-competitive market it may not be a great idea.

The latest example is Back Side Power Delivery (BSPD). Intel first announced it at the 2022 IEEE VLSI Symposium in great detail.  PowerVia was highlighted as a critical innovation to complement Intel’s gate-all-around transistor architecture (RibbonFET) and will be used first for internal products then offered to foundry customers. Here is the paper summary:

[T6-1] Intel PowerVia Technology: Backside Power Delivery for High Density and High-Performance Computing.

This paper presents a high-yielding backside power delivery (BPD) technology, PowerVia, implemented on Intel 4 finFET process. PowerVia more directly integrates power delivery to the transistor as compared to published buried power rail schemes, enabling additional wiring resources on front side for signal routing. A fabricated E-core with >90% cell utilization showed >30% platform voltage droop improvement and 6% frequency benefit compared to a similar design without PowerVia. Transistor performance, reliability, and fault isolation capability is detailed.

The problem of course is that TSMC is a fast follower. Not only are they a fast follower, TSMC has the support of the largest ecosystem known to the semiconductor industry.

At the 2024 North America Technology Symposium TSMC announced Super Power Rail for delivery in 2026 with the A16 node. The Super Power Rail, not unlike Intel PowerVia, enhances power efficiency and signal routing by dedicating front-side resources specifically for signals, aiming to improve logic density and performance for HPC applications. This is from the TSMC website:

TSMC A16™ technology is the next nanosheet-based technology featuring Super Power Rail, or SPR.

SPR is an innovative, best-in-class backside power delivery solution. It improves logic density and performance by dedicating front-side routing resource to signals. SPR also improves power delivery and reduces IR drop significantly. Most importantly, the novel backside contact scheme we developed preserves gate density, layout footprint, and device width flexibility, thus achieving best density and performance simultaneously, and we believe it is a first in the industry.

A16 is best suited for HPC products with complex signal routes and dense power delivery network, as they can benefit the most from backside power delivery. Compared with N2P, A16 offers 8%~10% speed improvement at the same Vdd, 15%~20% power reduction at the same speed, and 1.07~1.10X chip density.

Who Will Win?

It will be another battle of the technology titans: Intel PowerVia versus TSMC Super Power Rail. I have no doubt that Intel will be first with PowerVia on internal 18A products. I do believe that TSMC Super Power Rail will be the overall winner with foundry customers.

Which implementation will be best? We really won’t know until the chips fall but my guess is that they will be competitive. Intel should have the lead since they are designing PowerVia with specific chips in mind while TSMC’s Super Power Rail will have a broader application. I do believe, however, that in time the TSMC BSPD implementation will surpass Intel’s due to the driving force of the TSMC ecosystem.

Had Intel waited for chips to be in production before bragging about BSPD, it would absolutely be a much different race.

In 1996, former Intel CEO Andy Grove published the renowned book Only the Paranoid Survive, an inside look at Grove’s management style and experiences at Intel. The key theme for me was that Andy believed a healthy amount of paranoia keeps leaders vigilant, proactive, and prepared for the many disruptions of the semiconductor industry. Hopefully the new Intel CEO will have a healthy amount of paranoia.

Also Read:

Intel Presents the Final Frontier of Transistor Architecture at IEDM

Intel – Everyone’s Favourite Second Source?

An Invited Talk at IEDM: Intel’s Mr. Transistor Presents The Incredible Shrinking Transistor – Shattering Perceived Barriers and Forging Ahead


Stochastic Pupil Fill in EUV Lithography
by Fred Chen on 12-24-2024 at 6:00 am

Exposing EUV

Pupil fill tradeoff again

EUV lithography continues to be plagued by its stochastic nature.

This stochastic nature is most clearly portrayed by the random fluctuation of the absorbed photon number at a given location. For example, an absorbed dose of 10 mJ/cm2 amounts to 6.8 photons of energy 92 eV absorbed in a square nanometer of EUV resist. The standard deviation of the absorbed photon number in that area, according to Poisson statistics, is 2.6 photons, about 38% of the mean. This “shot noise” leads to locally reduced or increased dose, which in turn can lead to defects. This frequently limits the allowed dose range in EUV lithography.
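These figures can be sanity-checked directly; here is a minimal sketch (the physical constants and unit conversions below are standard, not taken from the article):

```python
import math

# Shot-noise check for the numbers quoted above.
EV_TO_J = 1.602e-19                      # 1 eV in joules
photon_energy_J = 92 * EV_TO_J           # one 92 eV EUV photon
dose_J_per_cm2 = 10e-3                   # 10 mJ/cm^2 absorbed dose
area_cm2 = (1e-7) ** 2                   # 1 nm x 1 nm pixel, in cm^2

mean_photons = dose_J_per_cm2 * area_cm2 / photon_energy_J  # ~6.8 photons
sigma = math.sqrt(mean_photons)          # Poisson std. dev., ~2.6 photons
rel_noise = sigma / mean_photons         # ~38% of the mean

print(f"mean = {mean_photons:.1f} photons, sigma = {sigma:.1f} "
      f"({rel_noise:.0%} relative noise)")
```

The 38% relative fluctuation follows directly from the Poisson property that the standard deviation is the square root of the mean.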

An alternative way to visualize the stochastic imaging is to realize that each location is illuminated differently. EUV lithography systems project demagnified images from a mask onto a wafer. Rather than consistently illuminating every location with a prescribed set of angles incident on the mask, there is randomness in the angle prescription itself.

Figure 1 shows that even when the targeted set of incident angles forms two leaf shapes in the pupil plane, corresponding to a certain number of photons for each angle channel, the actual photon distribution among the channels is stochastic, with some channels getting no photons and others getting an excessive number. The absorbed dose here is 8.36 mJ/cm2, corresponding to 14% absorption of a 60 mJ/cm2 dose by a 30 nm thick resist with an absorption coefficient of 5/um.

Figure 1. Stochastic view of the dipole leaf illumination shape targeted for 30 nm pitch on an 0.33 NA EUV system. 8.36 mJ/cm2 absorbed into a 3 nm x 3 nm area. The source is divided into 152 points, each corresponding to 1/11 of the NA, each channeling 0.055 mJ/cm2 to be absorbed. The plotted numbers are the number of 92 eV EUV photons channeled at the particular incident angle targeting the 3 nm x 3 nm pixel. Left: targeted pupil shape. Right: example of actual distribution of absorbed photons corresponding to each incident angle.

Thus, although the uniform dipole leaf illumination is prescribed for 30 nm line pitch, the actual illumination is effectively a randomly fluctuating subset of this shape, with some angles getting excessive brightness. This results in random image shifts.
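This channel-level randomness can be illustrated with a small Monte Carlo sketch using the Figure 1 parameters; treating each of the 152 angle channels as an independent Poisson variable is a modeling assumption of this sketch, not a claim from the article:

```python
import numpy as np

# One stochastic realization of the Figure 1 illumination
# (independent Poisson counts per source point is an assumption
# of this illustration).
rng = np.random.default_rng(0)

EV_TO_J = 1.602e-19
photon_energy_J = 92 * EV_TO_J           # 92 eV EUV photon
dose_J_per_cm2 = 8.36e-3                 # absorbed dose from Figure 1
pixel_area_cm2 = (3e-7) ** 2             # 3 nm x 3 nm pixel, in cm^2
n_points = 152                           # source points in the leaf shapes

total_mean = dose_J_per_cm2 * pixel_area_cm2 / photon_energy_J  # ~51 photons
per_channel_mean = total_mean / n_points                        # ~0.34

# Photon count absorbed from each angle channel in one exposure.
counts = rng.poisson(per_channel_mean, size=n_points)

print(f"mean per channel: {per_channel_mean:.2f} photons")
print(f"channels with zero photons: {(counts == 0).sum()} of {n_points}")
print(f"max photons in one channel: {counts.max()}")
```

With a per-channel mean well below one photon, roughly 70% of the channels receive no photons at all in any single realization, which is exactly the randomly fluctuating subset behavior described above.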

Those familiar with using a lithography system may realize that the number of illuminator angle channels affects the degree of fluctuation at a given dose. Hence, reducing the number of channels, or reducing the pupil fill, should be able to alleviate this effect. Through Figures 2 and 3, we get an idea of this improvement.

Figure 2. Stochastic view of the double slot illumination shape targeted for 40 nm pitch on an 0.33 NA EUV system. 10.9 mJ/cm2 absorbed into a 4 nm x 4 nm area. The source is divided into 76 points, each corresponding to 1/11 of the NA, each channeling 0.143 mJ/cm2 to be absorbed. The plotted numbers are the number of 92 eV EUV photons channeled at the particular incident angle targeting the 4 nm x 4 nm pixel. Left: targeted pupil shape. Right: example of actual distribution of absorbed photons corresponding to each incident angle.

Figure 3. Stochastic view of the small dipole illumination shape targeted for 40 nm pitch on an 0.33 NA EUV system. 10.9 mJ/cm2 absorbed into a 4 nm x 4 nm area. The source is divided into 12 points, each corresponding to 1/11 of the NA, each channeling 0.906 mJ/cm2 to be absorbed. The plotted numbers are the number of 92 eV EUV photons channeled at the particular incident angle targeting the 4 nm x 4 nm pixel. Left: targeted pupil shape. Right: example of actual distribution of absorbed photons corresponding to each incident angle.

With the much lower (~3%) pupil fill shown in Figure 3, there is less obvious stochastic distortion of the illumination shape from the target prescription. However, such a low pupil fill means much of the light from the source is cut out by the EUV illuminator itself, forcing the stage to slow down to accumulate the correct dose per unit area. This limits the system throughput severely. While a higher pupil fill had the traditional advantage of imaging a less restricted variety of pitches and shapes, stochastic considerations once again force yet another tradeoff.
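The improvement from Figure 1 to Figure 3 can be quantified by the mean photon count per angle channel, since the relative Poisson fluctuation in each channel scales as one over the square root of the mean. A quick sketch using the parameters from the three figure captions (the comparison metric itself is an editorial illustration):

```python
import math

EV_TO_J = 1.602e-19
photon_energy_J = 92 * EV_TO_J           # 92 eV EUV photon

# (label, absorbed dose in J/cm^2, pixel side in cm, source points)
configs = [
    ("Fig. 1: leaf dipole, 30 nm pitch", 8.36e-3, 3e-7, 152),
    ("Fig. 2: double slot, 40 nm pitch", 10.9e-3, 4e-7, 76),
    ("Fig. 3: small dipole, 40 nm pitch", 10.9e-3, 4e-7, 12),
]

results = {}
for label, dose, side, n_points in configs:
    mean = dose * side ** 2 / photon_energy_J / n_points  # photons/channel
    rel = 1 / math.sqrt(mean)            # Poisson sigma over mean
    results[label] = (mean, rel)
    print(f"{label}: {mean:.2f} photons/channel, {rel:.0%} relative noise")
```

The per-channel mean rises from roughly 0.3 photons in Figure 1 to nearly 10 in Figure 3, shrinking the relative fluctuation from well over 100% to about 30%, which is why the low-pupil-fill shape in Figure 3 shows the least stochastic distortion.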


Consumer memory slowing more than AI gaining
by Robert Maire on 12-23-2024 at 10:00 am

Micron Idaho Fabs
  • Consumer memory slowing more than AI gaining causing weakness
  • HBM sold out for 2025- HBM is most of Capex- NAND near zero
  • Big miss on Q1 guide crushes stock on disappointment
  • Positive for Nvidia- Negative for Broadcom/Qualcomm
Micron – AI is wonderful & growing out of bounds while consumer sucks

Micron reported in-line results of $8.7B in revenues with $1.79 in EPS, which met expectations. However, guidance was poor at $7.9B ± $200M in revenues and EPS of $1.43 ± $0.10.

Compared to street expectations of $8.97B in revenues and $1.97 in EPS, this is a large disappointment, dropping the stock by 14% on the day.

Dichotomy between AI and everything else grows

AI is nothing short of fantastic: great margins, super growth, and a fantastic outlook, with expectations of quadrupling over the next several years. Micron is sold out for all of 2025 (much like Nvidia).

However, consumer-facing memory applications such as PCs and mobile phones are weak, and NAND is absolutely trashed, with severe cuts in capex and in the technology transitions that drive bit growth, in an attempt to work down the bloated inventories that trash pricing.

So, very simply put, it’s a race between the declining fortunes of consumer memory and the rising fortunes of AI memory, and unfortunately consumer memory is declining faster than AI is increasing, since AI memory (HBM) is still a relatively small percentage of the overall business. Thus, even though AI is growing faster in percentage terms, it remains the smaller share of the business.

HBM business doubled for Micron in the quarter, but that was still far from enough to offset the declines expected in other memory.

Server was up 46% sequentially, while mobile was down 19% and embedded down 10%.

Weak Auto and China add to woes

On top of general consumer weakness, there was weakness and rising inventories in auto-related sales.

China, which has been an ongoing issue, will worsen as Chinese competitors take more of the low end of the memory market in both NAND and DRAM for the domestic market, leaving Micron with a smaller overall share.

2025 capex will be $14B ± $500M.

Capex was $3B in the quarter with the “vast majority” of spend focused on the winner, HBM and what sounds like near zero dollars on the loser, NAND.

Micron is slowing NAND wafer starts and slowing the technology enhancements that add to bit growth, to try to rein in bloated inventories that crush prices.

Seasonality doesn’t help

Adding to the weakness is the normal seasonal slowness of Q1, the postpartum depression after the Christmas and holiday season of peak consumer business. Not only does the current holiday season not look so hot, but the slowness after the holidays will likely make matters worse.

The Stocks

Frankly, we think Micron’s stock deserves to get a bit trashed as expectations had grown way out of proportion with reality. AI fever had taken over even though we continue to point out that AI is not big enough to offset the weakness in the largest part of the business.

Maybe in a few years’ time if HBM becomes a significant portion it may help more but by that time it will also become more of a commodity.

We think that there is perhaps more of a lesson for collateral stocks.

We would think of a pair trade like going long Nvidia (which is on sale recently) while shorting Broadcom and Qualcomm.

Micron’s report points out that there is zero weakness in AI, only strong, sold-out growth, while consumer and auto-related demand is weaker and likely getting weaker, with bloated inventories in the near term.

Our view is that Micron being down 15% to 20% is not unreasonable as reality sets in.

In other collateral concerns, we see less of an impact on the semiconductor equipment stocks, as Micron’s capex is still strong at $14B, just shifted 180 degrees to focus solely on HBM at all costs.

The “tale of two cities” between “the best of times and the worst of times” gets bigger as consumer slows more. We are somewhat surprised that Micron management missed this somewhat obvious shift in sentiment that has been going on for a while.

Unlike in France this disregard of reality only winds up in the guillotine for the stock price.

About Semiconductor Advisors LLC

Semiconductor Advisors is an RIA (a Registered Investment Advisor),
specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies.
We have been covering the space longer and been involved with more transactions than any other financial professional in the space.
We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors.
We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

Also Read:

AMAT has OK Qtr but Mixed Outlook Means Weaker 2025 – China & Delays & CHIPS Act?

More Headwinds – CHIPS Act Chop? – Chip Equip Re-Shore? Orders Canceled & Fab Delay