Naveen Verma, Ph.D., is the CEO and Co-founder of EnCharge AI, the only company to have developed robust and scalable analog in-memory computing technology essential for advanced AI deployments, from edge to cloud. Dr. Verma co-founded EnCharge AI in 2022, building on six years of research and five generations of prototypes while serving as a Professor of Electrical and Computer Engineering at Princeton University since 2009. He also directs Princeton’s Keller Center for Innovation in Engineering Education and holds associated faculty positions in the Andlinger Center for Energy and Environment and the Princeton Materials Institute. Dr. Verma earned his B.A.Sc. in Electrical and Computer Engineering from the University of British Columbia and his M.S. and Ph.D. in Electrical Engineering from MIT.
Tell us about your company.
EnCharge AI is the leader in advanced AI inference solutions that fundamentally change how and where AI computation happens. The company was spun out of research conducted at Princeton University in 2022 to commercialize breakthrough analog in-memory computing technology built on nearly a decade of R&D across multiple generations of silicon. We’ve raised over $144 million from leading investors, including Tiger Global, Samsung Ventures, RTX Ventures, and In-Q-Tel, as well as $18.6 million in DARPA funding. Our technology delivers orders-of-magnitude higher compute efficiency and density for AI inference compared to today’s solutions, enabling the deployment of advanced, personalized, and secure AI applications from edge to cloud, including in use cases that are power-, size-, or weight-constrained.
What problems are you solving?
Fundamentally, current computing architectures cannot support the needs of rapidly developing AI models. As a result, we are experiencing an unsustainable energy-consumption and cost crisis in AI computing that threatens to limit AI’s potential impact across industries. Data center electricity consumption is projected to double by 2026, reaching a total roughly equivalent to Japan’s entire electricity consumption. The centralization of AI inference in massive cloud data centers creates cost, latency, and privacy barriers, while AI-driven GPU demand threatens supply chain stability. Addressing these problems began at Princeton University with research aimed at fundamentally rethinking computing architectures to deliver step-change improvements in energy efficiency. The result is a scalable, programmable, and precise analog in-memory computing architecture that delivers 20x higher energy efficiency than traditional digital architectures. These efficiency gains enable sophisticated AI to run locally on devices using roughly the power of a light bulb, rather than requiring massive data center infrastructure.
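To put that 20x figure in perspective, here is a rough back-of-envelope sketch in Python. The per-inference operation count is an assumed, illustrative number (not one from EnCharge AI), and the digital baseline is the 5-10 TOPS/W range cited later in this interview.

```python
# Back-of-envelope energy comparison for the 20x efficiency claim.
# The 5 trillion operations per inference is an assumed, illustrative workload.

OPS_PER_INFERENCE = 5e12                      # assumed operations per inference
DIGITAL_TOPS_PER_W = 7.5                      # midpoint of the 5-10 TOPS/W digital range
ANALOG_TOPS_PER_W = DIGITAL_TOPS_PER_W * 20   # the claimed 20x improvement

def joules_per_inference(tops_per_watt: float) -> float:
    # 1 TOPS/W is 1e12 operations per second per watt, i.e. 1e12 ops per joule.
    return OPS_PER_INFERENCE / (tops_per_watt * 1e12)

digital_j = joules_per_inference(DIGITAL_TOPS_PER_W)
analog_j = joules_per_inference(ANALOG_TOPS_PER_W)
print(f"digital baseline: {digital_j:.3f} J per inference")
print(f"analog in-memory: {analog_j:.4f} J per inference "
      f"({digital_j / analog_j:.0f}x less energy)")
```

At a fixed inference rate, that per-inference energy gap translates directly into the 20x difference in sustained power draw, which is what makes the "light bulb instead of a data center" framing possible.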
What application areas are you strongest in?
Our strongest application areas leverage our core advantages in power- and space-constrained environments, with AI inference for client devices such as laptops, workstations, and phones as our primary focus. We enable sophisticated AI capabilities without compromising battery life, delivering over 200 TOPS in just 8.25W of power. Edge computing represents another key strength: our technology brings advanced AI to industrial automation, automotive systems, and IoT devices where cloud connectivity is limited or low latency is critical.
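Those two numbers imply a concrete chip-level efficiency and battery budget. The quick sanity check below uses only the figures quoted above plus an assumed 60 Wh laptop battery; note this is a full-chip figure, which is naturally lower than compute-array-level efficiency numbers.

```python
# Chip-level arithmetic from the figures quoted above: 200 TOPS at 8.25 W.
# The 60 Wh battery capacity is an assumed, typical laptop figure,
# not an EnCharge AI specification.

TOPS = 200
POWER_W = 8.25
BATTERY_WH = 60   # assumed battery capacity

print(f"full-chip efficiency: {TOPS / POWER_W:.1f} TOPS/W")
print(f"continuous full-load inference on battery alone: "
      f"{BATTERY_WH / POWER_W:.1f} hours")
```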
What keeps your customers up at night?
Our customers face mounting pressure to meet ambitious roadmaps for integrating advanced AI capabilities into new products while navigating severe technical and economic constraints that threaten their competitive positioning. Our OEM customers in the rapidly growing AI PC market struggle to meet emerging on-device AI requirements within laptop power and thermal budgets while maintaining battery life and competitive pricing. Meanwhile, our independent software vendor customers grapple with cloud dependency costs, latency issues, and privacy concerns that prevent local, personalized AI deployment, and enterprise IT teams face skyrocketing infrastructure costs and the security risks of transmitting data to the cloud.
What does the competitive landscape look like and how do you differentiate?
While much of the attention in the AI chip space has centered on the data center, we are focused on AI PCs and edge devices, where our chip architecture delivers uniquely transformative benefits. That said, our technology is competitive even against the most established incumbents. Against digital chip leaders, our analog in-memory computing delivers 20x higher energy efficiency (200 vs. 5-10 TOPS/W) and 10x higher compute density, while our switched-capacitor approach overcomes the noise and reliability issues that plagued previous analog attempts. In fact, our newly launched EN100 chip is the first commercially available analog in-memory AI accelerator.
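To illustrate the general idea of in-memory computing (though not EnCharge AI's actual switched-capacitor circuits, which are not detailed in this interview), here is a conceptual NumPy sketch of an analog matrix-vector multiply, including the noise and quantization effects that earlier analog designs struggled with:

```python
# Conceptual sketch of in-memory matrix-vector multiplication.
# An illustration of the general analog in-memory computing idea only;
# all bit widths and noise levels are assumed for demonstration.
import numpy as np

rng = np.random.default_rng(0)

# Weights live in the memory array itself (one column per output neuron).
weights = rng.integers(-8, 8, size=(256, 64)).astype(float)   # assumed 4-bit weights
inputs = rng.integers(0, 16, size=256).astype(float)          # assumed 4-bit activations

# In a charge-domain design, each cell contributes charge proportional to
# weight x input, and summing a column in place replaces 256 digital
# multiply-adds plus the memory traffic they would otherwise require.
ideal = inputs @ weights

# Analog accumulation is not exact: model additive noise and an ADC that
# digitizes the accumulated charge at finite resolution. Taming these two
# error sources is the core challenge for any analog compute scheme.
noise = rng.normal(scale=0.5, size=ideal.shape)   # assumed noise level
adc_step = (ideal.max() - ideal.min()) / (2**8)   # assumed 8-bit readout
readout = np.round((ideal + noise) / adc_step) * adc_step

print("max relative error:", np.max(np.abs(readout - ideal)) / np.abs(ideal).max())
```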
What new features/technology are you working on?
We’re actively commercializing our EN100 product family and have just announced the launch of the EN100 chip, which delivers over 200 TOPS for edge devices. The chip, available in an M.2 form factor for laptops and PCIe for workstations, features up to 128 GB of high-density memory and 272 GB/s of memory bandwidth, with comprehensive software support across frameworks such as PyTorch and TensorFlow. Our development roadmap focuses on migrating to advanced semiconductor nodes for even greater efficiency, while expanding our product portfolio from edge devices to data centers, with performance tailored to each market. We’re simultaneously enhancing our software ecosystem through improved compiler optimization, expanded development tools, and a growing model zoo designed to maximize efficiency across the evolving AI landscape. Together, these enable new categories of always-on AI applications and multimodal experiences that were previously impossible due to power constraints.
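As an illustration of what framework-level support for an accelerator like this typically looks like, the sketch below uses only standard PyTorch export APIs; the commented-out "encharge_sdk" step is purely hypothetical, since the actual SDK entry points are not described in this interview.

```python
# Hedged sketch of targeting an edge AI accelerator from PyTorch.
# The export uses real PyTorch APIs; the vendor compile/run step at the
# bottom is hypothetical and shown only to indicate the typical workflow.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 1024), nn.ReLU(), nn.Linear(1024, 10)).eval()
example = torch.randn(1, 512)

# Standard framework-level export: a portable ONNX graph that a vendor
# compiler can ingest and map onto its compute fabric.
torch.onnx.export(model, example, "model.onnx",
                  input_names=["x"], output_names=["y"])

# Hypothetical vendor step (illustrative only): compile the graph for the
# accelerator and run inference locally instead of calling a cloud endpoint.
# compiled = encharge_sdk.compile("model.onnx", target="en100-m2")
# y = compiled.run(x=example.numpy())
```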
How do customers normally engage with your company?
Customers typically first engage with us through our structured Early Access Program, which gives developers and OEMs a chance to gain a competitive advantage by being among the first to leverage EN100 capabilities. Due to strong demand, we will soon open a second round of the program. Beyond the Early Access Program, we engage through custom solution development, working closely with customers’ teams to map out transformative AI experiences tailored to their requirements. This is supported by our full-stack approach, which combines specialized hardware with optimized software tools and extensive development resources for seamless integration with existing AI applications and frameworks. Finally, we maintain direct strategic partnerships with major original equipment manufacturers (OEMs) and our semiconductor partners for integration and go-to-market collaboration.