Key Takeaways
- Numem specializes in advanced memory technology, particularly for AI, Edge Devices, and Data Centers.
- NuRAM SmartMem™ combines the benefits of SRAM and DRAM, offering low power consumption, high density, and non-volatility.
- The technology addresses the limitations of current memory solutions, providing over 3x the bandwidth of HBM and 200x lower standby power than SRAM.
Koji Motomori is a seasoned business leader and technologist with 30+ years of experience in semiconductors, AI, embedded systems, data centers, mobile, and memory solutions, backed by an engineering background. Over 26 years at Intel, he drove strategic growth initiatives, securing $2B+ in contracts with OEMs and partners. His expertise spans product marketing, GTM strategy, business development, deal-making, and ecosystem enablement, accelerating the adoption of CPU, memory, SSD, and interconnect technologies.
Tell us about your company.
At Numem, we’re all about taking memory technology to the next level, especially for AI, Edge Devices, and Data Centers. Our NuRAM SmartMem™ is a high-performance, ultra-low-power memory solution built on MRAM technology. So, what makes it special? It brings together the best of different memory types—SRAM-like read speeds, DRAM-like write performance, non-volatility, and ultra-low power.
With AI and advanced computing evolving fast, the demand for efficient, high-density memory is skyrocketing. That’s where we come in. Our solutions help cut energy consumption while delivering the speed and reliability needed for AI training, inference, and mission-critical applications. Simply put, we’re making memory smarter, faster, and more power-efficient to power the future of computing.
What problems are you solving?
That’s a great question. The memory industry is really struggling to keep up with the growing demands of AI and high-performance computing. Right now, we need memory that’s not just fast, but also power-efficient and high-capacity. The problem is, existing technologies all have major limitations.
Take SRAM, for example: it’s fast, but it has high leakage power and doesn’t scale well at advanced nodes. HBM DRAM is another option, but it’s higher cost, power-hungry, and still not fast enough to fully meet AI’s needs. And then there’s DDR DRAM, which has low bandwidth, making it a bottleneck for high-performance AI workloads.
That’s exactly why we developed NuRAM SmartMem™. It combines the best of different memory types:
- It gives you SRAM-like read speeds and DRAM-like write speeds, so AI workloads run smoothly.
- It has 200x lower standby power than SRAM, which is huge for energy efficiency.
- It’s 2.5x denser than SRAM, helping reduce cost and die size.
- It delivers over 3x the bandwidth of HBM, eliminating AI bottlenecks.
- And it’s non-volatile, meaning it retains data even when the power is off.
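The bullet points above are all relative multipliers rather than absolute figures. As a rough illustration only — the baseline numbers below are hypothetical placeholders I've chosen for the sketch, not Numem data; only the ratios (200x, 2.5x, 3x) come from the interview — a back-of-envelope comparison might look like:

```python
# Back-of-envelope comparison using only the relative multipliers quoted
# above. The SRAM/HBM baseline figures are hypothetical placeholders,
# NOT vendor data; only the ratios come from the interview.

SRAM_STANDBY_MW = 100.0         # hypothetical SRAM standby power, mW
SRAM_DENSITY_MB_PER_MM2 = 1.0   # hypothetical SRAM density, Mb/mm^2
HBM_BANDWIDTH_GBPS = 1000.0     # hypothetical HBM bandwidth, GB/s

STANDBY_IMPROVEMENT = 200       # "200x lower standby power than SRAM"
DENSITY_IMPROVEMENT = 2.5       # "2.5x denser than SRAM"
BANDWIDTH_IMPROVEMENT = 3       # "over 3x the bandwidth of HBM"

nuram_standby_mw = SRAM_STANDBY_MW / STANDBY_IMPROVEMENT
nuram_density = SRAM_DENSITY_MB_PER_MM2 * DENSITY_IMPROVEMENT
nuram_bandwidth_gbps = HBM_BANDWIDTH_GBPS * BANDWIDTH_IMPROVEMENT

print(f"Standby power: {nuram_standby_mw:.1f} mW vs {SRAM_STANDBY_MW:.1f} mW (SRAM)")
print(f"Density: {nuram_density:.1f} vs {SRAM_DENSITY_MB_PER_MM2:.1f} Mb/mm^2 (SRAM)")
print(f"Bandwidth: {nuram_bandwidth_gbps:.0f} vs {HBM_BANDWIDTH_GBPS:.0f} GB/s (HBM)")
```

The point of the sketch is simply that the ratios compound: lower standby power, higher density, and higher bandwidth each attack a different bottleneck named in the question above.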
So, with NuRAM SmartMem™, we’re not just making memory faster, we’re making it more efficient and scalable for AI, Edge, and Data Center applications. It’s really a game-changer for the industry.
What application areas are your strongest?
That’s another great question. Our memory technology is designed to bring big improvements across a wide range of applications, but we’re especially strong in a few key areas.
For data centers, we help make AI model training and inference more efficient while cutting power consumption. Since our technology reduces the need for SRAM and DRAM, companies see significant Total Cost of Ownership (TCO) benefits. Plus, the non-volatility of our memory enables instant-on capabilities, meaning servers can reboot much faster.
In automotive, especially for EVs, real-time decision-making is critical. Our low-power memory helps extend battery life, and by consolidating multiple memory types like NOR Flash and LPDDR, we save space, power, cost, and weight—while also improving reliability.
For Edge AI devices and IoT applications, power efficiency is a huge concern. Our ultra-low-power memory helps reduce energy consumption, making these devices more sustainable and efficient.
Aerospace is another area where we stand out. Mission-critical applications demand reliability, energy efficiency, and radiation immunity—all of which our memory provides.
Then there are security cameras—with ultra-low power consumption and high bandwidth, our memory helps extend battery life while supporting high-resolution data transmission. And since we can replace memory types like NOR Flash and LPDDR, we also optimize space, power, and cost.
For wearable devices, battery life is everything. Our technology reduces power consumption, enabling lighter, more compact designs that last longer—something consumers really appreciate.
And finally, in PCs and smartphones, AI-driven features need better memory performance. Our non-volatile memory allows for instant-on capabilities, extends battery life, and replaces traditional memory types like boot NOR Flash and DDR, leading to power and space savings, plus faster boot times and overall better performance.
So overall, our memory technology delivers real advantages across multiple industries.
What keeps your customers up at night?
A lot of things. AI workloads are becoming more demanding, and our customers are constantly looking for ways to stay ahead.
One big concern is power efficiency and thermal management. AI systems push power budgets to the limit, and with rising energy costs, total cost of ownership (TCO) becomes a huge factor. Keeping power consumption low is critical, not just for efficiency, but for performance and profitability.
Then there’s the issue of memory bandwidth bottlenecks. Traditional memory architectures simply can’t keep up with the growing performance demands of AI, which creates bottlenecks and limits system scalability.
Scalability and cost are also major worries. AI applications need more memory, but scaling up can drive costs up fast. Our customers want solutions that provide higher capacity without blowing the budget.
And finally, reliability and data retention are key, especially for AI and data-heavy applications. These workloads require memory that’s not just fast, but also non-volatile, secure, and long-lasting while still keeping power consumption low.
That’s exactly where NuRAM SmartMem™ comes in. Our technology delivers ultra-low power, high-density, and high-bandwidth memory solutions that help customers overcome these challenges and future-proof their AI-driven applications.
What does the competitive landscape look like, and how do you differentiate?
The high-performance memory market is dominated by SRAM, LPDDR DRAM, and HBM. Each of these technologies has strengths, but they also come with some major challenges.
SRAM, for example, is fast, but it has high standby power and scalability limitations at advanced nodes. LPDDR DRAM is designed to be lower power than standard DRAM, but it still consumes a lot of energy. And HBM DRAM delivers high bandwidth, but it comes with high cost, power constraints, and integration complexity.
That’s where NuRAM SmartMem™ stands out. We’ve built a memory solution that outperforms these technologies in key areas:
- 200x lower standby power than SRAM, making it perfect for always-on AI applications that need ultra-low power.
- 5x higher density than SRAM, reducing die size and overall memory costs.
- Non-volatility: unlike SRAM and DRAM, NuRAM retains data even without power, adding both energy efficiency and reliability.
- Over 3x the bandwidth of HBM3E, solving AI’s growing memory bandwidth challenges.
- Over 260x lower standby power than HBM3E, thanks to non-volatility and flexible per-block power management.
- Scalability & Customization—NuRAM SmartMem™ is available as both IP cores and chiplets, making integration seamless for AI, IoT, and Data Center applications.
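To make the "always-on" point above concrete: under a simple duty-cycled power model, standby power dominates battery life for devices that are mostly idle. The sketch below is illustrative only — the battery capacity, active power, standby baseline, and duty cycle are hypothetical assumptions; only the 200x standby-power ratio is quoted above.

```python
# Rough illustration of what a 200x standby-power reduction can mean for
# an always-on edge device. All absolute numbers are hypothetical
# assumptions for this sketch; only the 200x ratio is quoted above.

BATTERY_MWH = 1000.0        # hypothetical battery capacity, mWh
ACTIVE_POWER_MW = 50.0      # hypothetical active power, mW (same for both)
SRAM_STANDBY_MW = 20.0      # hypothetical SRAM standby power, mW
NURAM_STANDBY_MW = SRAM_STANDBY_MW / 200  # quoted 200x reduction
DUTY_CYCLE = 0.01           # device active 1% of the time (assumption)

def battery_life_hours(standby_mw: float) -> float:
    """Battery life under a duty-cycled average-power model."""
    avg_mw = DUTY_CYCLE * ACTIVE_POWER_MW + (1 - DUTY_CYCLE) * standby_mw
    return BATTERY_MWH / avg_mw

sram_hours = battery_life_hours(SRAM_STANDBY_MW)
nuram_hours = battery_life_hours(NURAM_STANDBY_MW)
print(f"SRAM-based standby budget:  {sram_hours:.0f} h")
print(f"NuRAM-based standby budget: {nuram_hours:.0f} h "
      f"({nuram_hours / sram_hours:.1f}x longer)")
```

With these placeholder numbers, the idle-dominated device's life is limited almost entirely by leakage, which is why a large standby-power reduction translates into a correspondingly large battery-life gain.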
So, what really differentiates us? We’re offering a next-generation memory solution that maximizes performance while dramatically reducing power and cost. It’s a game-changer compared to traditional memory options.
What new features/technology are you working on?
We’re constantly pushing the boundaries of AI memory innovation, focusing on performance, power efficiency, and scalability. A few exciting things we’re working on right now include:
- Smart Memory Subsystems – We’re making memory smarter. Our self-optimizing memory technology is designed to adapt and accelerate AI workloads more efficiently.
- 2nd-Gen NuRAM SmartMem™ Chiplets – We’re taking things to the next level with even higher bandwidth, faster read/write speeds, lower power consumption, and greater scalability than our first generation.
- AI Optimized Solutions – We’re fine-tuning our memory for LLM inference, AI Edge devices, and ultra-low-power AI chips, ensuring they get the best performance possible.
- High-Capacity & Scalable Operation – As AI models keep growing, memory needs to scale with them. We’re expanding die capacity and improving stacking while working closely with foundries to boost manufacturability and yield for high-volume production.
- Memory Security & Reliability Enhancements – AI applications rely on secure, stable memory. We’re enhancing data integrity, security, and protection against corruption and cyber threats to ensure reliable AI operations.
For the future, we’re on track to deliver our first-generation chiplet samples in Q4 2025 and second-generation chiplet samples in Q2 2026. With these advancements, we’re setting a new benchmark for efficiency, performance, and power optimization in AI memory.
How do customers normally engage with your company?
We work closely with a wide range of customers, including AI chip makers, MCU/ASIC designers, SoC vendors, Data Centers, and Edge computing companies. Our goal is to integrate our advanced memory solutions into their systems in the most effective way possible.
There are several ways customers typically engage with us:
- NuRAM + SmartMem™ IP Licensing – Some customers embed our NuRAM SmartMem™ technology directly into their ASICs, MCUs, MPUs, and SoCs, boosting performance and efficiency for next-gen AI and computing applications.
- SmartMem™ IP Licensing – Others use our SmartMem™ technology on top of their existing memory architectures, whether Flash, RRAM, PCRAM, traditional MRAM, or DRAM, to improve memory performance and power efficiency.
- Chiplet Partnerships – For customers looking for a plug-and-play solution, we offer SmartMem™ chiplets that deliver high bandwidth and ultra-low power, specifically designed for server and Edge AI accelerators while seamlessly aligning with industry-standard memory interfaces.
- Custom Memory Solutions – We also work with customers to customize memory architectures to their specific AI and Edge workloads, ensuring optimal performance and power efficiency.
- Collaborations & Joint Development – We actively partner with industry leaders to co-develop next-generation memory solutions, maximizing AI processing efficiency and scalability.
At the end of the day, working with Numem gives customers access to ultra-low-power, high-performance, and scalable memory solutions that help them meet AI’s growing demands while significantly reducing energy consumption and cost.