
CEO Interview with Dr. Peng Zou of PowerLattice

by Daniel Nenni on 11-23-2025 at 8:00 am

Key Takeaways

  • PowerLattice has developed the first power delivery chiplet that improves performance, efficiency, and reliability by bringing power directly into the processor package.
  • The technology addresses the “AI power wall” by reducing compute power needs by over 50%, effectively doubling performance and minimizing energy waste.
  • The initial focus is on AI chipmakers, who face significant challenges with power density and efficiency, as the technology directly tackles these constraints.

Dr. Peng Zou, PowerLattice

Dr. Zou is one of the industry’s leading experts in power delivery for high-performance processors. Before founding PowerLattice, he held technical leadership roles at Qualcomm/NUVIA, Huawei, and Intel, where he led multidisciplinary teams advancing integrated voltage regulator technologies across magnetic materials, circuit design, and system architecture. Recognizing that the “power wall” had become a major limiting factor for AI performance, Dr. Zou set out to drive a step change in power delivery and founded PowerLattice. He holds 15 U.S. patents, with additional patents pending.

Tell us about your company.
PowerLattice is reimagining how power is delivered in the world’s most demanding compute systems. We’ve developed the first power delivery chiplet that brings power directly into the processor package—improving performance, efficiency, and reliability. The result is a fundamental shift in how high-performance chips get powered, paving the way for the next generation of AI and advanced computing. We have silicon in hand and are now sampling to customers, so we decided it was time to emerge from stealth.

What problems are you solving?
AI accelerators and GPUs are pushing past 2 kW per chip, straining data centers that already consume as much energy as mid-size cities. Conventional power delivery forces very high electrical current to travel long, resistive paths before reaching the processor, wasting energy and limiting performance. The inefficiency and heat losses from moving power across a motherboard are now a hard limit—the “AI power wall.”
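The arithmetic behind this bottleneck is Ohmic: conduction loss scales as I²R, so a long board path carrying kiloamp-class currents burns significant power before it ever reaches the die. The sketch below uses purely illustrative resistances and voltages (assumptions for intuition, not PowerLattice measurements):

```python
# Illustrative I^2*R sketch of why long, resistive delivery paths
# waste power. All numbers are hypothetical assumptions, chosen only
# to show the scaling -- they are not PowerLattice figures.

def conduction_loss(power_w, voltage_v, resistance_ohm):
    """Loss for delivering power_w at voltage_v through a path
    with the given series resistance: (P/V)^2 * R."""
    current_a = power_w / voltage_v
    return current_a ** 2 * resistance_ohm

# A hypothetical 1 kW die at ~0.8 V core voltage draws 1250 A.
long_path_loss = conduction_loss(1000, 0.8, 0.0002)     # assumed 200 uOhm board path
short_path_loss = conduction_loss(1000, 0.8, 0.00002)   # assumed 20 uOhm in-package path

print(f"long board path loss: {long_path_loss:.1f} W")
print(f"short in-package path loss: {short_path_loss:.1f} W")
```

With these assumed values, shortening the path cuts resistance (and therefore loss) by 10x at the same current, which is the intuition behind moving delivery closer to the die.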

PowerLattice eliminates that barrier by moving power delivery directly into the processor package, right next to the compute die. We have also developed circuit innovations and technologies that deliver ultra-fast response times for precise voltage regulation, a capability that is crucial for processor performance. This approach reduces total compute power needs by more than 50%, effectively doubling performance.

What application areas are your strongest?
All segments from hyperscale data centers to AI chipmakers and edge compute stand to benefit from our technology. But our initial focus is on AI since it’s the AI chipmakers who are hitting the power wall the most. Their chips are pushing the limits of power density and efficiency, and our solution directly tackles those constraints—delivering higher performance per watt and unlocking scalability for the next wave of AI systems.

What keeps your customers up at night?
For most of our customers, the biggest challenge isn’t compute, it’s power. They’re reaching the physical limits of how much energy they can deliver to a chip and are having to design around that. Reliability is another major concern; as AI models scale, even micro-instabilities in power delivery can ripple across entire systems.

To address this, we’ve built a voltage-stabilizing layer directly into our chiplet design. When GPUs are pushed to their limits, voltage fluctuations can shorten their lifespan and compromise reliability. Our technology keeps voltage steady at the source, extending the usable life of GPUs and ensuring consistent performance under extreme workloads.

What does the competitive landscape look like and how do you differentiate?
Our biggest competitors are those providing legacy solutions, and this is exactly why we see such a big opportunity to disrupt the market. Traditional power delivery solutions were never designed for this era of compute. They use large, discrete voltage regulator modules (VRMs) that sit on the motherboard and regulate power externally. The result is wasted energy and voltage fluctuations.

Our approach brings voltage regulation directly onto the wafer, integrating inductors and passives at the silicon level. It’s a fundamentally different approach. By bringing power directly into the processor package, we can reduce compute power needs by more than 50%.
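A general principle of integrated voltage regulation helps explain why regulating next to the die saves so much: if the final step-down happens in the package, the board can carry the same power at a much higher voltage and therefore much lower current, and I²R loss falls with the square of that current. The voltages below are illustrative assumptions, not a description of PowerLattice's specific architecture:

```python
# General voltage-regulation principle, with illustrative numbers:
# same power, same board-path resistance, different delivery voltage.
# These values are assumptions for intuition, not PowerLattice specs.

def i2r_loss(power_w, delivery_voltage_v, path_resistance_ohm):
    """Conduction loss across a delivery path: (P/V)^2 * R."""
    current_a = power_w / delivery_voltage_v
    return current_a ** 2 * path_resistance_ohm

PATH_R = 0.0002  # assumed 200 uOhm board path, identical in both cases

# External VRM: the board carries current at the ~0.8 V core voltage.
loss_external_vrm = i2r_loss(1000, 0.8, PATH_R)

# In-package regulation: the board carries current at 12 V, and the
# step-down to core voltage happens right next to the compute die.
loss_in_package = i2r_loss(1000, 12.0, PATH_R)

print(f"external VRM board loss: {loss_external_vrm:.2f} W")
print(f"in-package regulation board loss: {loss_in_package:.2f} W")
```

Raising the delivery voltage from 0.8 V to 12 V cuts board current 15x, so the board-path loss drops by a factor of 225 in this toy model.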

Our chiplet-based approach integrates easily into existing SoC designs and is also very configurable, so we’re seeing a lot of strong interest from customers.

What new features or technology are you working on?
Right now we’re focusing on design wins with major customers and scaling through key manufacturing milestones. We’ve proven the silicon — now it’s about ramping and driving adoption. Ultimately, our goal is to make power delivery as programmable and scalable as compute itself.

How do customers normally engage with your company?
We work closely with semiconductor vendors, hyperscalers, and system integrators. Our model is highly collaborative because the integration of power and compute is no longer optional.

