Elad Raz is the founder and CEO of NextSilicon, which he established in 2018 to fundamentally rethink how HPC architectures are built. After more than two decades designing and scaling advanced software and compute systems, Elad saw firsthand the limits of fixed, inflexible processor designs. He founded NextSilicon to address those constraints with a software-defined approach to HPC, enabling architectures that can adapt to evolving workloads across HPC, AI, and data-intensive computing.
Tell us about your company?
NextSilicon is redefining the future of HPC and AI with Maverick-2, the industry’s first production dataflow accelerator powered by our Intelligent Compute Architecture (ICA). Maverick-2 combines intelligent software with adaptable hardware to deliver unprecedented energy efficiency, scalability, and precision across applications ranging from scientific discovery and generative AI to climate modeling and healthcare.
NextSilicon emerged from stealth in October 2024 to launch Maverick-2. Since then, Maverick-2 has been redefining the interaction between hardware and software to establish a compute paradigm rooted in adaptability and efficiency. With over $300 million in funding and more than 400 employees globally, we’ve already achieved significant milestones, including deployment at Sandia National Laboratories’ Vanguard-II supercomputer.
What problems are you solving?
There is a fundamental mismatch in how we are building compute today. Modern high-performance computing (HPC) and AI workloads are irregular, memory-intensive, and require massive amounts of data. Yet we are still forcing them onto architectures optimized decades ago for completely different problems. CPUs and GPUs simply weren’t built for what we’re asking them to do now.
Maverick-2 addresses the growing inefficiencies of CPUs and GPUs, which struggle to meet the architectural, energy, and scalability demands of AI and HPC workloads in fields such as science, weather, energy, and defense. These workloads involve complex data dependencies, irregular memory access, and computational patterns that today’s processors cannot handle well, resulting in suboptimal performance and bottlenecks that stifle innovation.
Beyond hardware challenges, there’s also a developer pain point that leads teams to spend months porting and rewriting code for new architectures. Maverick-2 eliminates this burden by supporting existing code in C/C++, FORTRAN, OpenMP, and Kokkos without requiring specialized programming or domain-specific languages. We achieve up to 10x performance improvements while using 60% less power – a 4x performance-per-watt advantage – all without forcing developers to rewrite their applications.
What application areas are you strongest in?
We’re seeing strong traction in workloads that struggle on traditional architectures: HPC simulations, scientific computing, and data-intensive enterprise and research applications. Specifically, our dataflow architecture excels in:
○ Energy: Fluid dynamics, exploration and seismic analysis, and particle simulation
○ Life Sciences: Drug discovery, protein docking, and multiomics analysis
○ Financial Services: Risk simulation, backtesting, and options modeling
○ Manufacturing: Finite element analysis, crash simulation, and digital twins
○ AI/ML: Thinking models, reasoning workloads, and trillion-parameter LLMs with massive context windows approaching 1M tokens
○ Graph Analytics: Social network analysis, fraud and anomaly detection, and recommendation systems
What’s important is that Maverick-2’s programmability allows a single platform to span multiple domains. As AI and HPC continue to converge, that flexibility becomes critical.
What keeps your customers up at night?
Our customers face enormous pressure to achieve greater performance within strict limits on power, cooling, and budget. They worry about locking themselves into architectures optimized for today’s models that can’t adapt to the next challenge. Efficiency, scalability, and longevity are critical.
Developer productivity is another major concern. Many customers have CPU workloads that were never GPU-accelerated. NextSilicon’s Intelligent Compute Architecture can accelerate these codes without forcing them into a GPU-centric paradigm. For teams already using GPUs, the goal is to explore new architectures without extensive code rewrites or learning proprietary languages and frameworks. They want to focus on scientific discovery and innovation, not porting cycles.
Customers also value future-proofing. As algorithms and models evolve at an unprecedented pace, choosing the right architecture has become genuinely difficult. Our software-defined hardware adapts dynamically during execution and evolves with workloads over time. This protects long-term investments while delivering immediate performance and efficiency gains. Customers can use existing codebases in familiar languages and still achieve significant improvements, addressing both technical and resource constraints.
What does the competitive landscape look like and how do you differentiate?
The landscape is crowded, but it’s constrained by legacy thinking. CPUs continue to add more cores, more instructions, and more cost. GPUs spread themselves across every use case, while many accelerators optimize for very narrow ones.
At NextSilicon, we’re not trying to incrementally improve an old paradigm. Dataflow is a fundamentally different execution model. Our strengths lie in our true runtime reconfigurability, general-purpose programmability, and system-level efficiency.
The key technical differentiator is our dataflow architecture, which allows hardware to adapt to software in real time rather than forcing software to conform to rigid hardware designs. This, combined with our strong financial backing and scale, positions us uniquely in the market.
Maverick-2 isn’t about replacing everything overnight. It’s about offering a compelling complement to traditional architectures that are showing diminishing returns.
What new features / technology are you working on?
We’re continuously enhancing Maverick-2’s hardware and software stack. Right now, we are focused on expanding our software ecosystem to support additional programming models and frameworks, making integration even easier. We’re also optimizing performance and efficiency for emerging workload patterns as HPC and AI workflows converge. The roadmap is driven by feedback and by working closely with our early customers and partners. We want to ensure it aligns with real-world requirements and problems, not theoretical ones.
How do customers normally engage with your company?
We work with customers both directly and through our partner ecosystem, which includes Dell Technologies, Penguin Solutions, HPE, Databank, Vibrint, and E4. These partnerships enable us to provide integrated solutions and support to organizations seeking to accelerate their HPC and AI workloads. For specific engagement options and to discuss how we can address computational challenges, we encourage interested organizations to reach out to our team.