
CEO Interview with Adi Gelvan of Speedata
by Daniel Nenni on 05-17-2026 at 12:00 pm


Adi Gelvan is a veteran tech executive and serial entrepreneur, currently serving as the CEO of Speedata, a semiconductor startup redefining analytics infrastructure with its purpose-built Analytics Processing Unit (APU). Known for his sharp operational instincts and deep technical insight, Adi joined Speedata in 2025 to lead its next stage of growth following a successful chip tape-out and expanding commercial interest in high-performance analytics workloads.

Before Speedata, Adi was the co-founder and CEO of Speedb, the high-performance key-value storage engine that served as a drop-in replacement for RocksDB. Under his leadership, Speedb tackled scalability bottlenecks in metadata-heavy workloads, culminating in its acquisition by Redis in 2024.

A believer in building from the ground up with both grit and vision, Adi has earned a reputation for turning deep tech into real-world impact.

Tell us about your company.

Speedata created the world’s first Analytics Processing Unit (APU), a purpose-built processor designed specifically to accelerate big data analytics and AI data processing workloads. The core insight behind the APU is that SQL has well-defined, predictable computational patterns: complex joins, aggregations, and transformations that general-purpose CPUs and GPUs are fundamentally ill-suited to handle efficiently.

Our processor executes those operations natively in silicon rather than in memory, with a software stack that plugs directly into existing analytics frameworks (starting with Apache Spark) without code changes. The result is up to 100x performance gains over CPUs and GPUs on these workloads. We’ve raised $150 million to date from VCs and strategic investors including Intel CEO Lip-Bu Tan and former Mellanox CEO Eyal Waldman.
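To make the workload concrete, here is a minimal PySpark sketch of the kind of job the APU targets; the table names, paths, and columns are hypothetical, and the point is that the application code would look the same whether or not an accelerator sits underneath it.

# Illustrative Spark SQL pattern: multi-table join plus aggregation.
# Dataset paths and column names are placeholders, not a real deployment.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("analytics-example").getOrCreate()

orders = spark.read.parquet("s3://example-bucket/orders")        # hypothetical table
customers = spark.read.parquet("s3://example-bucket/customers")  # hypothetical table

# Join, filter, and aggregate: the well-defined relational operators described above.
revenue_by_region = (
    orders.join(customers, "customer_id")
          .where(F.col("order_date") >= "2025-01-01")
          .groupBy("region")
          .agg(F.sum("order_total").alias("revenue"),
               F.countDistinct("customer_id").alias("active_customers"))
)

revenue_by_region.write.mode("overwrite").parquet("s3://example-bucket/revenue_by_region")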

What problems are you solving?

We’re solving the data layer bottleneck that bogs down analytics and AI pipelines, where general-purpose CPUs and even GPUs struggle with complex batch analytics/ETL jobs and AI data preparation. The impact of accelerated analytics on a purpose-built processor is measurable: in one enterprise pharmaceutical deployment, our APU reduced processing time from 90 hours to just 8 hours, more than an 11x improvement in speed. More on that below.

What application areas are your strongest?

Our strongest areas map to three distinct use cases. The first is traditional batch ETL, processing large volumes of structured data through complex joins, aggregations, and transformations at scale.

The second is AI data preprocessing: the structured data cleaning, normalization, and transformation work that feeds GPU training, fine-tuning, and RAG index construction. The APU significantly accelerates these preprocessing pipelines, making training cycles lighter and faster, so our customers can iterate more frequently, scale to larger datasets, and ultimately train higher-quality models.
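As a hedged sketch of that preprocessing stage, with purely hypothetical column and path names, the pattern is typically a Spark job that deduplicates, cleans, and normalizes structured records before they feed GPU training or a RAG index:

# Hypothetical Spark preprocessing step ahead of model training or RAG indexing.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("ai-data-prep").getOrCreate()

raw = spark.read.json("s3://example-bucket/raw_events")   # placeholder source

clean = (
    raw.dropDuplicates(["event_id"])
       .na.drop(subset=["user_id", "event_text"])
       .withColumn("event_text", F.lower(F.trim(F.col("event_text"))))
       .withColumn("event_ts", F.to_timestamp("event_time"))
)

# Columnar snapshot consumed downstream by training, fine-tuning, or embedding jobs.
clean.write.mode("overwrite").parquet("s3://example-bucket/training_ready")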

The third is agentic analytics, where AI agents generate SQL queries against structured databases and the APU executes them in silicon, delivering explainable, hallucination-free answers to analytical questions. That last use case is where we see the most interesting convergence of analytics acceleration and LLM deployments. According to Databricks’ 2026 State of AI Agents report, 80% of new databases on their platform are now created by AI agents. AI agents have given a huge boost to SQL computations. Our purpose-built silicon solution for analytics workloads is here to keep the flywheel going.
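A minimal sketch of that agentic loop, assuming a hypothetical generate_sql() helper standing in for the LLM call: the model proposes SQL, but the answer is computed by the query engine against the actual data, which is what keeps it explainable.

# Sketch of agentic analytics: an agent writes SQL, the engine executes it.
# generate_sql() is a hypothetical placeholder for an LLM call, not a real API.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("agentic-analytics").getOrCreate()
spark.read.parquet("s3://example-bucket/sales").createOrReplaceTempView("sales")

def generate_sql(question: str) -> str:
    # In practice an LLM agent would translate the question into SQL
    # against a known schema; this stub returns a fixed query.
    return "SELECT region, SUM(order_total) AS revenue FROM sales GROUP BY region"

question = "Which region generated the most revenue last quarter?"
answer = spark.sql(generate_sql(question))  # computed on the data, not guessed by the model
answer.show()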

What keeps your customers up at night?

A few things: exploding data volumes required for AI applications, painfully slow time to insight from legacy infrastructure, and high baseload compute costs for analytics jobs and data center capacity.

They also understand a fundamental limitation of enterprise AI: when you deploy a model, it only knows what it was trained on, which is typically public internet data or a fixed dataset. To make it valuable inside an organization, it needs access to company-specific data.

The data issue is more challenging than AI model training, where scaling laws gave us a clear improvement trajectory. To get high quality and accuracy, customers need to understand, clean, and structure their private data. Once that data is structured, it is much more accurate and efficient to query with SQL.

Data pipelines don’t have roadmaps or best practices, and most companies are still figuring out the most efficient approaches. This is a big reason why so many enterprise AI pilots ultimately fail.

Speedata is helping enterprises solve this issue. As mentioned earlier, in one deployment, processing time dropped from 90 hours to 8 hours. On the cost side, a global tech leader running AI data preprocessing on Apache Spark replaced 38 servers with just 3, an over 90% reduction in infrastructure. That kind of consolidation changes the economics entirely and lets enterprises scale analytics without sprawling server racks or massive energy bills.

Another challenge we’re seeing more of is the need for hyperscale performance without surrendering data jurisdiction. It’s part of why our first commercial cloud deployment is with Nebul, one of Europe’s leading sovereign AI cloud providers. Their customers can’t achieve that level of performance by moving to a US hyperscaler because data sovereignty is non-negotiable for them; Nebul’s integration of Speedata’s APU into its sovereign cloud infrastructure is a direct response to that market reality.

What does the competitive landscape look like and how do you differentiate?

Our main competition is general-purpose compute: CPUs from the major vendors, ARM-based processors, and, increasingly, GPUs being repurposed for data analytics workloads. The fundamental problem is architectural mismatch.

GPUs were designed for the massively parallel, unstructured floating-point operations that dominate AI workloads. CPUs are optimized for general-purpose serial computation with complex branch prediction and cache hierarchies.

Neither architecture is well suited to the structured, relational patterns of Apache Spark SQL: complex multi-table joins, aggregations across billions of rows, and iterative transformations. Our APU is a processor designed specifically around those patterns, executing Spark SQL natively in silicon.

The other critical differentiator is the absence of adoption friction. Existing Spark applications run on our APU without code changes or modifications to the framework, and integration into existing environments can happen step by step.
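The interview doesn’t spell out the integration mechanism, but as a purely illustrative sketch, Spark already exposes standard configuration hooks that accelerator back ends commonly use; the class names below are hypothetical placeholders, not Speedata’s actual API, and the point is simply that the application code stays the same while only the session configuration changes.

# Illustrative only: spark.plugins and spark.sql.extensions are standard Spark
# configuration keys; the com.example.apu.* class names are invented placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("existing-spark-app")
    .config("spark.plugins", "com.example.apu.AcceleratorPlugin")              # hypothetical
    .config("spark.sql.extensions", "com.example.apu.SparkSessionExtensions")  # hypothetical
    .getOrCreate()
)

# Unchanged application code: the same DataFrame calls as before.
df = spark.read.parquet("s3://example-bucket/orders")
df.groupBy("region").count().show()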

What new features/technology are you working on?

Our near-term development is focused on deeper optimization for agentic analytics workloads as enterprises increasingly need to run LLM queries against large, structured datasets. This is where we see the intersection of analytics acceleration and AI becoming most technically demanding and where purpose-built silicon has the clearest advantage over general-purpose compute. As LLM adoption scales and the volume and complexity of structured data queries grows, the performance requirements on the data layer will only intensify, and our roadmap is built around staying ahead of that curve.

Try Speedata’s Workload Analyzer to see how much faster your Spark workloads run on our APU: upload logs in the browser, run the CLI locally, or test against TPC-DS benchmarks.

CONTACT SPEEDATA

Also Read:

CEO Interview with Dr. Jekaterina Viktorova of Syenta

CEO Interview with Nagesh Gupta of llmda.ai

CEO Interview with Matt Crowley of Scintil Photonics
