
Xianxin is the CEO and Co-Founder of Lumai, an Oxford University spin-out pioneering disruptive optical computing technologies for AI and data center acceleration. He brings over 15 years of experience in physics and engineering, and was previously a Royal Commission for the Exhibition of 1851 (RCE 1851) Research Fellow, a prestigious fellowship whose past awardees include eight Nobel Laureates.
Tell us about your company.
Lumai is an optical compute company changing how AI compute is delivered at scale. Traditional silicon-only approaches are hitting fundamental limits in power efficiency, cost, and scalability. Our team brings together expertise across optics and AI systems with a shared belief: that new AI infrastructure requires a new approach, and that approach will use optical compute.
The technology is based on many years of research at the University of Oxford. At Lumai, we are building optical computing technology designed specifically for AI workloads. By leveraging light instead of electrons for key computational operations, we dramatically improve performance-per-watt and unlock a more sustainable path forward for large-scale AI deployment.
What problems are you solving?
AI has hit a power wall. Due to the limits of silicon scaling, it is increasingly difficult to deliver a step-change in token generation within the fixed power envelope of a data center: a 1 GW data center is limited to 1 GW, no matter how much compute demand grows. Yet the goal of AI companies is to generate the maximum number of tokens – more tokens mean more intelligence and more revenue.
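To make the arithmetic behind the power wall concrete (the numbers below are purely illustrative assumptions, not Lumai figures): once the power budget is fixed, throughput in tokens per second is just energy efficiency in tokens per joule multiplied by power in watts, so efficiency becomes the only lever left.

```python
# Back-of-the-envelope power-wall arithmetic.
# All numbers are illustrative assumptions, not measured figures.
power_budget_w = 1e9          # a 1 GW data center is limited to 1 GW
tokens_per_joule = 0.5        # assumed fleet-wide inference efficiency

# tokens/s = (tokens/J) * (J/s); with power fixed, only efficiency moves.
throughput = tokens_per_joule * power_budget_w
print(f"{throughput:.2e} tokens/s")                             # 5.00e+08

# Doubling tokens per joule doubles output without adding a single watt.
print(f"{2 * tokens_per_joule * power_budget_w:.2e} tokens/s")  # 1.00e+09
```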
The core issue we address is compute inefficiency: data centers need to generate more tokens per watt. Lumai tackles this through a hybrid optical and electronic approach, performing dense linear algebra (i.e. tensor operations) in light, alongside a standard digital chip where the software runs.
This hybrid design means the processor presents standard software and system interfaces while offloading matrix computations to a far more efficient medium, dramatically reducing energy consumption.
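A minimal sketch of what that division of labor can look like from the software side (the `optical_matmul` hook below is a hypothetical placeholder, not Lumai's actual API): the stack calls one standard matmul interface, and the runtime routes large dense operations to the optical unit while everything else stays on the digital chip.

```python
import numpy as np

def optical_matmul(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for dispatching a dense matrix multiply
    to an optical tensor unit; here it simply runs on the CPU."""
    return a @ b

def hybrid_matmul(a: np.ndarray, b: np.ndarray, min_dim: int = 512) -> np.ndarray:
    """Standard interface exposed to the software stack: large dense
    tensor operations are offloaded to the optical medium, small ones
    stay on the digital chip where the rest of the software runs."""
    if min(a.shape[0], a.shape[1], b.shape[1]) >= min_dim:
        return optical_matmul(a, b)
    return a @ b  # digital fallback

x = np.random.randn(1024, 1024).astype(np.float32)
w = np.random.randn(1024, 1024).astype(np.float32)
y = hybrid_matmul(x, w)  # the caller never sees which medium ran it
```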
In short, we are breaking through the bottlenecks slowing down AI systems.
What application areas are your strongest?
Our technology is particularly well suited to high-throughput AI inference workloads in data centers. This includes compute-bound applications such as large language models, recommendation systems, and video processing.
Lumai’s processor can be used as a prefill processor in a disaggregated compute architecture, alongside (for example) a GPU used for decode. It is especially effective in applications with long input contexts and large token volumes (e.g. workloads dominated by KV-cache generation).
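A rough sketch of that disaggregated split, under stated assumptions (a toy single-head attention layer; device placement appears only in the comments): prefill makes one dense, compute-bound pass over the long input context to build the KV cache, which a separate decode device then reads token by token.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 64                                         # toy model width
Wq, Wk, Wv = (0.05 * rng.standard_normal((D, D)) for _ in range(3))

def prefill(context: np.ndarray):
    """Compute-bound phase: dense matmuls over the entire input
    context. This is the step suited to an optical prefill processor."""
    return context @ Wk, context @ Wv          # the KV cache

def decode_step(x: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    """Memory-bound phase: one new token attends over the cached
    context (e.g. on a GPU used for decode)."""
    q = x @ Wq
    scores = (q @ K.T) / np.sqrt(D)
    attn = np.exp(scores - scores.max())
    attn /= attn.sum()
    return attn @ V

context = rng.standard_normal((2048, D))       # long input context
K, V = prefill(context)                        # runs on the prefill device
y = decode_step(rng.standard_normal(D), K, V)  # runs on the decode device
```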
What keeps your customers up at night?
Two things consistently come up: cost and scalability. AI is becoming central to business strategy, but the infrastructure required to support it is increasingly expensive and power-constrained.
Customers are concerned about how to scale their AI capabilities without hitting data center power limits or seeing costs spiral. At the same time, they want to achieve this without fundamentally changing their software workflows or models.
Ultimately, they’re asking: how do we continue advancing AI performance without running into a wall on cost and power?
What does the competitive landscape look like and how do you differentiate?
The landscape is evolving quickly, with innovation across GPUs, ASICs, and other AI accelerators. While these approaches deliver incremental improvements, they still rely on electronic architectures.
Lumai differentiates by taking a fundamentally different approach: optical compute. This allows us to bypass many of the inherent limitations of electronic-only systems, where increasing performance drives both higher power consumption and higher total cost of ownership (TCO).
Our focus isn’t just on building a faster processor; it is about redefining how compute is performed for AI workloads in the most efficient way. That enables step-function improvements rather than incremental gains.
We are building an architecture designed to scale generation after generation.
What new features/technology are you working on?
We are continuing to advance our optical compute platform, with a strong focus on integration and scalability. This includes developing the supporting electronics and software stack required for seamless deployment.
We have already proven the core technology; our current focus is on supporting trials, further increasing performance, and ensuring our platform integrates easily into existing AI infrastructure. Compatibility with current frameworks and workflows is key so customers can adopt it without major disruption.
As we move forward, you will see continued progress toward production systems that deliver meaningful performance and efficiency gains at scale.
How do customers normally engage with your company?
Engagement typically begins with collaborative discussions around specific AI workloads and infrastructure challenges. From there, we work closely with customers to evaluate where optical compute can deliver the most value.
We take a partnership-driven approach: whether through early access programs, joint development efforts, or pilot deployments. Close collaboration is key to ensuring successful integration and real-world impact.
Our goal is to meet customers where they are, help them break through the power wall, and transition to a more efficient, scalable AI compute platform that will serve them not only today but well into the future.