Expedera provides scalable neural engine semiconductor IP that enables major gains in performance, power, and latency while reducing cost and complexity. The company’s neural engine architecture reduces memory usage to the theoretical minimum, eliminating memory bottlenecks that can limit application performance. Expedera’s team includes seasoned ASIC experts from Cisco, Nvidia, AMD, and Ericsson. The company is headquartered in Santa Clara, California.
An AI accelerator platform that scales for any application
Expedera’s unified compute pipeline architecture enables highly efficient hardware scheduling and advanced memory management, delivering end-to-end low-latency performance. Origin enables system designers to meet a full range of low-latency, power-efficient, and high-performance requirements. The architecture is mathematically proven to use the minimum amount of memory for neural network (NN) execution, which minimizes die area, reduces DRAM accesses, improves bandwidth, saves power, and maximizes performance. Because NNs generate a tremendous amount of intermediate data, minimizing memory usage allows high-resolution NN processing, such as 4K/8K video, to run in real time on-chip.
The Origin architecture allows designers to run their trained neural networks unchanged, without hardware-specific optimizations. This preserves accuracy and delivers predictable performance. The architecture also enables a simplified software environment that reduces complexity and eases integration.
Origin achieves sustained performance, processing more throughput with less power than competing products. It excels at image-related tasks such as computer vision, image classification, and object detection, and also handles NLP-related tasks such as machine translation, sentence classification, and generation. Origin offers deterministic performance, scalable on-chip execution with the smallest memory footprint, and 18 TOPS/W effective performance. It scales from edge solutions with little or no DRAM bandwidth to high-performance applications such as autonomous driving and the cloud, without the software bloat that other solutions require.
Expedera offers three Origin Neural Engine IP products that can be tuned to fit the requirements of any AI application. Learn more about Expedera’s Origin E2, Origin E6, and Origin E8.
CEO & Co-founder
Da is co-founder and CEO of Expedera. Previously, he was co-founder and COO of Memoir Systems, an optimized-memory IP startup that was successfully acquired by Cisco. At Cisco, he led datacenter switch ASICs for the Nexus 3/9K, MDS, and CSPG products. Da brings more than 25 years of ASIC experience at Cisco, Nvidia, and Abrizio. He holds a BS in EECS from UC Berkeley and an MS and PhD in EE from Stanford.
VP Engineering & Co-founder
As co-founder of Expedera, Siyad leads the company’s engineering and product development. Previously, he led algorithmic TCAM ASIC and IP teams for Cisco’s Nexus 7K, MDS, and Cat4k/6k products. He brings more than 25 years of experience driving ASIC design and DFT at Spanslogic (Cisco), Zettacom (IDT), Chameleon, and AMD. He holds a PhD in EE from Stanford.
Chief Scientist & Co-founder
As Chief Scientist, Sharad brings extensive experience in software-hardware co-development to enable efficient AI processing. He is an expert in AI frameworks, power-aware neural network optimizations, and programmable dataflow architectures. Previously, he was an architect at Cisco, Memoir Systems (Cisco), and Microsoft. He holds a BS from IIT Kanpur.
VP Business Development
Nancy leads the business development efforts at Expedera. She is a customer-focused professional with extensive start-up experience in silicon IP and EDA, having worked at Andes Technology, Memoir Systems, Kilopass, Ansys, Virtual Silicon, and COMPASS/VLSI Technology. She holds a BA in Computer Science from UC Berkeley.