We started working with Flex Logix more than eight years ago and let me tell you it has been an interesting journey. Geoff Tate was our second CEO interview, so this is a follow-up to that one. The first interview garnered more than 15,000 views, and I expect even more this time given Flex Logix's continued success pioneering the eFPGA market.
What is Flex Logix's core strength?
My co-founder Cheng Wang invented and refined a superior programmable interconnect, which we apply to a range of applications to solve major market needs; we then combine this with the software tools to program the resulting solution. Together with our design methodology, this lets us create scalable and portable IP products very quickly and economically.
What markets/applications does Flex Logix play in?
Embedded FPGA (eFPGA)
AI inference
DSP acceleration
You started in eFPGA; how is that market developing for Flex Logix?
We are the “ARM of FPGA technology”: we license eFPGA for integration into SoCs, but we do not build chips.
Using our superior programmable interconnect, we achieve Xilinx-like density and performance in any process node, implemented with standard cells for rapid development and requiring fewer metal layers.
We have proven eFPGA silicon with numerous customers and chips in 180nm, 40nm, 28/22nm, 16nm and 12nm process nodes. There are >10 working chips using our eFPGA, >>10 more in fab or in design, and many more planned. Our technology is mature and robust: our second-generation architecture is now three years old, and every chip has worked the first time.
Our early-adopter market segment has been aerospace (Sandia, Boeing, etc.), but commercial design activity is now taking off as well (Morning Core, Dialog, etc.). Our eFPGA technology has become strategically critical to many of our customers: they have extensive roadmaps for a series of chips, and they are driving us to improve our offerings to better meet their needs, creating very high "stickiness."
Half of our customers already use FPGA chips and want to integrate them to reduce power, size, and cost; the other half have never used an FPGA but adopt eFPGA for customizability and acceleration.
We provide software tools to program our eFPGA using Verilog.
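To give a flavor of that flow: the input is ordinary synthesizable Verilog. A minimal, hypothetical sketch of the kind of simple protocol logic a customer might map onto the eFPGA fabric, and later revise in the field, could look like this (the module and its function are our illustration, not a Flex Logix deliverable):

```verilog
// Hypothetical example: ordinary synthesizable Verilog of the kind a
// customer might map onto an eFPGA block. Logic like this running a
// checksum over a byte stream can be updated after tapeout if the
// protocol changes, which is the core eFPGA value proposition.
module checksum8 (
    input  wire       clk,
    input  wire       rst,
    input  wire       valid,   // high when 'data' carries a payload byte
    input  wire [7:0] data,    // incoming payload byte
    output reg  [7:0] sum      // running modulo-256 checksum
);
    always @(posedge clk) begin
        if (rst)
            sum <= 8'h00;
        else if (valid)
            sum <= sum + data;  // wraps naturally at 8 bits
    end
endmodule
```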
The eFPGA market is now profitable for us, and the cash flow is helping fund our AI inference initiative.
How did Flex Logix get into AI inference, and why is it synergistic?
Companies like Microsoft deploy FPGAs widely to accelerate workloads, including inference. Inference uses a lot of MAC operations, and FPGAs have a lot of MACs, as do GPUs.
Customers asked us a couple of years ago whether we could optimize our eFPGA for AI inference. Cheng studied neural network models, like YOLOv3, and realized we could take our existing DSP MACs and optimize them for INT8/BF16, and that we could increase MAC density by clustering MACs into one-dimensional systolic arrays of 64 MACs each. Using our programmable interconnect, we can wire up MACs in very flexible ways to achieve high MAC utilization and throughput at low die cost across a wide range of neural network models. The resulting product is our nnMAX AI inference IP which, like our eFPGA, is a tile that can be arrayed to achieve whatever throughput the customer needs for their SoC.
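To make the clustering idea concrete, here is a minimal behavioral sketch of a 64-MAC INT8 dot-product cluster in Verilog. For brevity it is written as a flat multiplier/adder-tree rather than the systolic organization Geoff describes, and the module name, interface, and widths are our assumptions, not the nnMAX RTL:

```verilog
// Illustrative sketch only: 64 INT8 multiplies reduced to one INT32
// result per cycle. A real systolic implementation would pipeline the
// accumulation across stages; this flat form just shows the arithmetic.
module dot64 (
    input  wire               clk,
    input  wire [64*8-1:0]    a_flat,  // 64 packed INT8 activations
    input  wire [64*8-1:0]    w_flat,  // 64 packed INT8 weights
    output reg  signed [31:0] dot      // registered dot product
);
    integer i;
    reg signed [31:0] sum;

    // Combinational product-and-sum; synthesis infers an adder tree.
    always @* begin
        sum = 0;
        for (i = 0; i < 64; i = i + 1)
            sum = sum + $signed(a_flat[i*8 +: 8]) * $signed(w_flat[i*8 +: 8]);
    end

    always @(posedge clk)
        dot <= sum;  // 64 * (2^7)^2 fits comfortably in 32 bits
endmodule
```

Arraying many such clusters, and rewiring them per layer through the programmable interconnect, is what keeps MAC utilization high across different models.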
Initially, however, we expect most customers to want to buy chips, so we have designed, and are now taping out, our InferX X1, which is very compact and low cost but delivers performance that rivals chips 5-10x larger. We will also build PCIe boards and expect to sample them in Q3 of this year. We recently shared benchmarks against Nvidia's leading Xavier NX and Tesla T4, showing that we have superior price/performance.
The interesting thing is that the relative performance of the X1, Xavier NX, and Tesla T4 is very different from one model to another. Our customers did not expect this: they assumed they could get a benchmark for, say, ResNet-50 at batch=1 and that it would show relative performance. The reason it doesn't is that different models stress different aspects of the hardware (and software) architectures. For example, ResNet-50 uses very small images and activations, so it does not stress the memory subsystem, whereas YOLOv3 on megapixel images definitely does: a megapixel frame passing through an early 32-channel layer produces roughly a 32MB INT8 activation, versus well under 1MB for any activation in ResNet-50 at its 224x224 input size.
Our inference technology is available now for 16nm. Our roadmap is to make it available on 7/6nm and on 12nm (for our aerospace customers who want US fabrication).
So then what about DSP?
Just as customers led us to explore AI inference, customers have asked us, "Gee, your nnMAX IP has so many MACs in such a small area, can we use it for DSP?"
It turns out nnMAX is excellent for DSP: using the arrayable nnMAX tile, it can implement FIR filters at up to gigasample rates with hundreds, thousands, or even tens of thousands of taps. For our ports to 7/6nm and 12nm we are exploring adding similar FFT performance.
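For intuition on why a dense MAC array suits this workload: each FIR tap costs one MAC per sample, so an N-tap filter at F samples per second needs N×F MACs per second; 1,000 taps at 1 GSa/s is already 10^12 MACs per second. A minimal transposed-form FIR sketch in Verilog, with assumed names and widths and not nnMAX RTL, looks like this:

```verilog
// Transposed-form FIR: the input sample is broadcast to every tap and
// partial sums ripple one stage per cycle, so after the pipeline fills,
// y(t) = sum_j coef[j] * x(t-j). Parameters are illustrative.
module fir_transposed #(
    parameter TAPS = 64,   // one MAC per tap
    parameter IW   = 8,    // sample/coefficient width (INT8)
    parameter OW   = 32    // accumulator width; widen for very long filters
) (
    input  wire                 clk,
    input  wire                 rst,
    input  wire signed [IW-1:0] x,          // input sample stream
    input  wire [TAPS*IW-1:0]   coef_flat,  // packed coefficients
    output wire signed [OW-1:0] y           // filtered output
);
    reg signed [OW-1:0] psum [0:TAPS-1];
    integer i;

    always @(posedge clk) begin
        if (rst) begin
            for (i = 0; i < TAPS; i = i + 1)
                psum[i] <= 0;
        end else begin
            // Stage 0 holds the highest-index coefficient in this form.
            psum[0] <= $signed(coef_flat[(TAPS-1)*IW +: IW]) * x;
            for (i = 1; i < TAPS; i = i + 1)
                psum[i] <= psum[i-1]
                         + $signed(coef_flat[(TAPS-1-i)*IW +: IW]) * x;
        end
    end

    assign y = psum[TAPS-1];
endmodule
```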
About Flex Logix
Flex Logix provides solutions for making flexible chips and accelerating neural network inferencing. Its eFPGA platform enables chips to flexibly handle changing protocols, standards, algorithms, and customer needs, and to implement reconfigurable accelerators that speed key workloads 30-100x compared to processors. Flex Logix's second product line, nnMAX, utilizes its eFPGA and interconnect technology to provide modular, scalable neural inferencing from 1 to >100 TOPS, with higher throughput/$ and throughput/watt compared to other architectures. Flex Logix is headquartered in Mountain View, California. https://flex-logix.com/