Pradeep Vajram is a successful entrepreneur and a veteran of the semiconductor and embedded industry. He has over 25 years of experience executing, at all levels of responsibility, the design and development of ASIC products.
Pradeep has been an active investor in semiconductor and deep-tech start-ups in the USA–India corridor since 2017, and has vast experience building successful businesses in Silicon Valley and India.
Currently, Pradeep is the CEO & Executive Chairman of AlphaICs Corporation. Before AlphaICs, Pradeep founded SmartPlay Technologies in 2008 – the world’s first integrated end-to-end product engineering services company. SmartPlay was acquired by Aricent in 2015.
Prior to SmartPlay, he served as the Vice President of Engineering at Qualcomm, heading the India semiconductor division in Bangalore. Under his leadership, Qualcomm Bangalore Design Center developed into a strong center of excellence and delivered multiple 3G/4G products successfully.
Prior to Qualcomm, Pradeep was the CEO & co-founder of Spike Technologies – a leading chip design services company. Spike was acquired by Qualcomm in 2004.
Pradeep has a Bachelor’s degree in Electronics Engineering from Karnataka University and a Master’s degree in Computer Engineering from Wayne State University, Detroit.
What is the backstory of AlphaICs and what does it do?
AlphaICs Corporation, a four-year-old startup, designs and develops best-in-class AI co-processors that deliver high-performance AI computing on edge devices. With the growing popularity of deep neural networks, there is huge demand for running such networks in real time on edge devices. The AI hardware market is estimated to reach $67 billion by 2025. We have developed a power-efficient, high-throughput AI processor technology called the Real AI Processor (RAP™) for accelerating AI workloads. RAP™ is highly scalable and modular, enabling OEMs to choose the configuration that fits their performance and power requirements.
The RAP™ co-processor can be configured from 0.5 TOPS to 32 TOPS and can scale above 32 TOPS (64 TOPS, 128 TOPS, etc.) using a multi-core strategy. We have developed the entire software stack for creating neural networks on standard AI frameworks and deploying them on the RAP™. The software tool-chain provides an easy way to port existing neural networks onto our processors. Our software stack currently supports TensorFlow, and we plan to add support for other AI frameworks in the future.
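As a rough illustration of the multi-core scaling described above, the sizing arithmetic can be sketched in a few lines of Python. The `cores_needed` helper and the per-core limits are assumptions for this sketch, not AlphaICs' actual tooling:

```python
import math

# Assumed for this sketch: a single core covers 0.5 to 32 TOPS;
# targets above that replicate full 32-TOPS cores.
CORE_MIN_TOPS = 0.5
CORE_MAX_TOPS = 32.0

def cores_needed(target_tops):
    """Return (num_cores, tops_per_core) for a target throughput."""
    if target_tops <= CORE_MAX_TOPS:
        # A single core can be configured to the exact target.
        return 1, target_tops
    # Above 32 TOPS, scale out with multiple full-rate cores.
    cores = math.ceil(target_tops / CORE_MAX_TOPS)
    return cores, CORE_MAX_TOPS

print(cores_needed(8))    # one 8-TOPS core
print(cores_needed(64))   # two 32-TOPS cores
print(cores_needed(128))  # four 32-TOPS cores
```

This mirrors the configurations mentioned in the interview: 8 TOPS fits on one core, while 64 and 128 TOPS are reached by replicating 32-TOPS cores.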
What is your current status and go-to-market strategy?
We are excited to have our first silicon, Gluon, an 8-TOPS AI inference co-processor. We showcased Gluon’s capabilities with our marketing partner CBC at the AI Expo in Tokyo, Japan, last month.
The response to our technology was very encouraging, and we are very excited to bring this product to our customers. Competing solutions in the market offer an SoC that integrates the host processor and AI accelerator, which necessitates a complete redesign of the system, resulting in huge investment and delay. We believe a co-processor strategy will quickly enable our customers to integrate AI capabilities into their current systems, resulting in significant savings. Our initial focus is video analytics. This is a big market, and many verticals – surveillance, retail, automotive, manufacturing, healthcare – will have AI-enabled video analytics applications by 2025.
Our product enables OEMs and system integrators to achieve their market-cost and power-performance goals for edge solutions. So, in a nutshell, we are developing high-performance, low-power, easy-to-use edge AI co-processors that let our customers quickly integrate AI into their solutions.
How do you differentiate from various AI start-ups and incumbent solutions in this space?
AlphaICs’ differentiation comes from its proprietary architecture. Gluon provides better throughput at lower power than both incumbent products and other startups’ solutions. We have also developed a software tool-chain that makes it very convenient for users to deploy their trained networks on Gluon.
AlphaICs’ solutions will enable edge AI compute for both inference and incremental edge learning. Edge learning is the ability of devices to learn from new data and scenarios they were not trained on, providing additional intelligence to the edge devices. In this mode, devices start with a model trained on partial data and then learn new scenarios as they encounter new data. We have showcased this on our architecture, and it is a unique feature that gives our solution an advantage over the other solutions out there. Edge learning is planned for our next-generation product.
Can you elaborate on your edge learning technology?
Today, edge devices run inference on trained deep neural networks to accomplish tasks such as object recognition, image classification, and image segmentation, to name a few. When edge devices encounter new, unseen data, the accuracy of such systems can drop substantially. This is a major problem for real-world solutions today, as the nature of the data in these applications keeps changing. With this in mind, at AlphaICs we designed our proprietary Real Artificial Intelligence Processor (RAP™) to enable learning when new data becomes available to edge devices, without affecting the already-learned intelligence. We showcased a proof of concept for edge learning under a research grant from a US government R&D institution. Our results are very promising, and we will continue to develop this technology further.
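The general idea of starting from a model trained on partial data and then adding new classes without disturbing existing knowledge can be illustrated with a toy example. The nearest-centroid classifier below is purely a conceptual sketch of incremental learning; it is not AlphaICs’ method, and the class names and data are invented for illustration:

```python
import math

class NearestCentroidClassifier:
    """Toy classifier that can add new classes without touching old ones."""

    def __init__(self):
        self.centroids = {}  # label -> class centroid (feature vector)

    def learn_class(self, label, samples):
        # Store the average feature vector of the new class's samples.
        # Existing centroids are never modified, so prior knowledge
        # is preserved (no catastrophic forgetting in this toy model).
        dim = len(samples[0])
        self.centroids[label] = [
            sum(s[i] for s in samples) / len(samples) for i in range(dim)
        ]

    def predict(self, x):
        # Return the label whose centroid is closest (Euclidean distance).
        return min(self.centroids,
                   key=lambda lbl: math.dist(x, self.centroids[lbl]))

# Start with a model "trained" on partial data (two classes)...
clf = NearestCentroidClassifier()
clf.learn_class("cat", [[1.0, 1.0], [1.2, 0.8]])
clf.learn_class("dog", [[5.0, 5.0], [4.8, 5.2]])

# ...then, at the edge, learn a previously unseen class from new data.
clf.learn_class("bird", [[9.0, 1.0], [9.2, 0.8]])

print(clf.predict([1.1, 0.9]))  # cat  (original knowledge intact)
print(clf.predict([9.1, 0.9]))  # bird (learned incrementally)
```

Production edge-learning systems operate on deep feature representations rather than raw 2-D points, but the structural point is the same: new knowledge is added without rewriting what was already learned.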
What is AlphaICs’ future roadmap and direction?
AlphaICs’ core technology, RAP™, supports both edge inference and edge learning. We are working to bring out our next product, which will integrate inference and edge learning. Our current solution is 8 TOPS, and we will scale up to 64 TOPS as well as integrate pre- and post-processing capabilities. We are very bullish on the huge opportunities at the edge, and we have the right technologies to enable edge AI for our customers.