Using the right tool for the job can be extremely important. Well, maybe not for the famed chef Martin Yan, who is notorious for using just one knife—a razor-sharp wide-blade cleaver that doubles as a spatula—for preparing anything and everything he cooks. For the rest of us, though, the right tools can make all the difference.
The wrong choice of tool has stymied the prospects of many a product. Maybe there were justifiable reasons for the choice. Maybe the product concept was ahead of its time and too early for the market. Maybe the ecosystem at the time did not offer a better option. Perhaps you have your own list of such products. One in particular comes to mind that many of you may not be aware of.
More than a decade before Apple launched the iPad, a similar product was conceived at National Semiconductor (now part of TI). It was called the WebPad, an always-on wireless tablet device. For practical purposes, the intended use cases for the WebPad were similar to those of the iPad. National developed the reference design and manufactured a large batch of samples for its OEM customers to test and evaluate. National's goal was to create traction for the product so the company could sell more chips, and there was serious interest from many customers. But the Achilles' heel of the product was the processor. x86-based processors were available in-house: National had acquired Cyrix, an x86-architecture processor company, a few years earlier. So that was the processor of choice.

From a PPA (performance, power, and area) perspective, the processor scored well on the performance metrics for the intended application. But on power and area, not so well. The sample devices were power hungry and bulky. There are probably any number of reasons why the WebPad died on the vine, but the choice of processor makes for an interesting case study. For a product that is supposed to be an always-on mobile tablet, weight, form factor, and battery life per charge are of paramount importance and play a deciding role in the product's market viability.
Could a different processor have been considered for the WebPad? Maybe. Arm was nascent at the time and just beginning its expansion into the mobile market. Arm may not have matched x86 on performance in those days, but the applications were not that demanding, and x86 was likely overkill. Arm would have fared well on the power and area metrics. Fast forward to today: applications are extremely demanding on all three PPA metrics, and AI-driven edge applications pose stringent requirements in terms of latency, deterministic response, energy efficiency, memory resources, and maximum throughput. With so many options to choose from, there is no excuse for undermining a great product idea with the wrong processor choice.
For today’s and future AI-enabled applications, is the main processor still the best fit in every case? Can custom instruction extensions breathe new life into the main processor? When does it make sense to use a hybrid architecture that pairs a main processor with AI accelerators? You will find the answers to these questions at an upcoming webinar hosted by Expedera and Andes Technology.
Expedera provides scalable neural engine semiconductor IP that enables major improvements in performance, power, and latency while reducing cost and complexity in AI-inference applications. Expedera’s Origin™ deep learning accelerator (DLA) products are easily integrated, readily scalable, and can be customized to application requirements. The solutions also reduce memory requirements, which is very important for embedded devices at the edge.
While Expedera’s DLA products can work with any CPU architecture, they deliver greater efficiency alongside processors that support custom instructions.
Andes Technology is a leading supplier of embedded processor intellectual property (IP). Andes offers high-performance, low-power 32/64-bit processors and associated SoC platforms to serve the rapidly growing range of embedded systems applications.
Andes processor cores, including RISC-V cores that support custom extensions, can fulfill the requirements of many AI applications. In other cases, an architecture combining RISC-V cores with an Expedera DLA core leads to a better-optimized end solution.