We periodically see that “software ate the world” line – I’m pretty sure I’ve used it a couple of times myself. The fact is, software doesn’t run itself; never has, never will. Somewhere there has to be an underlying computer. First it was on beads, then in gears, then in tubes. Today it’s in silicon; tomorrow it might be in graphene or something else.
This computer business swings in pendulum fashion. We’ve seen several cycles of mainframe versus client-server thinking. We’ve begun to see the limits of shrinking geometries, particularly in flash memory, and a reversion to larger structures stacked vertically. We’ve seen microprocessors get fast, then operating systems and applications fill them up, then multicore processors debut, and we’re still trying to figure out how to program those efficiently – with great success in some use cases.
In the embedded business, things move a lot slower. It took years for embedded programmers to embrace high-level languages, afraid of giving up the control and predictability of assembly. Eventually, C won, as compiler technology produced decent results on faster processors and structured design mandated something besides cryptic register-speak.
The world has moved forward, however. IoT programmers are now very likely to be working in Python, especially those coming from the enterprise application development sphere. A whole range of newer languages has appeared alongside it, including Lua and Rust. Microsoft has just announced it has open-sourced its P language, an event-driven language that makes it easy to program asynchronous state machines.
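For anyone who hasn’t run into the event-driven state machine style P is built around, here is a rough sketch of the idea in Python (not P syntax; the states and events are invented purely for illustration):

```python
# Rough Python sketch of the event-driven state machine idea behind
# languages like P. This is not P syntax, just the shape of the paradigm;
# the states and events are made up for illustration.

from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    SENSING = auto()
    REPORTING = auto()

# Transition table: (current state, event) -> next state
TRANSITIONS = {
    (State.IDLE, "wake"): State.SENSING,
    (State.SENSING, "sample_ready"): State.REPORTING,
    (State.REPORTING, "ack"): State.IDLE,
}

def run(events):
    """Feed a stream of events through the machine, ignoring any event
    that has no transition defined for the current state."""
    state = State.IDLE
    for event in events:
        state = TRANSITIONS.get((state, event), state)
        print(f"event={event!r:15} -> state={state.name}")
    return state

if __name__ == "__main__":
    run(["wake", "sample_ready", "noise", "ack"])
```

The appeal of a language built around this model is that the transition table above becomes the program itself, and tooling can check it for unreachable states or unhandled events instead of leaving that to hand-rolled dispatch code.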
I can hear the embedded engineers scoffing right now. I was at a networking event a few months ago and a guy from TSMC unloaded on me about how Python would never amount to anything. I suppose it matters a lot what you are doing, specifically; not every language can handle every task at hand. I learned to program on a TI-57 calculator, then had the obligatory FORTRAN course in my freshman year of college, then taught myself BASIC, Pascal, and C – all of which I’ve shipped production code in. I’m now learning Python in my spare time.
Perhaps the lessons of history have been forgotten. Most of us know the story behind RISC: engineers profiled UNIX, figured out what instructions were actually used, ripped out any transistors not needed for those specific instructions, and simplified their chips. CISC platforms became slaves to a growing blob of software, where every subsequent processor generation had to run (nearly) every piece of code created for previous generations. RISC platforms went and did specific jobs more efficiently.
We just kind of trust that the compiler guys have done their job for a chip.
It’s true we can get a lot of software running on just about anything. Interpreted languages like Python and JavaScript have made a big comeback with lots more processing power underneath them. When it comes to programming heterogeneous systems, or resource-limited systems (one way to differentiate embedded design), I’d submit we had better take a much harder look at how well a chip and its software are optimized as a pair.
I think we’re at the same point as the original move from CISC to RISC. The chip that best runs the instructions actually used will win. We’ve seen several stories lately about extensible instruction sets, cache coherency between cores, and even off-chip interconnects embracing FPGAs for acceleration.
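As a small illustration of “the instructions actually used,” one level up from the silicon: Python’s own dis module will show the bytecode the interpreter has to dispatch for a given workload. The sensor-averaging function below is hypothetical, just something IoT-flavored to profile:

```python
# The RISC designers profiled what instructions UNIX actually used.
# The interpreter-level analog: look at what the CPython VM really has
# to execute for a typical IoT-ish task. The sensor-averaging function
# here is hypothetical, purely for illustration.

import dis

def average_readings(readings):
    total = 0.0
    for r in readings:
        total += r
    return total / len(readings)

# Dump the bytecode the interpreter dispatches for this function.
# A chip (or interpreter port) tuned for the hot opcodes, and the memory
# traffic behind them, is doing the same exercise the RISC folks did.
dis.dis(average_readings)
```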
C has been a matter of convenience, taught hand in hand with Linux development. IoT development is going to call for something better. What would happen if a chip developed a reputation for being more Python-friendly – fewer cache misses, less power consumption, or similar benefits? Do we really know which chip does well in various configurations of Apache Spark? (IBM puts a lot of energy in that direction on OpenPOWER.) I really like the EEMBC benchmarks, but they’re written in C.
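As a sketch of what a language-level measurement might look like (nothing like a real benchmark suite, just the shape of one), the following times an arbitrary toy kernel under whatever interpreter and silicon it happens to land on:

```python
# A minimal sketch of the kind of language-level measurement the C-based
# benchmarks don't give us: time the same Python kernel on two boards and
# compare. The kernel and iteration counts are arbitrary placeholders,
# not a proposed benchmark.

import platform
import timeit

def kernel():
    # Toy workload: checksum over a synthetic packet buffer.
    buf = bytes(range(256)) * 64
    return sum(buf) & 0xFFFF

if __name__ == "__main__":
    runs = 200
    seconds = timeit.timeit(kernel, number=runs)
    print(f"{platform.machine()} / {platform.python_implementation()} "
          f"{platform.python_version()}: "
          f"{1e3 * seconds / runs:.3f} ms per run")
```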
Where are the IoT benchmarks describing how these other languages perform? A few folks like HP have taken on analytics tasks in the infrastructure, and there is a TPC-IoT working group out there. EEMBC is working on an IoT power benchmark.
We’re selling ourselves very short if we think we just design a chip with some registers, execution units, pipelining, and cache, and somebody will show up with a cool compiler or interpreter that gets the most out of it. Microsoft was using terms like “math density” in extending the instruction sets on HoloLens. CEVA figured out fewer than 10 instructions that take 30% out of Cat-NB1 processing. The entire RISC-V movement is based on making it easy to customize the instruction set. Silicon Labs went out and purchased Micrium to get some software expertise in-house.
The thing is, many IoT designers are coming from software backgrounds, and may have prototyped their idea on a maker module. They not only don’t know how to optimize a chip, they’ve never designed one, period. So, they think they have to live with the results of “other people’s chips”. Little do they know that someone who does know how to optimize both hardware and software in concert is lurking around the corner, waiting to eat their world.
I’m not saying things are bad, just that they could be a lot better. Optimization will be the next big opportunity in IoT chip design. I can’t wait to see where this goes.