Until recently, software was a low-priority concern in SoC design. The basic requirements for a chip were all hardware-centric: power, performance, and area. Software compatibility was an afterthought, and the software development cycle started only very late in the design flow, once the first prototypes came out.
How times have changed! These days, the first question asked of an IC vendor aiming to 'socketize' its chip on a board is:
"Will ABC software run on this chip?"
SoC vendors have realized that software-centric metrics - compatibility and performance - are what dictate the kind of hardware to be designed.
This represents a paradigm shift in traditional SoC design flow.
I agree, software compatibility is key to success. As an example, I just bought a $35 Linux-based single-board computer called the Raspberry Pi, but after playing with it I discovered that the USB ports don't always work with my Logitech wireless mouse and keyboard, forcing me to unplug and reconnect them - not very friendly or viable. The SoC is from Broadcom, and I suspect it's a software driver issue with the modified version of Debian Linux.
There are a few things I've talked about in various places:
1) No programmer can pick up an SoC manual cold and write code - these chips are way too complicated. You have to leverage something like Linaro.org or a third-party OS, or reuse proven code.
2) As Daniel points out, IP isn't all created equal, and driver support can kill the whole perception of a device. See the Amazon Kindle Fire touch interface (fixed in the Kindle Fire HD).
3) Apple's A6 was designed and optimized for iOS 6; it's not just a standard ARM core. Qualcomm's Krait was designed for Android - a similar approach.
4) The key to mobile device power consumption is software, usually tied up in how well the RF interfaces can be controlled (see the sketch after this list).
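To make point 4 concrete, here is a minimal, purely illustrative C sketch of the kind of software decision that dominates RF power: waking the radio once per batch of packets instead of once per packet. The register address, bit layout, and FIFO interface are invented names for illustration, not any real SoC's register map.

```c
/*
 * Hypothetical sketch: batch radio wake-ups. RF_PWR_CTRL, RF_TX_FIFO,
 * and the MMIO addresses are invented, not a real SoC's interface.
 * The software pattern is the point: power the RF block up once per
 * batch, so it spends more of its life switched off.
 */
#include <stddef.h>
#include <stdint.h>

#define RF_PWR_CTRL (*(volatile uint32_t *)0x40021000u) /* invented power reg */
#define RF_TX_FIFO  (*(volatile uint32_t *)0x40021004u) /* invented TX FIFO  */
#define RF_PWR_ON   (1u << 0)

static void rf_write_fifo(const uint8_t *buf, size_t len)
{
    for (size_t i = 0; i < len; i++)
        RF_TX_FIFO = buf[i];            /* push one byte to the radio */
}

void rf_send_batch(const uint8_t *pkts[], const size_t lens[], size_t n)
{
    RF_PWR_CTRL |= RF_PWR_ON;           /* one wake-up for the whole batch */
    for (size_t i = 0; i < n; i++)
        rf_write_fifo(pkts[i], lens[i]);
    RF_PWR_CTRL &= ~RF_PWR_ON;          /* radio sleeps until the next batch */
}
```

Whether batching like this is even possible depends on the protocol's latency budget - which is exactly why this is a software problem, not something the hardware team can fix alone.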
Now read my most recent post on FPGA-based prototyping. The primary motivation is HW/SW co-validation: it takes wayyyyyy too long to simulate real software on a complex SoC design.
Embedded designers - since board design is becoming SoC design - have to be more software-literate than ever.
I would say software has been driving the silicon industry for a long time. Plenty of decent hardware has come and gone because it wouldn't run legacy code (or just wasn't programmer-friendly).
The HSA Foundation gives an idea of where this is currently going: software targets a virtual machine architecture that supports multiple processor types, and the compiler guys write backends to make it fit your (SoC) platform.
Current FPGA prototyping probably doesn't tell you much about power, and as far as I can tell the EDA tools just resort to heuristics for that. Likewise, the RF stuff is somewhat disconnected from regular EDA/SoC flows.
There are ways to accelerate running code in simulation so that it doesn't take too long, and you can simulate power management too, but nobody is supplying tools that do it all at the moment.
I am interested to know in what way the final OS influenced the design of these chips. I would suppose it is mostly that drivers for certain blocks in the SoCs will only be developed for the respective OS - or is it more than that?
Apple is getting more out of a lower clock speed than SoCs with comparable cores. The A6 is 1.3 GHz max and is smoking 1.6 GHz parts in benchmarks. This is likely due to a design with optimized interconnect (CPU to GPU), cache, and memory interfaces.
We know Qualcomm uses Arteris for their interconnect, and their core design issues 3 instructions per 2 clocks in most scenarios.
Both could have gone with standard ARM Cortex-A9 cores, but didn't. Understanding which parts of a core the software actually taxes is important.
The starting point for developing software that runs hardware optimally would be to understand the mapping between a piece of code and the functionality of the hardware it drives. In most cases, software is designed independently, and only a fraction of the hardware's capabilities is leveraged.
A lack of awareness of, say, the turbo boost or hyper-threading features of a processor can lead to software that, at best, fails to use these features and, at worst, actually reduces system performance; the sketch below shows one hyper-threading pitfall.
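As a minimal sketch of that last point: a compute-bound program that blindly spawns one thread per logical CPU can end up with two heavy threads contending for one physical core's execution units. The C snippet below pins one worker per physical core instead. It is Linux-specific (it uses the GNU pthread_setaffinity_np extension), and the 2-way SMT and (0,1), (2,3) sibling pairing are assumptions for illustration; real code should read the topology from /sys/devices/system/cpu/.

```c
/* Hypothetical sketch: one compute thread per physical core, skipping
 * SMT siblings. Assumes Linux, 2-way SMT, and logical CPUs paired as
 * (0,1), (2,3), ... -- real code should read the sibling map from
 * /sys/devices/system/cpu/cpuN/topology/thread_siblings_list. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

#define MAX_THREADS 64

static void *worker(void *arg)
{
    long cpu = (long)arg;
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    /* Bind this thread to a single logical CPU so it never migrates
     * onto a sibling already running another compute thread. */
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
    /* ... compute-bound work would go here ... */
    return NULL;
}

int main(void)
{
    long logical  = sysconf(_SC_NPROCESSORS_ONLN);
    long physical = logical / 2;               /* assumption: 2-way SMT */
    long n = physical < MAX_THREADS ? physical : MAX_THREADS;
    pthread_t tid[MAX_THREADS];

    for (long i = 0; i < n; i++)               /* use logical CPUs 0, 2, 4, ... */
        pthread_create(&tid[i], NULL, worker, (void *)(i * 2));
    for (long i = 0; i < n; i++)
        pthread_join(tid[i], NULL);

    printf("ran %ld compute threads on %ld logical CPUs\n", n, logical);
    return 0;
}
```

Compile with gcc -pthread. Whether skipping SMT siblings actually helps depends entirely on the workload - which is precisely the code-to-hardware mapping the comment above says developers need to understand.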