Why is software for modern SoCs so blasted expensive to develop? One reason is that more software is being developed at the kernel layer – hardware-dependent software, or HdS. Application software often assumes the underlying hardware, operating system, communication stacks, and device drivers are stable. For HdS, this flawed assumption of stability can eat a project alive.
Development efforts used to hinge on first creating “verified” hardware that should run any software thrown at it, loading up a stock OS, and then debugging customized drivers and applications. There are three things wrong with this outdated thinking, and they suggest why developers need to focus on hardware-dependent software to rein in development costs.
1) In a fabless world, we have separated the process into creating fully verified RTL, then delivering silicon to it. This allowed FPGA-based prototyping to step in, running the instruction-accurate RTL and developing software on top of it. When silicon arrived, software was re-integrated with actual hardware for application testing – hopefully with zero or only a few errata.
However, that misses two bigger problems that have cropped up as the scope of HdS has become more complex.
2) The stock OS of yesteryear was often “plain vanilla”. Usually, the OS took advantage of as few advanced hardware features as possible, trying to stay portable between architectures and forward from version to version. The assumption was faster microprocessors would remain software compatible and deliver the desired performance gains. Upgrading the OS and introducing new API features – the “board support package” – was done carefully, on stable hardware.
Today, with heterogeneous multicore the norm, the operating system is rarely untouched and is often highly optimized for the exact SoC environment. It would be ludicrous not to optimize an OS to take advantage of more than one CPU core, the caching and memory architecture, advanced GPU and DSP cores, and the chip-level interconnect.
3) The stable software assumption has flipped sides. Where the OS was often third party, applications were often developed in house. If changes were required to a system, the OS was typically off limits, viewed as golden unless there was a drastic bug worthy of reporting back to the vendor. Either the device drivers or the application itself would be modified if there was a system-level problem.
Now, in an era of open source operating systems where applications come from an application store developed by third parties, we see the reverse. Applications are now golden and must remain untouched – app developers are unwilling to adapt to platform-specific quirks. The board support package has fragmented into thousands of possible chip support package combinations based on merchant and internally developed SoCs. This has shifted the burden of getting the OS and APIs functionally correct from the OS vendor to the HdS developer.
Opportunity usually arises from problems. With a pressing need to open visibility into defects in the HdS layer – which could be hardware, software, or both – teams are turning to virtual prototyping and co-verification strategies. The effect is pulling the software schedule forward, allowing full-up integration on the FPGA-based prototype before silicon is ready.
This concept of HdS and these combined hardware-software development timelines are discussed in a new webinar, featuring Alex Grove from FirstEDA. One of the points he discusses is the use of SCE-MI to connect software applications and models running on a host to an FPGA-based emulator, which may have speed adaptors for peripherals such as USB and Ethernet.
The big advantage of FPGA-based prototyping, be it free-running, speed adapted, or at-speed, is the RTL and the software can be modified and reinstantiated quickly. Grove walks through the differences, and points to the need for a hybrid approach for effectively targeting HdS when developing customized SoCs. It’s a half hour well spent.