Earlier this week I went to the Synopsys Interoperability Forum. The big news of the day turned out to be Synopsys wanting to be more than interoperable with Magma, but that only got announced after we’d all gone away.
Philippe Magarshack of ST opened by reviewing his slides from a presentation at the same forum 10 years earlier. Back then, as was fashionable, he had his “design productivity gap” slide showing silicon capacity increasing at 58% CAGR while design productivity increased at only 21% CAGR. The things he was looking to in order to close the gap were system-level design, hardware-software co-design, IP reuse, and improvements to the RTL-to-layout flow and to analog IP.
We’ve made a lot of progress in those areas but, of course, you could put up almost the same list today. ST has been a leader in driving SystemC and virtual platforms and now has over 1000 users. But the platforms still suffer from a lack of standardization for modeling interrupts, specifying address maps and other things.
One specific example he went over was a set-top-box (DVR) chip called “Orly” that can do four simultaneous HD video decodes on a single chip. All their software demos were up and running just five weeks after they received silicon.
Next up was John Goodenough of ARM, who also took the slides from a presentation by his boss (which he says he actually put together) and compared them to the situation today. “Everything has changed but nothing has changed” was the theme. Ten years ago they needed to simulate 3B clocks to validate a processor. Now it is two orders of magnitude bigger, doing deep soak validation on models, on FPGA prototypes and, eventually, on silicon. Back then they had 350 engineers; now 1400. They had 250 CPUs for validation; now they have tens of thousands of job slots for simulation, multiple tens of thousands of CPUs, multiple emulators, and FPGA prototype farms.
Jim Hogan talked about standards living for a long time (as I did recently, but with different examples). He started from Roman roads and how the railway gauge came from that and so, in turn, the space shuttle boosters that had to travel by rail. That US railways are the same gauge, 4′ 8.5″, as the UK is not surprising, since the first railroads were built to run British locomotives. But the original gauge in Britain was based on the gauge used in the coal mines, which was 4′ 8″ (arrived at by starting from 5′ and using rails 2″ wide). As with the Romans, they were choosing a width that worked well behind a horse, although there is no evidence that the Roman gauge was copied. In fact, in Pompeii the ruts are 4′ 9″ apart. And as for the space shuttle booster, it doesn’t depend on the track gauge but on the loading gauge (how big a wagon can be and still clear bridges and tunnels). The US loading gauge is very large and the UK one is very small, which is why US trains, and even French ones, cannot run on UK rails despite the rails being the same distance apart.
Mark Templeton, who used to be CEO of Artisan before it was acquired by ARM, talked about making money. In almost all markets there is a leader who makes a lot of profit, a #2 who makes some, and pretty much everyone else makes no money and struggles even to invest enough to keep up. So it’s really important to be #1. He talked about going to a conference where John Bourgoin of MIPS presented and went into the many neat technical details of the MIPS architecture. Robin Saxby of ARM, at the time about the same size as MIPS, presented and said nothing about processors, but talked about the ecosystem of partners they had built up: silicon licensees, software partners, modeling and EDA partners and so on. For Mark it was a revelation that winning occurs through interoperation with partners. Today MIPS has a market cap of $300M and ARM is $11.7B.
Michael Keating talked about power (“just in time clocking, just enough voltage”) and how, over the last few years, CPF and UPF (why do we need two standards?) have improved the flows so that features like multi-voltage regions, power-down and DVFS are usable. But power remains the big issue if we are going to be able to use all the transistors that we can manufacture on a chip.
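The “just enough voltage” point is easy to see from the standard dynamic-power relation, P ∝ C·V²·f. Here is a toy back-of-the-envelope sketch (the numbers and names are illustrative, not from any talk) showing why scaling voltage and frequency together, as DVFS does, pays off so well:

```python
# Toy model of dynamic CMOS power: P is proportional to C * V^2 * f.
# All values are normalized and purely illustrative.
def dynamic_power(c, v, f):
    return c * v * v * f

nominal = dynamic_power(1.0, 1.0, 1.0)

# DVFS step: drop voltage and frequency together by 20%.
scaled = dynamic_power(1.0, 0.8, 0.8)

ratio = scaled / nominal
print(round(ratio, 3))  # 0.512: about half the power for 20% less speed
```

Because frequency tracks voltage roughly linearly, the saving is close to cubic in the voltage reduction, which is why a modest DVFS step buys a large power cut.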
Shay Gal-On talked about multi-core and especially programming multi-core. I remain a skeptic that high core count multi-core chips can be programmed for most tasks. They work well for internet servers (just put one request on each core) and for some types of algorithm (a lot of Photoshop stuff, for instance) but not for most things. Verilog simulation, placement and so on all seem to fall off very fast in their ability to make use of cores. The semiconductor industry is delivering multi-core as the only way to control power, but making one big computer out of a lot of little ones has been a research project for 40 years. He had lots of evidence showing just how hard it is: algorithms that slow down as you add more cores, different algorithms that cap out at different core counts and so on. But it’s happening anyway.
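The “cap out at different core counts” behavior is just Amdahl’s law at work. A minimal sketch (the 95% parallel fraction is an illustrative assumption, not a figure from the talk):

```python
# Amdahl's law: speedup on n cores when a fraction p of the work
# parallelizes perfectly and the remaining (1 - p) stays serial.
def speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Even a workload that is 95% parallel caps out below 20x,
# no matter how many cores you throw at it.
for n in (2, 8, 64, 1024):
    print(n, round(speedup(0.95, n), 1))
```

The serial fraction sets a hard ceiling of 1/(1-p) on speedup, which is why different algorithms, with different serial fractions, hit their ceilings at such different core counts.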
And don’t forget Coore’s Law: the number of cores on a chip is increasing exponentially with process generation, it’s just not obvious yet since we are on the flat part of the curve.
Shishpal Rawat talked about the evolution of standards organizations. There are lots of standards organizations. Some of them are merging. There will still be standards organizations in… I’m afraid it was a bit like attending a standards organization meeting.