

High Performance or Cycle Accuracy? You can have both
by Daniel Payne on 01-26-2013 at 10:55 pm

SoC designers have always wanted to simulate hardware and software together during new product development, so one practical question has been how to trade off performance against accuracy when creating an early model of the hardware. The creative minds at Carbon Design Systems and ARM have combined to offer us some hope and relief in building virtual platforms that are both fast enough and accurate enough. Some 4,000 attendees were at ARM TechCon last fall when Bill Neifert of Carbon Design Systems and Rob Kaye of ARM presented “High Performance or Cycle Accuracy? You can have both.”

I’ve just read the 10-page White Paper created in January based on that ARM TechCon presentation.

Modeling Abstraction


The chart shows Model Speed on the X-axis, where higher simulation speed is favored by software developers; speeds of hundreds of MIPS are now possible if you write a Loosely Timed (LT) model, shown in green. Attaining that speed requires writing the model at a high level of abstraction, meaning that low-level details are omitted. Programmers benefit directly from Loosely Timed models because they can develop and debug their new apps, profile their software and determine whether the architecture is compliant with the spec.

In the bottom-left corner, the grey box shows that Cycle Accurate (CA) models can be created that are faithful to the RTL timing as defined by the hardware engineer. Because the timing is accurate you can develop device drivers with a Cycle Accurate model, and also perform hardware/software co-verification, finding and fixing bugs before fabricating the SoC.

Instead of creating a third level of modeling, shown in brown and called Approximately Timed (AT), the approach taken by ARM and Carbon is to combine the benefits of both Loosely Timed and Cycle Accurate models.

Loosely Timed Models
An Architectural Envelope Model (AEM) is created first as an executable spec of the architecture. The AEM can then be refined to a specific CPU core by adding implementation-specific details or optional features.

High simulation speed is gained by doing Code Translation (CT), a technique where instruction sequences for the target CPU are translated into code sequences that run natively on your host computer. This CT approach runs much faster than previous approaches such as interpreted Instruction Set Simulators. A typical 2GHz workstation can simulate the Android OS on a model of the ARM Cortex-A15 processor at about 50 to 100 MIPS.
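To make the idea concrete, here is a minimal, hypothetical C++ sketch contrasting an interpreted dispatch loop with a cached, pre-decoded block; real code translation (as used in ARM Fast Models) goes further and emits native host machine code, and none of the types or names below come from the actual tools.

```cpp
// Toy illustration only: a two-instruction "ISA" executed two ways.
// Real code translation emits native machine code; here a cached
// std::function stands in for a translated block.
#include <cstdint>
#include <functional>
#include <iostream>
#include <unordered_map>
#include <vector>

struct Insn { char op; int dst, src; };          // 'A' = add, 'S' = sub
using Regs = std::vector<int64_t>;

// Interpreted ISS: decode every instruction on every pass.
void interpret(const std::vector<Insn>& prog, Regs& r) {
    for (const auto& i : prog)
        r[i.dst] = (i.op == 'A') ? r[i.dst] + r[i.src]
                                 : r[i.dst] - r[i.src];
}

// "Code translation": decode the block once, cache a callable, reuse it.
using Block = std::function<void(Regs&)>;
Block translate(const std::vector<Insn>& prog) {
    return [prog](Regs& r) {                     // pre-decoded block, captured by value
        for (const auto& i : prog)
            r[i.dst] = (i.op == 'A') ? r[i.dst] + r[i.src]
                                     : r[i.dst] - r[i.src];
    };
}

int main() {
    std::vector<Insn> block = {{'A', 0, 1}, {'S', 0, 2}};
    std::unordered_map<uint64_t, Block> cache;   // keyed by block start address
    Regs regs = {0, 5, 2};

    for (int pass = 0; pass < 3; ++pass) {
        auto it = cache.find(0x1000);
        if (it == cache.end())                   // translate on first visit only
            it = cache.emplace(0x1000, translate(block)).first;
        it->second(regs);                        // later visits skip decode entirely
    }
    std::cout << "r0 = " << regs[0] << "\n";     // three passes of (+5, -2) => 9
    return 0;
}
```

The speed win comes from the cache lookup replacing per-instruction decode on every repeat execution of a hot block, which is exactly where an OS boot or an app spends most of its time.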

Cycle Accurate Models
The CA model has to produce cycle-accurate timing plus functionally correct results on every cycle. A CA model can even be used as a replacement for RTL (Register Transfer Level) code, possibly requiring pin adaptors.

For ARM IP there are also CA models that were created automatically from the RTL code using Carbon software, so you don’t have to do any modeling work; you just use the CA models. At the other end of the speed spectrum ARM also provides Fast Models, which can be interchanged with the CA models.

When you need to do detailed debugging, use the CA models. For pure speed, use the LT models instead.

Virtual Platform with Cycle Accurate and Loosely Timed Models
When you want to develop firmware it’s recommended to keep the processor and memory subsystem modeled as LT for speed, and then use CA models as needed. The limit on the speed of a simulation that combines LT and CA models will be the CA model, because of its higher level of detail.

A new technology called swapping even lets you start out with all LT models, gaining high speed, and then at the point of interest swap in more detailed CA models and continue.

Model swapping requires creating a checkpoint of your system, so each of your models must support checkpointing.
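Below is a hedged C++ sketch of what such a checkpoint-and-swap flow could look like; the CpuModel, Checkpoint, save and restore names are invented for illustration and are not the actual Carbon or ARM APIs.

```cpp
// Illustrative sketch only: the classes and method names here are invented,
// not the actual Carbon/ARM model-swap interface.
#include <cstdint>
#include <iostream>
#include <map>
#include <memory>
#include <string>

// Minimal architectural state that both abstraction levels agree on.
struct Checkpoint {
    std::map<std::string, uint64_t> regs;   // PC, core registers, ...
    std::map<uint64_t, uint8_t>     mem;    // sparse memory image
};

struct CpuModel {
    virtual ~CpuModel() = default;
    virtual void run(uint64_t steps) = 0;
    virtual Checkpoint save() const = 0;            // every model must support
    virtual void restore(const Checkpoint& cp) = 0; // checkpointing to be swappable
};

struct FastLtModel : CpuModel {                     // loosely timed: speed first
    Checkpoint state;
    void run(uint64_t n) override { state.regs["pc"] += 4 * n; }
    Checkpoint save() const override { return state; }
    void restore(const Checkpoint& cp) override { state = cp; }
};

struct CycleAccurateModel : CpuModel {              // cycle accurate: detail first
    Checkpoint state;
    void run(uint64_t n) override { state.regs["pc"] += 4 * n; /* plus pipeline, caches... */ }
    Checkpoint save() const override { return state; }
    void restore(const Checkpoint& cp) override { state = cp; /* caches start cold */ }
};

// Boot fast, then swap to the detailed model at the point of interest.
int main() {
    std::unique_ptr<CpuModel> cpu = std::make_unique<FastLtModel>();
    cpu->run(1000000);                              // e.g. OS boot at LT speed

    Checkpoint cp = cpu->save();                    // capture architectural state
    std::cout << "swap at pc = 0x" << std::hex << cp.regs.at("pc") << "\n";

    cpu = std::make_unique<CycleAccurateModel>();
    cpu->restore(cp);                               // continue with full detail
    cpu->run(10000);                                // region of interest, cycle accurate
    return 0;
}
```

The point of the sketch is the contract: any model that can serialize its architectural state to a common checkpoint format, and rebuild itself from one, can be swapped in mid-simulation. Micro-architectural state such as cache contents typically has to be rebuilt after the swap, which is the limitation mentioned later in this article.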

Different Approaches
Here’s a quick comparison of what Carbon recommends versus other virtual prototype approaches:

|        | Fast Simulation | Best Accuracy               | Fast + Accurate |
|--------|-----------------|-----------------------------|-----------------|
| Carbon | LT models       | CA models                   | Swapping        |
| Others | LT models       | Emulator or FPGA Prototype  | – None –        |

So other approaches give you either fast simulation or accuracy, but never both at the same time. The downside of an emulator is its high cost, and the downside of an FPGA prototype is its limited debug visibility.

Swapping does have some downsides, for example cache contents are not saved across the swap, so stay tuned for more improvements ahead.

Summary
The approach of using both Loosely Timed and Cycle Accurate models together in a virtual prototype for ARM IP is compelling because of the 50-200 MIPS simulation speeds. This approach accelerates software debug, firmware development, architectural exploration, performance analysis and system debug.

Further Reading
10-page White Paper



Apple Makes More on iPhone Than Samsung on Everything
by Paul McLellan on 01-25-2013 at 1:39 pm

Apple’s stock is down 10% after they announced “disappointing” results. They are only disappointing in the sense that some analysts expected even bigger profits. At $13.08 billion it is the largest quarterly profit ever for any corporation outside the oil business. According to Wikipedia, even the oil companies have only ever managed three quarters in which they made a bigger profit.

Samsung announced its preliminary results too. Their profits were up 89% to $8.3B. Sales of the Galaxy S III reached 30 million units five months after it was introduced in May…nice, but Apple sold 48M iPhones (unfortunately analysts expected 50M) since the iPhone 5’s introduction in September, so in 3½ months. That’s twice the rate.

In fact Apple makes so much money on iPhone that its profits on iPhone alone are greater than Samsung’s profits overall. And Samsung are (by revenue) the largest electronics manufacturer in the world. Oh and they have a semiconductor division that must be doing pretty well since they make so many components for…Apple.

Perhaps the most amazing Apple number is that its operating expenses were just $3.2B last year. They took $3.2B and used it to make $40B in profit. Their gross margins are around 40% on everything they sell. They have fewer than 100,000 employees and half of those are in the Apple stores, the most profitable retail per square foot ever seen (more than Tiffany’s for example).

Of course Apple does face a problem. It is so large and so profitable that it is hard to keep growing profits fast, which is why the stock is down. Revenues were up by 18% last quarter compared to 2011. But Apple made over $40B in profit last year. To grow by 18% again means finding another $7.2B in profit. Apple’s net margins are roughly 25% so that means adding about $30B more in revenue. That is a lot. Intel’s annual revenues are $50B for example. The smartphone market and the tablet markets are for sure not saturated, but the steepest part of the growth curve may be over, especially at the premium end. In fact one of the worries that analysts have is that the iPad mini is cannibalizing sales of the iPad not-mini and, of course, delivers less profit per unit. If Apple, as predicted, releases an iPhone nano it will have the same problem. It will probably be a wild success but, at a lower price point, deliver less profit per unit.

All the numbers are not in yet for the entire mobile market, but it may well be the case that, once again, Apple and Samsung between them made more profit than the entire market, as has been the case for the last few quarters, with all the other players aggregated together losing money. Not literally all of them: Lenovo, Sony and Huawei will probably be profitable at least. Even Nokia made money in its last quarter (if you lay off 20,000 employees you can get by with a lot less revenue). Google’s Motorola is still losing money though, and many of the other companies do not have enough volume to be able to make money.

Of course Apple may well find another entire product line. With Apple making over half its money on the iPhone it is hard to remember that it was introduced less than six years ago (June 2007). The long-rumored Apple TV may turn out to be big…or not…or non-existent.

One of the unremarked stories, too, is the switch from WinTel to Mac. Go to any Starbucks and look around. Everyone has a Mac. No-one has a PC. The PC is still king inside corporations but outside, not so much. Go to any web company. Macs everywhere. Graphics, music…all Macs.



A Brief History of Sidense
by Daniel Nenni on 01-24-2013 at 9:00 pm


Sidense Corp. is a leading developer of embedded non-volatile memory (NVM) intellectual property (IP) for the semiconductor IC market. The company is headquartered in Ottawa, Canada, and has a global presence with sales offices worldwide.

The company was founded in 2004 by CTO Wlodek Kurjanowicz, a MoSys fellow and co-founder of ATMOS Corporation. Wlodek saw a need in the market for a more area- and cost-efficient, and better performing NVM solution than provided by existing embedded NVM IP offerings or traditional EEPROM and flash. With this goal in mind, he developed the antifuse-based 1T-Fuse™ one-time programmable (OTP) bit cell, using a single-transistor split-channel architecture, for which Sidense has been granted several patents.

The 1T-OTP macros introduced definite advantages for chip developers by providing a small footprint, secure and reliable NVM solution using standard CMOS logic processes. Sidense quickly gained traction in the market, securing its first design wins for its SiPROM 1T-OTP macros by 2006 for code and encryption key storage in processor SoCs – the inherent security of the 1T-Fuse bit-cell and cost-effectiveness as a field-programmable solution being proven in the market.

By 2008, Sidense marked its 50th customer tapeout with 1T-OTP and had its first customers in production using the technology. Additionally, the early innovative work done by Sidense resulted in several prestigious awards, including a Red Herring Canada 2008 Top 50 Award, a Companies-to-Watch Award in the Deloitte Technology Fast 50 Awards in 2008, and inclusion in the EE Times 60 Emerging Startups list for 2008. As the development and support team grew to support its success, the company moved into larger offices in Ottawa.

Since then, Sidense has broadened its product offering to address NVM needs in different applications across many technologies. As the 1T-Fuse technology provides a small footprint solution, it is compact enough to support field programming of code and data updates, emulating multi-time programmable memory. With fast access time and wide I/O bus configurations, code can often be run from OTP memory without copying to RAM, fitting well with ROM replacement and mobile SoC code-storage applications. To address other trimming and fuse-replacement applications, the company introduced the ULP 1T-OTP macros in 2010, providing ultra-low power operation, fast power-on read of bit settings and small footprints for very power-sensitive analog and mixed-signal designs.

Embedded 1T-OTP for Code Storage

Focusing on customer and foundry requirements for manufacturability and reliability, the company has developed strong relationships with the top-tier semiconductor foundries and key IDMs, working closely to put 1T-OTP products through their qualification programs. In addition to implementing 1T-OTP in leading-edge, small-geometry CMOS logic processes, Sidense has introduced products for high-voltage and power/BCD technologies. These support NVM needs in such applications as power management and analog devices for high reliability and “under the hood” 150°C operation for automotive and industrial subsystems.

The company has grown to support increasing demand for its 1T-OTP products and broad market adoption of the technology. Sidense 1T-OTP is now licensed to more than 100 customers worldwide, including many of the top fabless manufacturers and IDMs, and is on devices in production across all major semiconductor market segments.

A wide range of 1T-OTP macros are now available in many variants at process nodes from 180nm down to 28nm, and the technology has been successfully tested in 20nm. The company’s focus looking ahead is on maintaining a leadership position with NVM at advanced process nodes and solutions focused on customer requirements in the major market segments, including mobile and handheld devices, automotive, industrial control, consumer entertainment, and wired and wireless communications. Sidense will continue to work with its foundry and other partners to develop new products and product enhancements to meet the evolving needs of its customers as they take advantage of migration to the latest generation of advanced process nodes and of new market opportunities.



How to manage decreasing by 70% a $5B IC business in less than 6 years? TI knows the answer…
by Eric Esteve on 01-24-2013 at 9:40 am

TI’s Wireless Business Unit (WBU) was created in the mid-90s to structure the wireless handset chip business built with customers like Nokia, Ericsson and Alcatel. I took a close look at the WBU results: the business grew quickly from $1B in 2000 to about $5B in 2005… and then declined by 70%, down to $1.3B in 2012.

Let’s make it clear: TI is a great company, one that enabled the modern semiconductor industry as we know it today, based on the integrated circuit (invented by Jack Kilby in 1958). TI is also a company I had the great opportunity to work with, able to turn a pure ASIC designer like me into a business-oriented engineer, allowing me to benefit from MBA-level training, an exciting work environment and great colleagues. Today I will just use this market-oriented education to try to understand what the mistake was. No doubt mistakes were made, and precisely understanding the nature of the mistake(s) could help avoid making them again.

In the early 90s, TI’s top customers moved from IBM (DRAM and commodity parts for the PC) to Ericsson, Cisco and Alcatel. If you prefer, the move was from the PC segment to the communication segment, or from a commodity business (DRAM, TTL…) to the so-called Application Specific Product (ASIC, DSP…) business. TI was the leader in DSP, with the TMS320C50 family, essential to support the digital signal processing techniques used in modern communication. TI was lucky enough to have customers like Alcatel, able to precisely specify the ideal DSP core to support baseband processing of the emerging digital wireless standard, GSM. TI management was clever enough to agree to develop the LEAD, a DSP core to be integrated, along with a CPU core from a young UK-based company, ARM Ltd., into a single chip built in ASIC technology, the kind of device we know today as a System-on-Chip. This SoC supported the complex GSM baseband-processing algorithms and the wireless handset applications around them, as in this Ericsson handset model from 1998:

In 1995, all the pieces were already in place: technology (ASIC), IP cores (LEAD DSP and ARM7 CPU) and customers (Ericsson, Nokia, Alcatel and more). By the way, at that time TI’s Dallas-based upper management was not aware that the Nice (south of France) based European DSP marketing team, headed by Gilles Delfassy and including Edgar Auslander and Christian Dupont, had started deploying this wireless strategy! The team was clever enough to wait until the market had clearly emerged, and the business figures had become significant, before asking for massive funding. They literally behaved as if they were running a start-up, except the start-up was nested inside a $6B company!

Gilles Delfassy: President at Delfassy Consulting, Founder and GM (retired 2007) of Texas Instruments Wireless Business

Edgar Auslander: VP at Qualcomm, responsible for the QMC (Qualcomm Mobile and Computing) product roadmap

Louis Tannyeres: CTO (TI Senior Fellow) at Texas Instruments

Remark: their respective titles are their current LinkedIn titles; a couple of years ago Gilles Delfassy was CEO, Edgar Auslander VP of Planning and Strategy and Louis Tannyeres SoC Chief Architect, all of them at ST-Ericsson.

The WBU was officially created in 1998 and was then weighing in at a couple of hundred million dollars. The strategy was good, the market was exploding and the design team was growing quickly, with people like Louis Tannyeres joining as technical guru (Louis is a TI Senior Fellow), allowing TI to support as many customers as it could eat, including Motorola (TI’s historical competitor) in 1999. It was no surprise that the WBU reached the $1B mark in 2000. At that time TI was offering a very complete solution: digital baseband modem and application processing, plus several companion chips (RF, power management, audio…), but let’s concentrate on the first of these. TI was the first company to introduce a wireless-dedicated SoC platform, OMAP, in 2002. Please note that this was possible because TI could offer both the digital baseband modem and the application processor, integrated into a single chip…

Again this was a winning strategy, explaining why the WBU grew from $1B in 2000 to $4.4B in 2005. In the meantime CDMA had emerged and was competing with GSM, as precisely described in this brilliant post from Paul McLellan. Every company developing a CDMA-compliant modem chip had to pay a license fee to Qualcomm, and Qualcomm was necessarily a competitor to that chip maker. Not a very comfortable position, but CDMA opened up most of the US wireless market, so the choice was between getting almost no revenue in the US (and a few other countries)… or negotiating with Qualcomm. The picture below, showing how much Samsung paid Qualcomm in CDMA royalties during 1995-2005, clearly indicates that the CDMA royalty level is far from negligible!

2005-2006 was the apogee of TI’s WBU. In 2006, Qualcomm’s revenue from QCT (the equivalent of the WBU: chip business only, licenses excluded) was $4.33B while TI’s WBU was above $5B. TI was already selling the OMAP platform into the wireless market, while Gobi and Snapdragon were still two years from launch. TI was still the leader, even if it was clear that Qualcomm was becoming stronger year after year. The only smartphones available on the market were from Nokia, and the iPhone had not yet been launched. Then, in 2007, two events happened: TI decided not to develop baseband modems any longer, and Gilles Delfassy retired…

If I remember well, the “official” reason to stop new modem development was that “the digital baseband modem was becoming a commodity product”. Commodity means that the only differentiation is on product price, across large production volumes. Maybe Samsung, still fighting to launch an efficient 3G modem, would like to comment on the commodity nature of the digital baseband modem? In fact, the real reason was that TI was unable to release a fully working, 100% at-spec 3G modem.

This sounds like a tale from La Fontaine, “Le Renard et les Raisins” (“The Fox and the Grapes”). In the tale, a fox tries to reach some grapes. Unfortunately, the grapes hang too high for him, so he fails to catch (and eat) them. Very disappointed, but proud, he says: “Ils sont trop verts, et bon pour les goujats”, meaning “they are too green, and only good for boors”. Just like TI upper management saying that “the 3G modem IC is a commodity market that we prefer not to attack”…

Moreover, even such a customer might hesitate before choosing TI when the competition (Qualcomm, Broadcom, STM) can offer a roadmap in which the next generation is based on a cheaper solution: a single chip integrating the AP and the modem. What is even more dramatic is that TI took this decision in 2007; if you remember, that was precisely the year Apple launched the first iPhone, creating the smartphone market segment. Smartphone shipments reached a record in 2012, with more than 700 million units shipped. But TI decided to concentrate on the analog business, and not on the wireless handset segment any more. Who said that analog was a commodity business? I remember: it was during one of the marketing trainings I had during my TI days…

By the way, any idea of TI’s WBU revenue in 2012? It was $1.36B, or 30% of the 2006 revenue. This trend could have been projected back in 2007, when TI decided to stop developing digital modems for the wireless market, wasting 10+ years of R&D and business development effort.

From Eric Esteve from IPNEST



The Linley Microprocessor Conference: Weather Cloudy
by Paul McLellan on 01-23-2013 at 7:51 pm

The Linley Group’s microprocessor conference in the spring is focused on datacenters, now that cloud computing and massive internal datacenters have made them so important. The conference is on February 5th and 6th. It is free to qualified people such as network equipment vendors, network service providers and so on (which doesn’t include employees of semiconductor or EDA companies, who have to pay).

The keynote on the first day at 9am is on Datacenter and Cloud Computing by Jag Bolaria and Bob Wheeler of the Linley Group.

On the second day, also at 9am, the keynote is Software Defined Networks by Howie Xu, who is VP of Engineering at Big Switch Networks.

At 2:25pm on the 6th is what looks like it will be an especially interesting session. The official topic is Designing Power-Efficient Servers but unofficially (that is to say imho) it is really about whether ARM will make it in the server world or whether that will remain an Intel walled city. As with all the sessions, the presentations will be followed by a panel discussion. The three presenters and the titles of their talks are:

  • The Truth about ARM-based Servers: Performance and Power Measurements by Karl Freund, Vice President of Marketing, Calxeda
  • Architecting a Cloud Server for Evolving Data Center Workloads by Chris Bergen, Director of Technology, Applied Micro Circuits
  • Server Processor Landscape by Linley Gwennap, The Linley Group

One bit of bad news is that the first day of the conference coincides with the Common Platform Technology Forum. The good news is that the Linley conference has moved to the Hyatt hotel (it used to be in the Doubletree), so it is possible to sort of attend at least some of both, since the CPTF is in the Santa Clara Convention Center attached to the hotel.

Details of the conference are here. For anyone trying to work out if they can see what they want at both the Common Platform Technology Forum and the Linley Data Center Conference, the CPTF detailed agenda is here. I, for one, will be back and forth between the two.



You may want to check that known-good RTL
by Don Dingee on 01-23-2013 at 1:00 pm

In his blog Coding Horror, Jeff Atwood wrote: “Software developers tend to be software addicts who think their job is to write code. But it’s not. Their job is to solve problems.” Whether the tool is HTML, C, or RTL, the reality is that we are now borrowing or buying more software IP than ever and integrating it into more complex designs, and new issues are emerging.

Continue reading “You may want to check that known-good RTL”



Using IC Data Management Tools and Migrating Vendors
by Daniel Payne on 01-23-2013 at 10:50 am

Non-volatile memory is used in a wide variety of consumer and industrial applications and comes in an array of architectures like Serial Flash and CBRAM (Conductive Bridging RAM). I caught up with Shane Hollmer by phone this week to gain some insight into a recent acquisition of Atmel’s serial flash components, and how that affected their EDA tool flow for IC data management.


Continue reading “Using IC Data Management Tools and Migrating Vendors”



Verdi: No Requiem for Openness
by Paul McLellan on 01-22-2013 at 8:10 pm

I sat down last week for lunch with Michael Sanie. Mike and I go back a long way, working together at VLSI Technology (where his first job out of school was to take over the circuit extractor that I’d originally written) and then in strategic marketing at Cadence. Now Mike has marketing for (almost?) all of Synopsys’s verification products.

Of course, post the SpringSoft acquisition, that includes Verdi. I’ve written about Verdi before, most recently when they announced Verdi³ and when they announced VIA, the Verdi Interoperability Apps. Verdi is probably the industry’s most widely used debug system, found in verification groups everywhere. Historically it has been a very open system, not restricted to any one verification environment (since SpringSoft didn’t have their own simulators, emulators etc this wasn’t really an option anyway).

With Synopsys acquiring SpringSoft there was a worry among users and across the industry as to whether Verdi would continue to be an open debug platform, or whether Synopsys would limit the interfaces to Synopsys tools only and cut out, for example, the interface to Cadence’s Palladium. This is especially important since the release of Verdi³, because much of what was new in that release was a much more open infrastructure:

  • new user-interface and personalization capabilities
  • open platform for interoperability and customization
  • new infrastructure for performance and capacity improvements

Well, Synopsys has no such plans to restrict Verdi to Synopsys’s own verification tools. It will continue Verdi’s traditional stance of complete openness (FSDB, the interfaces and the Verdi Interoperability Apps, still at www.via-exchange.com). In fact Synopsys is going out of its way to communicate this to the industry, even running an ad campaign since last November. All previous user flows that use Verdi with simulators, emulators, formal verification tools, model checking engines, FPGA prototyping…all of these will continue to be there.

Another interesting product that Synopsys acquired with SpringSoft is Certitude (which I have also written about before). This is a tool that gives feedback on just how good (or not) your verification suite is. Unlike code coverage and other similar static techniques, Certitude works by injecting bugs into your design and then seeing just how many of them your verification flow manages to detect.

Of course, for reasonable sized designs it is never possible to exhaustively simulate or even formally verify the whole design, so it remains a judgement call whether “enough” testing has been done on the RTL. But Certitude gives an objective measure of stimulus and checker completeness to support this signoff decision, along with pointers to specific holes to accelerate the closure process by directing incremental efforts to the areas requiring additional attention.
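The principle is easy to sketch. The toy C++ fragment below is purely illustrative, with made-up names, and is not how Certitude itself is implemented: seed known faults into a reference model of the design, run the existing checks against each faulty variant, and count how many faults slip through undetected.

```cpp
// Toy sketch of the functional-qualification idea behind tools like Certitude
// (not the tool itself): inject known faults into a model of the design and
// measure how many of them the existing checks actually catch.
#include <cstdint>
#include <functional>
#include <iostream>
#include <string>
#include <utility>
#include <vector>

// "Design": a 4-bit saturating adder standing in for the DUT's reference model.
uint8_t sat_add4(uint8_t a, uint8_t b) {
    unsigned s = (a & 0xFu) + (b & 0xFu);
    return s > 0xF ? 0xF : static_cast<uint8_t>(s);
}

// The verification environment: whatever checks the testbench already runs.
bool test_suite(const std::function<uint8_t(uint8_t, uint8_t)>& dut) {
    if (dut(1, 2) != 3) return false;       // basic add
    if (dut(0, 0) != 0) return false;       // zero case
    // Note: nothing exercises saturation -- a hole the injected faults expose.
    return true;
}

int main() {
    // Sanity check: the unmodified design passes its own test suite.
    std::cout << "reference model passes: " << std::boolalpha << test_suite(sat_add4) << "\n";

    // Injected faults: each one is a deliberately broken variant of the design.
    std::vector<std::pair<std::string, std::function<uint8_t(uint8_t, uint8_t)>>> faults = {
        {"add becomes subtract", [](uint8_t a, uint8_t b) { return static_cast<uint8_t>((a - b) & 0xF); }},
        {"saturation removed",   [](uint8_t a, uint8_t b) { return static_cast<uint8_t>((a + b) & 0xF); }},
    };

    int detected = 0;
    for (const auto& [name, faulty] : faults) {
        bool caught = !test_suite(faulty);  // a good suite should fail on a buggy design
        std::cout << (caught ? "DETECTED: " : "MISSED:   ") << name << "\n";
        detected += caught ? 1 : 0;
    }
    std::cout << detected << "/" << faults.size() << " injected faults detected\n";
    return 0;
}
```

In this toy run the “saturation removed” fault escapes because no test exercises the saturating case, which is exactly the kind of stimulus and checker hole that this style of analysis points you to.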

Recently Synopsys hosted a webinar on Certitude which is available for download here.



How We Got Here…
by Paul McLellan on 01-22-2013 at 12:54 pm

Over at the GSA Forum website I have an article on the history of the semiconductor industry. It is actually based on a couple of brief-history-of-semiconductors blog posts (here and here) that I published here on SemiWiki last year, but edited down a lot and tightened up.

Since the start of the year seems to be the time for predictions, here are the last couple of paragraphs, which are a look to the future. No surprises here for anyone who has been reading my stuff: I’m not as optimistic as some people.

Looking to the future, Moore’s law is under pressure. Not from a technical point of view; it is clear that it is possible to go on for many process nodes. But from an economic point of view it is not clear that the cost to manufacture a million transistors is going to come down.

One major challenge is that, for the foreseeable future, multiple masks are needed to manufacture some layers of the chip, pushing up costs. Extreme ultra-violet lithography (EUV) is a possible savior, but there are so many issues that it probably will not be ready until the end of the decade. Bigger 450mm (18-inch) wafers are another possible driver to bring down costs, but they are also years away.

So it is possible that the exponential cost reduction that has driven electronics for decades is coming to an end. Electronics will still gain more capability, but may not get cheaper and cheaper in the way we have become accustomed to.

The GSA Forum website is here. My article is here. You can download the entire December 2012 issue of GSA Forum here (pdf).