Asian Embargoes

by Paul McLellan on 09-07-2013 at 8:00 pm

[This blog embargoed until 10am China time]

An interesting thing happened to me this week. I had two press briefings. No, that wasn't the interesting thing; if you have ever sat through a press briefing you will not regard them as recreation. I do it for you, SemiWiki readers, even when, as this week, the briefings are given by friends. But there was something different about these two briefings.

You will see the blogs on Monday evening here, which is Tuesday morning in Asia; consider this a sort of teaser. This is the first time, and I'm sure not the last, that the embargo time is early in Asia as opposed to early in the US. A typical embargo lifts at 5am in California, 8am in New York, before 'the' markets open, meaning the ones on Wall Street. Nowhere else really matters.

One of these announcements is about strategy in Asia and one coincides with a user group meeting in Asia, so you can't read too much into it. But on the same day I got two announcements with Asian embargo times. That has never happened before; in fact, I can't remember ever seeing a press release with an Asian embargo.

Like traffic on 101 (sorry, non-Silicon-Valley readers), something easy to observe may be a proxy for the health of the economy in general, or for a big trend that is just getting started. Or it may be nothing. My other proxy for business in the valley is Birks in Santa Clara. If you have no idea where it is, it's in those pink towers beside 101 at Great America Parkway. If you can get a reservation within a couple of days, Silicon Valley is not in good shape; if you need to wait two weeks, the valley is booming. Right now, on the Friday as I write, you can get a reservation for Monday, for two people but not for four. Half-booming. During the downturn, both Birks and Parcel 104 added lots of cheap items to their menus. Not good. But you can't get a hamburger there any more. If you want the best hamburger going, go to Zuni Cafe on Market near where I live in San Francisco, though it is only available at lunch or after 10pm. The perfect finish to an evening.


SpyGlass: Focusing on Test

by Paul McLellan on 09-07-2013 at 5:51 pm

For decades we have used a model of faults in chips that assumes a given signal is stuck-at-0 or stuck-at-1. And when I say decades, I mean it: the D-algorithm was invented at IBM in 1966, the year after Gordon Moore made a now very famous observation about the number of transistors on an integrated circuit. We know that stuck-at faults are not the best model for what can go wrong on an IC, but they work surprisingly well. If you can detect whether every signal on a chip is stuck, it turns out you can also detect a lot of other things that might go wrong (such as two signals bridged together).

But one particularly problematic area is detecting transition faults. These are faults that cause a signal to transition slowly, due to excessive resistance or capacitance, or perhaps one of several transistors in parallel being faulty: slow-to-rise and slow-to-fall faults. If test is not run at the normal clock rate these may go undetected, since at the slower speed the circuit behaves correctly. These first became a big problem back in the era when manufacturing test used functional vectors, and running those at speed would automatically pick such faults up. But that approach ran out of steam, and today all chips are tested using some sort of scan-test methodology, during which the chip is in a special mode even further from its normal behavior. Scan testing covers these faults by setting up an initial condition, pulsing the clock twice and latching the result, or similar approaches that exercise the transition at speed.

It turns out that most faults are easy to detect, but a few hard-to-detect faults can make a big difference to either the fault coverage or the ATPG run time (the tool has to work very hard on the hard-to-detect faults; otherwise, by definition, they wouldn't be hard to detect). We can make the concept of "hard to detect" more rigorous with the idea of random resistance. No, this isn't resistance in the resistance-capacitance-inductance sense. It is a measure of how likely a fault is to be detected by a set of random vectors. If almost any set of random vectors will detect the fault, it is easy to detect; if very few (or even no) random sequences detect it, it is hard to detect. A design (or a block) is random resistant if random vectors do not automatically achieve high fault coverage, because it contains too many hard-to-detect faults.
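
To make that concrete, here is a minimal sketch (my own illustration, not Atrenta's algorithm) that estimates the random-pattern detection probability of an output stuck-at-0 fault on a wide AND gate, a classic random-resistant structure: the wider the gate, the fewer random vectors ever produce the all-ones input needed to expose the fault.

```python
import random

def wide_and(bits):
    # N-input AND gate: output is 1 only if every input is 1
    return int(all(bits))

def detect_probability(n_inputs, trials=200_000):
    """Fraction of random vectors that detect output stuck-at-0:
    the good circuit must output 1 while the faulty one outputs 0."""
    detected = 0
    for _ in range(trials):
        vector = [random.randint(0, 1) for _ in range(n_inputs)]
        good = wide_and(vector)
        faulty = 0                 # the output is stuck at 0
        if good != faulty:         # this vector exposes the fault
            detected += 1
    return detected / trials

print(detect_probability(4))    # ~1/16: easy to detect
print(detect_probability(16))   # ~1/65536: random resistant
```

With 16 inputs the detection probability is about 0.0015%, which is why ATPG has to target such faults deliberately rather than rely on chance, and why they blow up the vector count.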

ATPG is done late in the design cycle, so we don't want to discover test problems then, when it is very expensive to do anything about them (changing the RTL after place & route, known as an ECO, is orders of magnitude more time-consuming than fixing the RTL before starting physical design). What we would really like is a tool at the RTL level that tells us when we are creating hard-to-detect faults, so that we can change the RTL to remove them. Atrenta's SpyGlass DFT DSM product (SDSM from now on; the full name is too much of a mouthful) is such a tool.


SDSM gives feedback on four aspects of the design at RTL (a sketch of how such probabilities can be estimated follows the list):

  • the distribution of nodes (in sorted order) by probability of control to 0
  • the distribution of nodes by probability of control to 1
  • the distribution of nodes by probability of observation
  • the distribution of nodes by probability of detection
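
These probabilities can be estimated without simulating any vectors, in the spirit of the classic COP (controllability/observability program) testability measures. The sketch below is again my own illustration rather than anything SDSM-specific: it propagates the probability of each node being 1 through a couple of gates and combines controllability with observability into a detection probability.

```python
# COP-style testability estimates for a tiny netlist (illustrative only).
# c1[n] is the probability that node n is 1 under random inputs;
# the probability of controlling n to 0 is simply 1 - c1[n].

def and_c1(input_names, c1):
    # An AND output is 1 only if all of its inputs are 1
    p = 1.0
    for name in input_names:
        p *= c1[name]
    return p

def or_c1(input_names, c1):
    # An OR output is 0 only if all of its inputs are 0
    p = 1.0
    for name in input_names:
        p *= 1.0 - c1[name]
    return 1.0 - p

c1 = {f"i{k}": 0.5 for k in range(8)}              # primary inputs: p(1) = 0.5
c1["sel"] = 0.5                                    # side input of the OR gate
c1["a"] = and_c1([f"i{k}" for k in range(8)], c1)  # 8-input AND feeds node a
c1["o"] = or_c1(["a", "sel"], c1)                  # OR of a with sel

# Observability of a through the OR gate: the side input must be 0.
obs_a = 1.0 - c1["sel"]

# Detecting stuck-at-0 at a means controlling it to 1 AND observing it.
det_a_sa0 = c1["a"] * obs_a
print(f"control-to-1 at a: {c1['a']:.4f}")    # 0.0039: hard to control
print(f"detect s-a-0 at a: {det_a_sa0:.5f}")  # 0.00195: hard to detect
```

A real tool has to handle reconvergent fanout, sequential elements and much more, but the principle is the same: the nodes at the bottom of these distributions are where test points earn their keep.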


This allows the designer to quickly zoom in on the blocks where coverage is low. SDSM can then display a sort of thermal map showing where the hard-to-detect faults are hiding: typically in places with very wide logic cones (forcing ATPG to generate a huge number of vectors to cover all the possibilities) or similar structures. These are usually not hard to change once identified, or can simply be fixed by adding test points.

SDSM can identify both scannability and low fault coverage issues early and help designers fix problems without requiring iterations during the implementation flow.

Atrenta’s new white paper on Analysis of Random Resistive Faults and ATPG Effectiveness at RTL is here.

If you are attending ITC in Anaheim, Atrenta is at booth 306.


Why I dumped my iPhone5 for a Samsung S4!

by Daniel Nenni on 09-07-2013 at 5:00 pm

A good friend and dog-walking partner was on the Apple/Android smartphone fence last year, so I pushed him over to Apple, and the result was the infamous "8 Reasons Why I Hate My iPhone5" blog. After months of complaining I bought him a Samsung S4 and gave his iPhone5 to my very appreciative wife, so all's well that ends well. Maybe.

During our frequent walks on the Iron Horse Trail we sometimes have smartphone contests. Voice control is everything to us. Watching people furiously trying to type on smartphones cracks us up, and the autocorrect blunders are just hilarious. This morning Siri won when we asked our phones what sex the author Lee Childs is (male). In general, though, Android beats Siri on voice commands.

Apple Maps, however, is a big loser in all of our contests. Google Maps is unbeatable; not even our new car navigation systems come close, so I don't see that changing anytime soon. According to my friend:

Several weeks ago I was picking up my son at the Dublin BART station. I made the mistake of using Apple Maps, as it is so well integrated with Siri. It actually understood me when I said, "Navigate to Dublin BART". Unfortunately Siri told me that no exit off 580 was required. It told me I had arrived as I was passing the station that is between the East and West bound lanes. I quickly jumped out my window and was able to meet my son on time.

Now he is trying to push me over to Android since my two year contract is up and the new Apple phones look “uninspiring”. His Google centric arguments include:

If you don’t use Google Voice you are missing out. You can easilyforward your calls to multiple phones, you can block calls, you can getvoicemail transcribed – all sorts of good stuff. However, the iPhonedoesn’t enable Google Voice to use your Google Voice number for outgoingcalls. My real cell number, which I don’t want people to see, was alwaysused by the iPhone. Now that I’m back in the Android fold, my GoogleVoice numbers is used for outgoing calls, as it should.

I save myself a ton of time by using Google contacts. It's well integrated with Gmail and Google Voice. Guess what? It's not integrated with the iPhone. It made me crazy not being able to "call Dan" unless I explicitly added him as a contact on my iPhone. With Android and Google all the syncing between your life on your Windows PC and your life on the phone happens for free. I like MacBooks but they aren't used at work, so they are generally more trouble than they are worth.

If you haven’t checked out Google Now, it’s a treat. Remember Scott McNealy, one of the founders of Sun Microsystems? He famously stated, “Thereis no privacy on the internet, get over it”. Once you accept that factyou’ll appreciate Google Now snooping through your online life,including email, and making excellent recommendations. It will tell youwhen shipments you ordered from Amazon are enroute, if your plane isleaving on time, how far you are from home – in traffic, all sorts ofgood stuff WITHOUT YOU ASKING!

He is also one of the many people who help with SemiWiki so I started him on an iPad2 three years ago and later upgraded him to an iPad3 and an iPad Mini. Other than the questionable battery life of the iPad3, he has a great respect for Apple tablets but now wants a Samsung to match his phone. As soon as it arrives his iPad3 will go to my very appreciative wife.

Hopefully the new iPhone5s and iOS7 releases will help in our smartphone battles and make him regret his defection. If not, Android here I come!



Base Stations Move Away From Fixed Architecture DSP

by Paul McLellan on 09-06-2013 at 1:59 pm

Handsets moved away from fixed architecture DSPs some time ago, driven by two main factors: fixed architecture DSPs consumed too much power to get good battery life in the smartphone era, and the air interface was changing fast, with W-CDMA, HSPA, WiMAX, 3G and LTE (which is actually a whole 'spectrum' of different standards), making it too difficult to use a non-programmable solution. Base stations haven't had such severe power constraints, so they have stuck with fixed architecture DSPs for longer. That is now changing. Base stations have gotten a lot smaller: not just those huge antenna thingies you see at the side of the freeway, but smaller ones inside conference centers and sports stadiums, moving towards picocells that offload even smaller areas.

With Nokia selling its handset business to Microsoft, what remains is mainly NSN (they also kept the old Navteq mapping business, now rebranded as Here). Ericsson is still #1 in LTE by a wide margin, followed by Alcatel-Lucent. But despite these European names, the really big deals are in China, with companies like Huawei and ZTE.

China Mobile has been announcing the tenders for the rollout of its 4G network. It is hard to grasp just how large China Mobile is: it has 700M customers, more than all the US networks put together, and in fact twice the US population. Many of the handsets are made by Chinese companies you've never heard of, or by ones you have, like Huawei and ZTE. The high-end Samsung phones and the iPhone are too expensive for most of the market.

One of the big winners in the base station tenders is ZTE. They won a big part of the entire China Mobile deployment, and they have also just announced that they have chosen the CEVA-XC for wireless infrastructure. This is another example of the switch away from fixed architecture DSPs, such as those from TI and Freescale, for base stations, following the path trodden by handsets. It is a big market: of the $3.2B of awards China Mobile has announced, over a quarter goes to ZTE. By units it is a smaller market than handsets, but the margins are higher.

Architectures like CEVA's are a sort of hybrid between fixed architecture DSPs and developing RTL specially for each new technology. The fixed architecture DSPs often require too many cores and dissipate too much power; the custom RTL approach is too slow and inflexible, especially when multiple air interfaces need to be supported. The sweet spot is an architecture optimized for building modems, getting flexibility and programmability from a VLIW architecture with multiple execution units while keeping power low by not trying to be completely general purpose: the CEVA-XC Soft Modem Engine.

Details on the CEVA-XC family are here.


Ecosystem: ARM versus Intel

by Daniel Nenni on 09-05-2013 at 2:45 pm

Ecosystem is everything when it comes to modern semiconductor design, especially if it is mobile. The fabless semiconductor industry has been all about ecosystem since the beginning and that is why we hold supercomputers in our hands today, believe it. After the invention of the transistor in 1947, and the invention of the integrated circuit in 1959, the fabless semiconductor ecosystem started to evolve into what it is today, a force of nature.

The semiconductor business transition started with the emergence of the ASIC (Application Specific Integrated Circuit). Electronic systems companies refused to be limited by the general purpose semiconductors of that era and started doing design work in-house. Back in the day, companies such as VLSI Technology and LSI Logic made billions of dollars making ASICs. In fact, this is how Apple got started as a fabless semiconductor company: they did ASICs with Samsung for the first generations of iProducts.

Next came programmable devices (FPGAs) from the likes of Xilinx and Altera. An FPGA is literally a box of Legos in which you can integrate IP blocks with custom design work in much less time and for much less money. As you can imagine, the Xilinx ecosystem of tools, IP, and design partners is key to its market domination. Xilinx was also one of the first fabless semiconductor companies, which brings us to the next and probably the most disruptive phase of semiconductor history: the fabless semiconductor ecosystem.

TSMC started it with what is now called the Open Innovation Platform, investing hundreds of millions of dollars in silicon-proven IP, reference design flows, and a network of services partners around the world. If you want to know why TSMC commands such a large market share today, it is all about the ecosystem, absolutely. This brings us to the point of this blog: take a close look at the details of the upcoming Intel Developer Forum and the ARM Technical Conference:

IDF is the leading forum for anyone using Intel® Architecture or Intel® technologies to change the world. And this year, it's more technical than ever. This is where developers, engineers, technology managers, and business leaders from across the industry can meet, share ideas, and learn about Intel's latest developments.

ARM TechCon™ is one of the fastest growing events in the industry. In 2012, over 4000 hardware and software engineers attended the three-day conference. The event, supported by over 85 Connected Community Partners, provides 140 hours of presentations and tutorials aimed at enabling you to optimize your ARM IP-based design. The show floor features product demonstrations and hands-on workshops fostering the perfect networking environment to Connect, Collaborate and Create future ARM Powered® devices.

I will be attending both events again this year and will do a closer comparison afterwards but based on last year and the current promotional materials, Intel still does not seem to get the whole ecosystem thing. IDF is all about Intel and ARM TechCon is all about the ecosystem, which is why ARM commands such a large market share and will continue to do so in the coming years. Just my opinion of course.



3D: the Backup Plan

by Paul McLellan on 09-05-2013 at 1:20 pm

With the uncertainties around the timing of 450mm wafers, EUV (whether it works at all, and when) and new transistor architectures, it is unclear whether Moore's Law as we know it is going to continue, and in particular whether the cost per transistor is going to remain economically attractive, especially for consumer markets that are very price sensitive.

One of the most important alternative approaches is 3D chips based on through-silicon vias (TSVs). This is one of the focus areas of Semicon Taiwan, which is taking place this week. It is also a topic that Karen Savala, the president of SEMI Americas, will address in her keynote at the upcoming 2013 MEPTEC Roadmaps Symposium on September 24 in Santa Clara. MEPTEC is the Microelectronics Packaging and Test Engineering Council.

Although many companies have some sort of interposer or 3D stacking technology on their roadmaps, actual adoption for production manufacturing is slow. Gartner estimates that TSV adoption for memory will be pushed out to 2014 or 2015, with non-memory applications delayed to 2016 or 2017, if then. They currently forecast that TSV devices will account for less than five percent of the units in the total wafer-level packaging market by 2017.

Part of the problem is a lack of cooperation across the industry as to which technologies should be introduced when. It looks like a repeat of the 300mm wafer transition, where the industry couldn't agree on when to introduce 300mm production and stop advanced development at 200mm, and it couldn't afford to do both. As a result there were several false starts, and hundreds of millions of dollars were lost. For 450mm there are lots of consortia for collaborative R&D, probably the most important being G450C, which is backed by TSMC, Intel, GlobalFoundries, Samsung and IBM and is well enough financed to have its own fab.

For 3D-IC to be widely adopted, meaningful collaboration throughout the value chain still needs to occur. Part of the problem is that it is not even clear which parties in the value chain should be doing which steps in the manufacturing. All the players have an existing business model that must be defended or exploited based on what technical discoveries occur and what customers eventually turn out to want. It is natural that the fabless companies, foundries and OSAT houses should want to make their piece of the pie as big as possible, but without deep collaboration there won’t be a pie to divide up.

As Karen concludes: "We'll continue to see discoveries, inventions and new products in 3D-IC, and progress will continue. Hundreds of patents in the area have already been issued. We're seeing innovation and invention in wafer bonding, via manufacturing, and other areas. Standards work at JEDEC and SEMI will also contribute to the market's development, both to enable processes and to cost-reduce manufacturing. But without the emergence of a new, robust collaboration model that can deliver meaningful agreements between key constituencies, the promise of 3D innovation will remain distant and elusive."

Karen’s thoughts on 3D collaboration are online here. Details of the 2013 MEPTEC Roadmaps Symposium are here.


Did you miss Cadence’s MemCon?

by Eric Esteve on 09-05-2013 at 4:42 am

That's too bad, because you missed the latest news about the Hybrid Memory Cube (a presentation by Micron), the Wide I/O 2 standard, and other standards like LPDDR4, eMMC 5.0 and LRDIMM. The good news is that you can find all these presentations on the MemCon proceedings web site.
I first had a look at Richard Goering's excellent blog on Wide I/O and the Memory Cube, and then at the HMC presentation by Mike Black from Micron. HMC is an amazing technology, and the comparison table shown by Micron helps explain why:

  • Channel complexity: HMC is 90% simpler than DDR3, using 70 pins instead of… 715 pins.
  • Board footprint: HMC occupies 378 mm² of board space instead of 8,250 mm² for DDR3!
  • Energy efficiency: 75% better than DDR3.
  • Bandwidth: HMC delivers 857 MB/s per pin, compared with 18 MB/s per pin for DDR3 and 29 MB/s per pin for DDR4.

What is the secret sauce behind such amazing performance? Once again, it's that the protocol uses a very high speed, SerDes-based serial link instead of parallel data transfer, just as PCI Express replaced PCI, SATA replaced PATA, and so on. Except that this link is defined at speeds between 15 Gbps and 28 Gbps: more than 3X the per-lane bandwidth of PCIe gen-3 (8 Gbps) and more than 4X that of SATA III (6 Gbps). To be honest, I am not completely surprised to see the emergence of such a high speed serial link protocol for DRAM; I rather think the memory side of the semiconductor industry has been late compared with the rest of the industry. PCI Express was defined in 2004, as was SATA, and the Ethernet protocols are even older. Nevertheless, I completely applaud, as HMC is expected to be a real revolution in many electronic industries, like computing, networking and servers. By the way, don't expect your smartphone or tablet to be HMC equipped… the 3D-IC form factor will prevent these devices from using HMC, see the picture below:

The HMC typically includes a high-speed control logic layer below a vertical stack of four or eight TSV-bonded DRAM dies. The DRAM handles data only, while the logic layer handles all control within the HMC. In the example configuration shown at right, each DRAM die is divided into 16 cores and then stacked. The logic die is on the bottom and has 16 different logic segments, with each segment controlling the DRAMs that sit on top. The architecture uses “vaults” instead of memory arrays (you could think of these as channels).
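
The headline bandwidth falls out of simple arithmetic on the SerDes lanes. Here is a back-of-the-envelope sketch; the lane count per link and the number of links are my assumptions based on published HMC material, not figures taken from the presentation:

```python
# Back-of-the-envelope HMC link bandwidth (illustrative assumptions).
LANES_PER_LINK = 16    # full-duplex SerDes lanes per link (assumed)
LANE_RATE_GBPS = 15    # low end of the 15-28 Gbps range quoted above
LINKS = 4              # a common HMC configuration (assumed)

# Each lane carries traffic in both directions, so count TX + RX,
# then divide by 8 to convert gigabits to gigabytes.
gbytes_per_link = LANES_PER_LINK * LANE_RATE_GBPS * 2 / 8
total = gbytes_per_link * LINKS
print(f"{gbytes_per_link:.0f} GB/s per link, {total:.0f} GB/s aggregate")
# -> 60 GB/s per link, 240 GB/s aggregate (raw bit rate, before
#    protocol overhead such as packet headers and CRC)
```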

The HMC was originally designed by Micron and is now under development by the Hybrid Memory Cube Consortium (HMCC), which is currently offering its 1.0 specification for public download and review. The HMCC includes eight “developer members” – Altera, ARM, IBM, Micron, Open-Silicon, Samsung, SK-Hynix, and Xilinx – and many “adopter members” including Cadence. I will not reproduce the adopter list, as it’s too long to fit here, as more than 110 companies are part of the consortium so far!

In addition to Wide I/O 2 and HMC, Cadence is announcing memory model support for these emerging standards:

LPDDR4 – Promises 2X the bandwidth of LPDDR3 at similar power and cost points. Lower page size and multiple channels reduce power. This JEDEC standard is in balloting, and mass production is expected in 2014.

eMMC 5.0 – Embedded storage solution with a MMC (MultiMedia Card) interface. eMMC 5.0 offers more performance at the same cost as eMMC 4.5. Samsung announced the industry’s first eMMC 5.0 chips July 27, 2013.

LRDIMM – Supports DDR4 LRDIMMs (load-reduced DIMMs) and RDIMMs. This standard is mostly used in computers, especially servers.
Cadence memory models support all leading simulators, verification languages, and methodologies. “We’re involved early on in the standards development,” Jacobson noted. “We are out there developing third-party models early. We work closely with vendors to get the models certified. If you’re looking for a third-party solution for memory models, that’s what we do.”

I have extracted a very interesting picture from Martin Lund's introduction, as it can help analysts understand how new memory standards are adopted, and it shows that the "old" standards (DDR1 or DDR2) are not vanishing so fast. Be careful, this is a log scale!

Just one last point: IPNEST is taking a close look at the Verification IP market these days, and I had a look at the various memory standards supported by Cadence, or rather the associated memory models that the company provides… that's also a pretty long list, as you can see:

Eric Esteve from IPNEST



Real Time Concurrent Layout Editing – It’s Possible

by Pawan Fangaria on 09-03-2013 at 2:00 pm

Layout editing is a complex task, traditionally done manually by designers, and layout design productivity largely depends on the designer's skills and expertise. However, a good tool with features for ease of design is a must. Layout productivity has long been an area of focus, and features are constantly being added to layout editing tools so that designers can draw layout quickly. While that continues, there is yet another dimension to layout productivity. With the advent of SoCs, deep-submicron designs and varied functionality integrated on a single chip, layout design is no longer a job for a few designers. It needs a substantial team of designers working on different parts of a layout, with frequent synchronization between them. That synchronization is itself a time-consuming process and needs attention. It becomes extremely critical at tape-out time, when chip finishing is done on the entire top-level layout.

While reviewing the Mentor Graphics Pyxis Layout Suite, I came across "Pyxis Concurrent"; what an excellent idea! I was amazed by the on-line demo (link at the end). Mentor has rightly and pro-actively identified the need for multiple designers to edit different parts of the same cell, and has enabled them to do it concurrently, hence accelerating the layout development process and the tape-out time.


[Different parts of the layout being done simultaneously]

Designers can define their work areas by creating fences, and then work on the same cell in shared mode over the network. The shared session is owned by a layout captain. Any edit to a cell, path or shape is local to the designer who makes it until he/she saves the design; at save time, a message is broadcast to all designers. Data integrity is maintained because changes by designers within their fences stay local to them until a design save is done. All the typical editing commands (edit, move, delete, the undo stack, and so on) work locally on a designer's portion of the layout. At the same time, any designer is free to view other portions of the design and provide feedback on any part of it to the other designers.
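
As a rough mental model of how such a scheme can work (my own sketch, not Mentor's implementation), the fence acts as an ownership lock and the save is the synchronization point:

```python
# Toy model of fence-based concurrent editing: edits stay local to the
# owner of a fence until saved, then are published to everyone else.
# Purely illustrative; not how Pyxis Concurrent is actually implemented.

class SharedSession:
    def __init__(self):
        self.committed = {}      # shape id -> geometry, visible to all
        self.fences = {}         # designer -> set of shape ids they own
        self.pending = {}        # designer -> local, unsaved edits

    def claim_fence(self, designer, shape_ids):
        taken = set().union(*self.fences.values()) if self.fences else set()
        if taken & set(shape_ids):
            raise ValueError("fence overlaps another designer's area")
        self.fences[designer] = set(shape_ids)
        self.pending[designer] = {}

    def edit(self, designer, shape_id, geometry):
        if shape_id not in self.fences[designer]:
            raise PermissionError("shape is outside your fence")
        self.pending[designer][shape_id] = geometry   # local only

    def save(self, designer):
        self.committed.update(self.pending[designer])  # publish the edits
        self.pending[designer] = {}
        print(f"broadcast: {designer} saved their fence")

session = SharedSession()
session.claim_fence("alice", {"M1_route_17"})
session.edit("alice", "M1_route_17", "rect(0, 0, 4, 1)")
session.save("alice")    # edits now visible to the whole team
```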


[A Designer pointing to a layout area and communicating through chat box]

The suite provides a virtual whiteboard and a chat service for effective team interaction. Any designer can pin-point exactly the layout joints, shapes, components and so on that may need modification, and use the chat area to communicate with the other designers.


[Calibre real time interface with Pyxis]

Interestingly, this concurrent layout editing environment is seamlessly integrated with Calibre RealTime verification for on-line DRC checks. This is especially important during chip finishing, when designers make DRC and other corrections and the whole team verifies and certifies the chip layout together. A designer can check just his/her portion by running Calibre, hence keeping the layout DRC-correct at all times.


[Interoperability with third party layout]

Design data interchange is supported through both GDS and OpenAccess. Any third party layout can be imported into Pyxis layout suite and integrated into the design with ease.

For an exciting on-line demo, just click DEMO

The Pyxis Project Manager provides comprehensive integration with design kits, schematic development, design verification, floorplanning, custom routing and more, covering the complete layout flow for blocks as well as the complete chip.


Microsoft Buys Nokia

by Paul McLellan on 09-02-2013 at 11:21 pm

OK, I was wrong: Microsoft did buy Nokia's handset business, for $7.2B, which for a company that just wrote off nearly $1B on tablets isn't that much. Nokia is a company that had a peak valuation of $110B, although it is not clear how much of that is in the deal versus outside it.

Details from Reuters here.

Elop is expected to join Microsoft. Omitted from the deal is NSN, which used to be Nokia Siemens Networks, though since Nokia bought out Siemens the S now stands for Solutions. And right now it is one of the profitable bits of Nokia. Despite good numbers selling Lumia phones, they still ship dollars with each one.

I still don't see how this is likely to be successful, although Microsoft clearly has much deeper pockets than Nokia. But their success in the hardware business has been very variable: Xbox good, Zune bad; the Microsoft mouse good, the Kin (a mobile phone) the fastest-failing phone ever, lasting only six weeks.

More when more is known.


Low-Power Design Webinar – What I Learned

by Daniel Payne on 09-02-2013 at 7:00 pm

You can only design and optimize low-power SoC designs if you can actually simulate the entire chip, package and system together. The engineers at ANSYS-Apache have figured out how to do that, and they talked about their design-for-power methodology in a webinar today. I listened to Arvind Shanmugavel present a few dozen slides and answer questions in about 33 minutes. In a week or so you will be able to view and listen to the recorded webinar here.


Arvind Shanmugavel