

SoC Realization: Let’s Get Physical!
by Paul McLellan on 10-05-2011 at 1:41 pm

If you ask design groups what the biggest challenges are to getting a chip out on time, the top two are usually verification, and achieving closure after physical design: not just timing closure, but power and area too. One of the big drivers is predicting and avoiding excessive routing congestion, which has only downsides: area, timing and power are all worse (unless additional metal layers are used, which obviously increases cost).

A typical SoC today is actually more of an assembly of IP blocks, perhaps with a network-on-chip (NoC) infrastructure to tie it all together, itself an approach partially motivated by better routability aka less routing congestion.

Some routing congestion, physical congestion, is caused by how the chip floorplan is created. Like playing tic-tac-toe where you always want to start by grabbing the middle square, routing resources in the middle of the chip are at a premium and creating a floorplan that minimizes the number of wires that need to go through there is almost always a good idea. The ideal floorplan, never truly achieved in practice, has roughly even routing congestion across the whole chip.

But other routing congestion is logical congestion, inherent in the design. This comes in two flavors: core congestion and peripheral congestion.

Core congestion is inherent in the structure of the IP block. For example, very high fanout muxes will bring a large number of routes into the area of the mux causing congestion. This is inherent in the way the RTL is written and is not something that a good floorplan or a clever router can correct. Other common culprits are high fanout nets, high fanin nets and cells that have a large number of pins in a small area.

Peripheral congestion is caused when certain IP blocks have large numbers of I/O ports converging on a small number of logic gates. This is not really visible at module development time (because the module has yet to be hooked up to its environment) but becomes so when the block is integrated into the next level up the hierarchy.

The challenge with logical congestion is that it is baked in when the RTL is created, yet RTL tools and teams generally do not consider congestion. For example, high-level synthesis is focused on hitting a performance/area (and perhaps power) sweet spot, and IP development groups don't know the environment in which their block will be used.

The traditional solution has been to ignore the problem until it shows up in physical design and then attempt to fix it there. This works fine for physical congestion, but logical congestion really requires changes to the RTL, and in ways that are hard to comprehend when down in the guts of place and route. The process can be short-circuited by doing trial layouts during RTL development, but the RTL must be largely complete for this, so it is still late in the design cycle.

An alternative is to use "rules of thumb" and the production synthesis tool. But these days synthesis is not a quick push-button process, and the rules of thumb (high fanout muxes are bad) tend to be very noisy and produce a lot of false positives: structures that are flagged as bad when they are actually benign.
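To make that concrete, here is a minimal C++ sketch of such a rule applied to a toy netlist; the net names, fanouts and threshold are all invented for illustration. A bare fanout threshold flags a buffered scan enable just as readily as a genuinely congestion-prone mux select bus, which is exactly the kind of false positive described above.

```cpp
// Illustrative only: a naive "high fanout is bad" rule of thumb applied to a
// hypothetical netlist. It cannot tell a benign, easily buffered net from one
// that really will cause routing congestion.
#include <iostream>
#include <string>
#include <vector>

struct Net {
  std::string name;
  int fanout;  // number of sink pins driven by this net
};

int main() {
  // Invented nets, loosely resembling what a synthesized block might contain.
  std::vector<Net> nets = {{"clk_gated_17", 12},
                           {"mux_sel_bus", 96},
                           {"scan_enable", 2048},
                           {"rdata[3]", 4}};

  const int kFanoutLimit = 64;  // arbitrary rule-of-thumb threshold
  for (const Net& n : nets) {
    if (n.fanout > kFanoutLimit) {
      // Both mux_sel_bus and scan_enable are flagged, but only the wide mux
      // select is likely to be a real congestion problem; a scan enable is
      // usually buffered into a tree and is benign.
      std::cout << "possible congestion: net " << n.name << " (fanout "
                << n.fanout << ")\n";
    }
  }
  return 0;
}
```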

What is required is a tool that can be used during RTL authoring, and it needs several attributes. First, it must give quick feedback during RTL authoring, not later in the design cycle when the authoring teams have moved on. Second, it must minimize the number of false errors that cause time to be wasted fixing non-problems. Third, it must do a good job of cross-probing, identifying the culprit RTL rather than just pointing at some routing congestion at the gate level.

Products are starting to emerge in EDA to address this problem, including SpyGlass Physical, aimed (despite the Physical in the name) at RTL authoring. It offers many capabilities to resolve logical congestion issues up front: easy-to-use physical rules, debug capabilities to pinpoint root causes so that they can be fixed early, and simple reports on the congestion status of entire RTL blocks.

The Atrenta white-paper on SpyGlass Physical can be downloaded here.



Amazon’s Kindle Fire Spells Trouble for nVidia, Qualcomm and Intel
by Ed McKernan on 10-05-2011 at 11:50 am

With the introduction of the Kindle Fire, it is now clear that Amazon has the formula down for building the new, high-volume mobile platform based on sub-$9 processors. In measured fashion, Amazon has moved down the Moore's Law curve from the initial 90nm Freescale processor to what is reported to be TI's OMAP 4, in order to add the internet, music and movies to its previously single-function e-book environment. Some view it as a competitor to Apple, but the near-term impact is on brick-and-mortar competitors (e.g. Barnes and Noble, Walmart) and on the mostly snail-mail-based movie house Netflix.
Continue reading “Amazon’s Kindle Fire Spells Trouble for nVidia, Qualcomm and Intel”



AMS Verification: Speed versus Accuracy
by Daniel Nenni on 10-03-2011 at 9:16 pm

I spent Thursday Sept. 22 at the first nanometer Circuit Verification Forum, held at TechMart in Santa Clara. Hosted by Berkeley Design Automation (BDA), the forum was attended by 100+ people, with circuit designers dominating. I spoke with many attendees. They were seeking solutions to the hugely challenging problems they are wrestling with today when verifying high-speed and high-performance analog and mixed-signal circuits on advanced nanometer process geometries.

Continue reading “AMS Verification: Speed versus Accuracy”



Verdi: there’s an App for that
by Paul McLellan on 10-03-2011 at 5:58 am

Verdi is very widely used in verification groups; it is perhaps the industry's most popular debug system. But users have not been able to access the Verdi environment to write their own scripts or applications. This means either that they are prevented from doing something they want to do, or that the barrier to doing it is very high, requiring them to create databases, parsers and user interfaces. That is now changing. Going forward, the Verdi platform is being opened up, giving access to the KDB database of design data, the FSDB database of vectors and the Verdi GUI.

This lets users customize the way they use Verdi for debug and create "do-it-yourself" features and use-models, without having to recreate an entire infrastructure from scratch before they can get started. Interfaces are available for both TCL access and C-code access. As a scripting language, TCL is usually quicker to write; C code takes more effort to create but usually wins when high computational efficiency is required.

There are a lot of areas where users might want to extend the Verdi functionality. Probably the biggest is design rule checking. Companies often have proprietary rules that they would like to enforce but no easy way, until now, to build a qualification tool. Or users might want to take output from some other tool and annotate it into the Verdi GUI rather than trying to process the raw data directly.
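As a toy illustration of the kind of check a team might package this way, the C++ sketch below flags register instances that break an invented naming rule. A real VIA would walk the design through Verdi's KDB TCL or C interfaces; here the "design" is a hard-coded list so the sketch stays self-contained, and neither the names nor the rule come from any real tool or design.

```cpp
// Toy illustration only: the sort of proprietary naming rule a team might
// package as a Verdi Interoperability App. The instance names and the rule
// are invented; a real VIA would query the design database rather than use a
// hard-coded list.
#include <iostream>
#include <string>
#include <vector>

static bool endsWith(const std::string& s, const std::string& suffix) {
  return s.size() >= suffix.size() &&
         s.compare(s.size() - suffix.size(), suffix.size(), suffix) == 0;
}

int main() {
  // Hypothetical register instances, as they might be reported by a debug
  // tool's design database.
  std::vector<std::string> regs = {"u_core/pc_reg", "u_core/tmp",
                                   "u_io/rx_sync_ff"};

  for (const std::string& name : regs) {
    // Invented company rule: register instance names must end in "_reg" or
    // "_ff" so downstream scripts can match them.
    if (!endsWith(name, "_reg") && !endsWith(name, "_ff"))
      std::cout << "rule violation: " << name << " has a non-conforming name\n";
  }
  return 0;
}
```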

These small programs that run within the Verdi environment are known as Verdi Interoperability Apps, or VIAs.

In addition to allowing users to create such apps, there is also a VIA exchange that lets users freely share and reuse them. So if a user wants to customize Verdi in some way, it may not even be necessary to write a script or code, since someone may already have done it, or at least done something close that can serve as a good starting point. The VIA exchange is at http://www.via-exchange.com.

In addition to making TCL scripts and C programs available for download, the VIA exchange also has quick-start training material and a user forum for sharing and exchanging scripts and getting questions answered. There are already over 60 function/procedure examples and over 30 scripts and applications contributed by SpringSoft's own people, by Verdi users and by EDA partners.

Once again, the VIA exchange website is here.



Making Money With Cramer? Don’t Count on it!
by Daniel Nenni on 10-02-2011 at 11:16 pm

Investing with Cramer is a crap shoot. By Cramer, I mean the Mad Money TV show, and Action Alerts PLUS from thestreet.com. Cramer is certainly a smart guy and knows his stuff, but don’t think following his investment strategy is necessarily a winner. He constantly maintains that you can beat the averages by picking individual stocks and doing your homework. This just isn’t true.

Cramer did a piece on Cadence that I blogged about in 2009, Jim Cramer's CNBC Mad Money on Cadence!, in which I concluded that Jim Cramer is in fact an "infotainer" and prone to pump-and-dump groupies. In regards to CDNS, however, he got lucky (CDNS has doubled since then).

Cramer may have made money as a hedge fund manager, but he also used many tools (such as options, shorting, etc.) which most Joe on the street investors don’t utilize. He also had a staff, and access to much better tools and information.

A good friend of mine is getting killed with his recommendations!

It drives me crazy when he talks on his show about how "I recommended this great stock much lower…". He does occasionally mention the ones that get crushed, but of course if he did this as a matter of routine people wouldn't watch.

The most salient fact: His Action Alerts PLUS portfolio hasn’t beaten the S&P (when you include dividends) since 2007.

Several awful Cramer recommendations, many of which my friend has gotten creamed on, include Bank of America, Netflix, Limelight Networks, Juniper, Teva, GM, Ford, BP, Alcoa, Apache, Express Scripts, Freeport McMoran, Starwood Hotels, and Johnson Controls. There is probably a list of winners just as long, but I’m in no mood for that.

Sour grapes? Of course. Watching Cramer and subscribing to Action Alerts PLUS will make you more informed about the market. Will it make you money? Maybe. There is just as good a chance you’ll make more money with an S&P Index fund. In an up market, you’ll feel smarter about the investments you are making. In a down market you’ll be kicking yourself in the butt.

Cramer makes me think of a “get rich quick book” that’s a really good read. The only one that gets rich is the author, and that is from selling the book.

Note, my friend still watches his TV show, albeit with a lot of Tivo fast-forwarding. You can be interested in Cramer's opinion on market direction. You can subscribe to Action Alerts. Is it worth $399/yr to be better informed? The hitch is that you'll never be better informed than the pros on Wall Street, and more information will not necessarily make you money.

I think Cramer is a smart guy and a helluva entertainer. However, I think he does the average individual investor a disservice by leading them to believe that he can help them make above-average returns. I've not seen the Action Alerts PLUS service myself, but if it's like most newsletters, it's lacking in timing and exit strategies.

I have some possible explanations for his chronic underperformance despite his intellect, experience, and huge research staff:

1. He has to have three new ideas every day. If I have a good investment idea once a month I’m happy.

2. Time frame – a huge part of managing a portfolio has to do with investment horizon. If you have a 10+ year time frame, you don't care what the Finance Minister of Germany is saying about Greece. But Cramer has to have a stock idea that is an answer to that news sound bite, and for that type of recommendation over a short period of time you are going to get very random results.

I think that an active financial manager who uses a tactical asset allocation strategy, along with an industry sector strategy based on macroeconomic analysis and individual stock selection based on sound fundamental analysis, can outperform a passive benchmark index over the long term. However, all this work and strategy may only mean an extra 1.5-2.0% pick-up in total return. Individual investors should use someone who is willing to execute this type of individual portfolio management for a reasonable fee of 0.75-1.0%.

The bottom line is that anyone who makes promises of “big money” returns (like Cramer) is either lying (Bernie Madoff) or taking on more risk with your money than you think.



Memory Cell Characterization with a Fast 3D Field Solver
by Daniel Payne on 09-29-2011 at 12:07 pm

Memory designers need to predict the timing, current and power of their designs with high accuracy before tape-out to ensure that all the design goals will be met. Extracting the parasitic values from the IC layout and then running circuit simulation is a trusted methodology; however, the accuracy of the results ultimately depends on the accuracy of the extraction process.

Here’s a summary of extraction techniques:

Extraction approach | Benefits | Issues
Rule-based | High capacity | Limited accuracy: ~10% error on total capacitance, ~15% error on coupling capacitance
Reference-level solver | High accuracy | Limited capacity, long run times
Fast 3D Solver | High accuracy, fast run times | New approach

Bit Cell Design

Consider how a memory bit cell is placed into rows and columns using reflection about the X and Y axes.

The green regions in the figure show boundary conditions applied to a cell with a reflective boundary extending 2um in the X direction and 4um in the Y direction.

For attofarad accuracy the field solver has to extract the bit cell in the context of its surroundings.
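To make the placement pattern concrete, here is a small C++ sketch (invented for illustration, and in no way a description of how Calibre xACT 3D works internally) that mirrors a toy bit cell shape about the X and Y axes to build the 2x2 neighbourhood the reflective boundary condition stands in for. The cell pitch and the single shape are made up.

```cpp
// Sketch only: tile a toy bit cell into a 2x2 neighbourhood by mirroring it
// about the X and Y axes, the placement pattern used in memory arrays.
#include <cstdio>
#include <vector>

struct Rect { double x1, y1, x2, y2; };  // one layout shape of the unit cell

// Mirror a shape within a cell of size (w, h), then shift it to tile (col, row).
static Rect place(const Rect& r, double w, double h,
                  bool mirror_x, bool mirror_y, int col, int row) {
  Rect p = r;
  if (mirror_y) { p.x1 = w - r.x2; p.x2 = w - r.x1; }  // flip left/right
  if (mirror_x) { p.y1 = h - r.y2; p.y2 = h - r.y1; }  // flip top/bottom
  p.x1 += col * w; p.x2 += col * w;
  p.y1 += row * h; p.y2 += row * h;
  return p;
}

int main() {
  const double w = 2.0, h = 4.0;                   // hypothetical cell pitch, um
  std::vector<Rect> cell = {{0.1, 0.2, 0.5, 3.8}}; // toy shape, e.g. a bit line
  // Neighbouring cells are mirror images so that shared contacts and wells abut.
  const bool mx[2][2] = {{false, false}, {true, true}};
  const bool my[2][2] = {{false, true}, {false, true}};
  for (int row = 0; row < 2; ++row)
    for (int col = 0; col < 2; ++col)
      for (const Rect& r : cell) {
        Rect p = place(r, w, h, mx[row][col], my[row][col], col, row);
        std::printf("row %d col %d: (%.2f, %.2f)-(%.2f, %.2f)\n",
                    row, col, p.x1, p.y1, p.x2, p.y2);
      }
  return 0;
}
```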

Mentor Graphics has a fast 3D field solver called Calibre xACT 3D that can extract a memory bit cell in just 4 seconds using this boundary-condition approach, compared to 2.15 hours for a reference-level solver. I've blogged about xACT 3D before.

Accuracy Comparisons
A memory bit cell was placed and reflected in an array, then the entire array was extracted. The unit bit cell used boundary conditions as shown above and the results were compared against an actual array. The accuracy of the boundary condition approach in Calibre xACT 3D is within 1% of the reference-level field solver.

Another comparison was made for symmetric bit lines in a memory array using the boundary condition approach versus the reference-level field solver, with an accuracy difference within 0.5%.


Beyond the Bit Cell

So we’ve seen that Calibre xACT 3D is fast and accurate with memory bit cells, but how about on the rest of the memory like the decoders, and the paths to the chip inputs and outputs?

With multiple processors you can now accurately and quickly extract up to 10 million transistors in about one day.

Summary
Memory designers can extract a highly accurate parasitic netlist on multi-million transistor circuits for use in SPICE circuit simulation. Run times with this fast 3D field solver are acceptable and accuracy compares within 1% of reference-level solvers.

For more details see the complete white paper.



Introducing TLMCentral
by Paul McLellan on 09-29-2011 at 8:00 am

Way back in 1999 the Open SystemC Initiative (OSCI) was launched. In 2005 the IEEE standard for SystemC (IEEE 1666-2005, if you are counting) was approved. In 2008, TLM 2.0 (transaction-level modeling) was standardized, making it easier to build virtual platforms using SystemC models. At the very least, the models should now play nicely together, which had been a big problem up until then.

However, the number of design groups using the virtual platform approach still only increased slowly. Everyone loves the message of using virtual platforms for software development, but the practicalities of assembling or creating all the models necessary continued to be a high barrier. Although there are lots of good reasons to use a virtual platform even after hardware is available, the biggest value proposition is to be able to use the platform to get software development started (and sometimes even finished) before silicon is available. And time taken to locate or write models dilutes that value by delaying the start of software development. In fact in a survey that Synopsys was involved with last year, the lack of model availability was one of the biggest barriers to adopting virtual platform technology.


Today, Synopsys announced the creation of TLMCentral, a portal to make the exchange of SystemC TLM models much easier. Synopsys is, of course, a supplier of both IP and virtual platform technology (Virtio, VaST, CoWare), but TLMCentral is open to anyone and already has 24 companies involved: IP vendors such as ARM, MIPS and Sonics; service providers such as HCL and Vivante; other virtual platform vendors such as CoWare and Imperas; and institutes and standards organizations such as Imec and ETRI. The obvious missing names are Cadence, Mentor and Wind River, at least for now. Cadence and Mentor haven't yet decided whether or not to participate; I don't know about Wind River. Teams from Texas Instruments, LSI, Ricoh and others are already using the exchange.

As I write this on Wednesday, there are already 650 models uploaded, and more are being uploaded every hour. By the time the announcement hits the wire on Thursday morning it will probably be over 700. There are really three basic classes of model: processor models, interface models (what I have always called peripheral models) and environment models. A virtual platform usually consists of one or more processor models, a model for each of the interfaces between the system and the outside world, and some model of the outside world used to stimulate the model and validate its outputs. The processor models run the actual binary code that will eventually run on the final system, ARM or PowerPC binaries for example. By using just-in-time (JIT) compiler technology they can achieve extremely high performance, sometimes running faster than the real hardware. The interface models present the usual register interface on some bus on one side, so the device driver reads and writes them in the normal way, while interfacing in some way to the test harness. Environment models can be used to test systems, for example interfacing a virtual platform of a cell-phone to a cellular network model.
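To give a feel for what such models look like in code, here is a minimal loosely-timed SystemC TLM-2.0 sketch with one initiator (standing in for a processor model) and one memory-like target (standing in for an interface model). It is not taken from TLMCentral; the module names, address, data value and 10 ns latency are all invented for illustration.

```cpp
// Minimal loosely-timed TLM-2.0 example: an initiator writes a word to a
// memory-like target over b_transport, then reads it back.
#include <cstdint>
#include <cstring>
#include <iostream>
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_initiator_socket.h>
#include <tlm_utils/simple_target_socket.h>

// "Interface model": presents a small memory over the blocking transport call.
struct SimpleMemory : sc_core::sc_module {
  tlm_utils::simple_target_socket<SimpleMemory> socket;
  uint8_t mem[256] = {};

  SC_CTOR(SimpleMemory) : socket("socket") {
    socket.register_b_transport(this, &SimpleMemory::b_transport);
  }

  void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
    const uint64_t addr = trans.get_address();
    const unsigned len = trans.get_data_length();
    if (addr + len > sizeof(mem)) {
      trans.set_response_status(tlm::TLM_ADDRESS_ERROR_RESPONSE);
      return;
    }
    if (trans.is_write())
      std::memcpy(&mem[addr], trans.get_data_ptr(), len);
    else if (trans.is_read())
      std::memcpy(trans.get_data_ptr(), &mem[addr], len);
    delay += sc_core::sc_time(10, sc_core::SC_NS);  // invented access latency
    trans.set_response_status(tlm::TLM_OK_RESPONSE);
  }
};

// Stand-in for a processor model: issues one write and one read.
struct Initiator : sc_core::sc_module {
  tlm_utils::simple_initiator_socket<Initiator> socket;

  SC_CTOR(Initiator) : socket("socket") { SC_THREAD(run); }

  void run() {
    uint32_t data = 0xCAFEF00D;
    tlm::tlm_generic_payload trans;
    sc_core::sc_time delay = sc_core::SC_ZERO_TIME;
    trans.set_data_ptr(reinterpret_cast<uint8_t*>(&data));
    trans.set_data_length(sizeof(data));
    trans.set_streaming_width(sizeof(data));
    trans.set_byte_enable_ptr(nullptr);
    trans.set_dmi_allowed(false);

    trans.set_command(tlm::TLM_WRITE_COMMAND);
    trans.set_address(0x10);
    trans.set_response_status(tlm::TLM_INCOMPLETE_RESPONSE);
    socket->b_transport(trans, delay);

    data = 0;
    trans.set_command(tlm::TLM_READ_COMMAND);
    trans.set_response_status(tlm::TLM_INCOMPLETE_RESPONSE);
    socket->b_transport(trans, delay);

    std::cout << "read back 0x" << std::hex << data
              << " after " << delay << "\n";
  }
};

int sc_main(int, char*[]) {
  Initiator cpu("cpu");
  SimpleMemory mem("mem");
  cpu.socket.bind(mem.socket);
  sc_core::sc_start();
  return 0;
}
```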

TLMCentral is not an eCommerce site for purchasing models. It is a central resource for searching for models and then finding their suppliers. Some models are free and available directly from the site; for others you must pay and are directed to the vendor. There is also an industry-wide TLM ecosystem allowing users to support each other, exchange models and so on.

There have been other attempts to make models more available, notably Carbon's IP exchange. But the scale of participation in TLMCentral, and the backing of the largest EDA company, mean that it is already the largest. Success, though, is not so much a matter of how many people sign up on day one as of whether the portal lowers the barriers to adopting virtual-platform-based software development. That will show up as growth, hopefully explosive, in the number of groups using virtualized software development.

TLMCentral is at www.tlmcentral.com



Analog IP Design at Moortec
by Daniel Payne on 09-28-2011 at 12:34 pm

Stephen Crosher started Moortec in the UK back in 2005 with the help of his former Zarlink co-workers. They set to work offering AMS design services and eventually created their own analog IP, such as the temperature sensor shown below.

We spoke by phone last week about his start-up experience and how they approach AMS design.


Continue reading “Analog IP Design at Moortec”



Samsung versus Apple and TSMC!
by Daniel Nenni on 09-28-2011 at 6:56 am

Apple will purchase close to eight billion dollars in parts from Samsung for the iSeries of products this year alone, making Apple Samsung's largest customer. Samsung is also Apple's largest competitor and TSMC's most viable competitive foundry threat, so it was no surprise to see Apple and TSMC team up on the next generations of iProducts. The legal battle between Samsung and Apple did come as a surprise, however, and will change how we do business for years to come.

“Our mission is to be the trusted technology and capacity provider of the global IC industry for years to come.” TSMC Website

During the past 25+ years I have been to South Korea a dozen or so times working with EDA and SemIP companies in pursuit of Samsung business. South Korea is a great place to visit, but in my opinion it is not a great place to do business, due to serious ethical dilemmas. Let's not forget the Samsung corruption scandal that engulfed the government of South Korea. Let's not forget the never-ending chip-dumping probes. The book "Think Samsung," written by a former Samsung legal counsel, accuses Samsung of being the most corrupt company in Asia. So does it really surprise you that Apple is divorcing Samsung for cloning the iPad and iPhone?

I was never an Apple fanboy, always choosing “open” products for my personal and professional needs. If the IBM PC was “closed” and obsessively controlled like Macs, where would personal computing be today? The iPod was the first Apple product to invade my home and only after a handful of other MPEG players failed on me. Without iPod/iTunes where would the music industry be today?

iPad2s came to my house next. Would there even be a tablet market without the iPad? I looked at other tablets but since they were to be gifts to SemiWiki users I had a much more critical eye for quality. I even kept one of the SemiWiki iPad2s which I now use daily. We still have some iPad2s left so register for SemiWiki today and maybe you will win one!

A MacBook Air ALMOST came next, but I chickened out and bought a Dell XPS instead. The support burden of moving my family of six from Dell/HP/Sony laptops to Apple Town was just too much to fathom.

iPhone5s for the entire family will be next; Santa is bringing them for Christmas. I'm tired of my Blackberry and of being out-smartphoned by snot-nosed iPhone kids. I did look at the Samsung iPhone and iPad clones, and while they are less expensive, my professional experience with Samsung will not allow me to buy their products. I will wait for an Apple flat-screen TV as well.

Paul McLellan did a nice write-up of the "Battle of the Patents" in the wireless business: Apple, Samsung, Microsoft, Oracle, Google, Nokia, and here comes a real threat to the mobile industry, Amazon (Kindle Fire tablet)!

The Apple / Samsung legal debacle will most definitely change the semiconductor foundry business. Can Samsung or even Intel become “the trusted technology and capacity provider of the global IC industry for years to come”? Not a chance.



Battle of the Patents
by Paul McLellan on 09-27-2011 at 5:01 pm

What’s going on in all these wireless patent battles? And why?

The first thing to understand is that implementing most (all?) wireless standards involves infringing on certain “essential patents.” The word “essential” means that if you meet the standard, you infringe the patent, there is no way around it. You can’t build a CDMA phone without infringing patents from Qualcomm; you can’t build a GSM phone without infringing patents from Motorola, Philips and others.

The second thing to understand is that typically, if you are a patent holder, you want to license the last person in the chain. There are two reasons for this. Firstly, the further down the value chain, the higher the price, and so the easier to extract any given level of license fee. It is easier to get a phone manufacturer to pay you a dollar than a chip manufacturer, for example. The second reason is that often the patent is only infringed in the final stage of the product chain. Any patent that claims to cover phones that do something special is not infringed by chips, software or IP that might go into the phone to make that something special happen. Plus you can’t really embargo anything other than the final product if it is all assembled offshore.

Apple, presumably in a calculated way, didn’t worry about licensing anyone else’s patents. They pretty much invented what we think of as the smartphone and it is hard to build one without infringing lots of Apple patents on touch-screens, gestures, mobile operating systems, app stores and so on. So they figured that they had a good arsenal for cross-licensing to address their lack of patents on basic wireless technology.

Google seems to have been blindsided by this. They created Android, which in and of itself doesn’t infringe much. They didn’t patent much on their own and probably didn’t have any intention of suing anyone. “Don’t even be as evil as suing someone.” But when Android is put into a smartphone or tablet then that end product infringes lots of patents, most notably Apple’s. Google tried to fix this, first by offering $3.14159B for Nortel’s patents (which they lost) and then by buying Motorola’s mobile phone division for around four times as much (well, they got a mobile phone division too, which might turn out to be important).

Microsoft also has a lot of patents. In fact, it has been so unsuccessful so far in its mobile strategy that it reportedly makes more money licensing Android phone manufacturers (for patent licenses) than it does licensing Windows Phone 7 manufacturers (for software licenses, presumably including the patent licenses, since suing your customers tends to be bad for business).

Also in here somewhere is Oracle, which with its acquisition of Sun owns the patents on Java. And Android's app development environment is Java (Apple's is Objective-C, which they acquired with NeXT).

The most schizophrenic relationship is between Apple and Samsung. Samsung builds the A4 and A5 chips that are in the current iPhone and iPad, and it supplies some of the DRAM and some of the flash; I wouldn't be surprised if Apple is their largest customer. But they are suing each other, mainly over Samsung's iPhone lookalikes, the Galaxy S and Galaxy SII, and its iPad lookalike, the Galaxy Tab. Samsung announced that it has already shipped over 10M Galaxy SIIs, which is an impressively large number. Samsung is probably the biggest threat (as a single manufacturer) to Apple, already #2 in profitability and, I think, #2 in unit volume behind Nokia.

Apple has also been suing some of the Android manufacturers but they are countering since Google is now licensing some of the Motorola patents to them (for free, I assume). Remember, Apple can’t sue Google directly since an OS doesn’t infringe a phone patent, only phones can do that, and so Google can’t counter Apple directly, it has to do it through its licensees.

Meanwhile, Nokia, which must have an enormous patent portfolio, is also suing Apple, although Apple has already settled (surrendered) some of this by paying a license fee. If Nokia is to be successful with its strategy du jour of relying on Microsoft for its smartphone strategy, then it will need to be able to defend itself against Apple. It also needs to get moving, since the latest Mango release of Microsoft's WP7 is already coming to market through HTC and Fujitsu. If all Nokia has is a late-to-market, me-too WP7 implementation, they are doomed. Well, I think they are doomed anyway, although it may depend on how much the carriers want to keep Nokia and/or Microsoft WP7 alive to counter Android and Apple.

Oh, and Amazon’s Fire tablet comes to market tomorrow, supposedly. Don’t be surprised if Apple sues them. Amazon is probably the biggest threat to Apple leveraging content rather than basic tablet technology.

What will happen in the end? Probably not much. Nobody has a clue how much anyone infringes anyone else's patents, and nobody is going to put much effort into finding out. I expect that everyone will cross-license: Apple and anyone else who lacks fundamental patents (the ones that are used even in non-smart phones) will make some balancing payments to cover the last couple of decades of investment they are riding on, and anyone who hasn't got their own smartphone patents will make balancing payments to Apple, who pretty much invented the smartphone as we now think of it.