Testing, testing… 3D ICs
by Beth Martin on 10-06-2011 at 7:01 pm

3D ICs complicate silicon testing, but solutions now exist for many of the key challenges. – by Stephen Pateras

The next phase of semiconductor designs will see the adoption of 3D IC packages, vertical stacks of multiple bare die connected directly through the silicon. Through-silicon vias (TSVs) result in shorter and thinner connections that can be distributed across the die. TSVs reduce package size and power consumption while increasing performance, thanks to the improved physical characteristics of the very small TSV connections compared to the much larger bond wires used in traditional packaging. But TSVs complicate the test process, and there is no time to waste in finding solutions. Applications involving the stacking of one or more memory die on top of a logic die, for example using the JEDEC Wide IO standard bus interface, are ramping quickly.

One key challenge is how to test the TSV connections between the stacked memory and logic die. There is generally no external access to TSVs, making the use of automatic test equipment difficult if not impossible. Functional test (for example, where an embedded processor is used to apply functional patterns to the memory bus) is possible, but it is slow, lacks test coverage, and offers little to no diagnostics. Therefore, ensuring that 3D ICs can be economically produced calls for new test approaches.

A new embedded test method for the test and diagnosis of memory-on-logic TSVs is built on the Built-In Self-Test (BIST) approach that is already commonly used to test embedded memories within SoCs. For 3D test, a BIST engine is integrated into the logic die and communicates with the TSV-based memory bus that connects the logic die to the memory, as illustrated in Figure 1.

For this solution to work, two critical advances over existing embedded memory BIST solutions were necessary.

One is an architecture that allows the BIST engine to communicate to a memory bus rather than directly to individual memories. This is necessary partly because multiple memories may be stacked within the 3D IC, but mostly to allow the BIST engine to test the memory bus itself, and hence the TSV connections, rather than just the memories. Test algorithms tailored to cover bus-related failures are used to ensure maximum coverage and minimal test time. Because of this directed testing of the memory bus, the 3D BIST engine can also report the location of failures within the bus, which allows diagnosis of TSV defects.
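
To make the idea of a bus-directed test algorithm more concrete, here is a minimal Python sketch of a walking-ones pattern that locates stuck-at faults on individual bus lines. This is my own simplified illustration, not the actual algorithm or interface of the BIST engine described in the article.

```python
# Illustrative model only: the bus is a list of line values and a "fault" is a
# line stuck at 0 or 1. A walking-ones sequence drives a single 1 across the
# bus so that any line reading back a value it was not driven to is reported.

def walking_ones(width):
    """Yield vectors with a single 1 walking across a bus of `width` lines."""
    for position in range(width):
        yield [1 if line == position else 0 for line in range(width)]

def observe(vector, stuck):
    """Return what the receiving die sees, given `stuck` = {line: forced value}."""
    return [stuck.get(line, value) for line, value in enumerate(vector)]

def diagnose(width, stuck):
    """Return the indices of bus lines whose observed value ever differs
    from the driven value, i.e. the defective TSV connections."""
    bad = set()
    for driven in walking_ones(width):
        seen = observe(driven, stuck)
        bad.update(i for i, (d, s) in enumerate(zip(driven, seen)) if d != s)
    return sorted(bad)

if __name__ == "__main__":
    # Example: a 16-bit bus with line 5 stuck at 0 and line 11 stuck at 1.
    print(diagnose(16, {5: 0, 11: 1}))  # -> [5, 11]
```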

The second critical advance in this new 3D BIST solution is that it is run-time programmable. Using only the standard IEEE 1149.1 JTAG test interface, the BIST engine can be programmed in silicon for different memory counts, types, and sizes. Because the BIST engine is embedded into the logic die and can’t be physically modified without a design re-spin, this adaptability is essential. With full programmability, no re-design is needed over time even as the logic die is stacked with different memories and memory configurations for different applications.

An automated flow is available for programming the BIST engine (for wafer or final package testing) to apply different memory test algorithms, to use different memory read/write protocols, and to test different memory bus widths and memory address ranges. The patterns needed to program the engine through the JTAG interface pins are generated in common formats, such as WGL or STIL, to be loaded and applied by standard automatic test equipment.
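
As a rough illustration of what run-time programmability through JTAG can amount to, here is a hypothetical Python sketch that packs a memory-test configuration into a bit string to be shifted into a scan register. The field names and widths are invented for this example and do not represent the actual Tessent programming registers or the WGL/STIL pattern formats.

```python
# Hypothetical configuration layout; in practice the programming patterns are
# generated by the vendor flow in WGL or STIL and applied by the tester.

from collections import namedtuple

BistConfig = namedtuple("BistConfig", "algorithm_id bus_width addr_bits mem_count")

FIELD_WIDTHS = {"algorithm_id": 4, "bus_width": 8, "addr_bits": 6, "mem_count": 4}

def pack(cfg):
    """Concatenate the configuration fields MSB-first into one scan-in string."""
    bits = ""
    for name, width in FIELD_WIDTHS.items():
        value = getattr(cfg, name)
        assert value < (1 << width), f"{name} does not fit in {width} bits"
        bits += format(value, f"0{width}b")
    return bits

if __name__ == "__main__":
    cfg = BistConfig(algorithm_id=3, bus_width=128, addr_bits=14, mem_count=2)
    print(pack(cfg))  # 22-bit vector to shift in through the JTAG TDI pin
```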

Because this 3D test solution is embedded, it needed to have minimal impact on design flows and schedules and no impact on design performance. This is done through an automated RTL flow that integrates the BIST engine into the logic die and fully verifies its operation. The flow is compatible with all standard silicon design flows and methodologies. There is no impact to design performance because the BIST engine intercepts the memory bus with multiplexing logic placed at a point in the functional path with sufficient slack.

This new embedded solution for testing TSVs between memory and logic die is cost effective, giving the best balance between test time and test quality. Engineers considering designing in 3D need to feel confident that they can test the TSVs without excessive delay or risk. This solution shows how that can be achieved and opens the way for a more rapid adoption of 3D design techniques.

Stephen Pateras is product marketing director for Mentor Graphics Silicon Test products.

The approach described above forms part of the functionality of the Mentor Graphics Tessent® MemoryBIST product. To learn more, download the whitepaper 3D-IC Testing with the Mentor Graphics Tessent Platform.


Circuit Simulation and Ultra low-power IC Design at Toumaz
by Daniel Payne on 10-06-2011 at 4:31 pm

I read about how Toumaz used the Analog Fast SPICE (AFS) tool from BDA, and it sounded interesting, so I set up a Skype call with Alan Wong in the UK last month to find out how they design their ultra low-power ICs.


Interview

Q: Tell me about your IC design background.
A: I’ve been at Toumaz almost 8 years now, and before that I was at Sony Semi for 5.5 years. My IC design experience goes back to 1997, and since 2005 I’ve been in the IC design group for wireless.

Q: Does Toumaz have a CAD group?
A: Yes, we do have two CAD engineers.

Q: What EDA tools are you using?
A: For RTL simulation we have Mentor Questa, and for physical verification we’re using Calibre. For place & route it’s Synopsys, and for IC layout we’ve got Cadence Virtuoso and Assura (verification). For circuit simulation we have the Analog Fast SPICE tool from Berkeley Design Automation.

Q: How about your product life cycle?
A: Most of our IC designs go from definition to tape-out in about 18 months; some are quicker. Foundry choices have been TSMC, IBM, Infineon and UMC.

Q: What’s the first thing that you do when silicon comes back from the fab?
A: With our engineering samples we first test to see if our spec is met, then we start to characterize with initial functional vectors plus the specialized analog RF testing.

Q: For circuit simulation tools, what have you used before?
A: We’ve used Cadence Spectre tools before, then we switched over to BDA. We found better results with the Berkeley tool in terms of speed. In our evaluation we used internal designs, multiple test benches and clock circuits, taking several weeks to complete our benchmarking. Overall we saw about a 5X speedup with AFS.

Q: When the foundry provides SPICE models have there been any issues?
A: Yes, we had some issues with model cards and BDA. There were some differences between Spectre and BDA, causing BDA to make some tweaks in their BSIM4 RF models.

Q: How do you simulate the whole mixed-signal chip?
A: For mixed-signal chips we model at two levels of abstraction, behavioral and transistor level. During simulation we can swap between transistor-level and behavioral models to get the speed and accuracy trade-off we need. For analog blocks we simulate at the transistor level.

Q: Why not use a Fast SPICE simulator for full-chip circuit simulation?
A: Our experience shows that full-chip simulation with Fast SPICE gives fast but wrong answers.

Q: Which process nodes are you designing at?
A: Quite a range: 130nm and 110nm, some 65nm nodes.

Q: How large are your design teams?
A: Our design team for a Personal Area Network SoC includes layout designers, a firmware team, and test and production engineers, maybe 20 people in total.

Q: What is your version control system?
A: We have used SVN for just the RTL coding side and also tried the Design Management Framework within Cadence IC 5. Our plan is to start using ClioSoft soon in Cadence IC 6.

Q: Are there other circuit simulation tools that you’ve looked at?
A: Quite a few: Synopsys, Agilent and Golden Gate.

Q: What’s really important for your circuit simulations?
A: Accuracy and the ability to do long simulation runs, some are up to one week in duration. We do some top level parametric simulation, try different scenarios, and run lots of configurations.

Q: What needs improvement for your circuit simulation?
A: Well, there’s some room for improvement with co-simulation, it can be a bit flakey. We co-simulate with Verilog.

Q: How often does BDA update their software?
A: With BDA there are updates every few months, so we just wait for the new features that we need then install it about once a quarter.

Q: What about your layout tools from Cadence?
A: We freeze the Cadence toolset at the start of each project and update tools only if really needed during a project.

Q: What was the learning curve for the BDA circuit simulator?
A: The learning curve was short for us; we now know how to set up the options to get the accuracy vs. speed trade-off we need.

Q: What’s your wish list for BDA?
A: I would prefer more flexible license terms throughout the year (like Cadence credits). We always want circuit simulation to be faster and more accurate, with better DC convergence.

Summary
Toumaz uses a mixture of EDA tools to design its ultra low-power ICs, working with vendors such as Berkeley Design Automation, Cadence, Mentor Graphics, Synopsys and ClioSoft.



SuperSpeed USB finally takes off! Synopsys claims over 40 USB 3.0 IP sales…
by Eric Esteve on 10-06-2011 at 9:06 am

The SuperSpeed USB specification was released in November 2008! Even though we can see USB 3.0 powered peripherals shipping now, essentially external HDDs connected to PCs equipped with host bus adapters (as PC chipsets from Intel or AMD were not supporting USB 3.0), it will take until the second quarter of 2012 before PCs ship with “native” USB 3.0 support; native just means that the PC chipset integrates SuperSpeed USB. This will be the key enabler for wide USB 3.0 adoption in PCs, media tablets, smartphones and many consumer electronic applications.

It will have taken Intel (and AMD) more than three years to finally support USB 3.0 technology. If we trust In-Stat, it will then take only two years before more than 90% of PCs being shipped natively support this technology; see the figure:

This short reminder should help the reader understand the importance of Synopsys’s press release about the explosion of design-ins for SuperSpeed USB IP. Synopsys USB marketing manager Eric Huang is claiming 40 sales since the USB 3.0 IP product launch in 2009, and we think, first, that this is true, and second, that 25 of those sales were made in 2011 alone.

Some more history: Synopsys was already the USB IP market leader with more than 60% market share when they bought ChipIdea from MIPS, reaching more than 80% market share at a time when the USB IP market consisted only of High Speed (2.0), Full Speed and Low Speed. When SuperSpeed USB was released, the backward-compatibility constraint required providing both the USB 2.0 and USB 3.0 functions to be 100% compatible. The IP vendors previously active in the USB 2.0 market had disappeared (except Faraday), and the newcomers, able to easily manage the design of a 5 Gbps SerDes to build the USB 3.0 PHY, were missing the “stupid” 480 Mbps PHY you need to provide in order to be fully compatible…

Synopsys has thus been in a very good position to capitalize on its existing USB portfolio and experience, develop the SuperSpeed USB PHY and controller, and integrate all the pieces into a complete, 100% USB 3.0 compatible solution. The testimonial from Realtek highlights how important Synopsys’s track record in USB 2.0 was to their selection for USB 3.0:

“We taped-out Synopsys’ DesignWare USB 3.0 host and USB 3.0 device in three chips targeted at the digital home and PC peripheral markets, and all are now shipping in mass production,” said Jessy Chen, executive vice president of Realtek Semiconductor Corporation. “We chose Synopsys DesignWare IP because of the company’s excellent track record in USB 2.0. With Synopsys’ USB 3.0 IP now fully certified and proven in our chips, we are certain we picked the right IP partner. We have been at the forefront of USB 3.0 development and integration, and have many innovative chips using Synopsys USB 3.0 IP coming in 2012.”

Very important for today’s SoC designs is the capability offered by the IP vendor to support validation of the function (IP), of the chip prior to mask generation, and of the software as early as possible in the product development cycle, to speed up time-to-market and guarantee first-pass success. This is possible when the following boxes are ticked:

  • Verification IP available at the same time as the IP
  • IP certification obtained, which is especially important for a standards-based protocol
  • Availability of an FPGA-based prototyping solution (HAPS) to validate the software as early as possible, in parallel with the SoC development, as stated by DisplayLink:

“Working with Synopsys for our USB 3.0 controller, HDMI controller and PHY IP helped us mitigate our project risk and reach volume production with our first-pass silicon,” said Jonathan Jeacocke, vice president of engineering at DisplayLink. “In addition, we used Synopsys’ HAPS® FPGA-based prototyping solution to build fully functional systems for at-speed testing of USB 3.0 and HDMI, including architecture validation, performance testing, software development and customer demonstrations.”


Some details about these HAPS boards:
· Two PCs are connected to HAPS FPGA-based prototyping platforms.
· The HAPS on the left has the Synopsys USB 3.0 Host with a Synopsys USB 3.0 PHY daughter card.
· The HAPS platform on the right has the Synopsys USB 3.0 Device, also with a Synopsys USB 3.0 PHY daughter card.
· This is also connected via PCIe (using the Synopsys PCIe core) to a PC running Linux drivers for the Device.

The HAPS boards and PHY boards are off-the-shelf from Synopsys, and the USB 3.0 Host, USB 3.0 Device and PCIe cores are from Synopsys.

If we look at the market segments where USB 3.0 adoption will come first: “In-Stat expects several hundred million USB 3.0-enabled devices will ship in 2012, including a large share of tablets, mobile and desktop PCs, external hard drives and flash drives,” said Brian O’Rourke, research director at In-Stat. “By 2014, we expect many consumer electronics devices to transition to USB 3.0, including digital cameras, mobile phones and digital televisions. Overall, in 2014, we forecast that 1.4 billion USB 3.0 devices will ship. IP suppliers like Synopsys will help fuel this explosion in USB 3.0 adoption.”

I fully agree with the forecast that several hundred million USB 3.0-enabled devices will ship in 2012; I would just point out that the external hard drives offered to consumers today are already USB 3.0-enabled, now in 2011. Moreover, IPNEST doesn’t think we will have to wait until 2014 to see smartphones supporting USB 3.0; we will probably see these devices on the market before, or at the same time as, media tablets, as more than 60% of media tablets use the same application processor as smartphones.

Then there will be a second wave of consumer electronics devices transitioning, namely digital TVs, set-top boxes and Blu-ray players, shipping in 2012-2013, followed by digital video cameras and digital still cameras. This means IP sales starting now and continuing through 2012 to allow for a minimum development time. In fact, we have built a forecast for USB 3.0 IP sales based on a bottom-up analysis, looking at the different applications in every market segment that could transition to USB 3.0 and, even more important, trying to determine when the IP sales will happen, application by application. The result is a very complete 50-page document, where you can find much useful information, like the design start evaluation (generating USB 3.0 IP sales) up to 2015:



Eric Esteve from IPNEST

– Table of Content for “USB 3.0 IP Forecast 2011-2015” available here


SoC Realization: Let’s Get Physical!
by Paul McLellan on 10-05-2011 at 1:41 pm

If you ask design groups what the biggest challenges are to getting a chip out on time, the top two are usually verification and getting closure after physical design. Not just timing closure, but power and area. One of the big drivers of this is predicting and avoiding excessive routing congestion, which has only downside: area, timing and power are all worse (unless additional metal layers are used, which obviously increases cost).

A typical SoC today is actually more of an assembly of IP blocks, perhaps with a network-on-chip (NoC) infrastructure to tie it all together, itself an approach partially motivated by better routability aka less routing congestion.

Some routing congestion, physical congestion, is caused by how the chip floorplan is created. Like playing tic-tac-toe where you always want to start by grabbing the middle square, routing resources in the middle of the chip are at a premium and creating a floorplan that minimizes the number of wires that need to go through there is almost always a good idea. The ideal floorplan, never truly achieved in practice, has roughly even routing congestion across the whole chip.

But other routing congestion is logical congestion, inherent in the design. This comes in two flavors: core congestion and peripheral congestion.

Core congestion is inherent in the structure of the IP block. For example, very high fanout muxes will bring a large number of routes into the area of the mux causing congestion. This is inherent in the way the RTL is written and is not something that a good floorplan or a clever router can correct. Other common culprits are high fanout nets, high fanin nets and cells that have a large number of pins in a small area.
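
To give a concrete sense of how such structural culprits can be screened for, here is a rough Python sketch of a pin-count check over a toy netlist representation. The data format and threshold are invented; as noted later in the article, this kind of rule of thumb is noisy and flags structures that are actually benign.

```python
# Toy rule-of-thumb screen: flag nets connected to an unusually large number
# of pins in a flattened gate-level netlist represented as simple tuples.

from collections import defaultdict

def high_pin_count_nets(cells, threshold=32):
    """`cells` is a list of (cell_name, input_nets, output_nets) tuples.
    Return {net: pin_count} for nets touching more than `threshold` pins."""
    pins = defaultdict(int)
    for _name, inputs, outputs in cells:
        for net in list(inputs) + list(outputs):
            pins[net] += 1
    return {net: count for net, count in pins.items() if count > threshold}

if __name__ == "__main__":
    # A 64-to-1 mux-like cell plus 64 drivers sharing one enable net "en".
    cells = [("mux64", [f"d{i}" for i in range(64)] + ["sel"], ["y"])]
    cells += [(f"drv{i}", ["en"], [f"d{i}"]) for i in range(64)]
    print(high_pin_count_nets(cells))  # the high-fanout net "en" is flagged
```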

Peripheral congestion is caused when certain IP blocks have large numbers of I/O ports converging on a small number of logic gates. This is not really visible at module development time (because the module has yet to be hooked up to its environment) but becomes so when the block is integrated into the next level up the hierarchy.

The challenge with logical congestion is that it is baked-in when the RTL is created, but generally RTL tools and groups are not considering congestion. For example, high-level synthesis is focused on hitting a performance/area (and perhaps power) sweet spot. IP development groups don’t know the environment in which their block will be used.

The traditional solution to this problem has been to ignore it until problems show up in physical design, and then attempt to fix them there. This works fine for physical congestion, but logical congestion really requires changes to the RTL, in ways which are hard to comprehend when down in the guts of place and route. This process can be short-circuited by doing trial layouts during RTL development, but the RTL must be largely complete for this, so it is still late in the design cycle.

An alternative is to use “rules of thumb” and the production synthesis tool. But these days, synthesis is not a quick push-button process, and the rules of thumb (high fanout muxes are bad) tend to be very noisy and produce a lot of false positives: structures that are flagged as bad when they are actually benign.

What is required is a tool that can be used during RTL authoring. It needs to have several attributes. First, it needs to give quick feedback during RTL authoring, not later in the design cycle when the authoring teams have moved on. Second, it needs to minimize the number of false errors that cause time to be wasted fixing non-problems. And third, the tool must do a good job of cross-probing, identifying the culprit RTL rather than just some routing congestion at the gate level.

Products are starting to emerge in EDA to address this problem, including SpyGlass Physical, aimed (despite Physical in the name) at RTL authoring. It offers many capabilities for resolving logical congestion issues up front, easy-to-use physical rules, and debug capabilities to pinpoint root causes so that they can be fixed early, along with simple reports on the congestion status of entire RTL blocks.

The Atrenta white-paper on SpyGlass Physical can be downloaded here.


Amazon’s Kindle Fire Spells Trouble for nVidia, Qualcomm and Intel
by Ed McKernan on 10-05-2011 at 11:50 am

With the introduction of the Kindle Fire, it is now guaranteed that Amazon has the formula down for building the new, high-volume mobile platform based on sub-$9 processors. In measured fashion, Amazon has moved down the Moore’s Law curve from the initial 90nm Freescale processor to what is reported to be TI’s OMAP 4 in order to add the internet, music and movies to its previously single-function e-book environment. Some view it as a competitor to Apple; however, the near-term impact is on brick-and-mortar competitors (e.g., Barnes and Noble, Walmart) and on the mostly snail-mail-based movie house Netflix.
Continue reading “Amazon’s Kindle Fire Spells Trouble for nVidia, Qualcomm and Intel”


AMS Verification: Speed versus Accuracy
by Daniel Nenni on 10-03-2011 at 9:16 pm

I spent Thursday Sept. 22 at the first nanometer Circuit Verification Forum, held at TechMart in Santa Clara. Hosted by Berkeley Design Automation (BDA), the forum was attended by 100+ people, with circuit designers dominating. I spoke with many attendees. They were seeking solutions to the hugely challenging problems they are wrestling with today when verifying high-speed and high-performance analog and mixed-signal circuits on advanced nanometer process geometries.

Continue reading “AMS Verification: Speed versus Accuracy”


Verdi: there’s an App for that
by Paul McLellan on 10-03-2011 at 5:58 am

Verdi is very widely used in verification groups, perhaps the industry’s most popular debug system. But users have not been able to access the Verdi environment to write their own scripts or applications. This means either that they are prevented from doing something that they want to do, or else the barrier for doing it is very high, requiring them to create databases and parsers and user-interfaces. That is now changing. Going forward the Verdi platform is being opened up, giving access to the KDB database of design data, the FSDB database of vectors and the Verdi GUI.

This lets users customize the way they use Verdi for debug and create “do-it-yourself” features and use-models, without having to recreate an entire infrastructure from scratch before they can get started. There are interfaces available for both TCL access and C-code access. As a scripting language, TCL is usually quicker to write, but C, while harder to write, will usually win when high computational efficiency is required.

There are a lot of areas where users might want to extend the Verdi functionality. Probably the biggest is design rule checking. Companies often have proprietary rules that they would like to enforce but no easy way, until now, to build a qualification tool. Or users might want to take output from some other tool and annotate it into the Verdi GUI rather than trying to process the raw data directly.
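
To give a flavor of what such a do-it-yourself check might look like, here is a conceptual Python sketch of a proprietary naming rule applied to a list of clock net names. It deliberately does not use the real Verdi TCL or C interfaces into KDB or FSDB; a production VIA would query those APIs rather than take a plain list.

```python
# Conceptual only: enforce a house naming convention on clock nets. The list of
# names stands in for whatever a real VIA would pull out of the design database.

import re

CLOCK_NAME_RULE = re.compile(r"^clk_[a-z0-9_]+$")

def check_clock_names(clock_nets):
    """Return the clock net names that violate the naming convention."""
    return [net for net in clock_nets if not CLOCK_NAME_RULE.match(net)]

if __name__ == "__main__":
    print(check_clock_names(["clk_core", "CLK_DDR", "clk_usb3", "sysClock"]))
    # -> ['CLK_DDR', 'sysClock']
```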

These small programs that run within the Verdi environment are known as Verdi Interoperability Apps, or VIAs.

In addition to allowing users to create such apps, there is also a VIA exchange that allows users to freely share and reuse them. So if a user wants to customize Verdi in some way, it may not even be necessary to write a script or some code, since someone may already have done it, or at least done something close that might serve as a good starting point. The VIA exchange is at http://www.via-exchange.com.

In addition to making TCL scripts and C programs available for download, the VIA exchange also has quick-start training material and a user forum for sharing and exchanging scripts and getting questions answered. There are already over 60 function/procedure examples and over 30 scripts and applications contributed by SpringSoft’s own people, by Verdi users and by EDA partners.

Once again, the VIA exchange website is here.


Making Money With Cramer? Don’t Count on it!
by Daniel Nenni on 10-02-2011 at 11:16 pm

Investing with Cramer is a crap shoot. By Cramer, I mean the Mad Money TV show, and Action Alerts PLUS from thestreet.com. Cramer is certainly a smart guy and knows his stuff, but don’t think following his investment strategy is necessarily a winner. He constantly maintains that you can beat the averages by picking individual stocks and doing your homework. This just isn’t true.

Cramer did a piece on Cadence that I blogged about in 2009, Jim Cramer’s CNBC Mad Money on Cadence!, in which I concluded that Jim Cramer is in fact an “infotainer” and prone to pump-and-dump groupies. In regard to CDNS, however, he got lucky (CDNS has doubled since then).

Cramer may have made money as a hedge fund manager, but he also used many tools (such as options, shorting, etc.) which most Joe on the street investors don’t utilize. He also had a staff, and access to much better tools and information.

A good friend of mine is getting killed with his recommendations!

It drives me crazy when he talks on his show about “I recommended this great stock much lower…” He does occasionally mention the ones that get crushed. Of course, if he did this as a matter of routine, people wouldn’t watch.

The most salient fact: His Action Alerts PLUS portfolio hasn’t beaten the S&P (when you include dividends) since 2007.

Several awful Cramer recommendations, many of which my friend has gotten creamed on, include Bank of America, Netflix, Limelight Networks, Juniper, Teva, GM, Ford, BP, Alcoa, Apache, Express Scripts, Freeport McMoran, Starwood Hotels, and Johnson Controls. There is probably a list of winners just as long, but I’m in no mood for that.

Sour grapes? Of course. Watching Cramer and subscribing to Action Alerts PLUS will make you more informed about the market. Will it make you money? Maybe. There is just as good a chance you’ll make more money with an S&P Index fund. In an up market, you’ll feel smarter about the investments you are making. In a down market you’ll be kicking yourself in the butt.

Cramer makes me think of a “get rich quick book” that’s a really good read. The only one that gets rich is the author, and that is from selling the book.

Note, my friend still watches his TV show, albeit with a lot of TiVo fast-forwarding. You can be interested in Cramer’s opinion on market direction. You can subscribe to Action Alerts PLUS. Is it worth $399/yr to be better informed? The hitch is that you’ll never be better informed than the pros on Wall Street, and more information will not necessarily make you money.

I think Cramer is a smart guy and a helluva entertainer. However, I think he does the average individual investor a disservice by leading them to believe that he can help make them above-average returns. I’ve not seen the Action Alerts PLUS service, but if it’s like most newsletters, it’s lacking in timing and exit strategies.

I have some possible explanations for his chronic underperformance despite his intellect, experience, and huge research staff:

1. He has to have three new ideas every day. If I have a good investment idea once a month I’m happy.

2. Time frame: a huge part of managing a portfolio has to do with investment horizon. If you have a 10+ year time frame, you don’t care what the Finance Minister of Germany is saying about Greece. But Cramer has to have a stock idea that answers that news sound bite. For this type of recommendation over a short period of time, you are going to get very random results.

I think that an active financial manager who uses a tactical asset allocation strategy, along with an industry sector strategy based on macroeconomic analysis and individual stock selection based on sound fundamental analysis, can outperform a passive benchmark index over the long term. However, all this work and strategy may only mean an extra 1.5-2.0% pickup in total return. Individual investors should use someone who is willing to execute this type of individual portfolio management for a reasonable fee of 0.75-1.0%.
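
As a back-of-the-envelope check on those numbers, here is a small Python calculation using assumed values (a 7% benchmark return and the midpoints of the pickup and fee ranges above); it is purely illustrative, not investment advice.

```python
# Compound a passive benchmark against an active strategy that adds the cited
# 1.5-2.0% gross pickup (midpoint used) and charges the cited 0.75-1.0% fee.

def grow(annual_rate, years, start=100_000.0):
    return start * (1 + annual_rate) ** years

INDEX_RETURN = 0.07      # assumed long-term benchmark return
GROSS_PICKUP = 0.0175    # midpoint of the 1.5-2.0% pickup
FEE = 0.00875            # midpoint of the 0.75-1.0% fee

passive = grow(INDEX_RETURN, 10)
active = grow(INDEX_RETURN + GROSS_PICKUP - FEE, 10)
print(f"passive: ${passive:,.0f}  active: ${active:,.0f}  edge: ${active - passive:,.0f}")
```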

The bottom line is that anyone who makes promises of “big money” returns (like Cramer) is either lying (Bernie Madoff) or taking on more risk with your money than you think.


Memory Cell Characterization with a Fast 3D Field Solver
by Daniel Payne on 09-29-2011 at 12:07 pm

Memory designers need to predict the timing, current and power of their designs with high accuracy before tape-out to ensure that all the design goals will be met. Extracting the parasitic values from the IC layout and then running circuit simulation is a trusted methodology; however, the accuracy of the results ultimately depends on the accuracy of the extraction process.

Here’s a summary of extraction techniques:

• Rule-based extraction: high capacity, but limited accuracy (about 10% error on total capacitance and 15% error on coupling capacitance)
• Reference-level solver: high accuracy, but limited capacity and long run times
• Fast 3D solver: high accuracy with fast run times; the newer approach

Bit Cell Design

Consider how a memory bit cell is placed into rows and columns using reflection about the X and Y axes:

The green regions show reflective boundary conditions applied to a cell, enclosed by 2um in the X direction and 4um in the Y direction.

For attofarad accuracy the field solver has to extract the bit cell in the context of its surroundings.
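
To make the reflection idea concrete, here is a small Python sketch that generates array placements by mirroring a bit cell about the X and Y axes on alternating rows and columns. The cell dimensions are invented; the point is only the symmetry that the boundary-condition approach exploits.

```python
# Illustrative tiling of a bit cell into an array using alternating reflections.
# Each placement is (x_offset_um, y_offset_um, mirrored_about_y, mirrored_about_x).

CELL_W, CELL_H = 0.5, 1.0  # assumed bit-cell footprint in microns

def tile(rows, cols):
    placements = []
    for r in range(rows):
        for c in range(cols):
            placements.append((
                c * CELL_W,
                r * CELL_H,
                c % 2 == 1,   # mirror about the Y axis on odd columns
                r % 2 == 1,   # mirror about the X axis on odd rows
            ))
    return placements

if __name__ == "__main__":
    for placement in tile(2, 2):
        print(placement)
```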

Mentor Graphics has a fast 3D field solver called Calibre xACT 3D that can extract a memory bit cell in just 4 seconds using this boundary condition approach, compared to a reference-level solver that requires 2.15 hours. I’ve blogged about xACT 3D before.

Accuracy Comparisons
A memory bit cell was placed and reflected in an array, then the entire array was extracted. The unit bit cell used boundary conditions as shown above and the results were compared against an actual array. The accuracy of the boundary condition approach in Calibre xACT 3D is within 1% of the reference-level field solver.

Another comparison was made for symmetric bit lines in a memory array using the boundary condition approach versus the reference-level field solver, with an accuracy difference within 0.5%.


Beyond the Bit Cell

So we’ve seen that Calibre xACT 3D is fast and accurate with memory bit cells, but what about the rest of the memory, like the decoders and the paths to the chip inputs and outputs?

With multiple processors you can now accurately and quickly extract up to 10 million transistors in about one day:

Summary
Memory designers can extract a highly accurate parasitic netlist on multi-million transistor circuits for use in SPICE circuit simulation. Run times with this fast 3D field solver are acceptable and accuracy compares within 1% of reference-level solvers.

For more details see the complete white paper.


Introducing TLMCentral
by Paul McLellan on 09-29-2011 at 8:00 am

Way back in 1999 the Open SystemC Initiative (OSCI) was launched. In 2005 the IEEE standard for SystemC (IEEE 1666-2005 if you are counting) was approved. In 2008, TLM 2.0 (transaction-level modeling) was standardized, making it easier to build virtual platforms using SystemC models. At least the models should play nicely together, which had been a big problem up until then.

However, the number of design groups using the virtual platform approach still only increased slowly. Everyone loves the message of using virtual platforms for software development, but the practicalities of assembling or creating all the models necessary continued to be a high barrier. Although there are lots of good reasons to use a virtual platform even after hardware is available, the biggest value proposition is to be able to use the platform to get software development started (and sometimes even finished) before silicon is available. And time taken to locate or write models dilutes that value by delaying the start of software development. In fact in a survey that Synopsys was involved with last year, the lack of model availability was one of the biggest barriers to adopting virtual platform technology.


Today, Synopsys announced the creation of TLMCentral. This is a portal to make the exchange of SystemC TLM models much easier. Synopsys is, of course, a supplier of both IP and virtual platform technology (Virtio, VaST, CoWare). But TLMCentral is open to anyone and today there are already 24 companies involved. IP vendors such as ARM, MIPS and Sonics. Service providers such as HCL or Vivante. Other virtual platform vendors such as CoWare and Imperas. And institutes and standards organizations such as Imec and ETRI. The obvious missing names are Cadence, Mentor and Wind River, at least for now. Cadence and Mentor haven’t yet decided whether or not to participate. I don’t know about Wind. Teams from Texas Instruments, LSI, Ricoh and others are already using the exchange.

As I write this on Wednesday, there are already 650 models uploaded, and more are being uploaded every hour. By the time the announcement hits the wire on Thursday morning it will probably be over 700. There are really three basic classes of model: processor models, interface models (what I have always called peripheral models) and environment models. A virtual platform usually consists of one or more processor models, a model for each of the interfaces between the system and the outside world, and some model of the outside world used to stimulate the model and validate its outputs. The processor models run the actual binary code that will eventually run on the final system, ARM or PowerPC binaries for example. By using just-in-time (JIT) compiler technology they can achieve extremely high performance, sometimes running faster than the real hardware. The interface models present the usual register interface on some bus on one side, so the device driver reads and writes them in the normal way, while interfacing in some way to the test harness. Environment models can be used to test systems, for example interfacing a virtual platform of a cell-phone to a cellular network model.
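
As a conceptual illustration of the interface-model idea, here is a toy Python sketch (plain Python, not SystemC or TLM-2.0 code) of a peripheral that exposes a register map to driver code while handing its output to a stand-in for an environment model.

```python
# Toy "interface model": the driver under test reads and writes registers in
# the normal way, while the model forwards data to a test-harness stand-in.

class UartModel:
    """Minimal peripheral with a data register and a status register."""
    DATA, STATUS = 0x0, 0x4
    TX_READY = 0x1

    def __init__(self, harness_log):
        self.harness_log = harness_log            # stands in for the environment model

    def read(self, offset):
        if offset == self.STATUS:
            return self.TX_READY                  # always ready in this toy model
        return 0

    def write(self, offset, value):
        if offset == self.DATA:
            self.harness_log.append(chr(value))   # byte leaves the "chip" to the harness

if __name__ == "__main__":
    log = []
    uart = UartModel(log)
    for byte in b"hi":
        if uart.read(UartModel.STATUS) & UartModel.TX_READY:
            uart.write(UartModel.DATA, byte)
    print("".join(log))  # -> hi
```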

TLMCentral is not an eCommerce site for purchasing models. It is a central resource for searching for them and then finding suppliers. Some models are free and available directly from the site, but some models you must pay for, and you are directed to the vendor. There is also an industry-wide TLM ecosystem allowing users to support each other, exchange models and so on.

There have been other attempts to make models more available, notably Carbon’s IP exchange. But the scale and participation on TLMCentral, and the backing of the largest EDA company, mean that this is already the largest. Success, though, is not so much a matter of how many people sign up on day one as whether it lowers the barriers to adoption of virtual platform based software development. And that will show up as growth, hopefully explosive, in the number of groups using virtualized software development.

TLMCentral is at www.tlmcentral.com