
CDNLive World Tour

by Paul McLellan on 01-28-2014 at 11:00 pm

CDNLive is becoming a real worldwide event, starting in March in Silicon Valley and ending in November in Tel Aviv, Israel.

The complete schedule is:

  • March 11-12th, Santa Clara, California
  • May 19th-21st, Munich, Germany
  • July 15th, Seoul, Korea
  • August 7th, Hsinchu, Taiwan
  • August 11-12th, Bangalore, India
  • August 15th, Shanghai, China
  • September 16th, Boston, Massachusetts
  • September 18th, Austin, Texas
  • November 3rd, Tel Aviv, Israel

As always, much of the content is presentations by users of Cadence tools, plus in-depth ‘techtorials’. Cadence knows that users don’t come to hear marketing people present a lot of PowerPoint; they want stories from the trenches, either from Cadence customers or from Cadence’s own black-belt application engineers.

The call for papers for CDNLive Silicon Valley closed in December, so you are too late if you want to present this year (although how about Germany in May or Austin in September, for a change of scenery?).

To give you a better idea of how this works out, here are the highlights of the 2013 CDNLive in Silicon Valley. Of course the speakers will be different and some of the details will probably change, but the basic format this year will be the same, with a lot of parallel special-interest tracks, keynotes, exhibits and more.

  • Technical Sessions: Sessions took place in nine tracks over the two full days of the conference. Close to 100 presentations were delivered, including a wide variety of user-authored papers addressing all aspects of design and IP creation, integration, and verification. Attendees discovered how others are using Cadence technologies and techniques to realize silicon, SoCs, and systems—efficiently and profitably.
  • Keynote speakers: Attendees heard from industry leaders Lip-Bu Tan (Cadence President & Chief Executive Officer), Young Sohn (Samsung President & Chief Strategy Officer), and Martin Lund (Cadence Sr. VP of Research & Development) about industry trends in silicon, SoC, and system realization.
  • Designer Expo: More than 35 exhibitors participated in the Designer Expo, and highlighted the collaborative ecosystem available to support you. Cadence and our partners enjoyed lunch and an evening reception mingling with customers and exploring joint solutions.
  • Networking opportunities: The R&D luncheon offered an informal atmosphere to engage in stimulating technology discussions with Cadence technologists and industry peers.


If you have a Cadence account you can download the full proceedings from last year here.

Details of this year’s CDNLive events can be found on the CDNLive page here, which has links to more details for each of the individual events (and you can download proceedings for any of the other CDNLive conferences).


More articles by Paul McLellan…


A Brief History of the Apple iPod

by Daniel Payne on 01-28-2014 at 9:58 pm

In January 2001 George W. Bush became the new American president, I was working at Mentor Graphics, and later that year Apple introduced an MP3 player called the iPod, with a hard drive capable of holding 1,000 songs. In the previous decades we had enjoyed portable music from tape, CD, or mini-CD devices like the Sony Walkman. The first several generations of the iPod used two ARM7TDMI-derived CPUs, clocked at just 90 MHz to keep battery life reasonable. [SUP]1[/SUP] The iTunes software helped you organize your music, but at first it only worked on Apple computers.


iPod, 1st generation, 2001[SUP]2[/SUP]

Subsequent generations of the iPod continued to use ARM architecture chips, while the audio chip came from either Wolfson Microelectronics or Cirrus Logic. iTunes added Windows support in 2002, and the iTunes Music Store launched in 2003 with some 200,000 titles, so buying music became easier and more affordable. The iPod family began to branch out in 2004 with the iPod mini, which achieved a smaller size by using Microdrives from Hitachi and Seagate.


iPod Mini, 2004

Just one year later, in 2005, the iPod nano line essentially replaced the iPod mini, and it is best noted for its use of Flash memory instead of a hard drive; Apple’s first solid-state music player helped drive volume for Flash memory chips. The entry-level iPod shuffle debuted that same year; it had no display and also used Flash for music storage.


iPod Nano, 2005


iPod Shuffle, 2005

Competitors like the Microsoft Zune tried to enter the market; the Zune first went on sale in 2006 but was out of the market five years later.[SUP]3[/SUP] Microsoft just never had the marketing buzz, ease of use, or online store success that Apple created.

2007 was the year that Apple announced both the iPhone and iPod Touch products, where the iPod Touch was similar to the iPhone in terms of size, glass display and multi-touch surface, but without a cell phone radio. All the cool kids in middle school could now own an iPod Touch and look like an adult without having to pay a monthly cell-phone bill.


iPod Touch, 2007

Sales of the iPod family of products grew slowly from 2002-2003, then rose briskly in 2004. iPod volumes peaked in 2009 and then declined, driven by increased competition, market saturation, and the shift toward smartphones offered both by Apple and by a growing list of Android competitors.

Semiconductors used inside of the iPod touch include: [SUP]4[/SUP]

  • Apple – ARM-based CPU (similar to the one in the iPhone)
  • Toshiba – NAND Flash (the iPhone uses Samsung Flash)
  • Wolfson – audio chip (shared with the iPhone)
  • Samsung – DRAM
  • Marvell – WiFi chip
  • Apple – Communications chip
  • Broadcom – touch screen controller chip
  • STMicroelectronics – motion sensor chip
  • Texas Instruments – video driver chip

Notice how, by 2007, Apple had started designing its own ARM-based chips and was headed down the path of a fabless design company. Apple and ARM go together all the way back to 1990, when Advanced RISC Machines Ltd was created as a joint venture between Acorn Computers, Apple Computer, and VLSI Technology.

Where Apple doesn’t design its own chips for these high-volume, consumer-oriented products, it often pits one semiconductor vendor against another for commodity parts like: [SUP]5[/SUP]

  • Flash (Toshiba, Samsung, Hynix Semiconductor, Micron Technology)
  • DRAM
  • Audio
  • Radios

Today you have four choices of iPod, each aimed at a slightly different usage:

  • iPod shuffle – tiniest size, highest portability, lowest price.
  • iPod nano – small size, portable, small display.
  • iPod touch – largest display, like an iPhone without the cell phone.
  • iPod classic – greatest storage capacity, small display.

With all of Apple’s success in designing its own processors for the iPod, iPad, and iPhone, you have to wonder if Intel’s days of supplying processors for the iMac, MacBook Pro, and Mac Pro computers are indeed numbered.

Full Disclosure
Our family household owns: iPod, iPad, iPhone, iMac, MacBook Pro and iTunes. We also enjoy: Kindle PaperWhite, Samsung Galaxy S3, Samsung Galaxy Note 2, HTC One, Google Nexus, HP laptop, Dell laptop and custom gaming PCs.

References
1. Wikipedia
2. Apple
3. Wikipedia
4. iFixit
5. Bloomberg Businessweek



Xilinx’s Mixed Signal FPGA

by Luke Miller on 01-28-2014 at 10:00 am

Something gets lost in all the Xilinx chatter about UltraScale at 20nm and 16nm, massive numbers of gigabit transceivers, DSP blocks, RAM, HLS, and Rapid Design Closure... and that is Xilinx’s mixed-signal capability. I do not mean the mixed signals you get when talking with the wife (remember: listen!), but a wonderful block that lives within the 7 series of Xilinx FPGAs. It is called the ‘XADC’, and what it does is marvelous; once again, using Xilinx allows more of the board design, integration, cost, and time to market (the FPGA blob strikes again...) to be absorbed by the Xilinx FPGA. Let me explain...

In the ‘Internet of Things’ and everything else around us there is a desire for sensor fusion, and we live in an analogue world. Everything needs to be monitored: power, current, voltage, temperature, humidity, velocity, acceleration, jerk, jolt, jounce, kids’ attitudes, audio streams, heart rates, blood pressure, glucose levels, distance, light intensity, flow meters... and the list keeps going. For most of the sensors above, unless we are doing RADAR/EW, LTE, or medical design, we do not need 3 GHz sample rates and beyond. So just for a few moments think, my friend, about all that you can do! In one Xilinx part, no less.

Below is the block diagram of the XADC in the Xilinx 7 series FPGAs. It is a very powerful core, and an important one, because we can now think of the FPGA as not just a piece of digital processing but as true mixed-signal processing. That is not a trivial task.

In the diagram you will see dual 12-bit, 1 mega-sample-per-second (MSPS) ADCs. The dual ADCs support a range of operating modes, such as externally triggered and simultaneous sampling on both ADCs, and various analog input signal types, for example single-ended. The ADCs can access up to 17 external analog input channels, and all of the channels are available to your FPGA design. Of course the channels are time-interleaved; they can be triggered or continuously sampled round-robin. If all this does not excite you, please wire the ADCs to measure your heart rhythm, as you may be dead; make sure to filter out any 60 Hz noise and the higher frequencies from the fluorescent lights. You could even use Vivado HLS for an adaptive filter algorithm using QR decomposition. Sorry, I digressed. As for me and my fellow nerds, we are excited!
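The joke about filtering 60 Hz mains hum from a heart-rate measurement is easy to sketch. Below is a minimal second-order IIR notch filter in plain Python/NumPy, purely to illustrate the math; a real implementation would of course live in the FPGA fabric (or be generated with Vivado HLS), and all of the signal parameters here are made up.

```python
import numpy as np

def notch_filter(x, fs, f0=60.0, r=0.99):
    """Second-order IIR notch: zeros on the unit circle at +/- f0,
    poles just inside at radius r (the notch narrows as r -> 1)."""
    w0 = 2 * np.pi * f0 / fs
    b = np.array([1.0, -2 * np.cos(w0), 1.0])      # numerator (zeros)
    a = np.array([1.0, -2 * r * np.cos(w0), r * r])  # denominator (poles)
    y = np.zeros_like(x)
    for n in range(len(x)):
        y[n] = (b[0] * x[n]
                + (b[1] * x[n - 1] if n >= 1 else 0.0)
                + (b[2] * x[n - 2] if n >= 2 else 0.0)
                - (a[1] * y[n - 1] if n >= 1 else 0.0)
                - (a[2] * y[n - 2] if n >= 2 else 0.0))
    return y

fs = 1000.0                              # 1 kSPS is easy for the XADC
t = np.arange(0, 2.0, 1.0 / fs)
heartbeat = np.sin(2 * np.pi * 1.2 * t)  # toy ~72 bpm fundamental
mains = 0.5 * np.sin(2 * np.pi * 60.0 * t)
clean = notch_filter(heartbeat + mains, fs)  # hum removed, heartbeat kept
```

The low-frequency "heartbeat" passes through with near-unity gain while the 60 Hz tone lands exactly on the filter's zeros and is nulled in steady state.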

So you want to try working with the XADC; I know you do! Once again, here is where Xilinx shines: there is a vast amount of resources and reference designs that let you get started in a matter of minutes. I always like video before reading, so here is a great Xilinx video to watch. All the details of the XADC are found in UG480 here, Zynq included. Now for the bread and butter, the reference design... here you go, click here (you will need to sign up for a Xilinx account). In this design Xilinx shows the user how to perform analogue simulations, which is really, really cool, and hopefully our minds will start thinking that Xilinx is more than just digital!

More articles by Luke Miller…



TSMC OIP presentations available!

by Beth Martin on 01-27-2014 at 6:27 pm

Are you a TSMC customer or partner? If so, you’ll want to take a look at these presentations from the 2013 TSMC Open Innovation Platform conference:

Through close cooperation between Mentor and Synopsys, Synopsys Laker users can check with Calibre “on the fly” during design to speed creation of design-rule-correct layout, including electrically-aware, voltage-dependent DRC checks.

  • Verify TSMC 20nm Reliability Using Calibre PERC(Mentor Graphics)
    Calibre PERC was used in close collaboration with TSMC IO/ESD team to develop an automatic verification kit to verify CDM ESD issues for the N20 node.

  • EDA-Based DFT for 3D-IC Applications (Mentor Graphics)
    Testing of TSMC’s 2.5D/3D ICs implies changes to traditional Built-In Self-Test (BIST) insertion flows provided by commercial EDA tools. Tessent tools provide a number of capabilities that address these requirements while reducing expensive design iterations or ECOs, which ultimately translates to a lower cost per device.

  • Advanced Chip Assembly & Design Closure Flow Using Olympus-SoC (Mentor Graphics & NVIDIA)
    Mentor and NVIDIA discuss the chip assembly and design closure solution for TSMC processes, including concurrent MCMM optimization, synchronous handling of replicated partitions, and layer promotion of critical nets for addressing variation in resistance across layers.

More articles by Beth Martin…


Compositions allow NoCs to connect easier

by Don Dingee on 01-27-2014 at 6:00 pm

I blame it on Henry Ford, William Levitt, and the NY State Board of Regents, among others. We went through a phase with this irresistible urge to stamp out blocks of sameness, creating mass produced clones of everything from cars to houses to students.

Thank goodness, that’s pretty much over. The thinking of simplifying system design to quickly produce products of uniform quality had its run – everywhere that is, except the semiconductor industry. Reflected in slogans like “Copy Exactly” and the combined physics and economics of transistor theory demanding replication by the billions, semiconductors rely on sameness for viability in production.

Levittown, PA, circa 1959 – courtesy Wikipedia

System designers feel no such constraints, however. The last I checked with the folks at Semico Research, the number of unique IP blocks in a large SoC design is approaching 100. Diversity in blocks is on the increase, with CPUs, GPUs, memory, network interfaces, display interfaces, camera interfaces, and more – each with their own unique requirements for interconnect.

The problem lies where diversity of design meets parity of production, a process we oversimplify into the term “integration”. To get IP blocks working together in the design, they obviously have to connect somehow, and hardware teams went to work on optimizing interconnects by the type of block to get the most performance in the least space. For visibility into design, test teams demand that everything be accessible with the same interconnect. For programming, software teams demand that everything be accessible using as few protocols as necessary.

Keeping the interconnect simple proved to be more difficult than thought. We tried the bus approach; it worked when the IP block count was fairly small, but quickly resulted in conflicts as multiple devices vied for limited resources. We tried JTAG, which served the needs of test but didn’t help with performance. We tried the crossbar matrix; it achieved performance but became so complex in and of itself, it was difficult to implement for larger designs in smaller geometries.

The network-on-chip (NoC) was born, to provide an abstraction between IP blocks using an initiator-target strategy. As hardware designers got familiar with the approach, different NoC implementations evolved. This meant one of three things had to happen in larger SoC implementations: 1) design teams had to agree on a NoC, and adapt each IP block into it, meaning in some cases a lot of work; or 2) design teams were restricted in the IP they could select to only blocks using the NoC of choice; or 3) a top-level NoC layer communicating between disparate NoC layers had to evolve, adding latency with a second layer.

None of those are great choices for most designs. Arteris thinks they have the solution in a new strategy: NoC compositions. FlexNoC uses the connectivity and address maps from all NoCs in a system to derive a connectivity and address map of each target, seen from each initiator, in each mode of operation. It also builds a top-level model of interconnect, which allows a full SoC simulation, and is able to check for degenerate routing loops that allow for deadlocks.
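The per-initiator view that a composition derives can be pictured with a toy graph walk. Everything below (the link tables, address map, and function names) is invented for illustration and is not FlexNoC's actual input format:

```python
# Toy model of composing per-NoC connectivity into a global initiator->target view.
# Each NoC lists directed links; an inter-NoC bridge appears as a shared endpoint.
links = {
    "cpu_noc": [("cpu", "l2"), ("cpu", "bridge_a")],
    "io_noc":  [("bridge_a", "dma"), ("bridge_a", "uart")],
}
# Targets are the endpoints that own an address range (start, end).
address_map = {
    "l2":   (0x0000_0000, 0x0FFF_FFFF),
    "dma":  (0x4000_0000, 0x4000_0FFF),
    "uart": (0x4001_0000, 0x4001_0FFF),
}

def reachable_targets(initiator):
    """Flatten every NoC's link table and walk it to find each target
    (an endpoint with an address range) visible from one initiator."""
    edges = {}
    for noc in links.values():
        for src, dst in noc:
            edges.setdefault(src, []).append(dst)
    seen, stack, targets = set(), [initiator], {}
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        if node in address_map:
            targets[node] = address_map[node]
        stack.extend(edges.get(node, []))
    return targets

# The CPU sees targets in both NoCs without a second NoC layer in the model.
cpu_view = reachable_targets("cpu")
```

The same flattened graph is what lets a tool check for routing cycles (potential deadlocks) across NoC boundaries, rather than inside each NoC alone.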

This approach eliminates the dreaded second layer NoC, and doesn’t require additional bridging which would add further delays. Background on the NoC composition strategy is available in a new white paper authored by Jonah Probell, senior solutions architect at Arteris.

Playing Well with Others: How NoC Compositions Enable Global Team Design

Rather than enforcing stiff rules on IP design teams in creating interconnects, or limiting their choices of third-party IP, NoC compositions could ease the process of generating high-performance interconnect between disparate IP subsystems within an SoC.

More Articles by Don Dingee…..



Simulation of Novel TFT Devices

by admin on 01-27-2014 at 5:45 pm

Traditionally, logic built from thin-film transistors (TFTs) has used one type of device: either an NMOS a-Si:H TFT (hydrogenated amorphous silicon) or a PMOS organic device. Recently, a-Si:H NMOS and pentacene PMOS TFTs have been integrated into complementary logic structures similar to CMOS. This, in turn, creates the problem of how to model and simulate these structures.

This is a special case of something that Silvaco does all the time, since it has a full line of TCAD products along with modeling and circuit simulation tools. The basic idea is to use the Silvaco Athena and Atlas TCAD tools to model the process used to build the TFT devices and perform process and device simulation. The results are then converted into the Utmost IV data format, from which models can be extracted for use in circuit simulation to predict performance. The TCAD tools close the gap between the technology development (TD) process engineers and the designers, two worlds that have very different knowledge bases.


TFT circuits with an all-NMOS (or all-PMOS) topology have large static power dissipation due to a direct path from supply to ground, just as in the days of NMOS and HMOS process technologies before the world went completely CMOS for logic. This power dissipation means such circuits cannot be used in battery-operated portable systems. So, just as we did with CMOS, we can integrate an a-Si:H NMOS TFT with a pentacene PMOS TFT in a complementary structure to form a hybrid inverter circuit. The TCAD data is converted to Utmost IV format and model extraction is done in Utmost IV. For the pentacene-based PTFT a UOTFT model was used; for the NMOS a-Si:H TFT an RPI a-Si TFT model was used. The extracted SPICE models are then used in the hybrid inverter circuit and in a ring oscillator containing five of them.


And yes, the ring oscillator oscillates:

So that is a lot of buzzwords and initials, but the important ideas are fairly simple. TCAD simulation of these novel devices was done and the output data was converted to Utmost IV format. Using this data, SPICE models for the a-Si TFT (level=35) and the organic TFT (UOTFT model, level=37) were extracted and used to successfully simulate a five-stage ring oscillator built from the hybrid inverter. Basically, starting from the details of the process, SPICE models are automatically generated and then used for circuit simulation and analysis.
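As a sanity check on such a result, the oscillation frequency of a ring oscillator follows directly from the inverter delay: an odd chain of N inverters oscillates at f = 1/(2·N·t_pd). A tiny sketch, with a made-up hybrid-TFT propagation delay (the Silvaco paper has the real numbers):

```python
def ring_oscillator_freq(n_stages, t_pd):
    """Frequency of a ring of n_stages inverters, each with
    propagation delay t_pd (seconds): a full period is one rising
    and one falling traversal of the whole ring."""
    assert n_stages % 2 == 1, "ring needs an odd number of inverters"
    return 1.0 / (2 * n_stages * t_pd)

t_pd = 5e-6                           # hypothetical hybrid-TFT delay: 5 us
f = ring_oscillator_freq(5, t_pd)     # about 20 kHz for these toy numbers
```

An even number of stages settles to a stable state instead of oscillating, which is why the assertion insists on an odd count.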

The full white paper is available on the Silvaco website here.

More articles by Paul McLellan…


TSMC projects $800 Million of 2.5/3D-IC Revenues for 2016

by Herb Reiter on 01-27-2014 at 11:00 am

At TSMC’s latest earnings call, held in mid-January 2014, an analyst asked TSMC for a revenue forecast for their emerging 2.5/3D product line. C.C. Wei, President and Co-CEO, answered: “800 Million Dollars in 2016”. TSMC has demonstrated great vision many times before, and for me, an enthusiastic supporter of this technology, this statement represents a big morale boost. I had the opportunity to drive Synopsys’ support for the early TSMC reference flows and saw how that strategic move paid off very well for the entire fabless ecosystem. In my humble opinion, 2.5 and 3D ICs will have as great an impact on our industry as TSMC’s reference flows have had.

TSMC’s prediction for 2.5/3D revenues confirms what I see and hear: several large companies and an impressive number of smaller ones are starting to rely, or already rely, on 2.5/3D technology for products that will become available sometime between 2014 and 2016. Why rely on 2.5/3D technology? Because continued shrinking of feature sizes, even with FinFETs, is no longer economical for many applications, and wire-bonded multi-die solutions or package-on-package can no longer meet performance and power requirements.

How can busy engineering teams quickly evaluate and choose the best alternative between current and the new 2.5 or 3D-IC solutions?

Because this technology shifts a major part of the value creation into the package, packaging is becoming more important and must be considered PRIOR to silicon development. A new book captures much of the packaging expertise Professor Swaminathan has gained in the last 20 years while working at IBM and teaching and researching at Georgia Tech. Together with Ki Jin Han, he addresses most of the topics system and IC designers need to consider when utilizing 2.5 and 3D-IC solutions. Professor Swaminathan is also accumulating hands-on 2.5 and 3D experience as CTO of E-System Design, an EDA start-up in this field. Their 2.5/3D book is available at Amazon.com.

Chapter 1 explains why interconnect delays and the related power dissipation are constraining designers, and how Through-Silicon Vias (TSVs) help to finally break down the dreaded “Memory Wall”. Either a 2.5D IC (die side-by-side on an interposer) or a 3D IC (vertically stacked die) can better meet performance, power, system cost, and other requirements, but before expensive implementation is started, the various options available in each need to be objectively evaluated. Both solutions increase bandwidth while lowering power dissipation, latency, and package height. In addition, they simplify integration of heterogeneous functions in a package, for example combining a large amount of memory with a multi-core CPU or adding analog/RF circuits to a logic die.

Chapter 2’s primary target audience is modeling and design tools developers. It explains how to accurately simulate the impact of TSVs, solder balls and bonding wires on high-speed designs – information also useful for package and IC designers.

Chapter 3 dives into a lot of practical considerations for designing with the above mentioned IC building blocks.

Chapter 4 focuses on signal integrity challenges, coupling between TSV as well as power and ground plane requirements. Both silicon and glass interposers are covered.

Chapter 5 addresses power distribution and thermal management and Chapter 6 looks at future concepts currently in development for solving 2.5/3D-IC design challenges.

The many formulas and examples in this book make it a great reference for experienced IC and package designers.

Herb@eda2asic



What will drive MEMS to drive I-o-T and I-o-P?

by Pawan Fangaria on 01-27-2014 at 5:45 am

By I-o-P, I mean Internet-of-People. I couldn’t think of anything better to describe a technology that becomes your custodian for everything you do; you may consider it a good companion through life, or an invariably controlling spy. This is obvious from the embedded-sensor techno-products that made headlines among the most disruptive innovations at CES 2014: Kolibree, a smart toothbrush (it tells you about your brushing style and effectiveness, recorded on your smartphone to be later examined by your dentist); PulseWallet (it biometrically reads your palm and links to stored credit card information to make payments without needing the card); Beddit (it tracks your sleep pattern); and others. Veristride is developing a technology for your shoes that can tell you how you walk and how to improve, Netatmo is developing bracelets that monitor your exposure to sunlight and tell you when to put on your goggles and apply sunscreen, and Cityzen Sciences is developing smart fabric for your techno-shirts! A comprehensive report of disruptive innovations is published here.

I-o-T again has a long list of gadgets, and that list is going to expand tremendously in the near future through further proliferation into consumer, healthcare, mobile, automotive, aerospace, military, and industrial applications. We will undoubtedly see more technology products joining the smartphone revolution bandwagon in the coming years.

What enables them to gain such exciting acceptance in the market? MEMS are present in every such device, making it responsive to our environment, whether in the form of touch, motion, feel, sound, weight, pressure, or any other kind of change. There are enormous numbers of ways to build technologies that sense our environment, and they will be driven by MEMS. Starting with actuators for automotive airbags, MEMS have expanded into several other areas with accelerometers, gyroscopes, microphones, resonators, switches, optical mirrors, and so on.

Despite many design and manufacturing challenges, the MEMS growth rate has been higher than the overall semiconductor industry average; sensors and actuators revenue was $8.41B in 2012 and was forecast to reach $9.09B in 2013, up 8.1%, according to the report at iSuppli. It is expected to climb at a double-digit growth rate from here, reaching revenue of about $12.21B by 2017. While Texas Instruments, STMicroelectronics, HP and Bosch are among the top MEMS players across the world, Taiwan is seeing major volume growth in the MEMS market; Domintech and mCube are each estimated to ship 10 million units of accelerometers this year.

MEMS are rapidly moving into the mainstream of the modern semiconductor industry. What drives MEMS is their ability to be manufactured as tiny pieces that can be integrated with ICs in a package, thus enabling the niche devices we talked about. Other factors that will help MEMS proliferate are low power and low cost.

So, how do we scale up MEMS manufacturing in smaller size, power, integration, and at lower cost? Since MEMS devices involve more physical variables such as motion, their process development is highly customized as per the design of the device. There is no industry standard process for MEMS unlike ICs. And therefore MEMS and IC cannot be on the same die as of today; they need to be put together into a package. My personal opinion is that 3D-IC in its assembly of planes can dedicate a few planes for MEMS, as the 3D-IC process technology matures going forward.

Anyway, apart from more compact packaging, the stage is already set for many other developments toward better integration, accuracy, faster pace, and lower cost. As I indicated in my last article here, Coventor’s SEMulator3D tool can enable faster and more accurate process modeling and scaling for MEMS manufacturing through its Virtual Fabrication platform. That can reduce cost significantly by eliminating time-consuming and expensive build-and-test cycles. It can also help accelerate development of newer and more complex MEMS models to fuel the growth of I-o-T and I-o-P.

From a design standpoint, Coventor provides an integrated MEMS and IC co-design and verification environment through the new release of its MEMS+ 4.0 suite of tools. This enables MEMS components to be designed in a 3D design entry system and imported as symbols into MathWorks (MATLAB, Simulink) and Cadence (Virtuoso) schematic environments. The MEMS models can be automatically exported in Verilog-A, which can then be simulated together with the IC description in any environment that supports Verilog-A: Cadence Virtuoso or other AMS simulators. These models simulate extremely fast, up to 100X faster than full MEMS+ models. By automating the hand-off between MEMS and IC designers, this approach can eliminate design errors and thereby require fewer build-and-test cycles. More details can be found in another article here.

Although we may be a bit away from having MEMS and IC on the same die, today we do have the tools and infrastructure to design and verify them together, so they can be put together in a system accurately, quickly, and at reasonable cost. Coventor tools can be used by foundries as well as fabless design houses to exploit the large window of opportunity in the MEMS business.

It’s heartening to see GLOBALFOUNDRIES taking a lead in volume production of MEMS by pursuing the path of IC fab-like production discipline. Such moves can bring standardization of MEMS manufacturing closer, which will be key to boosting the business further.

More Articles by Pawan Fangaria…..


SPICE Circuit Simulator Gets a Jolt

by Daniel Payne on 01-25-2014 at 11:28 am

I’ve been using SPICE circuit simulators since 1978, both internally and commercially developed, and a lot has changed since the early days when netlists were simulated in batch mode on time-shared mainframes. We used to wait overnight for our simulations to complete, and in the morning had to pick up our output as a thick stack of folded paper, but only if there were no syntax mistakes; if your output was only two pages long, then you had a typo in your netlist. Today we have fast workstations and interactive circuit simulation, so finding a typo takes a few seconds.

Silvaco has offered its circuit simulator, SmartSpice, for decades now, and the latest release brings two big improvements that are rather compelling to the circuit designer: parallelism and hierarchy.

Parallelism

The classic UC Berkeley SPICE circuit simulator, and many derivative simulators, read in a netlist and then build a single large matrix to solve for node voltages and branch currents. This simulation method is well understood and produces accurate results, although run times can be lengthy because of all the floating-point math.

SmartSpice has introduced a new command-line option, “-hpp”, that instead automatically breaks up the single large matrix into multiple smaller matrices that can be run in parallel on different cores, speeding up simulation dramatically. The good news for design engineers is that this step is fully automated: you don’t have to identify partitions or make any decisions, just use the new command-line option and start getting results back faster. The following chart shows that you can expect up to 14X faster results when running on 4 cores.

Even with a single core, there’s up to a 13X speed improvement versus a baseline of SmartSpice 2012.
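The intuition behind this kind of partitioning can be pictured with a toy block-diagonal system: when subcircuits don't interact, each diagonal block can be solved independently (and therefore on a separate core), and the concatenated answers match the full solve. This is only a sketch of the idea; SmartSpice's -hpp partitioner is far more sophisticated and also handles coupling between partitions.

```python
import numpy as np

def solve_blocks(blocks, rhs_parts):
    """Solve each diagonal block separately; for a block-diagonal system
    the concatenated per-block solutions equal the full-matrix solution."""
    return np.concatenate([np.linalg.solve(A, b)
                           for A, b in zip(blocks, rhs_parts)])

# Two decoupled 2x2 "subcircuits" (invented numbers, not real device stamps).
A1 = np.array([[4.0, 1.0], [1.0, 3.0]])
A2 = np.array([[2.0, 0.5], [0.5, 2.0]])
b1, b2 = np.array([1.0, 2.0]), np.array([3.0, 4.0])

# The classic single-matrix approach, assembled for comparison.
full = np.block([[A1, np.zeros((2, 2))],
                 [np.zeros((2, 2)), A2]])
x_full = np.linalg.solve(full, np.concatenate([b1, b2]))
x_blocks = solve_blocks([A1, A2], [b1, b2])  # same answer, smaller solves
```

Each call inside `solve_blocks` is independent, which is exactly what makes it trivial to farm the blocks out to multiple cores.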

Improvements have also been made to shorten the time that it takes to load in large netlists with millions of elements by using a domain decomposition method. Here’s a chart showing that you can reduce load times by almost 10X when using 12 cores and the new Domain Decomposition Solver (DDS):

Hierarchy

Designers of DRAM, SRAM, Flash and TFT circuits often use massive amounts of hierarchy in their netlists. SmartSpice will now take advantage of that hierarchy with the DDS option and an isomorphism option, so that during circuit simulation the hierarchical cells can share their results instead of all being simulated individually.

Summary

Silvaco has kept up with the times by adding new options to SmartSpice that speed up circuit simulation and expand capacity. The benefit is that you can now run more simulations in the same amount of time, giving you more confidence that first silicon will be correct.



Stop TDDB from getting through peanut butter

by Don Dingee on 01-24-2014 at 6:00 pm

There are a few dozen causes of semiconductor failure. Most can be lumped into one of three categories: material defects, process or workmanship issues, or environmental or operational overstress. Even when all those causes are carefully mitigated, one factor is limiting reliability more as geometries shrink – and it sneaks up over time.

I found an interesting document from Panasonic titled “Failure Mechanism of Semiconductor Devices”, a few years old (circa 2009) but a fairly concise read and a great handy reference on defect causes. Table 3.1 from that document nicely summarizes the causes and modes of failure.

courtesy Panasonic

The first failure mode described in detail in that document is our suspect of interest: time-dependent dielectric breakdown, or TDDB. As geometries get smaller and gate oxide films get thinner, the risk of long term failure due to deterioration of the oxide film is growing. There is a gory formula expressing TDDB in terms of electric field strength, temperature, and other variables, but one sentence in the explanation draws attention:

… dielectric breakdown occurs as the time elapses even if the electric field is much lower than the dielectric breakdown withstand voltage of the oxide film.

That is a bit disconcerting, because it suggests many designs may not have accounted for or analyzed TDDB, instead proceeding on the assumption the materials in use are well within specification. To help designers improve their odds in combatting TDDB and improving reliability, Mentor Graphics has taken a new look at spacing rules with some surprising results.
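For the curious, one commonly quoted form of that "gory formula" is the thermochemical E-model, in which time-to-failure falls exponentially with electric field and rises with activation energy over temperature. The constants below are placeholders, not fitted to any real process:

```python
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def tddb_lifetime(e_field_mv_cm, temp_k, A=1.0, gamma=1.5, ea_ev=0.7):
    """Thermochemical 'E-model' of TDDB lifetime:
        TTF = A * exp(-gamma * E) * exp(Ea / kT)
    A (scale), gamma (field acceleration, cm/MV) and Ea (activation
    energy, eV) are illustrative values only."""
    return (A * math.exp(-gamma * e_field_mv_cm)
              * math.exp(ea_ev / (K_BOLTZMANN_EV * temp_k)))

# The quoted point: lifetime keeps shrinking with field even well below
# breakdown, and shrinks further as temperature rises.
base = tddb_lifetime(3.0, 358.0)  # 3 MV/cm at 85 C
```

The model's practical message matches the Panasonic sentence above: there is no safe "cliff edge" below breakdown, only a lifetime that degrades continuously with field and temperature.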

As with so many design techniques for difficult-to-characterize problems, peanut butter approaches are often used to mitigate TDDB: visual inspection or marker layers, targeting congested areas thought to potentially pose a problem. The “fix” is usually extra checking as indicated, followed by applying a generous spread of spacing. By creating more separation in critical areas of a design, in effect derating the oxide material, TDDB can be forestalled.

This rather empirical, experience-based approach may have worked at less aggressive geometries with fewer power domains, but as complexity increases, a more analytical approach is needed to save engineering time and avoid unnecessary padding that wastes valuable space. Mentor’s approach to the problem sounds simple: analyze the nets and the voltage differences between them, and apply spacing rules accordingly.

Easier said than done. An analysis like this not only has to understand the layout and power domains, but account for physical implementation details – a job normally for SPICE, but creating enough test vectors to look at all the combinations is an extremely complicated exercise. Plus, every time a design changes, the analysis has to be rerun since the prior results are basically no longer valid, and brand new problems may crop up. Did I hear someone say “more padding”?

The Mentor approach brings the capability of Calibre PERC to bear on TDDB. By performing rule-based checks on layout-related and circuit-dependent values, problem areas can be spotted quickly and accurately, without the use of marker layers or tedious SPICE simulations. With the voltage differences across the nets accurately known, rules-based spacing checks can automatically run and apply the minimum spacing needed. The savings in time, space, and false alarms compared to the peanut butter approaches can be significant according to Mentor.
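In toy form, a voltage-dependent spacing rule looks something like the sketch below. The rule numbers and the two-net check are invented for illustration; Calibre PERC derives the net voltages from the circuit itself rather than taking them as hand annotations.

```python
def min_spacing_nm(delta_v):
    """Hypothetical piecewise rule: a base spacing, plus extra nanometers
    per volt of difference above a threshold (all numbers made up)."""
    base_nm, threshold_v, nm_per_volt = 50.0, 1.8, 20.0
    extra = max(0.0, delta_v - threshold_v) * nm_per_volt
    return base_nm + extra

def check_pair(net_a_v, net_b_v, actual_spacing_nm):
    """Return (passes, required spacing) for two adjacent nets, using the
    worst-case voltage difference between them."""
    required = min_spacing_nm(abs(net_a_v - net_b_v))
    return actual_spacing_nm >= required, required

ok, req = check_pair(0.9, 5.0, 100.0)  # 4.1 V across the gap: required ~96 nm
```

The payoff over the peanut-butter approach is that low-voltage-difference pairs keep the base spacing instead of the blanket worst-case pad, while genuinely high-stress pairs get flagged.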


Mentor’s complete white paper, authored by Matthew Hogan, describing the motivation behind voltage-aware design rule checking and its benefits in mitigating TDDB, is here:

Improve Reliability With Accurate Voltage-Aware DRC

Is TDDB sneaking up on your design, waiting to cut its life short? Did your last design take more time and use more space than it had to just to be “safe”? Has the peanut butter approach let you down lately? Calibre PERC can help replace best-guess strategies with rules-based results.

More Articles by Don Dingee…..