
TSMC OIP: Mentor’s 5 Presentations

by Paul McLellan on 09-09-2013 at 6:30 pm

At TSMC’s OIP on October 1st, Mentor Graphics has five different presentations. Collect the whole set!

11am, EDA track. Design Reliability with Calibre Smartfill and PERC. Muni Mohan of Broadcom and Jeff Wilson of Mentor. New methodologies were invented at 28nm for smart fill that meets DFM requirements (and at 20nm we may have to double-pattern some layers of fill). Also, Calibre PERC for checking subtle electrical reliability rules. Both were successfully deployed on a large 28nm tapeout at Broadcom.

2pm, EDA track. EDA-based Design for Test for 3D-IC Applications. Etienne Racine of Mentor. Lots about how to test 3D and 2.5D ICs such as TSMC's CoWoS. 3D requires bare die testing (to address the known good die problem) via JTAG. This can also be used for contactless leakage test. How to test memory on logic and other test techniques for the More than Moore world of 3D-ICs.

3pm, EDA/IP/Services track. Synopsys Laker Custom Layout and Calibre Interfaces: Putting Calibre Confidence in Your Custom Design Flow. Joseph Davis of Mentor. Synopsys’s Laker layout environment can run Calibre “on the fly” during design to speed creation of DRC-correct layouts. Especially at nodes below 28nm, where the rules are incomprehensible to mere mortals, this is almost essential for developing layout in a timely manner.

4.30pm, EDA track. Advanced Chip Assembly and Design Closure Flow Using Olympus SoC. Karthik Sundaram of nVidia and Sudhakar Jilla of Mentor. Chip assembly and design closure has become a highly iterative manual process with a huge impact on both schedule and design quality. Mentor and nVidia talk about a closure solution for TSMC processes that includes concurrent multi-mode, multi-corner optimization.

Identifying Potential N20 Design for Reliability Issues Using Calibre PERC. MH Song of TSMC and Frank Feng of Mentor. Four key reliability issues are electro-migration, stress-induced voiding, time-dependent dielectric breakdown of intermetal dielectric, and charge device model ESD. Calibre’s PERC can be used to verify compliance with these reliability rules in the TSMC N20 process.

Full details of OIP including registration are here.


A Hybrid Test Approach – Combining ATPG and BIST

by Daniel Payne on 09-09-2013 at 5:18 pm

In the world of IC testability we tend to look at various approaches as independent means to an end, namely high test coverage with the minimum amount of test time, minimum area impact, minimum timing impact, and acceptable power use. Automatic Test Pattern Generation (ATPG) is a software-based approach that can be applied to any digital chip design and requires that scan flip-flops be added to the hardware. SoCs can also be tested by using on-chip test hardware called Built-In Self Test (BIST).

At the ITC conference this week there’s news from Mentor Graphics about a hybrid approach that combines both ATPG and BIST techniques. Ron Press and Vidya Neerkundar wrote a seven-page white paper, and I’ll summarize the concepts in this blog. Read the complete white paper here or visit Mentor at ITC in booth #211 and ask for Ron or Vidya.


Ron Press

ATPG

The basic flow for using ATPG and compression is shown below:

The ATPG software is first run on the digital design, and patterns are created that will be applied to the chip's primary input pins and read on the output pins. To economize on tester time, compression is used on both the incoming and outgoing patterns. Extra hardware is added to the chip for decompression, scan chains and compression. Benefits of using compression ATPG are:

  • High fault coverage for stuck-at and other faults (e.g. resistive, transistor-level)
  • Low test logic overhead
  • Low power use
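To make the idea concrete, here is a deliberately tiny sketch in Python of what ATPG does: find an input pattern on which a fault-free and a faulty copy of the circuit disagree at an output. The two-gate circuit is invented for illustration, and real tools use path sensitization and fault collapsing rather than this brute-force search.

```python
from itertools import product

# Invented two-gate netlist: c = AND(a, b); d = XOR(c, a).
# Primary inputs a, b; primary output d. A stuck-at fault forces
# one net to a constant 0 or 1.

def simulate(a, b, fault=None):
    """Evaluate the circuit; `fault` is a (net_name, stuck_value) pair or None."""
    def net(name, value):
        return fault[1] if fault and fault[0] == name else value
    c = net("c", a & b)
    d = net("d", c ^ a)
    return d

def atpg(fault):
    """Search for an input pattern on which the good and faulty circuits
    disagree at the output (real ATPG uses path sensitization, not brute force)."""
    for a, b in product([0, 1], repeat=2):
        if simulate(a, b) != simulate(a, b, fault):
            return (a, b)
    return None  # the fault is undetectable at the outputs

print(atpg(("c", 0)))  # (1, 1): a=b=1 both activates the fault and propagates it
```

The returned pattern is exactly what the tester would apply through the scan chains; compression then packs many such patterns into far fewer tester cycles.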

Logic BIST

With Logic BIST the chip itself includes the test electronics and instrumentation.


The test electronics is shown as a random pattern generator, in blue on the left. The instrumentation that measures results, called a signature calculator, is in blue on the right. Scan chains are shown in orange, just as in the previous diagram for ATPG. The random pattern generator creates its own stimulus, which is applied to the block under test. The response to the stimulus is collected by the signature calculator to determine whether the logic is correct or defective. Benefits of using a logic BIST approach are:

  • No need for an external tester
  • Supports field-testing
  • Can detect un-modeled defects
  • Supports design re-use across multiple SoC designs
  • Reduces tester times because you’re not limited to reading external data
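The two blue blocks in the diagram are easy to model in software. Below is a minimal Python sketch, with a 4-bit maximal-length LFSR standing in for the pattern generator and an LFSR-based MISR for the signature calculator; the "block under test" and the injected defect are invented for illustration.

```python
WIDTH = 4
MASK = (1 << WIDTH) - 1

def lfsr_next(state, taps=(3, 2)):
    """One step of a 4-bit maximal-length Fibonacci LFSR: the on-chip
    pseudo-random pattern generator on the left of the BIST diagram."""
    fb = 0
    for t in taps:
        fb ^= (state >> t) & 1
    return ((state << 1) | fb) & MASK

def misr_update(sig, response):
    """Signature calculator: a multiple-input signature register folds
    each response word into a running signature."""
    return lfsr_next(sig) ^ response

def run_bist(logic, n_patterns=15, seed=1):
    """Generate stimulus on-chip, apply it to the block under test,
    and compact all responses into a single signature."""
    state, sig = seed, 0
    for _ in range(n_patterns):
        state = lfsr_next(state)
        sig = misr_update(sig, logic(state))
    return sig

good = lambda x: (x * 5 + 3) & MASK            # block under test
bad = lambda x: (x * 5 + 3) & MASK & 0b0111    # defect: output bit 3 stuck at 0

print(run_bist(good) != run_bist(bad))  # True: the defect corrupts the signature
```

Only the final signature needs comparing against a golden value, which is why no external tester (and no external pattern data) is required.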

Comparing ATPG and Logic BIST

Let’s take a quick comparison overview of ATPG and Logic BIST to better understand where each would offer the greatest test benefits.

| Criterion | ATPG | Logic BIST |
|---|---|---|
| Black-box modules | OK | Not accepted because of potential unknown (X) values |
| Test points | Fewer required | May be required for some faults |
| Multi-cycle paths | Supported | Added hardware required |
| Cross-domain clocking | Easy support | Unknown states cannot be captured |
| ECO support | Easy | Masking ECO flops adds hardware |
| Diagnostic resolution | Good | Good |

The Hybrid Approach

There is a way to combine test hardware so that any block can have both compression ATPG and Logic BIST approaches together:

Diving into a bit more detail reveals some of the test logic in the hybrid approach:

With a hybrid implementation, ATPG only needs to target faults that are not already detected by Logic BIST, saving up to 30% or so of test time. You can use this hybrid methodology in either a top-down or bottom-up flow.
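A toy model of that top-up idea, in Python with invented fault data: pseudo-random (BIST-like) patterns catch the random-testable faults, and a deterministic phase (a greedy cover standing in for real ATPG) targets only what remains.

```python
import random

# Toy model: each fault is represented by the set of 8-bit input patterns
# that detect it. In a real flow this comes from fault simulation of the
# netlist; the data here is invented for illustration. Some faults are
# random-resistant (1-2 detecting patterns), some random-testable (40).
random.seed(7)
N_PATTERNS = 256
faults = {
    f: {random.randrange(N_PATTERNS)
        for _ in range(random.choice([1, 2, 40]))}
    for f in range(50)
}

def bist_phase(n_random=64, seed=1):
    """Logic BIST: apply pseudo-random patterns, return the faults caught."""
    rng = random.Random(seed)
    applied = {rng.randrange(N_PATTERNS) for _ in range(n_random)}
    return {f for f, det in faults.items() if det & applied}

def atpg_topup(remaining):
    """Compression ATPG phase: deterministic patterns only for the faults
    BIST missed (greedy set cover, standing in for real ATPG)."""
    remaining = set(remaining)
    patterns = []
    while remaining:
        best = max(range(N_PATTERNS),
                   key=lambda p: sum(p in faults[f] for f in remaining))
        patterns.append(best)
        remaining -= {f for f in remaining if best in faults[f]}
    return patterns

detected = bist_phase()
topup = atpg_topup(set(faults) - detected)
print(f"BIST caught {len(detected)}/50 faults; "
      f"{len(topup)} deterministic patterns cover the rest")
```

The deterministic pattern count is what actually consumes tester time, which is where the quoted savings come from.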

Hybrid Benefits

Adopting a hybrid test approach can provide:

  • Test/retest SoCs during burn-in and in-system
  • Very low DPM (Defects Per Million)
  • Checks for many defects

    • Small delay
    • Timing-aware
    • Cell-aware
    • Path delay
  • Lower test hardware area through merged logic

Summary

It may be appropriate to consider using both compression ATPG and Logic BIST in your next SoC design. Mentor Graphics offers the Tessent product line, which supports hybrid test. Read the complete white paper here; registration is required to download the PDF document.



Verifying Hardware at the C-level

by Paul McLellan on 09-09-2013 at 2:25 pm

As more people adopt high-level synthesis (HLS) they start to worry about what the best design flow is. This is especially so for verification, since it forms such a large part of the effort on a modern SoC. The more people rely on HLS to produce their RTL from C, the more they realize they had better do a good job of verifying the C code in the first place. Waiting until RTL is inefficient, not to mention late in the flow. Issues are hard to locate and debug, and you can’t just patch the RTL even when you find them (at least, you are asking for big problems if you do). Instead you need to return to the “golden” representation, which is the C, find the corresponding part of the code, and fix it there.

Better by far is to do verification at the C level. If the C is correct, then HLS should produce RTL that is correct-by-construction. Of course, “trust but verify” is a good maxim for IC design; otherwise we wouldn’t need DRCs after place and route. Similarly, you shouldn’t assume that just because your C is good, the RTL doesn’t need any verification at all.

Unfortunately, tools for C and SystemC verification are rudimentary compared with the tools we have for RTL (software engineering methodology has changed a lot less than IC design methodology over the last couple of decades, but that is a story for another day). RTL engineers have code coverage, functional coverage, assertion-based verification, directed random and specialized verification languages. C…not so much.

But we can borrow many of these concepts and replicate them in the C and SystemC world. By inserting assertions and properties into your C models you can start to do functional coverage on the source code and formal property checking on the C-level design. An HLS tool like Catapult can actually make sense of these assertions and propagate them through into the RTL.

Catapult, along with SLEC, allows you to do this sort of analysis now. With SLEC you can check for defects in the C/SystemC source, do property checking on assertions in your C-level code, and apply formal techniques to prove whether an assertion always holds. Catapult will then embed all these assertions into the RTL, automatically instantiating them as Verilog assertions that do the same job at the RTL level as the originals did at the C level. It also takes C cover points and synthesizes them into Verilog, so that when you run Verilog simulation you also get functional coverage.
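The flavor of C-level assertions and cover points can be sketched as follows; this is illustrative Python rather than the actual C/SystemC the flow uses, and the saturating adder is an invented example.

```python
# Coverage counters stand in for C-level cover points.
coverage = {"saturated_high": 0, "saturated_low": 0, "in_range": 0}

def sat_add(a, b, lo=-128, hi=127):
    """Algorithmic model of a saturating adder with an embedded property
    (assert) and cover points, in the style one would write before HLS."""
    result = max(lo, min(hi, a + b))
    # property: the output must always lie within the saturation bounds
    assert lo <= result <= hi, "saturation property violated"
    # cover points: record which behaviors the stimulus has exercised
    if result == hi and a + b > hi:
        coverage["saturated_high"] += 1
    elif result == lo and a + b < lo:
        coverage["saturated_low"] += 1
    else:
        coverage["in_range"] += 1
    return result

for a, b in [(100, 50), (-100, -60), (3, 4)]:
    sat_add(a, b)
print(all(n > 0 for n in coverage.values()))  # True: all three behaviors covered
```

The point of the flow described above is that the property and the cover points survive synthesis: the same checks run again, for free, in RTL simulation.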

This hybrid approach leverages all the methodology that the RTL experts have put together, but pulls it up to do more at the C level. After all, the better the C, the better the RTL (and the gates…and the layout).

Calypto has a couple of upcoming webinars: one for India/Europe on another topic and one for the US on verification.

Dynamic/leakage power reduction in memories. Timed for Europe and India at 10am UK time, 11am central Europe, 12pm Finland and 2.30pm India. September 12th (this coming Thursday). Register here.

How to Maximize the Verification Benefit of High Level Synthesis with SystemC. 10am Pacific on October 22nd. Register here.


TSMC OIP: Soft Error Rate Analysis

by Paul McLellan on 09-09-2013 at 1:34 pm

Increasingly, end users in some markets are requiring soft error rate (SER) data. This is a measure of how resistant the design (library, chip, system) is to single event effects (SEE). These manifest themselves as SEU (upset), SET (transient), SEL (latch-up), SEFI (functional interrupt).

There are two main sources that cause these SEE:

  • natural atmospheric neutrons
  • alpha particles

Natural neutrons (cosmic rays) have a spectrum of energies, which affects how easily they can upset an integrated circuit. Alpha particles are stopped by almost anything, so they can only affect a chip if they contaminate the packaging materials, solder bumps, etc.
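SER is usually quoted in FIT (failures per 10⁹ device-hours), and the contributions from the different sources and structures simply add. A quick sketch, with numbers invented purely for illustration (real values come from accelerated testing and tools such as TFIT):

```python
# Hypothetical per-source SER contributions in FIT (failures per 1e9
# device-hours). The numbers are invented; real values come from
# accelerated testing and characterization.
ser_fit = {
    "neutron, SRAM": 400.0,
    "neutron, flops": 120.0,
    "alpha, SRAM": 60.0,
    "alpha, flops": 20.0,
}

total_fit = sum(ser_fit.values())  # independent contributions simply add
mtbf_hours = 1e9 / total_fit       # mean time between upsets
mtbf_years = mtbf_hours / (24 * 365)

print(f"{total_fit:.0f} FIT -> one upset every ~{mtbf_years:.0f} years per chip")
```

One upset every couple of hundred years sounds harmless for one phone, but a telecom rack with thousands of such chips sees upsets weekly, which is why the end-users mentioned below get unhappy.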


More and more partners need to get involved in this reliability assessment process. Historically, it has started when end-users (e.g. telecom companies) became unhappy. This means that equipment vendors (routers, base-stations) need to do testing and qualification, and end up with requirements for process, cell design (especially flops and memories), and the ASIC/SoC level.

Large IDMs such as Intel and IBM have traditionally done a lot of the work in this area internally, but the fabless ecosystem relies on specialist companies such as iROCtech. They increasingly work with all the partners in the design chain since it is a bit like a real chain. You can’t measure the strength of a real chain without measuring the strength of all the links.

So currently there are multi-partner SER efforts:

  • foundries: support SER analysis through technology specific SER data such as TFIT databases
  • IP and IC suppliers: provide SER data and recommendations
  • SER solution providers: SEE tools and services for improving design reliability, accelerated testing

Reliability has gone through several eras:

  • “reactive”: end users encounter issues and have to deal with them (product recalls, software hot-fixes, etc.)
  • “awareness”: system integrators pre-emptively acknowledge the issue (system and component testing, reliability specifications)
  • “exchanges”: requirements and targets are propagated up and down the supply chain, with SER targets for components, IP, etc.
  • “proactive”: objective requirements drive the design and manufacturing flow towards SER management and optimization

One reason that many design groups have been able to ignore these reliability issues is that they depend critically on the end market. The big sexy application processors for smartphones in the latest process do not have major reliability issues: they will crash from time to time due to software bugs and you just reboot, your life is not threatened. And your phone only needs to last a couple of years before you will throw it out and upgrade.

At the other end of the scale are automotive and implantable medical devices (such as pacemakers). They are safety critical and they are expected to last for 20 years without degrading.


iRocTech has been working with TSMC for many years and has even presented several joint papers on various aspects of SER analysis and reliability assessment.

iRocTech will be presenting at the upcoming TSMC OIP Symposium on October 1st. To register go here. To learn more about TFIT and SOCFIT, iRocTech’s tools for analyzing reliability of cells and blocks/chips respectively, go here.


Rapid Yield Optimization at 22nm Through Virtual Fab

by Pawan Fangaria on 09-09-2013 at 10:00 am

Remember? During DAC2013 I talked about a new kind of innovation: A Virtual Fabrication Platform, SEMulator3D, developed by COVENTOR. Now, to my pleasant surprise, there is something to report on the proven results from this platform. IBM, in association with COVENTOR, has successfully implemented a 3D Virtual Fabrication methodology to rapidly improve the yield of high performance 22nm SOI CMOS technology.

The CTO-Semiconductor of COVENTOR, Dr. David M. Fried was in attendance while IBM’s Ben Cipriany presented an interesting paper on this work at The International Conference on Simulation of Semiconductor Processes and Devices (SISPAD 2013). The paper is available at the link “IBM, Coventor present 22nm Virtual Fabrication Success at SISPAD” at the COVENTOR website.

Dr. Fried leads COVENTOR’s technology strategy, partnerships and external relationships. His expertise touches upon areas such as Silicon-on-Insulator (SOI), FinFETs, memory scaling, strained silicon, and process variability. He is a well-respected technologist in the semiconductor industry, with 45 patents to his credit and a notable 14-year career with IBM, where his latest position was “22nm Chief Technologist” for IBM’s Systems and Technology Group.

I had a nice opportunity to speak with Dr. Fried by phone during the conference in Glasgow, Scotland. From the conversation, all I can say is that SEMulator3D is a truly innovative and fascinating technology that will greatly ease the complex design and manufacturing process, not only at 22nm but also at the lower nodes to come; specifically in analyzing and predicting the key phenomenon of variability at these nodes, quickly and accurately.

The Conversation –

Q: I guess the first introduction of SEMulator3D in wide public media was in DAC 2013?

Actually, COVENTOR is a 17-year-old company, with experience in MEMS software, device simulation, and design-enabling tools. It has many years of 3D structural modelling experience from the MEMS field, which has been migrated over the last 7 years into the semiconductor space after encouragement by Intel, one of our original investors. I moved to COVENTOR about a year ago, and am now driving its technology roadmap and expanding SEMulator3D usage across the semiconductor industry. In 2012, we made a very important update release of SEMulator3D. But in 2013, our newest release, developed in significant collaboration with IBM, adds a lot of automation and advanced modelling, and really takes a quantum leap forward in virtual fabrication.

Q: This is a new concept of Virtual Fabrication Platform. How does it relate to the design flow at Fabless companies?

It provides a flexible process platform for Fabless companies to explore the physical embodiment of their specific design IP; otherwise they do not know what happens at the process end. In other words, it is a “Window into the Fab” for Fabless companies. Variability mainly depends on physical parameters. Because SEMulator3D is 10-100 times faster than other process modelling tools, it can provide many more data points from physics-driven, highly predictive and accurate models, giving a more rigorous view of variability effects.

Q: Let’s see some of the aspects of this paper presented at SISPAD. It’s a meticulous piece of work: the 3D model was matched to inline measurement and metrology data as well as 2D SEM/TEM cross-section images of a nominal transistor. How did you get the idea to model this way, so accurately?

In any modelling situation, first of all, we need to gain faith by experimenting with the past results and demonstrating the perfect match with the existing data, calibrate and then predict. The modelling engine is physics-driven, so it will correctly predict many processes and effects without any calibration. For this detailed study, the nominal model was validated against a significant amount of inline measurement and metrology, and also TEMs and SEMs and iterations of calibration were performed. After that, thousands of cases of variation were performed and the results were assembled and analysed quantitatively. SEMulator3D 2013 adds the first production availability of our Expeditor tool which enables automated spreadsheet-based process variation study and analysis. Our expanded Virtual Metrology capabilities can automatically collect and analyse the structure during the minutes of modelling rather than the months these experiments could spend in the fab.
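The spreadsheet-driven variation study Dr. Fried describes is, at heart, a Monte Carlo over process parameters. A highly simplified Python sketch of the idea (the parameters and sigma values are invented, and this is in no way SEMulator3D's physics-driven modelling):

```python
import random
import statistics

random.seed(42)
NOMINAL_L = 22.0  # nm, nominal drawn gate length (illustrative number)

def sample_length():
    """One virtual experiment: perturb litho CD and etch bias independently.
    The 1-sigma values below are invented for illustration."""
    litho = random.gauss(0.0, 0.8)  # lithography CD variation, nm
    etch = random.gauss(0.0, 0.5)   # etch bias variation, nm
    return NOMINAL_L + litho + etch

runs = [sample_length() for _ in range(10_000)]
mean = statistics.mean(runs)
sigma = statistics.stdev(runs)
# independent sources add in quadrature: sqrt(0.8^2 + 0.5^2) ~ 0.94 nm
print(f"L = {mean:.2f} +/- {sigma:.2f} nm")
```

Ten thousand virtual wafers take seconds here; the point of the interview is that a physics-driven structural model makes each "sample" meaningful while still being minutes, not months, per experiment.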

Q: It is observed that major process variation occurs at the n-p interface due to spatial exposure and overlay errors from the multiple lithography levels whose edges define that interface. So, how difficult is it when multiple devices in a large layout are involved?

Right, the nfet-pfet interface is used in this work as a vehicle for the study and can easily be scaled to a larger design space. Expeditor enables massively parallel execution, so the same type of modelling can be applied to many different design elements, such as cells in a design library. That’s why there is really no comparison between SEMulator3D and any TCAD tool out there. SEMulator3D is extremely fast, applicable to a larger physical design space, and provides yield-focused modelling.

Q: How large a design can SEMulator3D handle to model and evaluate for process window tolerance?

There is effectively no limitation on design area. What matters is modelling time. SEMulator3D can do the typical job in minutes rather than weeks. The concept is to choose the smallest area of a design which can be modelled to solve a specific process integration challenge, apply process description and define resolution of models. And then parallelize the job. Some users are investigating large memory arrays, some users are investigating single transistor effects. Most of our typical users are applying this modelling platform at the circuit-scale.

Q: So, how do you choose that smallest sample, process description and resolution?

In our more collaborative engagements, like the one in this paper, we work with the customer to define these aspects and optimize the modelling exercise. The customer is always interested in their specific design IP, but we can help optimize the area selection and parallelization. The process description is where we do most of our collaboration. Each customer’s process is different, and developed independently, but we spend time working with “Modelling Best Known Methods” and many tricks of the trade we’ve learned in virtual fabrication. The modelling resolution is flexible based on the effects being studied and can be tailored to optimize modelling performance based on the granularity of the effects being analysed.

Q: What kind of different design structures have been modelled using SEMulator3D?

There are many, to name a few – DRAM, SRAM, logic, Flash memory, ESD circuitry, Analog circuits, TSVs, multi-wafer 3D integration structures, and so on.

Q: We know IBM is already working closely with COVENTOR. Who are the other customers using SEMulator3D?

Our primary user base is in the Foundries, IDMs and Memory manufacturers. We are engaged with the top 9 worldwide semiconductor manufacturers and also several Fabless design houses. With Cadence we have provided a Virtuoso Layout Editor link to SEMulator3D. SEMulator3D also has its own full-function layout editor which serves as a conduit to any existing GDSII.

Q: One last question. From this conversation, what I perceive is that this tool has a bright future and can provide a solid foundation for designs at advanced process nodes. I would like to know your view on what’s in store for the future?

One of the things I spend a lot of time thinking about is 3D DRC. You know, most yield limitations come from a physical problem that is 3D in nature. That single 3D problem gets exploded into 100s of 2D rules to provide a DRC checking infrastructure. In many cases, it’s becoming cumbersome and inefficient to do it this way, especially in the development phase where these rules are just being explored. That’s an area to work on and optimize by doing 3D searches on structural models. There are some structures that will be better checked in 3D rather than with a massive set of translated 2D rules. So, in the future 3D Design Rule Checking will be a reality, and we are working hard on that.

This was a very exciting and passionate conversation with Dr. Fried, through which I could enhance my knowledge about process modelling. And I can see that SEMulator3D can provide a competitive advantage to design companies as well, because it facilitates their parallel interaction with the fab in a virtual world, which can be used to mitigate manufacturing-related issues at the design stage. A novel concept!


Searching for an ADC (or DAC) at 28 nm may be such a burden…
by Eric Esteve on 09-09-2013 at 9:13 am

If you have ever sent a Request For Quotation (RFQ) for an ASIC including a processor IP core, memories, interface IP like PCIe, SATA or USB, and analog functions like an Analog-to-Digital Converter (ADC) or Digital-to-Analog Converter (DAC), you will have discovered, as I did a couple of years ago, that these analog functions may be the key pieces limiting your choice of technologies and foundries. Every System-on-Chip (SoC) will use a processor IP core (and most of the time it will be an ARM core, by the way), so you will realize that the processor is not a problem, as it will be available in most technologies/foundries, as will any single-port or dual-port memory, through memory compilers.

But, especially if you target advanced nodes like 28 nm, being able to find the right ADC or DAC may become a real issue. If you can’t find it in the technology you have to target, for good reasons like benefiting from the smallest possible area or device cost, or making sure that the power consumption stays within the allocated budget, the only possibility will be to order a custom design… not really the best approach to minimize risk and time-to-market (TTM)! When a well-known and respected IP vendor like Synopsys announces that they have scaled their ADC architecture to support mobile and multimedia SoCs at 28 nm and beyond, that’s the type of news which makes your life easier when sending an ASIC RFQ targeting TSMC 28HPM. If that’s your case, it makes sense to attend the next webinar:

Scaling ADC Architectures for Mobile & Multimedia SoCs at 28-nm and Beyond

If you are interested, you should quickly register, as this webinar will be held on Tuesday, the 10th of September…


Overview:
Data converters are at the center of every analog interface to systems-on-chips (SoCs). As SoCs move into 28-nm and smaller advanced process nodes, the challenges of integrating analog interfaces change due to the process characteristics, reduced supply voltages, and analog blocks’ area requirements. Synopsys is offering a complete portfolio of analog interfaces at 28-nm that includes high-speed ADCs and DACs, PLLs, and general-purpose ADCs and DACs, implemented in a standard CMOS process with no additional process options.

I have noticed a few impressive features:

  • Parallel SAR 12-bit ADC architecture implementations for up to 320 MSPS sampling rates
  • Parallel assembly allows for greater architectural flexibility for specific applications
  • Very high performance 12-bit 600 MSPS DAC
  • Power consumption reduction of up to 3X and area use reduction of up to 6X over previous generations

This webinar will:

  • Compare the prevailing analog-to-digital converter (ADC) architectures in terms of speed, resolution, area, and power consumption trade-offs
  • Describe the benefits of the successive-approximation register (SAR)-based ADC architecture for the medium and high speed ADCs
  • Describe how implementations of the SAR ADC architecture can reduce power consumption and area usage for 28-nm process technologies
  • Present the 28-nm DesignWare® Analog ADCs, which use the SAR-based architecture, and explain how they achieve 3x lower power consumption and 6x smaller area compared to previous generations
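The SAR architecture the webinar covers is essentially a binary search against an internal DAC. A minimal Python sketch of the conversion loop, idealized (no noise, comparator offset, or settling effects):

```python
def sar_adc(vin, vref=1.0, bits=12):
    """Idealized successive-approximation conversion: binary-search vin
    against an internal DAC, resolving one bit per clock cycle, MSB first."""
    code = 0
    for b in reversed(range(bits)):
        trial = code | (1 << b)                 # tentatively set this bit
        if trial * vref / (1 << bits) <= vin:   # comparator: DAC(trial) vs vin
            code = trial                        # comparator said keep it
    return code

print(sar_adc(0.5))  # 2048: mid-scale code for half the reference voltage
```

One comparator decision per bit is what makes SAR so area- and power-efficient, and it is why the parallel (interleaved) SAR arrangement mentioned in the feature list is the natural way to reach higher sample rates.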


Who should attend:
SoC Design Engineers, Managers and System Architects

Presenter:

Carlos Azeredo-Leme, Senior Staff Engineer, DesignWare Analog IP, Synopsys
Carlos Azeredo-Leme has been a senior staff engineer for DesignWare Analog IP at Synopsys since 2009. Prior to joining Synopsys, he was a co-founder and member of the Board of Directors of Chipidea Microelectronics, founded in 1993, where he held the position of Chief Technical Officer. There, he was responsible for complete mixed-signal solutions, analog front-ends and RF. He worked in the areas of audio, power management, cellular and wireless communications, and RF transceivers. Since 1994 he has held a position as Professor at the Technical University of Lisbon (UTL-IST) in Portugal. His research interests are in analog and mixed-signal design, focusing on low-power and low-voltage operation. Carlos holds an MSEE from the Technical University of Lisbon (UTL-IST) in Portugal and a Ph.D. from ETH Zurich in Switzerland.

Eric Esteve from IPNEST



Test Compression and Hierarchy at ITC

by Daniel Payne on 09-09-2013 at 8:00 am

The International Test Conference (ITC) is this week in Anaheim and I’ve just learned what’s new at Synopsys with test compression and hierarchy. Last week I spoke with Robert Ruiz and Sandeep Kaushik of Synopsys by phone to get the latest scoop. There are two big product announcements today that cover:

Continue reading “Test Compression and Hierarchy at ITC”


SemiWiki Analytics Exposed 2013

by Daniel Nenni on 09-08-2013 at 7:15 pm

One of the benefits of blogging is that you put a stake in the ground to look back on and see how things have changed over the years. You can also keep a win/loss record of your opinions, observations, and experiences. Last year I posted the “SemiWiki Analytics Exposed 2012” blog, so here is a follow-up to that.

The Semiconductor Wiki Project, the premier semiconductor collaboration site, is a growing online community of professionals involved with the semiconductor design and manufacturing ecosystem. Since going online January 1st, 2011 to June 30th 2012 more than 300,000 unique visitors have landed at www.SemiWiki.com viewing more than 2.5M pages of blogs, wikis, and forum posts. WOW!

Wow indeed. We have now had more than 700,000 unique visitors and they just keep on coming. Original content and crowdsourcing wins every time, absolutely. There is a plethora of website rating companies but we use Alexa (owned by Amazon.com) as it has proven to have the most comprehensive and accurate analysis in comparison to Google and internal analytics, absolutely.

Rankings from the Alexa website:

| Site | 2012 Rank | 2013 Rank | Change |
|---|---|---|---|
| EETimes | 24,870 | 28,584 | -13% |
| SemiWiki | 431,594 | 249,989 | +73% |
| Design And Reuse | 312,684 | 320,258 | -2% |
| EDAcafe | 441,196 | 388,857 | +9% |
| ChipEstimate | 861,483 | 1,169,187 | -26% |
| DeepChip | 1,175,172 | 1,849,759 | -36% |
| SoCCentral | 3,765,891 | 2,529,697 | +14% |

Yes, a 73% increase is a big number, but let me tell you why. First, you may have noticed we installed a new version of SemiWiki this summer, optimized for speed and search engine traffic. My son, the site co-developer, has the summer off (he teaches math) so we kept him busy with site optimization. Second, we enhanced our analytics and internal reporting so we know more about what is happening inside SemiWiki. A good friend of mine is a Business Intelligence (BI) expert for a Fortune 100 company and shares his expertise with me in exchange for food and drink. Okay, mostly drink. This enhanced data mining and reporting capability gives our subscribers a clear view of what does and doesn’t work, so we can manage the growth of their landing pages and SemiWiki as a whole.

The Semiconductor Wiki Project, the premier semiconductor collaboration site, is a growing online community of professionals involved with the semiconductor design and manufacturing ecosystem. Since going online January 1st, 2011 more than 700,000 unique visitors have been recorded at www.SemiWiki.com viewing more than 5M pages of blogs, wikis, and forum posts. WOW!

I would tell you more but the new SemiWiki 2.0 software release is so amazing it would sound like I was bragging and I’m much too humble to brag.



Xilinx At 28nm: Keeping Power Down

by Paul McLellan on 09-08-2013 at 2:26 pm

Almost without exception these days, semiconductor products face strict power and thermal budgets. Of course there are many issues with dynamic power but one big area that has been getting increasingly problematic is static power. For various technical reasons we can no longer reduce the voltage as much as we would like from one process generation to the next which means that transistors do not turn off completely. This is an especially serious problem with FPGAs since they have a huge number of transistors, many of which are not actually active in the design at all, just due to the nature of how an FPGA is programmed.

Power dissipation is related to thermal issues because the heat to be dissipated rises with power. And temperature is related to reliability: every 10°C increase in temperature doubles the failure rate. There are all sorts of complex and costly static power management schemes, but ideally the FPGA would simply have lower static power. TSMC, which manufactures Xilinx FPGAs, has two 28nm processes, 28HP and 28HPL. The 28HPL process has some big advantages over 28HP:

  • wider range of operating voltages (not possible with 28HP)
  • high-performance mode with 1V operation leading to 28HP-equivalent performance at lower static power
  • low-power mode at 0.9V operation, with 65% lower static power than 28HP. Dynamic power reduced by 20%.
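The 10°C doubling rule quoted above compounds exponentially, which is why even modest junction-temperature savings from lower static power matter so much. A one-function sketch:

```python
def thermal_acceleration(delta_t_c):
    """Failure-rate multiplier from the rule of thumb in the text:
    the failure rate doubles for every 10 degC rise in temperature."""
    return 2 ** (delta_t_c / 10)

print(thermal_acceleration(30))  # 8.0: running 30 degC hotter means 8x the failure rate
```

So an FPGA whose junction runs 20-30°C cooler in the same ambient is not marginally more reliable, it is 4-8x more reliable by this rule of thumb.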

The 28HPL process is used to manufacture the Xilinx 7 series FPGAs. The net result is that competing FPGAs built in the 28HP process have no performance advantage over the 7 series, and some of the competing products come with a severe penalty of over twice the static power (along with the associated thermal and reliability issues). In fact, Xilinx’s primary competitor (I think we all know who that is) has had to raise their core voltage specification, resulting in a 20% increase in static power, and their power estimator has gradually crept the power up even more. Xilinx has invested resources so that the power specifications published when a product is first announced do not subsequently need to be revised, meaning that end-users can plan their board design confident that they do not need to worry about inaccurate power estimates.


The low power and associated thermal benefits mean that Xilinx FPGAs can operate at higher ambient temperature without the FPGA itself getting too hot. The graph below shows junction temperature against ambient temperature for series 7 standard and low power devices compared to the competitor’s equivalent array. This is very important since the ability to operate at 50-60°C ambient temperature while keeping the junction temperature at or below 100°C is essential in many applications, such as wired communications designs in rack-style environments such as datacenters. It is no secret that routers and base-stations are one of the largest end-markets for FPGAs.


But wait, there’s more, as the old infomercials said.

Not specific to the 7 series, the Vivado Design Suite performs detailed power estimation at all stages of the design, and has an innovative power optimization engine that identifies excessively power-hungry paths and reduces their consumption.

Reducing dynamic power depends on reducing voltage (which the 28HPL allows), reducing frequency (usually not an option since that is set by the system performance required) or reducing capacitance. By optimizing the arrays for dense area, meaning shorter wires with lower capacitance, power is further reduced compared to Xilinx’s competitors.
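The voltage lever is the big one because dynamic power scales as C × V² × f; the 1.0V-to-0.9V drop alone accounts for most of the 20% dynamic power reduction quoted earlier. A quick check (the capacitance and frequency numbers are arbitrary, since only the ratio matters):

```python
def dynamic_power(c_farads, v_volts, f_hz, activity=1.0):
    """Classic switching-power estimate: P = a * C * V^2 * f."""
    return activity * c_farads * v_volts ** 2 * f_hz

base = dynamic_power(1e-9, 1.0, 300e6)  # 1 nF of switched capacitance at 1.0 V
lowv = dynamic_power(1e-9, 0.9, 300e6)  # the same design at 0.9 V
print(f"{(1 - lowv / base) * 100:.0f}% dynamic power saving")  # 19%
```

Frequency is usually fixed by the system, which leaves voltage (quadratic payoff) and capacitance (the shorter-wires argument above) as the practical knobs.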

And there is even more than that. But to find out you need to download the Xilinx white paper Leveraging Power Leadership at 28nm with Xilinx 7 Series FPGAs available here.


Asian Embargoes

by Paul McLellan on 09-07-2013 at 8:00 pm

[This blog embargoed until 10am China time]

An interesting thing happened to me this week. I had two press briefings. No, that wasn’t the interesting thing and if you have sat through a press briefing you will not regard them as recreation. I do it for you, Semiwiki readers. Even though, as this week, the briefings are given by friends. But there was something different about these two briefings.

On Monday evening, which is Tuesday morning in Asia, you will see the blogs. This is a sort of teaser. However, this is the first time, and I’m sure not the last, where the embargo date is early in Asia as opposed to early in the US. A typical embargo is 5am in California, 8am in New York. Before ‘the’ markets open, which are on Wall Street. Nowhere else really matters.

One of these announcements is about strategy in Asia and one coincides with a user group meeting in Asia so you can’t read too much into it. But on the same day I got two announcements with Asian embargo times. That’s never happened before. In fact I can’t remember a press release with an Asian embargo before.

Like traffic down 101 (sorry, non-Silicon Valley readers), something easy to observe may be a proxy for the health of the economy in general, or a big trend that is just getting started. Or it may be nothing. My other proxy for business in the valley is Birks in Santa Clara: if you have no idea where that is, it’s in the pink towers beside 101 at Great America Parkway. If you can get a reservation within a couple of days, Silicon Valley is not in good shape; if you need to wait two weeks, the valley is booming. Right now, on Friday as I write, you can get a reservation for Monday, for 2 people but not 4. Half-booming. During the downturn, both Birks and Parcel 104 added lots of cheap items to their menu. Not good. But you can’t get a hamburger any more. If you want the best hamburger going, go to Zuni Cafe on Market near where I live in San Francisco. But it is only available for lunch or after 10pm. The perfect finish to an evening.