TSMC’s 16FinFET and 3D IC Reference Flows
by Paul McLellan on 09-17-2013 at 2:01 am

Today TSMC announced three reference flows that they have been working on along with various EDA vendors (and ARM and perhaps other IP suppliers). The three new flows are:

  • 16FinFET Digital Reference Flow. Obviously this has full support for non-planar FinFET transistors including extraction, quantized pitch placement, low-vdd operation, electromigration and power management.
  • 16FinFET Custom Design Reference Flow. This supports the non-digital stuff. It allows full custom transistor-level design and verification, including analog, mixed-signal, custom digital and memory.
  • 3D IC Reference Flow, addressing vertical integration with true 3D stacking using TSVs through active silicon and/or interposers.


There have been multiple silicon test vehicles. The digital reference flow uses an ARM Cortex-A15 multicore processor as a validation vehicle and helps designers understand the challenges of full 3D RC modeling and quantized transistor widths, which are the big “new” gotchas in the FinFET world. The flow also includes methodology and tools for improving PPA at 16nm, including low-voltage operation analysis, high-resistance-layer routing optimization, and correlation between path-based and graph-based analysis to improve timing closure.
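
For anyone who hasn't hit this in practice: graph-based analysis (GBA) propagates a single worst-case arrival/slew pair through each node of the timing graph, while path-based analysis (PBA) re-times individual paths using each path's own slews, removing some of GBA's built-in pessimism. Here is a toy sketch of the difference in Python; the linear delay model and all numbers are invented for illustration and have nothing to do with TSMC's actual flow or any signoff tool.

```python
# Toy illustration of GBA pessimism versus PBA (hypothetical delay model).

def gate_delay(input_slew):
    # Made-up delay model: base delay plus a slew-dependent term.
    return 10.0 + 0.5 * input_slew

# Two timing arcs converge on a merge gate G with different arrivals/slews.
paths_into_g = {
    "fast_path": {"arrival": 100.0, "slew": 4.0},
    "slow_path": {"arrival": 95.0,  "slew": 20.0},
}

# GBA: propagate one worst-case (arrival, slew) pair through G,
# even though the worst arrival and worst slew come from different paths.
gba_arrival = max(p["arrival"] for p in paths_into_g.values())   # 100.0
gba_slew = max(p["slew"] for p in paths_into_g.values())         # 20.0
gba_output = gba_arrival + gate_delay(gba_slew)                  # 120.0

# PBA: re-time each path with its own slew, then take the worst path.
pba_output = max(p["arrival"] + gate_delay(p["slew"])
                 for p in paths_into_g.values())                 # 115.0

print(f"GBA arrival at G output: {gba_output:.1f}")   # 120.0
print(f"PBA arrival at G output: {pba_output:.1f}")   # 115.0
print(f"GBA pessimism: {gba_output - pba_output:.1f}")  # 5.0
```

Correlating the two, as the flow does, is what lets designers do most of timing closure in fast GBA without leaving all of that pessimism margin on the table.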

By definition there is less automation in the custom reference flow because it’s custom and the designer is expected to do more by hand. But obviously it includes the verification necessary for compliance with 16nm manufacturing and reliability requirements.

The 3D IC flow allows everything to move up into the third dimension. This is still work in progress so I don’t think this will be any type of final 3D flow. But it supports what you would expect: the capability to stack die using through-transistor stacking (TTS), through-silicon vias (TSVs) and microbumps, backside metal routing, and TSV-to-TSV coupling extraction.

So what is TTS? It is TSMC’s own name for TSVs on wafers containing active devices (as opposed to on interposers, which typically contain only metal routing and decaps, where they still use the TSV name). The 3D test vehicle has stacked memories on top of a 28nm SoC logic die, connected via microbumps. The 28nm logic die has TSVs through active silicon connecting to the backside routing (also called the re-distribution layer or RDL) and to C4 bumps on the backside of the logic die. The bumps then connect to a standard substrate on the module. So this is true 3D, not 2.5D where die are bumped and flipped onto an interposer and only the interposer (which doesn’t contain active devices) has TSVs. One of the challenges of TSVs is that the stress of manufacturing them alters transistor threshold voltages in the vicinity, and probably other stuff I’ve not heard about.


So FinFETs are coming at 16nm and the flows are ready to start designs, already validated in silicon. Plus a true 3D More than Moore flow.

OIP is coming up on October 1st. I’m sure that one of the keynotes will have some more about 16nm and 3D. For details and to register go here.


Intel Bay Trail Fail
by Daniel Nenni on 09-15-2013 at 5:00 pm

Now that the IDF 2013 euphoria is fading, I would like to play devil’s advocate and make a case for why Intel is still not ready to compete in the mobile market. It was very clear from the keynotes that Intel is a chip company, always has been, always will be, and that will not get them the market share they need to be relevant in mobile electronics. Just my devil’s advocate opinion, of course.

The first argument is the Bay Trail tablet offerings, which are mediocre at best. The WinSuperSite has a nice Fall Tablet Preview with pictures and everything you need to know to decide NOT to buy one. Notice there are no Bay Trail smartphones, just tablets big and small. How many people or corporations buy the same brand of tablet and phone? How many people or corporations will buy a new tablet every two years like they do smartphones? I still have my iPad 2 and, like my laptop, I have no plans to replace it until I absolutely have to (3-5 years). My bet is that there will be a fire sale on Bay Trail devices next year, so wait until then if you really want one.

“You’ve got to start with the customer experience and work backwards to the technology.”

The second argument is the Apple 64-bit SoC announcement last week, which totally eclipsed the Intel Bay Trail hype, absolutely. Why is 64 bits a big deal? The additional performance is what everybody is talking about, but the real reason for 64 bits is software portability. Corporate America can now move PC-based applications to Apple tablets/phones, which will further accelerate the decline of Intel’s PC revenue stream. The other thing to note is that Apple is moving away from buying chips; instead they create their own custom SoCs based on a licensed ARM architecture. This allows Apple to optimize the SoC for iOS and deliver the optimum customer experience. Qualcomm and Samsung also create custom SoCs and, between the three companies, they own the mobile market. So who is Intel going to sell chips to? Certainly not the sub-$50 phone makers in emerging markets. Are Microsoft and the legacy PC manufacturers all that is left?

The third argument is: do you really care what chips are inside your phone? Thanks to Intel marketing it is clearly marked that my laptop is powered by an Intel i7. For tablets and smartphones that is not the case, nor will it ever be. The only reason I know my iPhone 5 has a 32nm dual core SoC is because I work with the foundries, which is also how I know that the iPhone 5s A7 SoC is a 28nm LP quad core SoC manufactured by Samsung. For those of you who think it is 28nm or 20nm silicon from TSMC, you didn’t read my “TSMC Apple Rumors Debunked”. The iProduct 6 will have TSMC 20nm silicon and the iProduct 6s will be both Samsung and TSMC 14nm, my prediction.

Fourth is Intel leadership. I met the new Intel CEO Brian Krzanich (briefly) after his keynote on Tuesday. The keynote itself was good: not too polished, since sometimes these keynotes look like something out of Las Vegas. Brian is definitely an engineer and even added a Q&A session afterwards, which was new. The answers to the questions, however, confirmed that Intel is still Intel. Will Intel deliver synthesizable cores? No. Will Intel license their IP? No. Will Intel allow their IP to be manufactured by anyone else? No. Will Intel start with the customer experience and work backwards to the technology? Absolutely not. Intel thinks they will dominate mobile electronics like they did the PC with old-school benchmarking. Unfortunately, Samsung, ARM, Apple, Qualcomm, Broadcom, Mediatek, Nvidia, TSMC, and the rest of the fabless semiconductor ecosystem will not allow that to happen, no way, no how.

Also read:

The Significance of Apple’s 64 Bit A7

Intel Quark: Synthesizable Core But You Can’t Have It



Sidense and TSMC Processes
by Paul McLellan on 09-14-2013 at 2:21 pm

I’ve written before about the basic capabilities of Sidense’s single-transistor one-time programmable memory products (1T-OTP). To summarize, it is an anti-fuse device that works by permanently rupturing the gate oxide under the bit-cell’s storage transistor, something that is obviously irreversible. Also, compared to devices that depend on sensing the presence or absence of a charge, the read voltages are low and so the memory is naturally low power. The memory does require some non-standard voltages, especially for programming, but these are all internally generated by charge pumps. Another key advantage of the antifuse approach is that it can be manufactured in a standard digital process with no additional masks or process steps required.

Sidense will be presenting at TSMC’s OIP on October 1st. The technology has been proven in both poly-gate and gate-last HKMG processes. As a result there is broad support for TSMC processes from 40nm down to 20nm (all planar), with FinFET support currently in development. Sidense 1T-OTP has completed IP9000 assessment at many nodes, with more coming later this year and next year.

Obviously the picture at the start of this article shows a planar process, whereas in a FinFET the gate oxide wraps around the fin. Nevertheless, the FinFET structures align well with Sidense’s OTP implementation. Compared to 20nm, the 16nm FinFET implementation has the same bit-cell architecture and OTP design, although the bit-cell and macros are smaller, with lower leakage and better performance.

There are also other Sidense products suitable for use in other TSMC processes typically used for analog, mixed-signal, high voltage and so on. Even there, the Sidense memories depend only on the underlying standard digital process.

Betina Hold, director of R&D at Sidense, will be presenting An Antifuse-based Non-Volatile Memory for Advanced Process Nodes and FinFET Technologies at 4.30pm on the IP track (in the unenviable slot between attendees and beer). Register for OIP here. More details on Sidense’s product line here.


Analog Characterization Environment (ACE)
by Daniel Nenni on 09-12-2013 at 10:00 am

I’m looking forward to the 2013 TSMC Open Innovation Platform Ecosystem Forum to be held Oct. 1st in San Jose. One paper in particular that has my attention is titled “An Efficient and Accurate Sign-Off Simulation Methodology for High-Performance CMOS Image Sensors,” by Berkeley Design Automation and Forza Silicon. It is not every day that we get a chance to learn how design teams are tackling the tough verification challenges in complex high-performance applications such as image sensors.


CMOS Image Sensor

The paper will discuss how many image sensor performance-limiting factors appear only when all of the active and passive devices in the array are modeled, including random device noise and layout parasitics. Coupled with the highly sensitive nature of image sensors, where tens of microvolts of noise can create noticeable image artifacts, these characteristics create an enormous challenge for analog simulation tools, pushing both accuracy and capacity simultaneously.

The presentation will highlight image sensor design and verification and include a description of Forza’s verification methodology, which uses a hierarchy of models for the image sensor blocks. At higher levels of the hierarchy, the complexity of the model is reduced, but the accuracy of the global interactions between blocks is maintained as much as possible.

CMOS Image Sensor Block Diagram

Forza’s verification flow relies on the Berkeley Design Automation (BDA) Analog FastSPICE (AFS) Platform. AFS is qualified on the latest TSMC Custom Design Reference Flow and, according to Forza, has significantly improved their verification flow.

Results will highlight how AFS Full-Spectrum Device Noise, included in the latest TSMC Custom Design Reference Flow, validates that the sensitive ADCs and readout chain will withstand the impact of device noise and parasitics. For top-level sign-off, AFS AMS enables Forza to speed up verification by using Verilog to model non-accuracy-critical circuits while maintaining nanometer SPICE accuracy on blocks that were independently verified in other tools. AFS Mega has the capacity, speed, and accuracy required for Forza to perform verification of over 700 signal chains at the transistor level, including extracted parasitics.


ACE Visual Distribution Analyzer – 1000 Iterations

In terms of characterization, Forza relied on BDA’s Analog Characterization Environment (ACE) to improve their characterization coverage and efficiency. Results include Monte Carlo-based analysis to predict image sensor nonuniformity due to device mismatch. Additionally, AFS Circuit-Specific Corners, included in the latest TSMC Custom Design Reference Flow, eliminates the limitations of traditional digital process corners by generating circuit-specific corners for each measurement, suitable for analog designs.
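
A quick aside for readers new to mismatch analysis: the essence of Monte Carlo mismatch prediction is sampling a random per-device variation for every instance and looking at the spread of the resulting circuit outputs. Below is a deliberately simplified sketch; the pixel response model, gain, and sigma values are invented placeholders, and a real flow like ACE drives SPICE-accurate simulations against foundry statistical models rather than a closed-form toy.

```python
# Minimal Monte Carlo sketch of mismatch-driven pixel nonuniformity.
# All device numbers are hypothetical, chosen only to show the method.
import random
import statistics

random.seed(0)

VTH_NOM = 0.45     # nominal threshold voltage (V), hypothetical
SIGMA_VTH = 0.003  # per-device mismatch sigma (V), hypothetical
GAIN = 2.0         # hypothetical pixel sensitivity to Vth shifts (V/V)

def pixel_offset(vth):
    # Toy pixel response: output offset shifts linearly with Vth error.
    return GAIN * (vth - VTH_NOM)

# Simulate 1000 pixels, each with an independent random Vth mismatch.
offsets = [pixel_offset(random.gauss(VTH_NOM, SIGMA_VTH))
           for _ in range(1000)]

# The spread of per-pixel offsets is a fixed-pattern-noise estimate.
print(f"offset sigma: {statistics.stdev(offsets) * 1e6:.0f} uV")
```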


Also read: BDA Introduces High-Productivity Analog Characterization Environment (ACE)



TSMC OIP: Mentor’s 5 Presentations
by Paul McLellan on 09-09-2013 at 6:30 pm

At TSMC’s OIP on October 1st, Mentor Graphics have 5 different presentations. Collect the whole set!

11am, EDA track. Design Reliability with Calibre Smartfill and PERC. Muni Mohan of Broadcom and Jeff Wilson of Mentor. New smart-fill methodologies were developed at 28nm to meet DFM requirements (and at 20nm some layers of fill may have to be double patterned). Also, Calibre PERC for checking subtle electrical reliability rules. Both were successfully deployed on a large 28nm tapeout at Broadcom.

2pm, EDA track. EDA-based Design for Test for 3D-IC Applications. Etienne Racine of Mentor. Lots about how to test 3D and 2.5D ICs such as TSMC’s CoWoS. 3D requires bare die testing (to address the known-good-die problem) via JTAG, which can also be used for contactless leakage test. How to test memory on logic, and other test techniques for the More than Moore world of 3D ICs.

3pm, EDA/IP/Services track. Synopsys Laker Custom Layout and Calibre Interfaces: Putting Calibre Confidence in Your Custom Design Flow. Joseph Davis of Mentor. Synopsys’s Laker layout environment can run Calibre “on the fly” during design to speed creation of DRC-correct layouts. Especially at nodes below 28nm, where the rules are incomprehensible to mere mortals, this is almost essential to developing layout in a timely manner.

4.30pm, EDA track. Advanced Chip Assembly and Design Closure Flow Using Olympus SoC. Karthik Sundaram of nVidia and Sudhakar Jilla of Mentor. Chip assembly and design closure have become highly iterative manual processes with a huge impact on both schedule and design quality. Mentor and nVidia talk about a closure solution for TSMC processes that includes concurrent multi-mode multi-corner optimization.

Identifying Potential N20 Design for Reliability Issues Using Calibre PERC. MH Song of TSMC and Frank Feng of Mentor. Four key reliability issues are electromigration, stress-induced voiding, time-dependent dielectric breakdown of the intermetal dielectric, and charged device model (CDM) ESD. Calibre PERC can be used to verify compliance with these reliability rules in the TSMC N20 process.

Full details of OIP including registration are here.


TSMC OIP: Soft Error Rate Analysis
by Paul McLellan on 09-09-2013 at 1:34 pm

Increasingly, end users in some markets are requiring soft error rate (SER) data. This is a measure of how resistant the design (library, chip, system) is to single event effects (SEE), which manifest themselves as single event upsets (SEU), single event transients (SET), single event latch-up (SEL) and single event functional interrupts (SEFI).

There are two main sources that cause these SEE:

  • natural atmospheric neutrons
  • alpha particles

Natural neutrons (from cosmic rays) have a spectrum of energies, which affects how easily they can upset an integrated circuit. Alpha particles, by contrast, are stopped by almost anything, so they can only affect a chip if they come from contamination in the packaging materials, solder bumps and the like.


More and more partners need to get involved in this reliability assessment process. Historically it has started when end users (e.g. telecom companies) are unhappy. This means that equipment vendors (routers, base stations) need to do testing and qualification, and they end up with requirements on the process, on cell design (especially flops and memories), and at the ASIC/SoC level.

Large IDMs such as Intel and IBM have traditionally done a lot of the work in this area internally, but the fabless ecosystem relies on specialist companies such as iROC Technologies. They increasingly work with all the partners in the design chain, since reliability is a bit like a real chain: you can’t measure the strength of the chain without measuring the strength of all its links.

So currently there are multi-partner SER efforts:

  • foundries: support SER analysis through technology specific SER data such as TFIT databases
  • IP and IC suppliers: provide SER data and recommendations
  • SER solution providers: SEE tools and services for improving design reliability, accelerated testing

Reliability has gone through several eras:

  • “Reactive”: end users encounter issues and have to deal with them (product recalls, software hot-fixes, etc.)
  • “Awareness”: system integrators pre-emptively acknowledge the issue (system and component testing, reliability specifications)
  • “Exchanges”: requirements and targets are propagated up and down the supply chain, with SER targets for components, IP, etc.
  • “Proactive”: objective requirements drive the design and manufacturing flow towards SER management and optimization

One reason that many design groups have been able to ignore these reliability issues is that the requirements depend critically on the end market. The big sexy application processors for smartphones in the latest process do not have major reliability issues: they will crash from time to time due to software bugs and you just reboot; your life is not threatened. And your phone only needs to last a couple of years before you throw it out and upgrade.

At the other end of the scale are automotive and implantable medical devices (such as pacemakers). They are safety critical and they are expected to last for 20 years without degrading.
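
To put rough numbers on why the end market matters, here is a back-of-the-envelope calculation using the standard FIT unit (failures in time: one FIT is one failure per 10^9 device-hours). The SER value below is hypothetical, picked only to show the difference in scale:

```python
# Hypothetical soft-error math: same chip SER, very different markets.
FIT_PER_CHIP = 500        # invented soft-error rate for one chip (FIT)
HOURS_PER_YEAR = 24 * 365

def expected_upsets(years, n_chips=1):
    device_hours = years * HOURS_PER_YEAR * n_chips
    return FIT_PER_CHIP * device_hours / 1e9

# One phone kept for 2 years: upsets are rare, and a reboot fixes them.
print(f"one phone, 2 years: {expected_upsets(2):.3f} expected upsets")

# A 20-year automotive module across a fleet of a million vehicles:
# the same rate now means tens of thousands of field events.
print(f"fleet of 1M, 20 years: {expected_upsets(20, 1_000_000):,.0f}")
```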


iROC Technologies have been working with TSMC for many years and have even presented several joint papers on various aspects of SER analysis and reliability assessment.

iROC Technologies will be presenting at the upcoming TSMC OIP Symposium on October 1st. To register go here. To learn more about TFIT and SOCFIT, iROC Technologies’ tools for analyzing the reliability of cells and of blocks/chips respectively, go here.


Xilinx At 28nm: Keeping Power Down
by Paul McLellan on 09-08-2013 at 2:26 pm

Almost without exception these days, semiconductor products face strict power and thermal budgets. Of course there are many issues with dynamic power, but one area that has been getting increasingly problematic is static power. For various technical reasons we can no longer reduce the voltage as much as we would like from one process generation to the next, which means that transistors do not turn off completely. This is an especially serious problem for FPGAs since they have a huge number of transistors, many of which are not actually active in a given design at all, due to the nature of how an FPGA is programmed.

Power dissipation is also a thermal problem, since die temperature rises with the power dissipated. And temperature is related to reliability: as a rule of thumb, every 10°C increase in temperature doubles the failure rate. There are all sorts of complex and costly static power management schemes, but ideally the FPGA would simply have lower static power. TSMC, which manufactures Xilinx FPGAs, has two 28nm processes, 28HP and 28HPL. The 28HPL process has some big advantages over 28HP:

  • wider range of operating voltages (not possible with 28HP)
  • high-performance mode with 1V operation leading to 28HP-equivalent performance at lower static power
  • low-power mode at 0.9V operation, with 65% lower static power than 28HP. Dynamic power reduced by 20%.

The 28HPL process is used to manufacture the Xilinx 7 series FPGAs. The net result is that competing FPGAs built in the 28HP process have no performance advantage over the 7 series, and some competing products come with a severe penalty of over twice the static power (along with the associated thermal and reliability issues). In fact Xilinx’s primary competitor (I think we all know who that is) has had to raise their core voltage specification, resulting in a 20% increase in static power, and their power estimator has gradually crept the power up even more. Xilinx has invested resources so that the power specifications published when a product is first announced do not subsequently need to be revised, meaning that end users can plan their board designs confident that they do not need to worry about inaccurate power estimates.


The low power and associated thermal benefits mean that Xilinx FPGAs can operate at higher ambient temperature without the FPGA itself getting too hot. The graph below shows junction temperature against ambient temperature for 7 series standard and low-power devices compared to the competitor’s equivalent array. This is very important since the ability to operate at 50-60°C ambient while keeping the junction temperature at or below 100°C is essential in many applications, such as wired communications designs in rack-style environments like datacenters. It is no secret that routers and base stations are one of the largest end markets for FPGAs.
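
As a quick illustration of the 10°C rule of thumb quoted earlier: if the failure rate doubles for every 10°C rise, the relative rate goes as 2^(ΔT/10), so running the junction 20°C cooler cuts the failure rate to a quarter. A two-line sketch:

```python
# Rule-of-thumb scaling: failure rate doubles per +10 degrees C.
def relative_failure_rate(delta_t_celsius):
    return 2 ** (delta_t_celsius / 10)

print(relative_failure_rate(20))    # 4.0  -> 20C hotter, 4x the failures
print(relative_failure_rate(-20))   # 0.25 -> 20C cooler, a quarter
```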


But wait, there’s more, as the old infomercials said.

Not specific to the 7 series, the Vivado Design Suite performs detailed power estimation at all stages of the design, and has an innovative power optimization engine that identifies excessively power-hungry paths and reduces their power.

Reducing dynamic power depends on reducing voltage (which 28HPL allows), reducing frequency (usually not an option since that is set by the required system performance), or reducing capacitance. By optimizing the arrays for dense area, meaning shorter wires with lower capacitance, power is further reduced compared to Xilinx’s competitors.
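
For reference, the standard first-order model is P_dynamic = α·C·V²·f, where α is the activity factor. The quadratic voltage term is why the 0.9V mode alone is worth roughly 20%, matching the figure quoted above; a quick check with made-up capacitance and frequency values:

```python
# First-order dynamic power model; the C and f values are arbitrary
# since only the ratio matters here.
def dynamic_power(alpha, c_farads, v_volts, f_hz):
    return alpha * c_farads * v_volts ** 2 * f_hz

p_1v0 = dynamic_power(0.1, 1e-9, 1.0, 500e6)
p_0v9 = dynamic_power(0.1, 1e-9, 0.9, 500e6)
print(f"voltage-only saving: {1 - p_0v9 / p_1v0:.0%}")  # 19%
```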

And there is even more than that. But to find out you need to download the Xilinx white paper Leveraging Power Leadership at 28nm with Xilinx 7 Series FPGAs available here.


A Brief History of TSMC OIP
by Paul McLellan on 09-01-2013 at 9:00 pm

The history of TSMC and its Open Innovation Platform (OIP) is, like almost everything in semiconductors, driven by the economics of semiconductor manufacturing. Of course ICs started 50 years ago at Fairchild (very close to where Google is headquartered today; these things go in circles). The planar process, whereby a whole wafer (originally just 1”) went through each process step at once, led to mass production. Other companies such as Intel, National, Texas Instruments and AMD soon followed, starting the era of the Integrated Device Manufacturer (although we didn’t call them that back then, we just called them semiconductor companies).

The next step was the invention of the ASIC, with LSI Logic and VLSI Technology as the pioneers. This was the first step in separating design from manufacturing. Although the physical design was still done by the semiconductor company, the front-end design was done by the system company. Perhaps the most important aspect of this change was not that part of the design was done at the system company, but rather that the idea for the design, and the responsibility for using it to build a successful business, rested with the system company, whereas IDMs still had the “if we build it they will come” approach, with a catalog of standard parts.

In 1987, TSMC was founded and the separation between manufacture and design was complete. One missing piece of the puzzle was good physical design tools and Cadence was created in 1988 from the merger of SDA and ECAD (and soon after, Tangent). It was now possible for a system company to buy design tools, design their own chip and have TSMC manufacture it. The system company was completely responsible for the concept, the design, and selling the end-product (either the chip itself or a system containing it). TSMC was completely responsible for the manufacturing (usually including test, packaging and logistics too).

This also created a new industry, the fabless semiconductor company, set up in many ways to be like an IDM except for using TSMC as a manufacturer. A fabless semiconductor company could be much smaller since it didn’t have a whole fab to fill; often the company would be funded to build a single product. Since this was also the era of explosive growth in the PC, many chips were built for various segments of that market.

At this time, the interface between the foundry and the design group was fairly simple. The foundry would produce design rules and SPICE parameters, and the design would be submitted as GDSII and a test program. Basic standard cells were required, and these were available on the open market from companies like Artisan, or some groups would design their own. Eventually TSMC would supply standard cells, either designed in-house or from Artisan or other library vendors (with an underlying royalty model transparent to end users). However, as manufacturing complexity grew, the gap between manufacturing and design grew too. This caused a big problem for TSMC: there was a lag between when TSMC wanted to get designs into high-volume manufacturing and when the design groups were ready to tape out. Since a huge part of the cost of a fab is depreciation on the building and the equipment, which is largely fixed, this was a problem that needed to be addressed.


At 65nm TSMC started the OIP program. It began at a relatively small scale, but from 65nm to 40nm to 28nm the amount of manpower involved went up by a factor of 7; by 16nm FinFET, half of the effort is IP qualification and physical design. OIP actively collaborates with EDA and IP vendors early in the life-cycle of each process to ensure that design flows and critical IP are ready early. In this way, designs tape out just as the fab is starting to ramp, so that the demand for wafers is well matched with the supply.

In some ways the industry has gone a full circle, with the foundry and the design ecosystem together operating as a virtual IDM.

To be continued in part 2


The TSMC OIP Technical Paper Abstracts are up!
by Daniel Nenni on 08-25-2013 at 8:10 pm

The TSMC Open Innovation Platform® (OIP) Ecosystem Forum brings TSMC’s design ecosystem member companies together to share real-world solutions to customers’ design challenges and success stories of best practices within TSMC’s design ecosystem.

More than 90% of the attendees last year said “this forum helped them better understand the components of TSMC’s Open Innovation Platform” and “they found it effective to hear directly from TSMC OIP member companies.”

This year, the forum will feature a day-long conference starting with executive keynotes from TSMC in the morning plenary session to outline future design challenges and roadmaps, as well as discuss a recent collaboration announcement; 30 selected technical papers from TSMC’s EDA, IP, Design Center Alliance and Value Chain Aggregator member companies; and an Ecosystem Pavilion featuring up to 80 member companies showcasing their products and services.

Date: Tuesday, October 1st, 2013

Place: San Jose Convention Center

Attendees will learn about:

  • Design challenges in 16nm FinFET, 20nm, and 28nm
  • Successful, real-life applications of design technologies and IP
  • Ecosystem specific implementations in TSMC reference flows
  • New innovations for next generation product designs

In addition, attendees will hear directly from design ecosystem member companies talking exclusively about design solutions using TSMC technologies, and enjoy valuable opportunities for peer networking with nearly 1,000 industry experts and end users.

The TSMC Open Innovation Platform Ecosystem Forum is an “invitation-only” event: please register in order to attend. We look forward to seeing you at the 2013 Open Innovation Platform Ecosystem Forum.

Registration: Join the TSMC 2013 Open Innovation Platform® (OIP) Ecosystem Forum to be held on Tuesday, October 1st at the San Jose (CA) Convention Center.

Established in 1987, TSMC is the world’s first dedicated semiconductor foundry. As the founder and a leader of the Dedicated IC Foundry segment, TSMC has built its reputation by offering advanced and “More-than-Moore” wafer production processes and unparalleled manufacturing efficiency. From its inception, TSMC has consistently offered the foundry segment’s leading technologies and TSMC COMPATIBLE® design services.

TSMC has consistently experienced strong growth by building solid partnerships with its customers, large and small. IC suppliers from around the world trust TSMC with their manufacturing needs, thanks to its unique integration of cutting-edge process technologies, pioneering design services, manufacturing productivity and product quality.

The company’s total managed capacity reached 15.1 million eight-inch equivalent wafers in 2012. TSMC operates three advanced 12-inch wafer fabs, four eight-inch wafer fabs, and one six-inch wafer fab in Taiwan. TSMC also manages two eight-inch fabs at wholly owned subsidiaries: WaferTech in the United States and TSMC China Company Limited. TSMC also obtains eight-inch wafer capacity from other companies in which the Company has an equity interest.



20nm IC production needs more than a ready Foundry
by Pawan Fangaria on 08-23-2013 at 11:00 am

I think by now all of us know, or have heard about, the 20nm process node, its PPA (Power, Performance, Area) advantages, and its challenges (large design size and density, heterogeneity, variability, stress, lithography complexities, LDEs and so on). I’m not going to get into the details of these challenges, but will focus on the flows and methods which can overcome them and be made generally available to the larger design community for mass production of ICs at 20nm, based of course on the rules and regulations laid down by the foundries. If anyone wants the details of these challenges, she/he can refer to an earlier paper published by Cadence here.

Sometime in June/July this year, it was reported by TSMC that risk production of 20nm chips has already started and volume production will start by December this year or early next year. It is known that Apple, its first customer (for the A8 processor), is already lined up; more may join the queue. It must be noted that in the last quarter of 2012 TSMC also announced support for double patterning technology and multi-die integration, and corresponding reference flows for the 20nm process node.

For this technology to proliferate into mass production by leveraging the sea of design houses, EDA vendors must provide complete, holistic solutions to overcome these challenges rather than point tools. At 20nm that need becomes more prominent, because the node changes the design paradigm: double patterning complexities, variability, and interdependence between design phases and manufacturing. Designers can no longer wait until layout sign-off to fix problems; everything has to be done in parallel at each stage.
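
To see why these interdependencies arise, consider double patterning: decomposition is, at its core, the assignment of features that sit closer than the single-mask pitch to two different masks, i.e. 2-coloring a conflict graph, and an odd cycle of conflicts cannot be 2-colored at all. That is why placement and routing have to be color-aware from the start rather than leaving decomposition to sign-off. A toy sketch follows (real decomposition engines also handle stitches, cost functions and DRC interactions, none of which appear here):

```python
# Toy double-patterning decomposition as 2-coloring of a conflict graph.
from collections import deque

def two_color(conflict_graph):
    """BFS 2-coloring; returns None when an odd conflict cycle makes the
    layout undecomposable without a stitch or a layout change."""
    colors = {}
    for start in conflict_graph:
        if start in colors:
            continue
        colors[start] = 0
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for neighbor in conflict_graph[node]:
                if neighbor not in colors:
                    colors[neighbor] = 1 - colors[node]
                    queue.append(neighbor)
                elif colors[neighbor] == colors[node]:
                    return None  # odd cycle: double-patterning conflict
    return colors

# Three wires where A-B and B-C are below minimum same-mask spacing:
print(two_color({"A": ["B"], "B": ["A", "C"], "C": ["B"]}))
# {'A': 0, 'B': 1, 'C': 0} -> A and C share one mask, B gets the other

# A triangle of mutual conflicts cannot be split across two masks:
print(two_color({"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}))
# None
```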


[Challenges and requirements for 20nm IC design]

As we can see, tackling these issues in the design is not enough; design closure needs to happen on time and with the desired PPA in order to hit the window of opportunity in the market. Having earlier worked at Cadence, I can firmly say that this is one company which provides a complete end-to-end solution across the overall design flow, with a whole spectrum of EDA tools for all types of designs: custom, digital, mixed-signal and so on. The company has the right expertise, through its long tenure in the semiconductor EDA domain, to address designers’ needs at all levels. For example, analog design needs a more customized approach whereas digital design has a very high level of automation.


[Cadence GigaFlex technology – A flexible modelling approach to manage large designs]

Cadence proposes rapid prototyping and rapid verification methodologies to save a significant amount of design time. It uses flexible modelling to support the required level of abstraction at each stage; for example, the model at the design exploration or planning stage does not require the details of the one used at the block implementation level. Further, it uses an innovative “Prevent, Analyze and Optimize” approach which drives both custom and digital platforms to enable faster design convergence at advanced nodes. In-design sign-off is done at each stage (placement, routing, lithography analysis, timing and signal integrity and so on) using sign-off-quality tool engines. A correct-by-construction approach is also used at design time, with capabilities such as constraint-driven design, LDE-aware placement, color-aware P&R and in-design verification.


[Clock Concurrent Optimization combines timing-driven CTS with physical optimization]

Clock Concurrent Design Flow is a paradigm shift that makes Clock Tree Synthesis (CTS) timing-window-driven rather than skew-driven, and merges it with physical optimization. This provides significant PPA gains: 30% savings in power and area, and a 100MHz performance improvement for a GHz-class design with ARM processors.

To conclude, there are several challenges in reaping the benefits of 20nm technology, but with the right tools, methodologies and collaboration across the semiconductor ecosystem, they can be overcome. There is a detailed whitepaper from Cadence on the methodologies to be used for 20nm designs, “A Call to Action: How 20nm will Change IC Design”. It’s worth looking at; I enjoyed reading it and jotted down a summary in this article. The paper also has other references on 20nm technology. Enjoy reading!!