Will 14nm Yield?
by Daniel Nenni on 03-26-2013 at 9:00 pm

If I had a nickel for every time I heard the term “FinFET” at the 2013 SNUG (Synopsys User Group) Conference I could buy a dozen Venti Caramel Frappuccinos at Starbucks (my daughter’s favorite treat). In his keynote, Aart de Geus said FinFET 14 times and posed the question: Will FinFETs Yield at 14nm? So that became my mission: ask everybody I saw whether FinFETs will yield at 14nm.

Established in 1990, SNUG represents a global design community focused on accelerating innovation. In 2012, SNUG brought together nearly 9,000 users across North America, Europe, Asia and Japan, and featured nearly 250 customer papers. As the electronics industry’s largest user conference, SNUG delivers a robust technical program developed by users for users and includes strong ecosystem partner participation through Design Community Expos in many regions.

Given that Synopsys owns the TCAD market, Aart certainly knows what is yielding and where. Ever the businessman, Aart then segued into a pitch for the Synopsys Yield Explorer product, which is a staple at 20nm. At the press event following the keynote Aart mentioned lithography challenges and doping. In the hallway, co-CEO Chi-Foon Chan mentioned parasitic extraction challenges, and process variation came up repeatedly among the 3,000 or so attendees. No real consensus emerged because, in the words of one conference attendee, “We just don’t know what we don’t know.” Hard to argue with that. An Intel person also told me over lunch that yielding 3D transistors at 14nm will be a new challenge, even after successfully yielding at 22nm.

Aart mentioned “The Exponential Age” during his keynote and talked a bit about Moore’s Law and how semiconductor manufacturing improves exponentially with time. Afterwards, a hallway discussion turned to Wright’s Law of Experience, or “we learn by doing,” which seems much more applicable to semiconductor yield. A quick search on my pocket supercomputer (iPhone 5) brought me to a paper published in 2010 out of the Santa Fe Institute, “Statistical Basis for Predicting Technological Progress,” which tests Moore’s Law, Wright’s Law, and four other laws against 62 different technologies, including transistors. Spoiler alert: Wright’s Law wins but Moore’s Law is a close second!
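For readers who want the distinction made concrete, here is a minimal sketch (my own illustration with synthetic numbers, not data from the paper) of how the two laws are usually formalized and fitted: Moore's Law models cost as an exponential in calendar time, while Wright's Law models it as a power law in cumulative production.

```python
import numpy as np

# Synthetic cost history (illustrative numbers only, not the paper's data)
rng = np.random.default_rng(0)
years = np.arange(2000, 2013)                        # calendar time t
cum_units = np.cumsum(np.exp(0.5 * (years - 2000)))  # cumulative production x
cost = 100 * cum_units ** -0.4 * np.exp(rng.normal(0, 0.05, len(years)))

# Moore's Law:  cost(t) = c0 * exp(-m * t)  ->  log(cost) is linear in t
m_slope, _ = np.polyfit(years, np.log(cost), 1)

# Wright's Law: cost(x) = c1 * x ** (-w)    ->  log(cost) is linear in log(x)
w_slope, _ = np.polyfit(np.log(cum_units), np.log(cost), 1)

print(f"Moore fit:  cost ~ exp({m_slope:+.3f} * t)")
print(f"Wright fit: cost ~ x ** ({w_slope:+.3f})")
# When production itself grows roughly exponentially in time, the two fits
# are nearly interchangeable, which is why the paper finds the laws so close.
```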

Circling back around to the original question “Will 14nm Yield?” you really have to take a close look at 20nm, since 14nm is really just a 20nm process with FinFET transistors. According to one FPGA company and two mobile SoC companies that I know and love, 20nm is currently yielding “as expected”. So I would have to say yes, 14nm FinFETs will have a reasonably quick yield ramp since most of the problems will be vetted at 20nm. The next question of course is: Who will be first with 14nm production silicon? This is probably one of the most interesting races in the history of the fabless semiconductor ecosystem. If you allow handicapping and buy into Wright’s Law, it will be TSMC, absolutely.

Don’t forget to register for FinFET day at the Electronic Design Process Symposium on April 19th. There will be technical presentations from TSMC, GLOBALFOUNDRIES, ARM, Synopsys IP, and Oracle plus an opening keynote from me on the Planar versus FinFET value proposition. I hope to see you there!


In compliance we trust, for integration we verify
by Don Dingee on 03-26-2013 at 8:10 pm

So, you dropped that piece of complex IP you just licensed into an SoC design, and now it is time to fire up the simulator. How do you verify that it actually works in your design? If you didn’t get verification IP (VIP) with the functional IP, it might be a really long day.

Compliance checking something like a PCIe interface block is a stringent process that has to explore every sequence of a protocol. A diligent IP vendor will have performed exhaustive testing on a block to ensure its protocol execution conforms to interoperability guidelines. For most third-party blocks that have been released, it is a fairly safe assumption that the block executes the protocol correctly under controlled conditions.

An SoC design, with multiple cores operating in a complex application generating traffic asynchronously, usually doesn’t qualify for “controlled conditions”. The job of integration testing is to expose known-good IP to stimuli within the complete environment, subjecting it to just enough chaos to increase confidence in the overall design.

A recent Synopsys webinar titled “Accelerate PCIe Integration Testing with Next-Generation Discovery VIP” walks through the thought process behind verifying a piece of complex interface IP. Paul Graykowski, Senior Corporate Application Engineer (CAE) for Synopsys, outlines four areas that should be explored. My CliffsNotes version:

Link testing – first, a PCIe link must be trained. The VIP allows setting the number of lanes and the link speed, verifies that L0 is reached, and performs enumeration. From L0, link speeds may need to be renegotiated, from 8Gbps for Gen3 down to 5Gbps and 2.5Gbps. Finally, lane reversal needs to be verified.

Traffic testing – first, the PCIe block is set up as requester, and the VIP block as completer. A set of reads and writes with different completion and payload sizes is run, looking for proper multiple completions with random data sets (completion splitting is illustrated in the sketch below). Then the roles are reversed, and the VIP is the requester: config writes and reads to random registers and random addresses with randomized payload sizes are performed. Finally, a series of writes and reads is performed alongside other AXI traffic in the system.

Interface testing – supported PCIe interfaces include SERIAL, PARALLEL, and PIPE; all must be verified. Also important is verifying lane error handling, forcing negotiation to a lower number of lanes, and looking at common errors such as disparity and bit-flipping. Gen3 equalization should also be checked, seeing that coefficients are requested and rejected. Finally, the behavior under power management scenarios, including things like hot plug and clock gating, should be verified.

Performance testing – with everything working, performance can then be verified. With background AXI traffic (not targeting the PCIe block) running, concurrent traffic is generated with direct I/O for PCIe, and latency and throughput are verified.
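To make the traffic-testing idea of “proper multiple completions” concrete, here is a minimal Python model (my own illustration, not the Synopsys VIP API; the constant and field names are assumptions) of how a single memory read gets answered with several completion TLPs when the requested length exceeds the maximum payload size:

```python
import random

MAX_PAYLOAD = 256  # bytes; a typical Max_Payload_Size setting (assumed here)

def split_read_completions(addr: int, length: int) -> list:
    """Split one memory read request into multiple completion TLPs,
    each carrying at most MAX_PAYLOAD bytes, the way a completer would."""
    completions, remaining, offset = [], length, 0
    while remaining > 0:
        chunk = min(remaining, MAX_PAYLOAD)
        completions.append({
            "lower_addr": (addr + offset) & 0x7F,  # 7-bit Lower Address field
            "byte_count": remaining,               # bytes left, including this TLP
            "payload": bytes(random.randrange(256) for _ in range(chunk)),
        })
        offset += chunk
        remaining -= chunk
    return completions

# A checker would verify the byte counts descend correctly and the data
# reassembles into the original request.
cpls = split_read_completions(addr=0x1000, length=600)
assert sum(len(c["payload"]) for c in cpls) == 600
print(f"{len(cpls)} completions returned for a 600-byte read")
```

Generating and checking sequences like this by hand, across randomized sizes and addresses, is exactly the tedium the VIP is meant to take off your plate.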

Sounds easy, right? Even if you’re a PCIe expert, generating the cases is non-trivial, and that is assuming everything works. When is enough coverage enough? And how do you debug something subtle when it goes wrong?

The architecture includes coverage models, so you can track things like tests for all the layers of the protocol – transaction, data link, and PHY – and combinations of traffic class and virtual channels, including cross combinations. It allows you to track DLLPs, looking at ack/nak, power management, flow control, and injected errors. It also allows you to track TLPs, looking at transaction types, header values, sequence numbers, error injections, and completion status.
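As a rough picture of what cross coverage means here, this simplified sketch (my own illustration, not the actual Discovery VIP coverage model) counts observed traffic class and virtual channel pairs and reports how many cross bins were hit:

```python
from collections import Counter
from itertools import product
import random

TRAFFIC_CLASSES = range(8)   # TC0..TC7
VIRTUAL_CHANNELS = range(4)  # VC0..VC3 (the VC count is design-dependent)

cross_bins = Counter()

def sample_tlp(tc: int, vc: int) -> None:
    """Record one observed (traffic class, virtual channel) combination."""
    cross_bins[(tc, vc)] += 1

# Simulated stimulus: in a real run these samples come from monitored TLPs.
for _ in range(500):
    sample_tlp(random.choice(TRAFFIC_CLASSES), random.choice(VIRTUAL_CHANNELS))

total = len(TRAFFIC_CLASSES) * len(VIRTUAL_CHANNELS)
hit = sum(1 for combo in product(TRAFFIC_CLASSES, VIRTUAL_CHANNELS)
          if cross_bins[combo] > 0)
print(f"TC x VC cross coverage: {hit}/{total} bins hit ({100 * hit / total:.0f}%)")
```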

For protocol-aware debug, the Protocol Analyzer understands TLPs, DLLPs, credits, ordered sets, and LTSSM states – all presented graphically – and also understands waveforms. You can select a particular TLP and see its address, data, byte enables, and descendants. You can also put more than one protocol side-by-side, say AXI in, PCIe out, to explore deeper issues.

There’s more in this webinar, and if you are new to verification, or having a hard time justifying the outlay of buying into commercial verification IP, or just curious what the verification process looks like beyond the mechanics of writing SystemVerilog, it is definitely worth a look. Synopsys has a full range of verification IP for a variety of interfaces and protocols, and the site has more information on things like error injection, coverage tracking, and protocol-aware debug.


Moore Push Versus Market Pull
by Paul McLellan on 03-25-2013 at 5:55 pm

I was at SNUG earlier today for both Aart’s keynote that opened the conference and his “meet the press” Q&A just before lunch. The keynote was entitled Bridges to the Gigascale Decade, and the presentation certainly contained lots of photos of bridges! Anyway, I’m going to focus on just one thing, namely how the dynamics of the industry change depending on the cost per transistor as we go down to 9nm.

One thing that Aart talked about at both sessions was this trend as we go down through the next few process nodes. It is clear that FinFETs bring great value, especially much lower leakage current. 20nm planar, by contrast, doesn’t bring much advantage: all the extra costs and hassles of 20nm without the benefits of FinFETs. No wonder everyone is rushing to 14nm (sometimes called 16nm), which is actually 14nm FinFETs on a 20nm interconnect fabric.

Aart had a graph from Intel showing the cost per transistor coming down almost linearly, with an extra kicker if and when we have 450mm wafers. Of course there is another saving with EUV but, as you probably know, I’m a bit of a skeptic about that. I hope this is true, but I’ve also seen other graphs showing the cost being flat. At the Common Platform meeting a few weeks ago, Gary Patton of IBM said that there was a cost saving but it is much less than we have been used to. The old economics was a 50% increase in die per wafer and a 15% increase in cost per wafer, leaving a 35% saving. Who knows what the new rules are?
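As a back-of-the-envelope check on the old economics, cost per die is just wafer cost divided by die per wafer:

```latex
\frac{\text{new cost per die}}{\text{old cost per die}}
  = \frac{1.15\,C_{\text{wafer}} / (1.50\,N_{\text{die}})}{C_{\text{wafer}} / N_{\text{die}}}
  = \frac{1.15}{1.50} \approx 0.77
```

Taken literally, those two numbers give roughly a 23% saving per die; the oft-quoted 35% figure implies a larger gain in die per wafer (closer to 1.75x at the same 15% wafer-cost increase). Either way, Patton's point stands: the per-node saving is smaller than it used to be.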

But Aart feels it doesn’t really matter. We are leaving the era where the push of Moore’s Law drove the semiconductor industry and entering the era when market pull will drive the semiconductor business even if transistor costs do not come down. That is certainly true for some markets. The cost of the application processor chip in your cell-phone isn’t that critical since it is a $500 product. But in Africa there is a market for cell-phones with a $50 BOM, and there every dollar is important.

So I’m a little unconvinced that the economics of Moore’s law are irrelevant compared to the exponential demand for greater and greater functionality.

At the Q&A we all discussed this. We talked about how everyone would like IBM’s Watson in our pocket, and semiconductor technology over the next decade will be able to deliver this. If the costs per transistor come down, then I believe this. But if they don’t, and if Watson has, say, ten times as many transistors as the current chip in your smartphone, that means the chip will cost ten times as much or more. Yes, wonderful functionality, low power, but maybe at a price point that doesn’t work even in the US. And, to make it worse, just waiting, which has always been the way to get electronics cheaper than buying the first version of something, won’t make it any cheaper.

A lot of electronics has been driven over the years by the exponential decrease in cost over many process generations. Not a 15% saving, but a reduction in cost of 1000X over 20-30 years. That is how we have more computer power in our pockets than million dollar flight simulators did in the 1980s. I suspect that costs will come down as we get more learning about yield, but there are genuinely unavoidable extra costs like double patterning and the complex construction of the 3D FinFET structure.


New ways for High Frequency Analysis of IC Layouts
by Pawan Fangaria on 03-25-2013 at 5:30 pm

Amidst frequently changing requirements, time pressure and demand for high accuracy, it is imperative that EDA and design companies look at time-consuming processes in the overall design flow and find alternatives that do not sacrifice accuracy. High frequency analysis of IC designs is one such process. Traditionally it is based on models developed using TCAD tools, which often means approximations relative to the actual design, and the method is not suitable for architectural exploration, which is a prime need today. Moreover, the actual interactions among devices and interconnects need to be taken into account.

Mentor Graphics has developed a novel and practical approach to address these issues realistically, in live designs, on the fly. It can extract a full chip with automatic detection of HF devices and characterize them accurately, taking into account their interaction with neighbouring devices and interconnects. Naturally the approach is quite suitable for design exploration as well. The HFA engine is seamlessly integrated into the various design flows of the Calibre platform, making it very scalable, user-friendly and production worthy, with a 10X performance and capacity boost compared to the traditional TCAD-based approach.

It’s interesting to have a look at Mentor’s white paper located at http://www.mentor.com/resources/techpubs/upload/mentorpaper_74888.pdf. It describes the new modelling and characterization approach in detail, taking an inductor as the example for automatic recognition and characterization. It uses a port characterization method expressed as a matrix of S-parameters; effective inductance (L) and quality factor (Q) can then be calculated from the S-parameters.
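For the curious, the conversion is standard microwave textbook material. Here is a minimal Python sketch (my own illustration of the generic one-port relations, not Mentor's implementation; the example S11 value is made up) that derives L and Q from a single S11 measurement:

```python
import numpy as np

Z0 = 50.0  # reference impedance in ohms

def inductance_and_q(s11: complex, freq_hz: float):
    """Convert a one-port S11 at a given frequency into effective
    series inductance L and quality factor Q via the input impedance."""
    z_in = Z0 * (1 + s11) / (1 - s11)  # standard S-to-Z conversion
    omega = 2 * np.pi * freq_hz
    L = z_in.imag / omega              # effective inductance (henries)
    Q = z_in.imag / z_in.real          # quality factor
    return L, Q

# Example with a hypothetical S11 value at 5 GHz (illustrative only)
L, Q = inductance_and_q(s11=0.30 + 0.45j, freq_hz=5e9)
print(f"L = {L * 1e9:.2f} nH, Q = {Q:.1f}")
```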


[Example – Inductor layout with assigned ports and a shield (in blue) placed below the inductor to reduce substrate loss]

With the help of an SVRF (Standard Verification Rule Format) rule file, which carries specific information about process layers, port assignments, frequency range and so on, Calibre automatically detects the precise inductor geometry from the layout data. Electromagnetic (EM) modelling is then applied: the Calibre EM engine evaluates the integral form of Maxwell’s equations in the frequency domain, using numerical calculations based on the boundary-element method. Special techniques have been employed to control memory consumption and computation time in solving the linear system resulting from the boundary-element method.

The S-parameter results from Mentor’s new modelling approach are within 2% of those from TCAD.


[Mentor (red) and Reference (green) results for port S-parameters]

The L and Q values derived from Mentor S-parameters are within 5% and 10% of TCAD based L and Q values respectively.


[L and Q values results from Mentor (red) and Reference (green)]

Mentor’s HFA solution is excellent for practical design and exploration work, delivering quick turnaround time with high accuracy. Calibre provides an integrated platform with excellent usability, scaling and an on-the-fly solution, and the scalability can be increased further by using multiple CPUs in a multi-threaded, multi-core, or clustered configuration.


Moore and Beyond: Global Semiconductor Primer
by Daniel Nenni on 03-24-2013 at 8:10 pm

There are a couple of new analyst reports out that are interesting enough to blog about. I talk to the financial types on the phone every week or so explaining the semiconductor business and answering questions about the foundries and the top fabless companies. We don’t always agree on the outlook but there is always something to be learned.

“Integrated circuits will lead to such wonders as home computers or at least terminals connected to a central computer, automatic controls for automobiles, and personal portable communications equipment. The electronic wristwatch needs only a display to be feasible today” (Gordon Moore, 1965).

The first report is a 160-page Moore and Beyond: Global Semiconductor Primer from Merrill Lynch / BofA. It’s a very good overview with some excellent market data but comes up short when talking about the foundries and FinFETs. Intel still has a big influence on Wall Street, certainly much more influence than Intel has on the semiconductor ecosystem as it stands today. Commenting on the Intel versus ARM mobile battle without first knowing the next Intel CEO seems to be a waste of time but they do it anyway. If Intel promotes from within rather than grabbing a mobile-oriented CEO, Intel will go the way of the PC (my opinion). ARM has already made the CEO move which is talked about HERE.

For more information on this report contact: Merrill Lynch Research

Even more interesting is a Foundry Tracker Report from Catamount Strategic Advisors: Competition increasingly to define the semiconductor manufacturing business:

METHODOLOGY

  • Interviewed semi foundry professionals in recent trips to Taiwan, Seoul (S. Korea) and Silicon Valley (CA) to gauge industry trends.
  • Identified momentum shifts between manufacturing processes and measured impact for companies throughout the food chain.

COMPANIES MENTIONED:

  • TSMC (TSM)
  • UMC (UMC)
  • GLOBALFOUNDRIES
  • Samsung
  • IBM (IBM)
  • INTEL (INTC)
  • Apple (AAPL)
  • Altera (ALTR)
  • Xilinx (XLNX)
  • Marvell (MRVL)

KEY TAKES

  • Via relentless execution, TSMC has established itself as the 800 lb gorilla in the semiconductor manufacturing industry (>50% market share). Our research indicates this will change in the next few years; we highlight our research below:

For more information on this report contact: Catamount.

And yes, I’m one of the industry professionals they interviewed for this report. Catamount is a boutique research firm here in Northern California so they are easy to reach:

“Tech Equity Research the way it should be: Actionable, Insightful, and Timely”

For more detailed information on the value proposition of FinFETs please join me at the Electronic Design Process Symposium in Monterey, CA on April 19th. You can find more information HERE. I hope to see you there!


Can Samsung Deliver As Promised?
by Daniel Nenni on 03-24-2013 at 8:01 pm

Samsung’s aggressive marketing is starting to hit the fabless semiconductor ecosystem so we had all better be prepared. Samsung’s 2012 marketing budget exceeded $400M which probably beats the marketing budget of the entire fabless ecosystem! The question is, will the fabless semiconductor industry be easily swayed by Samsung’s clever marketing? Let’s take a look at their recent mailer and see:

Samsung is fueling the innovations that are changing the way we connect, interact, and benefit from technology. Our low-power, high-performance components and advanced process manufacturing are the foundation for your next leap in mobile products and the infrastructure that connects them. Connect with Samsung and stay abreast of our latest solutions.

This email talked about 14nm and getting ready for FinFET design. According to what I hear in Silicon Valley, Samsung is claiming to be far ahead of the competition and that is confirmed here:

GET READY TO DESIGN 14nm PROCESS IS HERE!

The foundry business is all about setting customer expectations and meeting them. Let’s take a look back at 28nm. According to an EETimes article dated 6/6/2011:

South Korea’s Samsung Electronics Co. Ltd. said Monday (June 6th) that its foundry business, Samsung Foundry, has qualified its 28-nm low-power (LP) process with high-k metal gate technology and is ready for production.

Where is the qualified 28nm silicon they spoke of almost two years ago? Why does my iPhone 5 have a Samsung 32nm based SoC? Why does TSMC have 99.99% 28nm market share? If you want someone to blame for the 28nm shortage, how about Samsung? Clearly they did not deliver. Barely a month later (7/12/2011) EETimes reported:

Samsung Electronics Co. Ltd. has said that its foundry chip-making business has taped out an ARM processor test-chip intended for manufacture in a 20-nm process with high-K metal-gate technology.

Where is the Samsung 20nm silicon they speak of? I’m waiting for a teardown of the new Galaxy S4 but knowing that the S3 is powered by a 32nm Samsung SoC I highly doubt we will see a Samsung 20nm SoC anytime soon.


Speaking of marketing smoke and mirrors, I was at the Samsung keynote at CES where the Exynos 5 Octa SoC was launched, claiming to be the first eight-core SoC. There was a more technical presentation at ISSCC a month later. In reality, the Exynos 5 Octa is four off-the-shelf ARM A15 cores plus four off-the-shelf ARM A7 cores in a big.LITTLE configuration where only four cores can be used at a time. What a mess, they should have named it Octomom!

Meanwhile, back on planet Earth, Apple and Qualcomm license the ARM architecture and create highly integrated custom SoCs tuned for the associated operating system, which is why they have the best performance/battery-life ratio on the market today. How long until AAPL and QCOM have the ARM 64-bit architecture rolled out? Closer than you think, believe it.

Time will tell, but based on previous Samsung marketing materials for 28nm and 20nm, Samsung 14nm may not be as close as it appears. Samsung 28nm test chips were announced in June of 2010 and hopefully we will see 28nm production silicon three years later. Samsung 20nm test chips were announced in July 2011, with silicon expected in 2014? Samsung 14nm test chips were announced in December of 2012, so you do the math. Maybe Q1 2016? If so, they are certainly behind the competition.

Of course that will not stop the Samsung propaganda machine. As they say, the pen is mightier than the sword and $400M buys a whole lot of ink!


Unlocking the Full Potential of Soft IP
by Daniel Payne on 03-22-2013 at 11:32 am

EDA vendors, IP suppliers and foundries provide an ecosystem that SoC designers use to get their new electronic products to market quicker and at lower cost. An example of this ecosystem is the trio of TSMC, Atrenta and Sonics, which teamed up to produce a webinar earlier in March called Unlocking the Full Potential of Soft IP.




A Brief History of Forte Design Systems
by Daniel Nenni on 03-21-2013 at 8:10 pm

When SemiWiki readers see the name Forte Design Systems, they may think of the live bagpipers’ performance that closes the yearly Design Automation Conference. Forte has been the sponsor of this moving end to DAC since 2001. Step with me behind the plaid kilts for a good look at this remarkable company, headquartered in San Jose, Calif., enabling remarkable designs.

Forte finished 2012 with 22% growth, and boasted that 2012 was the seventh consecutive year of revenue growth. Better yet, it was named the #1 provider of electronic system-level (ESL) synthesis software by Gary Smith EDA. Forte prefers high-level synthesis (HLS) –– a way for hardware engineers to work at a higher level of design abstraction –– to the ESL acronym and, in fact, is credited with defining the market. Sean Dart, CEO, and Brett Cline, vice president of marketing and sales, cite a strong commitment to high-value software and top-notch support for moving into the top slot.

Forte has been around since 2001, formed not to sponsor bagpipers but through the merger of CynApps and Chronology. The name was chosen appropriately enough: it’s a noun meaning a strength, something one is good at. Long-time industry observers will remember that Dr. John Sanguinetti started CynApps in 1998, after Chronologic and VCS, the popular Verilog Compiled Simulator, were acquired by Viewlogic in 1994. (Synopsys acquired Viewlogic in 1997 and continues to sell VCS today.)

Dr. Sanguinetti is Forte’s CTO and knew back in the early 1990s that logic verification and logic synthesis were two big EDA problem areas. He took on logic verification at Chronologic, then turned his attention to synthesis, knowing that a change in abstraction level, like the move from gates to register transfer level (RTL) that logic synthesis enabled, would improve design and verification efficiency. In 1998, he and two other engineers founded CynApps to create a higher-level design environment, along with a synthesis product that would produce RTL code from higher-level designs.

Then came the 2001 merger of CynApps and Chronology with synthesis and verification offerings. Chronology was a verification provider, but not as well known or successful as Verisity (now part of Cadence). With a visible presence and good reputation in the synthesis space, CynApps could attract funding. The decision was made to continue to focus on high-level synthesis and use the verification tools to build a high-level synthesis environment with verification at its core. The move to a high-level synthesis business model was soon validated –– Fujitsu, Ricoh and Sony were among the first Forte customers that year.

Asia’s consumer electronics companies adopted a high-level synthesis methodology ahead of other regions. Japan’s consumer device designs were a natural fit for this kind of software because HLS can implement image-manipulation algorithms in hardware.

Cynthesizer, Forte’s SystemC high-level synthesis product, is selected by development teams who want to ease time-to-market pressure by designing at a higher level of abstraction and who require substantial improvements in circuit size and power. In many cases, teams create designs that would be impossible using RTL. The United States, Korea and Japan, in particular, are design centers where Cynthesizer is in use for custom processors and wired and wireless communication devices.

In 2009, Forte acquired Arithmatica to complement its product offerings with a portfolio of intellectual property (IP) and datapath synthesis technology that has been integrated directly into Cynthesizer.

Moving design to a higher level of abstraction has been a tough challenge that Forte solved, and its production-quality tools are going mainstream to prove it. Of course, Forte will be at the 50th DAC in Austin, Texas, and is already planning for the return of the bagpipers this year.


Mixed-Signal SoC Verification Has an Integrated Solution
by Pawan Fangaria on 03-21-2013 at 8:10 pm

These days when we talk of SoC verification, what comes to mind immediately is the virtual platform. Of course, with the increasing size, complexity and variety of design styles, it is very much a necessity.

However, a virtual platform rests on actual verification engines and methodologies, and these vary considerably across the digital, analog and mixed-signal portions of the design. Gone are the days when we simply embedded a pre-verified analog block into a digital design with identified signals interfacing between them, or conversely imported digital blocks into analog-centric designs. Today we have large analog, digital and mixed-signal design content, all sitting together in an integrated design and interacting through multiple signals continuously. And in almost all designs, mixed-signal content is inevitable.

Typically, analog components are simulated by SPICE and its variants, while digital simulation is simplified by discrete data models simulated through Verilog or VHDL simulators. As design complexity has increased, along with the need for analog and digital components to coexist, various methodologies, languages and tools have emerged for modelling and verifying analog, digital and mixed-signal designs, trading off accuracy against performance and capacity.

Verilog-A, Verilog-AMS, VHDL-AMS, SystemVerilog and now SystemC (extended to SystemC AMS for system-level mixed-signal modelling) provide tremendous capabilities to model analog and digital content together, which a single-kernel tool can then simulate efficiently at the desired level of accuracy. Cadence provides a suite of tools and techniques, applicable in various contexts, to solve the challenging problem of verifying a complete mixed-signal SoC. It provides a seamless environment with a unified GUI that integrates its simulation engines, design environment and verification methodologies to give the user a unified experience. I came across a Cadence white paper that describes the verification solution in a good level of detail; it can be found at http://www.cadence.com/rl/Resources/white_papers/ms_soc_verification_wp.pdf

As an example, Virtuoso AMS Designer links the Virtuoso custom design platform with the Incisive verification platform. It supports simulators such as the Spectre and Spectre RF circuit simulators, the UltraSim full-chip simulator, the Accelerated Parallel Simulator and the Incisive Enterprise Simulator, all integrated in a common GUI and used as the need arises.

Interfacing and translation between the discrete levels of digital signals and the continuous voltage levels of analog signals are handled automatically through special connection modules. Common Power Format (CPF) is used with novel techniques to distinguish a functional error from a false error caused by power shutdown in a particular digital or analog portion of the circuit (a common situation in low-power designs, which keep power on only for the circuitry that is active at any given time). Provisions for levels of abstraction from behavioural down to transistor level are also provided.
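As a rough picture of what a connection module does (a simplified Python sketch of the general idea, not Cadence's actual connect modules; the supply and threshold values are assumptions), the electrical-to-logic direction reduces to threshold comparison, ideally with hysteresis so the logic value does not chatter near the switching point:

```python
VDD = 1.2           # supply voltage (illustrative value)
VIH = 0.7 * VDD     # input-high threshold (assumed)
VIL = 0.3 * VDD     # input-low threshold (assumed)

def analog_to_logic(voltage: float, prev_logic: str) -> str:
    """Map a continuous voltage to a discrete logic value, with
    hysteresis: between VIL and VIH the previous value is held."""
    if voltage >= VIH:
        return "1"
    if voltage <= VIL:
        return "0"
    return prev_logic  # in the ambiguous band, hold state (or return "X")

def logic_to_analog(logic: str) -> float:
    """Map a discrete logic value to a drive voltage for the analog solver."""
    return {"1": VDD, "0": 0.0}.get(logic, VDD / 2)  # "X" driven mid-rail here

# Sweep a rising ramp through the thresholds
state = "0"
for v in [0.0, 0.3, 0.5, 0.7, 0.9, 1.2]:
    state = analog_to_logic(v, state)
    print(f"{v:.1f} V -> {state}")
```

Real connect modules also handle drive strength, rise and fall times, and power-aware corruption, but the threshold translation above is the core of the digital-analog boundary.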

[Figure – a glimpse of how the digital-centric and analog-centric methodologies are being unified]

To sum it up: while SPICE simulators help in verifying individual analog IP blocks, as we move up toward full-chip verification of mixed-signal SoCs, analog behavioural models embedded in digital languages, together with the associated techniques and tools, provide up to 100x speedup in complete SoC verification.