
Results of TSMC’s ECO Fill Flow
by Beth Martin on 12-22-2014 at 7:00 am

By Jeff Wilson, Mentor Graphics and Anderson Chiu, TSMC

At this year’s TSMC Open Innovation Platform® (OIP) Ecosystem Forum, Mentor Graphics and TSMC co-presented some results of the ECO Fill flow developed for TSMC customers working at advanced nodes. Here is a summary of the presentation. (TSMC customers can access the presentation at TSMC-Online).

Metal fill (inactive metal shapes) was originally added to open design areas in layouts because a certain metal density was required to pass the foundry’s density design rule checks (DRC). These foundry density requirements helped reduce wafer thickness variations created during chemical-mechanical polishing (CMP) processes. To avoid creating parasitic capacitance issues, the goal was to add only as much fill as needed to satisfy the minimum and maximum density requirements set by the foundry.

At 45nm and below, metal fill affects multiple manufacturability issues such as stress, etch response, and rapid thermal annealing, and has an impact on design performance. Foundry fill targets have switched from ensuring a basic minimum density to achieving a maximum density. In addition, density checks for density gradient now require a smooth transition between fill densities in adjacent locations. At 20nm and below, fill requirements must also comply with multi-patterning (MP) restrictions to ensure mask balancing, and designers must begin adding multi-layer fill not just to back-end-of-line (BEOL) metal and via layers, but also to front-end-of-line (FEOL) layers. All of these changing manufacturing requirements impact the complexity of metal fill placement, as well as the number of fill elements in a design.
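To make the density and gradient checks concrete, here is a toy, window-based sketch (the window size, density bounds, and gradient limit are illustrative assumptions, not any foundry's actual rules):

  #include <cmath>
  #include <cstdio>

  // Toy illustration of window-based density and gradient checks.
  double Density(double metalAreaUm2, double windowAreaUm2) {
      return metalAreaUm2 / windowAreaUm2;
  }

  int main() {
      const double minDensity  = 0.20;   // assumed minimum metal density
      const double maxDensity  = 0.80;   // assumed maximum metal density
      const double maxGradient = 0.10;   // assumed max delta between adjacent windows

      double a = Density(1800.0, 10000.0);   // window A: 18% metal, needs fill
      double b = Density(3100.0, 10000.0);   // window B: 31% metal, within bounds

      std::printf("A: %.2f %s\n", a, (a >= minDensity && a <= maxDensity) ? "ok" : "violates min/max");
      std::printf("B: %.2f %s\n", b, (b >= minDensity && b <= maxDensity) ? "ok" : "violates min/max");
      std::printf("gradient A-B: %.2f %s\n", std::fabs(a - b),
                  (std::fabs(a - b) <= maxGradient) ? "ok" : "violates gradient");
      return 0;
  }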

These changes in fill require sophisticated new fill types and filling strategies. New techniques such as cell-based and multi-patterning-aware fill were integrated into fill engines to provide an automated fill process that can be called from place and route (P&R) tools to ensure an easy-to-use design flow that produces correct-by-construction results. However, the number of fill shapes in advanced node technologies can exceed a billion objects. So an engineering change order (ECO) that arrives late in the tapeout process and requires fill changes in the surrounding area can be a significant engineering challenge. The complexity of replacing fill and reconfirming timing may negatively affect runtime and timing closure, which can lead to a delayed tapeout delivery.

To handle these last-minute design changes, TSMC developed an ECO fill reference flow designed to work in concert with their overall design ECO flow. The TSMC ECO fill flow addresses the same range of fill situations that their full fill flow encounters, but concentrates only on the portion of the design affected by the ECO. This flow can account for the timing impact of fill without slowing down the back-end flow.

The TSMC ECO fill reference flow incorporates Calibre® YieldEnhancer’s SmartFill functionality and Calibre DESIGNrev™ to keep fill shapes in a separate file on disk, similar to the approach that the leading parasitic extraction tools use to minimize the size of the design database. This proven “merge when needed” approach provides the proper balance between accuracy and performance. The TSMC ECO fill reference flow (shown in the figure to the right) is currently supported for 16nm and 20nm processes. Users can download all the necessary files from TSMC.

By removing and replacing only the fill in the surrounding area, and re-verifying timing only in the affected area, designers can reduce runtime, manage file size, and minimize timing impacts (see the following figure). By restricting the ECO fill operation to only the same locations where actual mask-making changes occur, the TSMC ECO fill reference flow limits the size of the region that must be evaluated for errors, edited, and refilled. This area reduction is accomplished by generating exclude regions, and clipping the fillable database to include only the area around the design ECO.
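Conceptually, the exclude-and-clip step looks something like the following sketch (purely illustrative; the box representation and halo derivation are assumptions, not the Calibre implementation):

  #include <vector>

  // Axis-aligned box in database units.
  struct Box { long x1, y1, x2, y2; };

  // Grow each ECO change box by a halo (derived from the metal-stack-aware DRC
  // spacing) to get the windows in which fill is removed and regenerated.
  // Everything outside these windows keeps its original fill untouched.
  std::vector<Box> EcoFillWindows(const std::vector<Box>& ecoChanges, long halo) {
      std::vector<Box> windows;
      windows.reserve(ecoChanges.size());
      for (const Box& b : ecoChanges)
          windows.push_back({b.x1 - halo, b.y1 - halo, b.x2 + halo, b.y2 + halo});
      return windows;
  }

  // A fill cell instance only needs to be touched if it overlaps a window.
  bool Overlaps(const Box& a, const Box& b) {
      return a.x1 < b.x2 && b.x1 < a.x2 && a.y1 < b.y2 && b.y1 < a.y2;
  }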

To reduce the size of the fill database, TSMC uses a cell-based approach to fill the design. If the ECO fill flow does not properly handle fill cells, designers will see an explosion in the fill database. So, to minimize this, Calibre SmartFill only flattens the minimum number of cell instances required to remove existing fill that conflicts with the ECO design shapes. It also removes shapes based only on metal-stack-aware DRC spacings. It then refills only in the areas where ECO changes occurred, rather than refilling the entire chip.

There is a breakeven point in this reference flow: if the area to be refilled is too large, the efficiencies of scale are lost. In general, ECO fill strategies are most efficient when the change affects less than 1% of the design area; for bigger changes, the runtime of the ECO fill flow may exceed that of a regular fill run. Good candidates for ECO fill are small, localized changes, such as a change in gate functionality that requires rerouting in a limited area. When changes to an entire block suggest it would be more efficient to simply refill the design from scratch, a hierarchical fill approach may be more appropriate. However, designers must always consider whether minimizing timing impacts and mask costs offsets any runtime disadvantage.
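The rule of thumb can be captured in a couple of lines (the ~1% threshold is the guideline mentioned above; treat the rest as an illustrative sketch rather than a hard rule):

  // Rough decision aid: ECO fill tends to win when the changed area is under
  // roughly 1% of the design area; beyond that, a full refill may be faster.
  bool PreferEcoFill(double changedAreaUm2, double designAreaUm2) {
      const double kBreakEven = 0.01;   // ~1% of the design area
      return (changedAreaUm2 / designAreaUm2) < kBreakEven;
  }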

This table demonstrates a number of advantages of having a specialized ECO fill flow that uses the exact same fill deck that was used to fill the design originally.


The results from several real-world test cases show that fill runtime was reduced by 34% to 89% by using the ECO fill flow rather than a full refill. In four of the five cases, the number of masks that required changes was reduced, and in one case the ECO fill approach resulted in six fewer masks requiring re-manufacturing. The TSMC ECO fill reference flow, implemented with the SmartFill functionality in Calibre YieldEnhancer, provides a push-button solution that can handle last-minute design changes.


Ensuring Safety Distinctive Design & Verification
by Pawan Fangaria on 12-21-2014 at 12:00 pm

In today’s world, where every device functions intelligently, a device automatically becomes active on any kind of stimulus. The problem with such intelligence is that the device can also behave unfavorably on a bad stimulus. Because these devices are complex SoCs encompassing a rich set of functions (and, at advanced process nodes, are more susceptible to external effects such as radiation and static charge), it is essential to condition them to behave safely even in the event of an unexpected stimulus. While the functional safety of these devices is critical in automotive, aerospace, and healthcare applications, other applications such as industrial, home, and consumer are not exempt when you consider the potential financial loss. So, how do we make SoCs immune to unexpected, unplanned, or unintended (perhaps human-induced) stimuli and condition them to work safely in any environment, at the chip or system level?

At the design level, SoCs need to be made fault-tolerant by introducing alternative processing paths, at the expense of added redundancy; at the same time, special checkers need to be introduced to monitor the system and trigger error response and recovery when needed.

To verify the system and ensure the tool confidence level (TCL), verification must include safety verification along with functional verification at all levels of abstraction, from system to components. The functional tests must be replayed after injecting faults into the system, to confirm that the alternative paths still operate correctly on good data and that the checkers detect erroneous data and trigger recovery.

Cadence has beautifully extended its Incisive functional verification platform for functional safety verification. The platform has demonstrated compliance with automotive safety standards and has been used in production by several automotive IC suppliers.

The Incisive verification platform seamlessly augments the functional verification plan with a safety verification plan that covers the complete functional safety assessment, its requirements, and TCL. Metric-driven verification (read Effective Verification Coverage through UVM & MDV to learn more about metric-driven verification) is used to monitor sets of metadata through the complete verification flow, covering both functional and safety requirements. The functional safety assessment is done by simulating system behavior (including IP, SoC, and the complete system) with the Incisive Functional Safety Simulator, which supports both permanent and transient fault simulation under various error conditions. The fault models include manufacturing-time stuck-at-0 and stuck-at-1 faults, as well as single-event-upset and transient faults that can occur while the ICs are operating in the system.
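The underlying idea of fault injection can be shown with a toy example (a conceptual sketch only, not the Incisive simulator's mechanism; the triple-redundant voter and all names here are illustrative assumptions):

  #include <cstdio>

  // Triple-redundant data path with a 2-of-3 majority voter (the "alternative
  // paths" plus "checker" pattern, reduced to single bits).
  bool Vote(bool a, bool b, bool c) { return (a && b) || (a && c) || (b && c); }

  enum class Fault { None, StuckAt0, StuckAt1 };

  bool Inject(bool value, Fault f) {
      if (f == Fault::StuckAt0) return false;
      if (f == Fault::StuckAt1) return true;
      return value;            // no fault: the value passes through unchanged
  }

  int main() {
      // Replay the same functional stimulus with and without a fault on one
      // redundant copy, and check that the voted output still matches.
      for (bool stimulus : {false, true}) {
          bool golden = Vote(stimulus, stimulus, stimulus);
          bool faulty = Vote(Inject(stimulus, Fault::StuckAt1), stimulus, stimulus);
          std::printf("stimulus=%d golden=%d with-fault=%d %s\n",
                      stimulus, golden, faulty,
                      golden == faulty ? "tolerated" : "MISMATCH DETECTED");
      }
      return 0;
  }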

Cadence’s functional safety solution provides complete tracing of requirements, safety verification, and TCL that conforms to the automotive ISO 26262 standard. The automated flow from requirements to verification and TCL reduces ISO 26262 certification effort by ~50%.

The Incisive Functional Safety Simulator accelerates safety verification by seamlessly reusing the functional and mixed-signal verification environment, providing 10X runtime performance compared to the traditional Verifault-XL engine used for functional safety simulation. Existing SystemVerilog, UVM, or e functional verification environments can be reused as-is. Faults are injected during simulation of the DUT and can propagate through SystemC, analog transistor-level or behavioral models, and assertions.

The Incisive vManager automatically generates a safety verification regression from the fault dictionary created by the simulator. It can then track millions of detected, potentially detected, and undetected faults introduced into simulation to verify the safety of a design.

Both the Incisive Functional Safety Simulator and vManager are part of the Cadence System Development Suite. They address system dependability and reliability, which, together with PPA, have become critical criteria at nanometer process nodes.

Cadence continues to expand its functional safety solution portfolio by including more hardware, software, and IP components across different application areas. A more detailed view of the automotive functional safety solution is available in a whitepaper on the Cadence website, written by Philippe Roche of STMicroelectronics and Adam Sherer of Cadence.

More Articles by Pawan Fangaria…


New book untangles the Internet of Things (IoT)!
by Daniel Nenni on 12-21-2014 at 9:00 am

In 10 years, there will be 50 billion devices connected to the web, said Ericsson CEO Hans Vestberg. Next, Cisco chief John Chambers called IoT a US$ 19 trillion business opportunity in his keynote at the 2014 CES.

What is this Internet of Things after all? And how is it evolving seamlessly into multiple dimensions? How does it relate to connected wearable devices like the smartwatch? What’s its relationship with the mobile Internet and its prime vehicle, the smartphone? Where do weather balloons, drones, fiber and satellites fit into this twenty-first-century network juggernaut?

Here comes a new book that provides answers to all these questions and makes sense of it all. The Next Web of 50 Billion Devices looks into the future, the Internet of Things, by first analyzing the past: the mobile Internet. In between these two technology parables, the book delves into the present (the native apps vs. web tug of war) and provides a detailed treatment of HTML5 and mobile browser technologies and their business prospects.

The Next Web of 50 Billion Devices also chronicles prominent efforts to develop infrastructure for this twenty-first-century network, from GPRS to LTE-based 4G, and presents mobile commerce as a case study to demonstrate how this modernistic network establishment is evolving. It also takes a peek into the Internet of Things bandwagon and shows how it’s converging and colliding with another giant shift in mobile computing: connected wearables. Then it brings forth new dimensions in the mobile Internet realm: the Internet of photos, location, augmented reality and so on.

While providing the Internet context of the next-generation technologies, the book takes a close look at what tech giants like Amazon, Apple, Facebook and Google are doing to claim their stake in the next Internet gold rush. At the same time, The Next Web of 50 Billion Devices also profiles mobile web pioneers such as Mozilla, Nest and Opera.

In the final analysis, the book shows readers how the two spectacularly unpredictable technologies, computing and telecom, came together to accomplish the ultimate computing milestone: an Internet that is simple, reliable and pervasive. There is a dearth of good books on the smartphone and the mobile Internet and how they relate to emerging new worlds such as IoT and connected wearables. Only a couple of books are available on this subject, and they mostly deal with marketing-centric issues.

The book is written in semi-technical business language to make it easy for managers and tech professionals from diverse backgrounds to absorb content on a crucial industry. The easy-to-read account charts areas of opportunity and the challenges facing the IoT and wearable markets. That makes it a valuable read for IT managers tasked with formulating mobile and IoT strategy for their businesses. The Next Web of 50 Billion Devices also differs from other business books in how it presents technology: advancements are tied to their history and evolution.

The Next Web of 50 Billion Devices is available in both paperback and e-book formats on Amazon.


Is Your FPGA Design Secure? Use Xilinx to Make Sure
by Luke Miller on 12-20-2014 at 7:00 pm

I hope your Christmas break is starting off well! You know this, but evil takes no break for Christmas. We are seeing more and more hacking of systems, and it seems to have become the norm. Do you even get nervous anymore when you hear that your credit card company lost their data? Or, I mean, your data?

It’s as if we have given up on the ideas of privacy and security, or decided they are something that cannot be obtained. North Korea is not as stupid as we thought, eh? Will we actually see market crashes? Power grid failures? Will the ‘news’ be hacked? To say the least, these are interesting times, and they probably will not get better anytime soon.

Depending on the application of your Xilinx FPGA, security and Anti-Tamper (AT) may be more important than ever! Think of applications like High Frequency Trading (HFT), RADAR, Medical, Power Control, and Data Centers, which Xilinx will start gobbling up due to their innovative OpenCL solution called SDAccel.

To start, security and the like are not seasonings that get sprinkled on at the end of your design. Security is a methodology that must be in lock step with the Xilinx FPGA design and the systems in and around the FPGA. To mess up here is to end up with a very insecure, but often very expensive, design. Now is the time to get familiar with what Xilinx has to offer as the leader in FPGA security. To begin, may I recommend reading the Xilinx web page on ‘Design Security Solutions’. It is just the beginning of the world of secure Xilinx designs. I will call your attention to three key documents:

· XAPP1084 – Developing Tamper Resistant Designs with Xilinx Virtex-6 and 7 Series FPGAs
· WP365 – Solving Today’s Design Security Concerns Using Spartan-3 Generation FPGAs
· WP412 – The Xilinx Isolation Design Flow for Fault-Tolerant Systems

There is a lot of meat here; read carefully and slowly. The three areas to keep in mind are prevention, detection and response. For example, we can encrypt the bitstream to prevent a first-order attack from being successful. Detection can be very elaborate or very simple, such as monitoring voltages and temperature. When an attack is detected, what do you want to happen? Erase the bitstream? Load a new image that starts recording the attack? Some of you may be asking: is all this really necessary? Until my eyes were opened, I would have asked the same question years ago. Given the Internet of Things, the ‘Cloud’ and everything flowing over wireless data pipes, I would say yes, 100% you need a secure FPGA design. XAPP1084 sums it up best:

The decision as to how much AT to include primarily depends on three factors:
• Value: The perceived value of the intellectual property and the damage it might cause either financially and/or to national security if it were to become compromised. Certain AT features can be expensive to implement and that cost must be weighed against the value of the technology being protected.

• Adversary: Access to the system and the sophistication level/resources available to carry out the attack. For example, will access to the system be prevented by “guns, gates, and guards” or will it be easily obtained in the open market? Is the adversary a garage-based hacker or a nation-state? The adversary’s capabilities could be at these extremes or anywhere in-between.

• Design Stage: At what point in the system development cycle is the decision made to enable AT for the FPGA design? Xilinx highly recommends that the decision to utilize FPGA AT features is made very early on (i.e., after CT is defined in a system) to help address both schedule and cost concerns. It is always more costly and time consuming to insert AT features later on.
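Returning to the prevention, detection and response split above, here is a heavily simplified, conceptual sketch of a detect-and-respond loop (the thresholds, sensor reads and response hooks are illustrative placeholders, not Xilinx APIs):

  #include <cstdio>

  // Conceptual anti-tamper monitor: detection by watching voltage and
  // temperature, response by zeroizing keys or loading a logging image.
  // The reads and responses below are placeholder stubs for illustration.
  double ReadCoreVoltage()      { return 1.00; }   // stub for an on-chip monitor read
  double ReadDieTemperature()   { return 45.0; }   // stub, degrees C
  void   ZeroizeKeysAndConfig() { std::puts("response: keys/config cleared"); }
  void   LoadForensicImage()    { std::puts("response: logging image loaded"); }

  void TamperMonitorStep() {
      const double vMin = 0.95, vMax = 1.05;     // illustrative voltage window
      const double tMax = 100.0;                 // illustrative temperature limit
      double v = ReadCoreVoltage();
      double t = ReadDieTemperature();
      if (v < vMin || v > vMax || t > tMax) {    // detection
          ZeroizeKeysAndConfig();                // or LoadForensicImage(), per policy
      }
  }

  int main() { TamperMonitorStep(); return 0; }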

You can Trust and Count on Xilinx not only for the World’s Finest FPGAs and Tools, but also for your next Secure FPGA design.


Verilog-AMS connects T-SPICE and Riviera-PRO
by Don Dingee on 12-20-2014 at 7:00 am

With advances in available IP, mixed signal design has become much easier. Mixed signal verification on the other hand is becoming more complicated. More complexity means more simulation, and in the analog domain, SPICE-based techniques grinding away on transistor models take a lot of precious time. Event-driven methods like Verilog in the digital domain are very fast, but do little with the analog IP. Continue reading “Verilog-AMS connects T-SPICE and Riviera-PRO”


An Approach to Top-Down SoC Verification
by Daniel Payne on 12-19-2014 at 1:00 pm

We’ve blogged dozens of times about UVM (Universal Verification Methodology) at SemiWiki, and all of the major EDA vendors support UVM, so you may be lulled into thinking that UVM is totally adequate for top-down SoC verification. Yesterday I had a phone discussion with Frank Schirrmeister of Cadence about a new approach to top-down SoC verification that has started to change my mind, because Cadence has engineered a new EDA product called Perspec, announced just last week.

An SoC is really part of a larger system which contains IP blocks, cores, operating system, drivers and apps.


Typical SoC

Related – Don’t Mess with SerDes!

If you need to bring up an OS early for validation, then how would you do it? Let’s say that your SoC has 150 IP blocks and 8 processors: how do you deal with coherency, power and the OS, measure all of your verification tests, and re-use IP verification? The new approach from Cadence has you creating an abstract model in UML (Unified Modeling Language) for each use case, then running constraint solvers to automatically create software tests from a top-down perspective.
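To give a feel for what "solve the use case into a concrete test" means, here is a toy constrained-random sketch in ordinary C++ (this is not Perspec's internal mechanism; the cores, constraints, and scenario are invented for illustration):

  #include <cstdio>
  #include <random>

  // Toy "use case to test" generator: pick which core runs capture and which
  // runs upload, subject to the constraints that they differ and that the
  // upload core has network access.
  int main() {
      const int  kNumCores = 4;
      const bool hasNetwork[kNumCores] = {false, false, true, true};
      std::mt19937 rng(42);
      std::uniform_int_distribution<int> pick(0, kNumCores - 1);

      int captureCore, uploadCore;
      do {                                   // rejection sampling over the constraints
          captureCore = pick(rng);
          uploadCore  = pick(rng);
      } while (captureCore == uploadCore || !hasNetwork[uploadCore]);

      std::printf("generated test: capture on core %d, upload on core %d\n",
                  captureCore, uploadCore);
      return 0;
  }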


Perspec System Verifier

The above block diagram shows how four tests have been generated to run on the cores of an SoC. To create a new use case there’s a graphical approach in Perspec with UML:


UML based use-case: View a video while uploading it.

Related – Semiconductor IP Information Flow

Once the use-case is defined, the solver goes to work and creates constrained-random data and control flow for you, saving much manual effort.


Results of solver

Another use case could be to decode video from the DDR and show it on the display, so using SLN (System Level Notation) and UML this can be entered and then the solver will automatically and exhaustively complete the goals into full, legal scenarios.


DDR use case

The five steps in using Perspec are shown below, and you can use any vendor’s tools for the virtual platform, simulation platform, HW acceleration and emulation, and FPGA prototyping.


Use case verification flow

With this top-down verification approach you can even model complex behaviors like running cache transactions during power shutdown of one core, while powering up another core to validate coherency.

Related – Using Cadence PVS for Signoff at TowerJazz

The development of Perspec was all done in-house at Cadence over several years by the same engineering team that created Verisity. If you visit DVCON in March 2015, then check out a paper by STMicroelectronics: Automated Test Generation to Verify IP Modified for System Level Power Management. Cadence will also have a tutorial: Verification Solutions for ARM v7/v8 Based Systems on Chips.

Summary

Using this new top-down approach to SoC verification can benefit your development team by:

  • Abstraction – create use cases with UML style diagrams with debug tied in
  • Automation – solvers automatically create complex tests
  • Platforms – run more tests on pre-silicon and post-silicon platforms
  • Measurement – know the coverage of functionality, flows and dependencies
  • Leverage – reuse use cases across different users

A Brief History of Silicon Frontline
by Paul McLellan on 12-19-2014 at 7:00 am

Silicon Frontline was founded in 2007 by Yuri Feinberg. Since then the company has built up a team with expertise in computational geometry, circuit layout, circuit simulation and analysis, and post-layout verification. After a small initial funding, Silicon Frontline has continued to grow, acquiring new customers even over the last few years of macro-economic uncertainty. The first products, F3D and R3D, were released in 2009. F3D is used for reference extraction (against which other extractors are compared during qualification of new processes) and high-accuracy RC extraction (needed in, for instance, image sensors). R3D is a unique product for power transistor verification and optimization. R3D was extended to optimization of the gate network in 2010, and further extended to incorporate a high-performance thermal analysis engine in 2013.

Silicon Frontline has leveraged their core technology into new product lines (ESRA, P2P and P2P-XL) which were launched in 2012 to address the needs of large AMS or SoC designers, considering interconnect behavior under IR drop, current density/electromigration and ESD. SoCs on leading-edge processes have increasing challenges in these areas. A strong driver with minimum width metal at the output is already out of spec for electromigration. And ESD is no longer a problem limited to the I/O devices.

The roots of the company are in Nassda, Silvaco, PDF and other companies. The founder and CEO, Yuri Feinberg, comes via Nassda, Epic and Silvaco. The COO, Dermott Lynch, was at Infinisim, Nassda, and Sente. The CTO, Maxim Ershov, comes from Foveon, T-RAM, and PDF Solutions, and was a professor at Georgia Tech and at the University of Aizu in Japan.


The basic premise of the company is that:

  • Design is moving to AMS – modulators everywhere, usually switch-cap, with lots of digital control for calibration and compensation. Both absolute and relative accuracy are required
  • Post-layout verification isn’t keeping up with the demands of new processes. There’s plenty of work on the process side, and on the layout/design creation side, but marrying the two requires too many shortcuts and approximations that inject errors and inaccuracy into the verification procedures
  • Tapeout times are lengthening. Respins continue to be the norm. There’s lots of focus on next-gen devices (FinFETs, and new passive structures too, like MIM/MOMs) and not enough on verifying the interconnect
  • Focus on ease of use for neophyte or infrequent users, with plenty of power available, and ease of results visualization

Despite their low profile (not quite stealth mode), they do have customers, over 40 of them, including 9 of the top 20 semiconductor vendors. The products are used in a wide range of applications: power devices, smart power, image sensors, flash memories, network devices, controllers, HSIO, PLL/DLL, data converters, and analog/mixed-signal circuits. They are supported on all contemporary processes from all major foundries.

Silicon Frontline’s website is here.

More articles by Paul McLellan…


Why an Arduino Gift Might Make Your Holiday Shopping Easier
by Tom Simon on 12-18-2014 at 7:00 pm

If you happen to still be looking for a Christmas gift for a tech-savvy youth, the answer to your search may be an Arduino. This funny-sounding word is the name of a family of easy-to-use, low-cost circuit boards and related items used to build projects around a microcontroller. With an Arduino it is possible to build projects with sensors, LEDs, and many kinds of servos or motors. Add-on ‘shields’ share common pin-outs that let you easily connect these devices.

Arduinos have become extremely popular with hobbyists and educators ever since their open-source hardware and development tools became available. Not only can you buy an Arduino from the Arduino Project that started it all, but there are also dozens of companies that offer boards that are fully compatible or are modified in some way to enhance their capabilities.

The free Arduino development environment works with all of them, and there are several go-to boards that are excellent starting points. For a beginner, the most standard board is the Arduino Uno (look for Rev 3). A quick search on Amazon or eBay will turn up a number of choices for this board. Some of my favorites are from Osepp, Seeed Studio, or, of course, the one built by the Arduino Project itself. Near me, in California, retailers such as Fry’s Electronics stock them, so there is still time to pick one up.

The board can be programmed by downloading the free development kit from www.arduino.cc and connecting the Arduino to your computer with a USB cable. The board runs bare metal code that is stored in its on-board flash memory. Whenever power is applied the code will run. Most people’s first program will simply blink the LED that is connected to the D13 digital output/PWM pin.
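For reference, a minimal version of that first sketch (the standard Arduino blink pattern, written in the Arduino dialect of C++) looks like this:

  // Blink: toggle the LED on digital pin 13 once per second.
  void setup() {
    pinMode(13, OUTPUT);          // pin 13 drives the on-board LED on the Uno
  }

  void loop() {
    digitalWrite(13, HIGH);       // LED on
    delay(1000);                  // wait one second
    digitalWrite(13, LOW);        // LED off
    delay(1000);
  }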

The development environment uses GNU C++ under the hood, but its interface for development is simple and works ‘out of the box.’ There is a library of functions for reading and writing digital and analog pins, as well as for using specific features of the Atmel AVR-series microcontroller that is the heart of the system. Online there are abundant resources, including source code for drivers and libraries for many sensors and devices. A large user base also makes finding answers to technical questions easy.

Serial data can be sent over the USB connection back to the host PC when it is plugged in. This can be helpful for debugging or for creating projects that connect to a PC. But the Arduino is completely standalone and can run by itself. It can be powered by an AC adapter, battery, or USB cable. Many people build projects that have an Arduino board ‘hidden’ inside.

It’s worth pointing out that this is not just for kids, either. Adult tinkerers will find the simplicity of working with an Arduino refreshing, and a great way to focus on project ideas instead of learning a complex development tool chain.

Once you become familiar with the Arduino family, you will learn about more powerful versions that use ARM chips or that have built-in connectivity through Bluetooth, Zigbee, WiFi, Ethernet, etc. Arduinos can be used for serious applications too. Today they are found inside many 3D printers, CNC machines, and IoT prototypes.

You may find that this is a present for your kids that you enjoy just as much as they do!


Semiconductor Capacity Utilization Rising
by Bill Jewell on 12-18-2014 at 5:00 pm

Semiconductor capacity utilization (the ratio of production to capacity) appears to be on the rise, based on available data. Reliable global industry capacity data has not been available since Semiconductor Industry Capacity Statistics (SICAS) disbanded in 2011.

TSMC and UMC (the two largest pure-play foundries according to IC Insights) provide wafer shipment and wafer capacity data each quarter. Utilization is calculated by dividing the total wafer shipments by the total capacity of the two companies. This may not correspond exactly to capacity utilization due to definition and timing differences; utilization calculated using this data exceeds 100% in some quarters. However, it provides a reasonable indication of foundry utilization trends.

The U.S. Federal Reserve publishes data on capacity utilization for industries located in the U.S., regardless of the country of ownership. The names and number of companies participating in the Federal Reserve data for “Semiconductors and Related Equipment” are not available. We can assume the Federal Reserve data is a reasonable representation of U.S. semiconductor utilization.

TSMC + UMC utilization hit a low of 35% in 1Q 2009 during the economic downturn but quickly rebounded to over 100% in 2Q 2010. Utilization slipped below 80% in 4Q 2011, but has recovered to over 100% for 2Q and 3Q 2014. U.S. utilization (based on the Federal Reserve data) hit a low of 64% in 1Q 2009 and recovered to 84% in 1Q 2011. Utilization dropped below 70% in 1Q 2012 and has been close to 70% since. So what is the overall industry trend? It is a reasonable assumption that the answer lies between the TSMC + UMC and Federal Reserve data. Thus global semiconductor capacity utilization peaked at over 90% in 1Q 2011 (the last complete SICAS data), slid to 70% to 80% in 2012, and recovered to over 80% in 2014. This level of utilization implies sufficient capacity to meet near-term increases in demand but high enough utilization to ensure profitable operations for semiconductor manufacturers.

The capacity utilization trend is reflected in bookings (orders) and billings (shipments) of semiconductor manufacturing equipment. SEMI and SEAJ data show a downtrend from early 2011 to late 2012. Bookings and billings began to increase in 2013 as capacity utilization improved. Bookings and billings in 3Q 2014 are down slightly from their peaks in 4Q 2013 and 1Q 2014, respectively, but the book-to-bill ratio has been above 1.0 for the last two quarters, indicating near-term growth for semiconductor equipment.


ASIC Days Are Here Again
by Paul McLellan on 12-18-2014 at 7:00 am

Technology often goes in cycles. Thirty years ago the dominant mode of computing was a shared computing resource with comparatively dumb terminals; think of a VAX accessed by terminals. Then workstations and the PC came along, and the dominant mode became a computer on everyone’s desk. Then the smartphone came along, and now the dominant mode is back to a centralized cloud with comparatively under-resourced terminals (although much smarter than the old VT100 terminals of the 1980s).


Drew Wingard, the CTO of Sonics, gave a presentation at IP-SoC on the ASIC/ASSP cycle. He started with a digression on Makimoto’s wave, which predicts these types of transitions on a decadal timescale. For the last ten years or so the market (foundries, EDA flows, IP requirements) has been driven almost entirely by application processors for mobile (except inside Intel), delivered as ASSPs such as Qualcomm’s Snapdragon.

Let’s look at 3 markets.

First, application processors. How can system companies differentiate their products? If they all use the same SoCs, they cannot. So Samsung and Apple changed the rules and designed their own chips, going to something closer to an ASIC model (though not the old ASIC model where the interface was a netlist). This took away so much of the market that many players were forced out: ST, Freescale, even Texas Instruments (which had the #1 market share in 2010, exited the market, and shut down its Villeneuve-Loubet facility in the south of France in 2012). The market is now dominated by Apple and Samsung with ASICs, and Qualcomm and MediaTek with ASSPs.

Next market, cellular basestations. This was traditionally an ASSP business, but then the market started to fragment with the need for higher-performance LTE basestations at the same time as the need for smaller cells for infill and overlay. One part no longer worked for all, so this market has also largely gone back to an ASIC model, although with custom IP from companies like Ceva, Tensilica/Cadence and ARC/Synopsys to build the processors flexibly.

Third market, wearables (and probably other segments of the IoT market). Form factor and battery life require high integration, but the end requirements are unclear. This creates the requirement for “premature integration.” Right now the only thing that can be done is to sacrifice cost to learn from the market, since unattractive prototypes that are too large or too power-hungry will automatically be unsuccessful and no learning will take place. So for now this is an ASIC or ASIC-like market too.


Wearables are a special challenge, having these contradictory dimensions, and Sonics NoCs can help. At first glance a NoC seems like it might be overkill for such a simple design, but the design is probably not so simple:

Using a NoC enables an AgileIC methodology: rapid prototyping, followed by rapid creation of production designs once the learning has taken place.

More information on Sonics NoC technology is here. For an introduction watch the webinar here.