
Experts Talk at Mentor Booth

by Pawan Fangaria on 05-11-2015 at 7:00 pm

It’s less than four weeks until DAC 2015, and the program is now final. So I started investigating the new technologies, trends, methodologies, and tools that will be unveiled and discussed at this DAC. Looking back at the semiconductor industry over the last year, I see 14nm technologies in the realization stage and 10nm beckoning. Also, IoT-related developments in various segments are poised to grow at most technology nodes, including 28nm. At the same time, design sizes have grown to unprecedented levels. This has created major challenges in the design, verification, and testing of SoCs. Multiple verification approaches have to be employed to verify an SoC. Also, reliability has become a greater concern at lower technology nodes, requiring ever more accuracy in estimating the various parameters related to it.

Overall, design completion time tends to increase, yet there is no room for it to do so because time-to-market windows have actually shrunk. The only alternative is to find innovative and faster methods for design and verification closure. From a business perspective, I see wafer costs increasing as we go down the nodes, while demand remains strong for wafers at higher technology nodes. In such a scenario, capacity at all nodes, along with yield improvement at lower nodes, needs more resources and effort.

DAC has been a perfect forum for more than half a century, a place where industry experts from every corner of the semiconductor ecosystem come together to discuss, exchange ideas, demonstrate products and innovations, and present in several sessions. This creates an amazing learning environment. EDA has been a core enabler for the semiconductor industry, and we see great involvement and participation from the EDA community at DAC.

When I looked at the program at Mentor’s booth, I found very good topics on the agenda. The topics in the various sessions touch upon the burgeoning challenges in the semiconductor design industry and their solutions. Many industry leaders and experts will be talking at Mentor’s booth #1432.

There are interesting panel discussions featuring well-known personalities in the semiconductor industry, who will discuss, from the foundry, design, and tools perspectives, how to solve the major issues appearing in design, verification, and fabrication.

June 8, 4:00 PM – Meeting Exploding Demand Throughout the Ecosystem
Panellists: Joe Sawicki (Mentor), Kelvin Low (Samsung), Prasad Subramaniam (eSilicon)

It will be interesting to learn how fabless companies, foundries, and tool providers can jointly address the move to 14nm and below while helping to maintain capacity at all technology nodes.

June 9, 4:00 PM – Emulation – Why So Much Talk?
Panellists: Alex Starr (AMD), James Hogan (Vista Ventures), Lauro Rizzatti (Verification Consultant)

This will be a valuable session for verification engineers and managers. New ideas will flow about emulation driven verification that can fit into several modes including simulation acceleration, embedded software acceleration, transaction-based acceleration, and in-circuit emulation.

June 10, 4:00 PM – The IC Design Waterfall: How Advanced Design Techniques are Now a Requirement at Established Nodes
Panellists: Michael Buehler-Garcia (Mentor), TianShen Tang (SMIC), Thomas Riener (ams), Richard Wawrzyniak (Semico Research Corp.)

In this panel, one can learn about the advanced design and verification methodologies that must be deployed to address design complexity, irrespective of technology node.

Then there are technical sessions that present and demonstrate solutions to many of today’s critical issues, including reliability analysis, designing at advanced nodes, IoT design, verification, and customer case studies. To mention a few that I liked:

  • Ahmed Eisawy, “Reliability Analysis of Analog-Centric ICs for Automotive Applications”
  • Matt Hogan, “Competing in Reliability Focused Growth Markets with Calibre PERC”
  • Jeff Wilson, Joe Kwan, “DFM and Fill Update for Advanced Nodes”
  • Karen Chow, “Meeting New Extraction Challenges at Advanced Nodes and Advanced Design”
  • Tom Daspit, “Pyxis IC Station for IoT Applications”
  • Gordon Allen, “Maximize your Bug-finding Productivity with the Visualizer Debug Environment”
  • Srinivas Velivala, Tom Williams, “DRC-Clean Cell Design in 30 Minutes – Qualcomm’s Experience with Calibre RealTime”
  • John Ferguson, Ofer Tamir, “How to Banish Waiver Worry from your Design Flow”

There are many other sessions from Mentor’s Verification Academy, including specific presentations on the UVM environment and debugging methods. There are also sessions on power analysis, test coverage, agile evolution in SoC verification, and so on.

A complete list of sessions can be found at Mentor’s website here. Searching for specific sessions has been made easy with categories such as “Technical Focus Areas”, “Experts at the Booth”, “Partner Activities”, and “Experts in the Conference”. Select the sessions of your choice from the different categories and register for them. Do not forget to register for “Networking & Luncheons”; Mentor is providing refreshments!

Pawan Kumar Fangaria
Founder & President at www.fangarias.com


Breaking the SoC lab walls

by Don Dingee on 05-11-2015 at 7:00 am

There used to be this thing called the “computer lab”, with glowing rows of terminals connected to a mainframe or minicomputer. Computers required a lot of care and feeding, with massive cooling and power requirements. Microprocessors and personal computers appeared in the 1970s, with much smaller and less expensive machines placing isolated dots of compute capability on the desks of the fortunate few. When one says “computer lab” now, they are usually referring to the 100-level computing course at the local community college.

During the rise of the engineering workstation in the 1980s and 1990s, the mantra was Sun’s “the network is the computer” slogan. Ethernet provided an easy way to connect computers quickly, and distributed file systems such as NFS and interprocess communication via sockets or RPCs made distributed application development possible. Compute power scaled easily by adding workstations on the corporate network, and they could be placed anywhere cabling could be run.

Embedded systems took full advantage, becoming Ethernet-enabled for distributed applications. In the first dot-com and open source boom, vendors with big, expensive hardware based on VME, CompactPCI, AdvancedTCA, and other system standards were creating “virtual labs”. These were controlled environments of a hardware and software configuration in a padded cell on the corporate network at headquarters, designed for customers anywhere to log in remotely so they could benchmark code and evaluate the platform.

SoCs emerged with integrated connectivity, powering a wave of mobile devices and shrinking many embedded platforms to tiny boards. Today, Wi-Fi brings Ethernet everywhere, tablets and laptops bring computers anywhere, and software developers bring code from all over the globe. SoCs now feature multicore processing, dedicated acceleration units, and bottleneck-free interconnect to a wider variety of peripherals.

With all these advances, why is FPGA-based prototyping still stuck on a relatively big, expensive machine in a “SoC lab” with limited access?

Granted, one cutting-edge large FPGA with high-speed SERDES interfaces is not cheap. Placing several of those large FPGAs in a prototyping system, with the proper pin multiplexing and the right partitioning software to chop up SoC designs effectively, is an art form practiced by few. FPGA-based prototyping systems also bring enhanced debugging, many with deep trace features.

Until now, people using FPGA-based prototyping systems have gotten by with the SoC lab approach. Teams of SoC designers were relatively small, located close enough to walk down to the lab to engage with their masterpiece. IP blocks might be developed and debugged first, then passed into the integrated system for a concentrated, full-up effort.

All that is changing as SoC complexity continues to increase, third party IP from a variety of sources becomes more prevalent, and software co-verification with advanced operating systems requires expertise from a community scattered across the globe. It is no longer enough to scale the FPGA-based prototyping hardware – the entire approach has to change.

S2C has outlined the look of a state-of-the-art FPGA-based prototyping platform, and a vision for adding cloud capability, in a new white paper. There is the usual discussion of performance, extensibility, platform-aware synthesis that can partition and pin-multiplex, and debug. Data illustrating a task-level breakdown for today’s SoC design – upwards of $300M by some estimates for large designs on advanced 16/14nm nodes – shows software and system-level co-verification to be the largest efforts.

A prototyping system with cloud-based access to management features can transform those efforts. Mixed-level prototyping is now common, with some blocks complete, some under development in RTL, and some in progress using behavioral models. Physically distributed teams are also common, some working on SoC IP, some on operating systems or drivers.

Cloud-enabled FPGA-based prototyping platforms raise an exciting new set of possibilities for breaking the SoC lab walls. Perhaps a corporate SoC design team coordinates debug efforts with an off-site third-party IP supplier, or brings together software developers from multiple locations to view debug results as they unfold during a verification run. With complexity going up and co-location going down, the approach S2C is suggesting has a great deal of merit.

Download the entire white paper, along with other S2C white papers, under this title:

FPGA Prototyping of System-on-Chip Designs


End of the Road for Micrel

by Majeed Ahmad on 05-10-2015 at 7:00 pm

Micrel Inc., one of the oldest chipmakers in Silicon Valley, has been acquired by Chandler, Arizona–based Microchip Technology Inc. for $839 million. A pure-play analog chip house will go to one of the leading microcontroller suppliers after regulatory approval amid the consolidation wave that has engulfed the semiconductor industry in the system-on-chip (SoC) era.

Apparently, the primary motive of the acquisition is to acquire analog building blocks for MCUs, which are evolving into more powerful and diverse SoC devices to serve emerging markets like the connected car, smart home, and other Internet of Things (IoT) segments. The analog and mixed-signal chip vendor from San Jose, California will bring power management assets to Microchip’s microcontroller portfolio. Moreover, its analog products for the automotive, industrial, and communications segments are expected to complement Microchip’s IoT initiatives.


A journey of 37 years coming to an end

Micrel, one of the prominent analog and mixed-signal chip vendors, was founded in 1978 as an independent test facility for semiconductor products with a mere capital of $300,000. Three years later, in 1981, it acquired an independent semiconductor processing facility and started focusing on custom and specialty fabrication for other IC manufacturers. Eventually, Micrel began to develop its own line of semi-custom and standard products for power electronics. Micrel went public in December 1994.

The name Micrel came from “microcircuits that are reliable.” Over the years, Micrel earned recognition for its power management offerings for the networking and communications infrastructure markets, including cloud and enterprise servers, network switches and routers, storage area networks, and wireless base stations. Its product portfolio comprises timing, clock management, and high-speed Physical Media Device (PMD) products.

Micrel’s family of Ethernet products comprises physical layer transceivers (PHYs), media access controllers (MACs), switches, and SoC devices that support various Ethernet protocols with transmission speeds from 10 Mbps to the gigabit range. They are aimed at the digital home, enterprise, industrial, and automotive markets.


Ray Zinn co-founded Micrel and led the company for 37 years

Micrel’s co-founder Ray Zinn has led the company as President, CEO, and Chairman since its inception in 1978. He won a proxy fight with the investment firm Obrem Capital Management (OCM) for control of the company back in 2008. “Our goal is to become a billion-dollar semiconductor company on a revenue basis,” Zinn told The Wall Street Journal back in 2010. Micrel was a relatively small analog chip vendor compared to rivals like TI, Linear, and Maxim, which probably made it a more attractive acquisition target.

Also read: Three Colorful Bytes from the NXP History

Majeed Ahmad is the author of the books Age of Mobile Data: The Wireless Journey To All Data 4G Networks and Essential 4G Guide: Learn 4G Wireless In One Day.


SoCs in New Context Look beyond PPA – Part 2

by Pawan Fangaria on 05-10-2015 at 10:00 am

In the first part of this article, I talked about some of the key business aspects along with some technical aspects like system performance, functionality, and IP integration that drive the architecture of an SoC for its best optimization and realization in an economic sense. In this part, let’s dive into some more aspects that are needed to make your SoC robust enough to survive in today’s global environment.

Hardware, Software, and Embedded Software: Today, before you architect an SoC, you have to think about how it will be driven by the software systems sitting on it, and make provisions for them. It has to account for interfaces with other multimedia, graphics, networking, and connectivity devices and software. Accordingly, it has to be targeted at particular applications such as IoT, wearables, medical, automotive, and so on.


[GENIVI Software Architecture, courtesy Mentor Graphics, GENIVI]

Above is an example of a Linux based open source software architecture promoted by GENIVI in the automotive space. This architecture is supported on the reference boards available from Renesas, Texas Instruments, and Freescale.

Embedded software, in turn, is an essential part of any SoC software stack. It is heavily dependent on the underlying hardware and has to be designed according to the application. As we move up from the hardware to the application level, data passes through the Hardware Abstraction Layer (HAL) and its APIs, the OS and its APIs, the communication middleware and its APIs, and finally the application software itself. The embedded software needs to be optimized by exploiting the characteristics of the underlying hardware, and re-used in similar systems as far as possible.
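To make the layering concrete, here is a minimal, hypothetical sketch (the class and function names are invented for illustration, not taken from any real product): application code depends only on a HAL interface, so the same logic can be re-used on different hardware by swapping the driver behind the HAL.

```python
# Hypothetical sketch of a layered embedded-software stack: the
# application depends only on the HAL interface, so the same code can
# be re-used across similar systems with different underlying hardware.
from abc import ABC, abstractmethod

class TemperatureHAL(ABC):
    """Hardware Abstraction Layer API for a temperature sensor."""
    @abstractmethod
    def read_raw(self) -> int:
        """Return a raw ADC reading from the sensor."""

class MockSensor(TemperatureHAL):
    """Stand-in for a real driver that would touch device registers."""
    def read_raw(self) -> int:
        return 512            # pretend ADC value

def read_celsius(hal: TemperatureHAL) -> float:
    """Application-level code: only ever calls the HAL API."""
    # Assumed scaling: 10-bit ADC mapped onto a 0-100 degree C range
    return hal.read_raw() * 100.0 / 1023.0

print(f"{read_celsius(MockSensor()):.1f} C")
```

On real silicon, only `MockSensor` would be replaced by a driver that reads device registers; the application layer above the HAL stays unchanged.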

The idea is to architect a complete system including hardware, software, and embedded software; i.e. a complete ecosystem. One who can do that and supply a complete SoC with authority can be the winner. Why is Apple the most valuable company today? One may criticize it for not being open, but the truth is that Apple is good at both hardware and software. Google and Microsoft are good at software, but not hardware. Google does great innovation in hardware, but has not been able to carry it through to practice; it has not been able to capitalize on Google Glass, for example. Samsung is good at hardware, but not at software; they have realized that they need to build their own mobile OS, but have a long way to go.

An SoC is not a hardware chip anymore; both software and hardware are its core components. Hence to develop an effective SoC, one has to have core competence in both software and hardware.

Connectivity: In today’s highly connected world, operating at frequencies ranging from baseband to RF in different segments, the right connectivity solution for the SoC has to be investigated and planned at the beginning, during the architecture stage. Moreover, there are various protocols such as Wi-Fi, Bluetooth, ZigBee, and so on in the wireless domain, and several M2M protocols in the IoT space. Provisions for the right bands and the right protocols to support have to be planned during the SoC architecture stage. A versatile SoC with a programmable microcontroller, appropriate memory and connector support, and support for the complete frequency range applicable to a particular segment can become far more successful than an SoC that connects with only a portion of the intended devices in that segment.

Security: In an IoT world, even a small chip in a device located in a remote corner of the world can be globally connected through M2M connections and connections to the internet. In this multi-point setup, every point-to-point connection can be vulnerable to cyber attack. The security aspect has to be dealt with at the hardware as well as the software level for the whole interconnection infrastructure. The data has to be protected at the device level and its authenticity maintained during transfer. At the device level, access to the data has to be restricted to authorized individuals; and that has to be done at the source, in the hardware, i.e. your SoC. Moreover, an SoC or an IP also has to be protected from being reverse engineered, duplicated, or counterfeited. There are several methods that can be employed to secure SoCs. For example, PUFs (Physically Unclonable Functions) can be instantiated on the chip for device authentication and for providing secure keys. PUFs can be used to protect and secure memories, smartcards, USB devices, and other mobile devices.
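As a rough illustration of the PUF idea (a hypothetical simulation, not any vendor’s scheme), the sketch below models a device fingerprint as a bit string that is stable up to a little measurement noise: enrollment stores one noisy read-out, and verification accepts a later read-out only if it is close in Hamming distance.

```python
# Hypothetical illustration of a PUF-style enroll/verify flow.
# A real PUF derives its fingerprint from silicon process variation;
# here we simulate it as a fixed random bit pattern plus noise.
import random

N_BITS = 256
NOISE_BITS = 5        # bits that flip between measurements
THRESHOLD = 25        # max Hamming distance accepted as "same device"

def measure(fingerprint, noise=NOISE_BITS):
    """Simulate one noisy PUF read-out: flip a few random bits."""
    bits = list(fingerprint)
    for i in random.sample(range(len(bits)), noise):
        bits[i] ^= 1
    return bits

def hamming(a, b):
    """Number of positions where two bit lists differ."""
    return sum(x != y for x, y in zip(a, b))

random.seed(1)
true_fp = [random.randint(0, 1) for _ in range(N_BITS)]

# Enrollment: store a reference measurement at production time
reference = measure(true_fp)

# Verification: a later measurement of the same device stays close
later = measure(true_fp)
assert hamming(reference, later) <= THRESHOLD

# A different chip has an unrelated fingerprint and is rejected
other_fp = [random.randint(0, 1) for _ in range(N_BITS)]
assert hamming(reference, measure(other_fp)) > THRESHOLD
```

Real PUF deployments add error-correcting "helper data" so a noisy read-out can be turned into an exact, repeatable key, but the accept/reject intuition is the same.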


[PUF: Protecting Smartcard ICs. Source NXP]

NXP has successfully used PUFs to protect next-generation smartcard ICs. During production or personalization, the IC measures its PUF environment and stores this unique measurement. From then on the IC can repeat the measurement as and when required to check if the environment has changed, thus protecting the card.

Side Channel Attack (SCA), which is non-invasive, is a newer kind of threat for ICs. Here, information can be extracted from a chip based on its power profile, electromagnetic emissions, or even timing analysis. Such attacks have to be prevented by deploying security at the physical level, possibly at the leaf cells of the design. As an example, by preventing any variation in power from being detected during circuit operation, one can secure the chip from SCAs based on power variation during switching. Newer methods are evolving to address security at the SoC level; several terms are in use today, such as TRNGs (True Random Number Generators), Root-of-Trust, watermarking, and so on. Several methods are also being used to detect hardware Trojans.

The point is that for a particular SoC designed for certain applications or environments, its security aspects must be considered during the architecture stage and implemented from the ground up.

Reliability: In SoCs at ultra-low process nodes (20nm and below), with transistors at extremely low noise margins and many functions on one die, reliability cannot be ignored or deferred until after functional implementation. In the pursuit of PPA optimization, reliability is often overlooked. High performance at high frequency consumes high power and can become a source of heat, leading to electromigration and other complications. If the heat is not estimated and rated, and provisions made for its dissipation, it can shorten the life of the device sooner rather than later. There are tools available to estimate power dissipation, noise, and reliability. Although a CPU’s real power dissipation can be computed, the actual consideration should be TDP (Thermal Design Power) for determining temperature ranges and designing appropriate cooling systems where needed. Recently, Intel designed its Core M processor at the 14nm technology node with a TDP range between 3.5W (at a down frequency of 600 MHz) and 6W (at an up frequency of 1.4 GHz), with a nominal value of 4.5W. Interestingly, Apple used Intel’s Core M for its new MacBook and didn’t need to put a fan in it.
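As a back-of-the-envelope illustration of why frequency and voltage scaling matter so much for heat (the capacitance and voltage numbers below are made up for the example, not Intel’s figures), dynamic CMOS power scales roughly as P = C·V²·f:

```python
# Rough sketch with illustrative numbers only: dynamic CMOS power
# scales as P = C_eff * V^2 * f, so lowering voltage and frequency
# together gives a super-linear power (and heat) reduction.

def dynamic_power(c_eff, volts, freq_hz):
    """Switching power of a CMOS circuit: P = C_eff * V^2 * f."""
    return c_eff * volts**2 * freq_hz

# Hypothetical baseline: 1 nF effective capacitance, 1.0 V, 1.4 GHz
base = dynamic_power(1e-9, 1.0, 1.4e9)

# Drop to 0.8 V and 600 MHz: power falls to ~27% of the baseline
low = dynamic_power(1e-9, 0.8, 600e6)

print(f"baseline {base:.2f} W, scaled {low:.3f} W "
      f"({100 * (1 - low / base):.0f}% reduction)")
```

This quadratic dependence on voltage is why even a small supply reduction is one of the most effective knobs for staying inside a TDP envelope.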

Verifiability and Testability: Various DFT methodologies such as scan chains, built-in self-test (BIST), MBIST, and so on are well known for making a design testable, and they have become usual practice. With the increase in design complexity, nowadays there is also emphasis on making a design verifiable. The focus is on a micro-architecture that simplifies the verification process. This includes synchronization of the design, minimization of cycle-based logic, ensuring the design is CDC (Clock Domain Crossing) safe, avoiding complex interfaces, and so on. The Verification IP (VIP) should be easy to migrate between different design levels, e.g. RTL to gate level. A more verifiable design should also be easily debuggable. For this, interfaces need to be defined formally and cleanly, and assertions need to be added at various places to verify conditions.

Serviceability: In a large SoC, different types of errors may occur at any time for various reasons; however, that should not force replacement of the whole SoC. There must be provisions in the SoC for easy diagnosis of problems and their correction. The components that are likely to fail should be easily identifiable and repairable or replaceable.

There can be soft errors as well as hard errors in an SoC. Memories such as DRAMs are very susceptible to errors due to their large data storage and activity at extremely high data-transmission rates. These errors can be soft errors, hard errors, or retention errors.

Typically, ECC (Error Correcting Code) circuitry is used in the SoC to correct errors in the data. The ECC can be used in various modes of operation depending on the location of errors. For example, ECC scrubbing can be used to check the whole memory array and correct all single-bit errors. Memory sparing is another mechanism, which can instantly replace any failing memory with spare memory in the system. Similarly, there are other techniques such as data/address parity, cyclic redundancy checks (CRC), etc. which can be used in SoCs to address problems and maintain data integrity. Power-On Self-Test (POST) and Built-In Self-Test (BIST) are techniques that can detect hard errors. Hard errors cannot be corrected; they have to be mapped out of the usable area by the operating system.
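The single-bit correction that ECC scrubbing relies on can be sketched with the classic Hamming(7,4) code. This toy version follows the standard textbook bit layout, not any particular memory controller, and shows how the parity "syndrome" pinpoints and repairs one flipped bit:

```python
# Minimal sketch of single-bit error correction, the idea behind memory
# ECC scrubbing: a Hamming(7,4) code stores 4 data bits with 3 parity
# bits, and the recomputed parity syndrome names the flipped position.

def encode(d):
    """Encode 4 data bits [d1,d2,d3,d4] into a 7-bit Hamming codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4          # covers positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4          # covers positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]   # positions 1..7

def scrub(word):
    """Recompute parity; a nonzero syndrome is the 1-based error position."""
    s1 = word[0] ^ word[2] ^ word[4] ^ word[6]
    s2 = word[1] ^ word[2] ^ word[5] ^ word[6]
    s3 = word[3] ^ word[4] ^ word[5] ^ word[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        word[syndrome - 1] ^= 1           # correct the single-bit error
    return word

data = [1, 0, 1, 1]
word = encode(data)
word[5] ^= 1                    # inject a soft error at position 6
fixed = scrub(word)
assert fixed == encode(data)    # scrubbing restored the codeword
```

Production memory ECC typically uses a wider SECDED variant (an extra parity bit to also *detect* double-bit errors), but the syndrome mechanism is the same.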

The idea is to make provisions in the SoCs to have appropriate methods and circuitry for different types of components to be diagnosed and corrected as and when required.

Read the first part, “SoCs in New Context Look beyond PPA”, to learn about the factors discussed earlier. The two parts of this article provide a general overview of several factors that are important to consider in today’s SoCs, unlike earlier chips where only power, performance, and area mattered. Each factor is a complete field within the semiconductor industry, and several companies provide solutions for each of them. It is not possible for a single company to have solutions for all of these. Today’s SoCs require a collaborative approach: find the best possible solutions from the different companies specializing in these areas, and then architect them together in the best possible manner.

Pawan Kumar Fangaria
Founder & President at www.fangarias.com


Feed Your Mind and Body at 52nd DAC!

by Daniel Nenni on 05-10-2015 at 4:00 am

My beautiful wife and I attend the Design Automation Conference together whenever possible, more so now that she is the co-founder and CFO of SemiWiki. It is really nice for her to put a face to the invoices and personally thank our subscribers. Her first DAC was 1985 in Las Vegas. We had been married for less than a year, so it was like a second honeymoon. She still remembers getting fudge sundaes at midnight after one of the big EDA parties, which was a bit on the rowdy side even for us recent college grads.

The DAC parties are much more civilized now that most of us are approaching AARP status. Funny story: I put an AARP advertisement on SemiWiki as an April Fools’ joke this year. It got more than 3,000 clicks, so I guess the joke was on me.

Anyway, my wife and I enjoy the DAC parties and will be attending as many as possible this year. If you want me to attend your party I will need a +1; VIP passes get priority. SemiWiki is sponsoring the Wednesday night DAC reception in the foyer again. Last year we had a book signing and gave away hundreds of books compliments of eSilicon. This year we will be giving away some very nice pens with touch-screen tips. If I can find a book sponsor there will be books as well. Six of the eight SemiWiki bloggers will also be there, and we would be pleased to meet you.

During the day, breakfast and lunch presentations are my favorite (I still blog for food). Cadence is really going big this year so you will want to check them out. Here is a quick list of food events from one of their mailers:


Feed your appetite for technical knowledge at one of our breakfast or luncheon panels at DAC 2015. You’ll hear from Cadence® tech experts, our customers, and our partners as they share their electronic design expertise and experiences in Room 104 in front of Exhibit Halls B/C at the Moscone Center.

Schedule
Monday, June 08, 2015
LUNCHEON: How to Make Next-Generation Verification Smarter
Room 104
12:00 PM – 1:30 PM

Hear our panel of experts discuss the key technologies needed to address the next level of verification challenges. They’ll cover a variety of questions, such as: What role will hardware-assisted development play in the context of other verification engines? How can verification become smarter? Will the next level of abstraction save the day and move verification to the transaction level?

REGISTER HERE

Tuesday, June 09, 2015
BREAKFAST: Crossing the Great Divide: How to Safely Navigate the Move from 28nm to 16FF+
Room 104
7:30 AM – 9:00 AM

What are the biggest challenges at 16nm? When do you make the decision to move from 28nm? Hear from SoC designers who crossed the great divide from 28nm to 16FF+ and emerged on the other side safely, seasoned, and successful. And listen to design ecosystem experts’ insights and guidance for crossing the divide smoothly.

REGISTER HERE

Tuesday, June 09, 2015
LUNCHEON: The Future of Digital is Here
Room 104
12:00 PM – 1:30 PM

In this panel, Cadence and a few of our customers will discuss an integrated digital implementation solution based on Cadence’s RTL synthesis, digital implementation, and signoff design tools and flows. These tools, which include the recently launched Innovus™ Implementation System, help designers successfully implement large, high-performance, advanced SoCs using 16/14/10nm FinFET processes as well as established process node SoCs.

REGISTER HERE

Wednesday, June 10, 2015
LUNCHEON: Methodology and Metrics for Analog/Mixed-Signal Verification: Madness or Marriage?
Room 104
12:00 PM – 1:30 PM

Hear the experts weigh in on the idea of applying the rigors of metric-driven verification planning and management from the digital realm to analog and mixed-signal design. Is this a natural progression of managing skyrocketing verification complexity, or is this a really bad idea because of skill sets, organizational boundaries, and the principles of divide and conquer? Enjoy a lively debate.

REGISTER HERE


Design Virtualization Technology: VMware for SoCs

by Paul McLellan on 05-10-2015 at 1:00 am

It was way back in 2001 that Pat Gelsinger, then CTO of Intel, pointed out that if we kept increasing clock rates, chips would have the power density of rocket nozzles and nuclear reactor cores. Ever since then, power has been public enemy #1 in chip design. In 2007 Apple announced the iPhone and the application processor inside it, and smartphones became one of the most intense battlegrounds for power. After all, the length of time that a battery lasts is much more visible to the consumer than, say, the power dissipated by the chips in their wireless router. But routers are not immune to power concerns either, at least at the datacenter level.

There is a sense in which all chips today are low power. A chip for a hearing aid might have power measured in milliwatts, whereas a chip for a datacenter might have a budget of 150W. But both chips face the challenge of meeting their performance targets, very different of course, under their power envelopes.

There are many techniques for power reduction, way beyond the scope of a single blog. But some of the most confusing involve the selection of libraries, process technology, signoff corners, trading yield for power, multiple voltage rails, and so on. Basically: what underlying fabric should be used to construct the design?

eSilicon has a huge amount of data on this sort of thing, based on the large number of designs it has run and on a lot of additional characterization, so that it can easily estimate the effect of, say, reducing the upper characterization temperature to 100°C (nobody’s cellphone can run that hot or your pockets would catch fire), raising the lower bound to 0°C (when did you last see a datacenter with icicles?), or lowering the voltage regulator’s margin of error from 5% to 3%. They call this Design Virtualization Technology. Today this is provided as a service, but under the hood is a lot of software and a huge amount of characterization data of all types, far more than is practical to process by hand or even in Excel.

Rather than give the marketing pitch, I think it is good to actually show a real-world example of the technology in action on a real design. eSilicon worked with a customer on a design to go into a networking device. When the design was essentially complete, the first power estimate came in at 130W.

Design Virtualization Technology to the rescue:

  • Where is the power coming from? 95% of the power turned out to be coming from a single 450MB memory.
  • Can we customize the memory? Yes. So eSilicon did that using an off-the-shelf memory from their extensive portfolio of memory IP that they develop internally. They then removed all the peripheral logic that supports options not required for this particular design. There were also device swaps in the periphery by using libraries with multiple thresholds.
  • This got the power down to 90W, but the customer’s target was lower, at 75W. eSilicon’s design virtualization technology analyzed the design.
  • By applying low power techniques such as lowering the core voltage a tiny bit, using more multi-Vt libraries and so on they achieved 75W.
  • The chip was 3 days away from its scheduled tape-out, and there was not a lot of flexibility to slip the date by a month or two. But at this stage life looked good.
  • Marketing came back and said the power budget had to be 35W. eSilicon fired up their design virtualization technology again. If they got aggressive at voltage, temperature and process corners could they get there? What about frequency? Was there any flexibility to reduce that and still meet the performance requirements?
  • It turned out they needed all four: voltage down 2%; a tighter process window, eating a tiny potential yield loss; a temperature maximum of 105°C; and frequency from 500MHz down to 400MHz. Power: 35W.
  • Tapeout on schedule 3 days later. Success.

One thing that I learned from this design is that 3 sigma really is very conservative. If you reduce that to 2 sigma then the maximum yield loss is under 5% but the potential gains in power (and performance) can be significant. But you need a lot of data to make that kind of change with confidence, and Design Virtualization Technology is your “ring of confidence.”
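The “under 5%” figure is easy to sanity-check. For a normally distributed parameter, the fraction of parts falling outside ±k standard deviations follows directly from the error function (this is a statistical check only, not eSilicon’s tooling):

```python
# Quick check of the sigma claims above: the fraction of parts falling
# outside +/-k standard deviations of a normal distribution.
import math

def outside_k_sigma(k):
    """Two-sided tail probability beyond k standard deviations."""
    return math.erfc(k / math.sqrt(2))

loss_2s = outside_k_sigma(2)   # ~4.6%, i.e. "under 5%" as stated
loss_3s = outside_k_sigma(3)   # ~0.27%

print(f"beyond 2 sigma: {loss_2s:.2%}, beyond 3 sigma: {loss_3s:.2%}")
```

So relaxing from 3 sigma to 2 sigma moves the worst-case loss from roughly 0.3% to roughly 4.6%, consistent with the claim above, and in practice the real loss is lower if the parameter distribution is not centered at the limit.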

The eSilicon White Paper on Power Reduction Using Design Virtualization Technology is here.


Chip Design Problems Remain the Same, More or Less

by Mike Gianfagna on 05-09-2015 at 2:00 pm

For those who may not know me, here is a brief introduction. I started in the semiconductor business when RCA was still making vacuum tubes and I wrote EDA software before there was an EDA industry. I’ve designed and sold chips and developed, sold and used EDA tools at companies as big as General Electric and as small as seven people. So what have I learned through all those years and all those companies?

The problems with chip design remain the same, more or less.


The size of the problems and the penalty for getting it wrong have increased exponentially, though. That is courtesy of Moore’s Law. The effect is well known and I won’t revisit it. Let’s look at the problem set associated with chip design instead.

When I worked at RCA, we had a huge number of bipolar processes, and it was really hard to figure out which one to use. First-time-right silicon was a big, big challenge. There was a corporate mandate to reduce the number of processes and to re-engineer the design flow to use higher levels of abstraction to facilitate faster design that was less error prone.

Our solution was a symbolic layout system. The parameters for each circuit element were stored inside the symbol and the user did a “loose” layout of the chip. The layout was then sent to a “layout compiler” that would read the parameters and replace each circuit symbol with the polygons to implement that device. The whole thing then went to a layout compaction tool to get rid of the white space. We taped out the first chip with this system and got it done a lot faster. And it worked the first time. It was 1978.

A few years later, we were designing a new generation of single-chip microprocessors. There were several design groups working on various parts of the chip and assembling the whole thing was a huge problem. Without a good master plan, the pieces just didn’t fit very well, and critical-path timing issues kept getting in the way. A “paper” master plan was always developed and was out of date in about a day. The solution was to develop a hierarchical chip planning system that allowed manipulation of the plan at an abstract level. All the design groups accessed this same tool and kept the plan up to date. This system went live in 1980.

About a year later, in 1981, I was having a beer with a design manager who had just finished releasing a library of Texas Instruments TTL discrete parts that were now implemented in CMOS (to save power). We thought about all these components and all the work to get them taped out correctly (we still used actual magnetic tape by the way). What if we could strip the bond pads off these designs and release them as building blocks for on-chip macros? That day, we were inventing IP reuse and block-based design. We called the idea Silicon Circuit Board. The whole thing failed within six months – without a reuse methodology, reuse was, well, impossible.

Raising abstraction levels, re-using IP and automating design has been haunting us for a long, long time. Are there new and unique approaches to these problems? I think so. I’ll discuss that next time.

From Mike Gianfagna of eSilicon

Also read: Chip Design – Coming of Age in the Computer Age


Calibre xACT Shakes Up 16nm and Below Extraction Game

by Tom Simon on 05-09-2015 at 8:00 am

Mentor Graphics made a big announcement regarding SoC extraction at their User2User conference in San Jose in April. Before I get to the meat of the announcement, I’d like to reflect back on the early days of Calibre-DRC at Mentor. I was in sales at Mentor around 1999, and Calibre-DRC was the new kid on the block. We had to go convince Dracula users (yes, remember?) that they could get the same accuracy with no rule deck rewrite in a fraction of the time. Well, seeing was believing: once someone tried Calibre-DRC, they usually went with it.


As we all know, today Calibre-DRC is the “go to” tool for DRC, and Calibre xACT is sort of the new kid on the block. In the same spirit as Calibre-DRC in those days, xACT is now making use of sophisticated optimizations to provide breathtaking gains in performance. This is, after all, how Calibre-DRC made its name – by embracing a new paradigm. There are a couple of exciting new ideas being deployed in xACT.

Previously, extraction tools used tiles to achieve parallelism: the number of tiles determined the degree of parallelism that could be utilized, but analysis at the edges of the tiles always caused accuracy issues. xACT uses net-based parallelism, so the analysis problem can be spread over a large number of processors/CPUs with no artifacts from tile boundaries degrading the results. But a much bigger win comes from running multiple corners in a single run. Yes, you read that correctly. Mentor R&D figured out how to run additional corners with only a ~15% increase in run time and memory. This harks back to the seemingly unbelievable performance results that hierarchical DRC in Calibre promised. Designers will now easily be able to add the new double-patterning variation corners to the already numerous process corners they would like to run.
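To see why the ~15% figure matters, here is a back-of-the-envelope comparison. This is my own illustrative model, assuming each extra corner adds roughly 15% to a single-corner baseline; the exact scaling is not specified by Mentor:

```python
# Illustrative comparison: N separate single-corner extractions
# vs. one multi-corner run where each extra corner costs ~15%.

def separate_runs(base_hours, n_corners):
    return base_hours * n_corners

def multi_corner_run(base_hours, n_corners, overhead=0.15):
    # One run, each additional corner adding ~15% of the baseline time.
    return base_hours * (1 + overhead * (n_corners - 1))

base, corners = 4.0, 10  # hypothetical 4-hour baseline, 10 corners
print(f"separate runs:    {separate_runs(base, corners):.1f} h")    # 40.0 h
print(f"multi-corner run: {multi_corner_run(base, corners):.1f} h") # 9.4 h
```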


xACT also addresses proximity effects by seamlessly incorporating a 3D field solver for parasitic extraction around devices such as FinFETs. What’s really nice is that there is no need to switch rule decks or extract devices in a separate tool: xACT automatically uses the solver when it needs to, and avoids double counting in the process.


There is a lot more to the details of what is offered in xACT, but just like before, seeing is believing. It turns out that my friend Bo Gao, who is a Director of Engineering at Cypress, was giving a talk on his experience with Calibre xACT at the U2U meeting. While Cypress’s design profile leans toward older nodes, their challenging end-product environments mean they need to thoroughly run many corners to ensure reliability. As a result of their merger with Spansion, they have a large market share in the automotive space, where reliability is extremely important. Cypress is also a leader in SRAM and NOR flash.


Cypress TrueTouch and CapSense controllers demand attofarad sensitivity. They also focus on ultra-low-power products for mobile designs. Lastly, noise management is a big issue for them. The flow they use produces parasitic output that is consumed by a variety of tools, some of which need different formats. Fortunately, xACT is very flexible when it comes to regenerating different output file formats from its persistent parasitics database.

So what did Bo observe when using xACT during his beta test? He ran several test cases: one at 130nm, another at 65nm, and one at 28nm, with design speeds ranging up to about 1.5GHz. Bo reported excellent correlation with their golden field solver. When he ran xACT against the “other” tool, the correlation charts for R and C had a slope of one and were tightly clustered. In his LEF/DEF flow he saw memory usage go from 735MB to 72MB, and run time from 2:20 to 0:30. In his GDS flow he saw memory go from 25.7GB to 4.5GB and run time from 14:45 to 3:05. So based on this, it would seem that xACT lives up to Mentor’s claims. Cypress was able to integrate xACT into their flow and was happy enough with the tool to be willing to talk about their experience.
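Assuming the reported times are hours:minutes, the implied speedups can be checked quickly:

```python
# Implied speedups from the run times Bo reported (assumed to be h:mm).

def minutes(hhmm):
    h, m = map(int, hhmm.split(":"))
    return 60 * h + m

for flow, before, after in [("LEF/DEF", "2:20", "0:30"), ("GDS", "14:45", "3:05")]:
    print(f"{flow}: {minutes(before) / minutes(after):.1f}x faster")
# LEF/DEF: 4.7x faster
# GDS: 4.8x faster
```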

As I previously mentioned, Mentor is also announcing capacity improvements and other significant features. If you want the full picture, I suggest downloading and reading their complete white paper on Calibre xACT for nanometer design through 16nm and beyond.

Mentor has been developing strong technology, and assembling it through acquisition, over the last few years, and is differentiating itself from its competitors. I look forward to what comes next from them.


We cover the Semiconductor Industry so you don’t have to!

by Daniel Nenni on 05-08-2015 at 2:00 pm

The media landscape is changing faster than ever before. I don’t really think we have more news to cover, but there sure are more people covering it, and even more news outlets are coming. Facebook, LinkedIn, Twitter, Snapchat, and the rest: news will be coming at us faster and from more directions than ever before. It’s not a bad thing, but there are sure to be continued business model crashes and shakeouts.

The SemiWiki business model has not changed since we started five years ago. We are a blog site, and what bloggers do is distill information using our experience, observations, and opinions. For example, sometimes we download a ten-page white paper and distill it down to a 600-word summary with a link to the paper for further investigation. Same thing with live events and webinars: we attend them and summarize what we feel are the salient points.

All of our postings are up for discussion and that is why we call SemiWiki “The Open Forum for Semiconductor Professionals.” As a member you can comment on blogs, participate in forum discussions, and create or edit wikis. You can also add events to our public calendar. SemiWiki is now and has always been about crowdsourcing, absolutely.

If a SemiWiki membership is not your thing, you can follow our LinkedIn company page or you can join our LinkedIn Group. You can also follow us on Twitter, Facebook, or Google+ (buttons on the top right of the page).

You may also have noticed SemiWiki publishes books. Our first book “Fabless: The Transformation of the Semiconductor Industry” was released last year. A book on the history of ARM will be published later this year and we already have our next book project scoped out for 2016.

You should also know that the SemiWiki bloggers have day jobs, which is an important part of our business model. We are consultants within the fabless semiconductor ecosystem performing a wide range of services. There are eight of us now: Daniel Nenni, Paul McLellan, Daniel Payne, Eric Esteve, Don Dingee, Pawan Fangaria, Tom Simon, and Majeed Ahmad Kamran. Please click on the names to see our LinkedIn profiles for more information about us.

And what does all of this SemiWiki work get us? Based on the hours we put into it: minimum wage, plus full access to the fabless semiconductor ecosystem from top to bottom. Totally worth it!

The one thing that has changed about SemiWiki is our analytics. According to Google Universal Analytics, more than 1,429,784 “Users” have spent time on SemiWiki. By “Users” Google means devices, and by device it is 76% desktop, 19% phone, and 5% tablet. Personally, I use a laptop, tablet, and phone, so I’m guessing most people do the same, more or less. Google also details the different browsers: number one is Google Chrome (47%), then Firefox (18%), and the rest. The majority of the mobile devices are Apple (iPhone 36.25%, iPad 17.50%); the rest are mostly Samsung.
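For what it’s worth, the device count above and the best-guess visitor count later in this post imply a devices-per-person ratio (my arithmetic, purely illustrative):

```python
# Implied devices-per-person from the figures in this post (illustrative only).
devices = 1_429_784   # Google Analytics "Users" (i.e., devices)
people  = 608_527     # the post's best-guess count of actual visitors

print(f"~{devices / people:.2f} devices per person")  # ~2.35
```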

SemiWiki.com Statistics:

  • Threads: 5,099
  • Posts: 19,902
  • Users online: 45,262


Based on a highly scientific algorithm (my best guess) the amount of actual people that have visited SemiWiki is 608,527. Probably the most surprising analytics are the gender and age demographics which I now monitor via new memberships and feel they are right on target:

GENDER

  • Male 88.75%
  • Female 11.25%

AGE

  • 24-34 29.59%
  • 35-44 24.65%
  • 45-54 23.53%
  • 55-64 8.75%
  • 18-24 7.44%
  • 65+ 6.04%

LOCATION

  • United States 45%
  • India 8%
  • Taiwan 7%
  • Germany 4%
  • France 4%
  • United Kingdom 4%
  • Japan 3%
  • Canada 2%
  • South Korea 2%
  • Singapore 2%
  • Netherlands 1%
  • China 1%
  • Israel 1%
  • Italy 1%
  • Russia 1%

ACQUISITION

  • Organic Search 34%
  • Direct 30%
  • Referral 26%

Feedback is always welcome, and we take blog requests; just let us know what topics you are interested in and we will do our best to cover them.


Antifuse is the New Foundation of NVM Below 16nm

by Paul McLellan on 05-08-2015 at 7:00 am

Today the non-volatile memory (NVM) foundation is the eFuse. It is typically available for free from the foundry and is the default choice because, like Mount Everest, it is there. However, like Mount Everest, it is big. It is also power hungry and slow. eFuse solutions blow the silicide on the poly line, creating a change in resistance. There are other technologies, such as embedded flash, but these require additional process steps and cost. Others, like ROM, are only really suitable when every die contains the same code (such as fonts in a printer).

A modern eFuse is built on polysilicon with cobalt or nickel silicide on top. The fuse is programmed by a well-known reliability mechanism called electromigration, in which electron momentum pushes the silicide atoms out of the conductor link. Still, most fuses can only be programmed at wafer level and have stringent power requirements for programming, which makes programming packaged parts difficult. The bitcell is the largest of the standard CMOS NVM technologies. For higher-density memory applications, e.g. greater than 4Kb, the fuses quickly begin to consume a significant fraction of the SoC area. eFuse is usually custom-designed and provided by the foundry as macros; as a result, it cannot legally be ported to another foundry without the foundry’s consent.

At 16nm and below, the eFuse will become obsolete, and the replacement technology will be antifuse. It is 1/300th the size of the eFuse, lower power, and higher speed.

An antifuse is the opposite of an eFuse. The circuit is open (high resistance) to begin with and is programmed closed by applying electrical stress that creates a low-resistance conductive path. Antifuse NVM has been implemented for many decades using additional processing steps. Kilopass was the first to pioneer antifuse in a standard CMOS process with no additional processing steps. Kilopass holds patents for several flavors of bitcells, including the 1T and 2T.

A hard gate-oxide breakdown is used as the one-time-programmable non-volatile memory mechanism. The breakdown is achieved by applying a high voltage on the program gate. Before the breakdown, the path between the gate and the source of the program transistor is isolated and behaves like a capacitor; after the breakdown, it behaves like a resistor. The program transistor is isolated from the select transistor. Both the program and read transistors are implemented using core devices, so as the technology scales, the bitcell scales.
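The one-time-programmable behavior this mechanism produces can be sketched as a toy model. This is purely illustrative; the class and method names are mine, not a Kilopass API:

```python
# Illustrative model of antifuse OTP semantics (not a real device interface):
# each bit starts with intact gate oxide (high resistance, reads 0); applying
# the program voltage breaks the oxide down, permanently setting the bit.

class AntifuseOTP:
    def __init__(self, n_bits: int):
        self._blown = [False] * n_bits  # False = intact oxide

    def program(self, addr: int) -> None:
        """Oxide breakdown is irreversible: once blown, a bit stays blown."""
        self._blown[addr] = True

    def read(self, addr: int) -> int:
        return 1 if self._blown[addr] else 0

mem = AntifuseOTP(8)
mem.program(3)
print([mem.read(i) for i in range(8)])  # [0, 0, 0, 1, 0, 0, 0, 0]
```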

Another issue with eFuse in these security-conscious days is that its security is very low. It is easy to identify blown fuses in a device through a microscope, whereas a programmed antifuse is almost impossible to detect, even against military-grade attacks. It has proved impossible to read through passive attacks such as power analysis, semi-invasive attacks such as microscopy, or invasive attacks such as microprobing, and even backside approaches.

The table below summarizes the most important differences between eFuse and antifuse. Both can be manufactured in the standard CMOS process without any additional mask steps and can be programmed after manufacturing, making them one-time programmable (OTP). However, the eFuse is normally programmed before packaging, whereas antifuse can be programmed either before or after packaging, since it does not require special external voltages to be applied.

Feature                                     | eFuse     | Antifuse
--------------------------------------------|-----------|---------------
Cell structure                              | poly fuse | 1T/2T antifuse
Standard CMOS                               | yes       | yes
Bitcell area (normalized)                   | 300       | 1
128Kb area (normalized)                     | 8         | 1
Program after packaging                     | no        | yes
Endurance                                   | 1         | ~5
Standby and active current                  | High      | Low
Random access time                          | Slow      | Fast
Security                                    | Low       | High
High/low temperature & voltage tolerance    | Medium    | High