Reliability is the New Power
by Paul McLellan on 03-09-2013 at 9:56 am

It has become a cliché to say that “power is the new timing”, the thing that keeps designers up at night and drives the major architectural decisions in big SoCs. Nobody is saying it yet, but perhaps “reliability is the new power” will be tomorrow’s received wisdom.

I talked to Adrian Evans of IROCTech last week. He used to work at Cisco, and with an enormous installed base of routers processing enormous amounts of data, very rare events such as Single Event Effects (SEEs) happen all the time. Customers don’t like it when their routers reboot for no discernible reason, and it is very expensive for Cisco to swap out “faulty” boards that actually have no faults and simply got hit by a random cosmic ray. The chips in those generations of routers are not even at 28nm or 20nm, and SEE problems get worse with each process generation: the gate oxide becomes just a few atoms thick, power densities increase, and lower voltages lead to lower noise margins. So one thing you will see is system companies such as Cisco specifying reliability standards for all their chips.

Several years ago, when power really started to become the huge issue it is today, we developed power standards. Of course purchasers of chips would specify power numbers, especially for mobile devices, but the place that power really changed design was in building power delivery networks (and analyzing them), board design and a standard file format. Well, we are EDA, so we can never make do with one standard when two would be more fun, so we ended up with both CPF and UPF with broadly comparable capabilities as a way of specifying power policy throughout the design flow.

There is no equivalent format for specifying reliability data, constraints, policy etc throughout the design flow. You can go into PrimeTime and say “report_timing” or “report_power” but “report_reliability” won’t give you anything.


Like other things in design, reliability is a tradeoff. For chips in satellites, triple redundancy and voting might be appropriate to achieve extremely high levels of reliability in an extremely difficult environment, but it would be completely inappropriate for a cell-phone. In other environments, errors in the chip may not be so important if they can be detected and corrected in software. You can see that reliability is thus a chain from software down to chips down to things like making sure the solder in your package doesn’t emit too many alpha particles. As with any chain, it is only as strong as the weakest link. But the corollary is that there is no point in building one or two especially strong and expensive links, you want all the links to be roughly the same strength.
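To make the satellite end of that spectrum concrete, here is a minimal Python sketch of triple redundancy with bitwise majority voting. Everything in it is an assumption for illustration only: the computation, the single-bit upset model and the error probability are made up, and real TMR is of course implemented in hardware, not software.

```python
import random

def compute(x):
    """The 'real' computation each redundant copy performs (hypothetical)."""
    return (x * 3 + 1) & 0xFF

def maybe_upset(value, p_flip=0.01):
    """Model a single-event upset: with small probability, flip one random bit."""
    if random.random() < p_flip:
        return value ^ (1 << random.randrange(8))
    return value

def tmr_compute(x):
    """Run three redundant copies and take a bitwise majority vote."""
    a, b, c = (maybe_upset(compute(x)) for _ in range(3))
    return (a & b) | (a & c) | (b & c)   # classic majority-of-three, bit by bit

# A single copy is wrong whenever it takes a hit; the voted result is wrong only
# when at least two copies are corrupted in the same bit position, which is far rarer.
trials = 10_000
errors_single = sum(maybe_upset(compute(i)) != compute(i) for i in range(trials))
errors_tmr = sum(tmr_compute(i) != compute(i) for i in range(trials))
print(f"single copy errors: {errors_single}, TMR errors: {errors_tmr}")
```

The point is simply that a voted result only fails when multiple copies are corrupted together, which is why the technique buys large reliability gains at roughly 3x area and power, and why it only makes sense in environments like space.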

We are past the time at which spreadsheets and email work as a way of passing reliability data around. What is required is a Reliability Information Interchange Format (RIIF). Well, such a standard is, in fact, in development. It is a modeling language whose purpose is to specify the rate of occurrence of failure modes in electronic components. The goal is for it to become an eventual IEEE standard. Work started about a year ago, largely in conjunction with the European automotive manufacturers.


People expect their cars to last 10 or 20 years, and much of the electronics in a car has to work in a fairly hostile environment: climbing out of Death Valley in summer means the ECUs are in a very hot environment, Minnesota in February not so much. And the electronics in cars (a high-end car now contains well over 100 electronic control units, or ECUs) is, in many cases, safety critical. For sure, Cisco doesn’t want its routers to reboot unexpectedly. But you really don’t want your ABS system to reboot unexpectedly. So the automotive manufacturers are in the vanguard of driving reliability metrics down their supply chain.

There are two important workshops on this topic coming up this month, one in Silicon Valley and one in Europe in Grenoble:

  • Silicon Errors in Logic – System Effects (SELSE) at Stanford, March 26th and 27th. Details here. Keynotes from Microsoft, IBM and DoD CEC.
  • 1st RIIF Workshop, Grenoble, March 22nd (co-located with DATE). Towards Standards for Specifying and Modeling the Reliability of Complex Electronic Systems. Details here. There are speakers from ARM, Intel, Bosch, Infineon Automotive among others.


Qualcomm and Intel Dynasty Scenario at 14nm
by Ed McKernan on 03-08-2013 at 1:00 pm

At a different time, but certainly within the past 12 months, Paul Otellini was asked if Intel would be a foundry for Qualcomm. His reply was that it did not leave a good taste in his mouth. Nevertheless the idea was not rejected, and the door that remained open just a crack is likely to swing open for Qualcomm, the premier mobile silicon supplier on whom both Apple and Samsung are dependent, to win the mobile market. The hinge of fate rests in the hands of Andy Bryant, Chairman of Intel, who would need to EOL the Atom and the acquired Infineon baseband group to eliminate the competitive wall, a move that would not just truly fill the fabs but would redraw the geopolitical map of the semiconductor industry. With Intel pouring another $13B of CapEx into its expanded 14nm footprint, there are only two possibilities that make sense: Qualcomm and Apple (the latter is now focused on TSMC). A marriage of Qualcomm baseband with Intel 14nm process technology could result in a scenario that would be a remake of Intel’s 1990s Pentium Dynasty.

The trend in the mobile industry for Samsung and Apple is to continue down the path of increased verticality. The baseband ecosystem maintains the high ground in tablets and smartphones and soon it will be a standard feature in x86 ultrabooks. Intel bought Infineon’s baseband group to complete the platform needed to compete in the broader mobile market. However, its efforts are still markedly behind those of Qualcomm and others. Bryant can continue the forced march with little to show for it, or abandon the effort that blocks Qualcomm’s entry into the Fabs.

An article recently mentioned that Apple has hired a team of over 100 ex-TI engineers in Israel to create WiFi and Bluetooth solutions. The timeframe for these solutions is unknown, but with $137B in the bank it is easy for Apple to acquire the talent to create silicon that ends up replacing its current suppliers (i.e. Broadcom and Qualcomm). A net reduction of $20 of silicon in every iPhone, iPad and perhaps MacBook Air could save the company up to $10B in the era of the billion-unit-plus mobile market that is arriving in the next couple of years. As they say, a billion here, a billion there, and pretty soon you’re talking real money.

The aggressiveness of Apple and Samsung in designing the key platform components while elbowing out other Fabless vendors at the Foundry has to be making Qualcomm nervous. The $25B+ in Qualcomm’s bank account leads all mobile players, except Apple. What if the cash is not enough of a cushion to prevent Apple or Samsung from hiring or buying the assets of Qualcomm’s competitors? If you think it unlikely, then one just has to review the staggering opportunity outlined above.

Under the Andy Bryant regime, all product groups must now come clean on the true ROI of existing and new products. Atom processors fall way below the line of pulling their weight for a company that by next January will have spent $36B on CapEx over the past three years, all of it to drive towards 22nm and 14nm dominance. In contrast to Atom, Xeon and Ivy Bridge are delivering heavy positive cash flow, more than any other digital ICs except FPGAs. However, the dilemma in play is that mobile will be at least an order of magnitude larger than x86-powered PCs, and the number of Fabs will matter in the end game.

The idea that a fast growing market could be on an accelerating path towards consolidation seems at odds with the concept that a rising tide lifts all boats. It took more than 50 years for the American auto industry to consolidate and yet the new mobile industry and the entire supply base may do so in less than 8 years from the time of the first iPhone introduction. It is in Apple and Samsung’s interest to accelerate the trend.

Juxtaposed to the Samsung and Apple vertical supply chains are the also heavily capitalized Fabs of Samsung, TSMC and Intel, who race to be the ultimate winner at the leading edge, where all mobile silicon goes to maximize performance per watt while minimizing quiescent current. Intel’s leadership in the pre-mobile days was based on the x86 processor lock required for Windows and on its process lead. The shift in silicon supremacy away from processors and towards baseband and wireless infrastructure occurred faster than most imagined, and the Intel acquisition of Infineon has proved to be too late in the game to help x86 Atoms make a dent in the market.

Now that the multi-billion unit, 4G enabled train has left the station, Intel has to catch up with its only true weapon, and that is 14nm. Should Andy Bryant be able to sign a Foundry agreement with Qualcomm and redirect Intel’s massive design resources, there would be benefits in a number of areas for both companies. For Qualcomm, the ability to leverage Intel’s lower cost and much lower power 14nm would remove the competitive threats of Broadcom, nVidia, Mediatek and others. Samsung and Apple would have to think twice about continuing their own internal wireless and baseband developments as Qualcomm moves into the mid-range and low-end markets at generous margins. The profit pool that would arise for Intel and Qualcomm would be staggering, but it requires that Intel give up its desire to own the chip inside the smartphone.

In return for enabling Qualcomm to clear the field, Intel would take a giant step towards rebalancing its Fabs relative to Samsung and TSMC in the mobile market. This move, with Qualcomm’s increased TAM exposure at the expense of its rivals, would be the equivalent of moving more than one Fab loading from TSMC over to Intel’s side of the ledger. For Intel the legacy x86 and Data Center business will still require some leading edge capacity, however a larger and larger percentage of processors will shift to a longer tail business model now that AMD competition has melted away. Intel will initiate other long tail fab deals, of which the 14nm Altera one is a perfect example.

The outcries from the former Otellini regime will be huge as the Atom and Infineon groups fight to remain relevant. The math is simple for Bryant. A $15 Atom processor at 5-10% or even 20% share in the smartphone market doesn’t come close to the revenue and margins that are available by opening up the Foundry to Qualcomm. Legacy Intel and Windows will remain together from tablets to PCs and servers as Win RT on ARM fades quickly into the sunset. By the end of 2013, I can envision a scenario where the partnership of Intel and Qualcomm is announced and the surprise to most is that they are no longer competitors at the platform level.

The tremors that will ripple through the semiconductor industry from an Intel – Qualcomm partnership will destabilize much of the mobile market and over time be seen as greater in magnitude than any other single event, including IBM’s selection of Intel’s 8088 for the original PC that sent Motorola packing. Qualcomm building products at Intel will lay low its wireless peers while Samsung and Apple take time to reconsider whether their internal efforts are effectively moot. Intel’s ability to finally monetize its leading edge process will force Wall St. analysts to reconsider their valuation metrics. Beyond this, though, are additional second-order derivatives acting as forcing functions. Will Apple consider a partnership at Intel so that it can develop the equivalent of a Snapdragon with its own ARM processor integrated with Qualcomm’s baseband?

For those of us who have watched the Semiconductor paint dry during the post Y2K decade, it is very interesting to consider what changes may occur as 14nm rolls out.

Full Disclosure: I am Long AAPL, QCOM, ALTR, INTC


Silicon Summit April 18th, 2013
by Daniel Nenni on 03-07-2013 at 7:00 pm

Moore’s Law has transcended computing expectations; however, its promise will eventually reach scalability limitations due to extraordinary consumer demands. Future technology encompasses breakthroughs capable of interaction with the outside world, which the More than Moore movement achieves. Through integrating functionalities that do not scale to deliver cost-optimized and value-added system solutions, this trend holds significant potential for the industry. This event will explore the business and technical factors defining the More than Moore movement, and address how it will yield revolutionary electronic devices.

Collaboration is the foundation of the fabless semiconductor ecosystem. The Global Semiconductor Alliance continues to lead the way with Silicon Summits, technology work groups, and a comprehensive set of resources for the greater good of the industry that brought us the mobile devices that my children cannot live without!

The next Silicon Summit is April 18th, 2013 at the Computer History Museum. Reserve your seat now! GSA members receive complimentary admission. The non-member registration fee is $50. More than two hundred of the top semiconductor professionals will attend, making this one of the best semiconductor ecosystem networking events.

Session One: Disruptive Innovation – Enabling Technology for the Mobile Industry of Tomorrow
With the industry’s long-term focus on scaling now joined by functional diversification, this session will open with an overview of how More than Moore is enabling the mobile landscape of today and shaping the future of tomorrow.

A panel will follow, discussing current and emerging applications that continue to drive the More than Moore adoption as well as the process technologies enabling this development.

Session Two: How More than Moore Impacts the Internet of Things
From the Swarm Lab to the smart bulb, the Internet of Things is showing evidence of becoming a reality. However, today’s productivity trails what is needed to make the Internet of Things a truly ubiquitous system, and at the heart of the matter is developing the low-power, mixed-signal technology that will enable chips and systems to communicate with the real world on minimal or no battery power. This session will open with an overview of where the industry stands in applying the concept of More than Moore to drive the Internet of Things.

A panel will follow, assessing the industry requirements, obstacles, and advancements in developing the technology required to make the Internet of Things a reality.

Session Three: Integration Challenges and Opportunities
Furthering the advancement of More than Moore involves unifying silicon technologies with novel integration concepts; application software convergence; and new supply chain business models. This session will open with an overview identifying the key industry trends, challenges and opportunities to realize higher density, greater functional performance and boosted power for ICs.

A panel will follow, discussing possible collaborative solutions to the challenges of integration and its impact on business market growth and investment.

Program:
View speaker bios.
9:00 a.m. – Registration
9:15 a.m. – Morning Reception sponsored by SuVolta
9:45 a.m. – Opening Remarks
10:00 a.m. – Session One: Disruptive Innovation – Enabling Technology for the Mobile Industry of Tomorrow
  • Kaivan Karimi, Executive Director, Global Strategy & Business Development, Microcontroller Group, Freescale
  • Dr. Ely Tsern, VP & Chief Technologist, Memory and Interfaces Division, Rambus
11:00 a.m. – Networking Break
11:15 a.m. – Session Two: How More than Moore Impacts the Internet of Things
  Moderator: Edward Sperling, Editor In Chief, Low-Power Engineering
  Panelists:
  • Kamran Izadi, Director, Sourcing & Supplier Management, Cisco
  • Oleg Logvinov, Director of Market Development, Industrial and Power Conversion Division, STMicroelectronics
  • Martin Lund, Senior VP, Research and Development, SoC Realization Group, Cadence
12:15 p.m. – Networking Lunch
1:15 p.m. – Session Three: Integration Challenges and Opportunities
  Moderator: Bruce Kleinman, VP, Product Marketing, GlobalFoundries
2:15 p.m. – Closing Remarks

The Global Semiconductor Alliance (GSA) mission is to accelerate the growth and increase the return on invested capital of the global semiconductor industry by fostering a more effective ecosystem through collaboration, integration and innovation. It addresses the challenges within the supply chain including IP, EDA/design, wafer manufacturing, test and packaging to enable industry-wide solutions. Providing a platform for meaningful global collaboration, the Alliance identifies and articulates market opportunities, encourages and supports entrepreneurship, and provides members with comprehensive and unique market intelligence. Members include companies throughout the supply chain representing 25 countries across the globe. www.gsaglobal.org


Tanner EDA v16 OpenAccess is here!
by Daniel Nenni on 03-07-2013 at 4:00 pm

Tanner EDA is a pleasure to work with: they are big on collaboration and customers absolutely love their tools. With the Synopsys acquisition of SpringSoft, Tanner needs to step up and fill the void left by the affordable Laker tools. Take a close look at their new v16 release and let me know how they are doing.

New capabilities for back-end (layout):

  • OpenAccess database support for PDK and EDA tool interoperability
  • Collaborative design / multi-user design control for enhanced team productivity
  • Improved file loading and rendering speeds
  • Improved performance of physical verification (HiPer Verify)

New capabilities for front-end (schematic capture, simulation, waveform viewing):

  • Integrated mixed-signal simulation (Verilog-AMS co-simulation)
  • Parametric plots, scatter plots and improved text control and graphics manipulation

Bottom line: OA for L-Edit provides a quantum leap in interoperability and productivity for designers and layout engineers. Design elements (and entire designs, in fact) can be dynamically created, modified and shared across and outside of a design team. Users of other layout tools can access and exchange the information seamlessly, provided those tools also support the Si2 OpenAccess database standard.

The feedback from first release customers is looking good:

HiPer Silicon v16 with OpenAccess provides users with unprecedented interoperability and advanced capability, offering an alternative tool flow for those seeking high productivity with improved price-performance. Whether designing IP blocks, discrete circuits or complete SoCs, OpenAccess designs can be easily shared between designers and engineering teams across other tool flows. As Kenton Veeder of Senseeker Engineering, Inc. said, “I really like the OpenAccess capabilities. I also like the increased control over axis labels in [waveform editor]W-Edit.” Veteran user Mark Wadsworth, Tangent Technologies founder, commented, “Overall v16 is a winner – great job Tanner EDA!”

There is a full suite of videos on the new features:

Mixed Signal Simulation Demo
L-Edit Standard & Custom Vias
L-Edit Library & Cell List Navigation
L-Edit Electrical Ports & Text Labels
L-Edit Open Access Databases
L-Edit Dockable Toolbars
S-Edit Verilog AMS views
W-Edit Plot & Curve Enhancements
W-Edit Parametric & Scatter Plots
W-Edit Measure Fit Calculation
W-Edit Cursor Table
W-Edit Chart Text Enhancements

Or you can register and see the Tanner Tools v16 Full Flow Demonstration. Better yet, take the Tanner tools for a free 30 day test drive!

Tanner EDA provides a complete line of software solutions that catalyze innovation for the design, layout and verification of analog and mixed-signal (A/MS) integrated circuits (ICs). Customers are creating breakthrough applications in areas such as power management, displays and imaging, automotive, consumer electronics, life sciences, and RF devices.

A low learning curve, high interoperability, and a powerful user interface improve design team productivity and enable a low total cost of ownership (TCO). Capability and performance are matched by low support requirements and high support capability as well as an ecosystem of partners that bring advanced capabilities to A/MS designs.


Multiprotocol 10G-KR and PCIe Gen-3 PHY IP will support big data and smartphone explosion
by Eric Esteve on 03-07-2013 at 7:47 am

We have frequently said on Semiwiki how crucial it is for the semiconductor industry to benefit from high-quality PHY IP… even if, from a pure business point of view (MBA minded), the PHY IP business does not look so attractive. In fact, to be able to design on-the-edge SerDes and PLLs (the two key pieces), you need to build and maintain a highly skilled and experienced design team, with salaries in the top range of the industry, and the EDA toolset needed to support high-end analog design is also at the high end of the price range. On top of what is needed to support the design phase, you also need to build a characterization lab, filled with expensive oscilloscopes able to track picosecond jitter and calculate BER…

Finally, because the analog world is like the real world, that is, more unpredictable than the digital one, the risk of a redesign is far from zero! But if you want to support the ever increasing need for data bandwidth driven by the smartphone usage explosion and the requirement to access data in the cloud, you simply need higher speed protocols, like 10GBASE-KR or PCIe Gen-3, and to be able to create efficient systems you need to integrate PHYs supporting up to 10 Gbps data rates. A country whose industry is not able to design such high speed PHY IP would be like industries forced to import from other countries the rare earth materials absolutely essential to build key aeronautic, defense or communication systems…

Does looking at a single-channel PHY block diagram leave you disappointed? Do you think a PHY design does not look that complex? In fact, some of the most important features are difficult to highlight in a block diagram, like:

  • Multi-featured (CTLE and DFE) receiver and transmitter equalization: adaptive equalizers have many different settings, and in order to select the right one there needs to be some measure of how well a particular equalization setting works. The result is improved Rx jitter tolerance, easier board layout design, and improved immunity to interference.
  • Mapping the signal eye and outputting the signal statistics via the JTAG interface shown: this allows for simple inspection of the actual signal. This in-situ test method can replace very expensive test equipment (when a simple idea gives the best results!)
  • The pseudo-random bit sequence (PRBS) generator sends patterns to verify the transmit serializer, output driver, and receiver circuitry through internal and external loopbacks (keep in mind that wafer-level test equipment is limited in frequency range; such circuitry allows running tests at functional speed on standard testers). A minimal sketch of the PRBS idea follows this list.
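To give a flavor of what that PRBS circuitry does, here is a minimal Python sketch of a PRBS7 generator built from a linear-feedback shift register using the commonly used x^7 + x^6 + 1 polynomial. This is an illustration of the concept only, not Synopsys’s implementation; in the PHY the generator and checker are of course hardware running at line rate.

```python
def prbs7(seed=0x7F, nbits=256):
    """Generate a PRBS7 bit stream from a 7-bit LFSR (x^7 + x^6 + 1, period 127).

    The same generator in the transmitter and a matching checker in the receiver
    let the link be exercised at full line rate through a loopback, without an
    external pattern source.
    """
    state = seed & 0x7F
    bits = []
    for _ in range(nbits):
        feedback = ((state >> 6) ^ (state >> 5)) & 1   # taps x^7 and x^6
        bits.append(state & 1)
        state = ((state << 1) | feedback) & 0x7F
    return bits

stream = prbs7()
# The maximal-length sequence repeats every 127 bits, so a slipped or corrupted
# bit at the far end of the loopback is easy to detect by comparison.
assert stream[:127] == stream[127:254]
```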

If you are interested in eye diagram measurement, and more specifically want to know how to reduce PCI Express 3 “fuzz” with multi-tap filters, you definitely should read this blog from Navraj Nandra (Marketing Director PHY & Analog IP with Synopsys). The very didactic article explains how adaptive equalization works and what inter-symbol interference (ISI) is, and helps you understand how signals contain different frequency content, illustrated by four examples of forty-bit data patterns, from the Nyquist pattern (alternating 1010 data), which has the highest possible frequency content, to the pattern of 20 ’0′s followed by 20 ’1′s, which represents the signal with the lowest frequency content. Navraj has been able to explain advanced signal processing concepts using simple words, and that is anything but simple to do!
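To see the frequency-content point numerically, here is a small numpy sketch, my own illustration rather than anything from Navraj’s article; the 8 Gbps line rate is an arbitrary assumption.

```python
import numpy as np

bit_rate = 8e9   # assumed 8 Gbps line rate, purely for illustration

nyquist = np.tile([1.0, 0.0], 20)                         # 40 bits of 1010...
low_freq = np.concatenate([np.zeros(20), np.ones(20)])    # 20 zeros then 20 ones

for name, pattern in [("1010... (Nyquist)", nyquist), ("20 zeros / 20 ones", low_freq)]:
    spectrum = np.abs(np.fft.rfft(pattern - pattern.mean()))   # drop the DC term
    freqs = np.fft.rfftfreq(len(pattern), d=1.0 / bit_rate)
    strongest = freqs[np.argmax(spectrum)]
    print(f"{name}: strongest tone at {strongest / 1e9:.2f} GHz")
# Prints ~4.00 GHz (half the bit rate) for the alternating pattern and
# ~0.20 GHz for the pattern with long runs of identical bits.
```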

If you just want to know which protocols are supported by the multi-rate PHY IP, spanning 1.25 Gbps to 10.3 Gbps data rates to cover the key standards PCI Express 3.0, 10GBASE-KR, 10GBASE-KX4, 1000BASE-KX, CEI-6G-SR, SGMII and QSGMII, just download the “Enterprise 10G PHY IP” datasheet here, or have a look at this PR from Synopsys…

Eric Esteve from IPNEST


A Brief History of the Foundry Industry, part 1
by Paul McLellan on 03-06-2013 at 2:10 pm

The fundamental economics of the semiconductor industry are summed up in the phrase “fill the fab.” Building a fab is a major investment. With a lifetime of just a few years, the costs of owning a fab are dominated by depreciation of the fixed capital assets (the building, the air and water purification equipment, the manufacturing equipment etc). This puts a big premium on filling the fab and running it as close to capacity as possible. If a fab is not full then the fixed costs will overwhelm the profit on the capacity that is used and the fab will lose money. Of course, if demand is high there is a corresponding problem since a fab that is already full cannot manufacture any more by definition (actually fabs sometimes run at 110% capacity but that is about the most that can be pushed through).
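A back-of-the-envelope sketch shows why utilization dominates the economics; every number below is hypothetical, chosen only to show the shape of the curve.

```python
# All numbers are hypothetical, chosen only to show the shape of the economics.
annual_fixed_cost = 1_000_000_000   # depreciation, building, equipment ($/year)
full_capacity = 500_000             # wafer starts per year at 100% utilization
variable_cost = 500                 # materials, labor, etc. per wafer
selling_price = 3_000               # average revenue per wafer

for utilization in (1.0, 0.8, 0.6, 0.4):
    wafers = full_capacity * utilization
    cost_per_wafer = annual_fixed_cost / wafers + variable_cost
    margin = selling_price - cost_per_wafer
    print(f"utilization {utilization:.0%}: cost/wafer ${cost_per_wafer:,.0f}, "
          f"margin ${margin:,.0f}")
# With these numbers the fab makes money at 100% utilization, breaks even at
# around 80%, and loses heavily below that: the fixed costs do not go away.
```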

The capacity of a fab is usually a good fraction of the overall needs of the company that built it, so there is often a mismatch between the capacity needed, in terms of wafer starts, and what is available. In one case, the semiconductor company is out of capacity, the fab is full, but it could sell more product if only it could get it manufactured. In the other case, the company has surplus capacity, perhaps a newly opened fab, and doesn’t have enough product to keep the fab full. This dynamic led to the original foundry business, which consisted of semiconductor companies, sometimes competitors, buying raw manufactured wafers from each other to smooth out these mismatches between capacity and demand.

The first fabless semiconductor companies such as Chips & Technologies and Xilinx extended this model a little bit. By definition they didn’t have their own fabs, but they would form strategic relationships with semiconductor companies that had excess capacity. The relationships had to be strategic by definition. You couldn’t just walk into a semiconductor company and ask for a price for a few thousand wafers, any more than today you can walk into, say, Ford and ask how much to have a few thousand cars manufactured. It is not how they are set up to do business.

In 1987 a major change took place with the creation of the Taiwan Semiconductor Manufacturing Company, TSMC. It was an outgrowth of Taiwan’s Industrial Technology Research Institute, ITRI. Since very few fabless semiconductor companies existed back then (Chips and Technologies was founded only in 1985, for instance), TSMC’s business model was to be a supplier to the existing foundry business, namely providing manufacturing services to semiconductor companies who were short of capacity in their own fabs. One of the original investors was Philips Semiconductors (since spun out from Philips as NXP), which was also one of the first customers buying wafers.


United Microelectronics Corporation, UMC, was an earlier spinoff from ITRI, created in 1980 as Taiwan’s first semiconductor company. Across the road in Hsinchu from TSMC, its focus also gradually shifted to foundry manufacturing especially once the fabless ecosystem created both a lot of demand and also a wish to have a competitor to TSMC to ensure that pricing remained competitive.

The third of the big three back in that era was Chartered Semiconductor, based in Singapore and backed by a consortium including the Singapore government who saw semiconductor manufacture as a strategic move up the electronic value chain.

The big change that the creation of TSMC made was that it became possible to have semiconductor wafers manufactured without requiring a deep strategic relationship. Pricing wasn’t so transparent that you could just look at the price list on the web (not least because in 1987 there wasn’t a web), but a salesman would quote you for whatever you needed. It was very similar to a metal foundry, which is where the name came from: if you wanted some metal parts forged then they would give you a quote and build them for you. In the same way, if you needed some wafers manufactured you could simply go and get a price.

This might not seem like that significant a change, but it meant that forming a fabless semiconductor company no longer depended on the founders having some sort of inside track with a semiconductor company with a fab. They could focus on doing their design, safe in the knowledge that when they reached the manufacturing stage they could simply buy wafers from TSMC, UMC or other companies that had entered the foundry business.

Companies such as TSMC and UMC were known as pure-play foundries because they didn’t have any other significant lines of business. Semiconductor companies with surplus capacity would still sell wafers and run their own foundry businesses but they were always regarded as a little bit unreliable. Everyone suspected that if the semiconductor company’s business exploded that they would be forced out and have to find a new supplier. Gradually, over time, the semiconductor companies whose primary business was making their own chips became known as Integrated Device Manufacturers or IDMs. This contrasted them with the fabless ecosystem where the companies that created and sold the designs, the fabless semiconductor companies, were different from the companies that manufactured them, the foundries.

The line between fabless semiconductor companies and IDMs has blurred over the last decade. Back in the 1990s, most IDMs manufactured most of their own product, perhaps using a foundry for a small percentage of additional capacity when required. But their own manufacturing was competitive, both in terms of the capacity of fab they could afford to build, and in terms of process technology.

Part 2 is HERE.

Also read: Brief History of Semiconductors


Lithography from Contact Printing to EUV, DSA and Beyond
by Paul McLellan on 03-05-2013 at 6:21 pm

I used my secret powers (being a blogger will get you a press pass) to go to the first day of the SPIE advanced lithography conference a couple of weeks ago. Everything that happens with process nodes seems to be driven by lithography, and everything that happens in EDA is driven by the semiconductor process. It is the place to find out whether people believe EUV is going to be real (lots of doubt), what about e-beam, and whether directed self-assembly is really a thing.

The keynote, by William Siegle, was called Contact Printing to EUV and was a history of, and lessons learned from, a career spanning from when lithography started to the present day (well, a little bit into the future even).

Back when ICs first started, the technology was 1X contact printing. The mask would physically be in contact with the wafer (like a contact print in photography before everything went digital). The design was actually hand-cut out of a red plastic called rubylith (originally a trademark like escalator) at 5X the actual size. This would then be photographically reduced to create the master mask. The master mask would then be used to create submaster masks. And the submaster masks used to make working masks. The working masks didn’t last very long because they were physically in contact with the wafer and so would get damaged and pick up defects fast. One lesson was to beware 1X printing.

IBM decided that the thing to do was to use the technology used to make the mask, photoreduction, to build a stepper (although I don’t know if they used that name back then). But it turned out to be much harder than they thought and the project failed. Perkin-Elmer was the first to build a successful optical projection printer. It was still 1X, with the polygons on the mask the same size as on the wafer, but the mask didn’t contact the wafer so the damage/defect problem was much reduced.

In the late 1970s steppers finally made it into production. Embarrassingly for the US, Nikon and Canon soon had better machines than GCA and PE and dominated the market by the late 1980s. In the mid-1980s ASML emerged (and absorbed PE).

The US got seriously worried about the Japanese, not just in steppers but in semiconductor memory too. Until that point there had been no co-operation among companies, but that was about to change: SRC, Sematech, the DoD VHSIC program, IMEC, LETI, the College of Nanoscale Science and Engineering (CNSE) at the University at Albany. Another lesson was that sharing enables faster progress.

There were dramatic advances in the 1990s as we went from 365nm to 248nm to 193nm light (we are still at 193nm today), along with continual improvement in photoresists, most notably the invention of CAR (chemically amplified resist), and excimer lasers. IBM actually had all of this but kept it secret, until they realized that the equipment industry would never build the equipment IBM needed unless good resist was widely available.

There were some blind alleys too. Ebeam was used in the mask shop and everyone wondered if it would make it to the production flow. The challenge with e-beam is resolution versus throughput. If the beam is small, the throughput is low. As we moved to smaller nodes, e-beam became non-competitive.

The big blind alley was X-ray lithography (around 1nm wavelength). This was killed by three things. Firstly, it needed a synchrotron as a source of X-rays. Second, it required a 1X membrane mask. And remember, beware of 1X printing. But mostly it was built on a false assumption that we would not be able to get beyond 0.25um using optical technology. Well, we are pretty much at 14nm using that technology which is 0.014um in old currency. So one more lesson was to never underestimate the extendability of existing technology.

What has enabled optical lithography to last so long is a combination of optimizing the mask (optical proximity correction, OPC) and holistic litho optimization.

There are also promising future technologies. Nano-imprint has remarkable image fidelity (the mask is essentially pressed onto the wafer). But it has all the same problems as the rubylith era of 1X contact printing, meaning that masters, submasters and working masks are required.

Of course the big hope for the future is EUV, with a wavelength of 13.5nm and a 4X reduction reflective reticle. But we need to get the light source power up to 100-250W, and we need production-worthy masks, resists and metrology. The justification is economic, to avoid multiple patterning, but it won’t be adopted until it can beat that cost ceiling.

And the future? Another rule: it is impossible to predict 10 years ahead. So at this point we can’t tell if EUV will make it, if directed self-assembly will turn out to be a breakthrough, or if carbon nanotubes can be manufactured into circuits economically. We can really only see out about 3 years.


Verification the Mentor Way
by Paul McLellan on 03-05-2013 at 3:05 pm

During DVCon I met with Steve Bailey to get an update on Mentor’s verification. They were also announcing some new capabilities. I also attended Wally Rhines’ keynote (primarily about verification, of course, since this was DVCon; I blogged about that here) and the Mentor lunch (it was pretty much Mentor all day for me) on the verification survey that they had recently completed.

Verification has changed a lot over the past few years. The techniques that were only used by the most advanced groups doing the most advanced designs have become mainstream. Of course this has been driven out of necessity, as verification has expanded to take up more and more of the schedule. This is evident in the 75% increase in the number of verification engineers on a project since 2007, compared to the minor increase in the number of design engineers.


The specification for many designs is that each block must have 100% coverage, or waivers are required. Generating and justifying waivers to “prove” that certain code is unreachable, and so does not need to be covered, is very time consuming. NVidia estimated that they spent 9 man-years on code coverage on a recent project. So one new development is Questa CoverCheck, which automates coverage closure. Formally generated waivers for unreachable code reduce the effort to write manual tests and also eliminate the tedious manual analysis needed to justify waivers to management.


Another new capability is in the area of interconnect verification. Trying to set up all the blocks on a modern SoC so that they generate the required traffic on the interconnect is very time-consuming to do by hand. The simulation is also large, requires a lot of memory and runs slowly. Instead, inFact can be used to generate the traffic more explicitly, replacing the actual blocks of the design with traffic generators that work much more directly.


Mentor also has tools for rules-based verification that give verification engineers and, especially, project management insight into how far along verification really is. When this is done ad hoc it always seems that verification is nearly complete for most of the schedule. As the old joke goes, it takes 90% of the time to do the first 90% of the design, and then the second 90% of the time to do the remaining 10%. By switching to rules-based verification the visibility is both improved and made more accurate.


Watch the Clock
by Paul McLellan on 03-05-2013 at 2:24 pm

Clock gating is one of the most basic weapons in the armoury for reducing dynamic power on a design. All modern synthesis tools can insert clock gating cells to shut down clocking to registers when the contents of the register are not changing. The archetypal case is a register which sometimes loads a new value (when an enable signal is present, for example) and otherwise recirculates the old value back from the output. This can be replaced with a clock gating cell using the same enable so that the register is only clocked when a new value is loaded, and instead of recirculating the old value the register is simply not clocked at all so that it retains the old value.

The efficiency of clock gating can be measured by clock-gating efficiency (CGE). Static CGE simply counts up the percentage of registers that are gated. But not every clock gate has much effect. In the archetypal example mentioned earlier, there is little power saving if the register loads a new value almost all the time, and a huge saving if the new value is almost never clocked in. Instead of using static CGE, dynamic CGE, the percentage of time that the clocks are actually shut off, is a much better measure.
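A toy model makes the distinction clear. The register list and activity numbers below are invented for illustration, and this is not how any particular power tool computes the metrics.

```python
# Toy model: each register records whether it sits behind a clock gate and,
# from simulated activity, the fraction of cycles its clock was actually off.
registers = [
    {"name": "fifo_data",  "gated": True,  "clock_off_fraction": 0.92},
    {"name": "ctrl_state", "gated": True,  "clock_off_fraction": 0.05},
    {"name": "dbg_shadow", "gated": False, "clock_off_fraction": 0.00},
]

static_cge = sum(r["gated"] for r in registers) / len(registers)
dynamic_cge = sum(r["clock_off_fraction"] for r in registers) / len(registers)

print(f"static CGE:  {static_cge:.0%}")    # 67%: two of the three registers are gated
print(f"dynamic CGE: {dynamic_cge:.0%}")   # 32%: the clocks are actually off far less often
```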

But even dynamic CGE ignores just how much power is actually saved. If the enable signal shuts off a large part of the clock tree then the power saving can be large and it is worth the effort to try and improve the enable signal so that it captures all the times that the clock can be suppressed. On the other hand, if an enable only applies to a small part of the design (perhaps just a single flop) then there is little point in trying to optimize the enable (and, in fact, just clock gating the register may not even save power versus leaving the multiplexor to recirculate the output bit).

To perform this analysis most accurately requires clock-tree synthesis (CTS) to have been completed. But CTS is part of the physical design flow, which is too late to go back and optimize the RTL to incrementally reduce power. Instead, Apache’s PowerArtist allows this analysis to be done at the RTL level using models of the clock tree and the associated interconnect capacitance. This allows the enable efficiency to be calculated for each clock gate and highlights the cases where a gate controls a large amount of capacitance and so is a candidate for additional effort to further improve the enable efficiency and further reduce power.
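A similarly simplified sketch shows why weighting each enable by the capacitance it controls matters; the C·V²·f style estimate and all the values below are assumptions of mine, not output from a real clock-tree model.

```python
# Rough clock-power model: a gated subtree of capacitance C saves on the order of
# C * V^2 * f for every cycle its clock is suppressed. All values are assumed.
V = 0.9       # supply voltage in volts
f = 1.0e9     # clock frequency in Hz

clock_gates = [
    {"name": "big_datapath", "cap_farads": 50e-12,   "clock_off_fraction": 0.30},
    {"name": "single_flop",  "cap_farads": 0.02e-12, "clock_off_fraction": 0.95},
]

for gate in clock_gates:
    saved = gate["cap_farads"] * V**2 * f * gate["clock_off_fraction"]
    print(f'{gate["name"]}: ~{saved * 1e3:.3f} mW saved')
# The big subtree saves ~12 mW even with a mediocre 30% off-time, so improving its
# enable is worth real effort; the single flop saves microwatts no matter what.
```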

See Will Ruby’s blog on clock gating here.


Integrating Formal Verification into Synthesis
by Paul McLellan on 03-05-2013 at 1:29 pm

Formal verification can be used for many things, but one is to ensure that synthesis performs correctly and that the behavior of the output netlist is the same as the behavior of the input RTL. But designs are getting very large and formal verification is a complex tool to use, especially if the design is too large for the formal tool to take in a single run. This is an especially severe problem for Oasys RealTime Designer because its capacity is so much larger than that of other synthesis tools. Using formal verification typically requires complex scripting and manual intervention to get results with reasonable runtimes.

Oasys and OneSpin Solutions have just announced an OEM agreement. Now, in EDA, OEM agreements really only work when the product being sold is integrated inside another (such as Concept Engineering’s schematic generator). Otherwise customers always prefer to buy the two products from their respective companies. This OEM is a tight integration. OneSpin is licensing a portion of its OneSpin 360 EC technology, automated functional equivalence checking software, to Oasys to integrate with RealTime Designer.


The integrated product allows RealTime Designer to drive the formal verification process automatically, dividing the design up into portions that can then be verified in parallel using multiple licenses. For example, a nearly 5 million instance design (so perhaps 30 or 40 million gates) can be verified in just over 2 hours using 10 licenses. The integration is fully compatible with the low power and DFT flows in RealTime Designer, correctly handling clock gating and scan chain insertion.
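Conceptually, the parallel flow is divide and verify. The generic Python sketch below shows only that pattern, with a hypothetical check_equivalence function standing in for a single formal run; it is not the actual Oasys or OneSpin interface.

```python
from concurrent.futures import ProcessPoolExecutor

def check_equivalence(block):
    """Hypothetical stand-in for one equivalence-check run on one partition.

    In the real flow this would be a OneSpin EC job holding one license;
    here it simply returns a pass flag for the named block.
    """
    return block, True

if __name__ == "__main__":
    # The synthesis tool knows the design hierarchy, so it can split the netlist
    # into partitions and farm them out to however many licenses are available.
    partitions = [f"block_{i}" for i in range(40)]      # hypothetical partition names
    with ProcessPoolExecutor(max_workers=10) as pool:   # e.g. 10 licenses in parallel
        results = dict(pool.map(check_equivalence, partitions))
    print("design equivalent:", all(results.values()))
```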

OneSpin EC equivalence checking ensures that the RTL design and the output gate-level netlist will produce the same results for the same inputs under all circumstances. It doesn’t use simulation-type approaches but is based on mathematically proving that this is so. In the event that it isn’t so (which would be a bug in RealTime Designer unless manual intervention has taken place) it will produce a counterexample.