wide 1

The Unknown in Your Design Can be Dangerous

by Graham Bell on 07-30-2012 at 10:00 am

The SystemVerilog standard defines an X as an "unknown" value, used to represent a signal that simulation cannot definitively resolve to a "1", a "0", or a "Z". Synthesis, on the other hand, treats an X as a "don't care", enabling greater flexibility and optimization. Unfortunately, Verilog RTL simulation semantics often mask the propagation of an unknown value by converting the unknown to a known value, while gate-level simulations show additional Xs that will not exist in real hardware. The result is that bugs get masked in RTL simulation, and although they show up at the gate level, time-consuming iterations between simulation and synthesis are required to debug and resolve them. Resolving differences between gate-level and RTL simulation results is painful because synthesized logic is less familiar to the user, and Xs make correlation between the two harder. Unwarranted X-propagation thus proves costly, causes painful debug, and sometimes allows functional bugs to slip through to silicon.
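To see why RTL semantics mask Xs, consider how a Verilog `if` treats an unknown condition: anything that is not 1 takes the else branch. A minimal four-state sketch (in Python rather than HDL; the helper names are illustrative, not part of any standard) contrasts that with what the silicon actually does:

```python
# Four-state values modeled as the strings '0', '1', 'x', 'z'.

def rtl_if_else(cond, then_val, else_val):
    """Verilog RTL semantics: any non-1 condition ('0', 'x', 'z')
    takes the else branch, silently converting an unknown
    condition into a known result (X-optimism)."""
    return then_val if cond == '1' else else_val

def hardware_if_else(cond, then_val, else_val):
    """What a real mux does: an unknown select yields an unknown
    output unless both data inputs already agree."""
    if cond == '1':
        return then_val
    if cond == '0':
        return else_val
    return then_val if then_val == else_val else 'x'

# An uninitialized select ('x') is masked at RTL but not in silicon:
print(rtl_if_else('x', '1', '0'))       # RTL simulation reports '0'
print(hardware_if_else('x', '1', '0'))  # the hardware is really 'x'
```

In real hardware the select is some indeterminate voltage, so the output is genuinely unknown; the RTL simulator's optimistic '0' is exactly what lets such bugs hide until gate-level simulation.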

Continued increases in SoC integration and the interaction of blocks in various states of power management are exacerbating the X problem. In simulation, the X value is assigned to all memory elements by default. While hardware resets can be used to initialize registers to known values, resetting every flop or latch is not practical because of the routing overhead. For synchronous resets, synthesis tools typically combine the reset with data-path signals, thereby losing the distinction between X-free and X-prone logic. This in turn causes unwarranted X-propagation during the reset phase of simulation. State-of-the-art low-power designs have additional sources of Xs, with the added complexity that they manifest dynamically rather than only during chip power-up.

Lisa Piper, from Real Intent, presented on this topic at DVCon 2012, and in her paper she described a flow that mitigates X issues. The flow is reproduced here.

She describes a solution to the X-propagation problem that is part technology and part methodology. The flow brings together structural analysis, formal analysis, and simulation in a way that addresses all the problems and scales. The figure above shows the use model for both the design engineer and the verification engineer. The solution is centered on static analysis for the design engineer and is primarily simulation-based for the verification engineer. The designer-centric flow is preventative in nature, while the verification flow is intended to identify and debug issues.

She also gave a video interview on her presentation at DVCon 2012 and you can watch it here.


Tensilica Joins Wi-Fi Alliance

by Paul McLellan on 07-30-2012 at 7:00 am

The Wi-Fi Alliance is an industry consortium dedicated to driving adoption of the various Wi-Fi standards which also go under the rather less catchy name of IEEE 802.11x (where the x varies depending on the generation of the standard, right now a, b, g or n). They also certify devices for interoperability.

The Wi-Fi Alliance says that today there are over 1 million Wi-Fi hotspots in the world. I'm assuming that counts just the public ones; if we include all of us with a home router (which I'm pretty sure includes everyone reading this blog and probably pretty much everyone you know), then the number has to be much larger. After all, over 1 million LTE base stations are predicted to go in over the next few years (many of them with Tensilica inside).

Tensilica already have some customers using their dataplane processor units (DPUs) for some of the older Wi-Fi standards (with comparatively low data rates).

Tensilica plans to integrate Wi-Fi with its multi-standard radio capabilities. Several of Tensilica’s processors are ideal for Wi-Fi including ConnX D2 and the BBE DSP product families. As always, Tensilica are focused on delivering high performance at low power, in this case providing power-efficient programmable solutions for the next-generation Wi-Fi standards. The recently-announced ConnX BBE32 UE DSP has been specifically optimized for 3G and LTE uplink/downlink and should be ideal for the more demanding Wi-Fi standards. 802.11n, for example, has a data rate of up to 450 Mbps and future standards will be even higher bandwidth.

So Tensilica announced today that they have joined the Wi-Fi Alliance. This will allow their designs to be Wi-Fi Certified which shows not just that they work but, perhaps more importantly, that they do not disrupt other Wi-Fi devices using the same frequency bands (Wi-Fi does not put signals into separate time and frequency bands like 2G wireless technologies other than CDMA did).

Although not directly related to the Tensilica announcement, the Wi-Fi Alliance is also working on a standard called Passpoint to make it easier to roam among different Wi-Fi hotspots without having to log in with different credentials each time you are somewhere new. To me, the most annoying of these are the free access points that still require you to go to a splash screen merely to tick a box agreeing to some conditions that you haven't even read. At least if you are logging into a hotspot that requires you to pay, there isn't much of a way around entering your credit card data. You can expect to see a gradual blurring of Wi-Fi hotspots with pico-basestations, so that networks can offload as much data as they can off the increasingly congested backhaul.

But I love my new iPad 3 (sorry, "the new iPad"), which has LTE access and can be turned into a hotspot, so when I stay in hotels in the US I never pay the outrageous $15/day internet access any more. If only they would fix international data roaming. My Verizon iPad account gives me 3 gigabytes of data per month for $20 (or something like that). When I'm out of the country, AT&T sends me a text message telling me that data rates are $20/megabyte. That's 3,000 times more expensive. If I have a good LTE connection maybe I can download at as much as 2.5 megabytes per second. That's $50/second. Even if I don't want to download a movie, the rates are expensive enough to make things we take for granted, like Google Maps or email, too expensive to use outside a Wi-Fi hotspot. So thank goodness for the Wi-Fi Alliance and all the companies that have made Wi-Fi what it is today.



SemiWiki.com Analytics Exposed 2012

by Daniel Nenni on 07-29-2012 at 7:30 pm


About 4 years ago some of my semiconductor cohorts urged me to blog. "Hey Dan, you're a funny guy, write about EDA and IP, make us laugh!" Of course, what I think is funny most people think is snarky, which is a nice word for being a smart ass. The traditional semiconductor press was crumbling and the non-traditional EDA websites were outdated, so I figured the timing was right for a new communication channel inside the semiconductor ecosystem, absolutely.

The Semiconductor Wiki Project, the premier semiconductor collaboration site, is a growing online community of professionals involved with the semiconductor design and manufacturing ecosystem. Since going online on January 1, 2011, through June 30, 2012, more than 300,000 unique visitors have landed at www.SemiWiki.com, viewing more than 2.5M pages of blogs, wikis, and forum posts. WOW!

Paul McLellan, Daniel Payne, and dozens of others were already blogging, so I was one of many. When I mentioned blogging to John Cooley at DAC he laughed and said I was an idiot. Proving John wrong was motivation enough to get started. I blogged every Sunday for 2 years and the analytics kept growing. Thousands of semiconductor professionals subscribed to my blog and read it every week. No fortune but plenty of fame. Random people walked up to me and said "Hey Dan, I read your blog!" It also got me access to the executive levels of the semiconductor ecosystem. Lunch with Mentor CEO Wally Rhines was well worth the effort.

About a year and a half into it I decided to see if I could scale my blog through crowdsourcing, and SemiWiki was born. My oldest son Ryan had just earned his BS degree and had a six-month wait for graduate school, so he was employee #1. Next came Paul McLellan, Daniel Payne, Eric Esteve, and my beautiful wife Shushana. Paul confessed to me over dinner one night that at first he didn't think SemiWiki would succeed. I had my doubts as well, but clearly SemiWiki is successful and now some refer to it as a "force of nature". I really appreciate that.

Like my blog, SemiWiki was slow going at first. Luckily we had Atrenta and Mentor as beta partners. TSMC, GlobalFoundries, ClioSoft, SpringSoft, Solido, and others followed. With lots of input from friends and foes, today SemiWiki has exceeded all expectations, absolutely.

Google Analytics was a challenge, since SemiWiki has dozens of micro-sites inside it, which caused some bumps and bruises, but here is what Google says today:

Q1 2011 was a pretty good start:
Unique Visitors: 20,397
Pageviews: 201,498

Q2 2011 business started picking up:
Unique Visitors: 37,513
Pageviews: 273,909

At this point we had about 500 registered users. Not good at all in my opinion. I was expecting thousands. After one year we were just short of 5,000 registered users. Fast forward to where we are today.

Q2 2012 SemiWiki numbers:
Unique Visitors: 85,012
Pageviews: 506,316

January 1, 2011 – June 30, 2012:
Unique Visitors: 309,036
Pageviews: 2,520,522

Total Blogs, Wikis, and Forum Posts on SemiWiki: 5,091

Registered SemiWiki Users: 12,495

Traffic Sources:

  • 27.34% Search
  • 43.81% Referral
  • 28.85% Direct

Demographics:

  • North America
  • India
  • United Kingdom
  • Germany
  • France
  • Taiwan
  • Japan
  • China
  • Singapore
  • South Korea

Top 20 company domains:

  • Intel
  • AMD
  • Samsung
  • Qualcomm
  • TSMC
  • Synopsys
  • GlobalFoundries
  • Xilinx
  • Apple
  • Altera
  • Mentor
  • Broadcom
  • Freescale
  • Cisco
  • IBM
  • Mediatek
  • Huawei
  • Cadence
  • Nvidia
  • Texas Instruments

Rankings from the Alexa website:

  • EETimes (Rank: 24,870)
  • Design And Reuse (Rank: 312,684)
  • SemiWiki (Rank: 431,594)
  • EDAcafe (Rank: 441,196)
  • ChipEstimate (Rank: 861,483)
  • DeepChip (Rank: 1,175,172)
  • Gabe on EDA (Rank: 1,625,750)
  • SoCCentral (Rank: 3,765,891)

What's next for SemiWiki? Plenty! Ryan just finished his graduate degree and is developing new features. We have a dozen or so bloggers (anybody can blog on SemiWiki), and we are working closely with more than two dozen companies. As the analytics grow, so does the value proposition, and it is easy to see why.

SemiWiki is a real-time feedback loop. Prime examples are white papers and webinars, key communication channels for the semiconductor ecosystem. SemiWiki bloggers download white papers or view webinars and write a 500-word blog post on what they feel the message is. Readers can then make public or private comments and either click over to download or view. If 5,000 people read the blog and 0 click over, not good. If 5,000 people read it and 250 click over, that is pretty good (a 5% click-through rate). Vendors can then take this data and feed it back into their marketing and product development cycle, simple as that.

    So I want to thank everyone who made SemiWiki what it is today and ask that you keep the real-time feedback loop going for the greater good of the semiconductor ecosystem!



    NVM IP: why only anti fuse solution from Novocell Semiconductor is 100% reliable?
    by Eric Esteve on 07-28-2012 at 3:29 am

The concept of a Non-Volatile Memory (NVM) block that can be integrated into an ASIC is relatively recent; Novocell, for example, was founded in 2001. NVM IP integration into an ASIC is a pretty smart technology: integrating from a few bytes up to Mbits into a SoC can help reduce the number of chips in a system, increase security, allow Digital Rights Management (DRM) for video and TV applications, or provide encryption capability.

In the past (in the 1990s), integrating a Flash memory block into a SoC required adding specific mask levels, leading to a cost overhead of about 40%. I remember trying to sell such an ASIC solution in 1999-2001: the Flash memory capability looked very attractive to the customer, until we talked about pricing and the customer realized that the cost of the entire chip would be impacted. I made few, very, very few sales of ASICs with embedded Flash! The current NVM IP offering from Novocell Semiconductor does not carry such a cost penalty: the blocks can be embedded in standard logic CMOS without any additional process or post-process steps and can be programmed at the wafer level, in package, or in the field, as end use requires.

An interesting feature offered by the Novocell NVM family, based on antifuse One-Time Programmable (OTP) technology, is the "breakdown detector". Does that mean that, without this function, delivered only by Novocell, you may program an OTP cell by activating the antifuse and end up with an apparently programmed memory cell, one which you think you have programmed but which in fact does NOT contain the desired value? The answer is, unfortunately for Novocell's competitors, YES!

The breakdown detector is used to determine precisely when the voltage applied to the gate (breaking the oxide, consequently allowing current to flow through it, and finally programming the memory cell) has effectively created an irreversible oxide breakdown, the "hard breakdown", as opposed to a "soft breakdown", which is an apparent, reversible oxide breakdown. If, for example, the oxide has been stressed for a period of time that is not long enough, the hard breakdown is not effective and the user can't program the memory cell. Looking at the two pictures helps in understanding the mechanisms:

• In the first, the current (versus time) rises sharply only after the thermal breakdown is effective.

• The second picture shows the current behavior of a memory cell for different cases; we can see that when the hard breakdown is effective, the current value is about three orders of magnitude higher than for a progressive (or soft) breakdown.

If your NVM IP does not include a breakdown detector, you can find ways to increase your level of confidence in the OTP programming; we will look at two of them:

• You can choose to increase the duration for which you apply the high programming voltage. But you have to know that this time is strongly dependent on technology factors, such as oxide thickness, doping concentration of the substrate, and so on. The duration varies between technology nodes, which seems quite obvious, but it also varies, for the same technology node, with the silicon foundry selected. So, do you multiply the average time by a factor of 2? Or maybe by 5? Or 10? In other words, if you plan to program a large OTP, you will consume more tester time than needed, which is, as everybody knows, very cheap!

• If you use the above-mentioned approach, you have to keep in mind that, whatever the duration selected, there is a (statistical) remaining risk that some cells may not have been properly programmed. Thus, NVM IP vendors (those not using a breakdown detector) have decided to add spare memory bits. In fact, they quickly realized that to be sure of having enough margin, they should duplicate the memory array! The cost in terms of wasted silicon area is proportional to the memory size…
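The contrast between blind fixed-duration programming and detector-based programming can be sketched as follows. This is a behavioral model under assumed numbers; Novocell's actual SmartBit detector is an analog circuit, and the currents, threshold, and function names here are purely illustrative:

```python
def program_fixed_time(cell_breakdown_time, pulse_time):
    """Fixed-duration programming: succeeds only if the pulse happens
    to be longer than this particular cell's (variable) breakdown time."""
    return pulse_time >= cell_breakdown_time

def program_with_detector(cell_breakdown_time, max_time, step=1):
    """Detector-based programming: keep the voltage applied while
    monitoring the cell current, and stop only once the jump of about
    three orders of magnitude that marks a hard breakdown is observed."""
    t = 0
    while t < max_time:
        t += step
        # Soft/no breakdown leaks ~1 nA; hard breakdown conducts ~1 uA.
        current = 1e-9 if t < cell_breakdown_time else 1e-6
        if current > 1e-7:          # hard-breakdown threshold (illustrative)
            return True             # cell verifiably programmed
    return False

# A slow cell defeats a fixed 10-unit pulse but not the detector:
print(program_fixed_time(15, 10))        # False: silently unprogrammed
print(program_with_detector(15, 100))    # True: confirmed hard breakdown
```

The detector also stops as soon as the breakdown is confirmed, so fast cells do not waste tester time the way a worst-case fixed pulse does.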

Clearly, one of Novocell's differentiators, and probably the most important, is reliability. To avoid the limitations of traditional embedded NVM technology, Novocell uses the patented dynamic programming-and-monitoring process and method of the Novocell SmartBit™, ensuring that 100% of customers' embedded bit cells are fully programmed. The result is Novocell's unmatched 100% yield and unparalleled reliability, guaranteeing customers that their data is fully programmed initially and will remain so for an industry-leading 30 years or more. Novocell NVM IP scales to meet the NVM size and complexity challenges that grow exponentially as SoCs continue to move to advanced nodes such as 45nm and beyond.

Eric Esteve from IPNEST

    A Brief History of Semiconductor IP



    Addressing the Nanometer Digital Design Challenges! (Webinars)

    by Daniel Nenni on 07-27-2012 at 7:30 pm

    Optimizing logical, physical, electrical, and manufacturing effects, Cadence digital implementation technology eliminates iteration without sacrificing design quality by addressing timing sensitivity, yield variation, and leakage power from the start.
    Continue reading “Addressing the Nanometer Digital Design Challenges! (Webinars)”


    Synopsys Protocol Analyzer Video

    by Paul McLellan on 07-27-2012 at 3:07 pm

Josefina Hobbs, a solutions architect at Synopsys, demonstrates protocol debug made easy using the Synopsys Protocol Analyzer. This gives users a graphical view of the transfers, transactions, packets, and handshaking of a protocol. The video also shows the integration of the Synopsys Protocol Analyzer with SpringSoft's Verdi using the Verdi Interoperability Apps (VIA), which give open access to the Verdi KDB and FSDB databases.

Analyzing the implementation of modern protocols is complex. Transactions on buses are interleaved, so the relationship between the transactions themselves, which ones are happening concurrently, and which bus activity is associated with which transaction is not obvious. The Synopsys Protocol Analyzer makes this clear. In the screenshot below, the leftmost part of the window shows the transactions, the middle part (with the arrow) shows the concurrent transactions, and the right part (brown) shows the bus activity. Clicking on any of these highlights the corresponding pieces of the puzzle.


    This makes it easy to unravel the complex behavior of highly interleaved traffic, understand activity, identify bottlenecks and debug anything unexpected. The Protocol Analyzer also links with simulation logfiles, as in the screenshot below, so that timelines in the protocol are linked to the same point in the simulation logfile, making it easy to investigate issues by moving up and down the different levels of abstraction.

    Protocol Analyzer can also be linked to SpringSoft’s Verdi so that the raw waveforms can be examined, and still have all the links to the higher level representations, and synchronized timelines, as in the screenshot below.

    The video is hosted by Synopsys here and by SpringSoft here.


    Parasitic-Aware Design Flow with Virtuoso

    by Daniel Payne on 07-27-2012 at 12:01 pm

    I learn a lot these days through webinars and videos because IC design tools like schematic capture and custom layout are visually oriented. Today I watched a video presentation from Steve Lewis and Stacy Whiteman of Cadence that showed how Virtuoso 6.1.5 is used in a custom IC design flow: Continue reading “Parasitic-Aware Design Flow with Virtuoso”


    Addressing the Nanometer Custom IC Design Challenges! (Webinars)

    by Daniel Nenni on 07-26-2012 at 7:30 am

    Selectively automating non-critical aspects of custom IC design allows engineers to focus on precision-crafting their designs. Cadence circuit design solutions enable fast and accurate entry of design concepts, which includes managing design intent in a way that flows naturally in the schematic. Using this advanced, parasitic-aware environment, you can abstract and visualize the many interdependencies of an analog, RF, or mixed-signal design to understand and determine their effects on circuit performance.

    Watch technical presentations and demonstrations on-demand and learn how to overcome your design challenges with the latest capabilities in Cadence custom/analog design solutions.

    Virtuoso 6.1.5 – Front-End Design

    Steve Lewis, Product Marketing Director

    Highlights of new front-end design tools and features (including a new waveform viewer, Virtuoso Schematic Editor, and Virtuoso Analog Design Environment), and how to identify and analyze parasitic effects early.
    View Sessions

    Virtuoso Multi-Mode Simulation
    John Pierce, Product Marketing Director

    Updates on the latest simulation capabilities including Virtuoso Accelerated Parallel Simulator distributed multi-core simulation mode for peak performance; a high-performance EMIR flow; Virtuoso APS Accelerated Parallel Simulator RF analyses; and an enhanced reliability analysis flow.
    View Sessions

    Virtuoso 6.1.5 – Top-Down AMS Design and Verification
    John Pierce, Product Marketing Director
    Highlights of the latest in advanced mixed-signal verification methodology, checkboard analysis, assertions, and how to travel seamlessly among all levels of abstraction of the design.
    View Sessions

    Virtuoso 6.1.5 – Back-End Design
    Steve Lewis, Product Marketing Director
Highlights of the latest in constraint-driven design; Virtuoso Layout Suite; links between parasitic-aware design, rapid analog prototyping, and QRC Extraction; and top-down physical design: floorplanning, pin optimization, and chip assembly routing with the Virtuoso Space-Based Router.
    View Sessions

    Virtuoso 6.1.5 – MS Design Implementation
    Michael Linnik, Sr. Sales Technical Leader
    Highlights of the latest mixed-signal implementation challenges and solutions that link Virtuoso and Encounter technologies on the OpenAccess database, including analog/digital data interoperability, common mixed-signal design intent, advances in design abstraction, concurrent floorplanning, mixed-signal routing, and late-stage ECOs.
    View Sessions

    What’s New in Signoff
    Hitendra Divecha, Sr. Product Marketing Manager

    Highlights of standalone and qualified in-design signoff engines for parasitic extraction, physical verification, power-rail integrity analysis, litho hotspot analysis, and chemical-mechanical polishing (CMP) analysis.
    View Sessions


    Jasper Customer Videos

    by Paul McLellan on 07-25-2012 at 2:28 pm

Increasingly at DAC and other shows, EDA companies such as Jasper are having their customers present their experiences with the products. Everyone has seen marketing people present wonderful visions of the future that turn out not to materialize. But a customer speaking about their own experiences has a credibility that an EDA marketing person (and I speak as one myself) does not. In fact, EDA marketing doesn't exactly have a great reputation (what's the difference between an EDA marketing person and a second-hand car salesman? The car salesman knows when he is lying).

    At DAC Jasper had 3 companies present their experiences using various facets of formal verification using JasperGold. These are in-depth presentations about exactly what designs were analyzed and how, and what bugs they found using formal versus simulation.

    The presentations were videoed and are available here (registration required).

    ARM
Alan Hunter from Austin (which, based on his accent, is a suburb of Glasgow) talked about using JasperGold to validate the Cortex-R7. It turns out that a lot of RTL was available before the simulation testbench environment was set up, so designers started to use formal early (for once). They found that it was easier to get formal set up than simulation and, when something failed, the formal results were easier to debug than a simulation failure.

He then discussed verifying the Cortex-A15 L2 cache. They had an abstract model of the cache as well as the actual RTL. The result was that they discovered a couple of errors in the specification using the formal model. They could also use the formal model to quickly work out how to get the design into certain states. And they found one serious hardware error.

With the A7, where formal was used early, they focused on bug-hunting and were less concerned about having full proof coverage. But with the A15 they were more disciplined and focused on proving that both the protocol itself and the reference implementation were rock solid under all the corner cases.

    NVIDIA
NVIDIA talked about using JasperGold for Sequential Equivalence Checking (SEC). Equivalence Checking (EC), or I suppose I should say Combinational Equivalence Checking to distinguish it from Sequential, is much simpler. It identifies all the registers in the RTL, finds them in the netlist, and then verifies that the rest of the circuit matches the RTL. The basic assumption is that the register behaviour in the RTL is the same as in the netlist. SEC relaxes that because, for example, a register might be clock-gated when its input is the same as its output. For example, if a register in a pipeline is not clocked on a clock cycle, the downstream register does not need to be clocked on the next clock cycle, since we know its input has not changed. In effect, the RTL has been changed. This type of optimization is common for power reduction, done either manually or by Calypto (who I think have the only automatic solution in the space). Half the bugs found in one design came from simulation, but the other half came from FV. About half of the bugs found by FV would eventually have been discovered by simulation…but would have taken another 9 months of verification.
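The clock-gating transformation being verified here can be modeled in a few lines. This is a Python sketch of the concept, not JasperGold input; the two-stage pipeline and the function names are my own illustration:

```python
def run_reference(stream):
    """Golden two-stage pipeline: stage 1 is enable-gated,
    stage 2 clocks on every cycle. Registers reset to 0."""
    r1 = r2 = 0
    trace = []
    for d, en in stream:
        r1_next = d if en else r1   # stage 1 holds when not enabled
        r2_next = r1                # stage 2 always reloads (often with the same value)
        r1, r2 = r1_next, r2_next
        trace.append(r2)
    return trace

def run_gated(stream):
    """Power-optimized pipeline: stage 2's clock is gated with a
    one-cycle-delayed enable, since its input cannot have changed
    if stage 1 did not clock on the previous cycle."""
    r1 = r2 = 0
    en_d = False                    # delayed enable; reset value is harmless since r1 == r2 == 0
    trace = []
    for d, en in stream:
        r1_next = d if en else r1
        r2_next = r1 if en_d else r2    # gated: hold instead of reloading an unchanged value
        r1, r2 = r1_next, r2_next
        en_d = en
        trace.append(r2)
    return trace

# Sequential equivalence: identical visible behavior despite different clocking.
stream = [(3, True), (5, False), (7, True), (9, False), (1, True)]
assert run_reference(stream) == run_gated(stream)
```

SEC's job is to prove this equivalence for all input streams and all reachable states, not just for one trace as in this toy check; combinational EC would flag the two designs as different because the registers no longer behave identically cycle by cycle.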

    But wait, JasperGold doesn’t do sequential equivalence checking does it? No, it doesn’t but it will. NVIDIA is an early customer of the technology and have been using it on some very advanced designs to verify complex clock gating. Watch out for the Jasper SEC App.

    ST Microelectronics
ST talked about how they use JasperGold for verification of low-power chips. The power policy is captured in CPF, and the two real alternatives for verification are power-aware RTL simulation and FV. Power-aware RTL simulation is very slow, still suffers from the well-known X-optimism and X-pessimism problems, and faces an explosion of coverage due to the power-management alternatives. And, like all simulation, it is incomplete.

Alternatively, they can create a power-aware model, in the sense that the appropriate parts of CPF (such as power domains, isolation cells, and retention cells) are all included in the model. This way they can debug the power optimization and even analyze power-up and power-down sequences, in particular to ensure that no X values are propagated to the rest of the system.
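The power-down check described above amounts to proving that isolation is always asserted before power is removed. A toy model (Python, purely illustrative; a real flow works on the CPF description and the RTL, and all names here are mine) shows how an ordering bug lets an X escape:

```python
def domain_output(powered):
    """Output of a switchable power domain: unknown when powered off."""
    return '1' if powered else 'x'

def isolate(value, iso_enable, clamp='0'):
    """Isolation cell: clamps the domain output to a known value
    while isolation is enabled (modeled after CPF isolation rules)."""
    return clamp if iso_enable else value

def check_no_x(cycles):
    """Power-down sequence check: isolation must be asserted before
    power is removed, or an X escapes to the always-on logic."""
    for powered, iso in cycles:
        if isolate(domain_output(powered), iso) == 'x':
            return False            # X propagated: sequencing bug
    return True

good_sequence = [(True, False), (True, True), (False, True)]   # isolate, then power off
bad_sequence  = [(True, False), (False, False), (False, True)] # power off before isolating
print(check_no_x(good_sequence))  # True
print(check_no_x(bad_sequence))   # False
```

Formal tools effectively prove the `check_no_x` property over every legal control sequence, which is exactly where simulation's incompleteness hurts.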


    ARM and TSMC Beat Revenue Expectations Signaling Strength in a Weakening Economy?

    by Daniel Nenni on 07-25-2012 at 11:00 am

Fabless semiconductor ecosystem bellwethers TSMC and ARM buck the trend, reporting solid second quarters. Following "TSMC Reports Second Highest Quarterly Profit", the British ARM Holdings "Outperforms Industry to Beat Forecasts". Clearly, the tabloid-press claims of the death of the fabless ecosystem are greatly exaggerated.

    “ARM’s royalty revenues continued to outperform the overall semiconductor industry as our customers gained market share within existing markets and launched products which are taking ARM technology into new markets. This quarter we have seen multiple market leaders announce exciting new products including computers and servers from Dell and Microsoft, and embedded applications from Freescale and Toshiba. In addition, ARM and TSMC announced a partnership to optimize next generation ARM processors and physical IP and TSMC’s FinFET process technology.” Warren East, ARM CEO.

    • ARM’s Q2 revenues were up 12% on Q1 at £135m with profit up 23% at £66.5m.
    • H1 revenues were up 12% on H1 2011 at £268m, with profit up 22% at £128m.
    • 23 processor licenses signed across key target markets from microcontrollers to mobile computing
    • Two billion chips were shipped into a wide range of applications, up 9% year-on-year compared with industry shipments being down 4%
    • Processor royalties grew 14% year-on-year compared with a decline in industry revenues of 7%
• 3 Mali graphics processor licenses were signed in Q2, of which two were with new customers for Mali technology
    • 5 physical IP Processor Optimisation Packs were licensed.

    ARM enters the second half of 2012 with a record order backlog and a robust opportunity pipeline. Relevant data for the second quarter, being the shipment period for ARM’s Q3 royalties, points to a small sequential increase in industry revenues. Q4 royalties are harder to predict as macroeconomic uncertainty may impact consumer confidence, and some analysts have become less confident in the semiconductor industry outlook in the second half. However, building on our strong performance in the first half, we expect overall Group dollar revenues for full year 2012 to be in line with market expectations.

Even more interesting is the recently announced TSMC/ARM multi-year agreement that extends beyond 20nm technology (to 16nm) to enable the production of next-generation ARMv8 processors that use FinFET transistors and leverage ARM's physical IP, which currently covers a production process range from 250nm to 20nm.

    “By working closely with TSMC, we are able to leverage TSMC’s ability to quickly ramp volume production of highly integrated SoCs in advanced silicon process technology,” said Simon Segars, executive vice president and general manager, processor and physical IP divisions, ARM. “The ongoing deep collaboration with TSMC provides customers earlier access to FinFET technology to bring high-performance, power-efficient products to market.”

    “This collaboration brings two industry leaders together earlier than ever before to optimize our FinFET process with ARM’s 64-bit processors and physical IP,” said Cliff Hou, vice president, TSMC Research & Development. “We can successfully achieve targets for high speed, low voltage and low leakage, thereby satisfying the requirements of our mutual customers and meeting their time-to-market goals.”

This agreement makes complete sense, with 90% of ARM silicon going through TSMC and the PR battle Intel is now waging against both ARM and TSMC. But let's not forget the Intel Atom / TSMC agreement of March 2009:

"We believe this effort will make it easier for customers with significant design expertise to take advantage of benefits of the Intel Architecture in a manner that allows them to customize the implementation precisely to their needs," said Paul Otellini, Intel president and CEO. "The combination of the compelling benefits of our Atom processor combined with the experience and technology of TSMC is another step in our long-term strategic relationship."

    Sorry Paul, clearly this was not the case. TSMC is customer driven and Atom had no customers. So there you have it. The agreement was “put on hold” less than a year later:

Intel spokesperson Bill Kircos said no TSMC-manufactured Atoms are on the immediate horizon, though he added that the companies have achieved several hardware and software milestones and said they would continue to work together. "It's been difficult to find the sweet spot of product, engineering, IP and customer demand to go into production," Kircos said.


Given that wrong turn, the current Intel strategy is to offer ASIC services, versus the traditional foundry COT (customer-owned tooling) model, for Atom SoCs using a CPU-centric 22nm process. This turnkey ASIC service is currently called Intel Foundry Services, of which we have heard plenty but have yet to see any silicon. Just my observation, of course.