Intel Foundry Rounds Out IP Lineup With ARM at IDF 2016
by Patrick Moorhead on 09-28-2016 at 12:00 pm

There are always debates about who does what best in the semiconductor industry, but most agree that Intel is the best in process and transistor technology. This leadership has served the company extremely well over the last few decades and allowed them to reach a position of dominance in the PC and server semiconductor markets. To build such a leadership position, Intel has repeatedly invested billions in true R&D (with a focus on “R”) to invent the next big transistor and to reach the next big process node. In addition to those investments, Intel has had to spend billions of dollars building new fabs to accommodate these new process nodes and technologies. This investment not only serves their own designs but, since 2010, those of foundry customers as well.

At the Intel Developer Forum, Intel announced new customers, customer updates and new ARM Holdings-based IP for their custom foundry, which I believe could open up significant opportunities for their foundry business.

Intel Custom Foundry a natural extension of the fab
The fab business has always been a business of scale, so it comes as no surprise that Intel would want to take on the fabrication of others’ chips to increase scale. Intel faces competition from Samsung, TSMC and GlobalFoundries in the foundry space, but until 2010 Intel only produced their own chips from their own designs. That changed when Intel took on Altera, which they eventually acquired, and now the company is expanding their foundry capabilities and customer base even more. To compete with the 10nm processes from Samsung and TSMC, Intel is taking a bigger role in the foundry business with their Custom Foundry, which was outlined today.

New foundry customers and details on others

Today at the Intel Developer Forum, Intel Custom Foundry announced new customers and details on previously announced customers ranging from 22nm all the way down to 10nm, including Achronix, LG Electronics, Netronome and Spreadtrum. Their products range from networking accelerators and FPGAs to mobile SoCs.

This is part of a journey that Intel started in 2010 to enable multiple fab customers, which involved making a lot of changes to their standard tools and integrating other companies’ IP. It meant building chips with the help of tools and IP from leaders like Synopsys, Cadence and Mentor Graphics, which led to Intel building out their ecosystem of design tools and supported IP. This was a huge change from prior decades of Intel designs, Intel tools and Intel IP in Intel fabs.

New ARM-based IP
The lessons learned from building chips for others have led Intel to another major milestone in their capabilities as a fab. That milestone, announced today at IDF, is the ability to fab processors for ARM Holdings customers using Artisan IP. There’s an ARM processor on the Altera FPGA they fab, too.

The ability to fab ARM Holdings processors in the traditionally x86 Intel fab comes from a partnership between Intel and ARM that includes ARM’s Artisan Physical IP platform. This means that Intel has access to ARM’s high-performance and high-density logic libraries as well as their memory compilers and POP IP. While Intel has had the ability to fab an ARM-based part, the addition of the Artisan IP makes it easier.

This means that ARM Holdings customers have another choice of foundry at a leading process node like 10nm. No 10nm chips are shipping yet, so all foundries are talking about 10nm as a future node, but the reality is that many foundries will have 10nm chips shipping in 2017.

What it all means
The partnership between ARM and Intel to deliver ARM Artisan Physical IP within Intel’s Custom Foundry further legitimizes Intel’s efforts to open their Custom Foundry to more than what some considered “the random customer”. The addition of ARM IP to Intel’s fab also legitimizes ARM’s own Artisan Physical IP as the industry standard for physical IP with all major fabs now supporting it.

Intel has come a long way since making only their own chips with their own designs and tools, and now they are beginning to present themselves as a serious foundry competitor to Samsung, TSMC, and GlobalFoundries, all three of which rely heavily on mobile foundry volumes for profitability.

Intel may not be making chips for Qualcomm any time soon, and perceived competitive challenges will need to be overcome, but there is real potential for Intel to continue growing their fab business as a way to improve the scale of their fabs and increase profitability. If Intel keeps driving their technology like they have for the last decade, some may have to fab there to remain competitive.

Net-net, this is a win for both Intel and ARM Holdings, but the biggest winners are the customers, who will see even greater competition in the foundry business.

Also read: The 2016 Leading Edge Semiconductor Landscape


What Will Kill ROP Cyberattacks?
by Matthew Rosenquist on 09-28-2016 at 10:00 am

IBM recently announced a software-oriented solution to help eradicate Return Oriented Programming (ROP) malware attacks. ROP is a significant and growing problem in the industry. Crafty hackers use snippets of code from other trusted programs and stitch them together to create their attacks. It has become a very popular and effective technique in top malware.

Almost 90 percent of exploit-based software attacks use the hostile ROP technique in the chain of attack.
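
The mechanics are easier to see in miniature. Below is a deliberately harmless Python sketch of the idea behind ROP, not a working exploit: the attacker injects no code at all, only a sequence of “return addresses” that chains fragments of trusted code (gadgets) into new behavior.

    # Illustrative only: models the concept of ROP, not a real exploit.
    # Each "gadget" is a fragment of trusted code; the attacker controls
    # only the chain of addresses and the data on the stack.

    def gadget_load(state):            # borrowed fragment: pop into a register
        state["reg"] = state["stack"].pop()

    def gadget_add(state):             # borrowed fragment: add top of stack
        state["reg"] += state["stack"].pop()

    def gadget_store(state):           # borrowed fragment: write the result
        state["mem"] = state["reg"]

    chain = [gadget_load, gadget_add, gadget_store]   # overwritten return addresses
    state = {"stack": [2, 40], "reg": 0, "mem": None}

    for gadget in chain:               # the CPU "returning" into each gadget
        gadget(state)

    print(state["mem"])                # 42 -- new behavior from trusted pieces

Every instruction “executed” here belongs to the trusted program, which is exactly why signature-based defenses struggle with ROP.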

The Security Intelligence article referenced a blog I wrote in June about how Intel and Microsoft have developed a hardware-based solution. Thought-leading companies are looking to prevent these types of attacks.

First, let’s recognize that the problem is real, it is an issue now, and it will likely remain a favorite method of attackers because of its effectiveness and stealth properties. Because it uses parts of trusted code, it is very difficult to detect and stop. Software solutions have tried in the past to stem the problem, but have largely been unsuccessful. Software fighting software is too even a fight, and the attackers only need to find one way around preventative solutions to win. I hope the IBM solution has a positive effect, but I am concerned about its long-term viability.

In the end, I believe the future of ROP security will be based on features embedded beneath the software, operating systems, virtual machines, and even beneath the firmware, located in the hardware processor itself. Hardware tends to be outside the maneuvering zone of software hackers, and therefore can give a definitive advantage in securing the system from ROP-based attacks. The architecture itself can be designed to give advantages to secure computing practices, help operating systems be more secure, and compensate for vulnerable software.
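
As a concrete illustration of what “beneath the software” can mean, here is a minimal Python sketch of a shadow stack, the mechanism behind hardware return-address protection such as Intel’s CET proposal (a conceptual model only, not the actual architecture):

    # Toy model of a shadow stack. The hardware keeps a protected copy of
    # every return address; a mismatch on return means tampering.

    class ShadowStackViolation(Exception):
        pass

    call_stack = []     # normal stack: an attacker may corrupt this
    shadow_stack = []   # protected copy: software cannot modify it

    def call(return_addr):
        call_stack.append(return_addr)
        shadow_stack.append(return_addr)

    def ret():
        addr, expected = call_stack.pop(), shadow_stack.pop()
        if addr != expected:
            raise ShadowStackViolation(f"return to {addr:#x}, expected {expected:#x}")
        return addr

    call(0x1000)
    call_stack[-1] = 0xdeadbeef   # simulated ROP overwrite of the return address
    try:
        ret()
    except ShadowStackViolation as e:
        print("blocked:", e)

Because the shadow copy lives outside the attacker’s reach, the check holds even when the software stack is fully compromised.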

Regardless of where it happens, it is very important for innovative minds to continue to work on taking the fangs out of ROP attacks.



The Privacy Delusion
by Roger C. Lanctot on 09-28-2016 at 7:00 am

Why do we think we have privacy in our cars? Why does the government believe there is an interest in preserving privacy in cars? Can we just get over it? One of the least private places known to mankind – outside of the Internet – is the car!

But our transportation regulators in the U.S. and their counterparts at the European Commission cling to the fantasy of automotive privacy. It makes no sense. You sit in a device weighing more than a ton surrounded by glass with a powerful engine that, more often than not, loudly announces its presence.
We drive through camera-equipped intersections and tolling stations and past speed cameras peppered throughout the landscape deluding ourselves that we are somehow below the radar or out of sight. It’s nonsense.

But regulators and the car companies themselves foster this fallacy by promising to protect our privacy at all costs. The latest nonsense effort in this direction is one of the 15 “guidelines” from the U.S. Department of Transportation governing the development of cars capable of automated driving.

The USDOT says car owners should have a clear understanding of what kind of data is being collected by the vehicles. They should also be able to reject any collection of personal information such as biometrics or driver behavior.

This is more naïve nonsense. Any engineer working on self-driving cars today will tell you that these systems must integrate driver monitoring systems. Following the fatal crash of a Tesla Motors Model S in Florida and the latest Autopilot software updates from Tesla, this is no longer negotiable. If you take your hands off the steering wheel in an Autopilot-equipped Model S for too long, the car will direct you to retake the wheel and, if you fail to do so, will pull over.

This is not unlike a feature contemplated and being tested by GM, according to the GM Authority newsletter, as part of its Super Cruise Level 2 enhanced cruise control, which will monitor driver attention and intervene to slow and stop the car (after an OnStar agent intervention) if the driver fails to respond or is incapacitated. Not only is privacy irrelevant when driving a car, it is dangerous.

Car makers like Tesla – in fact most auto makers – disclose their data collection activities but generally do not provide an opt-out capability. Some European auto maker RFQs have included a valet parking function as a form of privacy, but the reality is that the implementation and proliferation of active safety systems will increasingly remove privacy from the equation.

Drivers in autopiloted cars must be monitored. Period.

But there are broader implications to the safety imperative. We are increasingly looking toward an IoT-style driving environment where all vehicles will inform all other vehicles of their presence and heading for the purposes of collision avoidance. No privacy there.

Further, car companies will increasingly be held responsible for being aware of vehicle flaws and failures in real time. Privacy will unquestionably be an impediment to the broad communication and collection of diagnostic data in real time.

So let’s please get over the privacy obsession. When we get in our cars we have unlocked a liberating experience, but we should never be deceived into believing that this experience comes with any privacy privileges or rights.

Also read: The Virus of Car Ownership


Synopsys Hosting Formal Methods in CAD Conference Next Week
by Bernard Murphy on 09-27-2016 at 8:00 pm


FMCAD (Formal Methods in Computer Aided Design) is a technical conference with a 20-year pedigree. This is a conference for serious formal methods teams. Keynotes are from Berkeley and UCLA, committee members are all formal heavyweights and, as best I can tell, there is no exhibitor area.


Continue reading “Synopsys Hosting Formal Methods in CAD Conference Next Week”


Making photonic design more straightforward
by Don Dingee on 09-27-2016 at 4:00 pm

The arrival of optical computing has been predicted every year for the last fifteen years. As with any other technology backed by prolific research, lofty goals get dialed back as problems are identified. What emerges first is a set of use cases where the technology fits with practical, realizable implementations.

When it comes to photonics, the obvious use case is high bandwidth I/O channels. The ability to multiplex separate data channels in different wavelengths onto a single glass fiber has been the cornerstone of fiber optic communications for decades. These days, it’s not difficult at all to get that done with an SFP transceiver, a physical transition at the edge of a board from electrical to light domain. The idea in play now is to move that transition closer to the processing – and that has significant challenges.

Photons just work differently than electrons, and those differences drive layout and verification of photonic ICs. The photonic approach requires a combination of methods from traditional mixed signal design, RF design, high speed digital design, an understanding of the fab technology needed, plus light-domain expertise. That sounds scary, right? As the saying goes: if it were easy, everyone would do it. Fortunately, borrowing technology from those other design methodologies and adding knowledge of photonics is making some very cool things possible.

I’m struck by this quote in a new white paper co-written by a team at Luceda Photonics (a company started as a spin-off of imec) and the Tanner EDA experts at Mentor Graphics.

There is a large gap between what the silicon photonic technology can accomplish and the functionality that designers can actually design and simulate.

Several of the problems they point out are related to the waveguide nature of photonic design, forcing layouts into a single layer with very specific design and interconnect rules. Another area of big concern is the highly customized process design kits (PDKs) for photonic design, and the heavy burden placed on simulation due to the light spectrum frequencies and electromagnetic, electro-optical, and thermal parameters.


The goal is a design flow that moves from intense academic photonic knowledge to a production-ready environment that looks more familiar to designers coming from mixed signal backgrounds. Luceda Photonics has stepped in with their IPKISS.eda design framework built on Tanner L-Edit. This integration was made possible by use of the OpenAccess database standard and API, including heavy use of Python macros extending L-Edit functionality.
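
To give a flavor of what Python-driven parametric layout buys the designer, here is a hypothetical sketch; every name in it is invented for illustration and does not reflect the actual IPKISS.eda API:

    # Hypothetical sketch of parametric photonic layout in Python.
    # Names are invented for illustration; the real IPKISS.eda API differs.
    import math

    class Waveguide:
        """A routed waveguide with the length bookkeeping a photonic flow needs."""
        def __init__(self, width_um, points):
            self.width_um = width_um
            self.points = points

        def length_um(self):
            return sum(math.dist(a, b) for a, b in zip(self.points, self.points[1:]))

    def mzi_path_imbalance(arm1, arm2, n_group=4.3):   # group index assumed
        """Optical path difference between two Mach-Zehnder arms, in um."""
        return n_group * (arm2.length_um() - arm1.length_um())

    arm1 = Waveguide(0.5, [(0, 0), (100, 0)])
    arm2 = Waveguide(0.5, [(0, 5), (100, 5), (100, 25), (0, 25), (0, 45), (100, 45)])
    print(f"path imbalance: {mzi_path_imbalance(arm1, arm2):.0f} um")

The point is that waveguide geometry, connectivity, and the optical bookkeeping (lengths, path imbalance) live in one scripted description, rather than being redrawn by hand for every variant.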

The white paper goes into detail with design of a 2×2 optical crossbar switch based on a Mach-Zehnder Interferometer thermo-optic switch. I’m not a photonic expert nor do I play one on TV, but I’ve done enough RF layout to appreciate how the Luceda environment handles waveguide generation. Note how the Luceda extensions appear as a new set of pull-downs in the familiar Tanner L-Edit tool:


Using Tanner Calibre One nmDRC, design rule checks can be run quickly using the rule deck from the photonic foundry, and highlighted in the results browser against the layout. Functional verification is performed with Luceda’s CAPHE optical circuit simulator. With the photonic environment fully integrated in IPKISS.eda, designers get help from automation with a large degree of control.

The complete white paper is available from Mentor Graphics (registration required):

Luceda Photonics Delivers a Silicon Photonics IC Solution in Tanner L-Edit

I remember the days of brute force RF and MEMS layout and funky tools that didn’t integrate with much of anything except a printer and a file the fab hopefully understood. This white paper is a good read on two fronts: how Luceda used open extensibility features to add functionality to L-Edit, and how challenging photonic chip design still is. I’m encouraged by the progress here. Certainly the philosophy Luceda is pursuing is correct – make photonic design as easy as advanced mixed-signal design. Leveraging Tanner’s capability not only got them there faster, it helps reduce EDA tool fragmentation and reduces the learning curve for users.

Making photonic design more straightforward should increase the number of photonic designs people will be willing to attempt, and that in turn should speed up overall innovation in the field.


Why is Low Frequency Noise Measurement for ICs Such a Big Deal?
by Daniel Payne on 09-27-2016 at 12:00 pm

Even digital designers need to be aware of how noise impacts their circuits, because most clocked designs today use a Phase Locked Loop (PLL) block which contains a circuit called a Voltage Controlled Oscillator (VCO) that is quite sensitive to the effects of noise and process variation. As process node scaling continues, the effects of low frequency noise increase. There are even new devices coming out of R&D, like nano-wires, Silicon Carbide (SiC) and photonic devices, where it’s necessary to measure ultra-low current levels. During process development, engineers need to accurately measure and characterize low frequency noise in devices so that designers of SRAMs, MEMS and sensors have the most accurate statistical models.


Planar MOS device is sensitive to 1/f noise

One of the newer vendors in the noise characterization space is Platform Design Automation, which has created a fast and accurate system called the NC300.


NC300

Related blog – A Brief History of Platform Design Automation

An ideal noise characterization system would have these characteristics:

  • Fast speed, seconds instead of minutes
  • Lowest current measurements, pA to nA
  • IC testing, SRAM testing, sensor noise testing

How well does the NC300 compare to the ideals listed above?

  • About 8s to 10s per bias, up to 10X faster
  • Low noise floor of under 10⁻²⁹ A²/Hz
  • Current noise in the pA range
  • Integrated with Source Monitor Units (SMU) and Dynamic Signal Analyzer (DSA)
  • Supports both multi-die and multi-wafer measurements


NC300 – system noise floor

Other noise measurement systems can take weeks to measure 1/f noise with large samples to form corner or statistical models, but with the NC300 you’re getting results 10X quicker than that. This is a big deal because time is money in the semiconductor business.
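
Behind any such measurement sits the same post-processing step: estimating a power spectral density from a time-domain current capture. A minimal Python sketch with invented parameters (not NC300 specifics) shows the computation and the units involved:

    # Estimate current-noise PSD from a time-domain capture.
    # All parameters are invented for illustration.
    import numpy as np
    from scipy import signal

    fs = 10e3                                     # sample rate, Hz
    t = np.arange(0, 10, 1 / fs)                  # a 10 s capture
    rng = np.random.default_rng(0)
    i_noise = 1e-12 * rng.standard_normal(t.size) # ~pA-level white current noise

    f, psd = signal.welch(i_noise, fs=fs, nperseg=4096)  # PSD in A^2/Hz
    print(f"mean PSD ~ {psd[1:].mean():.1e} A^2/Hz")     # ~2e-28 A^2/Hz here

    # For the instrument's own floor (e.g. 1e-29 A^2/Hz) to be meaningful,
    # it must sit well below the device's 1/f and RTN contributions.

The slow part in practice is not this math but gathering enough low-frequency data per bias point, which is where the claimed 8s to 10s per bias matters.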

With this type of instrumentation and software you can perform all four steps from noise measurement to circuit characterization for design engineers.

The design of the NC300 includes the DSA and Low Noise Amplifier (LNA), saving equipment space, and its performance enables high-volume noise measurements, suitable for both on-wafer and packaged parts. With this kind of test and modeling setup you can characterize:

  • Random Telegraph Noise (RTN) in RRAM
  • Circuit noise test
  • MEMs sensor test
  • Mercury Cadmium Telluride (MCT) infrared device noise test
  • Use MeQLab software for data extraction and modeling

Related blog – Are Your Transistor Models Good Enough?

Noise Testing Examples
Let’s take a quick look at some specific noise testing cases, starting with a type of low frequency noise called RTN, where electrons or holes get trapped between the gate and well of devices, which in turn causes fluctuations in Vth values:


RTN with electrons


RTN with holes


RTN causing multi-states
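
The two-level behavior in these plots is easy to reproduce with a toy model: a single trap capturing and emitting a carrier flips Vth between two discrete values. The time constants and shift below are illustrative, not from any measured device:

    # Toy Random Telegraph Noise: one trap toggling Vth between two levels.
    import numpy as np

    rng = np.random.default_rng(1)
    dt, n = 1e-4, 200_000                 # 0.1 ms steps, a 20 s trace
    tau_capture, tau_emit = 5e-3, 2e-3    # mean dwell times (illustrative)
    dvth = 2e-3                           # 2 mV Vth shift when trap is occupied

    state, trace = 0, np.empty(n)
    for k in range(n):
        tau = tau_emit if state else tau_capture  # mean dwell in current state
        if rng.random() < dt / tau:               # chance of leaving this step
            state ^= 1
        trace[k] = state * dvth

    print(f"trap occupied {trace.mean() / dvth:.0%} of the time")
    # expected ~ tau_emit / (tau_capture + tau_emit) ~ 29%

With several traps of different time constants superimposed, this produces the multi-state patterns shown above, and summing many such traps yields the familiar 1/f spectrum.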

Circuit noise from an Op Amp can be measured and a statistical noise model automatically generated.

The IoT market is enabled by sensors, so here’s a pressure sensor tested for both a low frequency spectrum and derived quality factor:


Pressure sensor – Low Frequency Spectrum


Derived quality factor

GUI
A modern GUI makes learning and using the NC300 easy and intuitive:

Customers
The NC300 has been adopted by leading foundries, design companies and research labs. Five leading foundries have benefited from the NC300’s fast speed and ease of use. Researchers are now able to measure the noise of super-low currents, such as the dark current of photo diodes, which was never achieved with other systems.

Summary
There are multiple vendors offering noise characterization systems, and the NC300 from PDA looks to be a strong new player worth taking a look at.


Good AI
by Bernard Murphy on 09-27-2016 at 7:00 am

A hot debate recently, promoted notably by Elon Musk and Stephen Hawking, has explored whether we should fear AI. A key question centers around the ethics of AI – how we can instill ethical values in intelligent systems we will build and how, I hope, we can ensure we use those systems ethically. This is not an academic question – autonomous cars have already been involved in crashes so it is reasonable to expect they will face challenges which require ethical decisions to be made. More concretely, Alphabet (Super-Google), Facebook, Microsoft, Amazon and IBM have been meeting periodically to discuss how to build ethics into AI.

However, this is a different class of problem from other applications of AI. When you think about image or speech recognition, for example, the class of images or words you want to recognize is well-defined and bounded. But ethical problem spaces are not so easily characterized. Philosophers and religious leaders have struggled for at least 2500 years with the difficult question of how to define what is “good”, and what guidance they provide is generally based more on beliefs than an evidence-rooted chain of reasoning. Which might be a workable starting point for automated ethics if we all shared the same beliefs, but unfortunately we don’t seem to be able to make that claim, even within a single cultural group.

Deep reasoning might be one way to approach the problem (within a common belief-system) on the grounds that perhaps you don’t have to understand the issues, you just have to train with sufficient examples. But how can you construct a representative set of training examples if you don’t understand the problem? Verification engineers should relate to this – you can’t develop a reasonable test plan if you don’t understand what you are testing; the same principle should apply to training.

Perhaps we could develop a taxonomy of ethical principles and build ethics systems to learn human behavior for each principle within a given cultural group. These seem easiest to define by domain; one example I found develops a taxonomy for ethical behavior in crowdsourcing. In some contexts, a domain-specific set of ethical principles might be sufficient. But other contexts may present more challenging ethical choices. A commonly cited example is choosing between multiple options in collision-avoidance – hitting another car (potentially killing you and the other driver), swerving into a wall (killing just you) or swerving onto a sidewalk (killing pedestrians but saving you). A purely rational choice here, no matter what that choice might be, is unlikely to be acceptable from all points of view.

Another viewpoint considers not the basics of recognizing ethical behavior but instead the mechanics of policing such behavior. It starts from the assumption that it is possible to capture ethical guidelines in some manner and provides layers of oversight, similar to societal standards, to an AI system which must be monitored. This approach provides monitors to ensure AI behavior stays within legal bounds (e.g. obeying a speed limit), super-ethics decision-makers which look not just at the narrow legality of a situation but also larger human ethics (saving a life, minimizing risk), and enforcers (police) outside the control of the local system which can report violations of standards. Perhaps this discussion doesn’t take the society analogy far enough – if we’re going to have laws and police, shouldn’t we also have lawmakers and courts? And how do we manage conflicts between AI laws and human laws?

On which note, Stanford has a 100-year study on AI in which they look at many factors, including implications for public policy. One discussion is on implications for civil and criminal liability and questions of agency (can AI enter into a legally binding contract?) and how those should be used in prosecuting the consequences of unethical behavior. What is interesting here is that damage to another is not limited to bodily harm – it could be financial or other kinds of harm. So definitions of ethical behavior in some contexts can be quite broad (imagine an AI bot ruining your online reputation). And the agency question is very important. If liability is found, who or what should be held liable? This area cannot be resolved purely with technology – our laws must advance also, which requires that our societal beliefs advance as well.


I found another piece of research quite interesting in this context – a piece of AI designed to engage in debate. Outside of fundamentalist beliefs, many (perhaps most) ethical problems don’t resolve to black and white choices especially when not in full possession of all relevant data. In these cases, the process of arguing a case towards an outcome is at least as important as a bald set of facts in support of the outcome, especially where there is no clear best choice and yet a choice must be made quickly. For me this line of thinking may be at least as important as any taxonomy and deep-reasoning based work.

As you can see, this is a domain rife with questions and problems and much more sparsely populated with answers. This is not a comfortable place for engineers and technologists to operate – we like clean-cut, yes/no choices. But we’re going to have to learn to operate in this fuzzier, messier world if we want AI to grow. The big tech collaboration is discussed HERE, the societal/oversight article can be found HERE. The Stanford study is HERE and the debate video is HERE.

More articles by Bernard…


Getting out of DIY Mode for Virtual Prototypes
by Don Dingee on 09-26-2016 at 4:00 pm

Virtual prototyping has, inexplicably, been largely a DIY thing so far. Tools and models have come from different sources with different approaches, and it has been up to the software development team to do the integration step and cobble together a toolchain and methodology that fits with their development effort.

That integration step has scared away many teams that should be using virtual prototyping and capturing the benefits of earlier software creation and debug. Teams using virtual prototyping can have code running and tested months before their physical hardware platform is ready. What’s in the way? I know I can relate to the squiggly red line in this diagram:


Creating a virtualizer development kit (VDK) with a set of transaction-level models (TLMs) for IP blocks has been a somewhat painful process. Software teams leave the cohesive world of C compilers, debuggers, and other tools often fully integrated into an Eclipse IDE, and step into a different parallel universe of SystemC for creating TLMs. Adding the rest of the metadata needed for a complete VDK – pin names, register maps, and connectivity – is another tedious step. The languages are different, the tools are different, and sometimes the process borders on manual labor, with developers resorting to editing text with emacs.

Fortunately, many IP vendors have gotten the message and are creating VDKs for their supplied IP blocks. This is especially true in safety-critical environments where software testing tends to be more extensive. (Insert my favorite comment about IoT applications becoming safety-critical here.) As software grows to dominate schedules and resource loading, and more SoC starts are motivated by accelerating unique software algorithms with dedicated hardware, it’s a welcome development to have more IP suppliers providing VDKs.

The above chart was provided by Malte Doerper, product marketing manager for virtual prototyping at Synopsys. They have stepped back and asked the question why VDK integration has to be so hard. Part of the answer is the tools involved; productivity is lost whenever a team has to jump back and forth between non-integrated tools. Part of the answer is using automation to help the process.

The highly logical step here is to get the virtual prototyping environment into Eclipse. Synopsys has created Virtualizer Studio, an integrated Eclipse IDE that handles browsing, editing, debug, and static analysis chores for SystemC. Those who have worked with Eclipse-based tools know they can be easily extended, allowing software teams to pick and choose from best-of-breed plugins and customize their workflow. Doerper suggests that just bringing the environment together in Eclipse has cut development time for TLM models in SystemC in half.


The real hit Synopsys is after comes in VDK creation. That figure of 10x productivity improvement is based on initial observations from lead customers of Virtualizer Studio. It’s definitely #YMMV; for a huge design with many complex IP blocks it might be a much larger figure, while for a trivial design it might be less. Having off-the-shelf VDKs helps. Synopsys has been working with ARM and other providers on configurable reference VDKs, and supplies models for its DesignWare IP. Still, Doerper says that if VDKs need to be created, at least 50% of the source can be autogenerated. The approach takes advantage of the fact that abstraction simplifies both the input and the results.
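
To illustrate the autogeneration idea (this is a generic sketch of the technique, not the Virtualizer Studio generator), a small Python script can turn register-map metadata into the SystemC-facing boilerplate that would otherwise be typed by hand:

    # Toy metadata-driven generator: register map in, model skeleton out.
    # Generic illustration of the technique, not Synopsys' actual tooling.

    REGISTERS = [                 # (name, offset, reset value)
        ("CTRL",   0x00, 0x0),
        ("STATUS", 0x04, 0x1),
        ("DATA",   0x08, 0x0),
    ]

    def emit_header(block, regs):
        lines = [f"// auto-generated skeleton for {block}"]
        for name, offset, reset in regs:
            lines.append(f"static const uint32_t {block}_{name}_OFFSET = {offset:#06x};")
            lines.append(f"static const uint32_t {block}_{name}_RESET  = {reset:#06x};")
        lines.append(f"uint32_t {block.lower()}_read(uint32_t offset);  // TLM target stub")
        return "\n".join(lines)

    print(emit_header("UART", REGISTERS))

The interesting half that cannot be generated is the behavior behind those registers; the generated half is exactly the tedious, error-prone part.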

There are also gains to be had in incremental development. Maybe integration testing shows that the models aren’t exactly right and need a bit of tweaking. Or, perhaps an IP supplier updates their models and that drives a slight change in a model of another IP block or the integrated SoC. In Virtualizer Studio, switching from the VDK debug perspective to the TLM creation perspective is a simple matter of selecting a different tab:


For more of the story on the new Synopsys Virtualizer Studio IDE, the full press release:
Synopsys’ New Virtualizer Studio Integrated Development Environment Accelerates Virtual Prototyping Productivity

More background on VDKs and what Synopsys is doing in segments such as automotive:
Synopsys Virtualizer Development Kits

To me, the development of an Eclipse-based environment for VDKs is a no-brainer. Software teams can quickly embrace the Synopsys platform and add their own extensions. Adding off-the-shelf reference VDKs to jump start productivity is also a big step. To get more SoC starts, the semiconductor industry needs to do more things to make the software developers comfortable, since those people will be the ones generating many of the new ideas.

Related Blogs:

Automotive Design and Virtual Prototyping

Embedded Agility

Developing ARM v8 Code…Today


The Status and Future of FDSOI
by Scotten Jones on 09-26-2016 at 12:00 pm

I recently took a look at the current status and future direction of FinFET based logic processes in my Leading Edge Logic Landscape blog. I thought it would be interesting to take a similar look at FDSOI and to compare and contrast the two processes. My Leading Edge Logic Landscape blog is available here.
Continue reading “The Status and Future of FDSOI”


Top 5 Highlights from the 2016 TSMC Open Innovation Platform Forum
by Tom Dillinger on 09-26-2016 at 7:00 am

Recently, TSMC conducted their annual Open Innovation Platform forum meeting in San Jose. Although TSMC typically eschews a theme for the forum, David Keller, EVP TSMC North America, used a phrase in his opening remarks that served as a foundation for the rest of the meeting – “celebrate the way we collaborate”.

The forum begins with a technology overview session from TSMC. Jack Sun, VP and CTO TSMC R&D, provided the keynote, entitled “Technology Leadership for Collaborative Innovation”. Cliff Hou, VP R&D Design Technology Platform, provided an update on the design flow enablement and IP readiness for TSMC’s advanced process nodes.

Their presentations were followed by 3 tracks of customer and EDA vendor presentations, highlighting areas where their collaboration with TSMC has resulted in pushing the envelope of new designs and design methods.

The following list highlights five key impressions from the forum, mostly related to TSMC’s roadmap update. (I should note that there are subjective comments included, which solely represent my opinions, not those of TSMC.)

(5) Platforms: End-market requirements are driving broader design enablement releases
TSMC has adopted a design enablement strategy with 4 platforms, to address unique characteristics of key market segments. Customers will be incorporating the associated PDK models, techfiles and reference flows for their specific market. (Clearly, this strategy entails much greater support from Cliff’s team.) The four platforms are:

a) High Performance Computing platform
The majority of the HPC platform discussion pertained to the 7nm node.
Characteristics of device models and tool qualification include:

  • FEOL device models need to support VDD overdrive and hyper-overdrive performance boost modes
  • BEOL interconnect design rules use wider upper level metals, larger vias
  • TSMC is providing an “H360” standard cell foundation IP library
  • power-grid construction flows focus on minimizing IR, addressing EM issues
  • clock-tree synthesis must meet very low skew requirements
  • improved wiring delay optimization will be needed in APR flows
  • statistical timing analysis support is required
  • statistical EM analysis is required

b) Mobile platform
As with HPC, the focus was on the availability of the 16FFC platform, and the development underway for the upcoming 7nm node. Relative to N10FF, the 7nm mobile platform offering provides improvements of ~15% performance (iso-power) and ~35% power (iso-performance), with a gate density improvement of 1.65X.

  • TSMC is providing an H240 standard cell dense library, for maximal gate density
  • Similar EDA reference flow requirements as the HPC platform

c) Automotive platform
Clearly, there is an expectation for a growing market for automotive electronics, to address a growing set of ADAS feature requirements, as guided by the ISO-26262 standard.

The TSMC automotive platform is currently focused on the N16FFC process node, with PDK support for extended operating environment conditions:

  • models qualified to 150 degrees C (from 125 C for HPC)
  • EM model analysis to 125 C, statistical EM sign-off
  • TSMC IP qualification reports provided for NBTI, PBTI, HCI, TDDB, device aging
  • SRAM soft error model enhancements

d) IoT platform
TSMC is working on ensuring Ultra Low Power PDK support across a wide set of process nodes, focusing on IP qualification at lower operating voltages – i.e., 40ULP (1.2V → 0.9V), 28HPC (0.9V → 0.7V), 16FFC (0.8V → 0.55V).
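
Since dynamic power scales roughly with the square of the supply voltage (P ≈ C·V²·f), those voltage reductions are worth a quick back-of-envelope check; the arithmetic below is mine, not a TSMC figure:

    # Back-of-envelope dynamic power savings from the ULP voltage drops above,
    # using P_dyn ~ C * V^2 * f at iso-frequency. My arithmetic, not TSMC data.

    nodes = {"40ULP": (1.2, 0.9), "28HPC": (0.9, 0.7), "16FFC": (0.8, 0.55)}

    for node, (v_nom, v_ulp) in nodes.items():
        saving = 1 - (v_ulp / v_nom) ** 2
        print(f"{node}: ~{saving:.0%} dynamic power reduction")
    # 40ULP: ~44%, 28HPC: ~40%, 16FFC: ~53%

Savings of that magnitude are why the IP qualification effort at these lower voltages is the centerpiece of the IoT platform.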

An extra-high Vt device offering (EVHT) adds to the available set of Vt targets. A low-leakage SRAM bitcell IP is also available.

TSMC expressed support for working with customers on near Vt characterization, as well.

(4) High capacity memory array technology alternatives in R&D
Jack S. briefly alluded to the R&D activity underway to investigate alternative memory array technologies, specifically embedded Resistive RAM (eRRAM) and embedded Magnetoresistive RAM (eMRAM). Yet, no specific timeline for process node introduction was provided.

This is in contrast to announcements from other foundries, which have provided (preliminary) dates for eMRAM introduction. I found this distinction to be interesting.

(3) N10, N7 development “on track”
TSMC shared dates for N10 and N7 production availability. N10FF will ramp this year. The very aggressive N7FF schedule (for HPC and mobile platforms) is an extremely impressive feat.

Reference flow support for the 7nm EDA features listed above will be available in 4Q’16.

N7FF foundation and SRAM IP will be available with the v0.5 PDK release this year.

Risk production tapeouts will be accepted in 1Q’17. High-speed SerDes IP will be available with the v1.0 PDK in 2Q’17.

This is especially impressive, given the additional design enablement resource required for the two platforms – with different design rules, PDK models, standard cell IP, etc.

(2) DRC waivers at 7nm? Fuhggedaboudit…
One of the indirect benefits of attending the TSMC OIP forum is the opportunity to chat with TSMC’s EDA and IP partners at their booths. Another is the chance to network with TSMC customers over lunch and coffee breaks. I ran into a colleague from years past, who shared an insight that has since stuck with me.

His contention (not necessarily TSMC’s) was that: “Design rules at 7nm, with 193i photolithography, require a new way of thinking. Design rules are strict. In essence, TSMC is confirming ‘We can print this with these rules, but don’t expect any significant process latitude.’ The days where a custom designer could approach TSMC with a request for a tapeout DRC waiver to realize a PPA benefit for a specific set of cells are long gone.”

He further commented, “At 7nm, I wouldn’t consider any IP that hasn’t been proven in silicon. I’d like to see the IP tapeout signoff review data, and the post-silicon characterization reports. There’s simply no layout design margin anymore.”

Those comments really resonated with me.

(1) New 7nm requirement, with a significant tool/flow impact
Each new technology brings new challenges – the exciting nature of our industry is how those challenges are solved. N7FF introduces a unique requirement.

Throughout the scaling of process nodes, interconnect resistance per unit length and via resistance have increased, while FinFETs provided an increase in areal current density. For high performance designs, the scaling of interconnects strongly impacts both interconnect delays and electromigration reliability. For long routes, the goal is to promote signals to higher (thicker) levels of metallization for optimal timing. For EM robustness, the goal is to provide sufficient metal for the associated current density, an issue of specific concern at the output of a FinFET cell (through an output pin).

The photolithographic limitations at 7nm preclude traditional methods to address these issues, such as large via bars on dog-bone metal ends. Instead, via pillars will be required, with a significant impact on routing track resources.
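
A rough sizing sketch shows why pillars cost routing resources: meeting a per-via EM current limit at a high-drive cell output takes many parallel vias. All numbers here are invented for illustration:

    # Rough via-pillar sizing: parallel vias to meet a per-via EM limit.
    # All values invented for illustration only.
    import math

    i_out_ma = 1.8        # peak output current of a high-drive cell (assumed)
    i_em_limit_ma = 0.25  # allowed EM current per via at temperature (assumed)
    r_via_ohm = 18.0      # single-via resistance (assumed)

    n_vias = math.ceil(i_out_ma / i_em_limit_ma)
    print(f"{n_vias} vias in the pillar, "
          f"effective resistance {r_via_ohm / n_vias:.1f} ohm")
    # 8 vias -> ~2.2 ohm: lower resistance and EM-safe, but the pillar
    # occupies routing tracks on every layer it spans.

That track cost is exactly why pillar insertion has to be visible to synthesis, placement, and routing, as described next.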


A presentation from Synopsys highlighted the extent to which the insertion of via pillars as part of both timing and EM optimization has influenced tools and design methods. The full set of synthesis and implementation flows from Design Compiler-Graphical through IC Compiler-II APR has to be adapted to the via pillar design style. Clock-tree synthesis and optimization involves inserting buffers and optimal pillars to balance loading. Signal routing and post-route optimizations are strongly impacted, as you might imagine. The promotion/demotion of route layers for optimization has to include the congestion impact of associated pillars. All these optimization algorithms rely upon accurate parasitic extraction of the pillar structure.

In summary, the TSMC OIP forum highlighted the collaboration needed among customers, EDA vendors, and the foundry, to enable design success at advanced nodes. And, the coming introduction of 7nm design enablement will include several challenges, necessitating pretty significant changes in design styles and methods. With every challenge comes an opportunity to pursue new innovations in our industry.

-chipguy