My Top Three Reasons to Attend IEDM 2019

My Top Three Reasons to Attend IEDM 2019
by Scotten Jones on 10-11-2019 at 6:10 am

The International Electron Devices Meeting is a premier event to learn about the latest in semiconductor process technology. Held every year in early December in San Francisco, this year’s conference runs from December 7th through December 11th. You can learn more about the conference at their web site here.

This is a must-attend conference for me every year.

The following are a few of the announced papers I am particularly excited about:

(1) TSMC to Unveil a Leading-Edge 5nm CMOS Technology Platform: TSMC researchers will describe a 5nm CMOS process optimized for both mobile and high-performance computing. It offers nearly twice the logic density (1.84x) and a 15% speed gain or 30% power reduction over the company’s 7nm process. It incorporates extensive use of EUV lithography to replace immersion lithography at key points in the manufacturing process. As a result, the total mask count is reduced vs. the 7nm technology. TSMC’s 5nm platform also features high channel-mobility FinFETs and high-density SRAM cells. The SRAM can be optimized for low-power or high-performance applications, and the researchers say the high-density version (0.021µm²) is the highest-density SRAM ever reported. In a test circuit, a PAM4 transmitter (used in high-speed data communications) built with the 5nm CMOS process demonstrated speeds of 130 Gb/s with 0.96pJ/bit energy efficiency. The researchers say high-volume production is targeted for 1H20. (Paper #36.7, “5nm CMOS Production Technology Platform Featuring Full-Fledged EUV and High-Mobility Channel FinFETs with Densest 0.021µm² SRAM Cells for Mobile SoC and High-Performance Computing Applications,” G. Yeap et al., TSMC)
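As a quick sanity check on the PAM4 transmitter figures quoted above (130 Gb/s at 0.96 pJ/bit), the implied transmitter power works out to roughly 125 mW. A minimal sketch of the arithmetic, assuming the quoted energy-per-bit applies at the full data rate:

```python
# Back-of-the-envelope check of the PAM4 transmitter figures quoted above.
# Assumes the 0.96 pJ/bit energy efficiency applies at the full 130 Gb/s rate.

data_rate_bps = 130e9          # 130 Gb/s
energy_per_bit_j = 0.96e-12    # 0.96 pJ/bit

power_w = data_rate_bps * energy_per_bit_j
print(f"Implied transmitter power: {power_w * 1e3:.1f} mW")  # ~124.8 mW
```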

(2) Intel Says Heterogeneous 3D Integration Can Drive Scaling: CMOS technology requires both NMOS and PMOS devices, but the performance of PMOS lags NMOS, a mismatch which must be addressed in order to wring every last bit of performance and energy efficiency from future chips. One way to do that is to build PMOS devices with higher-mobility channels than their NMOS counterparts, but because these are built from materials other than silicon (Si) which require different processing, it is challenging to build one type without damaging the other. Intel researchers got around this with a 3D sequential stacking architecture. They first built Si FinFET NMOS transistors on a silicon wafer. On a separate Si wafer they fabricated a single-crystalline Ge film for use as a buffer layer. They flipped the second wafer, bonded it to the first, annealed them both to produce a void-free interface, cleaved the second wafer away except for the Ge layer, and then built gate-all-around (GAA) Ge-channel PMOS devices on top of it. There was no performance degradation in the underlying NMOS devices, and in an inverter test circuit the PMOS devices demonstrated the best Ion-Ioff performance ever reported for Ge-channel PMOS transistors (Ion=497 µA/µm and Ioff=8nA/µm at 0.5V). The researchers say these results show that heterogeneous 3D integration is promising for CMOS logic in highly scaled technology nodes. (Paper #29.7, “300mm Heterogeneous 3D Integration of Record Performance Layer Transfer Germanium PMOS with Silicon NMOS for Low-Power, High-Performance Logic Applications,” W. Rachmady et al., Intel.)
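For context on the reported Ge PMOS figures, the on/off current ratio implied by Ion = 497 µA/µm and Ioff = 8 nA/µm is a little over 60,000. A small sketch of that calculation:

```python
# On/off current ratio implied by the reported Ge-channel PMOS figures.
# Both currents are normalized per micron of gate width, so the ratio is unitless.

i_on = 497e-6   # 497 uA/um
i_off = 8e-9    # 8 nA/um

print(f"Ion/Ioff ratio at 0.5 V: {i_on / i_off:,.0f}")  # ~62,125
```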

(3) Versatile 22nm STT-MRAM Technology: Many electronics applications require fast nonvolatile memory (NVM), but embedded flash, the current dominant technology, is becoming too complex and expensive to scale much beyond 28nm. A type of embedded NVM known as STT-MRAM has received a great deal of attention. STT-MRAM uses magnetic tunnel junctions (MTJs) to store data magnetically rather than as electric charge, but this ability degrades as temperature increases. That makes STT-MRAM challenging both to build – it is fabricated in a chip’s interconnect and must survive high-temperature solder reflow – and to use in applications such as automotive, where thermal specifications are demanding and the ability to resist outside magnetic fields is critical. TSMC will describe a versatile 22nm STT-MRAM technology that operates over a temperature range of -40°C to 150°C and retains data through six solder reflow cycles. It demonstrated a 10-year magnetic field immunity of >1100 Oe at 25°C at a 1ppm error rate, and <1ppm when in a shield-in-package configuration. The researchers say that by trading off some of the reflow capability and using smaller MTJs, even higher performance can be achieved (e.g., 6ns read times/30ns write times), making the technology appealing for artificial intelligence inference engines. (Paper #2.7, “22nm STT-MRAM for Reflow and Automotive Uses with High Yield, Reliability and Magnetic Immunity and with Performance and Shielding Options,” W. Gallagher et al., TSMC)
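To put the quoted magnetic immunity in more familiar units: in air, a field of 1 Oe corresponds to a flux density of about 0.1 mT, so >1100 Oe is roughly 110 mT. A minimal conversion sketch using that standard relationship:

```python
# Convert the quoted 10-year magnetic-field immunity from oersted to millitesla.
# In air/vacuum, a field of 1 Oe corresponds to a flux density of ~0.1 mT.

immunity_oe = 1100
mt_per_oe = 0.1

print(f"{immunity_oe} Oe is about {immunity_oe * mt_per_oe:.0f} mT")  # ~110 mT
```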

About IEDM
With a history stretching back more than 60 years, the IEEE International Electron Devices Meeting (IEDM) is the world’s pre-eminent forum for reporting technological breakthroughs in the areas of semiconductor and electronic device technology, design, manufacturing, physics, and modeling. IEDM is the flagship conference for nanometer-scale CMOS transistor technology, advanced memory, displays, sensors, MEMS devices, novel quantum and nano-scale devices and phenomenology, optoelectronics, devices for power and energy harvesting, high-speed devices, as well as process technology and device modeling and simulation. The conference scope not only encompasses devices in silicon, compound and organic semiconductors, but also in emerging material systems. IEDM is truly an international conference, with strong representation from speakers from around the globe.


A Future Vision for 3D Heterogeneous Packaging

A Future Vision for 3D Heterogeneous Packaging
by Daniel Nenni on 10-07-2019 at 6:00 am

At the recent Open Innovation Platform® Ecosystem Forum in Santa Clara, TSMC provided an enlightening look into the future of heterogeneous packaging technology.  Although the term chiplet packaging is often used to describe the integration of multiple silicon die of potentially widely-varying functionality, this article will use the term heterogeneous packaging.  The examples below illustrate integration of both large and small die, DRAM die, and full high-bandwidth memory (HBM2) die stacks, much richer in scope than would commonly be designated as a chiplet.  Dr. Douglas Yu, Vice President of Integrated Interconnect and Packaging at TSMC, provided a review of current TSMC heterogeneous packaging offerings, and a 3D package vision that could be best described as “More-than-More-than-Moore”.

Doug began with the well-understood realization that the pace at which the cost-per-transistor has improved with IC process technology scaling has slowed.

Figure 1.  The rate of cost-per-transistor improvements with process technology scaling has slowed.  (Source:  TSMC)

There are certainly ongoing product PPA benefits to scaling, but the ultimate cost for the total system functionality may drive system designers to pursue heterogeneous packaging alternatives.

CoWoS

The first heterogeneous packaging offering from TSMC was Chip-on-Wafer-on-Substrate (CoWoS®).  A cross-section of a CoWoS® package is illustrated below.

Figure 2.  CoWoS® package integration  (Source:  TSMC)

A silicon interposer provides the interconnects between the die, with through-silicon vias (TSV) to the substrate below.  Doug highlighted the recent advances in CoWoS® technology in production, specifically the ability to fabricate the interposer at 2X the max reticle size for wafer lithography.

Figure 3.  CoWoS® support for a silicon interposer greater than the single max-reticle size   (Source: TSMC)
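To put the 2X-reticle interposer in perspective: the maximum lithography field for a single exposure is commonly cited as about 26 mm x 33 mm (roughly 858 mm²), so an interposer at twice that size approaches 1,700 mm². A rough sketch, assuming that commonly cited reticle limit rather than a TSMC-specific figure:

```python
# Rough size estimate for a 2X-reticle interposer, assuming the commonly cited
# maximum single-exposure field of 26 mm x 33 mm (not a TSMC-specific figure).

reticle_field_mm2 = 26 * 33            # ~858 mm^2 per exposure
interposer_mm2 = 2 * reticle_field_mm2

print(f"Single reticle field: {reticle_field_mm2} mm^2")
print(f"2X-reticle interposer area: {interposer_mm2} mm^2")  # ~1716 mm^2
```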

InFO-PoP

Doug reviewed TSMC’s cost effective heterogeneous integration package, based on the Integrated Fan-Out (InFO) technology.  The original InFO offering provided (reconstituted) wafer-level redistribution layer connectivity to extended bump locations outside the die perimeter.  The newer InFO-PoP package cross section is illustrated below.

Figure 4.  InFO-PoP cross section  (Source:  TSMC)

The extension of the InFO molded package beyond the embedded die is used for Through-InFO vias (TIV) between the top die and the redistribution layer connections.

SoIC®

A more recent TSMC heterogeneous package innovation involves the transition from microbump connections between die and substrate to a bumpless (thermo-compression) bond between direct die connections – see the figure below for a comparison between microbump and bumpless attach. TSMC-SoIC® is an innovative frontend wafer-process-based platform that integrates multi-chip, multi-tier, multi-function and mix-and-match technologies to enable high speed, high bandwidth, low power, high pitch density, and minimal footprint and stack-height heterogeneous 3D IC integration.

Figure 5.  Comparison of bump and bumpless technology characteristics, and SoIC® package cross-section  (Source:  TSMC)

Through silicon vias provide connection to bumps, part of the final back-end package assembly flow.  The density and electrical characteristics of the bumpless attach technology are far superior.  (Here’s a previous SemiWiki article that describes the TSMC SoIC® package technology: https://semiwiki.com/semiconductor-manufacturers/tsmc/8150-tsmc-technology-symposium-review-part-ii/)

Future Vision

Doug provided a vision for heterogeneous packaging, which combines the unique advantages of the technologies described above.  The figure directly below depicts a bumpless SoIC® composite as part of a larger InFO or CoWoS® package, integrating additional die and/or HBM memory stacks.

The second figure below illustrates multiple levels of (thinned) die using bumpless attach for subsequent back-end package assembly.  Doug referred to this multi-level SoIC® solution as part of the transition from 3D system integration to full 3D system scaling.

Figure 6.  SoIC® integration in a subsequent InFO or CoWoS® package  (Source:  TSMC)

Figure 7.  Multi-level bumpless die integration → 3D system scaling  (Source:  TSMC)

The heterogeneous packaging technology vision presented by TSMC will truly provide system architects with great opportunities for continued scaling.  In addition to traditional monolithic die PPA considerations for technology selection, this vision offers additional opportunities for system-level functional integration and packaging cost optimizations.  It will be fascinating to see how these heterogeneous packaging offerings impact future system designs.


A Review of TSMC’s OIP Ecosystem

A Review of TSMC’s OIP Ecosystem
by Daniel Nenni on 10-06-2019 at 10:00 am

Each year, TSMC conducts two events – the Technology Symposium in the Spring and the Open Innovation Platform (OIP)® Ecosystem Forum in the Fall.  Yet, what is the OIP ecosystem?  What does it encompass?  And, how does the program differentiate TSMC from other foundries?  At the recent OIP Forum in Santa Clara, Suk Lee, Senior Director, TSMC Design Infrastructure Management Division, presented a review of OIP, highlighting the breadth of technology and support available for TSMC process nodes.  Additionally, and very significantly, he described the extensive engineering investment made to develop and qualify new process design kit (PDK) and IP offerings.  Here are some of the highlights of Suk’s presentation.

Figure 1.  OIP partner overview  (Source:  TSMC)

OIP entails the collaboration between TSMC and numerous partners, spanning a range of technical facets of silicon foundry support:

  • EDA tool providers
  • IP developers
  • Design Center Alliance (DCA) providers, offering services ranging from system-level front-end design to back-end physical/test implementation
  • Value Chain Aggregators (VCA), additional service providers commonly offering a different set of capabilities, including assembly/test and supply chain management support

and, the newest aspect of OIP partnership, announced last year with the introduction of the OIP Virtual Design Environment (OIP VDE) to lower the barriers to cloud adoption for customers of all sizes:

  • Cloud service providers

To perhaps better distinguish between the DCA and VCA partners, Suk overlaid the previous chart on top of the Design Enablement and Process Development groups at TSMC – see the figure below.

 Figure 2.  OIP partner interactions with TSMC R&D  (Source:  TSMC)

Yet, these OIP designations are much more than companies enrolling with TSMC.  Suk emphasized the qualification requirements and ongoing monitoring to which each OIP partner is subjected, as illustrated below.

Figure 3.  Examples of OIP partner qualification requirements  (Source:  TSMC)

From TSMC’s own derivative of ISO9000 quality management and assurance (aka, TSMC9000) to qualification of process interconnect technology definitions to ongoing certification of OIP service providers, there is a major emphasis on ensuring foundry customer success.

Perhaps the best illustration of OIP collaboration is the activity pursued with EDA partners during new process development.  This activity ensures new tool features and full methodology reference flow capabilities are qualified concurrently with initial process availability for IP developers and early end customer adopters.  Suk provided examples where new EDA tool attributes were defined, developed, and integrated into reference flows, driven by both process innovations (e.g., EUV lithography) and design reliability and manufacturability (e.g., via pillars, statistical analysis).  The figures below illustrate how the digital and custom design flows from EDA OIP partners were enhanced in support of these advanced process requirements.

Figure 4.  New EDA tool features addressing process requirements – digital flows  (Source:  TSMC)

Figure 5.  New EDA tool features addressing process requirements – custom implementation flows  (Source:  TSMC)

Engineers are skeptical by nature, seeking a silicon-proven demonstration of models, EDA tool features, and reference flows.  Suk highlighted the specific collaboration with ARM – for nearly a decade, TSMC and ARM have used leading ARM core IP as a testchip vehicle.  Full EDA (and IP) qualification reports are available at TSMC’s customer portal. On the day of OIP Forum, the two companies also announced the latest result of their collaboration, an industry-first 7nm silicon-proven chiplet system based on multiple Arm® cores and leveraging TSMC’s Chip-on-Wafer-on-Substrate (CoWoS®) advanced packaging solution.

 

Figure 6.  Screenshot of example EDA qualification reports on the TSMC portal   (Source:  TSMC)

With all the news relative to new process node announcements, it is easy to overlook the underlying activities and resources required to synchronize process availability with the companies supporting the related EDA, IP, and service provider ecosystem.  Suk’s presentation reminded the OIP Ecosystem Forum audience of the tremendous investment made as part of the effort to sustain Moore’s Law (for at least another node or two).  The focus that TSMC has applied to design enablement has truly been a differentiating characteristic of their corporate philosophy.


TSMC OIP Overview and Agenda!

TSMC OIP Overview and Agenda!
by Daniel Nenni on 09-05-2019 at 6:00 am

The TSMC Symposium and OIP Ecosystem Forum are the most coveted events of the year for the fabless semiconductor ecosystem, absolutely. In my 35 years of semiconductor experience never has there been a more exciting time in the ecosystem and that is clear by the overview and agenda for this year’s event. I hope to see you there:

The TSMC OIP Ecosystem Forum brings together TSMC’s design ecosystem companies and our customers to share practical, tested solutions to today’s design challenges. Success stories that illustrate TSMC’s design ecosystem best practices highlight the event.

More than 90% of last year’s attendees said that, “the forum helped me better understand TSMC’s Open Innovation Platform” and that “I found it effective to hear directly from TSMC OIP member companies.”

This year’s event will prove equally valuable as you hear directly from TSMC OIP companies about how to apply their technologies and joint design solutions to address your design challenges in High-Performance Computing (HPC), Mobile, Automotive and Internet-of-Thing (IoT) applications.

This year, the forum is a day-long conference kicking off with trend-setting addresses and announcements from TSMC and leading IC design company executives.

The technical sessions are dedicated to 30 selected technical papers from TSMC’s EDA, IP, Design Center Alliance and Value Chain Aggregator member companies. And the Ecosystem Pavilion features up to 70 member companies showcasing their products and services.

Date: Thursday, September 26, 2019 8:00 AM – 6:30 PM

Venue: Santa Clara Convention Center

Learn About:

  • Emerging advanced node design challenges including 5nm, 6nm, 7nm, 12FFC/16FFC, 16FF+, 22ULP/ULL, 28nm, and ultra-low power process technologies.
  • Updated design solutions for specialty technologies supporting Internet-of-Thing (IoT) applications
  • Successful, real-life applications of design technologies and IP from ecosystem members and TSMC customers
  • Ecosystem-specific TSMC reference flow implementations
  • New innovations for next generation product designs targeting HPC, mobile, automotive and IoT applications

Hear directly from ecosystem companies about their TSMC-specific design solutions. Network with your peers and more than 1,000 industry experts and end users.

The TSMC Open Innovation Platform Ecosystem Forum is an “invitation-only” event.  Please register to attend. We look forward to seeing you at the event.

The views expressed in the presentations made at this event are those of the speaker and are not necessarily those of TSMC.

Agenda:

Join the TSMC 2019 Open Innovation Platform® Ecosystem Forum and hear directly from TSMC OIP companies about how to apply their technology to your design challenges!

08:00 – 09:00 Registration & Ecosystem Pavilion

09:00 – 09:20 Welcome Remarks

09:20 – 10:10 TSMC and its Ecosystem for Innovation

10:10 – 10:30 Coffee Break


The technical sessions run in three parallel tracks: HPC & 3DIC, Mobile & Automotive, and IoT & RF.

10:30 – 11:00
  • HPC & 3DIC: TSMC 3DIC Design Enablement Updates (TSMC)
  • Mobile & Automotive: TSMC EDA & IP Design Enablement Updates (TSMC)
  • IoT & RF: TSMC RF Design Enablement Updates (TSMC)

11:00 – 11:30
  • HPC & 3DIC: Calibre in the Cloud – A Case study with AMD, Mentor & TSMC (Microsoft/AMD/Mentor)
  • Mobile & Automotive: Functional Safety Analysis and Verification to meet the requirements of the Automotive market (Texas Instruments/Cadence)
  • IoT & RF: Simplify Energy Efficient designs with cost-effective SoC Platform (Dolphin Design)

11:30 – 12:00
  • HPC & 3DIC: Optimizing FPGA-HBM in InFO_MS Structure (Xilinx/Cadence)
  • Mobile & Automotive: Thermal-induced reliability challenge and solution for advanced IC design (ANSYS)
  • IoT & RF: Flexible clocking solutions in advanced FinFET processes from 16nm to 5nm (Silicon Creations)

12:00 – 13:00 Lunch & Ecosystem Pavilion

13:00 – 13:30
  • HPC & 3DIC: Chiplets solutions using CoWoS and InFO with 112Gbps SerDes and HBM2E/3.2Gbps for AI, HPC and Networking (GUC)
  • Mobile & Automotive: Overcome time-to-market and resource challenges: Hierarchical DFT for advanced node SoC design and production (AMD/Mentor)
  • IoT & RF: Developing AI-based Solutions for Chip Design (Synopsys)

13:30 – 14:00
  • HPC & 3DIC: Realizing Adaptable Compute Platform for AI/ML and 5G with Synopsys’ Fusion Design Platform (Xilinx/Synopsys)
  • Mobile & Automotive: Comprehensive ESD/Latch-up reliability verification for IP & SoC Designs (NXP/Silicon Frontline/Mentor)
  • IoT & RF: Reliable, Secure and Flexible OTP solutions in TSMC for IoT, AI and Automotive Applications (eMemory)

14:00 – 14:30
  • HPC & 3DIC: HBM2E 4gbps I/O Design Techniques in 7nm & Below Nodes (Open-Silicon)
  • Mobile & Automotive: Sensor fusion and ADAS SOC designs in TSMC 16FFC and N7 (Cadence)
  • IoT & RF: High-Speed Interface IP PAM-4 56G/112G Ethernet PHY IP for 400G and Beyond Hyperscale Data Centers (Synopsys)

14:30 – 15:00
  • HPC & 3DIC: Pushing 3GHz Performance of TSMC N7 Arm Neoverse N1 CPU using the Cadence Digital Flow (Cadence/Arm)
  • Mobile & Automotive: AWS Cloud enablement of design characterization flows using Synopsys® Primetime® & HSPICE® (Xilinx/Synopsys)
  • IoT & RF: Automotive IP Functional Safety – A Verification Challenge (Cadence)

15:00 – 15:30 Coffee Break & Ecosystem Pavilion

15:30 – 16:00
  • HPC & 3DIC: Large Scale Silicon Photonic Interconnects for Mass Market Adoption (HPE/Mentor)
  • Mobile & Automotive: A New Era of MIPI D-PHY and C-PHY: Automotive Applications (M31)
  • IoT & RF: Best practices for Arm Cortex CPU energy efficient implementation flows (Arm)

16:00 – 16:30
  • HPC & 3DIC: Photonics Coming of Age: From Laboratory to Mainstream Applications (Cadence/Lumerical)
  • Mobile & Automotive: Integrating ADAS Controllers with Automotive-Grade IP for TSMC N7 (Synopsys)
  • IoT & RF: The Challenges Posed by Dynamic Uncertainty on AI & ML Devices Targeting 16nm, 7nm & 5nm (Moortec)

16:30 – 17:00
  • HPC & 3DIC: Accelerating Semiconductor Design Flows with EDA on the Cloud (Astera Labs/AWS)
  • Mobile & Automotive: Arm automotive physical IP addresses new feature and functionality demands (Arm)
  • IoT & RF: Developing AI Accelerators for TSMC N7 (Synopsys)

17:00 – 17:30
  • HPC & 3DIC: Best Practices using Synopsys Fusion Technology to Achieve High-performance, Energy Efficient implementations of the latest Arm® Processors in TSMC 7nm FinFET Process Technology (Synopsys/Arm)
  • Mobile & Automotive: Cloud-based Characterization with Cadence Liberate Trio Characterization Suite and Arm-based Graviton (Cadence)
  • IoT & RF: Optimize SOC designs while enabling faster tapeouts by closing chip integration DRC issues early in the design cycle (MaxLinear/Mentor)

17:30 – 18:30 Social Hour

 

TSMC pioneered the pure-play foundry business model when it was founded in 1987, and has been the world’s largest dedicated semiconductor foundry ever since. The company supports a thriving ecosystem of global customers and partners with the industry’s leading process technology and portfolio of design enablement solutions to unleash innovation for the global semiconductor industry.

TSMC serves its customers with annual capacity of about 12 million 12-inch equivalent wafers in 2019 from fabs in Taiwan, the United States, and China, and provides the broadest range of technologies from 0.5 micron all the way to the foundry industry’s most advanced process, which is 7-nanometer today. TSMC is the first foundry to provide 7-nanometer production capabilities, and is headquartered in Hsinchu, Taiwan.



TSMC in the Cloud Update #56thDAC 2019

TSMC in the Cloud Update #56thDAC 2019
by Daniel Nenni on 06-13-2019 at 10:00 am

During my Taiwan visit, prior to Las Vegas, I was fortunate to spend time with Willy Chen and Vivian Jiang to prepare for the cloud panel I moderated at #56thDAC. Willy and Vivian are part of the ever-important Design Infrastructure Marketing Division of TSMC, which includes the internal and external cloud efforts. TSMC first announced their external cloud offering last year (“TSMC announces initial availability of Design-in-the-Cloud via OIP VDE and OIP Ecosystem Partners”), has made follow-up announcements with all of the key vendors, and participated in multiple cloud panels last week in Las Vegas. Make no mistake, TSMC is a semiconductor cloud pioneer, absolutely.

There are however a couple of things I would like to point out as an objective semiconductor cloud insider. I first heard of TSMC seriously considering the cloud more than 10 years ago. Back then the big hurdle was customer security and having been through TSMC’s security protocol for EDA and IP vendors many times I can tell you TSMC is all about security. But TSMC is also all about enabling customers of all types and getting high quality wafers to the masses and today that means cloud.

Another interesting point in the semiconductor cloud transformation is that systems companies are driving the leading edge foundry business instead of traditional fabless chip companies. Some of these systems companies are actually cloud-based companies (Google, Microsoft, Amazon, and Facebook) so there is no security concern there. In fact, cloud security is above and beyond anything we have ever seen in the semiconductor industry and TSMC knows this by direct experience with their cloud customers.

As more systems companies use the cloud for chip design the fabless companies have no choice but to follow. The cloud company chip designers are the extreme case. They can do simulations and verification in hours versus days or weeks. Imagine being able to run a SPICE simulation or characterization run in an hour versus overnight.

As I mentioned before, investment in fabless chip companies more than tripled in 2017 and doubled again in 2018. Similar to the fabless transformation, where semiconductor companies no longer had to build fabs, today’s fabless companies don’t have to buy computers and tools, they just go to the cloud where TSMC and the EDA vendors are already waiting for them, absolutely.

One of the more interesting cloud events at #56thDAC was the Mentor Calibre Luncheon (FREE FOOD). SemiWiki Blogger Tom Simon sat in front of me and will blog this in more detail, so spoiler alert: Willy Chen was on the panel and he talked about TSMC cutting down the DRC runtime of an N5 testchip from 24 hours to 4 hours using Azure Cloud.  AMD was on the panel and they talked about doing the same thing with their N7 products scaling to 4000 CPU cores using a Microsoft Azure VM (which is an AMD EPYC-based server).
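Taking the panel numbers at face value, the N5 DRC run went from 24 hours to 4 hours, a 6x reduction in turnaround time. A small sketch of that comparison (the core counts behind TSMC’s runs were not given, so only the elapsed times are used):

```python
# Turnaround-time comparison for the N5 DRC example quoted from the Calibre panel.
# Only the elapsed times were given, so this is just the speedup arithmetic.

on_prem_hours = 24
cloud_hours = 4

speedup = on_prem_hours / cloud_hours
print(f"Cloud DRC speedup: {speedup:.1f}x")                    # 6.0x
print(f"Hours saved per full-chip run: {on_prem_hours - cloud_hours}")
```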

Admittedly the AMD presentation was a little self-serving but my takeaway was that AMD partnering with TSMC and pivoting to the cloud for chip design before their much bigger competitors do is a VERY big deal.


#56DAC – What’s New with Custom Design Platform

#56DAC – What’s New with Custom Design Platform
by Daniel Payne on 06-12-2019 at 10:00 am


TSMC attends DAC every year and they do something very savvy, it’s a theatre where they invite all of their EDA and IP partners to present something of interest, followed by a drawing for a prize. At the end of the day they even have a nice prize, like a MacBook Air, which I didn’t win. On Wednesday I watched Dave Reed of Synopsys present an update on the Custom Design Platform.

Dave Reed, Synopsys

Tools in the Custom Design Platform include Custom Compiler, the FineSim and HSPICE circuit simulators, and tightly integrated DRC and extraction capabilities (more on those below).

Back in April they announced that these tools were certified by TSMC for the 5nm FinFET process node, which is always a big deal for IC design teams pushing the bleeding edge because you need your EDA tools ready.

The integration between these tools is tight, so the phrase used is DRC Fusion or Extraction Fusion, because users don’t want to wait hours streaming data out of one tool and making it compatible with the next tool in the flow. Adoption of the Custom Compiler tool increased this past year, now with 100 logos and 3,000 users, and all internal Synopsys IP designs use their own tools.

Analog IC designers know that layout parasitics will affect the performance, accuracy and reliability of their circuits, so the Synopsys flow allows for early use of parasitic estimates, followed by partial extraction and fully extracted netlists.

Each new, smaller process node has an increase in circuit simulations and increase in parasitics, so FineSim can be used to simulate circuits like SERDES, PLL and ADC, now about 3X faster than before. Plus, they’ve added RF simulation to FineSim, so you have another choice than HSPICE.

Automating the layout of analog design is a noble quest, tried by many vendors in the past, so Synopsys continues with a template-based approach where expert layout designers capture their best practices, allowing for transistor size changes. All this is done without having to write code or become a computer science major. I asked Dave about the CiraNova technology that they acquired years ago, and it’s still being used under the hood for layout automation.

IC designers used to work in either a digital or an analog environment, with tedious file interchanges between them, but not anymore, because Synopsys allows seamless use between the digital flow of IC Compiler II and the analog flow of Custom Compiler. DRC checking can be done either in batch mode or even interactively, saving you time.

Summary

If the Custom Design Platform can be used internally at Synopsys for creating all of their own IP in TSMC nodes at 28nm all the way down to 5nm, then it’s going to work for your project too. This is a competitive market segment and Synopsys keeps plugging away, year by year, making it easier to reach design closure through clever automation.

Related Blogs


In Their Own Words: TSMC and Open Innovation Platform

In Their Own Words: TSMC and Open Innovation Platform
by Daniel Nenni on 06-01-2019 at 8:00 am

TSMC, the largest and most influential pure-play foundry, has many fascinating stories to tell. In this section, TSMC covers some of their basic history, and explains how creating an ecosystem of partners has been key to their success, and to the growth of the semiconductor industry.

The history of TSMC and its Open Innovation Platform (OIP)® is, like almost everything in semiconductors, driven by the economics of semiconductor manufacturing. Of course, ICs started 50 years ago at Fairchild (very close to where Google is headquartered today; these things go in circles). The planar process, whereby a wafer (just 1” originally) went through each process step as a whole, led to mass production. Other companies such as Intel, National, Texas Instruments and AMD soon followed and started the era of the Integrated Device Manufacturer (although we didn’t call them that back then, we just called them semiconductor companies).

The next step was the invention of ASICs with LSI Logic and VLSI Technology as the pioneers. This was the first step of separating design from manufacturing. Although the physical design was still done by the semiconductor company, the concept was executed by the system company. Perhaps the most important aspect of this change was not that part of the design was done at the system company, but rather the idea for the design and the responsibility for using it to build a successful business rested with the system company, whereas IDMs still had the “if we build it they will come” approach, with a catalog of standard parts.

In 1987, TSMC was founded and the separation between manufacture and design was complete. One missing piece of the puzzle was good physical design tools and Cadence was created in 1988 from the merger of SDA and ECAD (and soon after, Tangent). Cadence was the only supplier of design tools for physical place and route at the time. It was now possible for a system company to buy design tools, design their own chip and have TSMC manufacture it. The system company was completely responsible for the concept, the design, and selling the end-product (either the chip itself or a system containing it). TSMC was completely responsible for the manufacturing (usually including test, packaging and logistics too).

At the time, the interface between the foundry and the design group was fairly simple. The foundry would produce design rules and SPICE parameters for the designers; the design would be given back to the foundry as a GDSII file and a test program. Basic standard cells were required, and these were available on the open market from companies like Artisan, or some groups would design their own. Eventually TSMC would supply standard cells, either designed in-house or from Artisan or other library vendors (bearing an underlying royalty model transparent to end users). However, as manufacturing complexity grew, the gap between manufacturing and design grew too. This caused a big problem for TSMC: there was a lag from when TSMC wanted to get designs into high volume manufacturing and when the design groups were ready to tape out. Since a huge part of the cost of a fab is depreciation on the building and the equipment, which is largely fixed, this was a problem that needed to be addressed.

At 65 nm TSMC started the Open Innovation Platform (OIP) program. It began at a relatively small scale but from 65 nm to 40 nm to 28 nm the amount of manpower involved went up by a factor of 7. By 16 nm FinFET, half of the design effort is IP qualification and physical design because IP is used so extensively in modern SoCs. OIP actively collaborated with EDA and IP vendors early in the life-cycle of each process to ensure that design flows and critical IP were ready early. In this way, designs would tape-out just in time as the fab was starting to ramp, so that the demand for wafers was well-matched with the supply.

In some ways the industry has gone a full circle, with the foundry and the design ecosystem together operating as a virtual IDM. The existence of TSMC’s OIP program further sped up disaggregation of the semiconductor supply chain. Partly, this was enabled by the existence of a healthy EDA industry and an increasingly healthy IP industry. As chip designs had grown more complex and entered the SoC era, the amount of IP on each chip was beyond the capability or the desire of each design group to create. But, especially in a new process, EDA and IP qualification was a problem.

On the EDA side, each new process came with some new discontinuous requirements that required more than just expanding the capacity and speed of the tools to keep up with increasing design size. Strained silicon, high-K metal gate, double patterning and FinFETs each require new support in the tools and designs to drive the development and test of the innovative technology.

On the IP side, design groups increasingly wanted to focus all their efforts on parts of their chip that differentiated them from their competition, and not on re-designing standard interfaces. But that meant that IP companies needed to create the standard interfaces and have them validated in silicon much earlier than before.

The result of OIP has been to create an ecosystem of EDA and IP companies, along with TSMC’s manufacturing, to speed up innovation everywhere. Because EDA and IP groups need to start work before everything about the process is ready and stable, the OIP ecosystem requires a high level of cooperation and trust.

When TSMC was founded in 1987, it really created two industries. The first, obviously, is the foundry industry that TSMC pioneered before others entered. The second was the fabless semiconductor companies that do not need to invest in fabs. This has been so successful that two of the top 10 semiconductor companies, Qualcomm and Broadcom, are fabless and all the top FPGA companies are fabless.

The foundry/fabless model largely replaced IDMs and ASIC. An ecosystem of co-operating specialist companies innovates fast. The old model of having process, design tools and IP all integrated under one roof has largely disappeared, along with the “not invented here” syndrome that slowed progress since ideas from outside the IDMs had a tough time penetrating. Even some of the earliest IDMs from the “Real men have fabs” era have gone “fab lite” and use foundries for some of their capacity, typically at the most advanced nodes.

Legendary TSMC Chairman Morris Chang’s “Grand Alliance” is a business model innovation of which OIP is an important part, gathering all the significant players together to support customers—not just EDA and IP, but also equipment and materials suppliers, especially for high-end lithography.

Digging down another level into OIP, there are several important components that allow TSMC to coordinate the design ecosystem for their customers.

  • EDA: the commercial design tool business flourished when designs got too large for hand-crafted approaches and most semiconductor companies realized they did not have the expertise or resources in-house to develop all their own tools. This was driven more strongly in the front-end with the invention of ASIC, especially gate-arrays; and then in the back-end with the invention of automated place and route.
  • IP: this used to be a niche business with a mixed reputation, but now is very important with companies like ARM, Imagination, CEVA, Cadence, and Synopsys, all carrying portfolios of important IP such as microprocessors, DDRx, Ethernet, flash memory and so on. In fact, large SoCs now contain over 50%, and sometimes as much as 80%, third-party IP. TSMC has well over 5,500 qualified IP blocks for customers.
  • Services: design services and other value-chain services calibrated with TSMC process technology help customers maximize efficiency and profit, getting designs into high volume production rapidly.
  • Investment: TSMC and its customers invest over $12 billion a year. TSMC and its OIP partners alone invest over $1.5 billion. On advanced lithography, TSMC has further invested $1.3 billion in ASML.

Processes are continuing to get more advanced and complex, and the size of a fab that is economical also continues to increase. This means that collaboration needs to increase as the only way to both keep costs in check and ensure that all the pieces required for a successful design are ready just when they are needed.

TSMC has been building an increasingly rich ecosystem for over 25 years and feedback from partners is that they see benefits sooner and more consistently than when dealing with other foundries. Success comes from integrating usage, business models, technology and the OIP ecosystem so that everyone succeeds. There are a lot of moving parts that all have to be ready. It is not possible to design a modern SoC without design tools, more and more SoCs involve more and more 3rd party IP, and, at the heart of it all, the process and the manufacturing ramp with its associated yield learning all needs to be in place at TSMC.

The proof is in the numbers. Fabless growth in 2013 is forecasted to be 9%, over twice the increase in the overall industry at 4%. Fabless has doubled in size as a percentage of the semiconductor market from 8% to 16% during a period when the growth in the overall semiconductor market has been unimpressive. TSMC’s contribution to semiconductor revenue grew from 10% to 17% over the same period.

The OIP ecosystem has been a key pillar in enabling this sea change in the semiconductor industry.

TSMC 2019 Update

2019 TSMC Technology Symposium Review Part I

TSMC Technology Symposium Review Part II

 

Global Unichip Corporation

Another facet of TSMC is GUC, Global Unichip Corporation. It is a partially owned subsidiary and also an important partner, providing design services and allowing TSMC themselves to continue to be a pure-play foundry. GUC was founded in 1998 with 10 employees as what has come to be known as a “Design Service” company. It ramped fast and by 2000 it employed over 100 people.

The years between 2003 and 2010 were milestone years for GUC, representing a period of unprecedented growth. The era was marked by a strengthening of both business and technology relationships with the largest semiconductor foundry in the world, TSMC. That relationship set GUC on firm growth, bringing over the core of today’s management and the business strategy that guides the company today.

In 2003, TSMC assumed an ownership stake in GUC. But the foundry leader’s involvement went far beyond financial investment. Part of its strategy to enhance the return on its investment was to move GUC to a global business strategy and put it on the road to being an advanced technology leader.

The technology model, and the business model that accompanied it, soon began to gain traction. Prior to 2003, much of GUC’s business came from the consumer electronics companies who tended to utilize more mature technologies and were primarily located in Taiwan. With the installation of new management, and new business and technology models, business emphasis began to migrate to the more technically sophisticated networking and communications sectors that required more advanced technologies. In 2004, 100% of GUC’s revenue was at  the 0.13 µm technology node; by 2005, 5% of revenue came from the new 90 nm node and a year later, an additional 3% of revenue came from the emerging 65 nm node.

The impact of this trend was soon seen in the company finances. Revenue jumped from $20 million in 2002 to $27 million in 2003, $32 million in 2004, and a whopping $48 million in 2005—more than doubling the revenue over a four year period.

The year 2006 marked another major milestone. In the third quarter of that year, GUC became a publicly traded company when it offered its shares on the Taiwan Stock Exchange.

The operations side instituted a major focus on advanced technology. In 2007, the company developed an advanced technology digital design flow, followed shortly thereafter by a low power design flow. As a result, the company saw a large increase in the size of their designs, many with gate counts jumping exponentially. In the face of an industry-wide recession in 2009, GUC showed its confidence in the future by investing heavily in internal IP development, in particular, IP targeting the networking market segment.

This era of prosperity was reflected by a broad set of indices. Annual revenue in 2006 more than doubled that of 2005 ($48 million) at $103 million, then more than doubled again in 2007 to $216 million. In 2008, revenue jumped to $295 million before falling during the recession of 2009 to $252 million. In 2008, GUC saw a jump in revenue from advanced technology to 21%, rising to 34% in 2009, with 1% of that coming from leading edge 40 nm products.

But the company would rebound quickly in 2010, posting revenues of $327 million with advanced technologies accounting for 42% of that total. The year also proved auspicious. Driven by the recession to examine its business model, GUC would begin making a series of strategic decisions that would allow it to capitalize on a new era of semiconductor device design.

The company’s growth as an innovative force in the semiconductor industry is also reflected in the number of new employees required to implement increasingly complex technologies. At the end of 2003, GUC employed 132 people, most of them in Taiwan. Three years later, that number had more than doubled to 287 and by the start of 2010, the company counted 484 employees, a number that has held relatively steady through 2013. Employee growth was fueled by expanded geographic growth. GUC opened its first international office when it established a subsidiary in North America (GUC N.A.) in February of 2004 and then opened its Japan office in June of 2005. Nearly three years later, in May of 2008, the company opened its third international office, GUC Europe, in Amsterdam, The Netherlands and one month later opened an office in Korea. GUC entered the fast-growing China market when it opened an office in one of that country’s technology hubs near Shanghai in 2009.

Success in the semiconductor industry going forward is going to be heavily weighted by the ability to leverage the industry’s third-party infrastructure, which has now matured. Foundries are at the leading edge of the infrastructure, providing the most advanced process technologies, as well as specialized technologies, to all comers. IP and chip design implementation are also being outsourced, cost-effectively utilizing technology and financial resources.

It is in this new and exciting environment that GUC evolved the Flexible ASIC Model, which is designed to provide the most effective, efficient and flexible path to semiconductor innovation.

The Flexible ASIC Model is a response to both the business and technical challenges facing today’s semiconductor companies. This model allows companies to allocate their resources more efficiently. It brings together design expertise, systems knowledge and manufacturing resources to efficiently drive delivery of the final packaged IC. The model’s basic strategy is to spread design risks and to minimize IDM, fabless and OEM (Original Equipment Manufacturers) upfront semiconductor-related fiscal and human capital investments. The goal is to increase the efficiency of the entire value chain, from concept to delivery; to shorten all phases of the development cycle; and to ultimately increase device yield quality and reliability.

At the heart of the Flexible ASIC Model is integrated manufacturing. GUC has made a strategic choice to work exclusively with TSMC, the semiconductor industry’s leading foundry service company. It is this relationship that plays an integral role in the company’s ability to achieve early advanced technology access and match designs to manufacturing resources.

Spotlight: Dr. Morris Chang

Dell changed the way personal computers are manufactured and sold. Starbucks changed the amount we would pay for a cup of coffee. eBay took the yard sale out of our yards. TSMC took the semiconductor manufacturing costs off our balance sheets and out of our capital investments.

It’s hard to overstate the impact that Dr. Morris Chang, Founder, Chairman, and until recently CEO of TSMC, has had on the industry. He has been influential as a leader in business model innovation, and has earned his company roughly 50% of the foundry market share.

Chang left his native China in 1949, moving to the US to attend Harvard University. He soon transferred to MIT as he followed his interest in technology. After earning his MS in 1953 from MIT’s mechanical engineering graduate school, Morris went directly into the semiconductor industry at the process level with Sylvania Semiconductor and was quickly moved to management.

Chang moved to Texas Instruments in 1958, where he stayed for 25 years, rising to VP of the worldwide semiconductor business (and also earned a PhD in electrical engineering from Stanford in 1964). At TI, he worked on a four-transistor project for IBM in which the design was done by IBM and the manufacturing by TI, thus engaging in one of the early semiconductor foundry relationships. Also at TI, Chang developed a new model of semiconductor pricing that sacrificed early profits to gain market share and to achieve manufacturing yields that would lead to higher long-term profits.

Chang left TI in 1983 and did a short stint at General Instrument Corporation. He then moved to Taiwan to head the Industrial Technology Research Institute (ITRI), which led to the founding of TSMC.

Chang noticed in the early 1980s, while at TI and GI, that top engineers were leaving and forming their own semiconductor companies. Unfortunately, the heavy capital requirement of semiconductor manufacturing was a gating factor. The cost back then was $5-10 million to start a semiconductor company without manufacturing and $50-100 million to start a semiconductor company with manufacturing. Some of these start-ups used excess capacity from IDMs but were subjected to uncertainties in foundry capacity and sometimes had to buy wafers from a competitor. Around this time, 1985, the first truly fabless startups, like Xilinx and Chips and Technologies, were launched and doing well.

It was in 1987, within this nascent fabless environment, that Chang started TSMC. Although TSMC started two process nodes behind where semiconductor manufacturers (IDMs) were at the time, they had the advantage of being a pure-play foundry, not a competitor. Their focus was on their customers.

Morris Chang made the first TSMC sales calls with a single brochure: TSMC Core Values: Integrity, Commitment, Innovation, Partnership. Four or five years later, TSMC was only behind by one process node and the orders started pouring in. In 10 years, TSMC caught up with IDMs (except for Intel) and the fabless semiconductor industry blossomed, enabling a whole new era of semiconductor design and manufacturing. In the last 20 years, and still today, even the remaining IDMs are being forced to go fabless or fab-lite at 28 nm and below due to high costs and daunting technical challenges.

Dr. Morris Chang in 2007.

Dr. Morris Chang turned 82 on July 10th, 2013. He is still running TSMC full time as the founding Chairman. He works from 8:30 am to 6:30 pm like most TSMC employees and says that a successful company life cycle is: rapid expansion, a period of consolidation, and maturity. The same could be said about Chang himself.

2019 Update: Dr. Morris Chang

In 2017 the TSMC Museum of Innovation opened under Fab 12 in Hsinchu, Taiwan. It not only commemorates the history of semiconductors and TSMC but also the life of Dr. Morris Chang. Morris Chang’s wife Sophie was actively involved in this project:

The TSMC Museum of Innovation encompasses three exhibition galleries: “A World of Innovation,” “Unleashing Innovation,” and “Dr. Morris Chang, TSMC Founder.” Through interactive technology, digital content, and historical documents we will learn about the pervasiveness of ICs in our daily lives and about their continued advancement. In addition, we will learn how ICs are making our lives more fulfilling and how they are driving technology beyond our imagination. We will also learn how TSMC contributes to global IC innovation and to Taiwan’s economy.

In 2018 Dr. Morris Chang retired from TSMC for the second and final time:

TSMC Dr. Morris Chang Announces Retirement in June 2018. Future Dual Leadership Will Be Mark Liu as Chairman And C.C. Wei as CEO.

Issued by: TSMC Issued on: 2017/10/02

Hsinchu, Taiwan, R.O.C. – Oct. 2, 2017 – TSMC Chairman Morris Chang today announces: “I will retire from the Company immediately after the Annual Shareholders Meeting in early June, 2018. I will not be a director in the next term of the board of directors. Nor will I participate in any TSMC management activities after the Annual Shareholders Meeting in early June, 2018. From early June, 2018 on, TSMC will be under the dual leadership of Dr. Mark Liu and Dr. C.C. Wei. Dr. Mark Liu will be the Chairman of the Board, and Dr. C.C. Wei will be the Chief Executive Officer. All present directors of the board have agreed to be nominated, and if elected, serve as directors of the board during the next term. They have also agreed to support the aforementioned dual leadership of the Company under Drs. Liu and Wei.” Chairman Morris Chang further said, “The past 30-odd years, during which I founded and devoted myself to TSMC, have been an extraordinarily exciting and happy phase of my life. Now, I want to reserve my remaining years for myself and my family. Mark and CC have been Co-CEO’s of the Company since 2013 and have performed outstandingly. After my retirement, with the continued supervision and support of an essentially unchanged board, and under the dual leadership of Mark and CC, I am confident that TSMC will continue to perform exceptionally.”


400G Ethernet test chip tapes-out at 7nm from eSilicon

400G Ethernet test chip tapes-out at 7nm from eSilicon
by Tom Simon on 05-24-2019 at 10:00 am

Since the beginning of May eSilicon has announced the tape-out of three TSMC 7nm test chips. The first of these, a 7nm 400G Ethernet Gearbox/Retimer design, caught my eye and I followed up with Hugh Durdan, their vice president of strategy and products, to learn more about it. Rather than just respin their 56G SerDes, they decided to add the 112G SerDes, and at the same time use this vehicle for several other objectives. The gearbox in this chip contains 8 lanes of 56G and 4 lanes at 112G, allowing it to handle 400G Ethernet traffic. More than just showing that the SerDes work at 7nm, the configuration allows them to demonstrate a number of other things as well.
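As a quick check on the lane math, 8 lanes at 56G and 4 lanes at 112G both come to roughly 448 Gb/s of raw SerDes bandwidth on each side of the gearbox, leaving headroom above the 400 Gb/s Ethernet payload for encoding and FEC overhead. A minimal sketch of that arithmetic:

```python
# Raw SerDes bandwidth on each side of the 400G gearbox described above.
# The headroom over 400 Gb/s covers line encoding and FEC overhead.

side_a_gbps = 8 * 56    # eight 56G lanes
side_b_gbps = 4 * 112   # four 112G lanes

print(f"56G side:  {side_a_gbps} Gb/s")   # 448 Gb/s
print(f"112G side: {side_b_gbps} Gb/s")   # 448 Gb/s
print(f"Headroom over 400G payload: {side_a_gbps - 400} Gb/s")
```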

In our call, Hugh mentioned that they chose to work with Precise-ITC who develops IP for Ethernet and Optical Transport Network (OTN). They saw this as an opportunity to combine eSilicon interface IP with 3rd party IP to go through the process of integration and ensure that their StarDesigner 7nm flow was working as they expected. In essence this is a pipe cleaner of their SOC flow for 7nm.

Precise-ITC contributed a Forward Error Correction (FEC) block, Media Access Controller (MAC) and the Gearbox block. Having higher level functionality offers increased confidence in each element of the test chip. Hugh pointed out that this is a chip that customers can actually use as they evaluate eSilicon’s offering. The chip will feature long-reach operation and use only around 5W for the entire gearbox.

Designing at 7nm is even more difficult than at previous nodes. Lithography requirements impose many new restrictions on the layout. This makes designing chips with analog content challenging. Another aspect of the design that plays a critical role in the success of a chip like this is the packaging. Hugh told me that they used this opportunity to anticipate the complexity of designs with a much higher lane count by adding a more complex package design for some of the lanes. They also have the ability to inject noise during testing to ensure that the SerDes will perform in larger and more complex environments.

eSilicon is expecting to get silicon back in their lab by Q3 in 2019. They will make a test board that customers can use to put the SerDes and Ethernet related IP through its paces. The 112G SerDes will open the doors to continued development of Terabit Ethernet, which is becoming necessary with the explosion of data center throughput requirements.

eSilicon has consistently expended resources to stay at the leading edge of SOC technology. Their other May test chips included HBM and AI/ML designs all at 7nm. At the same time their partnerships will make life easier for their customers who are going to want to add advanced functionality to their designs. Test chips like this are a win for eSilicon, TSMC, Precise-ITC and their customers. We can eagerly await the return of silicon from this and their other test chips to learn more about how 7nm will perform in the wild. For more details, refer to the announcement on their website.


An evolution in FPGAs

An evolution in FPGAs
by Tom Simon on 05-24-2019 at 5:00 am

Why does it seem like current FPGA devices work very much like the original telephone systems with exchanges where workers connected calls using cords and plugs? Achronix thinks it is now time to jettison Switch Blocks and adopt a new approach. Their motivation is to improve the suitability of FPGAs to machine learning applications, which means giving them more ASIC-like performance characteristics. There is, however, more to this than just updating how data is moved around on the chip.

Achronix has identified three aspects of FPGAs that need to be improved to make them the preferred choice for implementing machine learning applications. Naturally, they will need to retain their hallmark flexibility and adaptability. The three architecture requirements for efficient data acceleration are compute performance, data movement and memory hierarchy. Achronix took a step back and looked at each element in order to recreate how programmable logic should work in the age of machine learning. Their new Speedster 7t is the result. Their goal was to break the historical bottlenecks that have reduced FPGA efficiency. They call the result FPGA+.

Built on TSMC’s 7nm node, these new chips have several important innovations. Just as all our phone calls are now routed with packet technology, Achronix’s Speedster 7t will use a two-dimensional arrayed network on chip (NoC) to move data between the compute elements, memories and interfaces. The NoC is made up of a grid of master and slave Network Access Points (NAPs). Each row/column operates at 256 bits at 2.0 Gbps, a combined 512 Gbps. This puts device-level bandwidth in the range of 20 Tbps.
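The per-row/column figure follows directly from the quoted width and rate: 256 bits at 2.0 Gb/s per bit is 512 Gb/s, and roughly 40 such rows and columns would account for the ~20 Tb/s device-level number. Achronix did not state the exact row/column count, so that last step is a back-calculation; a quick sketch:

```python
# NoC bandwidth arithmetic from the quoted Speedster 7t figures.
# The row/column count is back-calculated, not an Achronix-published number.

bits_per_channel = 256
rate_gbps_per_bit = 2.0

channel_gbps = bits_per_channel * rate_gbps_per_bit     # 512 Gb/s per row/column
device_target_tbps = 20
implied_channels = device_target_tbps * 1000 / channel_gbps

print(f"Per row/column bandwidth: {channel_gbps:.0f} Gb/s")
print(f"Rows + columns implied by ~20 Tb/s: {implied_channels:.0f}")  # ~39
```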

The NoC supports specific connection modes for transactions (AXI), Ethernet packets, unpacketed data streams and NAP to NAP for FPGA internal connections. One benefit of this is that the NoC can be used to preload data into memory from PCIe without involving the processing core. Another advantage is that the network structure removes pressure during placement to position connected logic units near each other, which was a major source of congestion and floor planning headaches.

The NoC also allows the Achronix Speedster 7t to support 400G operation. Instead of having to run a 1000 bit bus at 724 MHz, the Speedster 7t can support 4 parallel 256 bit buses running at 506MHz to easily handle the throughput. This is especially useful when deep header inspection is required.
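It is worth checking that the four narrower buses still clear the 400G line rate: 4 x 256 bits x 506 MHz is roughly 518 Gb/s, comfortably above 400 Gb/s even after packetization overhead. A minimal sketch of that comparison:

```python
# Throughput check for the two 400G datapath options described above.

single_bus_gbps = 1000 * 724e6 / 1e9        # one 1000-bit bus at 724 MHz
quad_bus_gbps = 4 * 256 * 506e6 / 1e9       # four 256-bit buses at 506 MHz

print(f"Single wide bus:  {single_bus_gbps:.0f} Gb/s")   # ~724 Gb/s
print(f"4 x 256-bit NoC:  {quad_bus_gbps:.0f} Gb/s")     # ~518 Gb/s
print("Both exceed the 400 Gb/s line rate")
```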

For peripheral interfaces, the approach that Achronix uses is to offer a highly scalable SerDes that can run from 1 to 112Gbps to support PCIe and Ethernet. They can include up to 72 of these per device. For Ethernet, they can run 4x 100Gbps or 8x 50Gbps. Lower rate Ethernet connections are also supported for backward compatibility. They support PCIe Gen5, with up to 512 Gbps per port, with two ports per device.

The real advantage of the architecture becomes apparent when we look at the compute side. Rather than having separate DSPs, LUTs and block memories, Achronix has combined these into Machine Learning Processors (MLPs). This immediately frees up bandwidth on the FPGA routing. These three elements are used heavily together in machine learning applications, so combining them is a big advantage for their architecture.

AI and ML algorithms are all over the map when it comes to mathematical precision. Sometimes high-precision floating point is used; in other cases there has been a move to low-precision integer formats. Google even has its own bfloat format. To handle this wide variety, Achronix has developed fracturable floating-point and integer MACs. The support for multiple number formats provides high utilization of MAC resources. The MLPs also include 72 Kbit RAM blocks, plus memory and operand cascade capabilities.
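To see why fracturable MACs keep utilization high across formats, consider this illustrative sketch; the array width and the list of formats are my own assumptions for illustration, not published Speedster 7t specifications:

```python
# Illustrative only: a fixed-width fracturable multiplier array trades operand
# precision for parallelism, so narrower formats yield more MACs per cycle.
multiplier_bits = 1024   # hypothetical array width per MLP

for fmt, operand_bits in [("int4", 4), ("int8", 8), ("int16", 16), ("bfloat16", 16)]:
    macs_per_cycle = multiplier_bits // operand_bits
    print(f"{fmt:>8}: ~{macs_per_cycle} MACs per cycle")
```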

For AI and ML applications, local memory is important, but so is system RAM. Achronix decided to use GDDR6 in the Speedster 7t family. It offers lower cost, easier and more flexible system design, and extremely high bandwidth. Of course, DDR4 can be used for less demanding storage needs as well. The use of GDDR6 allows each design to tune its memory configuration, rather than being dependent on memory that is packaged together with the programmable device. Speedster 7t supports up to eight GDDR6 devices for a combined throughput of 4 Tbps.
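For context, the 4 Tbps figure is consistent with eight GDDR6 interfaces at typical per-device rates; the per-pin speed and bus width below are assumptions based on common GDDR6 parts, not numbers from the announcement:

```python
# GDDR6 aggregate bandwidth estimate
devices = 8                    # from the Speedster 7t description above
pin_gbps, pins = 16, 32        # assumed: 16 Gbps/pin, 32-bit interface per device
per_device_gbps = pin_gbps * pins          # 512 Gbps per device
print(devices * per_device_gbps / 1000)    # ~4.1 Tbps aggregate
```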

There is a lot to digest in this announcement; it is worth looking over the whole thing. Looking back, this evolution will seem as obvious as how our old wired tabletop phones evolved into highly connected and integrated communications devices. The take-away is that this level of innovation will lead to unforeseen advances in end-product capabilities. According to the Achronix Speedster 7t announcement, their design tools are ready now and a development board will be available in Q4.


Learning on the Edge Investment Thesis

Learning on the Edge Investment Thesis
by Jim Hogan on 05-20-2019 at 5:00 am

It is said that it will cost as much as $600M to develop a 5nm chip. At that price, only a few companies can afford to play, and with that much cash at stake, innovation is severely limited.

At the same time, there is a stampede into the artificial intelligence (AI) market, where around 60 startups have appeared, many of which have already raised $60M or more. $12B was raised for AI startups in 2017 and, according to International Data Corporation (IDC), that figure is expected to grow to $57B by 2021. Most of these startups are going after the data center, which is necessary to get the required ROI when you have a big raise. The chance of success is slim, and the risks are high. There is an alternative for investors and startups.

In this investment thesis, I talk about large disruptive changes happening in the semiconductor industry and the opportunities this creates for innovative architectures and business models.

I use a specific startup, Xceler, which is taking an alternative route to the development of an AI processor. A second organization, Silicon Catalyst, is enabling them to bring silicon to market at much lower cost and risk. In the interest of full disclosure, I am a director of and investor in both Xceler and Silicon Catalyst.

I love getting feedback and I share that feedback with people, so please let me know what you think. Thanks – Jim


Fig 1. Costs associated with SoC development at each manufacturing node. Source: IBS

Success with semiconductor investment in general, and AI specifically, is a multi-step process. At each stage, the goal is to reduce risk and maximize the potential for success at the lowest possible cost in dollars and time.

The low-risk structured approach comes down to executing the following steps:

  • The requirements in the chosen markets are distilled into the minimum functionality required, and target architectures identified.
  • The solutions are prototyped using FPGAs and proven in the market, creating initial revenue.

With these two steps, you have proof of technology proficiency and early market validation.

  • The solutions are then retargeted to silicon, adding further architectural innovations. An important element of this step is the use of silicon incubators that significantly reduce cost and risk. For an AI semiconductor startup, aside from people costs, the significant costs are EDA tools and silicon, typically in the range of $3M to $5M. If the company can avoid or reduce that expense, it will see a much higher enterprise valuation and retain more ownership for the founders and early investors.

Identify the Market Opportunity


Source: AI Insight, August 30th, 2018

The AI/ML market in the data center is large. For a lot of applications, data collected at various nodes will move back to the data center. These are the public clouds, such as those run by the large data center companies. The problem for a semiconductor company, or sub-system vendor, is the business model associated with AI/ML in the data center. Systems and semiconductor companies build hardware and tools that are powerful and can run a multitude of applications, but the question is: what type of AI/ML, and what specific applications? It is a solution looking for a market. You need something that people actually want to use.

The Cloud provider integrates your accelerator into its infrastructure and sells repeated services/applications on top of it until the product reaches end of life. That is not a sustainable business for the technology provider: you can only sell so many units, and you can’t keep selling the same volume every year. Consider how NVIDIA’s GPGPU sales are tapering off. Sales are typically tied to the silicon cycle, where every few years more silicon at a lower price, lower power consumption and improved performance becomes available. The services to get customers on board are offered for free (the best price) or cheaply, as the service provider is relying on volume sales across a lot of customers/users. This drives the commoditization of the underlying infrastructure, since the service providers want the infrastructure at a price that makes business sense for them. In addition, with the slowing of Moore’s Law, this forced obsolescence of technology is no longer a driving factor.

The ideal target is a problem looking for a solution, and for Xceler, it was found in the industrial space. Everybody wants to deploy IIoT (Industrial IoT with AI/ML), but each company is looking for the right solution. Consumer IoT was considered, but most of those solutions are nice or cool to have, as opposed to must-have. In the consumer space, there are multiple policy hurdles such as legal, privacy, security and liability issues. The adoption of these solutions requires a socio-economic behavioral change in the consumer mindset, something that takes a generational adoption cycle and a lot of marketing/PR dollars. Conversely, the adoption of industrial solutions is faster because it is a must-have capability that directly affects the bottom line.

Even though the IIoT market is fragmented, fragmentation can be your friend. Each of the larger companies in this space has top-line revenue of $50 billion or more per year. Even with limited market penetration, it is possible to build a sizeable revenue base. There are many potential end-customers, collectively presenting an opportunity, unlike the data center space where there are only a few end customers.

Predictive training and learning on the edge – the dream is now a reality with AI.

Web-based solutions appear to be free. However, someone is paying for the services. In the case of the web, it is advertisers or people trying to sell products. In the industrial space, suppliers are selling both the hardware and the solution. They are directly providing value to the purchaser and can thus make money from that directly, plus there is the potential for future revenue as more capabilities are added.

Every edge-based application is different. This fragmentation is one reason why people are scared of the edge market. You must have the ability to personalize and adapt to the context in which you are deployed, and that requires learning on the edge; inferencing alone is not enough. If the link to the Cloud breaks, what do you do? Solutions that only perform inferencing could endanger lives.

Some people are trying to build edge-based platforms. These often contain custom edge-based processors for a specific vertical application. They have the characteristics of very high volume but relatively low average selling price in comparison to Cloud devices.

I asked the founder and CEO of Xceler, Gautam Kavipurapu, about his company’s experience in the development of an edge-based processor and application. “We are running a pilot program for a company that makes large gas turbines. They are instrumented in many ways. Fuel valves have flow controls, sensors record vibration and sound, rotation speed at different stages of the turbine is measured, combustion chamber temperature – about 1000 sensors in total. We process the data and do predictive maintenance analysis.”

When the processor is connected to the machine, even without connectivity to the Cloud (which may not exist for security reasons), it observes the profile of the normally functioning system. It builds a basic model of the machine and, over time, that model is refined. When deviations occur, the data from the sensors is cross-correlated in real time to figure out what is causing the anomaly. A connection to the Cloud enables the heavy lifting to build a refined model and make micro-refinements to it. However, there are huge advantages to doing the initial processing at the edge in terms of latency and power.
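A minimal sketch of that idea, assuming a simple per-sensor running-statistics baseline; the thresholds and update rule here are illustrative, not Xceler's actual algorithm:

```python
import math

class SensorBaseline:
    """Learns a per-sensor profile on the edge using Welford's running mean/variance."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def zscore(self, x):
        if self.n < 2:
            return 0.0
        std = math.sqrt(self.m2 / (self.n - 1))
        return 0.0 if std == 0.0 else abs(x - self.mean) / std

# One baseline per sensor; flag readings that deviate from the learned profile.
baselines = {"vibration": SensorBaseline(), "combustion_temp": SensorBaseline()}

def ingest(sensor, value, threshold=4.0):
    b = baselines[sensor]
    anomalous = b.zscore(value) > threshold   # cross-check against the learned model
    b.update(value)                           # keep refining the model over time
    return anomalous
```

Cross-correlating the flagged channels to localize the cause, and periodically syncing a refined model with the Cloud, would sit on top of a loop like this.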

Define the Right Architecture
Systems need to be architected for the problems that they are solving. “We look at problems as hard real-time, near real-time or user time,” explains Kavipurapu. “Hard real-time requires response times of 5 microseconds or less; near real-time requires response times within a few milliseconds; and user time can take hundreds of milliseconds or minutes. Consumer applications fall within the last category and usually do not have Service Level Agreements (SLAs) or performance commitments, so they can work in concert with the Cloud. For problems that require hard or near real-time responses, relying on the Cloud is not viable, as the round-trip time itself will take several milliseconds if it manages to complete at all.
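As a rough illustration of how those classes drive where the work runs, here is a sketch using the thresholds quoted above; the edge/cloud mapping is my own illustrative assumption:

```python
def placement_for(latency_budget_s: float) -> str:
    """Map a response-time requirement to a deployment target (illustrative)."""
    if latency_budget_s <= 5e-6:        # hard real-time: 5 microseconds or less
        return "edge processor, no Cloud in the loop"
    elif latency_budget_s <= 10e-3:     # near real-time: a few milliseconds
        return "edge processing, Cloud only for model refinement"
    else:                               # user time: hundreds of ms to minutes
        return "Cloud round-trip is acceptable"

print(placement_for(2e-6))   # e.g. turbine control loop -> edge
print(placement_for(0.5))    # e.g. operator dashboard   -> Cloud
```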

“We have seen edge processors evolve over time. Initially, machine learning on the edge meant collecting data and moving it to the Cloud; both the learning and inferencing were done in the Cloud. The next stage of advancement enabled some inferencing to be done on the edge, but the data and the model remained in the Cloud. Today, we need to move some of the learning to the edge, especially when real-time constraints exist or there are concerns about security.”

Prototype and Create Revenue Stream
For this class of problem, it is possible to prototype in an FPGA. For applications that do not need blistering performance, you can even go to market in the seed round with this solution. This reduces the need for additional investment dollars and enables the concept to be validated.

“With Xceler, we started with an FPGA solution. FPGAs are acceptable in the marketplace we are targeting since they have good price/performance and power numbers. They are comparable to x86-based systems in price and provide higher performance. The only downsides are that the margins are compressed, and certain architectural possibilities are not available in an FPGA solution.”

Migrate to Silicon
To capture more value, you do need a solution that is cheaper, faster and lower power. “That involves building a chip, or Edge-Based Processor (EBU),” adds Kavipurapu. “For the control processor, we are using a RISC-V implementation from SiFive. SiFive does the back-end design implementation, reducing the risk for us. SiFive is also a Silicon Catalyst partner. We expect our FPGA solution to translate to between 20 million and about 36 million ASIC gates, so the proposed chip does not have to be that large.”

“The only risk that remains is silicon risk. By doing the chip in 28nm, a well-understood fabrication process, the manufacturing silicon risk is minimized. All that remains is closing the design and timing, which is a much-reduced problem. We have taken out most of the variability in the design element. In addition, we have restrained our design approach, using only simple standard-cell design with no custom blocks and no esoteric attempts at power reduction.”

Refine the Architecture
The FPGA solution cannot run faster than about 100 MHz. “With an FPGA, we are also constrained by the memory architecture,” explains Kavipurapu. “For the custom chip, we are deploying a superior memory sub-system. New processing techniques require memory for data movement and storage. For us, each computation takes about 15 instructions on the FPGA, which will be reduced to 4 or 5 on the ASIC. In terms of clock frequency, we will be running at 500 MHz to 1 GHz in the ASIC at much-reduced power.”
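Taking those numbers at face value, the throughput gain from the ASIC migration can be estimated directly; a back-of-the-envelope sketch:

```python
# Computations per second, using the figures quoted above
fpga_rate = 100e6 / 15          # ~100 MHz, ~15 instructions per computation
asic_low  = 500e6 / 5           # 500 MHz, ~5 instructions per computation
asic_high = 1e9   / 4           # 1 GHz,   ~4 instructions per computation
print(asic_low / fpga_rate)     # ~15x the FPGA throughput
print(asic_high / fpga_rate)    # ~37x the FPGA throughput
```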

Use of a Silicon Incubator
The goal of Silicon Catalyst is to reduce the barriers to innovation and the friction involved in getting an IC startup to the point where it can secure an institutional Series A round. This has provided Xceler with a significant advantage over potential competition. While the competition struggles with multiple tape-outs to achieve working silicon, burning through their dollars before first revenue, Xceler had first revenue even before tape-out. This happened because of the lower-risk strategy and help from Silicon Catalyst and others.

“Through in-kind contributions from ecosystem partners, Silicon Catalyst gives startups the tools and silicon they need,” says Kavipurapu. “It enables them to get an A round of funding that is nice in terms of valuation and raise. They bring the ability to prototype a chip at a very low cost. We get free shuttle services from TSMC. We do not have to pony up for chip design tools because there are in-kind partnerships for tools from Synopsys, and we have licenses to each tool for two years. Silicon Catalyst also has a lot of chip industry veterans. I am not a chip guy and neither is my team. When it comes to silicon, Silicon Catalyst adds a lot of value.”

The result is that Xceler will have working samples for a little over $10M. They have customers and could be breakeven before they get to silicon.

Conclusions
We are in an era where innovation is more important than raw speed, the number of transistors or the amount of money invested. There are opportunities everywhere and getting to market with silicon no longer requires building silicon for extremely high volumes and margins. We are in the age of custom solutions designed to solve real problems and there are countless opportunities at the edge.

I’m convinced that Xceler’s opportunity is much less risky with the help of Silicon Catalyst and the use of an open-source architecture (RISC-V). I believe there will be many companies that will follow a similar path – Jim.

A Final Note
Gautam Kavipurapu was critical in helping me create this post, so I’d like to tell you a bit more about him. Gautam has 20 years of experience at both large and small companies in operations, technology and management roles, including setting up and running geographically distributed teams of over 150 employees (India, US and Europe). Gautam has 14 issued and several pending patents covering systems, networking and computer architecture. He also has several IEEE conference papers to his credit. In 2001, Gautam’s team at IRIS Holdings demonstrated a “Virtual Router,” a software router (modeled as a dataflow machine, similar to Google Tensor) running on a PC with NIC cards, as part of the IRIS (Integrated Routing and Intelligent Switching) system development (today’s NFV). His inventions and innovations have provided major technology to various companies in the storage and computing industries worldwide for millions of dollars and are cited in more than 350 issued patents. Gautam has an Executive MBA from INSEAD and a BSEE from the Georgia Institute of Technology.