
Bringing Hierarchy to DFT
by Tom Simon on 01-30-2020 at 6:00 am

Tessent Hierarchical Flow

Hierarchy is nearly universally used in the SoC design process to help manage complexity. Dealing with flat logical or physical designs proved unworkable decades ago. However, there were a few places in the flow where flat tools continued to be used. Mentor led the pack around 1999 in helping the industry move from flat DRC to its Calibre hierarchical DRC flow. Similarly, Mentor is now on the leading edge of the move to hierarchical Design for Test (DFT), a part of the flow that has for many years resisted switching from a predominantly flat approach.

Mentor has a white paper that does an excellent job of highlighting the numerous advantages of taking a hierarchical approach for DFT. The white paper, titled “Hierarchical DFT: Proven Divide-and-conquer Solution Accelerates DFT Implementation and Reduces Test Costs”, also explains how the flow works and how many of the benefits are achieved. The author, Jay Jahangiri, specifically dives into the key features of Mentor’s Tessent Hierarchical DFT solution that are used in a hierarchical flow.

Looking at the advantages of hierarchical DFT, you could probably guess the top motivations for using it: shorter DFT implementation and ATPG runtimes, and a much smaller memory footprint when loading designs for implementation or analysis. Flat runs often take hundreds of gigabytes of memory, severely limiting the number of available machines that can run these jobs.
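To make the scaling argument concrete, here is a back-of-the-envelope sketch in Python. Every number and scaling exponent in it is a hypothetical assumption for illustration, not a figure from the white paper; the point is simply that partitioning a large design into independently processed blocks shrinks both the per-job memory footprint and the runtime.

```python
# Illustrative model only: assume ATPG memory scales roughly linearly
# with gate count and runtime superlinearly (both exponents assumed).
GATES_FLAT = 200e6          # hypothetical total design size in gates
MEM_PER_MGATE_GB = 1.0      # assumed GB of RAM per million gates
RUNTIME_EXPONENT = 2.0      # assumed superlinear runtime scaling

def atpg_cost(gates):
    """Return (memory_gb, relative_runtime) for one ATPG job."""
    mgates = gates / 1e6
    return mgates * MEM_PER_MGATE_GB, mgates ** RUNTIME_EXPONENT

flat_mem, flat_time = atpg_cost(GATES_FLAT)

# Hierarchical: split into 10 equal blocks, with ATPG run per block
# and the block-level jobs farmed out to machines in parallel.
blk_mem, blk_time = atpg_cost(GATES_FLAT / 10)

print(f"Flat:         {flat_mem:.0f} GB/job, runtime {flat_time:.0f} units")
print(f"Hierarchical: {blk_mem:.0f} GB/job, runtime {blk_time:.0f} units")
# Flat:         200 GB/job, runtime 40000 units
# Hierarchical: 20 GB/job, runtime 400 units
```

Under these assumed exponents, each block-level job needs a tenth of the memory, so it fits on far more of the compute farm, and the runtime per job drops by 100X.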

Some of the other reasons for switching to a hierarchical approach are compelling too. For instance, running flat test patterns can consume a lot of power and create hot spots; hierarchical approaches reduce power and make hot spots easier to manage and avoid. It is also worth reflecting on how DFT is often in the critical path for tape-out. The shorter runtimes and faster turnaround for changes that hierarchical DFT provides can make a crucial difference in the overall DFT schedule, especially at the end of the process. The Mentor white paper is quite thorough in enumerating the reasons for switching to a hierarchical approach.

Of course, if working hierarchically were easy, this approach would have been used from the outset. It turns out that a number of key elements are needed to make it work effectively, and the white paper provides an easy-to-follow summary of these elements.

Because clocking plays such an important role in DFT, Mentor makes it easy to insert on-chip clock controllers (OCCs) into blocks so that each block can run its test patterns independently of other blocks’ test clock needs. Removing the interdependencies caused by top level clocking leads to a dramatic improvement in efficiency.

Tessent’s Scan and ScanPro products not only help with adding wrapper cells to create wrapper chains, they also allow analysis and reuse of existing registers as shared wrapper cells during test. The paper also discusses several aspects of handling internal mode and external mode, to fully cover not just the blocks but the glue logic between them.

The real enabler for a hierarchical flow is how smoothly the pieces fit together. There are many aspects involved in not only making the blocks themselves testable, but also in integrating each block’s test elements into the top-level design. Tessent uses enabling technology such as IEEE 1687, also known as IJTAG. Tessent also lets designers perform much of the test design at RTL, saving time and reducing complexity.

With performance gains of up to 5 or 10X in some steps of the process, the hierarchical approach is proving to be an effective way to deal with test complexity. It also brings other advantages, such as improved yield analysis and overall chip quality. The white paper goes into much more detail on exactly where the gains are and how the process improves productivity and quality, and it can be downloaded from the Mentor website.



WEBINAR: Prototyping With Intel’s New 80M Gate FPGA
by Daniel Nenni on 01-29-2020 at 10:00 am

The next generation FPGAs have been announced, and they are BIG!  Intel is shipping its Stratix 10 GX 10M FPGA, and Xilinx has announced its VU19P FPGA for general availability in the Fall of next year.  The former is expected to support about 80M ASIC gates, and the latter about 50M ASIC gates.  And, to bring this mind-boggling gate capacity into your prototyping lab immediately, S2C  has teamed up with Intel and has rolled out its new 10M Logic System prototyping platform and will be delivering first systems before the end of this year.

On-Demand WEBINAR: Prototyping with Intel’s New 80M Gate FPGA and S2C!

You might ask: “How will this affect FPGA prototyping?”.  Well, there are at least three ways you should expect to benefit from these larger FPGAs:

  1. More usable ASIC gates from a single FPGA.
  2. Higher prototype performance.
  3. Faster time-to-prototype.

More Usable ASIC Gates
For those of you gagging for more usable gates from a single FPGA, the 10M Logic System is an FPGA prototyping solution you can deploy today.  One example of an application that will need ever more ASIC gates over the coming years is video processing.  SoCs incorporating video blocks already exceed the gate capacity of the previous generation of FPGAs, and designers are scrambling to keep up with today’s video features, with no end in sight to the addition of new features.  S2C’s 10M Logic System brings more than twice the usable gate capacity per FPGA to video applications today, and supports continued gate capacity growth with Dual and Quad FPGA versions available early next year.

Higher Prototype Performance
Another inherent benefit from these new, larger FPGAs is performance.  The S10 GX 10M die core performance is rated at 900MHz, with LVDS I/O and single-ended I/O rated at 1.4GHz and 250MHz respectively. Actual prototyping performance will vary by application, but, with all other things being equal for a comparison, prototype performance will be higher with these new 14nm FPGAs.

Organizing the design hierarchy for prototyping to contain the high-performance block/signals within one FPGA will certainly lead to higher prototype performance.  With S2C’s new 10M Logic Systems, design blocks up to about 80M gates can be contained within a single FPGA, and, for larger designs, Dual and Quad FPGA Logic Systems will include high-speed interconnects between the FPGAs.

The S10 10M Logic System supports six on-board programmable clocks (up to 350MHz), five external clocks, and an oscillator socket.  Two dedicated programmable clocks are also provided for the on-board DDR4 memories, as well as two global resets that can be sourced from an on-board push-button, an external reset through a connector, or PlayerPro run-time software control.

Faster Time-to-Prototype
One of the keys to successful prototyping is minimizing “time-to-prototype”, and fast time-to-prototype must consider:

  1. Getting your design running at your target speed on the FPGA prototype platform.
  2. A debug environment that enables high design visibility and deep test response data capture.
  3. A method for applying large quantities of high-speed test stimulus from a host computer or external source.

Getting your design running at your target speed in the FPGA prototype platform should include thoughtful preparation of the design netlist for FPGA implementation, while preserving correlation with the simulation netlist as much as possible.  It should not come as a surprise to anyone that having a prototype team member with previous prototyping experience will go a long way to minimize time-to-prototype, especially when it comes to the FPGA implementation of design clocks and gated clocks, embedded memory, and SoC IP.

The simulation netlist is the verification “gold standard” that will be used for silicon tapeout, and FPGA prototyping should be viewed as a way to improve verification coverage beyond the capabilities of software simulation.  Maintaining correlation between the two netlists throughout the verification process is therefore essential to overall verification productivity.  If something goes wrong while verifying the design in the FPGA prototype, quickly diagnosing the cause of the problem in terms of the simulation netlist is what makes FPGA prototyping such a powerful verification tool.  And the importance of establishing and enforcing a strict discipline of bug tracking and netlist revisioning, to keep the simulation team synchronized with the prototyping team, cannot be overemphasized.

One approach to FPGA prototype debug with S2C’s 10M Logic System is to use S2C’s Multi-Debug-Module, or “MDM”.  Set-up and runtime controls for MDM are integrated into S2C’s PlayerPro software, designed to work with the 10M Logic System hardware, and allow test data from multiple FPGAs to be viewed within a single viewing window.  MDM provides for up to 32K probes in eight groups of 4K probes without recompile.  Trace data can be captured at speeds up to 80MHz, and up to 8GB of waveform data can be stored in MDM’s external hardware.

S2C Multi-Debug-Module
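As a quick sanity check on those figures, the Python calculation below estimates the capture window implied by tracing one full group of 4K probes every cycle at the maximum rate. The assumptions (one active group, a sample per cycle, no compression) are mine for illustration, not S2C’s documented behavior.

```python
# Rough capture-window estimate from the MDM figures quoted above.
probes_per_group = 4096      # one of the eight probe groups
sample_rate_hz = 80e6        # maximum trace capture speed
storage_bytes = 8e9          # 8 GB of external waveform storage

bytes_per_sample = probes_per_group / 8           # 512 bytes per cycle
bandwidth = bytes_per_sample * sample_rate_hz     # ~41 GB/s of trace data
window_s = storage_bytes / bandwidth

print(f"Trace bandwidth: {bandwidth / 1e9:.1f} GB/s")
print(f"Capture window:  {window_s * 1e3:.0f} ms")   # roughly 195 ms
```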


To assist in reducing time-to-prototype, S2C offers ProtoBridge for use with the 10M Logic System.  ProtoBridge uses a PCIe/AXI high-throughput link between the prototype hardware and a host computer to transfer large amounts of transaction-level test data to the design.  The test data width can be from 32 bits to 1,024 bits at data rates up to 1GB per second.

S2C ProtoBridge
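Those numbers also lend themselves to a quick back-of-the-envelope check. The sketch below, which assumes a hypothetical 4GB stimulus set, shows how long a transfer takes at the quoted 1GB/s peak and how many AXI beats per second each data width implies.

```python
# Illustrative math on the ProtoBridge figures quoted above.
PEAK_BPS = 1e9                # quoted peak rate: 1 GB per second
stimulus_bytes = 4e9          # hypothetical 4 GB test stimulus set

print(f"4 GB stimulus at peak rate: {stimulus_bytes / PEAK_BPS:.0f} s")

for width_bits in (32, 256, 1024):   # supported widths span 32..1024 bits
    beats_per_s = PEAK_BPS / (width_bits / 8)
    print(f"{width_bits:4d}-bit beats: {beats_per_s / 1e6:6.1f} M transfers/s")
# Wider AXI data paths move the same bytes with far fewer transactions.
```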

On-Demand WEBINAR: Prototyping with Intel’s New 80M Gate FPGA and S2C!

Also Read:

S2C Delivers FPGA Prototyping Solutions with the Industry’s Highest Capacity FPGA from Intel!

AI Chip Prototyping Plan

WEBINAR: How ASIC/SoC Rapid Prototyping Solutions Can Help You!



How Good is Your Testbench?
by Bernard Murphy on 01-29-2020 at 6:00 am

Limitations of coverage

I’ve always been intrigued by Synopsys’ Certitude technology. It’s a novel approach to the eternal problem of how to get better coverage in verification. For a design of any reasonable complexity, the state-space you would have to cover to exhaustively consider all possible behaviors is vastly larger than you could ever possibly exercise. We use code coverage, functional coverage, assertion coverage together with constrained random generation to sample some degree of coverage, but it’s no more than a sample, leaving opportunity for real bugs that you simply never exercise to escape detection.

There’s lots of research on methods to increase confidence in coverage of the design. Certitude takes a complementary approach, scoring the effectiveness of the testbench at finding bugs. It injects errors (one at a time) into the design, then determines whether any test fails under that modification. If so, all is good; the testbench gets a high score for that bug. Example errors change the code to hold a variable constant, force execution down only one branch of a condition, or change an operator.

But if no test fails on the modified design, the testbench gets a low score for that bug. This could be an activation problem: no generated stimulus ever reached the bug. It could be a propagation problem: the bug was exercised but its consequences never reached a checker. Or it could be a detection problem: the consequences reached a checker, but the checker was inactive or incomplete and didn’t recognize the behavior as a bug.
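For readers unfamiliar with this style of analysis, the loop described above is essentially mutation analysis. The Python toy below illustrates the idea only; the design, faults, and tests are all invented for clarity, and this is in no way Certitude’s actual implementation.

```python
def design(a, b, sel):
    """Stand-in for the design under test: a trivial 2-to-1 mux."""
    return a if sel else b

# Each "fault" is a modified copy of the design (one change at a time).
faults = {
    "sel_held_constant": lambda a, b, sel: design(a, b, True),  # variable held constant
    "branches_swapped":  lambda a, b, sel: (b if sel else a),   # wrong branch taken
}

# Stand-in testbench: stimulus plus expected values (the checker).
# Note the deliberate coverage hole: no test ever drives sel=False.
tests = [(1, 0, True, 1)]   # (a, b, sel, expected)

for name, mutant in faults.items():
    detected = any(mutant(a, b, sel) != exp for a, b, sel, exp in tests)
    print(f"{name:17s} -> {'DETECTED' if detected else 'MISSED'}")
# "sel_held_constant" is MISSED -- an activation problem, since no
# stimulus ever exercises sel=False. That is exactly the kind of
# testbench weakness this fault-injection loop exposes.
```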

Certitude works with both RTL and software models, and for RTL works with both simulation and formal verification tools. Here I’ll talk about RTL analysis since that’s what was mostly covered in a recent webinar, presented by Ankur Jain (Product Marketing Mgr) and Ankit Garg (Sr AE).

What sort of problems does Certitude typically find in the field? Some of these will sound familiar: detection problems through missing or incomplete checkers/assertions, and missing or incomplete test cases, for example a disabling control signal in simulation or an over-constraint in model checking. These are problems that could be caught in simulation or formal coverage analysis (e.g. formal core checks). That they were nevertheless caught by Certitude suggests those checks are not always fully exploited in practice. At a minimum, Certitude provides an additional safety net.

What I found really compelling was the class they say they most commonly encounter among their customers. They call these process problems. Imagine you build the first in a series of designs where later designs will be derived from the first. You’re planning to add support for a number of features but those won’t be implemented in the first chip. But you’re thinking ahead; you want to get ready for the derivatives, so you add placeholder checkers for these planned features. These checkers must be partly or wholly disabled for the first design.

This first design is ultimately successfully verified and goes into production.

Now you start work on the first derivative. Verification staff have shuffled around, as they do. The next verification engineer takes the previous testbench and works on upgrading it to handle whatever is different in this design. They run the testbench, 900 tests fail and 100 tests pass. They set to work on diagnosing the failures and feeding back to the design team for fixes. What they don’t do is to look at the passing test cases. Why bother with those? They’re passing!

But some are passing because they inherited checks from the first design, which were partially or completely disabled. Those conditions may not be valid in this derivative. You could go back and recheck all your coverage metrics on the passing testcases, potentially a lot of work. Or you could run Certitude, which would find exactly these kinds of problems.

In the Q&A, the speakers were asked what real design bugs Certitude has found. The question is a little confused because the objective of Certitude is to check the robustness of the testbench, not to find bugs. But I get the intent behind the question – did that work ultimately lead to finding real bugs? Ankit said that, as an example, for one of their big customers it did exactly that. They found two testbench weaknesses for a derivative, and when those were fixed, verification found two real design bugs.

You can watch the webinar by registering HERE.



Advanced CMOS Technology 2020 (The 10/7/5 NM Nodes)
by Daniel Nenni on 01-28-2020 at 10:00 am

Our friends at Threshold Systems have a new class that may be of interest to you. It’s an updated version of the Advanced CMOS Technology class held last May. As part of the previous class we did a five part series on The Evolution of the Extension Implant which you can see on the Threshold Systems SemiWiki landing page HERE. And here is the updated course description:

Date: Feb. 5, 6, 7, 2020
Location: SEMI Headquarters, 673 South Milpitas Blvd.,
Milpitas, California, 95035, USA
Class Schedule:
Wednesday: 8:30 AM – 5:00 PM
Thursday: 9:00 AM – 5:00 PM
Friday: 9:00 AM – 5:00 PM
Tuition: $1,895

Course Description:
The relentless drive in the semiconductor industry for smaller, faster and cheaper integrated circuits has pushed the industry to the 10 nm node and ushered in a new era of high-performance three-dimensional transistor structures. The speed, computational power, and enhanced functionality of ICs based on this advanced technology promise to transform both our work and leisure environments. However, the implementation of this technology has opened a Pandora’s box of manufacturing issues and set the stage for a range of manufacturing challenges that require revolutionary new process methodologies as well as innovative new equipment for the 10/7/5 nm nodes and the upcoming 3 nm node. This seminar addresses all of these manufacturing issues with technical depth and conceptual clarity, presents leading-edge process solutions to the novel problems posed by 10 nm and 7 nm FinFET technology, and previews the upcoming manufacturing issues of the 5 nm nanowire.

The central theme of this seminar is an in-depth presentation of the key 10/7/5 nm node technical issues for Logic and Memory, including detailed process flows for these technologies.

A key part of the course is a visual survey of leading-edge devices in Logic and Memory presented by the Fellow Emeritus of the world’s leading reverse engineering firm, TechInsights. His lecture is a visual feast of TEMs and SEMs of all of the latest and greatest devices being manufactured and is one of the highlights of the course.

An update on the status of EUV lithography will also be presented by a world-class lithographer who manages an EUV tool. His explanations of how this technology works, and of the latest EUV breakthroughs, are as enlightening as they are insightful.

Finally, a detailed technology roadmap for the future of Logic, SOI, Flash Memory and DRAM process integration, as well as 3D packaging and 3D Monolithic fabrication will also be discussed.

Each section of the course will present the relevant technical issues in a clear and comprehensible fashion as well as discuss the proposed range of solutions and equipment requirements necessary to resolve each issue. In addition, the lecture notes are profusely illustrated with extensive 3D illustrations rendered in full-color.

What’s Included:

  • Three days of instruction by industry experts with comprehensive, in-depth knowledge of the subject material
  • A high quality set of full-color lecture notes (a $495 value), including SEM & TEM micrographs of real-world IC structures that illustrate key points
  • Continental breakfast, hot buffet lunch, and coffee, beverages, & snacks served at both morning and afternoon breaks

Who is the seminar intended for:

  • Equipment Suppliers & Metrology Engineers
  • Fabless Design Engineers and Managers
  • Foundry Interface Engineers and Managers
  • Device and Process Engineers
  • Design Engineers
  • Product Engineers
  • Process Development & Process Integration Engineers
  • Process Equipment Marketing Managers
  • Materials Supplier Marketing Managers  & Applications Engineers

Course Topics:

1. Process integration. The 10/7nm technology nodes represent a landmark in semiconductor manufacturing, employing transistors that are faster and smaller than anything previously fabricated. However, such performance comes at a significant increase in processing complexity and requires the solution of some very fundamental scaling and fabrication issues, as well as the introduction of radical new approaches to semiconductor manufacturing. This section of the course highlights the key changes introduced at the 10/7nm nodes and describes the technical issues that had to be resolved in order to make these nodes a reality.

  • The enduring myth of a technology node
  • Market forces: the shift to mobile
  • The Idsat equation
  • The motivations for High-k/Metal gates, strained Silicon
  • Device scaling metrics
  • Ion/Ioff curves, scaling methodology

2. Detailed 10nm Fabrication Sequence. The FinFET represents a radical departure in transistor architecture. It delivers dramatic performance increases but also presents novel fabrication issues. The 10nm FinFET is the 3rd generation of non-planar transistor and involves some radical changes in manufacturing methodology. The FinFET’s unusual structure makes its architecture difficult for even experienced process engineers to understand. This section of the course drills down into the details of the 10nm FinFET structure and its fabrication, highlighting the novel manufacturing issues this new type of transistor presents. A detailed step-by-step 10nm fabrication sequence is presented (front-end and back-end) that employs colorful 3D graphics to clearly and effectively communicate the novel FinFET architecture at each step of the fabrication process. Key manufacturing pitfalls and specialty material requirements are pointed out at each phase of the manufacturing process, as well as the chemistries used.

  • Self-Aligned Quadruple Patterning (SAQP)
  • Fin-first and Fin-last integration strategies
  • Multiple Vt High-k/Metal Gate integration strategies
  • Cobalt Contacts & Cobalt metallization
  • Contact over Active Gate methodology
  • Advanced Metallization strategies
  • Air-gap dielectrics

3. Nanowire Fabrication – the 5nm Node. Waiting in the wings is the Nanowire. This new and radically different 3D transistor features gate-all-around control of short-channel effects and a high level of scalability. A detailed process flow for Horizontal Nanowire fabrication will be presented, beautifully illustrated with colorful 3D graphics and technically accurate.

  • A step-by-step Horizontal Nanowire fabrication process flow
  • Key fabrication details and manufacturing problems
  • Nanowire SCE control and scaling
  • Resolving Nanowire capacitive coupling issues
  • Vertical versus Horizontal Nanowire architecture: advantages and disadvantages

4. DRAM Memory. DRAM memory has evolved through many generations and multiple incarnations. Despite claims that DRAM memory is nearing its scaling limit, new technological developments keep pushing the scaling envelope to extremes. This part of the course examines the evolution of DRAM memory and presents a detailed DRAM process fabrication flow.

  • DRAM memory function and nomenclature
  • DRAM scaling limits
  • A DRAM process flow
  • The capacitor-less DRAM memory cell

5. 3D NAND Flash Memory. The advent of 3D NAND Flash memory is a game changer. 3D NAND Flash not only dramatically increases non-volatile memory capacity, it will also add at least three generations to the life of this memory technology. However, the structure and fabrication of this type of memory is radically different, even alien, to any traditional semiconductor fabrication methodology. This section of the course presents a step-by-step visual description of the unusual manufacturing methodology used to create 3D Flash memory, focusing on key problem areas and equipment opportunities. The fabrication methodology is presented as a series of short videos that clearly demonstrate the fabrication operations at each step of the process flow.

  • Staircase fabrication methodology
  • The role of ALD in 3D Flash fabrication
  • Controlling CDs in tall, vertical structures
  • A detailed sequential video presentation of Samsung 3D NAND Flash
  • Intel-Micron 3D NAND Flash fabrication sequence
  • Toshiba BiCS NAND Flash fabrication sequence

6. Advanced Lithography. Lithography is the “heartbeat” of semiconductor manufacturing and is also the single most expensive operation in any fabrication process. Without further advances in lithography, continued scaling would be difficult, if not impossible. Recently there have been significant breakthroughs in Extreme Ultra Violet (EUV) lithography that promise to radically alter and greatly simplify the way chips are manufactured. This section of the course begins with a concise and technically correct introduction to the subject and then provides in-depth insights into the latest developments in photolithography. Special attention is paid to EUV lithography, its capabilities and characteristics, and the recent developments in this field.

  • Physical Limits of Lithography Tools
  • Immersion Lithography – principles and practice
  • Double, Triple and Quadruple patterning
  • EUV Lithography: status, problems and solutions
  • Resolution Enhancement Technologies
  • Photoresist: chemically amplified resist issues

7. Emerging Memory Technologies. There are at least three novel memory technologies waiting in the wings. Unlike traditional memory technologies that depend on electronic charge to store data, these memory technologies rely on resistance changes. Each type of memory has its own respective advantages and disadvantages and each one has the potential to play an important role in the evolution of electronic memory.

This section of the course will examine each type of memory, discuss how it works, and weigh its relative advantages in comparison with the other new memory types.

  • Phase Change Memory (PCRAM), Cross-point memory; separating the hype from the reality
  • Resistive RAM (ReRAM) – a novel approach that comes in two variations
  • Spin Torque Transfer RAM (STT-RAM) – the brightest prospect?

8. Survey of leading edge devices. This part of the course presents a visual feast of TEMs and SEMs of real-world, leading edge devices for Logic, DRAM and Flash memory. The key architectural characteristics of a wide range of key devices will be presented, and the engineering trade-offs and compromises that resulted in their specific architectures will be discussed. The Fellow Emeritus of the world’s leading chip reverse engineering firm will present this section of the course.

  • How to interpret Scanning and Transmission Electron microscopy images
  • A visual evolution of replacement gate metallization
  • DRAM structural analysis
  • 3D FLASH structural analysis
  • Currently available 14nm/10nm/7nm Logic offerings from various manufacturers

9. 3D Packaging Versus 3D Monolithic Fabrication. Unlike all other forms of advanced packaging that communicate by routing signals off the chip, 3D packaging permits multiple chips to be stacked on top of each other, and to communicate with each other using Thru-Silicon Vias (TSVs), as if they were all one unified microchip. An alternate is the 3D Monolithic approach, in which a second device layer is fabricated on a pre-existing device layer and electrically connected together employing standard nano-dimensional interconnects. Both approaches have advantages and disadvantages and promise to create a revolution in the functionality, performance and the design of electronic systems.

This part of the course identifies the underlying technological forces that have driven the development of Monolithic fabrication and 3D packaging, how they are designed and manufactured, and what the key technical hurdles are to the widespread adoption of these revolutionary technologies.

  • TSV technology: design, processing and production
  • Interposers: the shortcut to 3D packaging
  • The 3D Monolithic fabrication process
  • Annealing 3D Monolithic structures
  • The Internet of Things (IoT)

10. The Way forward: a CMOS technology forecast. Ultimately, all good things must come to an end, and the end of FinFET technology appears to be within sight. No discussion of advanced CMOS technology is complete without a peek into the future, and this final section of the course looks ahead to the 5/3.5/2.5 nm CMOS nodes and forecasts the evolution of CMOS device technology for Logic, DRAM and Flash memory.

  • Is Moore’s law finally coming to an end?
  • New nanoscale effects and their impact on CMOS device architecture and materials
  • The transition to 3D devices
  • Future devices: Quantum well devices, Nanowires, Tunnel FETs, Quantum Wires
  • The next ten years …


The Tech Week that was January 20-24 2020
by Mark Dyson on 01-28-2020 at 6:00 am


Happy Chinese New Year.  Let’s hope the Year of the Rat brings a recovery for the semiconductor industry.  The initial signs are all good with many positive indications in the news this week, but let’s hope the Wuhan coronavirus doesn’t derail the recovery by becoming a global emergency.  To all those based in China stay safe and healthy.

Here is my weekly summary of all the key semiconductor and technology news from around the world this week.

In the 2020 edition of IC Insights’ The McClean Report, they predict that 26 of the 33 IC product categories will show positive growth in 2020, with 5 expected to enjoy double-digit growth.  This is much more positive than 2019, when only 6 categories had positive growth, but still not as good as 2018.  The product categories expecting double-digit growth are NAND, automotive special-purpose ICs, DRAM, display drivers and embedded MPUs.

This week several major companies reported quarterly earnings, with a very optimistic message being given by all.

Texas Instruments’ earnings report pointed to a recovery across the IC industry.  They said “most markets showed signs of stabilising” and forecast a Q1 revenue midpoint of US$3.25 billion. Last quarter they posted better-than-expected revenue of US$3.35 billion, though this was still down 10% on a year ago and down 11% sequentially.  As Texas Instruments has a very broad portfolio across all markets, it is a good indicator of the general market, so it is encouraging that they see the market stabilising after 5 quarters of decline.

STMicro also posted solid results for Q4, reporting revenue of US$2.75 billion, up 7.9% sequentially on strong sales for low-emission cars and next-generation smartphones, though traditional older-generation automotive products were down. The poor sales of established automotive products will also impact next quarter, where they forecast a drop in sales of up to 14%.  STM also announced they will invest $1.5 billion in capital expenditure in 2020.

Intel also gave a very upbeat message at their Q4 earnings call.  Intel reported that strong cloud computing demand drove revenue in Q4 to US$20.2 billion, up 8% on Q3.  For the year they reported revenue of $71.965 billion, up 1.3% on 2018.  For the coming year they forecast revenue to be up a further 2% at $73.5 billion, with revenue in Q1 at $19 billion.  In addition, Intel plans to spend $17 billion on capex to increase capacity, ensure they can support customer demand, and build inventory.

TSMC also gave a bullish message at their investors conference.  CC Wei, TSMC’s CEO, said he expected revenue for 2020 to grow by more than 17%, driven by demand for smartphones, high-performance computing devices, Internet of Things related applications and automotive electronics this year.

In December the Global Purchasing Managers Index was neutral, with a PMI of 50 on average; however, this varied significantly by country, with China, Taiwan and South Korea all showing modest expansion.

Several third-party foundry vendors are entering or expanding their efforts in the silicon carbide (SiC) foundry business amid booming demand for the technology, especially from automotive applications.  However, the entrance of these newcomers may not be so easy, as traditional IDM companies like Cree and Rohm use proprietary processes to differentiate their products.

Finally, with many different variants of 7nm technology being made available here is a concise summary of the differences between the variants and the benefits.



Specialized Accelerators Needed for Cloud Based ML Training
by Tom Simon on 01-27-2020 at 10:00 am


The use of machine learning (ML) to solve complex problems that could not previously be addressed by traditional computing is expanding at an accelerating rate. Even with advances in neural network design, ML’s efficiency and accuracy are highly dependent on the training process. The methods used for training have evolved from CPU-based software to GPUs and FPGAs, which offer big advantages because of their parallelism. However, there are significant further advantages to using specially designed domain-specific computing solutions.

Because training is so compute intensive, total performance and performance per watt are both extremely important. It has been shown that domain-specific hardware can offer several orders of magnitude improvement over GPUs and FPGAs when running training operations.

AI Domain Specific Processor

On December 12th GLOBALFOUNDRIES (GF) and Enflame Technology announced a deep learning accelerator solution for training in data centers. The Enflame CloudBlazer T10 uses a Deep Thinking Unit (DTU) on GF’s 12LP FinFET platform with 2.5D packaging. The T10 has more than 14 billion transistors. It uses PCIe 4.0 and Enflame Smart Link for communication. The AI accelerator supports a wide range of data types, including FP32, FP16, BF16, Int8, Int16, Int32 and others.

The Enflame DTU core features 32 scalable intelligent processors (SIP). Groups of 8 SIPs each are used to create 4 scalable intelligent clusters (SIC) in the DTU. HBM2 is used to provide high speed memory for the processing elements. The DTU and HBM2 are integrated with 2.5D packaging.
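To picture that hierarchy, here is a trivial Python sketch of spreading work across 4 SICs of 8 SIPs each. The round-robin assignment is purely illustrative and is not Enflame’s actual programming model.

```python
NUM_SICS, SIPS_PER_SIC = 4, 8
TOTAL_SIPS = NUM_SICS * SIPS_PER_SIC   # 32 SIPs in the DTU

def assign(sample_idx):
    """Map a work item to a (SIC, SIP) pair, round-robin (illustrative)."""
    return divmod(sample_idx % TOTAL_SIPS, SIPS_PER_SIC)

for i in (0, 7, 8, 31, 32):
    sic, sip = assign(i)
    print(f"work item {i:2d} -> SIC {sic}, SIP {sip}")
```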

This design highlights some of the interesting advantages of GF’s 12LP FinFET process. Because of high SRAM utilization in ML training, SRAM power consumption can play a major role in power efficiency. GF’s 12LP low voltage SRAM offers a big power reduction for this design. Another advantage of 12LP is a much higher level of interconnect efficiency compared to 28nm or 7nm. While 7nm offers smaller feature sizes, there is no commensurate improvement in routing density for the higher-level metals. This means that for a highly connected design like the DTU, 12LP offers a uniquely efficient process node. Enflame is taking advantage of GF’s comprehensive selection of IP libraries for this project. The Enflame T10 has been sampled and is scheduled for production in early 2020 at GF’s Fab 8 in Malta, New York.

A company like Enflame has to walk a very fine line in designing an accelerator like the T10. The specific requirements of machine learning determine many of the architectural decisions for the design. On-chip communication and reconfigurability are essential elements, and the T10 excels in this area with its on-chip reconfiguration algorithm. The choice of 12LP means optimal performance without the risk and expense of going to a more advanced node. GF is able to offer HBM2 and 2.5D packaging in an integrated solution, further reducing risk and complexity for the project.

It is widely understood that increasing training data set size improves the operation and performance of ML applications. The only way to handle these increasing workloads is with fast and efficient accelerators that are designed specifically for the task. The CloudBlazer T10 looks like it should be an attractive solution. The full announcement and more information about both companies is available on the GLOBALFOUNDRIES website.

Also Read:

The GlobalFoundries IPO March Continues

Magnetic Immunity for Embedded Magnetoresistive RAM (eMRAM)

GloFo inside Intel? Foundry Foothold and Fixerupper- Good Synergies



FPGAs in the 5G Era!
by Daniel Nenni on 01-27-2020 at 6:00 am

New Family of FPGAs Speedster7t

FPGAs, today and throughout the history of semiconductors, play a critical role in design enablement and electronic systems, which is why we included the history of FPGAs in our book “Fabless: The Transformation of the Semiconductor Industry” and added a new chapter on the history of Achronix in the 2019 edition.

In a recent blog post, “FPGAs in the 2020s – The New Old Thing”, Achronix reminds us that even though FPGAs are 35 years old, the coming age of AI in the cloud represents a new FPGA growth opportunity, with which I agree 100%. In fact, during our first webinar series last year the Achronix ML webinar broke analytics records.

Whether on the edge (eFPGA) or in the cloud (FPGA), programmable technology will play a critical role with the explosive data growth of the 5G era which has just begun.  We started tracking AI on SemiWiki in Q4 of 2015 and have published 182 blogs that have garnered close to one million views which is quite good. We also get to see who reads what, when, and where. Just to net it out, AI is everywhere and companies big and small are consuming AI design enablement information as fast as we can publish it, absolutely.

Back to the Achronix blog post “FPGAs in the 2020s – The New Old Thing”, it is full of interesting data and links that will be of great use if you are investigating FPGA use in the 5G era. I have also spent many hours researching AI and have finished several AI projects in collaboration with some big name companies and SemiWiki partners. Hit me up in the comments section if you want to talk more. AI is coming, there is no stopping it, and it is exciting so let’s talk.

FPGAs in the 2020s – The New Old Thing, January 8, 2020

FPGAs are the new old thing in semiconductors today. Even though FPGAs are 35 years old, the next decade represents a growth opportunity that hasn’t been seen since the early 1990s. Why is this happening now?

There continues to be a data explosion in the world, with IDC predicting over 175 zettabytes of data will be generated annually by 2025. With this much data, there is a tremendous opportunity to analyze it for insights that can change and influence the world. AI will play a huge role in this data mining operation, and companies are growing their workforce with deep skills in machine learning and data analytics to meet the challenges of the future…

And don’t miss the upcoming Achronix webinar:

New Block Floating Point Arithmetic Unit for processing AI/ML Workloads in FPGA

Abstract:
Block Floating Point (BFP) is a hybrid of floating-point and fixed-point arithmetic where a block of data is assigned a common exponent. We describe a new arithmetic unit that natively performs Block Floating Point for common matrix arithmetic operations and creates floating-point results. The BFP arithmetic unit supports several data formats with varying precision and range. BFP offers substantial power and area savings over traditional floating-point arithmetic units by trading off some precision. This new arithmetic unit has been implemented in the new family of 7nm FPGAs from Achronix. We cover the architecture and supported operations of the BFP unit. In this presentation, artificial intelligence and machine learning workloads are benchmarked to demonstrate the performance improvement and power savings of BFP as compared to half-precision (FP16) operations.
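To make the BFP idea concrete, here is a minimal numpy sketch of quantizing a block of floats to a shared exponent with fixed-point mantissas. The format and rounding choices are illustrative assumptions; Achronix’s actual hardware format is not described in this abstract.

```python
import numpy as np

def to_bfp(block, mantissa_bits=8):
    """Quantize a block of floats to one shared exponent + int mantissas."""
    shared_exp = int(np.ceil(np.log2(np.max(np.abs(block)) + 1e-30)))
    scale = 2.0 ** (shared_exp - (mantissa_bits - 1))
    mantissas = np.round(block / scale).astype(np.int32)
    return mantissas, shared_exp, scale

def from_bfp(mantissas, scale):
    """Reconstruct approximate floats from the BFP representation."""
    return mantissas * scale

x = np.array([0.50, -1.75, 3.20, 0.01])
m, e, s = to_bfp(x)
print("shared exponent:", e)                 # 2 for this block
print("reconstructed:  ", from_bfp(m, s))    # [0.5, -1.75, 3.1875, 0.0]
# Values near the block maximum keep most of their precision; tiny
# values (0.01 here) lose theirs -- the precision/area trade-off the
# abstract describes.
```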

About the presenter
Dr. Mike Fitton is senior director, strategy and planning at Achronix. He has 25+ years of experience in the signal processing domain, including system architecture, algorithm development, and semiconductors across wireless operators, network infrastructure and most recently in machine learning.



Tesla is Teaching Us to Move Over
by Roger C. Lanctot on 01-26-2020 at 10:00 am


Believe it or not, Tesla Motors is teaching us to be better drivers. One of the most remarkable lessons we are learning is that motor vehicles on public roadways ought to stay away from emergency and other service vehicles. In the U.S., we can all expect to hear more about “Move Over” laws – now enacted in all 50 states.

It sometimes seems as if Tesla vehicles have an uncanny ability, while operating in Autopilot mode, to collide with emergency vehicles parked on highways while on official business. The latest incident occurred on December 29th in Cloverdale, Ind., when a Tesla Model 3 collided with a fire truck on Interstate 70. The fire truck reportedly had its lights flashing while parked in the left lane. A passenger in the vehicle was killed. (Coincidentally on the same day in Gardena, Ca., a Tesla Model S also reportedly operating on Autopilot ran a stoplight and crashed into the side of a Honda Civic, killing its two occupants.)

Let’s be clear about one thing. Non-Tesla vehicles get into crashes every day in the U.S. (and around the world) resulting in 100 lives lost daily in the U.S. (3,500 globally). Highway fatalities have become sufficiently routine as to be accepted as the cost of owning and operating our own cars on public rights of way.

Tesla crashes and the associated injuries and fatalities, on the other hand, are news events because the company has introduced computer-based driving into the equation in such a manner as to appear reckless and inspire opposition and outrage. Tesla crashes are also news events because they remain relatively rare.

The unique inclination of Teslas operating on Autopilot to collide with emergency vehicles, though, has shone a spotlight on a big problem for which a solution may be in the offing. Last week, U.S. Department of Transportation Secretary Elaine Chao announced the commitment of $38M to equip emergency response vehicles and infrastructure with life-saving V2X technology in the 5.9GHz band. Chao noted that emergency response vehicles are involved in roughly 46,000 crashes, causing 17,000 injuries and 150 fatalities annually.

The First Responder Safety Technology Pilot Program described by Chao will provide funding to equip emergency response vehicles, transit vehicles, and related infrastructure including traffic signals and highway-rail grade crossings with V2X technology. Chao did not specify the nature of the V2X wireless technology but her comments were interpreted to be technology agnostic – though she did note the agency’s preference that 5.9GHz be preserved for transportation applications regardless of the technology.

Chao’s announced plans mirror legislation sponsored by U.S. Senators Dick Durbin (D-IL), and Tammy Duckworth (D-IL) and introduced in 2019 intended to establish a new national safety priority within an existing federal grant program to increase public awareness of “Move Over” laws and encourage implementation of life-saving digital alert technology.

The USDOT’s announcement of these initiatives also follows appropriations language secured by Durbin, establishing a $5M pilot program to test and deploy these digital alert technologies to protect law enforcement, first responders, roadside crews, and others while on the job. (Worth noting a demonstration at CES2020 by Veoneer and Verizon equipping roadside workers with 5G infused safety vests for communication with oncoming vehicles.)

Chicago-based HAAS Alert supports the Senate bill. The company has spent years implementing its vision of a “Safety Cloud” intended to aggregate digital alerts derived from tracking devices mounted on emergency response and service vehicles.

The effort by HAAS Alert to create its safety cloud has taken many forms, but progress has been steady. The goal is to deliver a driver alerting system that might integrate with embedded in-vehicle infotainment systems and or smartphones to warn drivers of emergency vehicles stopped in the road ahead or approaching from behind or even approaching perpendicularly at an upcoming intersection.

On January 6, HAAS Alert announced a deal with Oshkosh’s Pierce Manufacturing whereby HAAS Alert digital alerting technology will be included as a standard safety feature, at no additional cost, in Pierce’s custom fire apparatus, and as an available aftermarket solution for apparatus currently in service. HAAS Alert struck a similar deal with vehicle maker REV Group in 2018.

Other HAAS Alert deals and initiatives include:

  • Signed a partnership with Code3, one of the largest emergency vehicle and work truck market light manufacturers
  • Performed a Sprint “5G” test for emergency vehicle to emergency vehicle communication
  • Surpassed 100M driver alerts in September 2019; now in use in more than 90 cities
  • Fire standards committees: NFPA950/951 added digital alerting language into their standards, and NFPA1901 is in public comment to bring a digital alerting standard into the fire space
  • Integrated head unit digital alerting pilots with multiple automotive OEMs and suppliers
  • First DOT fleets added in 2019, along with utility trucks, a major U.S. turnpike, DOT snowplows, and a large state tollway, all with flashing-light vehicles that will send digital alerts to drivers
  • Awarded a Department of Homeland Security contract for First Responder V2X – already commercialized and deployed
  • Received an NHTSA-funded digital alerting grant in 2019 for deployment and a study; Michigan’s PlanetM funded a deployment of digital alerting
  • Awarded an Air Force SBIR Contract for fleets
  • Co-founded a non-profit (https://www.arrowcoalition.org/) to raise awareness of Move Over laws

HAAS Alert has learned that it isn’t easy to “do the right thing” when it comes to saving lives with driver alerting technology. Secretary Chao’s announcement has created confusion by emphasizing, as it does, V2X technology. The HAAS Alert safety cloud is not yet a V2X solution; it is a V-2-cloud-2-V solution today. HAAS Alert has had to create a chipset-swappable solution capable of supporting 3G, 4G, LTE, 5G, FirstNet, DSRC, and AT&T and Verizon SIMs. (The HAAS Alert solution does provide for inter-vehicle communications between first responder vehicles.)

To help spread the word on its safety cloud HAAS Alert has published a guide describing both the safety cloud and something the company calls FleetFusion (https://tinyurl.com/tthyd3z). The company also integrates with Geotab and ESRI/ArcGIS platforms: https://www.prnewswire.com/news-releases/haas-alert-launches-on-the-geotab-marketplace-to-offer-enhanced-safety-service-300937015.html

By now it’s clear that first responder fatalities are a problem for all road users, not just drivers of Teslas with Autopilot. There were 49 first responders killed in 2019 as a result of 90,000 collisions – the greatest single source of fatalities for this community. Illinois alone saw three state troopers die in roadside crashes. The numbers are already on the rise in 2020. Tesla is teaching us to Move Over. HAAS Alert is showing us how… and when.



Woven City: Smashing Toyota’s Looms
by Roger C. Lanctot on 01-26-2020 at 6:00 am


Car companies are interesting creatures in a corporate world increasingly dominated by Internet-centric behemoths from Silicon Valley, Seattle, and China. While the denizens of the Internet have demonstrated their ability to create billions of dollars in shareholder value from the whims and whimsy of browsing consumers, car companies have built their valuations on approximating the desires of their potential customers and selling expensive hardware one unit at a time – usually through networks of dealers.

While Internet-centric companies thrive on instant gratification, building solid foundations of value upon instant consumer feedback, car companies can be seen to be more or less “guessing” what consumers will want or need years in the future – due to the long product development cycle. When car companies get it right they, too, can create massive shareholder value from strong positive responses to their products: the Volkswagen Beetle, the Ford Model T, the Toyota Corolla.

At CES2020, just a few weeks ago, maximum car maker consternation at the future vehicular desires of consumers was on full display with an emphasis on autonomous vehicles and even flying cars. As the maker of the single most popular vehicle of all time, the Toyota Corolla, Toyota was notable for touting its Woven City concept for a living environment enabled by artificial intelligence and ruled by robots and autonomous vehicles.

CES2020 – Toyota presentation of Woven City concept – https://www.youtube.com/watch?v=NME7pGh-7rk

This utopian or dystopian vision, depending on your point of view, reflected Toyota’s desire to both segregate and weave together different forms of human transportation moving at different speeds: from pedestrians on foot, to low-speed micro-mobility systems, to e-Palette autonomous shuttles. Toyota intends to break ground on this vision at a 175-acre former manufacturing facility in the shadow of Mt. Fuji in 2021.

In a press event at CES2020 Toyota’s CEO, Akio Toyoda, described the Toyota Woven City vision as a testing facility populated with as many as 2,000 citizens and accessible to scientists from Toyota as well as third parties to test new urban dwelling and transportation concepts. An appreciative audience warmly greeted Toyoda’s conceptual vision, but perhaps they were simply being polite. The implications of Toyota’s Woven City are both troubling and promising and CES2020 attendees can be forgiven for recognizing innovation.

Toyota is to be applauded for affirmatively proposing a solution to the challenges of supporting human life with all of the economic, energy, and ecological concerns currently confronting policy makers and governments. It is no surprise that Toyota emphasizes hydrogen fuel cells at the heart of its vision along with e-Palette autonomous shuttles.

Toyota was kind enough to create a CGI-type rendering of life in the Woven City showing a complete absence of individually owned and operated vehicles, which have been replaced by e-Palette shuttles. E-Palette shuttles are also used as delivery vehicles and mobile retail and service delivery platforms in this city of the future.

Perhaps the strangest aspect of the Woven City video is that the kind of walkable urban space that is imagined looks almost identical to the existing walkable spaces created in the typical Tokyo landscape of today. Tokyo itself is a highly walkable city, with wider pedestrian areas – above and below ground – created for shopping, dining or nightlife. It is almost as if Toyota is trying to compete with and/or replace a cityscape that is already functioning effectively.

We can forgive Toyota for focusing so narrowly on the promotion of its e-Palette concept in addition to hydrogen propulsion. There are many experts and analysts forecasting a future dominated by autonomous shuttles, but few such visions have suggested the complete exclusion of individually owned vehicles.

More remarkable, from the video shared at CES2020, was the division of transportation below and above ground. In the Woven City vision, utilities and product deliveries are managed below ground, while all people-moving appears to take place above ground. In fact, the video shows very little people-moving taking place.

It is hard to accept this Toyota vision of the future from a nation where the subway system in the nation’s capital, Tokyo, moves more than eight million riders daily. The Woven City has no such subway system in addition to having no cars.

But let’s assume, for a moment, that e-Palettes will take on the role of people moving. This raises the question of what the future of Toyota’s vehicle marketing will become. Does the Woven City suggest a future of Toyota selling commercial vehicles in the form of autonomous e-Palettes to developers and cities?

It is worth bearing in mind that Toyota has a majority owned subsidiary – Toyota Housing Corporation – that is in the business of building detached houses and housing products for Japanese consumers. It’s not clear whether the Woven City vision represents an extension of this corporate vision, but it is worth noting – especially given the fact that Toyota Home stores can be found in most Japanese cities.

The only criticism of the Woven City expressed in press reports came in reference to potential privacy violations or to the process of selecting the up-to-2,000 residents of the city. All in all, the entire venture appears far too artificial to address relevant challenges facing urban leaders around the world today.

Transportation is at the core of many of the woes facing cities today. Many of the largest urban centers on the planet have maxed out their ability to accommodate individually owned and operated motor vehicles and are putting policies in place to pry people out of their cars.

The latest initiatives include selling transportation as a subscription or service packaged in segments of hours or days or weeks and aggregated across multiple means of transportation – with an emphasis on public/shared resources. Some cities in the U.S., Europe, and elsewhere have gone further by making public transportation of one kind or another – buses in particular – entirely free.

These strategies, intended to leverage existing infrastructure at minimal cost and maximum impact, are beginning to alter consumer behavior – de-emphasizing the automotive default. Toyota’s Woven City, like its hydrogen propulsion obsession, appears completely detached from current realities with no evolutionary path to adoption. And the exclusion of existing mass people movement solutions is particularly glaring coming from a country that is arguably a leader in the massive and high speed movement of people.

Near the end of his presentation, CEO Toyoda notes Toyota’s legacy as a manufacturer of looms, a heritage shared by many other large Japanese electronics companies, some of which, like Nakajima and Brother, first made sewing machines. Sad for me to say, the Woven City looms as a detached dystopian vision of future living that must be reconsidered in the context of current mass public transportation needs.

More compelling, though arguably more complex, is the almost simultaneous announcement from Toyota of the launch of its Kinto car subscription and mobility portfolio in Europe. This multifaceted approach to expanding transportation options offers the prospect of having an immediate impact on the ownership and usage of existing vehicles. This is probably worth a closer look and more attention than the Woven City. More details can be found here: https://newsroom.toyota.eu/toyota-launches-kinto-a-single-brand-for-mobility-services-in-europe/



ASML “A Swing to Memory Looms” Nice performance while awaiting Memory bounce
by Robert Maire on 01-24-2020 at 6:00 am

  • Good Q4 & 2019 despite weak memory
  • 2020 will be up year but memory an unknown
  • EUV ramp is on track – no China or memory impact
ASML reports an “in line” Q4 despite a weak 2019 for the industry

ASML reported sales of 4B Euros and a nice gross margin of 48%, resulting in earnings of 2.70 Euros per share.  Orders came in at 2.4B Euros, with roughly 80% coming from logic. Despite 2019 being a down year for semiconductor equipment as a whole, ASML managed 8% growth as spending in the industry shifted back towards lithography purchases. We expect this trend of enhanced litho spending to hold true in 2020 as the industry continues its EUV adoption.

Logic (TSMC) remains the biggest driver at roughly 80%
It is interesting to note that ASML was able to keep up its growth even as memory went from the majority of sales down to roughly 20% of sales by the close of 2019. Despite this huge shift in end-market demand the company has maintained good growth.

It obviously helps a lot to have strong backlog and a strong order book to be able to more efficiently manage the ebbs and flows of customer mix as 2019 was not an ebb and flow but more of a stampede away from memory to logic/foundry. It also helps that EUV is obviously focused on foundry/logic so the stampede was to ASML’s benefit as well.

“Focus” changes from making EUV work to making more EUV…..
It is also very clear that, now that we are well past the acceptance and HVM hurdles of EUV, attention is turning to getting more systems out the door faster. Getting cycle times down and getting the supply chain cranked up, while still hard, is not as hard as working out the kinks has been over the last few years.

2020 looks to be about 35 EUV tools, with an eye towards 50 in 2021.  These seem like reasonable, “doable” targets.  We don’t think we need a full-blown memory recovery to get to this year’s goal of 35, and memory will likely recover soon enough to support the 2021 goal of 50.

There is still a lot of work to be done on high NA, but it is less critical than the original work, as high NA is an improvement rather than a wholesale change.

Multibeam delay helps KLA
One of the few negative points raised, although minor, was the delay of multibeam.  While not totally unexpected given the complexity, it does give KLA a bit of time to work on their products and countermeasures.

In our view now that the war has been won on EUV, ASML can and should shift some more focus and spend to metrology & yield related issues and tools and products as it will also support the infrastructure for EUV going forward.

Memory still an unknown
It was clear from the call, and clear in our view, that the recovery of the memory industry is very much unclear. While NAND will no doubt recover first and DRAM some time later, the company gave no indication other than “just hoping” that memory recovers.  There was no evidence, given or implied, of improved order activity or any other indication of memory spend coming back any time soon.

Like the rest of the industry, the key to a strong up cycle is memory along with foundry/logic both working at the same time… we remain with foundry/logic at roughly 80% of business and memory barely plodding along. This is obviously more of a negative for players like Lam who are much more memory-centric.  Even though business at Lam and Applied has picked up of late, it’s not like the rip-roaring memory love fest.

China is a non-issue
There remains a lot of discussion in the press about poor ASML being the ping pong ball in a game between China and the US.  So far we see zero impact from any sales restrictions to China.  We expect no near-term ill effects on ASML, and the real issues and impact are more political than financial.  Though ASML may not be happy to be a pawn, it hasn’t impacted their profitability or overall sales. We think there is a higher risk of the embargo spreading to US equipment companies, which would see more financial impact.

The stocks
Given that the quarter was just in line with no surprises, we expect little movement in an already fully priced stock. There was also nothing surprising or significantly impactful for other stocks that would drive the group one way or another.  The lack of any sign of memory recovery is a little disappointing for a group that has seen its shares on a tear despite the weakness.

All in all no impact and we are not motivated to run out and chase stocks that have already run up nor are we tempted to short stocks that have such unusual support.