Autonomous Driving Still Terra Incognita
by Bernard Murphy on 12-12-2019 at 6:00 am

Whither self-driving?

I already posted on one automotive panel at this year’s Arm TechCon. A second I attended was a more open-ended discussion on where we’re really at in autonomous driving. Most of you probably agree we’ve passed the peak of the hype curve and are now into the long slog of trying to connect hope to reality. There are a lot of challenges, not all technical; this panel did a good job (IMHO) of exposing some of the tough questions and acknowledging that answers are still in short supply. I left even more convinced that autonomous driving is still a hard problem needing a lot more investment and a lot more time to work through.

Panelists included Andrew Hopkins (Dir Systems Tech, Arm), Kurt Shuler (VP Mktg, Arteris IP), Martin Duncan (GM of ADAS, ASIC Div at ST) and Hideki Sugimoto (CTO NSITEXE/DENSO). Mike Demler of the Linley Group moderated. There was some recap of what we do know about functional safety, with a sobering observation that this field (as understood today) started over a decade ago. Through five generations of improvements we now feel we understand more or less what we’re doing for this quite narrow definition of functional safety. We should keep this in mind as we approach safety for autonomous drive, a much more challenging objective.

That led to the million-dollar question – how do you know what’s good enough? Even at a purely functional safety level there is still anxiety. We’re now mixing IPs designed to meet ASIL levels with IPs designed for mobile phones, built with no concept of safety. Are there disciplined ways to approach this? I heard two viewpoints: certainly safety islands and isolation are important, and modularity and composability are important. However, if interactions between subsystems are complex you still need some way to tame that complexity, to be able to analyze and control it with high confidence. Safety islands and isolation are necessary but not sufficient.

In case you’re wondering why we don’t force everything to be designed to meet the highest safety standards, the answer is ROI. Makers of functions in phones have a very healthy market which doesn’t need safety assurance. They’re happy to have those functions also used in self-driving cars, but they’re not interested in doubling their development costs (a common expectation for safety standards) in order to serve that currently tiny and very speculative market. And no one can afford to build this stuff from scratch to meet the new standards.

The hotter question is how safety plays with AI, which is inherently non-deterministic and dependent on training in ways that are still uncharacterized from a safety perspective. ISO 26262 is all about safety in digital and analog functionality; as in much of engineering, we know how to characterize components, subsystems and systems, and we can define metrics and methods to improve those metrics. We’re much less clear on any of this for AI. The “state-of-the-art” in autonomy today seems to be proof by intimidation – we’ll test over billions of miles of self-driving and that will surely be good enough – won’t it? But how do you measure coverage? How do you know you’re not testing similar scenarios a billion times over rather than billions of different scenarios? And how do you know that billions of scenarios would be enough for acceptable coverage? Should you really test trillions, quadrillions, …?

This led on to SOTIF (safety of the intended function), an ISO follow-on to 26262 intended to address safety at the system level. Kurt’s view is that this is more of a philosophical guide than a checklist, useful at some level but hardly an engineering benchmark. There’s a new emerging standard from Underwriters Labs (UL) called UL 4600 which, as I understand it, is primarily a very disciplined approach to documenting use-cases and the testing done per use-case. That seems like a worthwhile and largely complementary contribution.

Getting back to mechanisms, one very interesting discussion revolved around a growing suspicion that machine learning (ML) alone is not enough for self-driving AI. We already know of a number of problems: non-determinism, the coverage question, spoofing, security issues and issues in diagnosis. Should ML be complemented by other methods? A popular trend in a number of domains is to make more use of statistical techniques. This may sound odd; ML and statistics are very similar in some ways, but they have complementary strengths. For example statistical methods are intrinsically diagnosable.

Another mechanism drawn from classical AI is rule-based systems. Some of you may remember ELIZA, a very early natural language system based on rules. Driving is to some extent a rule-based activity (following the highway code, for example), so rules could be a useful input. Of course simply following rules isn’t good enough. The highway code doesn’t specify what to do if a pedestrian runs in front of the car, or how to recognize a pedestrian in the first place. But it’s not a bad starting framework. On top of that a practical system needs flexibility to make decisions around situations not seen before, and the ability to learn from mistakes. We should also recognize that complex rulesets may have internal inconsistencies; intelligent systems need to be able to work around these.

The panel closed with a discussion on the explosion in different AI systems and whether this is compounding the problem. The general view was that yes, there are a lot of solutions but (a) that’s a natural part of evolution in this domain, (b) some difference is inevitable between say audio and vision solutions and (c) some will likely be essential between high-end, high-complexity solutions (say vision) and lower complexity solutions (say radar).

All in all, a refreshing and illuminating debate, chasing away some of the confusion spread by the popular pundits.

Circling back to our safety roots, if you’re looking for a clear understanding of ISO 26262 and what it means for chip design teams, a great place to start is the paper, “Fundamentals of Semiconductor ISO 26262 Certification:  People, Process and Product.”


Characteristics of an Efficient Inference Processor
by Tom Dillinger on 12-11-2019 at 10:00 am

The market opportunities for machine learning hardware are becoming more clearly defined, with the following (rather broad) categories emerging:

  1. Model training:  models are evaluated at the “hyperscale” data center;  utilizing either general purpose processors or specialized hardware, with typical numeric precision of 32-bit floating point for weights/data (fp32)
  2. Data center inference:  user data is evaluated at the data center;  for applications without stringent latency requirements (e.g., minutes, hours, or overnight);  examples include: analytics and recommendation algorithms for e-commerce, facial recognition for social media
  3. Inference utilizing an “edge server”:  a derivative of data center inference utilizing plug-in hardware (PCIe) accelerator boards in on-premises servers;  for power efficiency, training models may be downsized to employ “brain” floating point (bfloat16), which truncates the mantissa from 23 to 7 bits while keeping the same exponent range as fp32 (a simple truncation sketch follows this list);  more recent offerings include stand-alone servers solely focused on ML inference
  4. Inference accelerator hardware integrated into non-server edge systems:  applications commonly focused on models receiving sensor or camera data requiring “real-time” image classification results – e.g., automotive data from cameras/radar/LiDAR, medical imaging data, industrial (robotic) control systems;  much more aggressive power and cost constraints, necessitating unique ML hardware architectures and chip-level integration design;  may invest additional engineering resource to further optimize the training model down to int8 for inference weights and data, accuracy-permitting (or, a network using a larger data representation for a Winograd convolution algorithm)
  5. Voice/command recognition-specific hardware:  relatively low computational demand;  extremely cost-sensitive
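To make the bfloat16 point above concrete, here is a minimal Python sketch of the truncation idea: keep the sign, the full fp32 exponent and the top 7 mantissa bits, and drop the rest. It is illustrative only (production converters typically round-to-nearest rather than simply truncate), and the sample weight value is arbitrary.

```python
import struct

def fp32_to_bfloat16_bits(x: float) -> int:
    """Truncate an IEEE-754 fp32 value to bfloat16 by keeping the top 16 bits
    (1 sign bit, 8 exponent bits, 7 mantissa bits)."""
    fp32_bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return fp32_bits >> 16  # drop the low 16 mantissa bits

def bfloat16_bits_to_fp32(b: int) -> float:
    """Re-expand bfloat16 bits to an fp32 value by zero-padding the mantissa."""
    return struct.unpack(">f", struct.pack(">I", b << 16))[0]

w = 0.123456789                       # arbitrary example weight
b = fp32_to_bfloat16_bits(w)
print(f"fp32 weight        : {w:.9f}")
print(f"bfloat16 round-trip: {bfloat16_bits_to_fp32(b):.9f}")  # only ~2-3 decimal digits survive
```

The appeal for inference hardware is that bfloat16 halves weight storage and bandwidth while keeping fp32’s dynamic range, so retraining is often unnecessary.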

Hyperscale data center hardware

The hardware opportunities for hyperscale data center training and inference are pretty well-defined, although research in ML model topologies and weights + activation optimization strategies is certainly evolving rapidly.  Performance for hyperscale data center operation is assessed by the overall throughput on large (static) datasets.  Hardware architectures and the related software management are focused on accelerating the evaluation of input data in large batch sizes.

The computational array of MACs and activation functions is optimized concurrently with the memory subsystem for evaluation of multiple input samples – i.e., batch >> 1.  For large batch sizes provided to a single network model, the latency associated with loading network weights from memory to the MAC array is less of an issue.  (Somewhat confusingly, the term batch is also applied to the model training phase, in addition to inference of user input samples.  In the case of training, the term batch refers to the number of supervised testcases evaluated and error accumulated before the model weight correction step commences – e.g., the gradient descent evaluations of weight value sensitivities to reduce the error.)
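A toy latency model helps show why batch size matters so much here; the timing numbers below are arbitrary placeholders chosen only to illustrate the amortization effect, not measurements of any particular accelerator.

```python
# Toy model: each layer's weights are loaded into the MAC array once per batch,
# then one compute pass runs per input sample.

WEIGHT_LOAD_US = 500.0          # hypothetical time to stream a layer's weights to the MACs
COMPUTE_US_PER_SAMPLE = 50.0    # hypothetical MAC time per input sample

def mac_utilization(batch: int) -> float:
    """Fraction of total time the MAC array spends computing rather than waiting on weights."""
    total_us = WEIGHT_LOAD_US + batch * COMPUTE_US_PER_SAMPLE
    return (batch * COMPUTE_US_PER_SAMPLE) / total_us

for batch in (1, 8, 64, 512):
    print(f"batch={batch:4d}  MAC utilization ~{mac_utilization(batch):.0%}")
# batch=1 leaves the array idle most of the time, which is exactly the case
# edge inference hardware has to optimize for.
```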

Machine Learning Inference Hardware

The largest market opportunities are for the design of ML accelerator cards and discrete chips for categories (3) and (4) above, due to the breadth of potential applications, both existing and yet to evolve.  What are the characteristics of these inference applications?  What will be the differentiating features of the chip and accelerator boards that are integrated into these products?  For some insight into these questions, I recently spoke with Geoff Tate, CEO at FlexLogix, designers of the InferX X1 machine learning hardware.

Geoff provided a very clear picture, indicating: “The market today for machine learning hardware is mostly in the data center, but that will change.  The number one priority of the product developers seeking to integrate inference model support is the performance of batch=1 input data.  The camera or sensor is providing real-time data at a specific resolution and sampled frame rate.  The inference application requires model output classification to keep up with that data stream.  The throughput measure commonly associated with data center-based inference computation on large datasets doesn’t apply to these applications.  The goal is to achieve high MAC utilization, and thus, high model evaluation performance for batch=1.”

Geoff shared the following graph to highlight this optimization objective for the inference hardware (from Microsoft, Hot Chips 2018).

“What are the constraints and possible tradeoffs these product designers are facing?”, I asked.

Geoff replied, “There are certainly cost and power constraints to address.  A common measure is to reference the performance against these constraints.  For example, product developers are interested in ‘frames evaluated per second per Watt’ and ‘frames per second per $’, for a target image resolution in megapixels and a corresponding bit-width resolution per pixel.  There are potential tradeoffs in resolution and performance that are possible.  For example, we are working with a customer pursuing a medical diagnostic imaging application.  A reduced resolution pixel image will increase batch=1 performance, while providing sufficient contrast differentiation to achieve highly accurate inference results.”

I asked, “The inference chip/accelerator architecture is also strongly dependent upon the related memory interface – what are the important criteria that are evaluated for the overall design?”

Geoff replied, “The capacity and bandwidth of the on-die memory and off-die DRAM need to load the network weights and store intermediate data results to enable a high sustained MAC utilization, for representative network models and input data sizes.  For the InferX architecture, we balanced the on-die SRAM memory (and related die cost) against the external x32 LPDDR4 DRAM interface.”  The figures below illustrate the inference chip specs and performance benchmark results.

“Another tradeoff in accuracy versus performance is that a bfloat16 computation takes two MAC cycles compared to an int8 model.”   

I then asked Geoff, “A machine learning model is typically represented in a high-level language, and optimized for training accuracy.  How does a model developer take this abstract representation and evaluate the corresponding performance on inference hardware?”

He replied, “We provide an analysis tool to customers that compiles their model to our inference hardware and provides detailed performance estimation data – this is truly a critical enabling technology.”  (See the figure below for a screenshot example of the performance estimation tool.)

The FlexLogix InferX X1 performance estimation tool is available now.  Engineering samples of the chip and PCIe accelerator card will be available in Q2’2020.  Additional information on the InferX X1 design is available at this link.

-chipguy


The First Must-Have in 5G
by Bernard Murphy on 12-11-2019 at 6:00 am

Bulk acoustic wave filter

If I were asked about must-have needs for 5G, I’d probably talk about massive MIMO and a lot of exotic parallel DSP processing, and perhaps the need for new intelligent approaches to link adaptation and intelligent network slicing in the infrastructure. But there’s something that comes before all that digital cleverness, in the RF front-end, which has also become pretty exotic. This is the filter (or filters). These are the devices that pluck out an RF channel of interest from the surrounding radio cacophony and ignore everything else.

Filters at this level look nothing like conventional circuits, either analog or digital. They operate on a piezoelectric substrate; an electric transducer (driven by the input radio signal) at one end stimulates mechanical action and thus an acoustic wave. This travels to the other end where that wave triggers a second transducer, converting the acoustic signal back into an electrical signal.

It might seem like a lot of work to accomplish not very much, but the magic is in managing those acoustic waves. Like a tiny musical instrument, the filter (plus a cavity underneath) has a narrow band of resonant frequencies; everything outside that frequency range is damped into non-existence. And as with musical instruments, the resonance range depends on the mechanical design of the device – dimensions, thicknesses, material and the cavity.

2G, 3G and 4G front-ends have used surface acoustic wave (SAW) filters in which the wave travels along the surface of the device. These are apparently very cost effective but are limited to frequencies below ~2GHz, where filter selectivity begins to decline. This is fine for 3G, on the edge for 4G and not good enough for 5G. That’s pushed a switch to bulk acoustic wave (BAW) filters which can support higher frequencies, at somewhat higher costs.
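As a rough illustration of why BAW reaches higher frequencies: a BAW thickness-mode resonance scales inversely with the piezo film thickness, while a SAW resonance is set by the transducer finger pitch, which lithography limits. The velocities and dimensions below are generic textbook-style values I’ve assumed for the sketch, not figures from any vendor.

```python
# Back-of-envelope resonance estimates with assumed material values.
# BAW (thickness mode): f0 ~ v_bulk / (2 * film_thickness)
# SAW (surface mode):   f0 ~ v_surface / transducer_pitch, limited by finger lithography

V_BULK_ALN_M_S = 11_000.0   # approx. bulk acoustic velocity in AlN
V_SURFACE_M_S = 4_000.0     # typical surface acoustic velocity on common SAW substrates

def baw_resonance_ghz(film_thickness_um: float) -> float:
    return V_BULK_ALN_M_S / (2.0 * film_thickness_um * 1e-6) / 1e9

def saw_resonance_ghz(pitch_um: float) -> float:
    return V_SURFACE_M_S / (pitch_um * 1e-6) / 1e9

print(f"BAW, 1.0 um AlN film    : ~{baw_resonance_ghz(1.0):.1f} GHz")  # thin films reach 5G bands
print(f"SAW, 2.0 um finger pitch: ~{saw_resonance_ghz(2.0):.1f} GHz")  # around where SAW tops out
```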

One reason for that cost may be the complexity of designing such filters. You see, these are really MEMS devices since they’re electromechanical; even though you don’t see anything moving, the acoustic waves are mechanical distortions in the piezoelectric (PE) structure. A typical filter is a thin film of PE between two electrodes, sitting on top of the cavity. I’ve talked before about the challenges in designing MEMS – no pre-characterized cells or nicely defined PDKs from which you can build reliable models.

There’s a second problem. Acoustic waves go where they want to go. While a square or rectangular structure might seem like the logical way to build these things, waves can reflect off the ends and can also run along the surface. Either effect may contaminate the ideal bulk behavior. So structures are built with interesting shapes like irregular pentagons (see above) to damp out undesired behaviors. It’s also common to build networks of resonators, each of which can have a different geometry.

Now you see the problem – electromechanical 3D modeling (because you’re modeling bulk and surface acoustic waves), through strange geometries with little reference data to guide your models. I was told that some of the leading companies producing these filters are still using a design-fab-analyze-correct loop to get to optimized designs. There has been no better way. But it’s still been worth it because the volumes for these devices are huge – one (or more?) in every 5G edge device, including cellphones.

But now there is a better way, and that’s important because that development loop also affects time to market in what are commonly winner-take-all markets (at least per model release). That is to virtually prototype these devices, starting from a custom-characterized PDK.

Which is what the Mentor/Tanner-SoftMEMS-OnScale solution does well. You design the device layer by layer in L-Edit (which incidentally handles strange shapes like irregular pentagons well), convert that to a 3D model by adding materials definitions, piezo properties through matrices, thicknesses, process data, mechanical properties and boundary conditions, then model the whole thing, or just a part of it, in the cloud using the scalable FEA analytics from OnScale. Mary Ann (SoftMEMS) told me they can even model a full wafer, looking for behaviors and yield hits around the edges.

Better virtual modeling and better analytics, all the way up to wafer-scale analysis. That should help reduce time-to-market. You can learn more about this flow HERE.


The ESD Alliance Honors Mary Jane Irwin
by Randy Smith on 12-10-2019 at 10:00 am

The Phil Kaufman Award has been given annually since 1994 to individuals who have had a significant impact on electronic system design. I have attended several of the award dinners during that time. Most of the time (roughly 70%), the award recipients were either people I knew or people whose textbooks I had read. The award recognizes people from many different areas of contribution who have given substantial service to the industry. Recognition may come for academic and research achievements, substantial business or organizational leadership, or some combination of these activities. In November this year, the award was given to Dr. Mary Jane Irwin. I was happy to be there for that presentation.

Having not met Dr. Irwin before, my interest was piqued to learn about her contributions to the industry. I knew I had heard her name before, but I could not recall in what context that had happened. I now realize that I had heard of Dr. Irwin when she was the Design Automation Conference (DAC) chair in 1999 though she has also been on the DAC Executive Committee for many years. But, as I learned at the award dinner, her contributions are so much more.

On the technical side, Dr. Irwin’s credentials are flawless. She received her Master of Science and Ph.D. degrees in computer science from the University of Illinois, Urbana-Champaign, and is the recipient of an Honorary Doctorate from Chalmers University in Sweden. Her significant academic contributions include several advances in power analysis and the development of VLSI architectures for signal and image processing, such as the discrete wavelet transform. She has been a prolific writer, authoring or co-authoring more than 200 technical publications, and she has received many awards for her papers as well.


While these are significant contributions, what struck me most at the presentation was Dr. Irwin’s influence on and mentorship of others. While the effect may be difficult to measure directly, the scope of her influence in helping others advance the state of the electronic system design industry is abundantly obvious. Dr. Irwin advised more than 25 Ph.D. students. Along with Marie Pistilli, she co-founded the organization now known as Women in Electronic Design. As mentioned before, she has been a significant leader and contributor to DAC for many years. Her mentorship was made even clearer by a presentation from Dr. Valeria Bertacco of the University of Michigan, who showed photographs of a seemingly endless list of students Dr. Irwin has led and inspired to contribute to the electronic system design industry. We should be truly grateful for Mary Jane’s contributions to electronic system design, expressed through her service to our community.

I should also mention that some credit should be given to Pennsylvania State University. The university has supported Dr. Irwin’s considerable research and her copious publications, giving her a platform to develop unique technology while also developing future contributors to our industry.

For more information on the Kaufman Award, see http://esd-alliance.org/phil-kaufman-award/.


IEDM 2019 Press Lunch Exposed!
by Daniel Nenni on 12-10-2019 at 6:00 am

One of the many benefits of blogging for SemiWiki is the free conference passes and buffet lunches, absolutely. IEDM, one of the more prestigious semiconductor conferences and now in its 65th year, is being held at the Hilton Hotel in San Francisco’s famed Union Square this week. This year more than 1,910 semiconductor professionals registered to see the 238 papers selected from 613 submissions. The press lunch is a traditional conference event, but IEDM did a very nice job this year so I will start our IEDM coverage here.

The conference theme this year is Innovative Devices for an Era of Connected Intelligence, which sounds harmless enough if used for the greater good. The evil applications, however, are daunting if you think about it, which I do constantly. Upon check-in we were given flash drives with all of the papers, which will make great reading for the holidays.

The IEDM publicity chairs, Rihito Kuroda (Tohoku University) and Dina Triyoso (TEL Technology Center America), gave a brief overview of the conference followed by a Q&A session. The other conference chairs lunched with us as well, which was great for networking.

From the presentation:

Recurring themes
CMOS technology
3D Integration
Memories (NVM, MRAM, ReRAM, FeRAM)
Neuromorphic computing and devices
Novel materials and architectures
Power electronics
Negative capacitance devices – and applications

CMOS Technology
Session 3 – ALT – Monolithic 3D Integration and BEOL Transistors
Session 7 – MS – Physics of Ferroelectric and Negative Capacitance Devices
Session 11 – ALT – Gate-All-Around Device Technologies
Session 19 – ALT – BEOL and 3D Packaging Innovation
Session 29 – ALT – High Mobility Ge-Based Channel Devices
Session 36 – ALT – CMOS Platform Technologies

Memory Technology
Session 2 – MT – STT-MRAM
Session 15 – MT – Ferroelectrics
Session 22 – MT/EDT – Focus Session: Emerging AI Hardware
Session 28 – MT – Charge Based Memory and Emerging Memories
Session 35 – MT – Selectors and RRAM: Technology and Computing
Session 38 – MT – Memory for Neural Network
Related Session 7 – MS – Physics of Ferroelectric and Negative Capacitance Devices

Power Devices and Systems
Session 4 – PDS – Advances in GaN Power Devices and GaN Monolithic Integration
Session 12 – PDS – Advances in Silicon and Gallium Oxide Power Device Technologies
Session 20 – PDS – SiC Power Devices

We were not allowed to take pictures but I figured one of my lunch was okay. I get the no-pictures rule as a way to promote better content in the presentations, but it is impossible to enforce and is rampantly violated. The other issue brought up in the Q&A is the technical content. Several presentations by big-name companies were flagged for empty content; ASML is a notorious offender. The conference organizers took our input gracefully, but they should know IEDM is one of the best content conferences on the circuit.

Monday was memory day (MRAM); Don Draper will be covering that in detail. Scotten Jones has a handful of blogs in mind through the holidays, so stay tuned to SemiWiki.com for semiconductor coverage by actual semiconductor professionals.

About IEDM
With a history stretching back more than 60 years, the IEEE International Electron Devices Meeting (IEDM) is the world’s pre-eminent forum for reporting technological breakthroughs in the areas of semiconductor and electronic device technology, design, manufacturing, physics, and modeling. IEDM is the flagship conference for nanometer-scale CMOS transistor technology, advanced memory, displays, sensors, MEMS devices, novel quantum and nano-scale devices and phenomenology, optoelectronics, devices for power and energy harvesting, high-speed devices, as well as process technology and device modeling and simulation. The conference scope not only encompasses devices in silicon, compound and organic semiconductors, but also in emerging material systems. IEDM is truly an international conference, with strong representation from speakers from around the globe.


As 2019 comes to an end everyone is starting to look at what 2020 holds
by Mark Dyson on 12-09-2019 at 10:00 am

At the moment there are many encouraging signs based on the latest data. Let’s hope this trend continues into 2020 and that 2020 is the year of recovery for the semiconductor market. However, much depends on how the US-China trade war pans out. Last week Trump blew hot and cold, saying everything from the negotiations are going very well to suggesting there may not be an agreement until after the US presidential election next year. The next round of additional US tariffs is due to go into effect on December 15th, so hopefully there will be enough progress to delay their imposition.

According to IHS Markit, global semiconductor sales dropped 14.2% in the first 3 quarters of 2019 compared to 2018, but there are signs of recovery even in the memory segment, which has dragged down the sector so far. Intel retained its number 1 position with 16.3% revenue growth in Q3. For the full year, IHS Markit estimates sales will recover slightly and drop only 12.4% compared to 2018.

For 2020, they estimate that NAND flash will grow 19%.  Strong growth in NAND flash and DRAM is forecast as momentum increases for 5G connectivity, artificial intelligence, deep learning, and virtual reality in mobile, data center and cloud-computer servers, automotive, and industrial markets in 2020.

SEMI reported that global semiconductor sales rebounded in October with a 2.9% month-on-month increase to US$35.6 billion, though this was still down 13.1% year-on-year.

Meanwhile the World Semiconductor Trade Statistics (WSTS) organization projects annual global sales will decrease 12.8 percent in 2019, before the market starts to recover with increases of 5.9 percent in 2020 and 6.3 percent in 2021.

SEMI also published its 3rd quarter global semiconductor equipment manufacturers billings data, showing 12% growth over Q2 but still down 6% compared to Q3 2018. Taiwan regained its status as the world’s largest semiconductor equipment market, growing 21% from Q2 and up 34% from a year ago, buying $3.9 billion of equipment. Taiwan ranked third in semiconductor equipment purchases throughout 2018, behind South Korea and China, before taking the top slot in Q1 and falling to second behind China in Q2. TSMC capex accounted for $3.21 billion of the total as it invested to support 7nm, 5nm and 3nm technologies.

In addition, TSMC announced it plans to spend US$14-15 billion on capital expenditure next year, with more than half of this going to expand its 5nm technology to support 5G growth. TSMC sees much stronger than expected demand for 5nm and 7nm due to the rapid deployment of 5G around the world. TSMC also confirmed it is on schedule to start mass production of 3nm in 2022.

Taiwanese foundry UMC has announced that it has released 22nm technology for production.

Taiwan’s manufacturing index hit a 15-month high last month due to strong demand from the electronics sector driven by 5G applications.  The PMI increased from 52.7 in October to 54.9 in November, with the sub-index of new business orders climbing from 52.7 to 61.

In South Korea the outlook is not so rosy, with export orders of semiconductors decreasing 31% to US$7.4 billion in November, the 12th straight month of decline. However, market analysts are hopeful of a recovery soon as the Chinese PMI rebounded to 51.8 in November.

Huawei CEO Ren Zhengfei has said the company plans to shift its US-based research centre to Canada. He also said he wants to build new factory capacity in Europe to make 5G networking equipment.

According to Bloomberg, Chinese semiconductor companies are stockpiling US semiconductor chips in case the trade war worsens and the US cuts off access to US technology. In the past 3 years Chinese purchases of ICs have risen strongly, and in the last 2 months imports have been the highest since the start of 2017.

Elsewhere in China, Xiaomi and Oppo both announced that they will use Qualcomm’s latest 5G Snapdragon 865 chip in flagship smartphones to be released in Q1 next year.

According to the EETimes, ChangXin Memory is emerging as China’s leading DRAM manufacturer and is currently running 20,000 wafers per month at its fab in Hefei. It is currently using 19nm technology to produce LPDDR4 and DDR4 8Gbit DRAM products, and has plans to double its production in Q2 2020.

In company news, AMS has announced that its second bid for Osram succeeded, having acquired more than the required 55% of shares with its €41/share offer, which values the company at €4.5 billion.

Also, STMicroelectronics has announced it has acquired the remaining 45% of Swedish silicon carbide wafer manufacturer Norstel AB. Norstel develops and manufactures 150mm silicon carbide (SiC) bare and epitaxial wafers.


Bob Swan says Intel 7nm equals TSMC 5nm!
by Daniel Nenni on 12-09-2019 at 6:00 am

Bob Swan is really starting to grow on me. Admittedly, I am generally not a fan of CFOs taking CEO roles at semiconductor companies but thus far Bob is doing a great job. This comes from my outside-looking-in observations and from the people I know inside Intel, absolutely.

Bob did a fireside chat with Credit Suisse at their 23rd annual technical conference which is now up on the Intel website HERE. It is 51 minutes and definitely worth a listen while sorting laundry or getting a mani pedi.

The media really latched onto Bob’s comments about destroying the Intel mindset of protecting its 90% CPU market share and focusing instead on growing other market segments. Dozens of articles hit the internet by people who have no idea what they are talking about, so don’t waste your time.

The most interesting comments to me were in relation to TSMC. According to Bob Swan, Intel 7nm is equivalent to TSMC 5nm, which I agree with; I just do not remember an Intel CEO ever saying such a thing. He also said that Intel 5nm will be equivalent to TSMC’s 3nm, about which I am not so sure. Making a FinFET-to-FinFET process equivalency statement is fine, but from what I was told Intel will be using nanosheets at 5nm.

Bob also talks about Intel’s transitions from 22nm to 14nm to 10nm in very simple terms. 22nm to 14nm had a 2.4x density target, which as we now know was a very difficult transition. From 14nm to 10nm Intel targeted a 2.7x density improvement, which led to even more manufacturing challenges.  Intel 7nm with EUV will be back to a 2.0x scaling target.
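Just as arithmetic on the targets quoted above (not Intel guidance), compounding the per-node factors shows how aggressive that middle step was relative to the overall trajectory:

```python
# Compounding the density targets quoted above (illustrative arithmetic only).
targets = [
    ("22nm -> 14nm", 2.4),
    ("14nm -> 10nm", 2.7),
    ("10nm -> 7nm (EUV)", 2.0),
]

cumulative = 1.0
for step, factor in targets:
    cumulative *= factor
    print(f"{step:20s} {factor}x per step, ~{cumulative:.1f}x vs. 22nm")
# Roughly 13x overall; the point is that the aggressive 2.7x middle step is what made
# 10nm so painful, and 7nm dials the per-node ambition back to 2.0x.
```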

Remember, Intel was on a two-year process cadence until 14nm. Intel 22nm was launched in 2011, 14nm came 3 years later (2014), and 10nm 5 years after that. Intel 10nm was officially launched in 2019 and Intel 7nm is scheduled for late 2021, which I have no doubt they will hit given the above targets.

TSMC on the other hand delivered 16nm in 2015, 10nm in 2017, and 7nm in 2018. TSMC will deliver 5nm in 2020 and 3nm (also a FinFET based technology) is scheduled for 2022. You can expect 5nm+ to fill in the gap year just as 7nm+ did in 2019. Remember, TSMC is on the Apple iProducts schedule so they have to be in HVM early in the year versus late for Apple to deliver systems in Q4.  Intel just has to ship chips.

Bottom line: TSMC is still about a year ahead of Intel on process technology and I do not see that changing anytime soon, my opinion.

I am at IEDM 2019 this week with SemiWiki bloggers Scotten Jones and Don Draper (new blogger) so stay tuned. TSMC is giving a paper on 5nm and of course the chatter in the hallways has even more content.

TSMC to Unveil a Leading-Edge 5nm CMOS Technology Platform: TSMC researchers will describe a 5nm CMOS process optimized for both mobile and high-performance computing. It offers nearly twice the logic density (1.84x) and a 15% speed gain or 30% power reduction over the company’s 7nm process. It incorporates extensive use of EUV lithography to replace immersion lithography at key points in the manufacturing process. As a result, the total mask count is reduced vs. the 7nm technology. TSMC’s 5nm platform also features high channel mobility FinFETs and high-density SRAM cells. The SRAM can be optimized for low-power or high-performance applications, and the researchers say the high-density version (0.021µm²) is the highest-density SRAM ever reported. In a test circuit, a PAM4 transmitter (used in high-speed data communications) built with the 5nm CMOS process demonstrated speeds of 130 Gb/s with 0.96pJ/bit energy efficiency. The researchers say high-volume production is targeted for 1H20. (Paper #36.7, “5nm CMOS Production Technology Platform Featuring Full-Fledged EUV and High-Mobility Channel FinFETs with Densest 0.021µm² SRAM Cells for Mobile SoC and High-Performance Computing Applications,” G. Yeap et al., TSMC)
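As a quick sanity check on those transmitter figures (this is just arithmetic on the numbers quoted in the abstract, nothing more):

```python
# Arithmetic check of the PAM4 transmitter figures quoted in the paper abstract.
data_rate_bps = 130e9        # 130 Gb/s
energy_per_bit_j = 0.96e-12  # 0.96 pJ/bit

power_w = data_rate_bps * energy_per_bit_j
print(f"Transmitter power ~{power_w * 1e3:.0f} mW")  # ~125 mW at full rate
```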

Other TSMC presentations at IEDM 2019

Road map from IEDM:

Note: Intel’s slide with ASML’s animations overlaid, as shown in the slide deck distributed by ASML. Note by Anandtech: “After some emailing back and forth, we can confirm that the slide that Intel’s partner ASML presented at the IEDM conference is actually an altered version of what Intel presented for the September 2019 source. ASML added animations to the slide such that the bottom row of dates correspond to specific nodes, however at the time we didn’t spot these animations (neither did it seem did the rest of the press). It should be noted that the correlation that ASML made to exact node names isn’t so much a stretch of the imagination to piece together, however it has been requested that we also add the original Intel slide to provide context to what Intel is saying compared to what was presented by ASML. Some of the wording in the article has changed to reflect this. Our analysis is still relevant.” Please see the full article in Anandtech for all the details: LINK

Related Blog


Put Uber out of Our Misery
by Roger C. Lanctot on 12-06-2019 at 2:00 pm

The time may have finally arrived to put app-based transportation options out of commission. The latest report of 3,000 rapes and sexual assaults committed against or by Uber drivers in 2018 highlights a serious and possibly growing shortcoming of gig-type ride-hailing and delivery services: the weakness of driver background checks and the ability of some non-approved drivers to log in as Uber drivers.

The warning signs have been flashing for several years, with occasional headlines relating to particularly egregious incidents in far-flung cities around the world. But London’s recent decision not to renew Uber’s license – due to 14,000 rides being given by uninsured drivers – specifically reflected on the ability of some Uber drivers to fake their identities when using the app.

Shortly before the London announcement, one of Uber’s commercial insurance providers, James River, dropped the company from its portfolio – with a lengthy explanation during its subsequent earnings call (after taking a hit to profitability). James River’s chief executive officer, Adam Abrams, attributed an extraordinary loss in JR’s latest quarter to Uber and various changes in Uber’s business model.

Abrams did not specify the nature or source of the lack of profitability from the Uber account, saying only that “the risk associated with this (changing) model shifted as the company expanded into new regions, added tens of thousands of drivers and evolved beyond just ride hailing.”

Abrams further noted additional complications from the passage, in California, of Assembly Bill 5, which James River believes will adversely “alter the claims profile for rideshare companies.”

Given the fact that one of the largest sources of operational cost for Uber and its competitors is insurance, one can expect some significant reappraisals ahead for these operators. The always tenuous app-based approach to ad hoc transportation has been simultaneously attractive for its low cost and nerve-racking for its dependence on amateur drivers.

To be clear, the risks are serious and significant for both driver and passenger in this ad hoc approach to fulfilling transportation needs. Drivers are enticed by the opportunity to be their own boss and work when they want. Passengers glory in an app-based discounted taxi experience with no need for cash or credit card.

But I have yet to take a ride with Lyft or Uber without hearing half a dozen stories about past misunderstandings or disputes with argumentative or drunk passengers that inevitably involve the police, if all involved are fortunate. This is to say nothing of the male and female drivers getting propositioned by male and female passengers – I am sure the reverse occurs equally frequently.

Suffice to say it’s a hot mess. Suffice it to say I have never heard the same sketchy stories from professional taxi drivers. (I will try not to dwell on my Lyft driver in Las Vegas who showed me the two firearms he carries with him at all times.)

As it becomes increasingly clear to insurers, regulators, and passengers that ride-hailing app drivers may not be who they are supposed to be according to the app (which can hold true for passengers as well), pressure will grow for either greater regulation, a fundamental change in the business model, or outright sanctions. Cheap taxi rides sound like a great time until someone gets hurt, and it appears that thousands of people may have indeed been hurt.


WEBINAR: Analyzing PowerMOS Devices to Reduce Power Loss and Improve Reliability
by Daniel Nenni on 12-06-2019 at 6:00 am

The symbol for a PowerMOS device in a converter circuit schematic looks simple enough. However, it belies a great deal of hidden complexity. A single device is actually a huge array of parallel intrinsic devices connected together to act as a single high power device. While their gate lengths are small, as with many other MOS devices, the effective gate width (W) can reach up to many meters. Rows and rows of intrinsic devices need to be connected so that the high current for the device is distributed among these with minimal and uniform resistance. The total resistance of the complex internal metal structures and the connected intrinsic devices determine power loss during device operation, having a large effect on circuit efficiency. Inefficient circuits, depending on their application, can require more cooling, cost more to operate or may suffer from shortened battery life.
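To see why Rdson matters so much for efficiency, here is a small illustrative calculation of on-state (conduction) loss; the current, duty cycle and output power below are assumptions for the sketch, not values from Magwel or any specific design.

```python
# Illustrative conduction-loss arithmetic for a PowerMOS switch (hypothetical values):
# power lost in the on-state scales with Rdson and the square of the load current.

def conduction_loss_w(i_load_a: float, rdson_mohm: float, duty_cycle: float = 1.0) -> float:
    """I^2 * R loss while the device conducts, scaled by its conduction duty cycle."""
    return (i_load_a ** 2) * (rdson_mohm * 1e-3) * duty_cycle

I_LOAD = 20.0      # A, assumed load current
P_OUT = 240.0      # W, assumed converter output power

for rdson in (5.0, 10.0, 20.0):   # milliohms
    loss = conduction_loss_w(I_LOAD, rdson, duty_cycle=0.5)
    print(f"Rdson={rdson:4.1f} mohm -> conduction loss {loss:4.1f} W "
          f"(~{loss / P_OUT:.1%} of output power)")
# Halving Rdson through better metal and via layout halves this loss term, which is
# why accurate Rdson extraction matters for converter efficiency.
```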

WEBINAR: Analyzing PowerMOS Devices to Reduce Power Loss and Improve Reliability

In order to optimize resistance from the device source to drain terminals (Rdson), you need a way to measure it. Unlike top-level IC routing, metal structures in PowerMOS devices feature all-angle geometry, wide metal, large via arrays and multiple parallel current paths. Extracting this kind of metal structure requires a solver-based extractor that can work with high accuracy on large, complex geometry.

To help PowerMOS and converter circuit designers better understand how to accomplish this, Magwel is offering a free webinar on their Power Transistor Modeler (PTM®), which is being used on some of the largest PowerMOS devices in production today to predict Rdson and power per layer, and to flag electromigration (EM) violations. The webinar will discuss the types of devices that PTM can be used with. It will also go over the setup and talk about performance and accuracy.

Magwel is a leader in developing tools for analyzing PowerMOS devices using solver-based technology. The PTM family also has tools for analyzing electrothermal properties of PowerMOS devices during their operation, taking into consideration the thermal properties of the die, package and board, and for co-simulation of PowerMOS transient switching behavior in circuit level SPICE simulations.

Sign up for this interesting look into how many leading Power Converter design companies ensure that their PowerMOS devices are optimized for power, performance and reliability. Information and registration for this webinar can also be found at the Magwel website.

About Magwel
Magwel® offers 3D field solver and simulation based analysis and design solutions for digital, analog/mixed-signal, power management, automotive, and RF semiconductors. Magwel® software products address power device design with Rdson extraction and electro-migration analysis, ESD protection network simulation/analysis, latch-up analysis and power distribution network integrity with EMIR and thermal analysis. Leading semiconductor vendors use Magwel’s tools to improve productivity, avoid redesign, respins and field failures. Magwel is privately held and is headquartered in Leuven, Belgium. Further information on Magwel can be found at www.magwel.com


New Generation of FPGA Based Distributed Accelerator Cards Offer High Performance and Adaptability
by Tom Simon on 12-05-2019 at 10:00 am

Achronix FPGA used on BittWare Accelerator Card

We have learned from nature that two characteristics are helpful for success: diversity and adaptability. The same has been shown to be true for computing systems. Things have come a long way from when CPU-centric computing was the only choice. Much of the heavy lifting these days is done by GPUs, ASICs, and FPGAs, with CPUs in a support and coordination role. This is happening in applications such as networking, big data, machine learning and elsewhere. Naturally, edge and cloud data center operators now have numerous choices about which hardware to use to fill their racks. When buying hardware they must anticipate the kinds of workloads that will be handled and even where they will be run. A wrong choice can mean wasted money and resources. What’s needed is processing that is adaptable, can scale, and can meet diverse and changing workloads.

An emerging trend to address these new workloads is use of distributed accelerator cards. In many cases they fit the power dissipation requirements for their target data centers. They offer scalability to meet rapidly growing demand. They also can incorporate high bandwidth connectivity to ensure that throughput is not limited. Accelerator cards can be fitted with a wide variety of computing engines. However, FPGAs seem to have many desirable characteristics, making them more appealing.

FPGAs are extremely adaptable because they can be reprogrammed as workloads change. They offer extremely high parallelism, which is often necessary for the tasks they are applied to. So, it might seem that the problem is solved – that’s all there is to it. However, the specific architecture of the FPGA and the details of the accelerator card it is placed on make a big difference.

In a recent white paper, Achronix makes the case that there are several aspects of the FPGA and accelerator card architecture that determine how well an accelerator card can perform in demanding applications. They point to the features in the VectorPath S7t-VG6 accelerator card recently released by Achronix and BittWare that uses the Achronix Speedster 7t FPGA. BittWare, a Molex company, has a long history of producing FPGA based accelerator cards. This particular card comes well equipped with 8GB of GDDR6 memory that can operate at 4 Tbps. It also has 4 GB of DDR4. It offers 400GbE and 200GbE Ethernet, as well as PCIe Gen3 x16 that can be upgraded to Gen4 and Gen5.
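To put the card’s quoted memory bandwidth in more familiar units, here is a small back-of-envelope conversion; the 50 MB weight-set size in the second half is purely a hypothetical example, not a BittWare or Achronix figure.

```python
# Converting the quoted GDDR6 bandwidth into bytes/s, plus a purely hypothetical
# sizing example (the 50 MB weight set is an assumption, not vendor data).

GDDR6_TBPS = 4.0                          # aggregate bandwidth quoted for the card
bytes_per_sec = GDDR6_TBPS * 1e12 / 8     # Tbit/s -> bytes/s
print(f"GDDR6 bandwidth: ~{bytes_per_sec / 1e9:.0f} GB/s")   # ~500 GB/s

HYPOTHETICAL_WEIGHTS_MB = 50              # e.g., a mid-sized int8 vision model
streams_per_sec = bytes_per_sec / (HYPOTHETICAL_WEIGHTS_MB * 1e6)
print(f"A {HYPOTHETICAL_WEIGHTS_MB} MB weight set could, ideally, be re-streamed "
      f"~{streams_per_sec:,.0f} times per second")
```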

However, there are some really interesting features in the Speedster 7t that give this accelerator card a significant edge. It contains machine learning processors (MLP) that are optimized for machine learning applications. The MLPs can perform up to 32 multiply/accumulate operations per cycle. Another interesting addition is a 2D Network on Chip (NoC) that supports data movement at 2GHz between the external IO interfaces, FPGA fabric, external GDDR6 memory interfaces and MLPs. A big advantage of this is elimination of the need to use precious FPGA gates to manage data flow to and from high speed interfaces. The NoC handles this, freeing up more of the FPGA core for application related uses. There are also direct clock and GPIO interfaces, as well as OCuLink, to provide the ability to combine accelerator cards or interface with legacy equipment.

For customers who want a turnkey server solution, BittWare even has ready-to-go servers with up to 16 VectorPath PCIe cards on Dell or HPE servers with development software to allow customers to rapidly deploy this new technology. The white paper also hints strongly at a forthcoming IP offering of the Speedster 7t FPGA fabric, which would allow customers to build their own ASIC based accelerators.

The Achronix white paper makes interesting reading. It includes a summary of the accelerator board market and its future growth potential. It also dives into the specifics of the BittWare offering and the details of the Achronix Speedster 7t FPGA. I suggest going to the Achronix website to download this interesting document.