
DVCon Is a Must-Attend Event for Design and Verification Engineers

by Daniel Payne on 02-03-2020 at 10:00 am


Learning is a never-ending process for design and verification engineers, so beyond reading SemiWiki you likely want to attend at least a few events per year to stay current, learn something new, attend a workshop, or even present something that made your IC project work much better than before. Sure, DAC is always a great event in July, but did you know that more than 1,000 engineers are expected to attend DVCon, March 2-5 in San Jose?

I’ve attended DVCon in past years and can tell you that it’s well organized. Now in its 32nd year, it offers quite a wide range of activities:

  • Tutorials
  • Luncheons
  • Workshops
  • Receptions
  • Sessions
  • Keynotes
  • Panel Discussions
  • Exhibitors

The General Chair this year is Aparna Dey and she wrote a concise welcome blog on the DVCon site, while her day job is at Cadence working on standards. Speaking of standards, the DVCon event is sponsored by Accellera, the group that promotes so many EDA and semiconductor IP standards activities.

Topics on Monday this year include:

AI is all the buzz in our tech world, so the Tuesday keynote is titled AI for EDA, presented by Dr. Anirudh Devgan, president of Cadence with past stints at Magma and IBM. He will discuss deep learning applied to EDA tools.

Panel sessions should get lively on Wednesday because the first one includes RISC-V and the second one wants to fix what’s broken today, plus you can ask questions to either stump the panelists or get clarification:

You’ll be exhausted trying to attend everything, because there are 42 papers, four tutorials, 23 poster sessions, 10 workshops, and the exhibitors. I’d love to hear your feedback about DVCon this year, so send us your trip reports to share with other engineers, or better yet, post your trip report in our Forum.

About DVCon
DVCon is the premier conference for discussion of the functional design and verification of electronic systems. DVCon is sponsored by Accellera Systems Initiative, an independent, not-for-profit organization dedicated to creating design and verification standards required by systems, semiconductor, intellectual property (IP) and electronic design automation (EDA) companies. In response to global interest, in addition to DVCon U.S., Accellera also sponsors events in China, Europe and India. For more information about Accellera, please visit www.accellera.org. For more information about DVCon U.S., please visit www.dvcon.org.

Follow DVCon on Facebook at https://www.facebook.com/DVCon or @dvcon_us on Twitter; to comment, please use #dvcon_us.


Logic and Memory Make for a Recovery

by Robert Maire on 02-03-2020 at 6:00 am

  • LAM- “Logic And Memory” make for a recovery-NAND (Samsung) & Logic (TSMC) + China
  • Great Q4 Results & Q1 guide as memory restarts
  • Logic strength continues-China is crucial to growth
  • 2019 better than expected- 2020 WFE up about 5-8%

Lam reports nice finish to 2019 and start of 2020
The company reported revenue of $2.58B and EPS of $4.01, with guidance of $2.8B ± $200M in revenue and EPS of $4.55 ± $0.40.

This was at the higher end of guidance and above expectations for Q1.

We would note that Lam has a consistent history of beating guidance, so the beat was in line with prior beats, but the forward guidance was better by a wider margin than expected.

Memory uptick was well expected
We have been saying for some time now that NAND memory spending had already picked up even though the market remains oversupplied.

We are a bit surprised that the street reacted with such surprise to the upside in memory, as it has been known and expected for a while now.

Samsung spending to get ahead in technology not capacity
In our view, the prior memory spending cycle was heavily weighted toward pure capacity adds, whereas the current increase in spend appears to be more technology-directed and more specific to Samsung.

In prior up cycles we have seen Samsung spend money to try to gain a technology/cost advantage over competitors, which in this case would mean pushing to 128-layer NAND and perhaps beyond while competitors are stuck at fewer layers and higher per-bit costs.

We think excess capacity and some idled machines still exist, but that a technology advantage is worth spending for. In prior cycles, Samsung was still able to turn a profit in memory while others were losing money, due to the large cost gap from technology differences. In this most recent cycle, Samsung was more negatively impacted because it did not have the same wide cost differential with its competitors.

In short, we think Samsung is stepping on the accelerator to put distance between itself and the competition’s technology.

Logic remains large
TSMC remains a big spender in foundry and Intel is spending money as well (although not as much). As the “poster child” of memory spending, Lam was doing OK with foundry/logic, but real growth comes back only if memory comes back.

The upside should have been well expected
We pointed out that when Ichor (a Lam sub-supplier) pre-announced a strong quarter a while ago, it was a very clear, unmistakable signal that both Lam and AMAT (its two biggest customers) were obviously going to have a good quarter. We suggested buying into Lam and AMAT on the Ichor news; if you didn’t, you were asleep and not reading.

China likely discounted out
China has become a big part of Lam’s business and one of its biggest geographic regions, with a good mix.  The company sounds like it has “pre-discounted” a coronavirus discount into its guidance and planning to offset any expected problems.  This is obviously both prudent and conservative planning.  Given the company’s history of under-promising and over-delivering, it has likely discounted a worst case.

The stocks
The stock had a huge run-up in the aftermarket, which seems to indicate that many investors had not been paying attention to the recent improvement in memory or had been discounting it.  Memory spend is very volatile on the upside as well as the downside, and spending can be turned on very quickly without much time needed for a ramp.  Memory makers certainly know the tool sets they need and have been working on the process flow nonstop throughout the cycle, so they can quickly turn on when they want.

We would expect other semiconductor equipment companies and their suppliers to have equally positive reports and similar commentary on China. DRAM spend remains questionable in our view, as does the length of foundry spend, but right now the NAND memory recovery will likely support the stocks for the near term.


The Tech Week that was January 27-31 2020

by Mark Dyson on 02-02-2020 at 6:00 am


This week the WHO escalated the coronavirus to Global Health Emergency status, China extended the Chinese New Year shutdown to 9th February, and many companies have implemented bans on business travel to/from China or Asia as well as other business continuity procedures. The last such crisis, SARS in 2003, was estimated to have cost the global economy US$40 billion and shut down the Chinese semiconductor industry for months. The number of people infected with the coronavirus already exceeds that of SARS, though luckily the fatality rate so far seems to be much lower. Many companies are already seeing the first impact of this crisis and are desperately trying to get a full understanding of the total supply chain impact, which is difficult to gauge accurately at present as many people in China have not returned to work. This is not good for an industry that was just starting to recover from a challenging year.

As we start the new year, Semiconductor Packaging News has been running a series of articles from various industry leaders with their forecasts for 2020. Lena Nicolaides from KLA expects semiconductor packaging growth to be very strong in 2020 with adoption of advanced packaging solutions. Jim Faine from Marvin Test Solutions expects 5G and autonomous cars to be the main growth areas. Ram Trichur from Henkel Corporation expects 2020 to be a growth year, with solid gains across semiconductor packaging applications driven by 5G telecom and mobile electronics, and by some specific growth areas within the automotive/industrial and datacenter/memory sectors.  David Butler from SPTS Technologies Inc is very optimistic about the prospects for advanced package technology in 2020, driven by the roll-out of 5G. David Wang from ACM Research sees opportunity from the trade war in being able to do business in both China and the US through its US headquarters and Shanghai-based wholly owned subsidiary, at a time when many US equipment suppliers are concerned about doing business in China.

The latest economic data shows that Taiwan was one of the main economic winners last year, reporting GDP growth of 2.73% for 2019 as a whole and fourth-quarter GDP growth of 3.38%. Taiwan is benefitting particularly from the trade war as it sees increased orders from both China and the US.

SEMI has released its North American semiconductor equipment sales report, showing that billings were 17.5% higher in December than in November, with total billings of US$2.49 billion.

Apple reported a solid fiscal Q1 2020 due to increased iPhone sales, with all-time record revenue of US$91.8 billion. It is forecasting revenue for Q2 of US$62.4 billion, as it believes its phones and other devices such as AirPods wireless headphones will continue to sell well during what is often a slow time of year.

AMD reported revenue of US$2.13 billion in Q4, up 50% yoy and up 18% sequentially. AMD is projecting revenue of US$1.8 billion in Q1, up 42% yoy but down 15% sequentially. For 2019 as a whole AMD reported record annual revenue of US$6.73 billion, up 4% yoy. For 2020 as a whole it is expecting revenue growth of about 28% to 30% year-over-year.

Xilinx reported a decrease in revenue in its fiscal Q3, which ended in December: total revenue was US$723.5 million, down 10% yoy and down 13% sequentially, as it suffered from the US-China trade war, particularly the trade restrictions on dealing with Huawei, as well as slower-than-expected deployment of 5G technology. As a result it has announced plans to cut about 7% of its worldwide workforce. For the coming fiscal Q4, it is forecasting revenue with a midpoint of US$765 million.

Cree announced its fiscal Q2 earnings, reporting revenue of US$240 million, a 14% yoy decrease and a 1% sequential decline due to lower LED segment revenue and weakness in power and RF device sales. It also announced that its recent application for a licence to ship to Huawei was turned down. Despite the short-term headwinds, it sees growing momentum for its silicon carbide technology. Cree is forecasting revenue for the current fiscal Q3 with a midpoint of US$225 million.

Finally, a report by the US Department of Energy predicts that the adoption of LED lamps in the US general lighting market will produce energy savings of more than 569 terawatt-hours annually by 2035, equal to the annual power output of more than 92 power plants of 1,000 megawatts each.
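As a quick sanity check on those figures (my own back-of-envelope arithmetic, not taken from the DOE report), the comparison implies those plants run at roughly a 70% capacity factor:

```python
# Back-of-envelope check: 569 TWh/yr of savings vs. 92 plants of 1,000 MW each.
# The capacity-factor figure is my inference, not a number from the DOE report.
HOURS_PER_YEAR = 8760
plants = 92
capacity_mw = 1000

# Annual output if every plant ran flat-out all year, converted MWh -> TWh
max_output_twh = plants * capacity_mw * HOURS_PER_YEAR / 1e6

savings_twh = 569
implied_capacity_factor = savings_twh / max_output_twh

print(f"Output at 100% capacity factor: {max_output_twh:.0f} TWh/yr")  # ~806 TWh
print(f"Implied capacity factor: {implied_capacity_factor:.0%}")       # ~71%
```

A ~70% capacity factor is plausible for baseload generation, so the DOE comparison hangs together.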


Privacy is Different in Cars

by Roger C. Lanctot on 01-31-2020 at 6:00 am


The New York Times’ “The Privacy Project” highlights all that is terrifying about our surveillance economy. We blithely throw away our privacy for the privilege of freely accessing mountains of information about the things we want to buy, the celebrities and teams we follow or support, or to get directions home.

Thousands of applications are tracking our movements via our smartphones – a reality we are more or less comfortable with since we can control that access using our privacy settings. Still, we hear about apps that continue to track and gather data long after we would have expected them to stop – and we have no control or visibility into how the information is being used.

The most telling trope in the latest installment of “The Privacy Project” – which appeared online a few weeks ago but just arrived in the physical paper last Sunday – demonstrates how the President of the United States could be tracked using data from the smartphones carried by his Secret Service detail. It’s a chilling illustration magnified by examples of massive data extractions regarding the movement of CIA personnel into and out of their offices in Langley, Va., along with satellite imagery illustrating similar movements for White House and Pentagon employees.

There are limits to what can be illustrated in a newspaper article, but the point is to demonstrate the ability to look at this data in the aggregate – more or less heat maps of masses of people – and individually – tracking a senior diplomat, military general or security figure all the way back to his or her home, for example. It’s enough to make you want to put your smartphone in the freezer with your car keys – or maybe wrap it in lead.

Car companies have been struggling to come to grips with the unique demands of privacy in the context of the operation of a motor vehicle. Every year, one or more car company CEOs step forward and assert their commitment to protecting the privacy of their customers. GM executives are fond of saying: “The customer owns the data.” The only problem is that the typical GM OnStar customer can’t get access to the data that GM is collecting – which renders “ownership” meaningless. The privacy game is played differently in the automotive industry and the stakes are higher.

Tesla Motors has set the terms of engagement for owning a Tesla. Owners are virtually obliged to share their vehicle information and, with that comes some level of privacy violation. Like the surveillance economy built by Google and Facebook on the foundation of freely shared information exchanged for economic value, Tesla offers a vehicle enhancement value proposition founded on software updates – which requires an always available wireless connection.

I moderated the keynote panel discussion at the Consumer Telematics Show preceding CES2020 in Las Vegas, where a senior executive from Karma, maker of a connected EV that competes with Tesla, noted that customers must agree to share vehicle data to take delivery of their Karma. No sharing, no vehicle.

The requirement sounds onerous for two reasons. First, the average car buyer sees their vehicle as a refuge and a source of freedom. A vehicle connection and a data-sharing proposition suggest intrusion and loss of control.

The requirement is also worrisome because cars have yet to implement smartphone-like consumer controls for privacy and data sharing. A consumer driving a connected car cannot easily take him or herself “off the grid” – without driving beyond cellular coverage.

More importantly, car companies are increasingly being told that they must take steps to ensure drivers are paying attention. New requirements emanating from the European New Car Assessment Program (NCAP) call for a driver monitoring system capable of measuring percent closure (“perclos”) of eyes. In other words, within a few years drivers will begin to see cameras introduced in vehicle cockpits to ensure they are paying attention to the driving task.

Once driver monitoring systems are in place, though, driver identification and credentialling will follow rapidly – especially given the rapid integration of on-board e-commerce systems and personalized digital assistants. For me, it’s all okay and it all makes sense as long as the guiding principle is safety and collision avoidance.

Collision avoidance is a clear value proposition. I also want the peace of mind that my car maker can find me if it needs to notify me of a flawed or failing system in my car. Year after year car makers in the U.S. and elsewhere around the world have struggled to locate all of the cars equipped with potentially deadly Takata airbags that need to be replaced. Please, please violate my privacy to get me this urgent notification.

If on-board systems in my car violate my privacy, but do so in the interest of preserving my life, I am good with that. Of course, this is a step above and beyond the software update value proposition promised by Tesla and Karma.

Driving a car is a life and death proposition. To the extent that privacy violations are tied to safety, the automotive industry should represent something of an exception or require unique regulatory accommodations.

The implications are that companies working in the automotive space are entitled to some sort of special status – and I’d include in this equation mapping companies HERE, TomTom, and the likes of Mobileye, Google, Continental, Bosch, Harman, and others. At the same time, companies such as Apple, Mapbox, Waze, Facebook and others building their businesses off of crowdsourced smartphone data ought to merit extra scrutiny and, perhaps, more stringent regulation.

The New York Times’ “The Privacy Project” reveals the ways in which crowdsourced smartphone information can be used to manipulate and oppress entire populations or even individuals. Smartphone privacy violations are not occurring in the context of a life-saving value proposition. The value proposition is purely commercial and the individual user is the economic unit.

Auto makers will be increasingly violating the privacy of their consumers. It probably is time to give car owners the ability to manage and control their data sharing in a smartphone-like manner directly from the dashboard. And, soon, it will probably make sense to compensate drivers for sharing their information. But the focus for auto makers, first and foremost, ought to be safety – with an emphasis on the safe operation of the vehicle.


Bringing Hierarchy to DFT

by Tom Simon on 01-30-2020 at 6:00 am


Hierarchy is nearly universally used in the SoC design process to help manage complexity. Dealing with flat logical or physical designs proved unworkable decades ago. However, there were a few places in the flow where flat tools continued to be used. Mentor led the pack in the years around 1999 in helping the industry move from flat DRC to its hierarchical Calibre DRC flow. Similarly, Mentor is now on the leading edge of the move to hierarchical Design for Test (DFT), a part of the flow that has for many years resisted switching from a predominantly flat approach.

Mentor has a white paper that does an excellent job of highlighting the numerous advantages of taking a hierarchical approach for DFT. The white paper, titled “Hierarchical DFT: Proven Divide-and-conquer Solution Accelerates DFT Implementation and Reduces Test Costs”, also explains how the flow works and how many of the benefits are achieved. The author, Jay Jahangiri, specifically dives into the key features of Mentor’s Tessent Hierarchical DFT solution that are used in a hierarchical flow.

Looking at the advantages of hierarchical DFT, you could guess the top handful of motivations for using it. These include shortened DFT implementation and ATPG runtimes. Another would be eliminating the large memory footprint for loading designs for implementation or analysis. Often these operations take hundreds of gigabytes of memory, severely limiting the number of available machines that can be used for these jobs.

Some of the other reasons for switching to a hierarchical approach are pretty compelling too. For instance, running flat test patterns can consume a lot of power and create hot spots. Hierarchical approaches can reduce power, and help manage and avoid hot spots more easily.  It is also worth reflecting on how DFT is often in the critical path for tape out. The reduction in time required for hierarchical DFT and the reduced turnaround time for changes it provides can make a crucial difference in the time needed overall for DFT, especially at the end of the process. The Mentor white paper is quite thorough in enumerating the reasons for switching to a hierarchical approach.

Of course, if working hierarchically was easy this approach would have been used to solve the problem from the outset. It turns out that a number of key elements are needed to make it work effectively. In their white paper, Mentor provides an easy to follow summary of these elements.

Because clocking plays such an important role in DFT, Mentor makes it easy to insert on-chip clock controllers (OCCs) into blocks so that each block can run its test patterns independently of other blocks’ test clock needs. Removing the interdependencies caused by top level clocking leads to a dramatic improvement in efficiency.

Tessent’s Scan and ScanPro products not only help with adding wrapper cells to create wrapper chains, they also allow analysis and utilization of existing registers for use during test as shared wrapper cells. The paper also discusses several aspects of handling internal mode and external mode, to fully cover not just the blocks but the glue logic between them.

The real enabler is how smoothly the pieces of the flow work together. There are many aspects involved in not only making the blocks themselves testable, but also in integrating each block’s test elements into the top-level design. Tessent uses enabling technology such as IEEE 1687, also known as IJTAG. Tessent also lets designers perform much of the test design at RTL, saving time and reducing complexity.

With performance gains of up to 5 or 10X in some steps of the process, the hierarchical approach is proving to be an effective way to deal with test complexity. It also brings interesting ancillary advantages, such as improved yield analysis and overall chip quality. The white paper goes into much more detail on exactly where the gains are and how the process improves productivity and quality. The paper can be downloaded from the Mentor website.


WEBINAR: Prototyping With Intel’s New 80M Gate FPGA

by Daniel Nenni on 01-29-2020 at 10:00 am

The next generation of FPGAs has been announced, and they are BIG!  Intel is shipping its Stratix 10 GX 10M FPGA, and Xilinx has announced its VU19P FPGA for general availability in the Fall of next year.  The former is expected to support about 80M ASIC gates, and the latter about 50M ASIC gates.  And, to bring this mind-boggling gate capacity into your prototyping lab immediately, S2C has teamed up with Intel and rolled out its new 10M Logic System prototyping platform, and will be delivering first systems before the end of this year.

On-Demand WEBINAR: Prototyping with Intel’s New 80M Gate FPGA and S2C!

You might ask: “How will this affect FPGA prototyping?”.  Well, there are at least three ways you should expect to benefit from these larger FPGAs:

  1. More usable ASIC gates from a single FPGA.
  2. Higher prototype performance.
  3. Faster time-to-prototype.

More Usable ASIC Gates
For those of you gagging for more usable gates from a single FPGA, the 10M Logic System is an FPGA prototyping solution you can deploy today.  One example of an application that will need increasingly more ASIC gates over the coming years is video processing.  SoCs incorporating video blocks already exceed the gate capacity of the previous generation FPGAs, and designers are scrambling to keep up with today’s video features, with no end in sight of the addition of incremental new features.  S2C’s 10M Logic System brings more than twice the usable gate capacity per FPGA to video applications today, and supports continued gate capacity growth with Dual and Quad FPGA versions available early next year.

Higher Prototype Performance
Another inherent benefit from these new, larger FPGAs is performance.  The S10 GX 10M die core performance is rated at 900MHz, with LVDS I/O and single-ended I/O rated at 1.4GHz and 250MHz respectively. Actual prototyping performance will vary by application, but, with all other things being equal for a comparison, prototype performance will be higher with these new 14nm FPGAs.

Organizing the design hierarchy for prototyping to contain the high-performance block/signals within one FPGA will certainly lead to higher prototype performance.  With S2C’s new 10M Logic Systems, design blocks up to about 80M gates can be contained within a single FPGA, and, for larger designs, Dual and Quad FPGA Logic Systems will include high-speed interconnects between the FPGAs.

The S10 10M Logic System supports six on-board programmable clocks (up to 350MHz), five external clocks, and an oscillator socket.  Two dedicated programmable clocks are also provided for the on-board DDR4 memories, as well as two global resets that can be sourced from an on-board push-button, an externally sourced reset through a connector, or under PlayerPro run-time software control.

Faster Time-to-Prototype
One of the keys to successful prototyping is minimizing “time-to-prototype”, and fast time-to-prototype must consider:

  1. Getting your design running at your target speed on the FPGA prototype platform.
  2. A debug environment that enables high design visibility and deep test response data capture.
  3. A method for applying large quantities of high-speed test stimulus from a host computer or external source.

Getting your design running at your target speed in the FPGA prototype platform should include thoughtful preparation of the design netlist for FPGA implementation, while preserving correlation with the simulation netlist as much as possible.  It should not come as a surprise to anyone that having a prototype team member with previous prototyping experience will go a long way to minimize time-to-prototype, especially when it comes to the FPGA implementation of design clocks and gated clocks, embedded memory, and SoC IP.

The simulation netlist is the verification “gold standard” that will be used for silicon tapeout, and FPGA prototyping should be viewed as a way to improve verification coverage beyond the capabilities of software simulation.  Therefore, maintaining correlation between the two netlists throughout the verification process is essential to overall verification productivity.  If something goes wrong while verifying the design in the FPGA prototype, quickly diagnosing the cause of the problem in terms of the simulation netlist is what makes FPGA prototyping such a powerful verification tool.  And the importance to prototyping productivity of establishing and enforcing a strict discipline of bug tracking and netlist revisioning cannot be overemphasized for keeping the simulation team synchronized with the prototyping team.

One approach to FPGA prototype debug with S2C’s 10M Logic System is to use S2C’s Multi-Debug-Module, or “MDM”.  Set-up and runtime controls for MDM are integrated into S2C’s PlayerPro software, designed to work with the 10M Logic System hardware, allowing test data from multiple FPGAs to be viewed within a single viewing window.  MDM provides for up to 32K probes in eight groups of 4K probes without recompile.  Trace data can be captured at speeds up to 80MHz, and up to 8GB of waveform data can be stored in MDM’s external hardware.

S2C Multi-Debug-Module


To assist in reducing time-to-prototype, S2C offers ProtoBridge for use with the 10M Logic System.  ProtoBridge uses a PCIe/AXI high-throughput link between the prototype hardware and a host computer to transfer large amounts of transaction-level test data to the design.  The test data width can be from 32 bits to 1,024 bits at data rates up to 1GB per second.
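To put those numbers in perspective, here is a quick back-of-envelope calculation (my own arithmetic, not an S2C specification) of how the transaction word rate scales with the configured data width at the quoted 1 GB/s peak:

```python
# How many transaction words per second fit through a 1 GB/s link at the
# minimum and maximum ProtoBridge data widths? (Illustrative arithmetic only;
# real sustained throughput depends on the PCIe/AXI configuration.)
LINK_BYTES_PER_SEC = 1_000_000_000  # 1 GB/s, decimal

for width_bits in (32, 1024):
    word_bytes = width_bits // 8
    words_per_sec = LINK_BYTES_PER_SEC / word_bytes
    print(f"{width_bits:>4}-bit words: {words_per_sec / 1e6:.1f} M words/s")
```

So a narrow 32-bit configuration moves roughly 250 M words/s, while full 1,024-bit words move at about 7.8 M words/s; either way the byte throughput is the same, and the width choice is about matching the AXI transaction size of the design under test.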

S2C ProtoBridge

On-Demand WEBINAR: Prototyping with Intel’s New 80M Gate FPGA and S2C!

Also Read:

S2C Delivers FPGA Prototyping Solutions with the Industry’s Highest Capacity FPGA from Intel!

AI Chip Prototyping Plan

WEBINAR: How ASIC/SoC Rapid Prototyping Solutions Can Help You!


How Good is Your Testbench?

by Bernard Murphy on 01-29-2020 at 6:00 am


I’ve always been intrigued by Synopsys’ Certitude technology. It’s a novel approach to the eternal problem of how to get better coverage in verification. For a design of any reasonable complexity, the state-space you would have to cover to exhaustively consider all possible behaviors is vastly larger than you could ever possibly exercise. We use code coverage, functional coverage, assertion coverage together with constrained random generation to sample some degree of coverage, but it’s no more than a sample, leaving opportunity for real bugs that you simply never exercise to escape detection.

There’s lots of research on methods to increase confidence in coverage of the design. Certitude takes a complementary approach, scoring the effectiveness of the testbench in finding bugs. It injects errors (one at a time) into the design, then determines if any test fails under that modification. If so, all is good; the testbench gets a high score for that bug. Example errors change the code to hold a variable constant, or force execution on only one branch of a condition or change an operator.

But if no test fails on the modified design, the testbench gets a low score for that bug. This could be due to problems in activation; where no stimulus generated reached the bug. Or it could be a propagation problem; the bug was exercised but its consequences never reached a checker. Or it could be a detection bug; the consequences reached a checker, but it was inactive or incomplete and didn’t recognize that behavior as a bug.
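Certitude’s inject-and-score loop is in the spirit of classic mutation testing. A minimal Python sketch of the idea (the design, mutants, and tests here are invented for illustration; Certitude’s real mutations operate on RTL or software models, not Python functions):

```python
# Toy illustration of scoring a testbench by fault injection.
# All names and mutants are hypothetical examples, not Certitude internals.

def design(a, b, mode):
    """'Golden' design under test: add or subtract based on mode."""
    return a + b if mode else a - b

# Each mutant mimics an injected error of the kinds described above.
mutants = {
    "hold_mode_constant": lambda a, b, mode: a + b,                   # force one branch
    "swap_operator":      lambda a, b, mode: a * b if mode else a - b # change an operator
}

# A (deliberately weak) testbench: (stimulus, checker) pairs.
tests = [
    ((2, 3, True), lambda out: out == 5),   # only exercises the add path
    ((2, 3, True), lambda out: out >= 0),   # weak checker
]

def detected(testbench, mutant):
    """A mutant is detected if at least one test fails on the mutated design."""
    return any(not check(mutant(*args)) for args, check in testbench)

for name, mutant in mutants.items():
    print(name, "detected" if detected(tests, mutant) else "MISSED")
```

Running this reports `swap_operator` as detected but `hold_mode_constant` as MISSED: in Certitude’s terms that miss is an activation problem, since no test ever drives `mode=False`, so forcing that branch is invisible to this testbench.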

Certitude works with both RTL and software models, and for RTL works with both simulation and formal verification tools. Here I’ll talk about RTL analysis since that’s what was mostly covered in a recent webinar, presented by Ankur Jain (Product Marketing Mgr) and Ankit Garg (Sr AE).

What sort of problems does Certitude typically find in the field? Some of these will sound familiar. Detection problems through missing or incomplete checkers/assertions, and missing or incomplete test cases, for example a disabling control signal in simulation or an over-constraint in model checking. These are problems that could be caught in simulation or formal coverage analysis (e.g. formal core checks). That they were nevertheless caught by Certitude suggests that in practice those checks are not always fully exploited. At minimum, Certitude provides an additional safety net.

What I found really compelling was the class they say they most commonly encounter among their customers. They call these process problems. Imagine you build the first in a series of designs where later designs will be derived from the first. You’re planning to add support for a number of features but those won’t be implemented in the first chip. But you’re thinking ahead; you want to get ready for the derivatives, so you add placeholder checkers for these planned features. These checkers must be partly or wholly disabled for the first design.

This first design is ultimately successfully verified and goes into production.

Now you start work on the first derivative. Verification staff have shuffled around, as they do. The next verification engineer takes the previous testbench and works on upgrading it to handle whatever is different in this design. They run the testbench, 900 tests fail and 100 tests pass. They set to work on diagnosing the failures and feeding back to the design team for fixes. What they don’t do is to look at the passing test cases. Why bother with those? They’re passing!

But some are passing because they inherited checks from the first design, which were partially or completely disabled. Those conditions may not be valid in this derivative. You could go back and recheck all your coverage metrics on the passing testcases, potentially a lot of work. Or you could run Certitude, which would find exactly these kinds of problems.
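To make the failure mode concrete, here is a hypothetical sketch (not Certitude's API, and all names invented) of the process problem just described: a placeholder checker inherited from the first design is gated by a feature flag that was legitimately off for that chip, but nobody re-enabled it for the derivative where the feature is now real:

```python
# Hypothetical sketch of the "process problem": a placeholder checker from the
# first design is gated by a feature flag that was correctly off for chip #1,
# but was never re-enabled for the derivative. All names are invented.

FEATURE_X_ENABLED = False   # leftover setting inherited from the first design

def check_feature_x(observed, expected):
    """Placeholder checker: silently passes while the feature flag is off."""
    if not FEATURE_X_ENABLED:
        return True          # checker is inert -- every test looks green
    return observed == expected

# The test "passes" even though observed != expected: the checker did no work,
# which is exactly the kind of weakness fault injection would expose.
assert check_feature_x(observed=0xBAD, expected=0x600D)
```

Coverage metrics on passing tests could reveal this, but a fault-injection run flags it directly: no injected bug in feature X's logic would ever cause a failure.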

In the Q&A, the speakers were asked what real design bugs Certitude has found. The question is a little confused because the objective of Certitude is to check the robustness of the testbench, not to find bugs. But I get the intent behind the question – did that work ultimately lead to finding real bugs? Ankit said that, as an example, for one of their big customers it did exactly that. They found two testbench weaknesses for a derivative, and when those were fixed, verification found two real design bugs.

You can watch the webinar by registering HERE.


Advanced CMOS Technology 2020 (The 10/7/5 NM Nodes)

by Daniel Nenni on 01-28-2020 at 10:00 am

Our friends at Threshold Systems have a new class that may be of interest to you. It’s an updated version of the Advanced CMOS Technology class held last May. As part of the previous class we did a five part series on The Evolution of the Extension Implant which you can see on the Threshold Systems SemiWiki landing page HERE. And here is the updated course description:

Date: Feb. 5, 6, 7, 2020
Location: SEMI Headquarters, 673 South Milpitas Blvd.,
Milpitas, California, 95035, USA
Class Schedule:
Wednesday: 8:30 AM – 5:00 PM
Thursday: 9:00 AM – 5:00 PM
Friday: 9:00 AM – 5:00 PM
Tuition: $1,895

Course Description:
The relentless drive in the semiconductor industry for smaller, faster and cheaper integrated circuits has driven the industry to the 10 nm node and ushered in a new era of high-performance three-dimensional transistor structures. The speed, computational power, and enhanced functionality of ICs based on this advanced technology promise to transform both our work and leisure environments. However, implementing this technology has opened a Pandora’s box of manufacturing issues and set the stage for a range of manufacturing challenges that require revolutionary new process methodologies, as well as innovative new equipment, for the 10/7/5 nm nodes and the upcoming 3 nm node. This seminar addresses all of these manufacturing issues with technical depth and conceptual clarity, presents leading-edge process solutions to the new and novel problems posed by 10 nm and 7 nm FinFET technology, and previews the upcoming manufacturing issues of the 5 nm Nanowire.

The central theme of this seminar is an in-depth presentation of the key 10/7/5 nm node technical issues for Logic and Memory, including detailed process flows for these technologies.

A key part of the course is a visual survey of leading-edge devices in Logic and Memory presented by the Fellow Emeritus of the world’s leading reverse engineering firm, TechInsights. His lecture is a visual feast of TEMs and SEMs of all of the latest and greatest devices being manufactured and is one of the highlights of the course.

An update on the status of EUV lithography will also be presented by a world-class lithographer who manages an EUV tool. His explanations of how this technology works, and the latest EUV breakthroughs, are as enlightening as they are insightful.

Finally, a detailed technology roadmap for the future of Logic, SOI, Flash Memory and DRAM process integration, as well as 3D packaging and 3D Monolithic fabrication will also be discussed.

Each section of the course will present the relevant technical issues in a clear and comprehensible fashion as well as discuss the proposed range of solutions and equipment requirements necessary to resolve each issue. In addition, the lecture notes are profusely illustrated with extensive 3D illustrations rendered in full-color.

What’s Included:

  • Three days of instruction by industry experts with comprehensive, in-depth knowledge of the subject material
  • A high quality set of full-color lecture notes (a $495 value), including SEM & TEM micrographs of real- world IC structures that illustrate key points
  • Continental breakfast, hot buffet lunch, and coffee, beverages, & snacks served at both morning and afternoon breaks

Who is the seminar intended for:

  • Equipment Suppliers & Metrology Engineers
  • Fabless Design Engineers and Managers
  • Foundry Interface Engineers and Managers
  • Device and Process Engineers
  • Design Engineers
  • Product Engineers
  • Process Development & Process Integration Engineers
  • Process Equipment Marketing Managers
  • Materials Supplier Marketing Managers  & Applications Engineers

Course Topics:

1. Process integration. The 10/7 nm technology nodes represent a landmark in semiconductor manufacturing, employing transistors that are faster and smaller than anything previously fabricated. However, such performance comes at a significant increase in processing complexity and requires the solution of some very fundamental scaling and fabrication issues, as well as the introduction of radical, new approaches to semiconductor manufacturing. This section of the course highlights the key changes introduced at the 10/7 nm nodes and describes the technical issues that had to be resolved in order to make these nodes a reality.

  • The enduring myth of a technology node
  • Market forces: the shift to mobile
  • The Idsat equation
  • The motivations for High-k/Metal gates, strained Silicon
  • Device scaling metrics
  • Ion/Ioff curves, scaling methodology

2. Detailed 10nm Fabrication Sequence. The FinFET represents a radical departure in transistor architecture. It also presents dramatic performance increases as well as novel fabrication issues. The 10nm FinFET is the 3rd generation of non-planar transistor and involves some radical changes in manufacturing methodology. The FinFET’s unusual structure makes its architecture difficult for even experienced processing engineers to understand. This section of the course drills down into the details of the 10nm FinFET structure and its fabrication, highlighting the novel manufacturing issues this new type of transistor presents. A detailed step-by-step 10nm fabrication sequence is presented (front-end and back-end) that employs colorful 3D graphics to clearly and effectively communicate the novel FinFET architecture at each step of the fabrication process. Key manufacturing pitfalls and specialty material requirements, as well as the chemistries used, are pointed out at each phase of the manufacturing process.

  • Self-Aligned Quadruple Patterning (SAQP)
  • Fin-first and Fin-last integration strategies
  • Multiple Vt High-k/Metal Gate integration strategies
  • Cobalt Contacts & Cobalt metallization
  • Contact over Active Gate methodology
  • Advanced Metallization strategies
  • Air-gap dielectrics

3. Nanowire Fabrication – the 5nm Node. Waiting in the wings is the Nanowire. This new and radically different 3D transistor features gate-all-around control of short channel effects and a high level of scalability. A detailed process flow for Horizontal Nanowire fabrication will be presented, beautifully illustrated with colorful 3D graphics and technically accurate.

  • A step-by-step Horizontal Nanowire fabrication process flow
  • Key fabrication details and manufacturing problems
  • Nanowire SCE control and scaling
  • Resolving Nanowire capacitive coupling issues
  • Vertical versus Horizontal Nanowire architecture: advantages and disadvantages

4. DRAM Memory. DRAM memory has evolved through many generations and multiple incarnations. Despite claims that DRAM memory is nearing its scaling limit, new technological developments keep pushing the scaling envelope to extremes. This part of the course examines the evolution of DRAM memory and presents a detailed DRAM process fabrication flow.

  • DRAM memory function and nomenclature
  • DRAM scaling limits
  • A DRAM process flow
  • The capacitor-less DRAM memory cell

5. 3D NAND Flash Memory. The advent of 3D NAND Flash memory is a game changer. 3D NAND Flash not only dramatically increases non-volatile memory capacity, it will also add at least three generations to the life of this memory technology. However, the structure and fabrication of this type of memory is radically different from, even alien to, any traditional semiconductor fabrication methodology. This section of the course presents a step-by-step visual description of the unusual manufacturing methodology used to create 3D Flash memory, focusing on key problem areas and equipment opportunities. The fabrication methodology is presented as a series of short videos that clearly demonstrate the fabrication operations at each step of the process flow.

  • Staircase fabrication methodology
  • The role of ALD in 3D Flash fabrication
  • Controlling CDs in tall, vertical structures
  • Detailed sequential video presentation of Samsung 3D NAND Flash
  • Intel-Micron 3D NAND Flash fabrication sequence
  • Toshiba BICS NAND Flash fabrication sequence

6. Advanced Lithography. Lithography is the “heartbeat” of semiconductor manufacturing and is also the single most expensive operation in any fabrication process. Without further advances in lithography, continued scaling would be difficult, if not impossible. Recently there have been significant breakthroughs in Extreme Ultraviolet (EUV) lithography that promise to radically alter and greatly simplify the way chips are manufactured. This section of the course begins with a concise and technically correct introduction to the subject and then provides in-depth insights into the latest developments in photolithography. Special attention is paid to EUV lithography, its capability, characteristics and the recent developments in this field.

  • Physical Limits of Lithography Tools
  • Immersion Lithography – principles and practice
  • Double, Triple and Quadruple patterning
  • EUV Lithography: status, problems and solutions
  • Resolution Enhancement Technologies
  • Photoresist: chemically amplified resist issues

7. Emerging Memory Technologies. There are at least three novel memory technologies waiting in the wings. Unlike traditional memory technologies that depend on electronic charge to store data, these memory technologies rely on resistance changes. Each type of memory has its own respective advantages and disadvantages and each one has the potential to play an important role in the evolution of electronic memory.

This section of the course will examine each type of memory, discuss how it works, and compare its relative advantages with those of the other new memory types.

  • Phase Change Memory (PCRAM), Cross-point memory; separating the hype from the reality
  • Resistive RAM (ReRAM) – a novel approach that comes in two variations
  • Spin Torque Transfer RAM (STT-RAM) – the brightest prospect?

8. Survey of leading edge devices. This part of the course presents a visual feast of TEMs and SEMs of real-world, leading edge devices for Logic, DRAM and Flash memory. The key architectural characteristics for a wide range of key devices will be presented, and the engineering trade-offs and compromises that resulted in their specific architectures will be discussed. The Fellow Emeritus of the world’s leading chip reverse engineering firm will present this section of the course.

  • How to interpret Scanning and Transmission Electron microscopy images
  • A visual evolution of replacement gate metallization
  • DRAM structural analysis
  • 3D FLASH structural analysis
  • Currently available 14nm/10nm/7nm Logic offerings from various manufacturers

9. 3D Packaging Versus 3D Monolithic Fabrication. Unlike all other forms of advanced packaging that communicate by routing signals off the chip, 3D packaging permits multiple chips to be stacked on top of each other, and to communicate with each other using Thru-Silicon Vias (TSVs), as if they were all one unified microchip. An alternative is the 3D Monolithic approach, in which a second device layer is fabricated on a pre-existing device layer and the two are electrically connected using standard nano-dimensional interconnects. Both approaches have advantages and disadvantages, and both promise to create a revolution in the functionality, performance and design of electronic systems.

This part of the course identifies the underlying technological forces that have driven the development of Monolithic fabrication and 3D packaging, how they are designed and manufactured, and what the key technical hurdles are to the widespread adoption of these revolutionary technologies.

  • TSV technology: design, processing and production
  • Interposers: the shortcut to 3D packaging
  • The 3D Monolithic fabrication process
  • Annealing 3D Monolithic structures
  • The Internet of Things (IoT)

10. The Way forward: a CMOS technology forecast. Ultimately, all good things must come to an end, and the end of FinFET technology appears to be within sight. No discussion of advanced CMOS technology is complete without a peek into the future, and this final section of the course looks ahead to the 5/3.5/2.5 nm CMOS nodes and forecasts the evolution of CMOS device technology for Logic, DRAM and Flash memory.

  • Is Moore’s law finally coming to an end?
  • New nanoscale effects and their impact on CMOS device architecture and materials
  • The transition to 3D devices
  • Future devices: Quantum well devices, Nanowires, Tunnel FETs, Quantum Wires
  • The next ten years …

The Tech Week that was January 20-24 2020

by Mark Dyson on 01-28-2020 at 6:00 am


Happy Chinese New Year.  Let’s hope the Year of the Rat brings a recovery for the semiconductor industry.  The initial signs are all good with many positive indications in the news this week, but let’s hope the Wuhan coronavirus doesn’t derail the recovery by becoming a global emergency.  To all those based in China stay safe and healthy.

Here is my weekly summary of all the key semiconductor and technology news from around the world this week.

In IC Insights’ 2020 edition of The McClean Report, they predict that 26 of the 33 IC product categories will show positive growth in 2020, with 5 products expected to enjoy double-digit growth. This is much more positive than 2019, when only 6 categories had positive growth, but still not as good as 2018. The product categories expecting double-digit growth are NAND, automotive special-purpose ICs, DRAM, display drivers and embedded MPUs.

This week several major companies reported quarterly earnings, with a very optimistic message being given by all.

Texas Instruments’ earnings report pointed to a recovery across the IC industry. They said “most markets showed signs of stabilising” and forecast a Q1 revenue midpoint of US$3.25 billion. Last quarter they posted better than expected earnings of US$3.35 billion, but this was still down 10% on a year ago and down 11% sequentially. Because Texas Instruments has a very broad portfolio across all markets, it is a good indicator of the general market, so it is encouraging that they see the market stabilising after 5 quarters of decline.

STMicro also posted solid results for Q4, reporting revenue of US$2.75 billion, up 7.9% sequentially on strong sales for low emission cars and next generation smartphones, though traditional older generation automotive products were down. The poor sales of established automotive products will also impact next quarter, where they forecast a drop in sales of up to 14%. STM also announced it will invest $1.5 billion in capital expenditure in 2020.

Intel also gave a very upbeat message at their Q4 earnings call. Intel reported that strong cloud computing demand drove revenue in Q4 to US$20.2 billion, up 8% on Q3. For the year they reported revenue of $71.965 billion, up 1.3% on 2018. For the coming year they forecast revenue to be up a further 2% at $73.5 billion, with revenue in Q1 at $19 billion. In addition, Intel plans to spend $17 billion on capex to increase capacity, ensuring they can support customer demand and build inventory.

TSMC also gave a bullish message at their investors conference. C.C. Wei, TSMC’s CEO, said he expected revenue for 2020 to grow by more than 17%, driven by demand for smartphones, high-performance computing devices, Internet of Things-related applications and automotive electronics this year.

In December the Global Purchasing Managers Index was neutral, with a PMI of 50 on average; however, it varies significantly by country, with China, Taiwan and South Korea all showing modest expansion.

Several 3rd party foundry vendors are entering or expanding their efforts in the silicon carbide (SiC) foundry business amid booming demand for the technology, especially from automotive applications. However, the entrance of these newcomers may not be so easy, as traditional IDM companies like Cree and Rohm use proprietary processes to differentiate their products.

Finally, with many different variants of 7nm technology being made available here is a concise summary of the differences between the variants and the benefits.


Specialized Accelerators Needed for Cloud Based ML Training

by Tom Simon on 01-27-2020 at 10:00 am

AI Domain Specific Processor

The use of machine learning (ML) to solve complex problems that could not previously be addressed by traditional computing is expanding at an accelerating rate. Even with advances in neural network design, ML’s efficiency and accuracy are highly dependent on the training process. The methods used for training evolved from CPU-based software to GPUs and FPGAs, which offer big advantages because of their parallelism. However, there are significant advantages to using specially designed domain specific computing solutions.

Because training is so compute intensive, both total performance and performance per watt are extremely important. It has been shown that domain specific hardware can offer several orders of magnitude improvement over GPUs and FPGAs when running training operations.


On December 12th GLOBALFOUNDRIES (GF) and Enflame Technology announced a deep learning accelerator solution for training in data centers. The Enflame Cloudblazer T10 uses a Deep Thinking Unit (DTU) on GF’s 12LP FinFET platform with 2.5D packaging. The T10 has more than 14 billion transistors. It uses PCIe 4.0 and Enflame Smart Link for communication. The AI accelerator supports a wide range of data types, including FP32, FP16, BF16, Int8, Int16, Int32 and others.

The Enflame DTU core features 32 scalable intelligent processors (SIP). Groups of 8 SIPs each are used to create 4 scalable intelligent clusters (SIC) in the DTU. HBM2 is used to provide high speed memory for the processing elements. The DTU and HBM2 are integrated with 2.5D packaging.

This design highlights some of the interesting advantages of GF’s 12LP FinFET process. Because of high SRAM utilization in ML training, SRAM power consumption can play a major role in power efficiency. GF’s 12LP low voltage SRAM offers a big power reduction for this design. Another advantage of 12LP is a much higher level of interconnect efficiency compared to 28nm or 7nm. While 7nm offers smaller feature size, there is no commensurate improvement in routing density for higher level metals. This means that for a highly connected design like the DTU, 12LP offers a uniquely efficient process node. Enflame is taking advantage of GF’s comprehensive selection of IP libraries for this project. The Enflame T10 has been sampled and is scheduled for production in early 2020 at GF’s Fab 8 in Malta, New York.

A company like Enflame has to walk a very fine line in designing an accelerator like the T10. The specific requirements of machine learning determine many of the architectural decisions for the design. On-chip communication and reconfigurability are essential elements, and the T10 excels in this area with its on-chip reconfiguration algorithm. Their choice of 12LP means optimal performance without the risk and expense of going to a more advanced node. GF is able to offer HBM2 and 2.5D packaging in an integrated solution, further reducing risk and complexity for the project.

It is widely understood that increasing training data set size improves the operation and performance of ML applications. The only way to handle these increasing workloads is with fast and efficient accelerators that are designed specifically for the task. The CloudBlazer T10 looks like it should be an attractive solution. The full announcement and more information about both companies is available on the GLOBALFOUNDRIES website.

Also Read:

The GlobalFoundries IPO March Continues

Magnetic Immunity for Embedded Magnetoresistive RAM (eMRAM)

GloFo inside Intel? Foundry Foothold and Fixerupper- Good Synergies