
Covering Configurable Systems. Innovation in Verification

by Bernard Murphy on 10-20-2020 at 6:00 am


Covering configurable systems is a challenge. What’s a good strategy to pick a small subset of settings and still get high confidence in coverage? Paul Cunningham (GM, Verification at Cadence), Jim Hogan and I continue our series on research ideas, this time with an idea from software testing which should also apply to hardware. Feel free to comment.

The Innovation

This month’s pick is Combinatorial Methods for Testing and Analysis of Critical Software and Security Systems. The link is to a long but readable tutorial generated by NIST and SBA Research (Austria), on orthogonal array testing.

Testing coverage for highly configurable systems is a constant concern. Testing one configuration is hard work; multiplying that load by all possible configurations becomes a nightmare. The authors explain a rationale for reducing the set of configurations to test in a way that should not significantly compromise coverage. They analyzed a broad range of software, looking at the cumulative percent of faults versus the number of contributing configuration variables (call this “t”). They found a strong correlation between those fault distributions and branch conditions with t variables, with very high coverage for t<=6 and 90%+ for t<=3.

This observation is central to how they build tests. Covering all possible permutations of small subsets of the variables can be done very efficiently using orthogonal arrays. Exhaustively testing all combinations of 10 binary variables takes 2^10 = 1,024 tests. Taken 3 at a time, this reduces to 120 3-variable subsets, or 120 × 2^3 = 960 combinations to test – not a big reduction. Instead they suggest using a covering array, which can cover all of those 960 possibilities in just 13 tests. This concept is not new – it was developed in the 1990s. However, figuring out good algorithms to find covering arrays is a difficult optimization problem. The authors say such algorithms have appeared only recently, and NIST has implemented them in their ACTS tool.
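To make the covering-array idea concrete, here is a minimal greedy sketch for the 10-binary-variable, t=3 case above. This is a simple AETG-style heuristic of my own, not the algorithm in NIST’s ACTS tool, so it typically needs somewhat more than the optimal 13 tests – but still a tiny fraction of the 1,024 exhaustive ones.

```python
import itertools
import random

def covering_array(n_vars, t, values=(0, 1), seed=0, candidates=50):
    """Greedy covering-array builder: each new test row is chosen to
    cover as many still-uncovered t-way value combinations as possible.
    Not optimal, but far smaller than exhaustive enumeration."""
    rng = random.Random(seed)
    # Every (variable-subset, value-assignment) pair that must appear.
    uncovered = {(idx, vals)
                 for idx in itertools.combinations(range(n_vars), t)
                 for vals in itertools.product(values, repeat=t)}
    tests = []
    while uncovered:
        best, best_gain = None, -1
        for _ in range(candidates):
            row = tuple(rng.choice(values) for _ in range(n_vars))
            gain = sum(1 for idx, vals in uncovered
                       if tuple(row[i] for i in idx) == vals)
            if gain > best_gain:
                best, best_gain = row, gain
        if best_gain == 0:
            # Guarantee progress: build a row directly from an uncovered combo.
            idx, vals = next(iter(uncovered))
            row = [rng.choice(values) for _ in range(n_vars)]
            for i, v in zip(idx, vals):
                row[i] = v
            best = tuple(row)
        tests.append(best)
        uncovered = {(idx, vals) for idx, vals in uncovered
                     if tuple(best[i] for i in idx) != vals}
    return tests

# 10 binary variables, all 3-way combinations: 960 targets to cover.
tests = covering_array(10, 3)
print(f"{len(tests)} tests cover all 960 3-way combinations (vs 1,024 exhaustive)")
```

Even this naive greedy pass lands within a few dozen tests; the real optimization research is in closing the gap to the provably minimal array.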

As an example, they consider a case with 34 switches requiring 17 billion combinations to fully test. They can cover all 3-way combinations of switches in just 33 tests, for 70% or better coverage, and all 4-way in 85 tests for 90% or better coverage.

Run-times on one test-case, an air traffic collision avoidance system, are impressive. For 12 variables they show run-times to generate tests of little more than 60 seconds, even for t=6. The authors show examples of application to a subway control system and a smart home assistant, and also to network deadlock detection, security protocol testing and hardware trojan detection.

Paul’s view

As a computer scientist by training, I have to begin by appreciating the pure intellectual beauty of this problem. From reading up on it a bit, it looks like the concept originated in Latin squares analysis from the 17th century, and was then picked up in the 1950s and promoted by Taguchi for statistical methods in experimental design and manufacturing, e.g. in applications to optimize spot welding and semiconductor processes.

From the perspective of chip verification, most of the examples the authors consider have a very small number of parameters – O(10) to O(100) boolean variables. A typical constrained random testbench for a modern SoC IP block might have 1,000-10,000 parameters. Many of these will be 16- or 32-bit variables, so it is a much larger space to consider. But otherwise the approach is absolutely applicable to chip verification and could be very powerful at catching corner case bugs.

This technique relies on the premise that most bugs are excited by a specific combination of values on only a small number of control knobs. This premise is based on empirical data from software programs. It is possible that it doesn’t apply to digital circuits, especially since hardware is a massively concurrent computing paradigm. As the authors acknowledge in their presentation, orthogonal array-finding complexity rises exponentially with the number of control knobs. Increased parallelism in digital circuits may require an increased number of control knobs to catch most bugs. If so, the approach could quickly become unscalable for chip verification.

Jim’s view

I definitely see application to extending current verification flows, as a tool feature, but there’s another intriguing possibility. RISC-V IP vendors have a unique challenge. They verify their IP as carefully as possible, across as many knob settings as possible. Then they deliver it to customers, who may further extend the IP and then need to reverify it in their own applications. This is already a big challenge: how are they going to ensure their testing covers enough of this parameter space for their purposes? Would it be valuable for IP vendors to publish a tool like this along with their IP, as an aid to their customers in ensuring good configuration coverage?

I also think this has interesting parallels with our first blog, BTW.

My view

Expanding on Paul’s last point, further study is essential. I think that as long as the parameters are independent (orthogonal), most faults will be triggered by a small number. This seems reasonable for external parameters (as for our first blog), less certain for parameters used in CR testing.

You can read the previous blog HERE.

Also Read

Tempus: Delivering Faster Timing Signoff with Optimal PPA

Bug Trace Minimization. Innovation in Verification

Anirudh CadenceLIVE Plays Up Computational Software


Is Intel Losing its Memory?

by Robert Maire on 10-19-2020 at 12:00 pm

  • It’s reported that Intel is selling its NAND ops to SK Hynix
  • Would likely get Intel out of a tough & distracting market
  • Is Intel abandoning China? Like Intel’s sale of XPoint to Micron

Getting out of NAND makes sense
The memory market is very tough, competitive and, worst of all, very cyclical. Unless you have an iron stomach, the ups and downs of memory demand and pricing can be very ugly. Samsung is super dominant, and Micron has carved out a niche as both a tech leader and junkyard dog at the same time. SK has been an also-ran for a long time.

It’s just not a pretty or very attractive business unless you are the lead dog; otherwise the view is ugly.

WSJ scoop: WSJ article on Intel NAND sale to SK

It is likely to be reported this Thursday on Intel’s call but it makes sense as Intel has de-emphasized NAND and memory in general. Bob Swan is clearly re-engineering the company with a CFO’s focus. NAND has always been tough and AMD is gaining share in the wheelhouse market of processors….who needs the distraction? You cannot make the case that Intel needs to be in the NAND business for strategic or synergistic reasons….it just doesn’t fly.

Sale of XPoint to Micron was the beginning
The sale of XPoint to Micron was very simple as Intel was a half owner who had their partner buy them out…a very easy deal. Micron could have bought the NAND operations but it probably feels like it needs more NAND capacity like a hole in their head. SK on the other hand, may be willing to take a big risk and bet that it can buy its way into closer competition to number one Samsung in one purchase.

Although we don’t know details, $10B “sounds” cheap for SK
We obviously don’t yet know details but we would assume that SK gets the Dalian fab and all the IP and patents and know how that go with it. In our view it would likely be hard to duplicate all that for $10B but we will wait for the details. It likely would have been a stretch for Micron from a financial point of view.

Is Intel exiting China because it sees something bad happening?
Once Intel is out of Dalian, their China presence will go to near zero. AMD will certainly have a better relationship with China as it already licenses some of its processors there. Is Intel ceding a big market? Or does Intel see a train wreck of US China relations and wants to get out of Dodge before the shoot out?

Could China “nationalize” US facilities in China? Are other chip companies with a China presence at risk of losing their China facilities? Did Intel get a heads up? It seems odd to us that Intel is selling off what was viewed as its “price of admission” to the Chinese market. What does it mean for the future of Intel in China? It doesn’t sound great.

On the other side of the equation, is South Korea cozying up to China by buying into the chip industry there in a big way? Will SK supply the technology to China that it wants? We would bet that China will quickly endorse this deal.

Not much talk about “High NA”
We think that now that EUV is commonplace, the next upside wave will be High NA which will likely be easier (on a relative basis) as compared to the original EUV roll out. We think the potential technology benefits as well as financial benefits may make High NA more attractive than the EUV model. But High NA is still a few years away.

The stocks
We don’t view this as a near term positive for Intel other than getting out of a distraction. Longer term they may have a more difficult dance in the huge Chinese market. It begs more questions than it answers. It is likely net neutral for equipment companies as SK will likely take over Intel’s spend pattern in NAND. It is slightly negative for Samsung as they have more serious competition in a market they have all but sewn up. The number of deals in the semiconductor industry has picked up again, and we will likely see more of these re-focusing deals, especially where China is involved.

Also Read:

ASML is Strong Because TSMC is Hot!

SMIC Cut off by US Government is Doomsday Scenario for US Chip Equipment Companies

Could loss of SMIC lead to loss of most of China?


Webinar on Synopsys MIPI IP

by Tom Simon on 10-19-2020 at 10:00 am


MIPI may have started out as a standard for mobile phones, but it has become very important for connecting cameras and displays in a wide range of other products such as computers, drones, VR glasses, IoT devices and cars. Along the way it has matured by adding important features to support new applications. Now when we talk about vision processing systems, we need to include AI and a much wider set of devices for capture and display. Phones often have multiple cameras with very high resolution. Displays can include rear view mirrors, goggles, or ADAS displays. Integration of MIPI into SoCs has been made much easier with the availability of IP such as Synopsys MIPI IP.

To help make sense of all the advances and applications of MIPI to these systems, Synopsys has produced a webinar on the topic of “How to Use MIPI Interfaces to Accelerate Camera and Display SoC Designs” hosted by Licinio Sousa, Product Marketing Manager at Synopsys. I’ll say that in all the time I have been covering MIPI, his presentation of the technology is one of the clearest and most comprehensive that I have seen. It covers camera and display market trends, what’s new in MIPI CSI-2 and DSI/DSI-2 and closes with a discussion of Synopsys MIPI IP for automotive applications.

Camera sensors are being built with increasing resolutions, sometimes upwards of 100 mega-pixels. They are also becoming AI enabled for specific tasks such as facial or object recognition. The number of vision sensors in systems is also increasing dramatically. Cars may have 10 to 30 cameras. At the same time displays have diversified. Systems may have multiple displays, foldable displays, and very high resolutions. We see displays on appliances, and they are going to be used for rear and side view mirrors etc.

Licinio explains in the webinar that for cameras we have CSI-2 and either D-PHY or C-PHY. On the display side we have DSI/DSI-2 with D-PHY or C-PHY. Licinio starts with the PHYs. D-PHY is the older PHY with up to 4.5 Gbps as of the October 2019 version 2.5. It uses source synchronous differential pairs for each lane. The number of lanes can be increased to improve throughput. D-PHY also offers a low power single ended connection for control and status.

C-PHY uses a completely different approach to handle higher bandwidth. C-PHY uses trios with three phase encoding and embedded clock. The Tx is single ended, but the Rx is differential, using combinations of the lines. Further decoding is necessary, but overall the result is higher throughput and better power efficiency. The bitrate is ~2.28X the signaling rate. The current spec for C-PHY from September 2019 is version 2.0 which supports 6 Gsps per trio. Licinio does an excellent job of discussing how the PHY layer works.

With similar electrical specs between the D-PHY and C-PHY it is possible to offer backwards compatibility over connections. Most of the PHY layer circuits can be reused. With 10 pins it is possible to switch between 4 lanes of D-PHY or 3 trios of C-PHY.
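The trade-off within that 10-pin budget can be checked with quick arithmetic using the maximum rates quoted above. These are spec ceilings, not typical link rates, and the 2.28 bits-per-symbol figure for C-PHY is approximate:

```python
# Back-of-envelope throughput comparison, same 10-pin budget:
# 4 D-PHY lanes (differential pairs plus a clock pair) vs 3 C-PHY trios
# (3 wires each). Rates are the spec maxima cited in the webinar summary.

D_PHY_GBPS_PER_LANE = 4.5     # D-PHY v2.5 maximum, per lane
C_PHY_GSPS_PER_TRIO = 6.0     # C-PHY v2.0 maximum symbol rate, per trio
BITS_PER_SYMBOL = 2.28        # C-PHY encodes ~2.28 bits per symbol

d_phy_total = 4 * D_PHY_GBPS_PER_LANE                   # 18.0 Gbps
c_phy_total = 3 * C_PHY_GSPS_PER_TRIO * BITS_PER_SYMBOL  # ~41.0 Gbps
print(f"D-PHY, 4 lanes: {d_phy_total:.1f} Gbps")
print(f"C-PHY, 3 trios: {c_phy_total:.1f} Gbps")
```

The embedded clock is what frees C-PHY from the separate clock pair and lets each added wire carry payload, which is where the power-efficiency advantage comes from.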

Licinio also discusses the important features that have been added in each newer version of the camera and display interface. On the camera side many new features have been added beyond those found in the original CSI-2 V1.x which supported mobile phones with lower resolutions and frames per second. Later additions to the spec have included virtual channels, interference reduction, latency reduction, compression and raw modes. The latest version boasts always-on monitoring, functional safety, longer channels and security. For the display side, in going from DSI-V1.x to DSI-2 V1.1 and V1.2 there have been additions for functional safety, compression, content protection and more.

Synopsys MIPI IP

Licinio wrapped up with a summary of the Synopsys MIPI IP, including C-PHY, D-PHY, CSI-2, and DSI & DSI-2. With these they can support a wide range of use cases, including consumer, industrial vision, edge/AI, automotive, and more. In particular, their offering is ASIL-B ready and the PHYs are AEC-Q100 compliant. Their automotive MIPI IP is available in 22nm, 16nm, and 7nm. As I mentioned above, this webinar offers a thorough look at MIPI and the solutions available from Synopsys for SoC integration. The complete webinar is available for viewing on the Synopsys website.

Also Read:

Synopsys talks about their DesignWare USB4 PHY at TSMC’s OIP

AI/ML SoCs Get a Boost from Synopsys IP on TSMC’s 7nm and 5nm

Parallel-Based PHY IP for Die-to-Die Connectivity


yieldHUB – Helping Semiconductor Companies be More Competitive

by Mike Gianfagna on 10-19-2020 at 6:00 am


The semiconductor industry is fiercely competitive. This is widely known by the SemiWiki community. When it comes to critical design parameters such as power, performance or area you’re either in the envelope that defines the market or you’re not a player. Yield management has a similar impact. Those who can stay ahead of the yield curve are contenders.  Those who fall behind are not. This is why a webinar I recently had a chance to preview got my attention.  The title of the webinar is quite simple and direct, “Are your competitors ahead of you in yield management?” Presented by yieldHUB, this webinar discusses topics that should be top of mind for every semiconductor company. Viewing this webinar will show you how yieldHUB can help semiconductor companies be more competitive.

The webinar begins with a presentation by Carl Moore, a yield management specialist at yieldHUB. Carl has a long history in the semiconductor industry at places like Allegro MicroSystems and Maxim Integrated. Carl explains that yieldHUB was founded in 2005. The company has a worldwide footprint and has a singular passion for yield analysis. He cites an impressive list of companies that yieldHUB has worked with from early product introduction to high volume delivery. Some of them include Microchip, Infineon and Habana Labs. There are more logos that are quite impressive. You’ll need to watch the webinar to see who else is on the list.

Carl then describes the impact effective yield management can have on new product introduction, or NPI. This was the topic of a post I did a few months ago.  Carl then discusses the benefits of several automated aspects of the yieldHUB software and what kind of impact this technology can have.

Carl is followed by Kevin Robinson, director of customer success at yieldHUB. Based in the UK, Kevin also has a long career in the semiconductor industry. Drawing on his industry experience, Kevin shares some perspectives on internally developed vs. outsourced yield management strategies. Kevin begins by sharing his view of the attributes of an internal yield management system, which include:

  • Custom, focused
  • Very good at a small number of things
  • Often not well supported
  • Key individual business vulnerability
  • Scalability often an issue
  • Seemingly low-cost option becomes very expensive
  • Takes focus away from key business purpose

Looking at an outsourced yield management system, Kevin uses three primary attributes. They are:

  • Agile
    • New capabilities developed when needed
    • Grows as you scale
    • On tap resources
  • Supported
    • Contracted support
    • Dedicated specialists
    • Wide view of changing environment
  • Thought Leadership
    • Up to date with industry best practice
    • Common issues and variations already solved
    • Innovative solutions to go beyond industry standard

I think you can begin to get the picture. I spent a fair amount of my early career working on data analytics for yield management. This was at a time when there were no commercial solutions. In later years we had the chance to compare our internally developed capabilities with an emerging group of commercial offerings and I completely agree with the contrast Kevin is describing.

Kevin then discusses the differences between a multi-tool, mixed data location environment vs. one unified system. A unified system can be thought of as a vertical software-as-a-service (SaaS) offering, which is typically cloud-based. Kevin provides a number of compelling advantages for the vertical SaaS model. Some of the topics addressed here include data management and the cost to become proficient in the system. Kevin also provides examples from several relevant case studies. You need to watch the webinar to see these proof points for yourself. 

The webinar concludes with a Q&A session that addresses many probing and on-point questions. If you work in the hyper-competitive semiconductor industry, I think you will find this webinar useful and relevant. The event was broadcast on Tuesday, October 27, 2020 at 1:00 PM PDT. You can register for the REPLAY here. I highly recommend you check it out to see how yieldHUB can help semiconductor companies be more competitive.

 


Less Haste More Speed – The Importance of Test Engineers

by Ramsay Allen on 10-17-2020 at 8:00 am


Back in September 2019 semiengineering.com published an article called “The Hidden Potential of Test Engineers”. This article was of particular interest to me having previously worked as a mixed signal test development engineer.

Within the article Carl Moore explained that test engineers have the potential to increase revenue. My view on this is different: I say that it is the test development engineer’s duty to increase revenue!  Companies understand that test is essential for quality purposes; however, they still consider it a cost function, eating away at the margin for the device.  So, if you can save test time then you will help the company improve margins.

Test time can be reduced in several ways from ensuring stability in production test, through test time optimisation and considering test early in the design phase. This is by no means a thorough list but sets the scene for my thoughts.

Thinking about and understanding test is also critical for SoC development; after all, third-party IP is likely to be used. Testing of digital systems may be well understood by the SoC developer, but testing mixed signal IPs has its own unique challenges. Take an ADC as an example: depending on the architecture, different test techniques are needed, which have differing impacts on test time.

In the field of sensors, test techniques can be underestimated. As an example, let’s consider temperature sensors that need to be calibrated. Leaving aside issues such as self-heating, some of the variables involved are:

Test equipment temperature forcing accuracy

Historically test equipment did not need to be super accurate when it came to forcing temperature. No one was too bothered if you set 25°C but got 22°C or 28°C. In the world of temperature sensors though this is too wide. Whilst the best (and most expensive) probe chucks on the market can give forcing errors of ±0.1°C it is not the only error (see stability and uniformity). Standard chucks supplied with wafer sorters would have far worse forcing error. Handlers for package test are typically quoted at ±3°C accuracy although some specialist equipment is available that improves that number a little.

Test equipment temperature stability and uniformity

Stability is how well the test equipment can maintain its set point temperature. Depending upon the chuck in use this number can be quite good; however, the uniformity can dominate the uncertainty. Uniformity relates to how much variance occurs across a certain area. For wafer chucks this number can be very large; even the best, and most expensive, chucks only achieve around ±0.5°C.

At package test handlers quite often quote stability of ±0.5°C at the contact site but remember that actual temperature accuracy is usually poor.

Thermal gradients

During testing, thermal gradients can come into play. At wafer sort alone, heat is transferred from the chuck to the wafer, but the wafer is also in contact with the probe tips that connect to the tester. Each of these interfaces will cause some form of thermal gradient affecting the actual temperature of the DUT (device under test). At final test, heat will be transferred from the DUT to the DIB (device interface board), which can significantly reduce the die temperature.

Based on the above it is very important that you fully understand exactly what is being quoted in the datasheet, particularly if calibration is needed at precise temperatures. If the accuracy does not include the error from production test, take a moment to establish the real accuracy achievable.

As an example, if a datasheet quotes ±1.2°C without including the test environment variance then the true uncertainty could be ±2.2°C or more based on the stability and accuracy numbers mentioned earlier.
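As a rough illustration of how these error sources stack up, here is the worst-case arithmetic behind that ±2.2°C figure. The component values are illustrative, drawn from the ranges mentioned above rather than from any particular datasheet or handler spec:

```python
# Worst-case error stack for a calibrated temperature sensor.
# All component values are illustrative assumptions for this example.

datasheet_accuracy = 1.2   # +/- degC, quoted WITHOUT test-environment error
chuck_uniformity   = 0.5   # +/- degC, across-chuck variance (best case cited)
chuck_stability    = 0.5   # +/- degC, set-point stability at the contact site

# Simple linear (worst-case) sum; a root-sum-square combination would be
# less pessimistic but assumes the error sources are independent.
test_env_error   = chuck_uniformity + chuck_stability
true_uncertainty = datasheet_accuracy + test_env_error
print(f"true worst-case uncertainty: +/-{true_uncertainty:.1f} degC")
```

Swapping in your own handler and chuck numbers quickly shows whether a datasheet accuracy claim is achievable in your production environment.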

There are techniques to improve the uncertainty level which are beyond the scope of this article but do form part of whitepapers available for Moortec customers. Moortec provides support and guidance for our customers at all stages of their implementation including production test. We also offer advice for lab evaluation of our IP based on knowledge gained from our own test chips and lab environment.

So, in conclusion, the test development engineer can make a real difference to a company’s profits. However, with IP coming from many different vendors, it is important that IP providers understand production test. At Moortec we always have test in mind during development and discuss what is feasible.

To find out how Moortec’s in-chip monitoring technologies and sensing fabrics could benefit your next advanced node project contact us today.

In case you missed any of Moortec’s previous “Talking Sense” blogs, you can catch up HERE.

By Martin Buck, Moortec Senior Application Engineer


EV Pickups: The Week That Was

by Roger C. Lanctot on 10-17-2020 at 6:00 am


From a legislative squabble, to a COVID-19 superspreader event, to claims of fraud and deception, the race to bring a pickup truck to market with an electrified powertrain went into overdrive last week. At the heart of the contest is the pursuit of ridiculous profit margins and impressive sales with Ford Motor Company’s F-Series of pickups serving as the main focal point.

Ford’s F-150 pickup truck is one of the best-selling vehicles of all time with combined sales for all of its variants amounting to 34M units, according to Wikipedia. It is for this reason that the F-150, the perennial best-seller in North America, is cited as a target for competing pickup makers. (It’s worth noting that though the F-150 remains a best-seller, GM and FCA sell comparable quantities of pickup trucks.)

This is especially true for wannabe makers of electric vehicles intimidated by the success of Tesla Motors in the luxury sedan segment and seeking blue ocean opportunities that offer potential for both sales volume and profit. The pickup segment, indeed, offers both, with sales volume exceeding sedans for the first time in 2020 (according to Motor Intelligence) and with five-figure profit margins, according to recent auto maker earnings reports.

The frenetic activity in the pickup sector has attracted a crowd including most Japanese auto makers contending for sales of traditional internal combustion engine pickups. And General Motors and FCA have risen to challenge Ford – while all three have tipped plans to pare back or abandon sedans entirely.

A host of EV pickup makers have now stepped forward with new models arriving in the next few years. In the vanguard of EV pickup makers is Tesla itself with its Cybertruck, Rivian with its R1T, Lordstown Motors and the Endurance, and Nikola Motors’ Badger – all of which made headlines last week with the exception of Tesla.

Nikola Motors perhaps gained the greatest amount of attention with its announced tie up with GM. GM agreed to take an 11% stake in Nikola in exchange for providing product development and manufacturing support along with powertrain technology to help Nikola deliver its Badger EV pickup to market. The two companies remain locked in negotiations after Nikola’s past claims were challenged by short-sellers, the U.S. Justice Department and Securities and Exchange Commission opened probes, and Nikola founder Trevor Milton stepped down.

Lordstown Motors grabbed the spotlight in the wake of U.S. President Donald Trump’s positive COVID-19 test and hospitalization. Among the many events attended by the President where he may have either been infected or infectious was a campaign visit at the White House from executives from Lordstown Motors, the nascent Ohio-based EV pickup maker, attended by the Governor of Ohio and multiple congressmen from the state. The company plans to build as many as 100,000 of its Endurance pickups in plants in Ohio where GM previously manufactured the Chevy Cruze sedan.

Finally, the Michigan legislature spun up a major imbroglio over legislation intended to prevent the direct sales of electric vehicles in the state – a modification of an existing ban for which a workaround already exists for Tesla. Leading the EV industry in opposition was Rivian, which employs more than 800 people in the state. They were joined by other EV startups—Lucid, Lordstown and Bollinger—as well as a coalition encompassing left-wing environmental groups, right-wing free market groups, and consumer advocacy groups. The Michigan Auto Dealers Association was the lone organization supporting the bill.

The single thread that unites all three of these stories, though, is General Motors’ fundamental struggle with electrification. In spite of GM’s EV leadership with the EV-1 – leased to consumers from 1996-1999 – and the extended range EV Volt – discontinued in 2019 – the company seems incapable of driving EV adoption.

The debacle of the GM-Nikola Motors tie up – now darkened by a declining stock price and investigations – came in the wake of GM’s decision not to invest in Rivian. Notably, Ford opted to invest $500M in Rivian despite that company’s plans to take on Ford in the pickup segment where Ford has its own EV pickup plans.

Lordstown Motors stepped into taking over GM’s former Ohio manufacturing facility – with financial assistance from GM – to make its own EV pickups while GM set up a new plant nearby in a joint venture with LG Chem to make EV batteries. GM had walked away from Lordstown and a $60M local tax incentive package – agreeing to a $12M “clawback” – enduring a gale of presidential opprobrium. President Trump then turned the blade a bit on GM with the White House visit last week from Lordstown Motors execs.

Finally, the new car dealer opposition to direct EV sales in Michigan creates an odd dynamic for Michigan-based auto makers. Ford, for one, is invested in Rivian, which is Michigan based. GM has big plans for EV sales and is already battling dealers over facilities and personnel investments required to support EV sales. FCA has yet to reveal its own plans.

It is crystal clear that EV pickups are coming to market in a wide range of shapes and sizes. It’s sad that the segment is caught up in politics and posturing when there are serious challenges to overcome in shifting to electric powertrains for vehicles of this type that are used in stressful commercial and recreational applications posing unique power management issues.

GM appears to be uniquely fraught by the prospect of electrification hitting its most profitable segment. The tie up with Nikola coming in the wake of the Lordstown shutdown and the passed-on Rivian investment increasingly looks like a decision made on the rebound.

It’s not too late for all three Detroit-based auto makers to make a statement.  General Motors did make a statement opposing the legislation, but a collective rejection of the dealer-driven Michigan legislation limiting EV sales by startups in the state would be a better approach. It’s time for the Big Three to send a signal, loud and clear, that Michigan is the home of automotive innovation. It makes no sense that Michigan, of all states, should erect sales barriers to vehicles with advanced EV powertrains in whatever form they may take and from whatever company might make them.


TSMC Sets the Stage for a Great 2021!

by Daniel Nenni on 10-16-2020 at 10:00 am


TSMC is the bellwether for not just the semiconductor industry but the worldwide economy. TSMC makes semiconductors, semiconductors are where electronics begin and electronics are the foundation of modern life, absolutely.

Apple is also a key economic indicator and as we all know Apple is a strategic partner of TSMC. The Apple TSMC relationship started with the iPhone 6 and other iProducts (20nm in 2014) and continues to this day. The recently introduced iPhone 12 is based on TSMC 5nm. Next year Apple will use an enhanced version of 5nm and in 2022 it will be 3nm.

TSMC raised its 2020 revenue forecast for a second time this year (10% -> 20% -> 30%) with strong demand for 5G and high-performance computing (HPC). The pandemic has resulted in a much stronger emphasis on mobile and cloud computing which should continue in Q4. IoT is also up but Automotive and DCE is down, again due largely to the pandemic. TSMC’s HPC (cloud) content will also benefit from additional AMD and Intel wafer agreements from 7nm down to 3nm over the next five years.

TSMC Revenue Analysis:

In my opinion AI and the cloud will be the key semiconductor drivers moving forward. Vast amounts of data are being generated by our electronic devices. The data is now moving to the cloud for harvesting and monetization. Cars are an easy example. I can assure you that Tesla will be using data from their cars to make more money than from selling the cars. Think Google and search, Facebook and personal information, or Amazon and shopping; it is all about the data.

It’s interesting to note the process node breakout:
38% of revenue is from mature CMOS nodes. Those nodes were cloned by UMC, SMIC, and GLOBALFOUNDRIES, so there is strong competition and designs can be moved from one fab to another with relative ease. That is not the case with FinFET-based designs, so TSMC’s strong market position at the leading edge will continue.

For 20nm and 10nm, Apple was the only customer to hit HVM, thus the shrinkage. TSMC moved the 20nm fabs to 16nm and 12nm, and the 10nm fabs to 7nm and 6nm. Let’s call it the yield-learning two-step: TSMC takes smaller process steps each year versus the much larger traditional semiconductor process steps. For example, TSMC started EUV with a mature 7nm node and then went full EUV at 5nm. Intel, on the other hand, is expected to go from zero EUV at 10nm to full EUV at 7nm.

Notable C. C. Wei quotes from the Q3 2020 Earnings Call:
“For TSMC, our technology leadership position enabled us to capture the industry megatrend of 5G and HPC. We expect to outperform the foundry revenue growth and grow by about 30% in 2020 in U.S. dollar terms.”

“This is pretty hard for me to answer, because I cannot release all the information I got from my customer. But let me say that, on the average, the 5G phone have about 30% to 40% more silicon content as compared with 4G.”

“We are complying full year with the regulations and so and we also notice that there is report saying that the TSMC got the (Huawei)  license. We are not going to comment on this unfounded speculation. And we also don’t want to comment on our status right now. For the 4Q shipment to Huawei, the ban, the regulation already say that after September 17th, zero.”

“Certainly TSMC is working with all the customers and view them as our partners. And so we don’t using this opportunity to raise our 8″ wafer price.”

“We are engaging with more customer at N3 as compared with the N5 and N7 at the similar stage. So there’s a lot of customers are working with us. And now, which one in the second half of 2022, which one would be the first product? Actually in smartphone and HPC applications, both.”

Bottom Line: TSMC and the rest of the semiconductor ecosystem seem to be somewhat COVID resistant. The new “work and learn from home” lifestyle is accelerating the digital transformation, and that means more semiconductors now and in the future.

About TSMC
TSMC pioneered the pure-play foundry business model when it was founded in 1987, and has been the world’s largest dedicated semiconductor foundry ever since. The Company supports a thriving ecosystem of global customers and partners with the industry’s leading process technologies and portfolio of design enablement solutions to unleash innovation for the global semiconductor industry. With global operations spanning Asia, Europe, and North America, TSMC serves as a committed corporate citizen around the world.

TSMC deployed 272 distinct process technologies and manufactured 10,761 products for 499 customers in 2019 by providing the broadest range of advanced, specialty, and advanced packaging technology services. TSMC is the first foundry to provide 5-nanometer production capabilities, the most advanced semiconductor process technology available in the world. The Company is headquartered in Hsinchu, Taiwan. For more information please visit https://www.tsmc.com.

 


CEO Interview: Paul Wells of sureCore
by Daniel Nenni on 10-16-2020 at 6:00 am

Paul Wells SemiWiki 1

What brought you to semiconductors? 
As a kid I was interested in electronics and early personal computers. I graduated in Computer Engineering from Manchester University, the birthplace of the modern computer, in 1986; for my final year project I designed a gate array using Ferranti Electronics technology (1.2um!). After seeing the devices work the first time, I was so enthused I went to work for them; they later became Plessey and then GEC Plessey (GPS). Joining Fujitsu, I became a logic designer using a new language called Verilog and an early version of Design Compiler. We developed a speech processor chip in 0.8um technology for a GSM chipset. Latterly I moved into the highly successful ASIC team as a physical design engineer supporting customers all over Europe and Israel; the 90’s were an exciting time as everything went digital. I joined my first start-up, Jennic, in 2000, where after a few iterations we transitioned to a fabless model designing and supplying wireless microcontrollers targeting the Zigbee standard. There I built and led the operations team delivering evaluation kits, modules and packaged devices to our end customers. A brief sojourn into digital TV followed, where I spent two years at Pace Networks managing a team of 70 to develop a mini headend (the MultiDweller) distributing HD video and data over cable networks. Following this I co-founded sureCore in 2011 with support from a tech-savvy investor.

What is the sureCore company backstory?
After leaving Pace Networks in 2010 I was introduced to an investor keen to capitalise on the technology developed by a Glasgow University spinout called “Gold Standard Simulations” (GSS), led by Prof. Asen Asenov, a world-leading expert on silicon process variability. Working with GSS highlighted the challenges of SRAM design for sub-40nm nodes: the industry was demanding increased on-chip SRAM, but density and power characteristics were no longer scaling at the same rate due to process variability. My colleagues in the industry confirmed that the underlying SRAM architecture hadn’t changed in over ten years. At sureCore we started to explore various architectural enhancements that could manage variability and cut power consumption. Working closely with ST we developed a test chip in 28FDSOI that showcased our technology by demonstrating power savings in excess of 60%. We were later able to prove that our circuit techniques were equally applicable to both Bulk CMOS and FinFET processes.

What range of products do sureCore develop and what is their USP?
Our principal focus is on low power and low voltage SRAM. We have a product family called “PowerMiser” that typically delivers 50% dynamic power savings and about 20% static power savings compared to competitive offerings. Our “EverOn” family enables the SRAM to be directly interfaced to the logic without the need for level shifters and can operate from the process Vnom all the way down to the bit cell retention voltage. This is facilitated by our patented “SMART-Assist” technology. We have compilers for a range of nodes from 40nm to 22nm. We have also developed a range of low voltage register files that can similarly operate at extremely low voltages – allowing architects the capability to scale performance as the application requires and make significant power savings.

We also offer a custom memory development service called “sureFIT” where we work closely with our customers to understand their application and jointly come up with a memory specification that aligns with usage requirements and delivers an optimal power profile. For customers seeking to deliver power-optimised solutions, for example in the medical, IoT or AI spaces, this service can help deliver truly differentiated products.

Why is SRAM power such a big deal?
The last ten years have seen a dramatic rise in the quantity of embedded SRAM on chip. For multi-processor devices, integrating SRAM made sense as pulling code and data from off-chip DRAM had huge power and timing penalties. Over the last few years AI developers, similarly looking for power optimisations, have been driven to integrate many Mbytes of on-chip SRAM. Other application spaces like AR and Networking have comparable demands. Some market researchers estimate that SRAM occupies up to 70% of die area for some applications, and whilst this is not true across the board the underlying trend is clearly upwards. SRAM provides the fastest, most efficient access to data; however, the power it consumes contributes significantly to overall consumption, in some cases limiting the maximum performance and in others forcing expensive package selection to ensure adequate thermal dissipation. Cutting SRAM power consumption is an area whose time has come and for which sureCore technology is ideally suited.

Does sureCore only focus on SRAM or are there other areas where you bring value?
Over the course of many years developing low power SRAM we have developed a range of low power design methodologies and know-how that enable us to rapidly port between process nodes. We have also invested heavily in statistical and parametric verification capabilities as well as timing/power characterization. Some of our customers exploit these skills to help augment their own teams.

What can be done to optimize SRAM for particular applications?
Our optimizations are pretty much focused on power, whether that be dynamic or static. Power has become the critical issue for our industry, whether to prolong battery life or to reduce power dissipation for thermal or cost reasons. For battery-powered applications it is all down to the usage profile. For those that spend most of their time asleep, clearly leakage is critical. For others in the medical space, like hearing aids, not only is leakage important but also dynamic power. The capability to operate across a wide voltage range can yield spectacular power benefits. At sureCore we have developed a “tool-box” of power saving techniques that can be applied to any application. Early engagement by way of a feasibility study allows us to explore a variety of potential architectures that could suit the customer’s needs. Once this is understood, a full implementation program can be scoped.

What customer problems have you solved thus far?
One customer we worked closely with needed a large multi-Mbyte memory subsystem for an AR application. Building this from off-the-shelf SRAM proved untenable from a system power budget perspective. Creating a custom SRAM instance and integrating it into a low voltage interconnect fabric meant power savings of over 40% could be achieved compared to a standard implementation. Also contributing to the power efficiency was an understanding of access patterns, which meant an intelligent access controller could keep most SRAM instances asleep until an access was due. This was implemented in a 16nm FinFET process.

Another customer in the networking space, also targeting 16nm, needed a 1-Write, 8-Read memory. By undertaking an architectural exploration we were able to demonstrate that the optimal solution was a double-pumped implementation based on a custom 1-Write, 4-Read bit cell, the eight reads being delivered as two reads per port per cycle. Not only did this meet area requirements, it also delivered power savings of 60%.
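The port arithmetic behind a double-pumped scheme like that can be sketched behaviorally. The model below is a hypothetical illustration only (the class name and interface are invented, not sureCore's design): a 1-write, 4-read bit cell is accessed in two internal phases per external clock, so the four physical read ports deliver 4 x 2 = 8 reads per cycle.

```python
# Behavioral sketch of a double-pumped multi-port memory (illustrative only).
class DoublePumpedRAM:
    READ_PORTS = 4   # physical read ports on the 1W4R bit cell
    PHASES = 2       # internal accesses per external clock cycle

    def __init__(self, depth):
        self.mem = [0] * depth

    def write(self, addr, data):
        # single write port: one write per external cycle
        self.mem[addr] = data

    def read_cycle(self, addrs):
        # up to PHASES * READ_PORTS = 8 reads per external cycle
        assert len(addrs) <= self.PHASES * self.READ_PORTS
        results = []
        for phase in range(self.PHASES):
            # each phase exercises the 4 read ports in parallel
            batch = addrs[phase * self.READ_PORTS:(phase + 1) * self.READ_PORTS]
            results.extend(self.mem[a] for a in batch)
        return results
```

The model captures only the port arithmetic; in real silicon the two phases would be interleaved within the clock period, which is where the power and area savings over a true 8-read cell come from.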

What does the next 12 months have in store for sureCore?
The 16FF node is starting to be seen as a highly power efficient alternative. Although initially developed for high performance applications, both its density and leakage characteristics are making it increasingly attractive for a range of medical and wearable applications. The company is currently engaged with a tier-1 customer keen to exploit both our SRAM and register file technologies in 16FF. Previous projects have demonstrated that sureCore technology scales well to FinFET nodes and delivers significant power savings compared to competitors. By adopting our technology the customer will be able to deliver genuinely differentiated products with dramatic improvements in battery life. We intend to capitalize on this engagement and make the technology available across a range of advanced process nodes and foundries. Never has our industry needed a low power SRAM alternative more desperately. We at sureCore intend to fill that gap.

Also Read:

CEO Interview: Wally Rhines of Cornami

CEO Interview: Dean Drako of IC Manage

CEO Interview: Murilo Pilon Pessatti of Chipus Microelectronics


ASML is Strong Because TSMC is Hot!
by Robert Maire on 10-15-2020 at 10:00 am

TSMC ASML EUV 2020
  • ASML has strong quarter led by great Taiwan and EUV results
  • EUV “crossed over” DUV as revenue leader, signaling a new era
  • Taiwan doubles, China grows, Korea weaker, US further behind

ASML hits great numbers
ASML reported revenues of Euro 4B and earnings of Euro 2.54 per share, both beating estimates handily. Ten EUV systems were shipped but 14 were recognized as revenue. Outlook is for revenues between Euro 3.6B and 3.8B, which suggests upside to Euro 4B or better.

Shifting Numbers
There were significant shifts quarter over quarter. EUV went from 7 systems to 14 systems in Q3. Taiwan went from 21% of business to 47%, with China going from 23% to 21% of business but increasing in absolute revenues. Korea slumped from 38% to 28% as EUV (which is not memory driven) dominated. The US (read that as Intel) fell off sharply from 17% to 5%.

Intel “crushed” by TSMC in EUV spend- a very bad leading indicator
In the quarter Taiwan (primarily TSMC) was 47% of ASML’s business while the US (primarily Intel) was a paltry 5%. This means that TSMC is spending more or less ten times what Intel is spending on EUV.

In case there was any question as to who is winning the Moore’s Law race, the tidal wave of investment answers it. Intel investors should be scared…very scared. Intel is clearly voting with its feet, matching its words about outsourcing its future to TSMC, which is running away with the Moore’s Law race. China spending four times what the US is spending (though none of it on EUV, per the embargo) shows that China is building out a strong semiconductor infrastructure and clearly outspending the US.

Logic dominates at 79% versus memory of 21%
Logic at 79% is one of the highest revenue percentages we have ever seen and indicates that memory spending is subdued and perhaps weakening. The fact that this is so weighted to TSMC suggests they are expecting a lot of business and expect to put their foot on the EUV accelerator, leaving both Intel and Samsung in their EUV dust. Memory does not currently use EUV, so the EUV domination of the current quarter also outweighed memory DUV spend, as ArF sales were down sharply.

Some “pushouts” and timing issues in the future?
A while ago we talked about TSMC slowing spending, and the pushouts and timing issues discussed on ASML’s call are likely related to what we heard: we may see some digestion in 2021 after TSMC’s gigantic spending binge in 2020 (not unlike Samsung’s spending binge of a couple of years ago).

This talk of pushouts and timing may spook investors, but it fits the pattern of our suggestion that the COVID-led, work-from-home technology spending spree will slow as the economic impact of COVID finally trickles down to the semiconductor industry.

Expect a lumpier business going forward
Given the dominance of EUV with systems north of $100M, a few systems more or less can make the quarters lumpier. Customer timing, pushouts and node changes will all add to lumpiness. The reality is that the end game of EUV and High NA remains the same and remains very good. The road to EUV itself was obviously very lumpy with fits and starts so investors should understand this. We would try to take a longer term view, though that may be difficult, and look at longer term trends.

Not much talk about “High NA”
We think that now that EUV is commonplace, the next upside wave will be High NA, which will likely be easier (on a relative basis) than the original EUV roll out. We think the potential technology benefits as well as financial benefits may make High NA more attractive than the EUV model. But High NA is still a few years away…

Crossover from DUV to EUV – passing of the baton
ASML has now officially crossed over from being a DUV dominated company to an EUV dominated company. This brings a different set of challenges but a welcome set.

The key here is that they are the only EUV game in town, which cements their market share at virtually 100%, versus having to compete (somewhat) in DUV. This makes them quite unique and valuable compared to other semiconductor equipment companies who still slug it out in hand-to-hand battles. ASML is now “above the fray.”

The Stocks
We don’t expect much movement in ASML’s stock price as it was already priced for the perfection we got. The talk of pushouts and timing may dampen sentiment, weigh on the stock and offset the positives. We do think it was prudent for management to keep expectations under control. At just over $400 per share, ASML is neither cheap nor expensive but appropriately priced given the circumstances.

Collateral Stock Impact
In general we think that ASML’s strong performance, first out in the quarter, bodes well for the semiconductor equipment industry. Our main concern is that the stocks remain ahead of reality. The weak memory showing could be interpreted as bad for memory-centric players, most notably Lam (though we have heard they are doing just fine). EUV spend clearly helps KLA and, to a slightly lesser extent, Applied.

Robert Maire

Also Read:

LRCX weak miss results and guide Supply chain worse than expected and longer to fix

Chip Enabler and Bottleneck ASML

DUV, EUV now PUV Next gen Litho and Materials Shortages worsen supply chain


Power in Test at RTL Defacto Shows the Way
by Bernard Murphy on 10-15-2020 at 6:00 am

scan chains crossing power domains

In the early days of Atrenta I met with Ralph Marlett, a distinguished test expert with many years of experience at Zuken and Racal-Redac. He talked me into believing we could do meaningful static analysis for DFT-friendliness at RTL. His work with us really opened my eyes to the challenges that test groups face in integrating their highly complex additions into the mission-mode RTL developed by the mainstream design group. Test is generally an independent team tasked with finding and working around test challenges in each new RTL drop, typically with crazy-short schedule allowances. One of those challenges that doesn’t get a lot of attention is the intersection between power management and test. Defacto has added power-in-test analytics to their tool suite in support of this need.

Why care about power in test?

Power-managed SoCs are typically designed on the assumption that they will never be powered on and clocking everywhere across the chip at once. That would be crazy; they would burn out, right? But what about when the die or wafer is on a tester? Speed of test is paramount, so the default testing mode works directly against that power assumption. This means that test now also needs to become power aware. Scan chains, for example, have to understand power and DVFS domains. They have to be grouped in ways that are consistent with UPF-defined domains, recognizing that scan chains running between different domains may conflict with mean or peak power plans.
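The chain-grouping concern can be made concrete with a small sketch. This is a toy illustration, not Defacto's implementation; the function, flop names and domain names are all invented. It simply flags any scan chain whose flops span more than one UPF power domain, the kind of condition that would force a chain to be regrouped per-domain before stitching.

```python
# Toy static check: flag scan chains that cross UPF power domains.
def check_scan_chains(chains, domain_of):
    """chains: {chain_name: [flop, ...]}; domain_of: {flop: power_domain}.
    Returns the chains that cross power domains, with the domains they touch."""
    violations = {}
    for name, flops in chains.items():
        domains = {domain_of[f] for f in flops}  # distinct domains on this chain
        if len(domains) > 1:
            violations[name] = sorted(domains)
    return violations

# Example: chain "sc1" mixes an always-on flop with a switchable-domain flop.
domain_of = {"u_cpu/ff0": "PD_CPU", "u_cpu/ff1": "PD_CPU",
             "u_aon/ff0": "PD_AON"}
chains = {"sc0": ["u_cpu/ff0", "u_cpu/ff1"],
          "sc1": ["u_cpu/ff0", "u_aon/ff0"]}
print(check_scan_chains(chains, domain_of))  # -> {'sc1': ['PD_AON', 'PD_CPU']}
```

A real tool works on the full RTL netlist and UPF, of course, and also has to reason about DVFS states and level shifters, but the core grouping check reduces to this kind of flop-to-domain bookkeeping.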

Equally, domain-switching signals must be accessible from the testing environment. Usually we think of the power state engine managing these exclusively (in turn managed by software), but test needs a more direct handle on these controls. And of course test signals crossing between voltage domains need UPF commands for level shifters.

Automating test feasibility, adaptation

This stuff is all routine these days in mission mode RTL development, but test teams have expertise in test, not so much in power management, so they welcome help in navigating these new requirements. Defacto have stepped in to help test experts model the interaction between their test plans and the power architecture. Through the STAR platform, a test expert can stitch trial scan chains and trial power control overrides (in test mode) into the RTL. This after all is what Defacto do really well, incrementally modifying RTL to stitch in nets and instances. They can visualize how chains overlap power domains and how domain control signals interact with those domains. They can also auto-fix such cases.

A test engineer can model how all of this interacts with test compression for what-if analysis, varying the number of EDT channels and scan chains. Based on this, the Defacto flow will allow them to assess compression-aware test coverage estimates. They can dump the modified RTL, along with a testbench to drive scan testing, which they can then run in their favorite simulator to validate test functionality.
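As a rough illustration of such what-if analysis, here is a first-order model, using textbook scan-compression approximations rather than Defacto's actual analytics (the function and its parameters are invented for the sketch): compression ratio is internal chains divided by EDT channels, and scan test time is set by pattern count times chain length.

```python
import math

# First-order EDT what-if model (illustrative approximations only).
def edt_what_if(scan_cells, patterns, n_chains, n_channels, clk_mhz):
    chain_len = math.ceil(scan_cells / n_chains)   # shift cycles per pattern
    compression = n_chains / n_channels            # first-order compression ratio
    shift_cycles = patterns * (chain_len + 1)      # +1 capture cycle per pattern
    test_time_ms = shift_cycles / (clk_mhz * 1e6) * 1e3
    return compression, chain_len, test_time_ms

# Sweep channel counts for a 2M-flop design, 10k patterns, 50 MHz shift clock.
for channels in (4, 8, 16):
    comp, length, t = edt_what_if(2_000_000, 10_000, n_chains=800,
                                  n_channels=channels, clk_mhz=50)
    print(f"{channels:2d} channels: {comp:5.0f}x compression, "
          f"chain len {length}, test time {t:.1f} ms")
```

Note that in this model the channel count sets the compression ratio while the internal chain count sets the test time, which is why varying both together, as the what-if flow allows, is what actually trades tester pins against tester seconds.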

Defacto got their start in helping test engineers stitch test logic into design RTL, so this is a very natural advance for them to add, especially since the test experts already know and depend on their tools. You can learn more about Defacto here and you can see them at virtual ITC 2020 November 3rd-5th.

Also Read

Atos Crafts NoC, Pad Ring, More Using Defacto

Build Custom SoC Assembly Platforms

Another Application of Automated RTL Editing