Semiconductor COVID-19 Update!
by Mark Dyson on 03-22-2020 at 10:00 am


Last week, whilst China started to recover from the COVID-19 outbreak, the rest of the world was seriously impacted as the number of cases and deaths outside China overtook the number within it. With this rise, many governments around the world belatedly put in place measures to prevent the further spread of the virus, ranging from lockdowns to closed borders. This had a serious impact on the whole business world, including semiconductors.

Here in South East Asia, on Monday evening Malaysia announced it was implementing a Movement Control Order effective from March 18th until March 31st. To prevent the spread of the virus, the order restricts the entry of non-Malaysians into the country, prevents Malaysians from travelling abroad and restricts movement within the country. It also required all but essential businesses to close, amongst other measures. This affected the semiconductor industry in both Malaysia and Singapore.

Initially the semiconductor industry was not on Malaysia’s essential industry list, so companies prepared to shut down by Wednesday the 18th. Then, just before midnight on the 17th, the government added electronics and semiconductors to the essential industry list, but allowed companies to operate only with a minimum workforce. Companies scrambled to restart operations and bring back as many workers as they could.

The order also impacted Singapore’s semiconductor industry, as over 300,000 Malaysians cross the border every day to work in Singapore, many of them in the semiconductor industry. On Tuesday, workers arriving at work were told to go home, pack for two weeks and return before midnight, whilst companies scrambled to find them accommodation; this caused huge jams on the Causeway. Although not every employee could or wanted to stay in Singapore, most companies managed to secure enough of a workforce to maintain production.

Whilst the rest of the world goes into lockdown, China is slowly opening up again from its own. Xiaomi announced that 80% of its supply chain is operational ahead of its new 5G phone launch. Hon Hai (Foxconn), one of Apple’s main suppliers, is also reported to have begun re-opening its factories in Wuhan after receiving approval from the local government.

Whilst China is recovering and reporting zero local cases, a lot of precautions are still being taken there to prevent a recurrence of the virus, and the Chinese are taking these in their stride.

Elsewhere in Asia, Taiwan, Singapore and Hong Kong have so far managed to contain the outbreak by applying the lessons learnt from SARS. All three put measures in place from mid-January, and although they are seeing an increase in cases, most of these are imported. So far they have kept new cases to double digits per day, unlike many other countries around the world which are seeing exponential rises.

In Taiwan they are using big data to help contain the outbreak. The health insurance service has been linked to immigration data, so the authorities know where people have travelled. The price of face masks has been fixed, and people use their insurance cards to buy their allowance from allocated pharmacies across the island, which can check whether a person has used their allowance and where they have travelled. As a result most people have access to face masks, and most employees wear them all the time at work and outside. Even these precautions cannot totally stop the spread: TSMC reported that one of its workers was infected this week, and the company quarantined their co-workers.

In Singapore the government raised the alert level to DORSCON Orange very early, on February 7th, after just 29 cases had been reported. This caused fear amongst its neighbouring countries that Singapore was a danger zone, but by raising the level so early it has so far prevented the numbers from growing exponentially. Singapore has been diligently contact tracing every person affected by the virus, to track down and quarantine the people they have been in contact with. Over 5,000 people have been quarantined to date, many of them already released without catching the disease. The authorities have also strictly enforced the restrictions, imposing severe penalties on those who break them and on their employers. Here is an article from the BBC that explains the level of detail that Singapore goes through to contain the virus.

The impact on the economy is forecast to be very severe. Market research company IDC has evaluated various scenarios for the impact on the world semiconductor market, depending on how long the outbreak lasts. It says there is an 80% chance that the market will contract in 2020 compared to 2019, instead of the previously expected growth. The most likely scenario is that world semiconductor market revenue will decline by 6% in 2020.

Meanwhile, SEMI has published a blog on the latest market indicators, which are generally already pointing down.

Over 150 companies have reported an earnings hit due to COVID-19, many of them semiconductor and electronics companies.

Mobile phone sales in February are reported to have collapsed 38% year-on-year, dropping from 99.2 million a year ago to only 61.8 million in February 2020, with Samsung reporting slow initial sales of its newly launched S20 flagship phone. Samsung has told shareholders that the coronavirus pandemic will hurt sales of smartphones and consumer electronics this year, while demand from data centers should fuel a recovery in memory chip markets.

Sales in the auto industry have also been badly hit, and car manufacturing plants around the world are now shutting down as supply chains dry up, with all the major automakers in the US and Europe declaring temporary shutdowns and halting production. This doesn’t bode well for companies that rely on the automotive market.

With so many people working from home, there are many warnings about hackers seeking to take advantage of the outbreak and infiltrate companies. At Cisco, the number of security support requests for remote workforces has jumped 10x in the last few weeks.

Business keeps the economy afloat, but without healthy workers there is no business, so please follow all the restrictions to stop the spread, honouring not just the letter of the law but also the spirit behind it. Yes, the restrictions can impose some temporary hardships, but if we all follow them we can beat this virus; that is far better than ignoring them and suffering worse consequences. This is a good video explaining the need for social distancing and how following it can help avoid deaths. And despite all the advice to stop touching your face, you still touch it; here is a video by the BBC that explains why we touch our faces and gives ideas on how to stop yourself. Also, here is a simple guide to the COVID-19 symptoms and how to prevent catching and spreading the disease.

So please stay safe out there and behave as if you already have symptoms even if you feel fine. Let’s not spread this virus and together we can beat it.

The End of Mobility as We Know It
by Roger C. Lanctot on 03-22-2020 at 8:00 am

The hideous reality of the coronavirus has exposed the hideous realities of the mobility industry, with sobering implications for all. At its core, mobility is about moving people in the safest, most efficient, and most cost-effective ways, and suddenly citizens around the world are being told to stop moving and stop congregating.

Ride hailing operators Uber and Lyft have suspended carpooling. Uber also offered drivers suffering symptoms two weeks of paid leave. Both moves reflected the fig-leaf-flinging efforts of governments to forestall the pandemic’s spread and mitigate its impact.

The transportation proposition of an app-based taxi ride delivered by an itinerant non-employee driver has finally been fully exposed for all its frailty. There is no protection for driver or passenger in a pandemic. There is no commitment to a particular level of service or safety. There never was.

Uber took the further step of suspending fees for food delivery and Lyft stopped hiring drivers in order to preserve what little demand was left for current drivers. But whatever they may announce, these profit-less operators are looking more precarious than ever.

The onset of COVID-19, though, has called into question the appropriateness of getting into any car that isn’t your own with someone you don’t know – either a driver or a driver and another passenger. This has further called into doubt the wisdom of public transportation itself and the hygiene associated with flying in airplanes or checking into hotel rooms.

The impact has been immediate and will have both long- and short-term consequences. For the travel industry as a whole, the onset of the COVID-19 pandemic has been stunning, as hotels close while local governments consider requisitioning them for hospital space.

Airlines in the U.S. are seeking government bailouts while Italy prepares to nationalize Alitalia – and France considers similar measures for Air France. Rental car companies, dependent as many are on airport traffic, have looked on helpless as business has evaporated.

The underlying goal of most mobility operators as well as regulators, legislators, and transportation authorities for the past five years has been to increase the number of passengers in both public and private conveyances to reduce congestion and emissions. With economic activity grinding to a halt there is now no congestion and satellite photography has shown us all that emissions are suddenly less of a problem globally – but especially in hard hit areas. The skies are clearing.

It’s hard to find such nuggets of good news in the morass of misery unfolding around the world. Multiple tolling agencies have closed or gone virtual, suspending toll collections or shifting immediately to electronic tolling. And the price of gasoline is plunging along with everything else – so that’s good news, right?

Public transit agencies have been especially hard hit – confronted as they are with the need to maintain operations while simultaneously seeking to discourage crowding. This has resulted in service cutbacks and contradictory efforts to limit the use of transportation services to medical or personal necessity – suggesting that public transportation will primarily be transporting sick people. That, alone, should serve as a sufficient deterrent to crowds on the platforms and at bus stops.

Car sharing, ride hailing, and taxi operators have begun disinfecting their vehicles between rides – and even used car sales operations have begun offering disinfection as a service. The final nail in the transportation coffin has been the widening stoppage of vehicle production – as was seen in China two months before (where manufacturers have recently begun ramping back up).

With public transportation winding down, micromobility has taken on greater appeal, which is likely to cause municipalities to reconsider their limitations on shared scooters and bikes. For automotive-centric operators the challenge remains one of disinfecting and distancing where possible – but the stress of a multiple-month shutdown may be challenging for taxi and rental car companies.

Car sharing and ride hailing operators with significant leverage are likely to see their prospects for profitable operation, or even survival, severely tested. Consolidation among taxi operators seems inevitable. Uber may see fit to sell off its India operations to Ola – both companies are Softbank investments.

The concept of autonomy in the form of robotaxis has its appeal – but not in the context of a shared space with no provision for cleaning and disinfecting the vehicle between rides. We humans are good for something, after all. We’ll be thinking a little differently about sharing rides in the future.

Car makers are still advertising new car sales, but many new car dealers around the world have suspended operations. These developments highlight the behavioral sacrifices and compromises we routinely make to move en masse to work and play. In a few months we will be asking ourselves to rekindle those damaged instincts and rejoin the literal human race – the herd – to get where we need to go.

Things will look the same, but they will never be the same. There will be more gloves and more masks and, maybe, more politeness. Let’s try to remember what this period right now is like when, a year from now, we are once again getting into a dodgy looking taxi or crammed into a subway or tram.

With a little luck and foresight these public transportation spaces may be a little cleaner as operators embrace the heightened expectation for disinfection and safety. We really should have been paying attention to these issues all along. Who can forget their first visit to Tokyo, the taxi drivers with their gloved hands and their cars with self-opening and closing doors? I’m looking forward to that future while I hunker down to ride out the pandemic.


TSMC 32Mb Embedded STT-MRAM at ISSCC2020
by Don Draper on 03-20-2020 at 6:00 am

32Mb Embedded STT-MRAM in ULL 22nm CMOS Achieves 10ns Read Speed, 1M Cycle Write Endurance, 10 Years Retention at 150°C and High Immunity to Magnetic Field Interference, presented at ISSCC 2020

1.  Motivation for STT-MRAM in Ultra-Low-Leakage 22nm Process

TSMC’s embedded Spin-Torque Transfer Magnetic Random Access Memory (STT-MRAM) offers significant advantages compared to Flash Non-Volatile Memory (NVM). Flash requires 12 or more extra masks, is implemented in the silicon substrate and is write-alterable only in page mode. STT-MRAM, on the other hand, is implemented in the Back-End-Of-Line (BEOL) metallization as shown in Fig. 1, requires only 2-5 extra masks and is byte-alterable.

This implementation in TSMC’s 22nm Ultra-Low-Leakage (ULL) CMOS process has a very high read speed of 10ns and read power of 0.8mA/MHz/bit. It has 100K-cycle write endurance for 32Mb code and 1M-cycle endurance for 1Mb data. It supports data retention through 90 seconds of IR reflow at 260°C, and 10 years of data retention at 150°C. It is implemented in a very small 1-transistor-1-resistor (1T1R) 0.046µm² bit cell, and in Low Power Standby Mode (LPSM) has a very low leakage current of 55µA at 25°C for the 32Mb array, equivalent to 1.7E-12A/bit. It utilizes a sensing scheme with per-sense-amp trimming and a 1T4R reference cell.

 

Fig. 1. Cross-section of the STT-MRAM bit cell in BEOL metallization layers between M1 and M5.
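
As a quick sanity check on those leakage numbers, here is a back-of-the-envelope calculation in C (my own sketch, not from the paper):

#include <stdio.h>

/* Cross-check: 55uA of standby leakage spread across a 32Mb array
   should land near the quoted 1.7E-12 A/bit figure. */
int main(void) {
    double i_array = 55e-6;             /* total LPSM leakage at 25C, in amps */
    double bits = 32.0 * 1024 * 1024;   /* 32Mb = 2^25 bits */
    printf("per-bit leakage: %.2e A\n", i_array / bits);   /* ~1.64e-12 A */
    return 0;
}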

2.  1-Transistor-1-Resistor MRAM Bit Cell Operation and Array Structure
To reduce parasitic resistance on the write current path, a two-column common source line (CSL) array structure is employed, as shown in Fig. 2.

Fig. 2. Schematic of the 1T1R bit cell in a 512-bit column array with the two-column CSL.

The word line is over-driven by a charge pump to provide sufficient switching current, hundreds of µA, for the write operation. This requires the unselected bit lines to be biased at a “write-inhibit voltage” (VINHIBIT) to prevent excess voltage stress on the access transistors of the unselected columns of the selected row. To reduce bit line leakage through the access transistors on unselected word lines, those word lines are biased at a negative voltage (VNEG). The biasing of the array structure for read, write-0 and write-1 is shown in Fig. 3.

Fig. 3. Cell array structure biasing for word lines and bit lines for read, write-0 and write-1 operations.

3.  Read Operation, Sense Amplifier and Word-Line Voltage System
For fast, low-energy wake-up from LPSM to enable high-speed read access, a fine-grained power gating circuit (one per 128 rows) with a two-step wake-up is used, as shown in Fig. 4. The power switch consists of two switches, one for the chip power supply VDD and the other for the regulated voltage VREG supplied by the Low Drop-Out (LDO) regulator. The VDD switch is turned on first to pre-charge the WL driver’s power rail, then the VREG switch is turned on to raise the rail to the targeted level. This achieves a fast wake-up of <100ns while minimizing the transient current drawn from the VREG LDO.

Fig. 4. Fine-grained power gating circuit (one per 128 rows) with two-step wake-up.

The Tunnel Magnetoresistance Ratio (TMR) house curve shown in Fig. 5 relates the antiparallel resistance state Rap to the parallel resistance state Rp (TMR = (Rap − Rp)/Rp) as a function of voltage, showing lower TMR and a smaller read window at higher temperatures.

Fig. 5. House curve of TMR showing the reduced read window at 125°C.

The resistance distributions of the Rap and Rp states, together with the bit line metal resistance and the access transistor resistance, determine the total read-path resistance. As shown in Fig. 6, this proportionally reduces the difference between the two states that the sense amp must measure to determine the bit value.

Fig. 6. Distribution of resistance values for the anti-parallel Rap and parallel Rp states, including the metal bit line and access transistor resistances, showing the proportional reduction in the difference between the two states that must be detected by the sense amp.

To sense the resistance of the MTJ, the voltage across it during read must be clamped by transistors N1 and N2 to a low value to avoid read disturb, and is trimmed to cancel the sense amp and reference current offset. The reference resistance is formed by the 1T4R configuration, R ≈ (Rp + Rap)/2 + R1T, as shown in Fig. 7.

Fig. 7. Sense amp with trimming capability, showing the read clamp voltage on transistors N1 and N2 to prevent read disturb. Reference R ≈ (Rp + Rap)/2 + R1T.

This configuration achieves a read speed of less than 10ns at 125°C, as shown in the sensing timing diagram and shmoo plot of Fig. 8.

Fig. 8. Sensing timing diagram and read access shmoo plot at 125°C.

4.  MRAM Write Operation
MRAM writes of the parallel low-resistance state Rp and the higher-resistance anti-parallel state Rap require a bi-directional write operation, shown in Fig. 9. Writing the Rap state to Rp (write-0) requires biasing the BL to VPP, the WL to VREG_W0 and the SL to 0. Writing the Rp state to Rap (write-1) requires current in the other direction, with the BL at 0, the SL at VPP and the WL at VREG_W1.

Fig. 9. Bi-directional write for the parallel low-resistance state Rp and the higher-resistance anti-parallel state Rap.
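
To summarize the biasing described above in one place, here is a small C table (the voltage names come from the article; the read-row entries follow Fig. 3 and the sense-amp discussion, and the grouping is my own sketch):

/* Bias on the selected cell's terminals per operation. In all modes the
   unselected word lines sit at VNEG and the unselected bit lines at
   VINHIBIT, as described in the text. */
typedef struct {
    const char *wl;   /* selected word line            */
    const char *bl;   /* selected bit line             */
    const char *sl;   /* common source line (assumed)  */
} cell_bias;

static const cell_bias READ    = { "VREG",    "clamped low by N1/N2", "0"   };
static const cell_bias WRITE_0 = { "VREG_W0", "VPP",                  "0"   };
static const cell_bias WRITE_1 = { "VREG_W1", "0",                    "VPP" };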

For data retention during IR reflow at 260°C for 90sec, an MTJ with a high energy barrier Eb is needed. This in turn raises the MTJ switching current needed for reliable writing to hundreds of µA. The write voltage is temperature-compensated, and a charge pump generates a positive voltage for selected cells and a negative voltage for unselected word lines to suppress bit line leakage at high temperatures. The write voltage system is shown in Fig. 10.

Fig. 10. The over-drive of the WL and BL/SL by the charge pump, and the temperature-compensated write bias.

Temperature compensation of the write voltage is required for operation over a wide temperature range. The write voltage shmoos from -40°C to 125°C are shown in Fig. 11, where the F/P blocks fail at -40°C while passing at 125°C.

Fig. 11. Showing requirement for temperature compensation during write.

A BIST module with a standard JTAG interface implements self-repair and self-trimming to facilitate the test flow. The TMC memory controller, which implements Double Error Correction ECC (DECECC), is shown in Fig. 12.

Fig. 12. BIST and Controller for self-repair and self-trimming during test and implementing DECECC.

The TMC implements the smart write algorithm, which manages bias setup and verify/retry timing for high write endurance (>1M cycles). It performs read-before-write to decide which bits need to be written, dynamic group-write to improve write throughput, and multi-pulse write with write-verify, optimizing the write voltage for high endurance. The algorithm is shown in Fig. 13, and a software sketch of the loop follows below.

Fig. 13. Smart write algorithm showing dynamic group write and multi-pulse write with write verify.
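
Here is a minimal software sketch of that loop; mram_read() and mram_write_pulse() are hypothetical stand-ins for what the TMC actually does in hardware:

#include <stdint.h>

/* Hypothetical hardware hooks standing in for the TMC datapath. */
extern uint32_t mram_read(uint32_t addr);
extern void mram_write_pulse(uint32_t addr, uint32_t data,
                             uint32_t bit_mask, int v_write);

/* Sketch of the smart write flow: read-before-write, group-write of
   only the differing bits, then verify and retry with a stepped-up
   write voltage until the word sticks. */
int smart_write(uint32_t addr, uint32_t data, int v_start, int v_max)
{
    int v = v_start;                            /* temperature-compensated starting bias */
    uint32_t diff = mram_read(addr) ^ data;     /* read-before-write: bits to flip */
    while (diff) {
        mram_write_pulse(addr, data, diff, v);  /* group-write only the differing bits */
        diff = mram_read(addr) ^ data;          /* write-verify */
        if (diff && ++v > v_max)                /* retry with a stronger pulse */
            return -1;                          /* still failing: flag for repair */
    }
    return 0;
}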

5.  Reliability Data, Key Features and Die Photo

Fig. 14. The write endurance test shows that the 32Mb chip access times and read currents are stable before and after 100K write cycles at -40°C.

Fig. 15. The write endurance bit error rate is less than 1ppm at -40°C after 1M cycles.

Fig. 16. The increased thermal stability barrier Eb governing the temperature dependence of data retention shows more than 10 years of data retention at 150°C at 1ppm.

Magnetic field interference is a potential concern in many applications for spin-based STT-MRAM. The solution is a 0.3mm-thick magnetic shield deposited on the package, as shown in Fig. 17: in the 3500 Oe field of a commercial wireless charger for mobile devices, the bit error rate after 100 hours of exposure is reduced from >1E6ppm to ~1ppm. More than 10 years of data retention at 125°C was also shown in a magnetic field of 650 Oe.

Fig. 17. Sensitivity to a magnetic field of 3500 Oe reduced by a factor of 1E6.

Conclusions
The 22nm ULL 32Mb high-density MRAM has very low power, high read speed, and very high data retention and endurance, making it suitable for a wide range of applications. With a cell size of only 0.0456µm², it has a read speed of 10ns and read power of 0.8mA/MHz/bit, and in Low Power Standby Mode (LPSM) its leakage is less than 55µA at 25°C, equivalent to 1.7E-12A/bit. For 32Mb code it has an endurance of 100K cycles, and for 1Mb data >1M cycles. It is capable of retaining data through 90sec of IR reflow at 260°C, with long-term retention of >10 years at 150°C. The product spec is shown in Fig. 18 and the die photo in Fig. 19.

Fig. 18.  Summary table of N22 MRAM specification and die photo.

Fig. 19.   32Mb high-density MRAM macro in the 22nm Ultra-Low-Leakage CMOS process.


Hyper-Scaling Of Data Centers – Environmental Impact Of The Carbon ‘Cloud’
by Stephen Crosher on 03-19-2020 at 10:00 am


As we move into 2020 it’s clear that every sector of industry, including the semiconductor industry, will have a responsibility to address growing environmental concerns. As our sector underpins the growth in AI, 5G telecommunications, crypto-currency and high performance compute applications, it is predicted that by 2030 energy consumption attributable to data centers will make up a staggering 8% of the world’s total usage. Data centers are fast becoming one of the big consumers alongside lighting, domestic heating/cooling and transportation.

What will happen next?
My prediction for 2020 is that we will see greater governmental involvement in how carbon emission targets are levied upon different industrial sectors and technology applications, and in particular data centers. As the so-called ‘evolved economies’ around the world gradually respond to the pending climate crisis, I believe we could see growth in data centers being located in ‘less evolved’ economic regions, where emission levels are scrutinised less and incentives for reduced energy consumption are less apparent.

Today, we know that there is a vicious cycle to data center energy consumption. Approximately 40% is consumed by high performance compute activity, which in turn generates heat at the chip, board and system levels. A further 40% is then consumed by the subsequent cooling and thermal management. As society demands more computational capacity, a two-fold energy demand is generated.
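
A trivial way to see that two-fold effect, using the rough 40%/40% split above (my arithmetic, not Moortec’s):

#include <stdio.h>

/* If compute and cooling each take ~40% of a data center's energy,
   every added watt of compute drags roughly another watt of cooling
   and thermal management along with it. */
int main(void) {
    double compute_kw = 100.0;                       /* hypothetical added compute load */
    double cooling_kw = compute_kw * (0.40 / 0.40);  /* cooling scales ~1:1 with compute */
    printf("total added load: %.0f kW\n", compute_kw + cooling_kw);   /* 200 kW */
    return 0;
}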

Taking responsibility…
For the semiconductor industry and associated technologies to grow responsibly, we need to seek innovative ways to reduce our energy consumption, optimising from the physical chip level up to overall data center deployment.

By understanding and accurately measuring thermal, supply and process conditions deep within our semiconductor devices, we are able to control and therefore reduce overall data center consumption. By also harnessing mission-mode in-chip analytics for optimisation, we can further reduce our carbon ‘cloud’ emissions. So in 2020, I believe the narratives of environmentalists like Greta Thunberg, and the subsequent action taken by governments around the world, will see the semiconductor industry respond by helping to tackle our new existential challenge.

To read previous “Talking Sense with Moortec” Blogs click HERE

Watch out for our next blog entitled Talking Sense with Moortec … Key Applications for In-Chip Monitoring which will be dropping late March!

About Moortec
Moortec have been providing innovative embedded subsystem IP solutions for over a decade, empowering customers with the most advanced monitoring IP on 40nm, 28nm, 16nm, 12nm, 7nm and 5nm. Moortec in-chip sensing products support the semiconductor design community’s demands for enhanced performance optimization and increased device reliability, helping to bring product success by differentiating the customers’ technology. With a world-class design team, excellent support and a rapidly expanding global customer base, Moortec are the go-to leaders in innovative in-chip technologies for the automotive, consumer, high performance computing, mobile and telecommunications market sectors.

For more information please contact Ramsay Allen ramsay.allen@moortec.com, +44 1752 875130, visit www.moortec.com and follow us on Twitter and LinkedIn.


5G Infrastructure Opens Up
by Bernard Murphy on 03-19-2020 at 6:00 am

It seemed we were more or less resigned to Huawei owning 5G infrastructure worldwide. Then questions about security came to the fore, Huawei purchases were put on hold (though that position is being tested outside the US), and the opportunity for other infrastructure suppliers (Ericsson, Nokia, etc.) has opened up again.

Building 5G baseband systems (what goes in the cell tower and beyond the tower) is immensely complicated. The baseband divides into three units: remote radio units (RRUs), which connect directly to the antennae; a distributed unit (DU), which sits at the base of the tower; and a central unit (CU), which manages connections to multiple DUs.

Incidentally wireless technologies, particularly cellular, breed acronyms like rabbits. I’ll introduce a few here. For example, that thing from Apple or Samsung on which you make calls? It’s not a cellphone, it’s a UE (user equipment).

A 5G RRU needs to deal with sub-6GHz signals and signals above 6GHz, including millimeter wave. Wi-Fi (802.11ax) and legacy LTE are supported, so RRUs must support multiple radio access technologies (RATs – see what I mean?). The RRU should handle massive MIMO (multi-input, multi-output) reception and transmission, together with beamforming to optimize signal strength. Most of the rest of the processing is handled in the DU and CU.

The base station part of the network in principle sits in that box at the bottom of the cell tower, handling more advanced communication functions and connecting through backhaul to central stations which manage call routing. Except all that is changing. It turns out that having a lot of dedicated electronics for each tower is an expensive proposition for the network operators, especially if demand varies significantly through the day (as it does in metro areas for example).

That has driven variation in who does what, and where, in radio access networks (RANs – from the CU to the DUs to the RRUs). The first shift was to centralized RANs (C-RAN), where almost everything except the RRUs moves to central offices. That is now evolving to virtualized RANs (V-RAN), where there is more flexibility to move functions around. There’s even discussion of an open RAN standard (O-RAN).

Through these systems, the various flavors of 5G must be supported. There’s enhanced mobile broadband (eMBB), the high-bandwidth version which will allow you to view 4K TV on your phone or enjoy mobile gaming, VR, MR, etc. Ultra-reliable low latency communication (URLLC) is what you need for safety-critical applications in your car or in medical functions – fast, low bandwidth but with dependable latency. Other applications include machine-to-machine communication (MMC) and fixed wireless access (FWA).

Sailing briefly into acronym-free waters, all these flavors require a lot of multi-core and multi-thread support, along with the ability to aggregate and flexibly manage traffic from and to multiple targets. It is like virtualized job management in data centers, except that here you are dealing with high-bandwidth communication, all of it requiring a pretty high QoS and some of it requiring guaranteed QoS. (Sorry – I promised no acronyms – QoS = quality of service.)

Finally, the 5G standard continues to evolve. Release 15 just came out, release 16 is expected in a few months and release 17 is planned for next year. Anyone planning to hardwire 5G baseband is dead before they start. All of these solutions have to be software based (software defined radio – SDR).

CEVA has just released their XC16 DSP, a core designed specifically for baseband and designed in close partnership with a leading equipment vendor. They’ve also announced that Nokia and ZTE have adopted this platform. This is starting to look more like a horse race again. And in the meantime, you’ve learned more cellular acronyms than you ever wanted to know.

You can learn more about the XC16 HERE.

Also Read:

Using IMUS and SENSOR FUSION to Effectively Navigate Consumer Robotics

A Bundle of Goodies in Bluetooth 5.2, LE Audio

Glasses and Open Architecture for Computer Vision


COVID-19 and Semiconductors
by Bill Jewell on 03-18-2020 at 10:00 am

The threat of COVID-19 (coronavirus) is continuing to spread. As of March 17, the World Health Organization (WHO) reported 179,111 confirmed cases and 7,426 deaths. WHO declared COVID-19 a pandemic as of March 11. Many countries have imposed severe restrictions to slow the spread of the disease, ranging from banning of large gatherings to near-total lockdowns.

According to Digitimes, the top five notebook computer brands (HP, Lenovo, Dell, ASUS and Apple) saw combined shipments in February 2020 drop 40% from January and 38% from a year ago. Digitimes also reported electronics production in China is quickly returning towards normal. However, production declines were steep in the first two months of the year. The National Bureau of Statistics of China reported combined January and February 2020 production of mobile phone units was down 34% from a year ago. The total value of Chinese industrial production in January and February was down 13.5% from a year ago.

What will be the effects of COVID-19 on the global economy, and more specifically electronics and semiconductors? It is too early to tell. Much depends on how quickly the disease can be contained and when life for most people can return to relatively normal. Regarding electronics and semiconductors, the two key factors are supply and demand. Supply has been severely disrupted in the short term. Even as China moves back toward more normal production levels, many other countries have severe restrictions which could impact electronics production – including Italy, Germany, France, the U.S., South Korea and Japan. Some factories are closed. Others have reduced staffing levels as employees self-quarantine or stay home to take care of children whose schools are closed. Even if COVID-19 is contained by the end of June, production in the second half of the year will not be fully able to compensate for lost production in the first half.

The demand side is a different story. Certainly, many households will see a reduction in income due to lost workdays. Other households with employees working from home and those with sick leave to cover lost work time will not see a reduction in income. These households could have more discretionary income (income after taxes and necessities) than previously. Many restaurants, bars, movie theaters and other entertainment venues are closed. Many clothing stores are closed. Travel for pleasure is severely curtailed. With spending on these areas severely cut back, households will have more discretionary funds. Much of this extra money will be saved due to the current economic uncertainty. However, some of the money will be available to spend on durable goods such as electronics.

An interesting case is the trend in the United States after the terrorist attacks on September 11, 2001 (9/11). After the attacks, air travel was severely disrupted. The International Air Transport Association estimated air travel demand was down by over 31% in the five months following the attacks. Much of the money people would have spent on air travel and other vacation expenses was spent on consumer goods.

The chart below shows U.S. personal consumption expenditures change versus a year ago from 1Q 2001 through 2Q 2002 using data from the U.S. Bureau of Economic Analysis (BEA). The U.S. was in a recession from March 2001 through November 2001 primarily due to the collapse of the internet bubble. Electronics showed slower growth or declined. PCs and peripherals went from growth in 2000 to declines in 2001. Communications equipment (including mobile phones) and televisions went from double digit growth in 2000 to single digit growth in 2001. However, a shift is apparent beginning in 4Q 2001, the first full quarter after the 9/11 attacks. Air transportation expenditures, already declining in 2Q 2001, declined 28% versus a year ago in 4Q 2001. Expenditures on hotels and motels followed a similar trend. Consumers shifted their spending toward automobiles and electronics. New auto expenditures, which had been in a year on year decline since 4Q 2000, jumped 20% in 4Q 2001. Communications equipment and televisions accelerated from 4% growth in 3Q 2001 to 10% and 8% growth respectively in 4Q 2001. PCs and peripherals were in decline for the first three quarters of 2001. In 4Q 2001 the rate of decline slowed and positive growth returned in 1Q 2002.

The trend in spending is also confirmed by the change in 4Q 2001 versus 3Q 2001. The numbers are seasonally adjusted, so 4Q seasonal trends are taken out. The change in consumer expenditures from 3Q 2001 to 4Q 2001 was 26% for new automobiles, 5.3% for communications equipment, 4.4% for televisions, and 2.2% for PCs and peripherals. Total consumer expenditures were up 1.6%. Meanwhile air transportation was down 9% and expenditures for hotels and motels were down 6%.

During the internet boom, the world semiconductor market peaked at $55.3 billion in 3Q 2000, according to World Semiconductor Trade Statistics (WSTS). The market fell to $30.6 billion in 3Q 2001, a 45% decline. The 4Q 2001 market was basically flat with 3Q at $30.5 billion. Quarter to quarter growth returned in 2002, with 4Q 2002 up 23% from a year earlier. The recovery in the semiconductor market coincides with the electronics boom in the U.S. in 4Q 2001. Other factors also drove the semiconductor recovery, but the post 9/11 strong growth in electronics and automobile spending was certainly a major contributor.

Could a similar trend result when the world economy begins to recover from COVID-19? It is certainly a possibility. Even when the risks of infection decrease, people will still be reluctant to travel. Fear of COVID-19 may also delay returns to restaurants and entertainment venues. People could spend more of their money on electronics, most of which can be enjoyed in the safety of the home.

Electronics and semiconductors will certainly see significant declines in the first half of 2020 due to supply constraints and a falloff in demand. Assuming COVID-19 is contained by the end of 2Q 2020, supply and demand should return to normal levels. As mentioned above, demand could possibly exceed normal levels in the second half of 2020. In February, we at Semiconductor Intelligence forecast 2020 semiconductor market growth of 7%. With the current uncertainty, we are not ready to offer a new forecast, but 2020 will most likely be a year of decline as was 2019.

Also Read:

Semiconductor Recovery in 2020?

CES 2020: still no flying cars

Semiconductor CapEx Warning


Machine Learning for EDA – Inside, Outside and Everywhere Else
by Mike Gianfagna on 03-18-2020 at 6:00 am

Artificial intelligence (AI) is everywhere. The rise of the machines is upon us, in case you haven’t noticed. Machine learning (ML) and its associated inference abilities promise to revolutionize everything from driving your car to making breakfast. We hear a lot about the macro, end-product impact of this technology, but there are many more back-stories about the revolution. Of particular interest to SemiWiki readers is what all this means for chip design, chip verification and EDA.


I got a chance recently to chat with Paul Cunningham at Cadence about this topic. For those of you who don’t know Paul, he is a Corporate Vice President and General Manager at Cadence. He’s been there for almost nine years, overseeing everything from front-end to back-end to system verification products. With a diverse background like this, we had a lot of ground to cover during our conversation.

We started at 30,000 feet. How does EDA impact AI/ML design and how does AI/ML technology impact EDA? Paul discussed how Cadence approaches these requirements. It turns out there are three separate and distinct areas of focus at Cadence and they’re all important.

Regarding the impact AI/ML has on EDA tools, there are actually two parts to consider. EDA tools face a lot of intractable problems that must be managed with heuristics; estimating congestion or parasitics for a large digital design early in the place and route flow are examples. In these cases, AI/ML can contribute better data, and a better chip layout as a result. These improvements are essentially invisible to the user—the tool just delivers better results. Cadence calls this “ML inside”.

The other impact AI/ML has on EDA tools has to do with the design flow. As everyone knows, chip design is an iterative process, with many parts of the design team collaborating to get the best result possible. There are many, many trial runs in pursuit of the best layout, the most complete verification, the lowest power and so on, and this process can extend over several months. In this context, AI/ML can be used to analyze the vast amounts of data each iteration produces, with the goal of learning as much as possible from a given iteration or set of iterations. This can reduce design time by essentially working smarter as opposed to harder. The approach is quite new, as it seeks to productize designer intuition to make a design flow more efficient. Cadence calls this “ML outside”.

Paul went on to highlight the significance of ML outside. Up to now, EDA tools have had a huge number of input parameters, but none of them capture the history and learning of the tool’s usage on the problem at hand. Said another way, the tool has no memory of its prior use. ML outside can change all that, creating a fundamentally new type of tool flow.


The third area of focus moves from tool-centric to ecosystem-centric.  That is, how can you help to enable the chip and system design ecosystem to add AI/ML to their products? Paul explained that the term, ecosystem, is quite broad in this context and also quite important to the Cadence strategy. Foundries and certain IP suppliers play an important part of course. But design challenges have grown past hardware and Cadence also needs to look at how their verification products interface with software systems like Android, Windows and Linux to deliver a holistic debug capability.

We also discussed the wide variety of markets that all need assistance adding AI/ML to their products. Mobile, automotive, data center and mil/aero are just a few of many examples. What are the demands each of these markets presents? Does each need fundamentally new and different tools, or is it more about the flow? It turns out all chips need basically the same tools to get to tapeout, but the stress points the tools experience, and the way the tools need to be tested against other parts of the ecosystem, are quite different. If you consider the demands of a very small, ultra-low power chip vs. the demands of a massive data center processing chip, you’ll get the idea. The long life of an automotive chip vs. the relatively short life of a cell phone chip also sheds light on the diversity of the problem.

So, supporting a broad range of markets is more about optimizing and testing tools and flows than it is about developing different tools for different markets. Fundamental to this strategy is the development of robust tools that support multiple use models of course. Paul provided a memorable analogy here that is worth repeating, “a Land Rover and a Ferrari are both cars, they’re just optimized and tested to be good at different things.”

Our final topic touched on what future AI/ML chips will look like. Paul felt strongly that a collection of custom, optimized processors will always deliver superior performance for AI/ML algorithms compared to an off-the-shelf product. So the future of compute in this context is heterogeneous. Having spent a good part of my career as an ASIC supplier, I couldn’t agree more. This view of the future suggests vibrant growth for both EDA and semiconductors as the number of special-purpose AI/ML processors explodes. I’ll leave you with that optimistic thought.

If you’d like to learn more about the AI/ML solutions Cadence offers, visit the AI / Machine Learning page on the Cadence website.


Webinar on Tools and Solutions for Analog IP Migration
by Tom Simon on 03-17-2020 at 10:00 am

The commonly advanced reason for IP reuse is lower cost and shorter development time. However, IP reuse presents its own challenges, especially for analog designs. For digital designs, once a new standard cell library is available, it is usually not too hard to resynthesize RTL to create new working silicon. For analog designs there are many more steps, and essentially the design has to be re-optimized to meet its performance specifications before it will work. A lot of companies wade into the waters of analog porting only to realize too late that they are stuck in a muddy and complex process.

At that point a couple of well-known and perhaps overused platitudes are apropos – “There is no substitute for experience” and “Use the right tool for the job.” Fortunately for designers looking to smooth out the process of porting analog designs, MunEDA has tons of experience in this area and a set of tools ideally suited to the task. Their upcoming webinar, titled EDA Tools and Solutions for Analog IP Migration, Optimization and Verification, comprehensively covers the entire process and includes information about many of the particulars that can make or break it. The webinar will be offered on March 26th at 10AM Pacific Time. MunEDA Vice President of Products & Solutions Michael Pronath will be presenting; his deep understanding of the topic and lucid presentation style make the entire flow understandable.

There are three stages, as alluded to in the webinar title. The first is porting the schematic, which is done by the MunEDA Schematic Porting Tool (SPT). As Michael will point out in the webinar, it makes the tricky parts flow smoothly and reduces manual effort in many places. It helps map new cell names for each of the devices used in the design. Rules can be set for mapping pins and pin locations, and new device parameters can be set using expressions. The webinar shows the user interface for these operations. MunEDA has learned through experience many of the subtle issues that arise and has added features to SPT that work through them automatically.

At this point the user has a topologically correct schematic, but one that will not function properly or meet its specs. The circuit now needs optimization and tuning. Michael will show how the MunEDA WiCkeD tool suite is used to size and tune the circuit. For instance, some of the device geometry characteristics that need adjustment are W, L, fins, fingers, R, C, etc. Device threshold values can also be set. The goal is to meet specs over all PVT corners with optimal yield, power, area and reliability. Michael will show the user interface and illustrate how to run the optimizer to arrive at a design that meets specs and is optimized according to the design criteria. The process is iterative but is managed automatically; a generic sketch of such a loop follows below. He will include several examples from major customers that show the effectiveness of the flow, especially when there are design tradeoffs to be made.
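
To give a feel for the shape of a corner-driven sizing loop (a generic sketch, not MunEDA’s algorithm; evaluate() and propose() are hypothetical stand-ins for circuit simulation and the optimizer step):

/* Iterate sizing until the worst-case spec margin is non-negative at
   every PVT corner, or the iteration budget runs out. */
double evaluate(const double *sizes, int corner);  /* worst spec margin at one corner */
void propose(double *sizes, double worst_margin);  /* adjust W, L, fins, fingers, R, C... */

int optimize(double *sizes, int n_corners, int max_iter)
{
    for (int iter = 0; iter < max_iter; iter++) {
        double worst = 1e30;
        for (int c = 0; c < n_corners; c++) {   /* evaluate every PVT corner */
            double m = evaluate(sizes, c);
            if (m < worst)
                worst = m;                      /* track the worst-case margin */
        }
        if (worst >= 0.0)
            return 0;                           /* all specs met at all corners */
        propose(sizes, worst);                  /* otherwise keep tuning the sizing */
    }
    return -1;                                  /* no feasible sizing within budget */
}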

MunEDA has a suite of analog verification tools that are used in the final step – verification. Michael will start by doing a fast corner search to find the worst-case corners, using their Worst-Case Operation (WCO) tool. It can find the worst-case condition for every spec and structural constraint. He will show the tool and explain some details of its operation.

Michael will then cover Monte Carlo Analysis (MCA) and how their solution generates quantile plots that visualize the probability distributions. The UI also makes it easy to link to the actual simulation runs that the user might be interested in. Another useful output is parameter influence analysis; parameter sensitivity information is valuable for understanding design behavior.

Lastly, the webinar will discuss high-sigma analysis. MunEDA’s high-sigma WCA uses powerful optimization that works across a wide range of sigma values to quickly find the worst-case point for the design. The solution scales to large designs through the use of advanced machine learning techniques.

It’s extremely rare to find a single source for a solution to such a complex problem. MunEDA has done an excellent job of integrating all the needed elements. The webinar covers each step and goes into the details about how and why. Be sure to check out the replay HERE.

Also Read:

56th DAC – In Depth Look at Analog IP Migration from MunEDA

Free Webinar: Analog Verification with Monte Carlo, PVT Corners and Worst-Case Analysis

Schematic porting – the key to analog design reuse


Innovation in Verification March 2020
by Bernard Murphy on 03-17-2020 at 6:00 am

This blog is the next in a series in which Paul Cunningham (GM of the Verification Group at Cadence), Jim Hogan and I pick a paper on a novel idea we appreciated and suggest opportunities to further build on that idea.

We welcome comments on our blogs and suggestions for new topics if they’re based on published work.

The Innovation

Our next pick is End-to-End Concolic Testing for Hardware/Software Co-Validation, presented in June 2019 at the ICESS conference. The authors are from Intel Hillsboro and Portland State University.

“Concolic” is a blend of concrete and symbolic, a method to increase coverage in very complex systems through an intermingling of direct code execution/simulation (at the instruction level) and symbolic analysis. Symbolic analysis can be more general than direct execution (like formal), while direct execution helps bound the analysis to near-realistic execution paths and can handle libraries/IP inaccessible to symbolic analysis.

The authors build on their own concolic platform, Crete, which they described in an earlier paper, where it was applied only to the analysis of software utilities. Crete first traces conventional execution through instruction sequences and states, in this instance tracing the system and each IP (here modeled as a virtual model), then flattens that hierarchy into a combined trace.

In the current paper, the trace is instrumented with assertions and symbolic values at hardware/software interfaces. The instrumented trace is then submitted to a concolic analysis. Interesting new traces discovered in this flow, where they violate an assertion for example, can be fed back to directed simulation for further analysis.

Concolic methods are already used in software testing, for example Microsoft reported use of these methods in testing Windows 7. Development and advances we have seen are so far academic or in-house.

Paul

This is an intriguing paper, and mature in the scale and type of system used to analyze the method (a complete mini system with OS, driver, virtual E1000 Ethernet adapter and virtual 8051 CPU). I like that they consider realistic challenges and limitations. For example, they have thought about how to handle address translation from virtual to physical. Generally, they have pretty robust and scalable ways to instrument verification through callbacks inserted into the instruction stream.

I see a conceptual similarity with constrained-random simulation. Constrained-random is semi-random generation of traces in which each trace is discrete, and a single testbench can generate many traces. Conversely, concolic takes a single discrete trace and abstracts/symbolizes parts of that trace.

I have a couple of thoughts. First, to enable concolic simulation, the initial analysis must capture virtual model states along the trace. How will this work if you’re using commercial virtual platforms? Should it be an ecosystem play? Virtual component providers may need to offer a save/restore mechanism in support of concolic methods.

The other point (which the authors fully acknowledge) is that to become really valuable, this testing needs to work with multiple threads/interleaving traces (multiple IPs running concurrently). That will be a harder problem, and it got me thinking again about portable stimulus – about how you might randomize or explore that space of different concurrent traces. Could we extend PSS/Perspec into concolic? That would be an intriguing direction. Again, I would be interested in helping anyone in academia who wants to explore this idea further.

Finally, they found a couple of real bugs in QEMU – impressive in a well-tested model. I’d like to know more about these, and their take on what made concolic uniquely suited to finding these bugs versus other approaches such as randomized testing.

Jim

First, I like the idea of system level testing coming together in a unified verification suite.

This area is probably at too early a stage to be talking about investment potential. When it does reach more maturity, it looks like a technology rather than an independent tool. I get the impression that there will be a million of these good ideas; we’ve already discussed a couple in earlier blogs. Great for verification, but the verification team isn’t going to want to see more and more tools.

Maybe we should look at these as widgets that sit inside the primary verification platform. Maybe follow Paul’s idea that PSS is the cockpit where you pull down different apps depending on what kind of verification coverage you’re looking for.

Me

A very simple software example illustrating an advantage of concolic testing over randomized testing is:

struct image { unsigned magic; unsigned h; unsigned sz; };  /* illustrative fields */

int foo(struct image img) {
    if (img.magic != 0xEEEE) return -1;  /* branch 1: magic-number check */
    if (img.h > 1024) return -1;         /* branch 2: bounds check */
    return img.sz / img.h;               /* divides by zero when img.h == 0 */
}

This can trigger an error on the division if img.h is 0. Randomized testing has to survive two branches and hit the exact value of img.h to trigger that error. Concolic testing can justify a path past both branches and, through symbolic simulation, consider all cases for img.h. Randomization is much simpler in many cases, but concolic can be more effective in threading a path through these complex cases. I see definite value in improving branch coverage in this way for security, and maybe also for safety and functional reliability more generally.
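
To make that concrete, here is how the same function could be driven by a symbolic/concolic engine. I’m using KLEE’s klee_make_symbolic() as a representative API; the paper’s Crete flow is its own platform, so treat this purely as an illustration:

#include <klee/klee.h>

struct image { unsigned magic; unsigned h; unsigned sz; };  /* repeated so the harness is self-contained */
int foo(struct image img);   /* the function above */

int main(void) {
    struct image img;
    /* Mark the whole input symbolic. The engine then solves the path
       constraints (magic == 0xEEEE, h <= 1024) and discovers h == 0 as
       a concrete input that triggers the divide-by-zero. */
    klee_make_symbolic(&img, sizeof img, "img");
    return foo(img);
}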

To see the next paper, click HERE.

To see the previous paper click HERE.


5G SoCs Demand New Verification Approaches
by Mike Gianfagna on 03-16-2020 at 10:00 am

Lately, I’ve been cataloging the number of impossible-to-verify technologies we face. All forms of machine learning and inference applications fall into this category. I’ve yet to see a regression test to prove a chip for an autonomous driving system will do the right thing in all cases. Training data bias is another interesting one to quantify. The list can get quite daunting.

Mentor recently published a new white paper on the challenges of verifying 5G SoCs. It turns out this is another one of those impossible-to-verify technologies. The good news is that the white paper outlines a method for making this one possible. Let’s start with some background on 5G networks – why are they so hard to verify?

This is the first topic of the Mentor white paper. When 4G was developed, the systems were defined by essentially three major vendors. The standards weren’t open, and connections were established with fixed cabling. So cellular operators sourced equipment from these major vendors and 4G became a reality. Figure 1, from the white paper, illustrates what this looked like.

With the rise of applications such as connected vehicles (think cars, planes, trains, construction equipment, farm tractors and so forth) and all the other connected devices that comprise the IoT, the data volume for cellular networks has exploded. That spawned the need for 5G, and as everyone knows, 5G networks are being brought up by many carriers in all parts of the world right now. There is an important “twist” in the way the network is being implemented, however.

This time, the cellular operators took control and defined open standards, allowing many new companies to build the hardware and software required for 5G networks. 5G technology is also quite a bit more challenging to implement than 4G. For example, signal transmission requires an array of up to 64 x 64 multiple-input/multiple-output (MIMO) antennas that can support the beamforming required for 5G signals.

Landscape and population density variations (think cities vs. rural areas) will also need customization to work correctly, creating many hardware/software configurations. To help alleviate this issue, an alliance of telecom industry companies created the Open Radio Access Network (O-RAN) standard. Figure 2, from the white paper, illustrates what this new environment looks like.

So, with 5G we have many new vendors (both hardware and software), a variety of use cases and configurations, and evolving 5G standards. Many of the new products for the 5G market have, at their core, a mission-critical SoC. It is the verification of those SoCs, in the challenging environment described above, that is discussed in the new Mentor white paper.

The white paper focuses on the litany of challenges in developing robust and re-usable tests for these SoCs. The problem outline includes solid verification suites that can be run pre-silicon on prototypes of the hardware and post-silicon on the real system. Due to the size of the 5G ecosystem, these test suites need to be shared to ensure interoperability.

Pre-silicon verification requires more than a standard RTL flow – emulation is required to run the requisite number of tests at speed. Mentor’s Veloce® Strato™ emulator is well suited to address this requirement, and that is explored in the white paper. Once silicon is available, the focus moves to verification of the chip in the lab and in the field. Here, Mentor offers its X-STEP™ platform, a product focused on the unique needs of the 5G market that can be used for either data generation or data capture.

The white paper goes into much more detail on these topics and others as well. If you are engaged in design of 5G SoCs, you will want to learn about Mentor’s 5G SoC design and verification flow for pre- and post-silicon. You can access the white paper here.