Vehicle manufacturers in China had a tough 2018 and 2019. The overall market fell 8 percent by volume in 2018 and another 3 percent in the year to October 2019. Looking forward, demand faces several headwinds. Anyone spending time in a major city realizes just how unpleasant the experience of owning a car can be with the lack of parking and permanent traffic jams. On top of that, local authorities ration availability and increase the cost of getting a license plate to more than the cost of the car. Ride sharing is extremely cheap and available. It is possible that we have seen the peak of the internal combustion engine vehicle market in China.
In 2020, as Tesla breaks ground on its 100 percent-owned factory in Shanghai, the industry bright spot will be electric vehicles. Again, local governments play a critical role along with changing consumer tastes. Cities are switching their bus fleets to electric (close to 25 percent of all buses sold in China will likely be electric in 2019, perhaps 35 percent in 2020) and are mandating that taxi fleets shift to electric and reducing the cost of acquiring a license plate for EVs. Cities are rolling out networks of charging stations well ahead of demand. It is common to see car parks where the only spaces are those next to the EV charging stations.
Middle-class consumers who are increasingly sensitive to air pollution are investigating EVs and realizing that EV range now exceeds the distance they ever travel in a single car journey. Vehicle OEMs are responding: between 2019 and 2021 more than 200 EV models will launch. EVs represent close to 5 percent of automotive sales in 2019 (up to 20 percent in major cities) and could easily step up to 7 percent in 2020 if the central government decides to include EV subsidies in any stimulus program.
China’s EV market is already 3-4 times the size of the US market. This multiple will grow, giving market leaders in China the opportunity to become world leaders in developing and manufacturing EVs, their batteries, and charging infrastructure.
Realizing Parts of the Greater Bay Area (GBA) Initiative
The GBA initiative remains a priority for President Xi Jinping. As the region covers around 15 percent of China’s GDP and is the center of innovation for many of China’s priority industries, the GBA’s success is also important for national economic growth.
The Greater Bay Area will become more concrete (literally) in 2020 as key pieces of its physical infrastructure are built. Bridges, roads, and railways to connect its east and west more closely will start construction. This will bring previously remote areas in the west of the delta much closer to existing economic hubs in the east. Developers will follow quickly to build homes, factories, and business parks in the west. Homes are critically important as this will take pressure off housing prices in Shenzhen, allowing more of China’s young talent to migrate into this vibrant hub for work. Factories that relocate to the western side of the region will still be able to get their goods to Hong Kong or Shenzhen airport within an hour for shipment globally, using the new (and very underused) Hong Kong-Zhuhai-Macao Bridge.
Beyond infrastructure the GBA plan contains hundreds of softer goals, giving cities in the GBA priority sectors to focus on and creating mechanisms for cities that have historically competed aggressively to work more closely together.
Businesses need strategies for the GBA in 2020 that focus on two things. One – how to take advantage of new regional infrastructure. Two – how to shape still evolving GBA policy to their advantage, rather than reacting once policy is defined.
It should be no surprise in the current climate that the US government is ramping up investment in microelectronics security, particularly with an eye on China and investments they are making in the same area. This has two major thrusts as I read it: to ensure trusted and assured microelectronics are being used in US defense systems and to ensure that US defense electronics design practices are at least on a par with commercial practices and move much more rapidly in innovation and adoption of innovation.
From 2003 through 2016, the first of these objectives was met through the use of accepted domestic trusted foundries, but it was already clear that option would become challenging, especially since competitive bidding has driven more purchasing toward commercial off-the-shelf (COTS) solutions, whose builders must manufacture overseas to meet competitive price, performance, and power targets. Now there’s a big push to allow more trusted suppliers building in state-of-the-art foundries and using modern trust and assurance methods to certify their products.
There’s also a push to encourage multiple commercial foundry and packaging options onshore. It will be interesting to see how that works out. The DoD seems to be committed to driving business which will encourage growth and commercial competitiveness in such foundries. I speculate that they may want to mimic aspects of Chinese investment in their onshore manufacturing. Some of this will certainly be needed in support of rad-hard processes and design technologies for space and nuclear programs.
The second part of the program falls under MINSEC (Microelectronics Innovation for National Security) and aims to pursue an aggressive modernization of the entire defense microelectronics infrastructure in the US: updated assurance policies and guidance, robust verification and validation, building state-of-the-art expertise (including SoC design), engagement with academia, very active and disruptive R&D, and modernizing defense systems, including reducing reliance on legacy components. Again, if China can do it, we can too.
MINSEC funding is currently modest, $2B to start, but acting deputy assistant secretary of defense for systems engineering Kristen Baldwin says there is broad recognition among legislators of the strategic importance of microelectronics leadership to the US in general and defense electronics in particular. China is investing $150B in microelectronics so it’s pretty clear where our competition stands.
I have written before about Tortuga and their design for security solutions. They have recently been awarded contracts in both of these areas. The first will use their RADIX-S platform for the detection and prevention of hardware vulnerabilities. RADIX-S works with existing simulation-based flows to detect and pinpoint potential security issues in pre-silicon designs. The platform is already proven with Cadence, Mentor and Synopsys-based flows.
The second program is based on their RADIX-M emulation-based platform, playing into the MINSEC theme of bringing defense design up to modern design standard in use of tools like emulation. Here, Tortuga will be working with the DoD and partners to advance detection of vulnerabilities crossing between hardware and software, a unique strength for the Tortuga products as far as I know.
Some of this work will be piloted by an outfit called AFWERX, which itself seems to be a major innovation in the way the government can work with technology. Hosted by the Air Force, this is an agile program to break out of traditional bureaucratic government bounds to drive fast innovation in multiple areas. Good to see that it can be possible to change calcified practices, even in the government.
Siemens today announced a partnership with Arm to “accelerate the future of mobility by redefining design capabilities for complex electronic systems”. I spent time with David Fritz to understand what this really means. You may remember David from our webinar PAVE360: Of SoCs, Digital Twins, and Validating Autonomous Vehicle Behavior. David is the Global Technology Manager for Autonomous and ADAS at Siemens. But first, some cut-and-paste from the press release, which is better than most:
“Siemens’ PAVE360™ digital twin environment, featuring Arm IP, applies high-fidelity modeling techniques from sensors and ICs to vehicle dynamics and the environment within which a vehicle operates. Using Arm IP, including Arm Automotive Enhanced (AE) products with functional safety support, digital twin models can run entire software stacks providing early metrics of power and performance while operating in the context of a high-fidelity model of the vehicle and its environment, helping deliver a new future of mobility.
Using Siemens’ PAVE360 with Arm automotive IP, automakers and suppliers can simulate and verify sub-system and system on chip (SoC) designs, and better understand how they perform within a vehicle design from the silicon level up, long before the vehicle is built. Arm’s automotive IP is helping to democratize the ability to create safety-enabled silicon, bringing it within reach of the entire automotive supply chain. By rethinking IC design for the automotive industry, manufacturers can consolidate electronic control units (ECUs), leading to thousands of dollars in savings per vehicle by reducing the number of circuit boards and meters of wire within the vehicle design. This in turn reduces vehicle weight which can promote longer range electric vehicles.”
David and I went through a dozen slides illustrating the importance of validating complex systems of systems. We also talked about something I have been seeing on SemiWiki: the automotive supply chain is consolidating.
Many other competitive systems companies have already brought chip design in-house (Apple, Huawei, Google, Amazon, etc.), but for car companies it is about more than just being competitive; it is all about safety and liability.
More than 100 people die in car accidents in the United States every day, and many more are injured. When autonomous cars crash, who will be held liable first? The car companies, of course.
One more quote from the press release:
“In all we do at Siemens, our goal is to provide transportation companies and suppliers the most comprehensive digital twin solutions, from the design and development of semiconductors, to advanced manufacturing and deployment of vehicles and services within cities,” said Tony Hemmelgarn, president and CEO at Siemens Digital Industries Software. “Siemens believes collaboration with Arm is a win for the entire industry. Carmakers, their suppliers, and IC design companies all can benefit from the collaboration, new methodologies and insight now sparking new innovations.”
From the Arm 2020 Predictions Report:
Robots and Autonomous Vehicles Most Eagerly Awaited Future Tech
“In 2020, 5G will open up new levels of automotive connectivity enabling carmakers to explore new infotainment experiences for passengers, including multimedia streaming and more responsive navigation. It’ll also open up benefits in vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications, with potentially life-saving features such as detailed dashboard alerts that warn of unseen dangers ahead.”
Bottom line: automotive companies MUST take control of their silicon, with environments like PAVE360, in order to develop and validate better chips for the many autonomous car systems, to ensure safety and contain liability, absolutely.
Consumer retail spending in the first 10 months of 2019 rose 8 percent year on year, ahead of income growth of roughly 6 percent. Over 10 million new jobs were created. With moderate house price growth and a positive year in domestic stock markets over the last year, the wealth impact on consumer confidence remained positive. More and more consumer purchases are now financed through installment payment schemes, through credit cards and bank debt (now well over US$ 1 trillion).
The average Chinese consumer is not yet over-leveraged (total household debt stands at only 60 percent of GDP), but the 20-30 age group who borrow most enthusiastically are getting there, pulling forward consumption from future years. These younger age groups also sustain higher current spending by not entering the property ownership market. For many, property prices are now so high it is simply not possible until much later in life. Many realize that renting is a better economic plan. A recent JLL report showed the average price of renting in top Chinese cities was less than half the average mortgage payment. At the individual city level, these trends could finally trigger a material downward adjustment of as much as 30 percent in specific city property prices in 2020.
Multiple consumer sectors suffered significant demand weakness, most notably the automotive sector and smartphones, where a 2020 rebound is unlikely. Yet many service sectors are thriving. Private education providers with quality facilities and faculty are one example, especially those with internationally focused curricula. I recently visited the brand-new Whittle School in Shenzhen. With its world-class facilities, it will attract students who would otherwise have commuted to schools in Hong Kong. Second-tier cities, such as Suzhou, are showing that they can support multiple international schools targeted at mainland students, with the Perse School from Cambridge, England adding to those present. Lego announced that it is building the world’s largest Legoland theme park in Shanghai at a cost of over US$625 million, locating it alongside Disneyland Shanghai and creating an international theme park cluster. And it has plans for many more.
Healthier eating
China’s endless food health and safety scandals, along with a growing awareness of personal health (supporting the boom in gyms in China), have led many middle-class Chinese to embrace healthier eating choices. Restaurants are adding more vegetarian options, and plant-protein-based meat replacements are gaining traction. In China, which consumes more than 50 percent of pork produced globally and has seen pork prices rise over 100 percent due to disease in the pig population, the need is for pork alternatives, rather than the focus in the US on beef substitutes. As a result, Asian companies such as Green Common from Hong Kong have taken the lead in meeting this demand.
The government is getting more involved, requiring manufacturers to provide additional labeling information. In 2020, the government will require that labels on foods show their glycemic index, a rating of how the carbohydrates impact blood glucose levels. The government is acting in an attempt to impact the explosion in diabetes and obesity across China. If the experience of launching this index in Australia provides guidance, food manufacturers will reformulate their products to reduce their GI rating and will market aggressively on the back of having done so, leading to a boom in consumer demand for lower GI products.
With China’s food delivery services providing more than 40 million meal deliveries a day and still growing 35 percent year on year, Meituan and Ele.me have a key role to play in shaping middle class food consumption in China. To meet this demand, they will be promoting healthier options and providing more information to consumers on their choices, whether it is lunch delivered to the office or dinner to the home.
Social Credit System not a big deal for individuals – yet
Government initiatives to create social credit systems attracted a lot of international attention earlier in 2019, which has since died down. In part this was because the system was neither as new nor as all-encompassing as initially described, and in part because Chinese citizens are currently mellow about the entire scheme. Data gathered in the system comes almost entirely from existing databases compiled by many agencies covering financial matters, Party membership, regulatory and legal compliance. As much as 75 percent of this data was already publicly available, perhaps just not online. For many citizens the question was more “what has changed?” Calling out individuals who fail to pay their debts on a public blacklist, making you aware that someone you might be about to do business with has defaulted in the past, seems like a good thing. As with any system, there is potential for misuse, blacklists can get too long, and they may not be objectively created. Evidence from a Jiangsu pilot shows that if government gets too heavy handed, citizens successfully push back.
And of course, there is a part of the social credit system that evaluates and blacklists government departments, with more than 20 county-level governments already having been blacklisted as “dishonest”.
Tesla Motors did it again. In September of 2019 Tesla launched its “Software Version 10.0,” a software update targeted primarily at in-vehicle infotainment systems. Principal among the extensive menu of updates was something called “Tesla Theater” which added streaming video content sources including Netflix, Hulu, Youtube, and vehicle tutorials, to the in-dash system.
Before you safety mavens get your regulations in a bunch, the streaming content is only accessible when the vehicle is in park. Still, the streaming sources join an expanding roster of games, audio sources, and even a karaoke app complete with sing-along lyrics in the dashboard of Tesla vehicles.
While Tesla always seems to be well out in front of automotive technology trends, the increased quantity and variety of in-dash content reflects a trend sweeping the industry in which infotainment screens are becoming much more lively and informative. I can’t say I am a fan of video content in the dashboard, but video was already there before Tesla launched Software 10.0, and I believe the entire industry is missing some obvious and more driving-oriented applications for in-dash video.
The National Highway Traffic Safety Administration (NHTSA) mandated in-dash live back-up camera video beginning with new cars manufactured from 2018 onward. This means, as I am fond of noting, that the NHTSA doesn’t want you staring at your dashboard screen – unless you’re driving in reverse. (It’s worth noting that the U.S. is the only automotive regulatory authority in the world that has chosen this mandatory path to safer driving.)
The backup camera mandate in the U.S. is creating a generation of befuddled drivers that now believe it is best to drive their cars backwards while looking at the in-dash display. It is just as horrible as it sounds. I still remember my first backup camera driving experience shortly after I took delivery of my new 2007 Infiniti G35 and immediately proceeded to back into the garbage cans at the end of my driveway. (More a reflection on me, I suppose.)
Before the Cameron Gulbransen Kids Transportation Safety Act was signed into law, though, moving images on dashboard screens had already become common in the form of embedded navigation systems. The CGKTSA was intended to save approximately 200 lives a year from backover crashes.
During the time preceding enactment of CGKTSA, the NHTSA was actively pursuing safety regulations for automatic emergency braking and targeting distracted driving with infotainment system design guidelines. The transition from the Obama Administration to the Trump Administration, though, saw a further erosion of the NHTSA’s regulatory leadership.
The CGKTSA was brought about by legislation (following extensive research by the NHTSA) and not by regulatory oversight. The adoption of infotainment system design guidelines was voluntary within the automotive industry as was the latest effort to broaden the adoption of automatic emergency braking technology.
Tesla’s flirtation with in-dash distractions arrives with the NHTSA at an ebb in its regulatory authority at precisely the moment that issues of automated driving, privacy, and vehicle cybersecurity are crying out for governmental guidance. CES 2020 in Las Vegas next week will find auto makers and their suppliers plunging into this regulatory vacuum with dazzling and distracting in-dash displays spreading across dashboards and designed to enable everything from singing karaoke while driving to buying a café latte on the go.
The evolution of the in-dash experience started with the onset of digital radio: adding metadata for artists and radio stations; integrating digital content such as traffic, weather, fuel, and parking information; enabling search for broadcast content; and setting the stage for integration with streaming audio sources – i.e. hybrid radio – see: Audi/RadioDNS.
In the U.S., the range of digital content now available in dashboards is staggering and includes satellite radio, broadcast radio, podcasts, content from connected mobile devices, and even audio books. In this regard it is important to note the pre-CES 2020 merger of Tivo and Xperi – two kings of metadata and content management – and the potential acquisition of iHeartMedia by Liberty Media (which owns a majority stake in SiriusXM). The objective of all of this content delivery is to reach a captive, seatbelted audience more than likely in transit to a destination for economic activity – cue timely advertising messages.
The contenders for this audience, who will be in attendance at CES 2020, include SiriusXM, Xevo, Telenav, TomTom, HERE, iHeartMedia, Cumulus, Apple, Amazon, Google, Microsoft, and, of course, every auto maker and Tier 1 supplier. (To be honest, Visa, Mastercard, Paypal, and multiple other payment players are in on this game as well.) What all of these organizations are missing, though, is the need for critical driving information in the dash.
Over the Christmas/New Year’s break I had the occasion to view Alex Roy’s “The Secret Race” about his record-setting drive across the U.S.
Car enthusiast Roy set the record with the help of extensive route planning – taking into account traffic, weather, and local law enforcement – and aerial reconnaissance in the form of a following spotter plane. In crossing the country in less than 28 hours Roy overcomes various surprises in the form of construction, some traffic, weather, and some very slow drivers – some of which might have been avoided with tools available today that were not available when Roy made his record-setting journey.
Not every driver can be assisted by an eye in the sky, but it is precisely this sort of capability that existing technologies can now approximate.
To this day car makers have failed to find a way to integrate traffic camera feeds into in-dash systems. TrafficLand, a traffic camera service provider based near my home in Northern Virginia that helps local departments of transportation manage their hundreds of traffic cameras, has an app that lets users view live traffic camera feeds from the road ahead.
To me, an in-dash traffic camera application accessible via voice interface to display traffic conditions is a more relevant and essential in-dash video experience. But it doesn’t end there. I expect consumers will want to be able to access their Ring and home Webcams in their car dashboards before long.
Ever marching to a different drummer, Tesla has added Twitch to its in-dash infotainment lineup to go with the Tesla Arcade. Tesla also enables video clips created by its cars’ Sentry Mode to be shared via the company’s mobile app – so that, too, is a means to leverage vehicle-based video.
A further step might be a full integration with other Tesla drivers to share their live dashcam info – or clips – in the event of roadside incidents that may be impacting traffic conditions. It won’t be long before Teslas – and other so-equipped vehicles – are able to report traffic violations to local law enforcement – live. Whether on a long drive or a daily commute, what driver wouldn’t want to actually SEE what is happening along their intended route – from live cameras, whether built into cars or installed along the roadside?
Suffice it to say that there is ample space in the increasingly ample dashboard displays being deployed in the automotive industry for companies to capitalize on advertising, customer engagement, collision mitigation, traffic management, and brand differentiation opportunities. CES 2020 in Las Vegas is the perfect place to assess the progress and efficacy of these gambits and place your bets.
2019 in China brought together long-running challenges, such as uncertainty over US-China tariff levels and ever more intrusive regulation of business in China, with a few unexpected ones as well: the crisis in Hong Kong and the flare-up triggered by tweets from an NBA general manager, to mention just two. Yet for many businesses, opportunities flourished throughout the year as China’s economy grew roughly 6 percent. And in multiple key industries, the government’s commitment to global leadership started to pay dividends.
2020 will offer a similar mix of evolving, often worsening, challenges. Growing separation between the US and China in technology sectors seems inevitable. While some companies will evolve to remain relevant in both markets, others will choose to focus on one. In 2020 this separation may become broader, impacting financial markets much more directly. China’s economic momentum will continue in 2020 with domestic consumption leading the way, selectively creating opportunities. If China’s priority sectors match those of your business, 2020 will be a good year to step up as the taps of government funding remain open for now.
US-China relations
Multiple areas of growing separation between the US and Chinese economies predicted in last year’s note were largely realized – investment flows, supply chains, data flows, people flows, technology procurement, standards. In all these areas, further separation will occur in 2020. One example: US government agencies such as the National Institutes of Health and the Department of Energy, not just the Department of Defense, have been presenting US university administrators with hundreds of case examples where they believe non-US academics, largely Chinese, have failed to disclose parallel funding for their research from overseas governments, along with commitments to share their IP discoveries with those governments. Those academics will likely be proactively excluded from US universities; many others will self-select out or simply not come to the US in the first place. Restrictions on investment from China into the US will shift from a focus on larger deals, which have shrunk to almost zero, to direct and indirect (i.e. through funds) investment into technology startups.
I did anticipate a year ago that we would have clarity about tariffs by now, not the ongoing uncertainty that holds back investment plans in supply chains and factories. Looking into 2020, any agreement that is finally reached seems likely to be narrow and not long lasting. Multinationals have suffered least from tariff volatility. They typically send no more than 15 percent of their China production to the US and have multiple factories around the world to which they can shift US-bound production. Almost none of these factories are or will be in the US. Smaller businesses, often foreign-owned, that focus solely on exports to the US have been hurt most.
Factories do continue to move out of China. Manufacturers are also consolidating in China, doubling down on technology in their remaining factories. Indeed, China is rapidly becoming the world center for the Internet of Things in factories. These trends preceded the US tariffs and have only been marginally accelerated by them. More non-Chinese companies than Chinese are shutting down factories in China, but not all move production out of China as they close. A good number outsource their manufacturing to a Chinese owned company producing in China, believing that the Chinese company will be lower cost than the foreign-owned factory, and just as good quality.
New areas of US-China separation will come into focus in 2020. Financial markets will be front and center. The U.S.-China Economic and Security Review Commission’s 2019 report to Congress has as its first recommendation to delist Chinese companies on US exchanges that do not meet four criteria. No Chinese company listed in the US meets all four, many won’t meet any. This threat covers around 500 companies with a cumulative market capitalization of about US$ 1 trillion (dominated by Alibaba). It was smart of Alibaba to get its secondary listing in Hong Kong in place in November 2019. Companies such as Ping An’s fintech subsidiary, OneConnect, which has announced plans to list in New York, may reconsider. After all, less than US$2 billion has been raised by Chinese companies on the NYSE and Nasdaq so far this year, down 74 percent from last year. Some Chinese tech companies may list domestically within China where listings generally achieve higher earnings multiples and Chinese regulators have quietly made it possible for companies using the Variable Interest Entity (VIE) structure to list domestically.
Technology tensions
The US and Chinese governments continue their rush to embrace greater technology separation. 2020 may be a tipping point. On one side the US government excludes Chinese companies from buying US sourced technology components (at least from being able to do so with certainty), from investing in US technology companies, and from supplying their technology products into the US.
On the other, the Chinese government has launched a fund of over US$20 billion to support Chinese independence in a broad range of manufacturing technologies, to go alongside its similarly sized fund to support developments in semiconductors.
China’s “secure and control” initiative is encouraging government departments and state-owned enterprises to buy technology without US content – perhaps 25 percent of the traditional PC and server market. Chinese manufacturers’ share of the server and storage market had already risen from around 30 percent in 2012 to 70-80 percent in 2018. It is set to rise higher. In smartphones, four Chinese brands hold more than 85 percent of the Chinese market and less than 1 percent of the US. China’s internet giants, with the exception of TikTok, are absent from the US (TikTok may not retain its presence in the US for long if US legislators sustain their focus on the company); the US giants have long been absent from China.
The pinch point in semiconductors of Taiwanese contract manufacturers who play a key role for both US and Chinese companies will become much more visible in 2020, with greater levels of government to government pressure exerted on the key companies.
One of the few business-focused outcomes from the recent 4th Plenum was a plan to establish a “new national system for making breakthroughs in core technologies under socialist market economy conditions.” This feels very similar to the state-driven industrial policies contained in “Made in China 2025”, if not yet with the quantitative targets. In some areas, China is likely to achieve its goals quickly; for example, as China still represents nearly a quarter of global manufacturing output, taking leadership in smart factories should be a no-brainer. China is turning its cities into large-scale pilots for 5G-enabled smart cities at a pace that will allow China to set de facto standards. Their products will not be accepted in the US, especially as many will require access to large-scale data sets that dwarf those that Chinese companies have been blocked from accessing in the US.
All Chinese and US tariffs could be eliminated tomorrow and it would have only a marginal impact on these trends. Both governments have embraced growing separation; the only questions are how fast it proceeds and how much pain is incurred along the way.
Semiconductors continue to surge and lead technology sectors all over the world. TSMC has always been my economic bellwether and 2019 was another great year as the TSM share price almost doubled. But it looks like the best is yet to come with TSMC significantly increasing CAPEX to cover 7nm and 5nm demand.
TSMC CEO C.C. Wei increased CAPEX from $10.5B in 2018 to more than $14B in 2019 with a big Q4 spend. Remember, TSMC builds capacity based on customer orders, not dart-board forecasting. With TSMC winning both the 7nm and 5nm popular vote, 2020 should be another blockbuster CAPEX year to backfill demand.
There are two VERY disruptive semiconductor trends to watch over the next year or three: large systems companies (Google, Facebook, Amazon, Microsoft, etc.) taking control of their silicon, and China taking control of its silicon.
Apple started it and now all systems companies in competitive markets will follow. It will be interesting to see who the big players are at CES 2020 next week and more importantly how many of them are making their own chips. My bet would be the majority of them including the automakers. It’s not really a fair bet since SemiWiki.com is the leading semiconductor design enablement portal with more than 3.25 million unique views and we get to see who reads what, when, and where they are from.
Why are systems companies dominating semiconductor design? Because they can use prototyping and emulation to get a jump on verification and software development and really tune the silicon to the system. Systems companies are also VERY competitive and can write some VERY big checks and they will not miss tape-outs or product ship dates. This is so un-fabless-like it isn’t even funny. It really is a new semiconductor world order.
Speaking of a new world order, China is also disrupting the semiconductor industry with billions of dollars invested in the “Made in China 2025” semiconductor supply chain initiative.
Think about it: China consumes more than 50% of semiconductor production worldwide yet produces only about 20% of those chips. My guess is that 2020 and 2021 will see unprecedented China chip manufacturing growth as increased memory (DRAM and NAND) manufacturing capacity comes online. Add in the motivation of political turmoil and the memory-hungry demands of mobile, 5G, and AI, and the Made in China 2025 initiative will get a major boost, in my opinion.
I will be in China again this month and am excited to see what’s new. You can Google around all you want but there is nothing like being there.
2019 was also a big year for SemiWiki.com. We unleashed SemiWiki 2.0 in June with many new cloud-based features and more to come. Traffic and member registration are again growing at double-digit rates, and we are already working on SemiWiki 3.0.
I would truly like to thank all of our bloggers, partners, readers, and registered members for your continued support. SemiWiki has been an exciting 10 year adventure and I’m looking forward to working with you all in the coming years. After spending my entire 35+ year career in semiconductors I can say that without a doubt the best is yet to come, absolutely!
In the past, I’ve focused my annual predictions on electronics – ICs and EDA – but recently I’ve turned my focus to photonics, so my 2020 predictions are primarily in this area.
Historically, photonics has been the Gallium Arsenide of technologies; it was, is and always will be the technology of the future. Analysts have forever predicted the rise of photonics; that next year as Moore’s Law ends or slows in electronics, photonics will assert itself, entering the hockey stick phase of its growth. While photonics is playing a constantly greater role in our technology ecosystem with an impressive growth rate, the predicted explosive growth hasn’t yet happened.
Why not? There are a few reasons. First, photonics doesn’t adhere to Moore’s Law: The wavelength of light is the wavelength of light. It is a constant; it just doesn’t halve every two years, so the phenomenal gains in electronics driven by Moore’s Law just don’t apply to photonics.
Next, as always, engineers are clever. Engineers constantly breach barriers in electronics once considered insurmountable. So, applications where photonics were projected to replace electronics remain filled by more and more clever electronic designs instead. The replacement of electronics by photonics may still be inevitable, but the timeline keeps moving out.
Finally, the photonics ecosystem has not yet evolved sufficiently to support a large-scale commercial market. Whereas electronics have evolved over the past half century into the sophisticated, well-oiled design and manufacturing ecosystem of today, this evolution has not yet occurred in photonics. Today’s photonics ecosystem still most closely resembles the electronics ecosystem of the early 1980s.
2019 saw the acquisition of the major photonics suppliers by even more major players, such as Cisco’s acquisition of Luxtera, and now Acacia Communications, along with II-VI’s acquisition of Finisar, the world’s leading supplier of optical communications products, and Broadcom’s reacquisition of their optical transceiver assets from Foxconn. Many see this trend as an acknowledgment of the coming of age of photonics, and the need for major telecommunications providers to offer leading photonics solutions.
What about this year? Will photonics come into focus in 2020? I’ll leave the hockey stick inflection point predictions to analysts who are paid for this sort of thing, but I will predict several trends with impact to the timing of that inflection point.
Photonics is becoming relevant, then prevalent, and finally dominant at shorter and shorter distances. Today, telecommunications delivered over kilometers to your home and business travel via fiber optics, an application dominated by photonics. Now photonics has moved into the data center. Massive hyperscale data centers across the globe struggle with power consumption and cost, heat, bandwidth, and data latency. Replacing copper wire with fiber optics addresses all these issues. Compared to copper, fiber is cheaper, faster, lower latency, higher bandwidth, and consumes less power, thus lowering heat and power costs.
The takeover of fiber optics between racks in the data center is largely complete, and fiber has moved on to interconnecting servers in the same rack. So photonics has already moved from dominance at kilometer distances, to prevalence at tens-of-meters distances, to relevance at single-meter distances. In 2020, Photonic Integrated Circuits (PICs) will become more commonly commercially available, making photonics relevant at millimeter distances. Work is underway to integrate the photonics, including the laser, on-chip with electronics, moving photonics relevance down to microns.
As Ethernet data transmission speed continues its migration from 100G to 200G in 2020, photonics becomes more attractive in the transceivers required at either end of the optical fiber. The build-out for 100G is largely complete. In 2020 the transition to 200G will be well underway, with early adopters moving onto 400G. With clever engineering, electronics can deliver at 100G, but photonics is competitive here, and many 100G photonics transceiver designs are available on the market. We’ll see more cleverness from IC designers, but at 400G, electronics will lose more of its grip in the transceiver market and photonics will begin its move from relevance to prevalence. By the time we reach 800G and 1T (well past 2020), photonics will be in full dominance, with nary an electronics transceiver to be found.
The dominance of photonics in the data center will be hastened by the FANGs (Facebook, Amazon/Apple, Netflix, Google) building photonic design teams focused on photonic transceivers tuned to their own specifications, a new trend in 2020. Operating massive data centers, they will benefit tremendously from photonic designs crafted to their own specific needs, just as we have seen as the FANGs design their own ICs. We may not see fruit of this activity in 2020, but it will significantly increase the world’s photonic design capacity and hasten the evolution of the commercial photonics ecosystem.
Dominated by small niche commercial or R&D fabs such as SMART Photonics, LioniX, Ligentec, imec, Leti, and AIM, today’s photonics foundries are generally geared toward R&D or providing MPWs rather than large commercial runs. While they are solid foundries, advanced in their photonics offerings (such as Indium Phosphide for lasers), these fabs do not today have the capacity to drive a large commercial market, and they have not yet had the opportunity to develop the extreme customer support processes that large semiconductor foundries have built over decades.
The FANGs are key customers of the world’s leading semiconductor foundries. These foundries are taking notice of the increased photonic design activity and have entered, or are contemplating entering, the photonics business. This will hasten its commercialization as these fabs apply their production knowledge, experience, and skills in building mature ecosystems.
Over the last couple of years, leading semiconductor foundries such as TowerJazz and GlobalFoundries commenced servicing the photonics business. 2020 will see other major semiconductor foundries (beyond those whose name is two words run together) entering the photonics business. The new entry of these foundries will serve to legitimize silicon photonics as a commercial business. One attractive advantage that photonics offers semiconductor foundries is the lack of need for leading-edge technology, so there is no requirement for the massive R&D and capital investment of electronics. Instead, photonics makes use of fully capitalized semiconductor fabrication equipment to return impressive margins.
One indicator of the maturing of photonics is the emergence of process development kits (PDKs) from foundries. The first photonic PDKs emerged about two years ago, and they are starting to become prevalent as foundries deliver PDKs targeting a variety of design tools. These PDKs tend to be primitive in comparison to libraries delivered by semiconductor foundries, but they are a significant and important step in the maturing of the commercial photonics ecosystem requiring a strong partnership between foundries and photonic design automation (PDA) companies. Together, they are producing new PDKs and advancing the state of existing PDKs. As more PICs are manufactured and measured, sufficient data is becoming available for statistical analysis. 2020 will see the emergence of statistical-based PDKs, enabling Monte Carlo and corner statistical analysis in more advanced PDA simulation tools. This will lead to more robust designs, with a new focus on manufacturability, another requirement for the commercialization of photonics.
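To make the statistical-PDK idea concrete, here is a minimal Monte Carlo sketch in Python. It assumes a hypothetical waveguide-width variation and a hypothetical resonance-shift sensitivity (none of these numbers come from any real PDK or foundry) and estimates parametric yield against an invented spec limit.

```python
import random

# Hypothetical illustration of the kind of Monte Carlo analysis a
# statistical PDK enables: vary waveguide width around the foundry's
# nominal process and estimate yield against a resonance-shift spec.
# All numbers here are invented for illustration, not from any real PDK.

random.seed(0)

NOMINAL_WIDTH_NM = 500.0      # assumed nominal waveguide width
SIGMA_WIDTH_NM = 5.0          # assumed process variation (1-sigma)
SENSITIVITY_NM_PER_NM = 1.0   # assumed resonance shift per nm of width error
SPEC_LIMIT_NM = 8.0           # assumed max tolerable resonance shift

def trial():
    # draw one fabricated width, check the resulting resonance shift
    width = random.gauss(NOMINAL_WIDTH_NM, SIGMA_WIDTH_NM)
    shift = abs(width - NOMINAL_WIDTH_NM) * SENSITIVITY_NM_PER_NM
    return shift <= SPEC_LIMIT_NM

N = 10_000
yield_est = sum(trial() for _ in range(N)) / N
print(f"Estimated parametric yield: {yield_est:.1%}")
```

Corner analysis is the same idea with the variables pinned at their extremes instead of sampled; a statistical PDK supplies the distributions that make either analysis meaningful.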
The established EDA vendors are taking note of the emerging photonic-electronic market. They are delivering design tools targeted to this market and forming key alliances with the leading PDA companies to provide a complete integrated design flow. Last year, Mentor introduced LightSuite Photonic Compiler while leveraging their Tanner tools to provide schematics and layout. Cadence introduced curvilinear capability via CurvyCore to enable its industry-leading custom design platform, Virtuoso, for photonics. Both Mentor and Cadence have integrated their design flows with the leading photonic simulation provider, Lumerical. For example, Cadence has provided co-simulation capabilities, enabling the entire design flow to be driven through the Virtuoso cockpit. Synopsys has pursued more of a go-it-alone strategy.
I predict that the entry of the major EDA vendors portends higher price tags for PDA tools. Average selling prices for popular EDA tools are orders of magnitude higher than for PDA tools. This imbalance is not sustainable long term, as it will hold back needed investment in PDA and the ability of the PDA companies to compete in the new EPDA environment. Though the change won’t be sudden, it will be consistent.
While integrated Electronic-Photonic Design Automation (EPDA) flows emerged last year, in 2020 they will become more sophisticated with the addition of statistical and design for manufacturing (DFM) capabilities. Statistical considerations will require far more compute power, so in 2020 we will also see High Performance Computing applied to PDA, with Amazon AWS and Microsoft Azure becoming significant players delivering photonics in the cloud by making use of all those photonics in their data centers.
In contrast to electronics, a photonic design consists of just a few meticulously crafted components. Many of these components can be found in the PDKs delivered by foundries, but each leading-edge photonic design will always include some critical components that the more generalized foundry PDKs cannot deliver. This creates an opportunity for a few well-positioned companies to establish a photonic IP (PIP) business. Well-managed companies with extreme photonic design capability and focus, and inexpensive access to design tools, are likely to spearhead the emergence of this market. With their unrelenting focus on photonic design, these companies will deliver superior designs. Companies looking to deliver leading-edge photonic designs will engage these PIP providers to outsource their component design in order to focus their own resources on other value-added areas such as the overall PIC design. In its infancy, the PIP business will likely hold several similarities to custom design services.
Breakthroughs in photonics design methodology will provide higher quality, more manufacturable designs and will lower the barrier so that photonic design no longer requires a PhD in physics. Better designs will push forward the applications that photonics can compete in and win at. More qualified designers will result in a greater ability for companies to staff their photonics design team, resulting in greater competition leading to better products and faster evolution.
Already we are seeing the impact of Photonic Inverse Design from sources like Stanford, and Lumerical cooperating with the open-source community. We are starting to see component design completed much more simply, with much improved Figures of Merit, over much shortened design cycles (days compared to months).
We are seeing improvements even on the best published designs, often completed in a matter of days. We are seeing orders-of-magnitude improvements in components. Photonic Inverse Design’s simplified, automated design methodology will replace today’s manual, iterative process and will be applied to a wide variety of photonic components in 2020. Photonic Inverse Design’s impact on photonics will be similar to the impact of logic synthesis on IC design in the 1980s. It will widen the circle of qualified photonics designers and hasten time to market for photonics designs. I think of the transition to Photonic Inverse Design as analogous to raising the level of abstraction that designers work at. Just as raising the level of abstraction of IC design unleashed a torrent of IC designer productivity, I expect to see similar improvements in the productivity of photonics designers.
Applications:
Transceivers: 2020 will continue the trend of photonics’ takeover in the data center. In 2020, this will become more pronounced as we move from 100G to 200G and on to 400G Ethernet transmission speeds.
LiDAR: In 2020, we will see the introduction of multiple photonics-driven LiDAR designs. LiDAR is a key technology for autonomous vehicles, but it’s not feasible to mount onto everyday passenger cars the rotating cans seen on today’s prototype autonomous vehicles, and it’s not feasible to pass the thousands-of-dollars cost of those rotating cans onto the future everyday buyers of autonomous passenger cars. A large number of startups are focused on reducing the size (to a deck of cards) and cost (by an order of magnitude) of LiDAR, and several of them will reveal their designs in 2020. Additionally, we will see a photonics-based LiDAR design from at least one established, leading LiDAR company.
LiDAR is another application impacted by the cleverness of engineers. The position of Tesla’s Elon Musk (or is it Elon Musk’s Tesla?) is that radar+cameras will be good enough for autonomous vehicles, so there is no need for LiDAR. The driving question is a race between lowering the cost & size of LiDAR vs. improving the capabilities of radar+camera. The winner of this race will determine the fate of a volume driver of photonics in autonomous vehicles. The checkered flag will be waved well past 2020.
5G: In 2020, we will see the build out of 5G in earnest. This will drive volume in PICs as new photonics-friendly technologies such as NG-PON2 are deployed in both the front haul and the back haul. There will be a hockey stick inflection in 5G also as the second part of 5G, the millimeter wave for short range within buildings, is deployed. This more extensive 5G buildout will not occur in earnest in 2020.
Sensors: Boring perhaps, but an application where photonics is making steady progress, and that progress will continue through 2020. Medical is a particularly interesting area for sensors with strong opportunity for photonics. The progress in medical will be paced more by legal regulations than by technology, and 2020 will not see a breakthrough in this area.
AR/VR: With some credibility, it is said that the easiest way to predict our technology future is to watch Star Trek. All of Star Trek’s technology will eventually come to pass. That’s good news for photonics. If we are ever to cavort in the holodeck, photonics will play a big role.
Quantum computing: Quantum is another application that will drive photonics adoption. Quantum is challenging to predict (I can’t tell whether it is here or there. . . ). 2020 will NOT be the year of Quantum, but I do predict there will be at least one important quantum announcement that will blow everyone’s mind. The announcement will come from both a large established company and a startup, existing in both places at the same time.
Summary:
Photonics, the technology of the future, will see solid advancement in 2020. Growth rate will be impressive, with abundant applications coming into focus. Growth will be tethered by the cleverness of engineers extending electronics, and the evolution of the photonics ecosystem. Signs of maturity are becoming more prevalent as commercial foundries join the fray and design automation matures. 2020 is the year that the commercialization of photonics comes into focus.
IEDM 2019 had the theme “Innovative Devices for an Era of Connected Intelligence,” of which MRAM is a leading contributor. Following a very informative Plenary Session, Monday afternoon led off with Session 2: Memory Technology – STT-MRAM. This session had seven important STT-MRAM papers describing the progress of this technology, which are summarized below. Especially highlighted are two papers showing high-performance devices suitable for Last Level Cache implementation: the IBM-Samsung MRAM Alliance achieved reliable 2 ns switching at the Write Error Rate (WER) floor and a single device with a WER of 1e-11, while Intel achieved very high endurance of 1e12 cycles with 4 ns read time and a retention time of 1 second at 110C. MRAM pioneer Everspin demonstrated a 1Gb stand-alone DDR4-compatible MRAM product in 28nm. Samsung achieved a 1Gb embedded eMRAM in 28nm FDSOI. GlobalFoundries demonstrated a device capable of 125C operation and magnetic immunity of 600Oe. Samsung developed a process capable of hybrid memories implementing either high speed or high retention in a single chip. TSMC’s eMRAM supports -40 to 150C operation with magnetic shielding. In addition, there were several other MRAM-related papers in other sessions, and an MRAM poster session jointly sponsored by IEDM and the IEEE Magnetics Society.
2.1: Demonstration of a Reliable 1Gb Standalone Spin-Transfer Torque MRAM for Industrial Applications
Sanjeev Aggarwal, et al, Everspin Technologies, Inc.
Long an MRAM product development leader, Everspin demonstrates their stand-alone 1Gb STT-MRAM chip in 28nm. This paper describes productization and superior performance operation of the 1Gb 1.2V DDR4 STT-MRAM in 28nm CMOS shown in Fig. 1 with capability for industrial temperature range applications of -35C to 110C.
Fig. 1. Top down images of the Everspin 40nm 1.5 V DDR3 256 Mb (top) and the 1.2V DDR4 28nm 1 Gb (bottom) STT-MRAM product dies.
MRAM devices are implemented as magnetically-programmable resistors between two BEOL metal layers as shown in Fig. 2.
Fig. 2. Schematic diagrams showing integration of the pMTJ bits in the 1 Gb array and adjacent logic areas in the chip’s BEOL metallization.
The Magnetic Tunnel Junction (MTJ) consists of a fixed magnetic layer with high perpendicular magnetic anisotropy, an MgOx tunnel barrier and a magnetic free layer. Upon application of a critical voltage, a current of spin-polarized electrons tunnels through the MgOx barrier to flip the polarization of the free layer to be in a parallel or anti-parallel magnetic state, showing low or high resistance respectively to a read current. The free layer can be optimized for different applications. During write, no backhopping or switching abnormalities were observed indicating a large window for switching reliability for the industrial application temperature range from -35C to 110C. DIMM cycling indicated an endurance lifetime greater than 2e11 cycles. Fig. 3 shows data retention as a function of temperature of 10 years at 85C or 3 months at 100C.
Fig. 3. Time to Failure vs. temperature for Data Retention (DR) bakes of a collection of 1 Gb dies. Solid line fit indicates DR of 10 years at 85°C and 3 months at 100°C.
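For readers curious about the arithmetic behind retention bakes like Fig. 3, here is a rough Arrhenius extrapolation in Python using only the two retention points quoted above. A real fit uses many bake conditions, so the fitted activation energy and the extrapolated value here are illustrative, not Everspin's numbers.

```python
import math

# Back-of-envelope Arrhenius fit from the two reported retention points
# (10 years at 85C, 3 months at 100C), then an illustrative
# extrapolation to a lower temperature.

K_B_EV = 8.617e-5  # Boltzmann constant, eV/K

def kelvin(celsius):
    return celsius + 273.15

# Reported (retention in years, temperature) points
t1, T1 = 10.0, kelvin(85)
t2, T2 = 0.25, kelvin(100)   # 3 months

# Arrhenius model t = t0 * exp(Ea / (kB*T)); solve for Ea from two points
ea_ev = K_B_EV * math.log(t1 / t2) / (1.0 / T1 - 1.0 / T2)
print(f"Fitted activation energy: {ea_ev:.2f} eV")

def retention_years(celsius):
    # extrapolate from the 85C point using the fitted barrier
    return t1 * math.exp(ea_ev / K_B_EV * (1.0 / kelvin(celsius) - 1.0 / T1))

print(f"Extrapolated retention at 70C: {retention_years(70):.0f} years")
```

The steepness of the exponential is why a 15C bake difference turns 10 years into 3 months, and why modest derating buys enormous retention margin.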
2.2: 1Gb High Density Embedded STT-MRAM in 28nm FDSOI Technology
Lee et al, R&D Center, Samsung Electronics Co.
Based on the already-shipping 8Mb 28nm FD-SOI eMRAM product, Samsung announces their embedded 1Gb product demonstrating read and write operation from -40C to 105C. For high performance and a stable yield of over 90%, a temperature-compensated write driver and write assistor were implemented. Improved endurance up to 1e10 cycles was achieved to broaden eMRAM applications to eDRAM replacement. To guarantee high yield, 2b ECC was implemented. The MTJ stack is based on MgO/CoFeB. With an operating voltage of 1.0V and a 50ns read pulse, over an operating temperature range of -40C to 105C, 10-year retention at 105C and endurance of 1e6 cycles are demonstrated. The unit cell size is 0.036 um2. MTJ stack engineering gave a higher TMR of over 200% and an improvement in MTJ efficiency (retention divided by switching current). Fig. 4 shows the vertical architecture and the TEM picture of the MTJ cell array.
Fig. 4. Vertical structure and TEM images of MTJ cell array with Bottom Electrode Contact (BEC) embedded in 28nm FDSOI logic process.
Performance is illustrated by the room temperature shmoo plot in Fig. 5, showing the product spec VDD of 1.00V and the read pulse of 50ns.
Fig. 5. Shmoo plot for 1Gb chip as a function of read condition at room temperature.
The tuneability of the process to yield different products with 10-year data retention temperature and corresponding endurance is shown in Fig. 6.
Fig. 6. Correlation between endurance and 10 year data retention temperature properties. With improved efficiency, retention temperature can be enhanced for the same endurance cycle.
2.3: 22nm FD-SOI Embedded MRAM Technology for Industrial-grade MCU and IOT Applications
B. Naik, et al, GlobalFoundries
The 40Mb, 0.8V embedded MRAM with 2b ECC achieved reliable operation from -40C to 125C with 5x solder reflows, 400C BEOL flows, 1e6 endurance cycles, and stand-by magnet immunity of 600 Oe at 105C for 10 years. The magnetoresistance (MR) ratio is defined as (Rap-Rp)/Rp, where Rp is the parallel (state “0”) resistance and Rap is the anti-parallel (state “1”) resistance. Resistance distributions showing a high MR ratio and the figure-of-merit MR/σ(Rp) are shown in Fig. 7.
Fig. 7. Bit-cell resistance distributions of Rp and Rap showing separation of 28 σ(Rp).
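A quick numeric illustration of the read-window metrics just quoted: the MR ratio (Rap-Rp)/Rp and the separation of the two resistance distributions in units of σ(Rp). The resistance values and sigma below are invented for illustration; the paper reports a 28 σ(Rp) separation.

```python
# Hypothetical bit-cell resistance numbers (ohms), chosen only to show
# how the MR ratio and the sigma-separation window are computed.
rp_mean, rp_sigma = 5000.0, 100.0    # parallel-state mean and spread
rap_mean = 10000.0                   # anti-parallel-state mean

mr_ratio = (rap_mean - rp_mean) / rp_mean          # (Rap - Rp) / Rp
separation_sigmas = (rap_mean - rp_mean) / rp_sigma  # window in sigma(Rp)

print(f"MR ratio: {mr_ratio:.0%}")              # 100% in this example
print(f"Window: {separation_sigmas:.0f} sigma(Rp)")
```

A wide separation in sigma units is what lets a simple sense amplifier distinguish “0” from “1” with negligible read error, even across temperature.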
Write shmoo data for AP->P at 37 ticks and P->AP at 28 ticks for 200ns write pulse at 0.8V at -40C is shown in Fig. 8.
Fig. 8. Write shmoo for AP->P at 37 ticks and P->AP at 28 ticks for 200ns write pulse.
Read shmoo is shown in Fig. 9 showing operation at 19ns read pulse.
Fig. 9. Read shmoo showing read operation at 19ns.
Projected standby magnetic field immunity at 105C for 10 years is 600Oe. Standby magnetic field immunity at 10 years as a function of temperature is shown in Fig. 10.
Fig. 10. Standby magnetic field immunity as a function of temperature.
In active mode, the magnetic immunity of 500 Oe is limited by the endurance margin.
2.4: 2 MB Array-Level Demonstration of STT-MRAM Process and Performance Towards L4 Cache Applications
Juan G. Alzate, et al, Intel Corporation
L4 cache-level application performance and reliability is shown for a 2 MB STT-MRAM array. This requires high density, high bandwidth and high endurance across industrial temperatures of operation. The required specifications for L4 cache application of an STT-MRAM are shown in Table I.
Table I. Target specs for STT-MRAM in an L4 cache application.
A bandwidth of >256 GB/sec and an array density of >10Mb/mm2 are needed to be an SRAM or eDRAM replacement. The density requirement as shown in Fig. 11 limits the bitcell pitch and access transistor size and consequently restricts the maximum current available for STT write, thus limiting the data retention time to one second at the maximum operating temperature of 110C.
Fig. 11. Tighter bitcell pitch required for L4 cache compared to the eNVM application.
The write endurance requirement of 1e12 cycles on the other hand limits the maximum write current to ensure endurance fails remain within ECC-correctable limits. To achieve an acceptable ECC correctable 1 Gb array Bit Error Rate (BER) of <100 dpm (probability of 1Gb array fail of 1e-4), the required fixed and random Write Error Rate (WER) errors are shown in Fig. 12 for two different architectures, 128b words with Triple Error Correction (TEC) and 512b with Double Error Correction (DEC). The random BER needs to be 1e-8 to 1e-10 for 1e12 write events.
Fig. 12. ECC calculation for allowed BER of both fixed location fails (dashed) and random fails (solid) vs 1Gb array fail probability (ECC uncorrectable) assuming either 128b words with Triple Error Correction (TEC) (blue) or 512b words with Dual Error Correction(DEC) (orange).
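The ECC arithmetic behind Fig. 12 can be sketched directly: for a random per-bit write error rate p, the probability that a single word write is uncorrectable is the binomial tail beyond the correction capability. The word sizes and correction strengths follow the two architectures named in the paper (128b with TEC, 512b with DEC); the value of p is an example, not a paper result.

```python
import math

def uncorrectable(word_bits, correctable, p):
    """P(more than `correctable` bit errors in one word write).

    Summed directly over the tail terms so tiny probabilities are not
    lost to floating-point cancellation against 1.0.
    """
    return sum(
        math.comb(word_bits, k) * p**k * (1 - p) ** (word_bits - k)
        for k in range(correctable + 1, word_bits + 1)
    )

p = 1e-8  # example random per-bit write error rate
for word, t in ((128, 3), (512, 2)):
    print(f"{word}b word, {t}-error correction: "
          f"P(uncorrectable write) ~ {uncorrectable(word, t, p):.2e}")
```

Multiplying the per-write uncorrectable probability by the number of word writes over the array lifetime is what maps the required random WER of roughly 1e-8 to 1e-10 onto the <100 dpm array target.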
The 55nm MTJ needs a reliable stack optimization and reactive ion etch (RIE) process. Defective fails were found to be shorting modes (hard shorts and soft shorts) that reduce the resistance and TMR. Failing bits at time=0 are fused out. Acceptable WER levels and shorter write pulses require overdriving the MTJ, limited by the available drive current and endurance considerations as shown in Fig. 13.
Fig. 13. Write current distributions limited by available drive current and endurance requirements and read disturb requirements.
The minimum current is that required by read disturb considerations and improves as temperature decreases, hence read disturb is measured at 95C by hammering full words with 1e7 reads. Write Error Rate curves are shown in Fig. 14 for MTJs scaled from the NVM application and the optimized L4 cache device with 10ns write pulse shown in blue.
Fig. 14. Write Error Rates (WER) for different devices, showing the optimized L4 cache MTJ in blue.
The critical condition for WER is at -10C, but as temperature increases, MTJs become easier to write, and at higher temperatures the VCC can be reduced. Endurance measurements are done at 105C due to thermal activation of defects causing MgO dielectric breakdown.
2.5: A novel integration of STT-MRAM for on-chip hybrid memory by utilizing non-volatility modulation
J.-H Park, et al, Semiconductor R&D Center, Samsung Electronics Co. Ltd.
Samsung illustrates that it is possible to have either high-retention or high-speed STT-MRAM hybrid memory in separate zones in a single 8Mb chip in 28nm FD-SOI logic, as illustrated in Fig. 15.
Fig. 15. Illustration of on-chip hybrid memory which can have two different sub-zones having MTJ arrays of modulated non-volatility: Zone I has relaxed non-volatility for high speed operation and Zone II has strict non-volatility for high retention requirements.
Retention was demonstrated at 10 years at 220C. For high-speed operation improvements were made in TMR, short fail probability, overdrive and write error rate. By tailoring the magnitude of perpendicular magnetic anisotropy (PMA) of MTJs without modifying the deposition process, the non-volatility in selected areas can be manipulated. Fig. 16 shows 10-year data retention temperature as a function of MTJ switching current.
Fig. 16. 10-year data retention temperature as a function of MTJ switching current.
To enable high-speed operation, wide read and write margins are required. Read margin is increased by achieving higher TMR at low RA while minimizing short failures. Two different MTJ processes, Process A and Process B, are compared. Wider write margin is achieved by higher breakdown voltage (shown in Fig. 17), lower switching voltage, a wider voltage margin between read and write, and tighter distributions.
Fig. 17. Breakdown voltage as a function of MTJ resistance.
Fig. 18 shows write shmoo plots for 8Mb eMRAM macros integrated with the two types of MTJs of Process A and Process B, respectively. MTJs of Process A pass with much reduced write fail at the shorter pulse-width condition.
Fig. 18. Room temperature write shmoo plots as a function of pulse width and bitline voltage for two different processes, Process A (a) and Process B (b).
By implementing a highly tunable diversity of performances in a single chip as if multiple heterogeneous memories were embedded, both high performance and high retention memories can be implemented in the same chip, forming the hybrid memory. This is done by modulating the PMA energy to manipulate the non-volatility of MTJs.
2.6: Spin-transfer torque MRAM with reliable 2 ns writing for last level cache applications
Hu, et al, IBM-Samsung MRAM Alliance
Reliable 2 ns and 3 ns switching with two-terminal devices, as opposed to the low-density, three-terminal SOT (Spin Orbit Torque) devices, enables fast and dense MRAM products for Last Level Cache (LLC) applications. Reliable 2 ns switching was achieved for an STT-MRAM with 100% WER yield at a 1e-6 write-error floor using a 49nm CD MTJ.
In Fig. 19, switching current increases as pulse width decreases for two different free-layer designs, Stack1 and Stack2, annealed at 400C for 60 minutes.
Fig. 19. Switching current vs pulse-width curves of two stacks with different free-layer materials each showing the thermally activated longer pulse width regime and the shorter pulse width of the precessional switching regime.
For long write pulses of 10 ns and above, the switching is thermally activated, but for short pulses below 10 ns it is in the precessional switching regime governed by the conservation of electron spin angular momentum. LLC applications requiring write pulses <10ns operate in the precessional switching regime determined by the free-layer material properties. Shorter pulse widths show a steep increase in switching current, degradation of the WER slope, and the occurrence of WER anomalies, all of which are addressed through materials optimization.
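The two regimes can be sketched with the standard macrospin expressions: thermally activated switching for long pulses, Ic(t) = Ic0·(1 − ln(t/τ0)/(Eb/kT)), and a precessional regime for short pulses where Ic(t) ≈ Ic0 + Q/t, the Q/t term reflecting conservation of spin angular momentum. All parameter values below are illustrative (only the 55 kT median barrier comes from the paper), and the piecewise crossover is a crude simplification of a smooth transition.

```python
import math

# Illustrative macrospin parameters; not IBM-Samsung device values
# except the 55 kT median energy barrier reported in the paper.
IC0_UA = 50.0        # hypothetical intrinsic critical current, uA
EB_OVER_KT = 55.0    # energy barrier in units of kT
TAU0_NS = 1.0        # attempt time, ~1 ns
Q_UA_NS = 400.0      # hypothetical angular-momentum constant, uA*ns

def switching_current_ua(pulse_ns):
    if pulse_ns >= 10.0:
        # thermally activated regime: weak logarithmic dependence
        return IC0_UA * (1.0 - math.log(pulse_ns / TAU0_NS) / EB_OVER_KT)
    # precessional regime: current rises steeply as ~1/t
    return IC0_UA + Q_UA_NS / pulse_ns

for t in (100.0, 10.0, 3.0, 2.0):
    print(f"{t:5.0f} ns pulse -> ~{switching_current_ua(t):.0f} uA")
```

The 1/t term is why halving the write pulse from 3 ns to 2 ns costs far more current than halving it from 100 ns to 50 ns, and why free-layer materials, not circuit tricks, set the LLC write budget.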
254 devices fabricated with free-layer type I having a nominal size of 49nm and median energy barrier Eb=55kT reached the required 1e-6 WER floor with 2 ns write pulses, illustrated in Fig. 20. A single device with CD=49nm and 2 ns write pulses reached the 1e-11 WER floor.
Fig. 20 (a) showing WER as a function of write voltage reaching the required 1e-6 error floor and showing the shape and duration of the 2ns pulse with a FWHM of 1.7ns.
In a test of smaller 36nm MTJs, all 256 devices tested with 3ns write pulses reached the 1e-6 error floor and 242 of 256 devices tested with 2 ns write pulses reached the 1e-6 error floor for W0 operation while 228 reached the required error floor for W1 operation. Reference layer WER anomalies known as backhopping were observed.
2.7 22nm STT-MRAM for Reflow and Automotive Uses with High Yield, Reliability and Magnetic Immunity with Performance and Shielding Options
J. Gallagher, et al., Taiwan Semiconductor Manufacturing Company
A 32Mb embedded STT-MRAM in 22nm was produced with a cell area of 0.046 um2, accommodating MTJs of varying CDs for different retention and performance requirements. The technology supports 6x solder-reflow capability and -40C to 150C operation with data retention >10 years. The most recent process gave zero median time-zero (t0) die bit fails per wafer, the main improvement being the elimination of MTJ shorting defects. The main challenge for high yield at 150C is the reduction of the read window due to the falloff of TMR with temperature, as shown in Fig. 21.
Fig. 21 Read window reduction due to falloff of TMR at temperature
Due to the stochastic nature of magnetic switching, write-verify-write is used, where the first shots incorporate lower-amplitude write pulses, both for power savings and to minimize endurance stress. If multiple low-amplitude shots do not result in a successful write, a final high-amplitude write pulse may be needed to achieve high yields. At 25C, all cells were written successfully with one shot, whereas at -40C, 0-15% of the dice needed a second shot. Solder-reflow reliability was demonstrated through six simulated reflow cycles, equivalent to 10-year retention at 225C. Since endurance failure rates are highest under low-temperature cycling, 1e6 write cycles were tested at -40C; the resulting 0.029 ppm fails were within the 1 ppm margin for ECC. There was no change in parallel or anti-parallel cell read current distribution after 100K cycles at -40C, as shown in Fig. 22.
Fig. 22 Showing no change in either parallel (Rp) or anti-parallel (Rap) cell read current after 100K cycles
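The write-verify-write flow described above can be sketched as a simple control loop. The controller interface (write_pulse, read_bit), the shot amplitudes, and the toy success probabilities below are all hypothetical — real hardware sequencing is far more involved — but the escalation logic is the point: cheap low-amplitude shots first, a high-amplitude shot only when verification fails.

```python
import random

LOW_V, HIGH_V = 0.4, 0.6   # assumed low/high write amplitudes, volts
MAX_LOW_SHOTS = 2          # low-amplitude attempts before escalating

def write_pulse(cell, value, amplitude):
    # Toy stochastic cell: higher amplitude -> higher write success probability.
    p_success = 0.95 if amplitude < 0.5 else 0.999999
    if random.random() < p_success:
        cell["state"] = value

def read_bit(cell):
    return cell["state"]

def write_verify_write(cell, value):
    """Low-amplitude shots save power and endurance stress; verify after each."""
    for _ in range(MAX_LOW_SHOTS):
        write_pulse(cell, value, LOW_V)
        if read_bit(cell) == value:   # verify step
            return True
    # Final high-amplitude shot for the rare stubborn cell.
    write_pulse(cell, value, HIGH_V)
    return read_bit(cell) == value

cell = {"state": 0}
print("write succeeded:", write_verify_write(cell, 1))
```

Because most cells succeed on the first low-amplitude shot (as the 25C single-shot result above suggests), the expensive high-amplitude pulse is exercised only on the tail of the distribution.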
Read disturb rates showed < 1ppm for 1e12 cycles, as shown in Fig. 23 as a function of bitline bias voltage.
Fig. 23 Read disturb rates showed < 1ppm for 1e12 cycles, as a function of bitline bias voltage.
Investigations of magnetic immunity showed stand-by bit error rates for packaged MRAM arrays to be below 1ppm BER for 10-year exposures of 1100, 750 and 600 Oe at 25C, 85C and 125C respectively as shown in Fig. 24.
Fig. 24. Packaged MRAM arrays below 1ppm BER for 10-year exposures of 1100, 750 and 600 Oe at 25C, 85C and 125C respectively.
In-package shielding was used to protect against a tampering attack with a 3.5kOe magnet. The failure rate of an unshielded sample was ~30% after ~one second, whereas the shielded part showed <1 ppm after 80 hours at 25C, a sensitivity reduction factor of >1e6.
Parts with smaller CDs were used for higher performance, trading off solder-reflow capability while still achieving very high retention of >10 years at >150C. Tables II and III show read and write performance for a 0.038um2 cell. Table II shows a read time and voltage shmoo at 125C, demonstrating a 6ns read cycle.
Table II. Shmoo showing read pulse width and bitline voltage at 125C
Table III shows bit line write voltage and programming pulse width shmoo for multi-shot programming at -40C. The smaller CDs achieved endurance of better than one ppm after 1e9 write cycles at -40C.
Table III. Shmoo showing bitline write voltage vs pulse width for multi-shot programming at -40C.
My Lyft driver in San Jose thought his Hyundai had "autopilot," alluding, I suspected, to Tesla Motors' feature of the same name, which has placed that company at the forefront of driving automation development and made it the focal point of fatal crash investigations. Before either of us got hurt I gently disabused my driver of his dangerous delusion, pointing out that his car was likely equipped with lane keeping technology and, possibly, adaptive cruise control and/or automatic emergency braking.
All the driver knew was that on different occasions his car, on its own, had avoided colliding with other cars, primarily by slowing or stopping.
This is the conundrum facing the automotive industry on the cusp of a new decade and another Consumer Electronics Show (2020) opening within a week in Las Vegas. How to make cars safer without making drivers less careful? My Lyft driver was a newly-minted fan of collision avoidance technology without really understanding why or how it worked.
The issue seems relatively benign on the surface, but it touches the core marketing challenges of making cars safer without making them too expensive, and defining an evolutionary path to fully autonomous driving. Strategy Analytics research has routinely shown that safety technology is in demand from consumers. It is something consumers are looking for and willing to pay for in a new car.
But safety is being redefined as auto makers and regulators shift the focus from passive safety (airbags, seatbelts, child restraints, etc.) to active safety specifically designed to avoid collisions, by allowing on-board vehicle systems to seize control of the car – under appropriate circumstances.
Nvidia opened this Pandora’s box at CES 2019 with the introduction of its DRIVE Autopilot system, described by the company as “Level 2+.” The DRIVE Autopilot is intended to integrate multiple sensor suites to deliver a variety of assisted driving functions including lane keeping, driver monitoring, and adaptive cruise control while being scalable to higher levels of automated driving.
In its own words, Nvidia described the DRIVE AutoPilot as integrating “for the first time high-performance NVIDIA Xavier system-on-a-chip (SoC) processors and the latest NVIDIA DRIVE Software to process many deep neural networks (DNNs) for perception as well as complete surround camera sensor data from outside the vehicle and inside the cabin. This combination enables full self-driving autopilot capabilities, including highway merge, lane change, lane splits and personal mapping. Inside the cabin, features include driver monitoring, AI copilot capabilities and advanced in-cabin visualization of the vehicle’s computer vision system.”
The announcement reflected the desperation of companies like Nvidia, Intel, Qualcomm, Renesas, and, yes, Tesla itself – to deliver an affordable, mass market self-driving or near self-driving experience. Nvidia’s choice of “Level 2+” terminology was an effort to distinguish the product from competing system-on-chip (SoC) solutions and define a “new” market segment.
The reality is that there is no such thing as Level 2+. Nvidia is attempting to suggest a value proposition that is more than just an advanced driver assist system (ADAS) which requires the driver to remain engaged and vigilant at the steering wheel. DRIVE Autopilot is “something” more.
There are two problems with this Nvidia marketing proposition. First of all, it perpetuates the perplexity brought on by Tesla’s own Autopilot offering which is decidedly NOT an autonomous driving system and definitely DOES require drivers to pay attention and keep their hands on the wheel.
The second problem with Nvidia’s Level 2+ nomenclature, aside from the fact that it lacks an endorsement from standards-setting or regulatory bodies, is that it is not a single thing. While it highlights the limitations of existing ADAS systems, it fails to remedy these shortcomings completely and fails to define a marketable consumer value proposition.
My colleague, Ian Riches, vice president of the global automotive practice at Strategy Analytics, summed up the issue in a seminar in Tokyo nearly a month ago when he asked the attendees (car makers and their suppliers): “How many consumers will pay for this technology?” The operative term: “consumers.”
The great virtue of Nvidia’s messaging and positioning is that the company emphasizes the integration of external sensing systems with driver monitoring technology. This is the value proposition that every car maker is wrestling with: How to assist drivers while at the same time insisting that drivers continue to pay attention to the driving task?
General Motors is something of a leader, along with Tesla Motors, in bringing what could be described as Level 2+ systems to market in the form of Super Cruise and Autopilot, respectively. Of course, these two systems work in different ways – and Super Cruise, a hands-free adaptive cruise control system, is a $2,500 option available on a limited range of Cadillacs. (General Motors has yet to set a date for the launch of Ultra Cruise – and has been forced to reconfigure Super Cruise to compensate for sunlight interfering with the original Super Cruise sensors.)
German auto makers Daimler and Audi have been advancing their driver assist portfolios toward automation, with Audi flirting with Level 3 automation in Europe. Nissan has brought ProPILOT to market to mixed reviews and Toyota is preparing a 2020 launch for its Team Mate driver assistant reputedly capable of lane changing, merging, and passing.
Nvidia’s DRIVE Autopilot helps to deliver all of these value propositions, but it does so at considerable cost. Next week at CES 2020 there will no doubt be many more demonstrations and announcements addressing assisted driving. The question remains as to whether and what kind of market there is for these solutions. In the words of my colleague, Ian, “How on earth will we get a return on these investments this side of 2030?”
The question is much simpler for me. Drivers should pay attention when driving and cars should not collide with other cars, pedestrians, or inanimate objects. The fact that cars DO collide with things quite routinely and with catastrophic results is but one indication that we are failing as an industry. Given the societal cost of 1.3M annual highway fatalities globally, the ongoing effort to enhance vehicle safety is worth the short-term confusion of misleading nomenclature and the high cost of research. This is the highest and most important calling in today’s automotive industry.