
CES 2019: Dashboard Dreams
by Roger C. Lanctot on 01-06-2019 at 8:00 am

The annual trek to Las Vegas arrives this year with visions of sinusitis, chapped lips, flat feet and new concepts for automotive cockpit systems. It is no coincidence that the plaza in front of the Las Vegas Convention Center is dominated by automotive exhibits – along with multiple automated driving demonstrations across the street and a dozen auto makers exhibiting in the North Hall.

What was once a TV, car stereo and home computer show has become a car show. As such, it is as good a place as any to see what it will be like driving cars in the future.

Looking back at CES 2018, we find two head unit concepts that had an outsize impact on the market over the past year. One system – a digital dashboard from Harman International – helped to define what has come to be known as a cockpit domain controller; the other – a large-display center stack concept from SiriusXM – was actually delivered to market in certain Dodge Ram trucks from FCA.


Samsung/Harman Digital Cockpit: https://tinyurl.com/y9vv3ghm

SiriusXM with 360 L: https://tinyurl.com/yb6d7hte

These two systems represent game-changing designs with implications that resonate today, including:

Samsung/Harman system:

  • Integration of safety and infotainment content
  • Availability and integration of multiple digital assistants
  • Smart home tech integration

SiriusXM 360 L:

  • Audio content searchable by artist, genre, category (talk, sports, news, etc.) and location
  • Digital assistant control
  • Satellite-cellular integration – first of its kind
  • Up to five profiles with recommendations
  • Cross-platform content management – smartphone, radio, satellite
  • In-dash account/subscription management

CES 2019 will see further explorations of digital dashboards from companies including Visteon, Continental, Panasonic and Aptiv. The abiding theme will be putting customer and vehicle data to work – enhancing the driving experience with content and enhancing safety with sensor integration.

Expect in-dash account and privacy management capabilities and advanced digital assistants enabling hands-free interactions with vehicle resources. With the growing variety of vehicle connections – satellite, cellular and connected mobile devices – the goal will be to integrate them into a holistic information, entertainment and driving experience.

The core message of the Samsung/Harman digital cockpit is comprehensive integration of one’s home, car and mobile life. The thrust from SiriusXM is an attempt to deliver something similar, but only for content delivery purposes.

MasterCard, Visa and other payment players will be vying at CES 2019 to dominate the emerging ecosystem of vehicle-centric purchases, from tolls and parking to fuel and movie tickets. The wallet-on-wheels phenomenon will come to life in Las Vegas next week in a variety of manifestations from multiple suppliers.

The monetization of vehicle data will also be a massive theme with companies lining up to meet the challenge including Otonomo, Wejo, The Floow, High Mobility, SmartCar, Harman (Ignite) and many others. OEMs will support these efforts with open APIs enabling data access and SDKs for application development.
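
As a purely hypothetical illustration of what "open APIs enabling data access" might look like in practice, here is a minimal sketch of a client call to an OEM-style vehicle-data endpoint. The URL, fields and token scheme are invented for illustration and do not belong to any real provider.

```python
# Hypothetical sketch of a vehicle-data API call; the endpoint, fields
# and credentials are invented and do not belong to any real OEM.
import requests

BASE_URL = "https://api.example-oem.com/v1"   # hypothetical endpoint
TOKEN = "demo-token"                          # placeholder credential

def get_vehicle_odometer(vehicle_id: str) -> float:
    """Fetch one normalized data point for one vehicle."""
    resp = requests.get(
        f"{BASE_URL}/vehicles/{vehicle_id}/odometer",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["odometer_km"]
```

Aggregating calls like this across brands behind one normalized schema is, roughly speaking, the service that vehicle-data platforms such as Smartcar and Otonomo market.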

It will be interesting to see what new dashboard experiences have a lingering impact beyond the Las Vegas Convention Center this year. One hint: Don’t miss Honda’s updated DreamDrive demo in North Hall. 😉

Roger C. Lanctot is Director, Automotive Connected Mobility in the Global Automotive Practice at Strategy Analytics. Roger will keynote the Consumer Telematics Show on January 7 at Planet Hollywood. More details about Strategy Analytics can be found here: https://www.strategyanalytics.com/access-services/automotive#.VuGdXfkrKUk


CES 2019 Robotaxis vs. Micro Transit
by Roger C. Lanctot on 01-06-2019 at 7:00 am

Attendees of CES 2019 arriving at Las Vegas McCarran International Airport next week will have four options for getting to their hotels: a shuttle offering two rides for the price of one (out and back for about $15); a taxi offering one ride for the price of two (about $30); a Lyft or Uber offering one ride for the price of one (about $15); or a rental car.

Las Vegas is a microcosm of the transportation challenges facing cities all over the world – with the addition of tourists and inebriated pedestrians, and minus rail-based public transit. As such, it is no stranger to traffic jams, especially during major events. So the local authorities are doing their best to test innovative new solutions to optimize travel on the available roads.

Micro transit is a popular option in the area, with airport shuttles representing a prominent example. Most cities are vying to pry drivers out of individual vehicles and into shared ride, multi-passenger minibuses and shuttles.

Las Vegas has a wide variety of large and small buses plying the strip and downtown. The CES show will bring its own subset of inter-venue buses supplied by the organizers, along with various limousines, vans and shuttles operated by attendees and exhibitors.

Las Vegas even has driverless shuttles from Navya operating downtown, albeit at very low speeds. Most recently added to the mix have been 35 driverless Lyft vehicles enabled by technology provided by Aptiv. (The picture – above – was taken from the backseat of a Lyft-Aptiv “driverless” car.)

The Lyft-Aptiv effort is an example of the impending arrival of “robotaxis,” an expression popularized by Nvidia at its GTC event in Munich two years ago. At the time, Nvidia was touting its dominance of the world of robotaxis noting that of the then 225 partners developing autonomous driving technology on the Nvidia Drive PX platform, 25 were robotaxis.

Notably, Uber was and is one of those Drive PX partners. In 2018, an inattentive Uber safety driver and a flawed Uber autonomous set-up resulted in a fatal crash in Tempe, Arizona. So 2018 ended with Uber sputtering to restart its robotaxi efforts – while headlines appeared in newspapers across the country describing Phoenix-area residents hurling rocks at driverless vehicles…from Waymo!

Ironically, the Lyft-Aptiv “driverless” effort prominently features two “drivers” – a safety driver and a co-pilot. You can easily “hail” one of these vehicles using the Lyft app after opting in to accept the “driverless” option.

Don’t expect to get a Lyft-Aptiv vehicle running from the airport to downtown or the strip and back. The Lyft-Aptiv test vehicles are only operating within the city – presumably to better understand urban vehicle-human-infrastructure interactions. In fact, the vehicles will not operate on private property – i.e., the driveways surrounding hotels – so the “driverless” experience is actually quite limited.

Still the presence of these “driverless” vehicles in Las Vegas highlights the tension between adding more individual passenger transportation alternatives vs. truly shared, multi-passenger propositions. There are many options in Las Vegas.

The Deuce and the Monorail are just two examples of multi-passenger transportation options that operate daily. The Deuce bus on the strip stops at every casino, but only a small proportion of CES attendees are likely to use the bus or are even aware it exists. Fares are $6 for a two-hour pass, $8 for a 24-hour pass and $20 for a three-day pass. The Monorail is $5 for a single ticket, $13 for a 24-hour pass and $29 for a three-day pass.

(For those of you renting cars in Las Vegas there may be some surprises. You can still drink for free in Las Vegas IF you are gambling – and you can still smoke in the casinos – but a free parking spot is becoming increasingly rare.)

Interesting and promising though robotaxis may be, the more immediate opportunity in 2019 lies in micro transit and the market participants are lining up. Micro transit leaders include Scoop, Chariot, Ridecell, Vulog, Bestmile, Moia, May Mobility and many others.

For cities, the goal is clear: more passengers inside fewer vehicles. Las Vegas is the perfect example of circumstances running in the opposite direction: more individual passengers in more vehicles.

For years, Las Vegas cab drivers would crab and complain about the growing number of taxi medallions authorized by the city making it that much more difficult for cabbies to make a living. Then along came Uber and Lyft and now it seems that everyone is a cab driver.

The democratization of professional driving contributes to the dubious quality of Vegas driving – but business is booming. There’s so much business to go around that cab drivers don’t even bother complaining.

Still, look for micro transit to take a bigger bite out of transportation in 2019. Your first glimpse of this emerging new reality will be on display at CES 2019 in Las Vegas. See you there.

Roger C. Lanctot is Director, Automotive Connected Mobility in the Global Automotive Practice at Strategy Analytics. Roger will keynote the Consumer Telematics Show on January 7 at Planet Hollywood. More details about Strategy Analytics can be found here:

https://www.strategyanalytics.com/access-services/automotive#.VuGdXfkrKUk


2018 Semiconductor Year in Review
by Scotten Jones on 01-04-2019 at 12:00 pm

Strong Overall Market Growth but a Slowdown Looms
After six years of single-digit percentage growth in the overall semiconductor market, 2017 saw almost 22% growth and 2018 year-to-date is up roughly 17% (based on numbers published by World Semiconductor Trade Statistics). The big growth driver of the last two years has been surging memory prices driven by high bit demand and tight supply. With additional memory capacity coming on-line, memory supply is expected to ease in 2019, removing the biggest driver of growth. Depending on whose forecast you believe, the overall semiconductor market for 2019 may show single-digit percentage growth or a single-digit percentage decline.

Leading Edge Logic Down to Three Companies
Entering 2018, three foundries – GLOBALFOUNDRIES (GF), Samsung and TSMC – were all pursuing 7nm processes and Intel was pursuing 10nm (with density similar to foundry 7nm processes). Around mid-year GF announced a “pivot,” leaving only three companies pursuing the leading edge. With Intel now rumored to be exiting the foundry business, there are only two foundry sources of leading-edge logic. With the exit of GF from the leading edge, Samsung is reportedly seeing a significant increase in requests for their 7nm PDK from companies concerned about being sole-sourced at TSMC.

Process Delays at Intel
In 2007 Intel introduced their 45nm process, the world’s first production process with high-k metal gates (HKMG). In 2009 Intel introduced 32nm, and in 2011 their 22nm process, the world’s first production FinFET process. 14nm was originally expected in 2013 but didn’t ramp until 2014 due to yield issues. After the 14nm delay, expectations for Intel reset to a 3-year cadence and 10nm was expected in 2017. Intel did ship a few 10nm parts at the end of 2017, but production is now expected in late 2019, once again due to yield issues. Intel’s 10nm has slightly denser logic than the first-generation foundry 7nm processes, and Intel is paying the price for the aggressive shrink they attempted. Both Samsung and TSMC went from 16nm/14nm to 10nm and then 7nm, while Intel went from 14nm to their “10nm” process in a single step – a 2.7x density increase. There has been a lot of speculation that in order to fix the 10nm yield issues Intel will relax the density specifications; I continue to believe the process due to ramp next year will have the same density previously announced (this is also what Intel is saying).
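
To put that single-step jump in perspective, here is a quick back-of-the-envelope check using the transistor-density figures Intel has published for its 14nm and 10nm nodes; the script is purely illustrative.

```python
# Rough check of Intel's node-to-node density jump, using Intel's
# published logic transistor densities (million transistors per mm^2).
densities = {
    "Intel 14nm": 37.5,    # MTr/mm^2, Intel published figure
    "Intel 10nm": 100.8,   # MTr/mm^2, Intel published figure
}

ratio = densities["Intel 10nm"] / densities["Intel 14nm"]
print(f"14nm -> 10nm density increase: {ratio:.1f}x")  # ~2.7x in one step

# The foundries split a comparable jump across two nodes
# (16nm/14nm -> 10nm -> 7nm), roughly 1.6x-1.7x per step.
```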

Intel is now reportedly exiting the custom foundry business. Frankly, I never took Intel seriously in foundry: they have always introduced their microprocessor processes a year or more before offering a foundry version at the same node, and if they were serious about foundry, the foundry process would have come out at the same time. I do not, however, see Intel abandoning their own internal manufacturing as some have speculated. Intel has started equipping their mothballed Fab 42 as the lead 7nm production fab, and they recently announced fab expansions in Oregon, Israel and Ireland.

Intel is currently working on 7nm, due in 2020 and targeted as a 2.4x shrink from their 10nm process. Based on the announcements and rumors surrounding Samsung’s 5nm process due in 2019, 4nm process in 2020 and 3nm process in 2021, and TSMC’s 5nm process due in 2019 and 3nm process forecast for around 2021, those processes will be relatively modest shrinks, and we expect that if Intel achieves the target shrink, their 7nm process will be as dense as or denser than the foundry 3nm processes. The question is whether they can hit their 2020 target. Intel has commented on a conference call that they believe the 2.4x shrink is achievable by introducing EUV at 7nm. My concern is that a 2.4x shrink will push a lot of device limits, and I would not be surprised to see 7nm delayed. Even if Intel is delayed to 2021 or even 2022, they will once again have competitive density with the foundries.

A lot of people resent Intel for their many years of process leadership and perceived arrogance. Lost in this resentment is an appreciation of all the technology developments Intel has driven that have become standard in the industry. I personally am concerned that Intel losing their way at the leading edge could leave a technology leadership void, and I am not convinced either Samsung or TSMC is prepared to take on the role of industry technology driver.

EUV Entering Production
Samsung’s 7nm process, with an estimated 7 EUV masks, entered “production” mid-year and is expected to ramp up over the course of 2019. We estimate Samsung is using an average dose of 50 mJ/cm², and they have announced they are not using a pellicle and are achieving 1,500 wafers per day. TSMC is expected to ramp their 7FFP process, with an estimated 6 EUV masks, in 2019. Reports out of TSMC are that this process is ready to go. Both Samsung and TSMC are expected to enter risk production with 5nm processes in 2019 with 11 to 14 EUV masks. There is still a lot of work to do on EUV – photoresist will likely need to transition from the current chemically amplified resists (CAR) to inorganic resists, pellicles are needed, throughput must improve further, and stochastics issues need greater understanding and mitigation – but the era of EUV has clearly begun.
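
Dose and throughput are directly linked, which is why the 50 mJ/cm² figure matters. The toy model below makes the relationship concrete; the source power, optics transmission, overhead and availability numbers are all assumptions chosen for illustration, not vendor specifications.

```python
# Toy model of EUV scanner output vs. resist dose. Every parameter is
# an illustrative assumption; real throughput depends on pupil fill,
# field layout, stage overhead, source availability and more.
SOURCE_POWER_W = 250      # assumed source power at intermediate focus
TRANSMISSION   = 0.0045   # assumed effective optics/mask transmission
WAFER_AREA_CM2 = 700      # approx. exposed area of a 300 mm wafer
OVERHEAD_S     = 9.0      # assumed per-wafer stage/alignment overhead
AVAILABILITY   = 0.70     # assumed tool uptime fraction

def wafers_per_day(dose_mj_cm2: float) -> float:
    energy_j = dose_mj_cm2 / 1000.0 * WAFER_AREA_CM2        # J per wafer
    expose_s = energy_j / (SOURCE_POWER_W * TRANSMISSION)   # exposure time
    return 24 * 3600 / (expose_s + OVERHEAD_S) * AVAILABILITY

print(f"{wafers_per_day(50):.0f} wafers/day at 50 mJ/cm^2")  # ~1,500
print(f"{wafers_per_day(30):.0f} wafers/day at 30 mJ/cm^2")  # ~2,200
```

Under these assumptions a 50 mJ/cm² dose lands right around the 1,500 wafers per day Samsung has announced, and the model shows why resist sensitivity (lower dose) and source power are the two levers everyone is pushing on.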

3D NAND Growth
Since 2014, when Samsung introduced 24-layer 3D NAND to production, we have seen 32, 48 and 64 layers enter production, with 96 layers currently ramping. 3D NAND is delivering on Moore’s law with increased density and bit cost reduction. The rate of density improvement and bit cost reduction has slowed from the peak 2D NAND years but is continuing, and there is a path for continued improvement into the mid-to-late 2020s. 2018 saw 3D NAND bit shipments exceed 2D NAND bit shipments, and by 2020 some forecasters expect 3D NAND to represent 90% of all NAND bits shipped. We expect to see 128-layer 3D NAND in late 2019 or early 2020, and with string stacking there is a path all the way to 512 layers. We do see issues with bit cost beyond 384 layers, with our current forecast showing an increase in bit cost beyond that number of layers, but 3D NAND offers a scaling path for many more years.
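
Why would bit cost bottom out and then rise with layer count? A toy cost model shows the shape of the curve; the coefficients below are invented for illustration, and only the qualitative behavior is the point.

```python
# Toy model of 3D NAND bit cost vs. layer count. Wafer cost grows
# superlinearly with layers (more deposition/etch passes and harder
# high-aspect-ratio etches) while bits per wafer grow ~linearly.
# All coefficients are invented for illustration.
BASE = 3000.0  # assumed fixed wafer cost (litho, periphery, etc.)
A = 15.0       # assumed incremental cost per layer
B = 0.03       # assumed superlinear (etch/yield) penalty per layer^2

def relative_bit_cost(layers: int) -> float:
    wafer_cost = BASE + A * layers + B * layers ** 2
    return wafer_cost / layers  # bits per wafer scale ~linearly with layers

for layers in (64, 96, 128, 256, 384, 512):
    print(f"{layers:>3} layers: relative bit cost {relative_bit_cost(layers):.1f}")
# Cost per bit falls, bottoms out near sqrt(BASE/B) ~ 316 layers here,
# then creeps back up -- the same qualitative behavior as a forecast
# that shows bit cost rising beyond ~384 layers.
```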

DRAM Scaling Slows
Of the three main semiconductor product groups – DRAM, logic and NAND – DRAM is facing the most difficult scaling path. DRAM capacitor scaling has hit a wall. DRAM capacitors must achieve an acceptable capacitance for bit retention, and that capacitance depends on film thickness, film k value and capacitor area. Capacitor area has been increased by going to 3D capacitor structures, but the height of the current cylinder structures is at the mechanical limit for stability. Film thickness has been reduced as much as possible within leakage constraints. There are many options for higher-k films, but leakage issues have to date limited the options. Cell capacitance values have also been reduced to levels undreamed of a few years ago.

DRAM scaling has recently focused on improving the density and performance of the peripheral circuitry, and FinFETs and HKMG are now on the horizon for further DRAM periphery improvements.

At IEDM this year, Imec presented work on a new higher-k dielectric material that shows promise to break the capacitor scaling bottleneck. The new Strontium Titanate based material offers a higher k value with acceptable leakage if the film is made thick enough. The thicker film would require a change from the current cylinder capacitor structure to a pillar structure to accommodate the increased thickness, but the potential is there for increased capacitance in the same area. This is the kind of breakthrough needed for DRAM scaling to get back on track. I plan to write more about this technology in the near future.
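
The constraint driving all of this is just the parallel-plate capacitor relation C = ε₀kA/t. The quick sketch below shows why a higher-k film that can be made thicker is such a big deal; the k values, areas and thicknesses are illustrative assumptions, not any vendor’s actual film stack.

```python
# Parallel-plate capacitance C = eps0 * k * A / t, the constraint behind
# DRAM capacitor scaling. All numbers below are illustrative assumptions.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def capacitance_fF(k: float, area_nm2: float, thickness_nm: float) -> float:
    return EPS0 * k * (area_nm2 * 1e-18) / (thickness_nm * 1e-9) * 1e15

# Today's high-k stack (assumed): k ~ 40, ~5 nm thick, leakage-limited
print(f"{capacitance_fF(k=40, area_nm2=1e5, thickness_nm=5):.1f} fF")   # ~7.1

# A Strontium-Titanate-class film (assumed k ~ 150) can be made ~3x
# thicker -- easing leakage -- and still deliver more capacitance:
print(f"{capacitance_fF(k=150, area_nm2=1e5, thickness_nm=15):.1f} fF") # ~8.9
```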

Conclusion
In spite of the slower growth expected for 2019, the industry continues to move forward on technology scaling across all three major product segments. The long-term outlook for the semiconductor market and its underlying technologies remains strong.


Apple as Apex of Chip Industry Portends Weaker 2019
by Robert Maire on 01-04-2019 at 7:00 am

On the first day of trading in the new year, Apple announced after the close that revenues will be lower than previously expected, coming in at $84B versus the guided range of $89B to $93B and analyst estimates for the current quarter of $91.5B. Ugly… The blame was laid squarely on China, as slowing sales and trade tensions took their toll. This is down roughly 7% from where the company thought revenues would be just 60 days ago. A bit of an embarrassment…

We have been talking for a long time about the China risk in tech and pointed out that companies in China were paying employees to buy smartphones from Chinese companies rather than Apple due to the tensions between the two countries.

This should come as little surprise as things have been slowing for a while and the Santa Claus “pop” in the stocks had no basis in reality, just wishful thinking (which didn’t last long…).

The real problem is that Apple is the driver of the vast majority of the semiconductor ecosystem and the impact of the Apple China slowdown hitting the already reeling chip industry will only exacerbate the problem.

Apple is the primary driver of TSMC and the bleeding edge of Moore’s law, as TSMC follows Apple’s yearly processor demands. Obviously, communications chips, as well as memory – both DRAM and NAND – will be negatively impacted.

As the de facto Moore’s law driver and one of the biggest consumers of chips, Apple is at the apex of the semiconductor food chain.

It’s likely that the trickle-down of Apple’s China chop could be worse for Apple’s suppliers and the chip industry than for Apple itself.

Some investors and analysts had been suggesting a quick bounce back from this cyclical downturn, but we have remained concerned about the Chinese “cloud” hanging over the industry. While not a full-blown trade-war hurricane, we are nonetheless seeing the weather worsen quickly.

The Apple news will not only drive tech stocks and the broader market down but will also make it that much harder, and take that much longer, for the industry to recover. For those hoping that trade negotiations will result in success in 90 days, we wouldn’t hold our breath. Even if we do get some type of a deal that leaders can brag about, much of the damage has already been done, as Apple sales will likely never recover to prior rates in China now that the company has been tarnished as the face of American tech dominance to be punished. The governments can sign whatever deal they want, but getting Chinese buyers fired up again for Apple will not be as easy as a signature.

Very long trickle down impact
The trickle-down impact list is too long to enumerate here but is pretty obvious. We are most concerned about the memory portion of the chip industry, as it is hypersensitive to the delicate balance between supply and demand, and a whole lot of demand just went away. Memory pricing tends to be non-linear, as small imbalances can cause large swings, and the memory market tends to trade like a commodity market.

On the other end of the spectrum, we don’t think the slowdown will change the cadence of new processors and technology at the bleeding edge of Moore’s law that Apple drives. Apple will still expect and demand that TSMC keep up its pace and spending so that it can roll out an ever-faster iPhone every fall. The only difference is that TSMC will make slightly fewer chips but probably needs all the same new tools to keep up.

Communications chips are somewhere in the middle where technology improvements needed to get to 5G probably overwhelm near term market softness as all participants will keep their foot on the gas to get a slice of the 5G pie (just at a lower volume….). Intel is obviously impacted given their exposure to Apple.

Memory could be bad for all of 2019….
On Christmas Day there was a report out of Korea that Samsung was looking at further cuts in their memory fab plans. This Apple news will likely make them cut their plans even faster.

Further Samsung Memory Cuts

LRCX and AMAT will be hit harder than KLAC & ASML
Given the huge memory and Samsung exposure of Lam and, to a slightly lesser extent, Applied, they will bear the brunt of further slowing of the memory industry caused by Apple.

In general, companies whose equipment is tied more to capacity than to technology will bear the brunt of the trickle-down.

KLA and ASML are more tuned to Moore’s Law and logic devices and less exposed to memory which has already been hard hit.

To be clear, everyone in the semiconductor ecosystem will be hurt, just some more than others.

Micron, the stock, is still very cheap, and getting cheaper
Perhaps Micron saw the writing on the wall better than Apple did and put the brakes on spending and expectations a bit earlier. On a relative basis the stock is very cheap but the news is very bad.
We want to be buyers at these levels but catching falling spears is not our sport…..

Equipment stocks
We have been negative on the stocks for a while and we think LRCX could test our $125 target and AMAT could easily test $30 again.

We don’t see a lot of positive data points in the near term, and it’s going to be very hard, if not impossible, for semiconductor companies to fight the tape and put out positive expectations on their quarterly calls coming up in a few weeks, especially in light of this negative Apple news and pronouncements on China.

On the other hand, there is also not a lot more negative news to come, short of the failure of trade negotiations and the imposition of more tariffs leading to a bigger trade war… then it would be a nuclear winter for all tech stocks and the overall market, not just the Apple food chain.


Disturbances in the AI Force
by Bernard Murphy on 01-03-2019 at 7:00 am

In the normal evolution of specialized hardware IP functions, initial implementations start in academic research or in R&D at big semiconductor companies, motivating new ventures specializing in functions of that type, which then either build critical mass to make it as a chip or IP supplier (such as Mobileye – initially) or get sucked into a larger chip or IP supplier (such as Intel or ARM or Synopsys). That is where hardware functions ultimately settled, and many still do.

But recently the gravitational pull of mega-companies has distorted this normally straightforward evolution. In cloud services the list includes Amazon, Microsoft, Baidu and others. In smartphones you have Samsung, Huawei and Apple – yep, Huawei is ahead of Apple in smartphone shipments and is gunning to be #1. These companies, neither semiconductor vendors nor IP suppliers, are big enough to do whatever they want to grab market share. What they do to further their goals in competition with the other giants can have a major impact on the evolution path for IP suppliers.

Talking to Kurt Shuler, VP Marketing at Arteris IP, I got some insight into how this is changing for AI IP. Arteris IP started working with Cambricon, a Beijing-based startup in fabless AI IP/devices, some time ago. Based on that work, Arteris IP built the FlexNoC AI package I wrote about recently. Cambricon is a very interesting company for a number of reasons. One is that they took one of those “gee, why didn’t we think of that?” approaches to designing a platform for neural net (NN) implementations: they developed an optimized instruction set architecture (ISA) based on analysis of multiple NN benchmarks, then leveraged this into a design win with Huawei/HiSilicon. This company is attracting attention; including their current series B round, they have raised $200M to date.

The deal with Huawei/HiSilicon led to the IP appearing in the Huawei Kirin 970 smartphone chipset. But Huawei/HiSilicon decided to build their own neural processing unit for the Kirin 980, now in production (also apparently the first 7nm product in production). In other words, this piece of technology was so important to Huawei, they decided to ditch their IP supplier and make their own. Weep not for Cambricon though. They’re already on their next rev and squarely targeting the datacenter AI training applications for which NVIDIA is so well known.

On the cloud side, consider Baidu, effectively the Google of China. Just like Google, they have been working intensively on AI, for many of the same reasons, such as image search and autonomous driving, but also for some reasons closer to Chinese government interests, such as intelligent video surveillance. Baidu started in AI working with FPGAs and (apparently) licensing IP. More recently they too developed their own AI chip, Kunlun, in 14nm, and seem set to continue on this path.

As a reminder, these high-end AI systems depend on highly custom 2D-architectures of many NN-dedicated processors connected in specialized configurations such as grids or tori, with memories/caches embedded within this structure, along with other distributed services to accelerate common functions like weight updates. In these architectures, the network (NoC) connecting all of these functions becomes critical to meeting performance and other goals, which is why Arteris IP is so involved with these companies.
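
To make the topology concrete, here is a minimal sketch of the kind of 2D torus these designs use, with each NN tile linked to four neighbors and the edges wrapping around. This illustrates the topology only; it is not Arteris IP’s actual FlexNoC configuration or any specific product’s floorplan.

```python
# Minimal sketch of a 2D torus of NN processing tiles: each tile links
# to four neighbors, and links wrap at the edges so every row and column
# forms a ring. Illustrative only -- not any vendor's actual NoC.
def torus_neighbors(x: int, y: int, cols: int, rows: int):
    """Return the coordinates of a tile's four torus neighbors."""
    return [
        ((x - 1) % cols, y),  # west (wraps around at the edge)
        ((x + 1) % cols, y),  # east
        (x, (y - 1) % rows),  # north
        (x, (y + 1) % rows),  # south
    ]

COLS, ROWS = 4, 4
links = {(x, y): torus_neighbors(x, y, COLS, ROWS)
         for x in range(COLS) for y in range(ROWS)}

# A corner tile still has four neighbors; the wrap-around keeps hop
# counts uniform, which a plain mesh does not.
print(links[(0, 0)])  # [(3, 0), (1, 0), (0, 3), (0, 1)]
```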

Another interesting aspect of the Baidu direction is that they are targeting their AI devices and corresponding software to a pretty wide range of applications. One application is certainly NN training in the datacenter, potentially replacing NVIDIA and a counter to the Google TPU. A recurring theme and perhaps a wakeup call for suppliers who thought they had a lock on sockets. But they are also planning use for inference in the datacenter, a new one on me. Apparently, a lot of this is still happening in the datacenter despite enthusiasm for moving AI to the edge, perhaps especially for IoT devices in China where IoT is taking off arguably faster than anywhere else. And Baidu have big aspirations for automotive and home automation. Which means they want an architecture they can scale across this range. Reminds me of what NXP is doing with their eIQ software.

So more big companies are investing in their own AI hardware, for very logical reasons: they feel they have to manage the architecture to meet their own plans across a diverse range of applications. It also can’t have escaped your attention that virtually every company I have talked about here is Chinese. A lot of money is going into AI in China, internally in big companies and from venture funds. Another company in this class is Lynxi, also targeting an architecture for both training and inferencing in the datacenter. Lynxi is apparently backed by serious funding, though details seem difficult to find.

Overall, more big companies are building their own AI chips and more small companies are popping up in this area. And a lot more of this activity is visible in China. A disturbance in the force indeed. Arteris IP is closely involved with many of these companies, from Cambricon to Huawei/HiSilicon to Baidu to emerging companies like Lynxi, offering their network on chip (NoC) solutions with the AI package allowing for architecture tuning to the special needs of high-end NN designs. Check it out HERE.


Samsung vs TSMC 7nm Update
by Daniel Nenni on 01-02-2019 at 7:00 am

The semiconductor foundry business has gone through a dynamic transformation over the last 30 years. In the beginning the foundries were several process nodes behind the IDMs with little hope of catching up. Today the foundries are leading the process development race at 10nm – 7nm, and will continue to do so, absolutely.

If you look at the foundry landscape, TSMC has the advantage because they are TSMC, the trusted foundry partner with the most mature and complete ecosystem bar none. TSMC is also a process technology leader and fierce competitor.

The market for Samsung Foundry as I see it is three-fold:

  • They are not TSMC. Capacity is not an issue with Samsung and it is always good to have foundry options. TSMC and Samsung are the only two leading edge foundries left so this is a much bigger point than most imagine.
  • Technology. Leading edge fabless companies look for the best technology that will also meet their time to market requirements. Samsung was ahead of TSMC at 14nm and they did quite well at that node. At 10nm and 7nm Samsung was a bit behind TSMC but Samsung 7nm had EUV before TSMC so some fabless companies are now leading with Samsung.
  • Pricing. Samsung has the best wafer pricing the industry has ever seen. Being the largest memory manufacturer does have its advantages, and wafer pricing is one of them.

To catch up with the latest on foundry process technology I talked to Scotten Jones, internationally recognized semiconductor expert and founder of IC Knowledge, a technology consulting company that models the economics of semiconductors. Scott has been writing for SemiWiki since 2014; his blogs are on the IC Knowledge landing page. Here are Scott’s latest thoughts on TSMC versus Samsung at 7nm:
    • Contacted Poly Pitch (CPP) – both TSMC and Samsung claim a CPP of 54nm for 7nm, but for both of them I believe the actual CPP for cells is 57nm.
    • Metal 2 pitch (M2P) – Samsung is 36nm and TSMC is 40nm.
    • Tracks – Samsung minimum cell track height is 6.75 and TSMC is 6.0.
    • Diffusion break – TSMC’s optical process (7FF) is double diffusion break (DDB), and they are reported to be going to single diffusion break (SDB) for their EUV process (7FFP). Samsung 7nm has a 1st-generation process (I believe this is 7LPE) that is DDB, and a second-generation process (I believe this is 7LPP) that is also DDB. At VLSIT this year they talked about a 3rd-generation process with SDB. It is hard to know what this really is; at 10nm their second-generation process was actually their 8nm process, so this could be their 5nm process or it could really be a third-generation 7nm process.
    • Transistor density – the minimum cell logic density for TSMC 7FF is slightly better than Samsung 7LPE or 7LPP. TSMC EUV 7FFP is slightly better than Samsung “3rd-generation” 7nm.
    • SRAM cell size – I think the SRAM cell size is the same for all three Samsung generations (I have a number for the 3rd-generation process) and both TSMC generations (I have a number for 7FF), but I am not positive. Samsung has a slightly smaller SRAM cell.

According to Scott, overall the two processes are similar in density, with TSMC leading in ramp-up and likely yield – and I agree, absolutely.
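
Scott’s numbers can be turned into a quick sanity check: standard-cell footprint is roughly (track height × M2P) × CPP. The sketch below uses his believed-actual 57nm CPP for both foundries; it is a coarse proxy that ignores fins, diffusion breaks and cell architecture details.

```python
# Coarse standard-cell footprint comparison from the numbers above:
# cell height ~ tracks * M2P, cell area per poly pitch ~ height * CPP.
# A rough proxy only -- it ignores fins, diffusion breaks, etc.
processes = {
    "Samsung 7nm": {"m2p_nm": 36, "tracks": 6.75, "cpp_nm": 57},
    "TSMC 7nm":    {"m2p_nm": 40, "tracks": 6.00, "cpp_nm": 57},
}

for name, p in processes.items():
    height_nm = p["tracks"] * p["m2p_nm"]
    area_nm2 = height_nm * p["cpp_nm"]
    print(f"{name}: cell height {height_nm:.0f} nm, ~{area_nm2:,.0f} nm^2 per CPP")
# Samsung: 243 nm vs. TSMC: 240 nm -- within ~1% of each other, which is
# consistent with Scott's conclusion that the densities are similar.
```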


Let The AI Benchmark Wars Begin!
by Michael Gschwind on 01-01-2019 at 7:00 am

Why benchmark competition enables breakthrough innovation in AI. Two years ago I inadvertently started a war. And I couldn’t be happier with the outcome. While wars fought on the battlefield of human misery and death never have winners, this “war” is different. It is a competition of human ingenuity to create new technologies that accelerate the innovation underpinning AI solutions, to the benefit of an already massive and still growing worldwide user base.

It all started two years ago, as I was preparing for the launch of “PowerAI”, a new type of software distribution that would take AI research in the form of neural network training out of research labs across the world and put it in the hands of everyday users looking to create AI-powered solutions. PowerAI was designed to be the “Red Hat” of AI, a software distro that curates a rapidly improving technology on the cusp of greatness to provide stability, continuity and support. I wanted to create a memorable milestone for the upcoming launch of our PowerAI product.

“PowerMLDL” had been making great strides with an agile release cycle for early adopters that would make users forget that the oldest, most solid – and staid – computer brand, IBM, was behind it. I had created the abbreviation MLDL (for Machine Learning and Deep Learning) because “AI” still had the bitter taste of defeat it acquired when “Artificial Intelligence” had been overhyped and ultimately failed to deliver on its promise in the 80s. Then as now, the most promising technology was “Neural Networks”, a computing structure loosely modeled after the human brain as a set of “neurons” – simple, highly interconnected computing units.

But much had changed since neural networks – and with them the term “AI” – had fallen into disrepute: advances in computer hardware and the need to process a veritable data deluge created by an ever more connected world answered both the “how” and the “why” for rebooting “Artificial Intelligence”. And, marketing opined, the name AI was quickly rehabilitating itself along with the technical field.

As we stood to launch our PowerAI distro for Artificial Intelligence together with IBM’s first accelerated AI server “Minsky” (or “IBM Power 822LC for HPC” in corporate branding), we needed to capture the imagination of what these new products made possible. The value of the new products was the integration, ease of use and speed to solution they brought to the technical innovations contributed to open-source AI frameworks by researchers from many companies.

But how to express this benefit? One day as we were reviewing training times and working with our applied AI research colleagues at the IBM TJ Watson Research Center to improve training times for a range of Watson applications, an idea took hold: with the new server and software, we were on the cusp of training a network to recognize images from the most complete image database (“ImageNet”) available to date in less than two hours.

What if “AlexNet”, the best-known neural network of the day and winner of a prestigious image recognition contest, could be trained in under an hour? We set out to conquer the one-hour mark. With a focus on innovation and optimization, AlexNet could be trained in under an hour by late summer, and we published “Deep Learning Training in under an hour” in a blog detailing our results in October 2016.

And so the AI benchmark war started. At IBM, we had started a project to federate many Minsky servers to train a newer, more complex network in even less time using a technology we called DDL – “Distributed Deep Learning”. But before we could publish our results, Facebook published their own blog about “Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour”.

The Facebook team demonstrated many great ideas, but were not able to release their code to the public. Thus, in another first, PowerAI made Distributed Deep Learning available to a broad user community with even better training performance. Since then, some of the most prestigious companies in technology have added their illustrious names to the growing list of new AI training records: UC Berkeley, Tencent, Sony, and Google, to name but a few.

In the course of this competition, the training time for AlexNet went from 6 hours in 2015 to 4 minutes, and for the much more complex ResNet50 (another winner of the image classification competition) from 29 hours to 2.2 minutes. These advances in training speed are particularly important because they enable AI developers and data scientists to create better solutions – despite many advances, AI and neural networks are by no means a mature technology. Not least because a “constructive theory” of neural networks – that is, how to construct a network to accomplish a well-defined task such as recognizing cancerous tissue in a biopsy sample – has eluded practitioners, defining new networks to accomplish a task is as much an act of artisanship as of engineering. It requires sketches, tests, trial and error – and to enable that, speed in testing new ideas by training new networks.
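
As simple arithmetic, those headline numbers are worth computing as speedup factors:

```python
# Speedup factors implied by the training times quoted above.
runs = {
    "AlexNet":  (6 * 60, 4.0),    # 6 hours -> 4 minutes
    "ResNet50": (29 * 60, 2.2),   # 29 hours -> 2.2 minutes
}
for net, (before_min, after_min) in runs.items():
    print(f"{net}: {before_min / after_min:.0f}x faster")
# AlexNet: 90x, ResNet50: ~791x -- in only a few years of competition.
```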

And so this AI benchmark “war” is a war without victims, but many victors – everybody who is benefiting from advances in AI technology, from enhanced face recognition to secure data on your phone, to better recommendations for movies, books and restaurants, to enhanced security and better data management, and assistive technologies for road safety and medical diagnosis.

As AI evolves, many of us have recognized that image recognition has served us well in this competition for creativity and innovation until now, but we need to be more inclusive of the wide range of AI application domains as we propel forward the performance and capabilities of AI solutions. With the recent “MLPerf” initiative to create an industry-wide AI performance benchmark standard, we are creating a better competition to propel human ingenuity even further and advance the boundaries of what is possible with AI.


The 7 Most Dangerous Digital Technology Trends
by Matthew Rosenquist on 01-01-2019 at 6:00 am

As our world embraces a digital transformation, innovative technologies bring greater opportunities, cost efficiencies, the ability to scale globally, and entirely new service capabilities that enrich people’s lives. But there is a catch. For every opportunity, there is a risk. The more dependent and entrenched we become with technology, the more it can be leveraged against our interests. With greater scale and autonomy, we introduce new risks to family health, personal privacy, economic livelihood, political independence, and even the safety of people throughout the world.

As a cybersecurity strategist, part of my role is to understand how emerging technology will be used or misused in the future to the detriment of society. The great benefits of the ongoing technology revolution are far easier to imagine than the potential hazards. The predictive exercise requires looking ahead to see where the paths of future attackers and technology innovation intersect. To that end, here is my 2019 list of the most dangerous technology trends we will face in the coming months and years.

Top 7 Most Dangerous Digital Technology Trends

1. AI Ethics and Accountability

Artificial Intelligence (AI) is a powerful tool that will transform digital technology due to its ability to process data in new ways and at incredible speeds. This results in higher efficiencies, greater scale, and new opportunities as information is derived from vast unstructured data lakes. But like any tool, it can be used for good or wielded in malicious ways. The greater the power, the more significant the impact.

Weak ethics may not seem worthy of being on the list, but when applied to the massive adoption, empowerment, and diversity of use-cases of AI, the results could be catastrophic. AI will be everywhere. Systems designed or employed without sufficient ethical standards can do tremendous harm, intentionally or unintentionally, at an individual and global scale.

Take for example how an entire community or nation could be manipulated by AI systems generating fake news to coerce action, shift attitudes, or foster specific beliefs. We have seen such tactics used on a limited scale to lift the reputations of shady businesses selling products, undermine governments, and lure victims into financial scams. By manipulating social media, advertising, and the news, it is possible to influence voters in foreign elections and artificially drive the popularity of social initiatives, personalities, and smear campaigns. Now imagine highly capable systems conducting such activities at a massive scale, personalized to individuals, relentless in pursuit of their goals, and quickly improving over time with no consideration of the harm they cause. AI can not only inundate people with messages, marketing, and propaganda, it can also tailor them to individuals based upon each user’s profile data, for maximum effect.

AI systems can also contribute to and inadvertently promote inequality and inequity. We are still in the early stages, where poor designs are commonplace for many AI deployments. It is not intentional, but a lack of ethical checks and balances results in unintended consequences. Credit systems that inadvertently favored certain races, people living in affluent areas, or those who support specific government policies have already been discovered. Can you imagine not getting a home or education loan because you happen to live on the wrong side of an imaginary boundary, because of your purchasing choices, or because of your ethnicity? What about being in a video conference where the intelligent system does not acknowledge you as a participant because it was not trained to recognize people with your skin color? Problems like these are already emerging.

AI systems are great at recognizing optimal paths, patterns, and objects by assigning weighted values. Without ethical standards as part of the design and testing, AI systems can become rife with biases that unfairly undermine the value of certain people, cultures, opinions, and rights. This problem propagates across the spectrum of systems and services leveraging AI, impacting people through the layers of digital services that play a role in their lives and limiting what opportunities they can access.

A more obvious area where AI will greatly contribute to the undermining of trust and insecurity is via synthetic digital impersonations, like ‘deepfakes’ and other forms of forgery. These include videos, voices, and even writing styles that AI systems can mimic, making audiences believe they are interacting or conversing with someone else. We have seen a number of these emerge with political leaders convincingly saying things they never did and with celebrities’ likenesses being superimposed in sexually graphic videos. Some are humorous, others are intended to damage credibility and reputations. All are potentially damaging if allowed to be created and used without ethical and legal boundaries.

Criminals are interested in using such technology for fraud. If cybercriminals can leverage this technology at scale, in real-time, and unimpeded, they will spawn a new market for the victimization of people and businesses. This opens the door to create forged identities that are eerily convincing and will greatly contribute to the undermining of modern security controls.

Scams are often spread with emails, texts, phone calls, and web advertising but have a limited rate of success. To compensate, criminals flood potential victims with large numbers of solicitations anticipating only a small amount will be successful. When someone who is trusted can be impersonated, the rate of success climbs significantly. Current Business Email Compromise (BEC), which usually impersonates a senior executive towards a subordinate, is growing and the FBI estimates it has caused over $26 billion in losses to American companies in the past 3 years. This is usually done via email, but recently attackers have begun to use AI technology to mimic voices in their attempts to commit fraud. The next logical step is to weave in video impersonations where possible for even greater effect.

If you thought phishing and robocalls were bad, stand by. This will be an order of magnitude worse. Victims may receive a call, video chat, email, or text from someone they know, like a coworker, boss, customer, family member or friend. After a short chat, they ask for something: open a file, click on a link, watch a funny video, provide access to an account, fund a contract, etc. That is all it will take for criminals to fleece even security-savvy targets. Anybody could fall victim! It will be a transformational moment in the undermining of trust across the digital ecosystem and cybersecurity when anyone can use their smartphone to impersonate someone else in a real-time video conversation.

Creating highly complex AI tools that will deeply impact the lives of people is a serious undertaking. It must be done with ethical guardrails that align with laws, accepted practices, and social norms. Otherwise, we risk opening the doors to unfair practices and victimization. In our rush for innovation and deployment, it is easy to overlook or deprioritize a focus on ethics. The first generations of AI systems were rife with such bias as the designers focused on narrow business goals and not ramifications of outliers or the unintended consequences of training data that did not represent the cross-section of users. Our general inability to see future issues will amplify the problems.

AI is a powerful enabler and will amplify many of the remaining 6 most dangerous technology trends.

2. Insecure Autonomous Systems

As digital technology increases in capability, we are tantalizingly close to deploying widespread autonomous systems. Everyone marvels at the thought of owning a self-driving automobile or having a pet robot dog that can guard the house and still play with the kids. In fact, such automation goes far beyond consumer products. It can revolutionize the transportation and logistics industry with self-operating trucks, trains, planes, and ships. All facets of critical infrastructure, like electricity and water, could be optimized for efficiency, service delivery, and reduced costs. Industrial and manufacturing operations crave intelligent automation to reduce expenses, improve quality consistency, and increase production rates. The defense industry has long sought autonomous systems to sail the seas, dominate the air, and be the warriors on the ground.

The risk of all these powerful independently operating systems is that if they are compromised, they can be manipulated, destroyed, held hostage, or redirected to cause great harm to economies and people. Worst-case scenarios are those where such systems are hijacked and turned against their owners, allies, and innocent citizens: a terrorist taking control of fleets of vehicles to cause massive fatalities in spectacular fashion, criminals turning off regional power or water systems while demanding a ransom, or attackers manipulating industrial sites that handle caustic chemicals or dangerous equipment, creating a hazard for nearby communities or an ecological disaster.

Autonomy is great when it works as intended. However, when manipulated maliciously it can cause unimaginable harm, disruption, and loss.

3. Use of Connected Technology to the Detriment of Others

One of the greatest aspects of digital technology is how it has increasingly connected the world. The Internet is a perfect example. We are now more accessible to each other, information, and services than ever before. Data is the backbone of the digital revolution.

However, connectivity is a two-way path. It also allows for unwanted parties to connect, harass, manipulate, and watch people. The impacts and potential scale are only now being understood. With billions of cameras being installed around the world, cell-phones being perfect surveillance devices, and systems recording every keystroke, purchase, and movement people make, the risks of harm are compounded.

Social media is a great example of how growth and connectivity have transformed daily life but have also been used to victimize people in amazing ways. Bullying, harassment, stalking, and subjugation are commonplace. Searching for information on others has never been easier and rarely do unflattering details ever get forgotten on the Internet.

The world of technology is turning into a privacy nightmare. Almost every device, application, and digital service collects data about its users. So much so that companies cannot currently make sense of most of the unstructured data they amass, and typically just store what they gather in massive “data lakes” for future analysis. Current estimates are that 90% of all data being collected cannot be used in its current unstructured form. This data is rarely deleted; it sits waiting for analysis and data mining. With the advance of AI systems, valuable intelligence can be readily extracted and correlated for use in profiling, marketing, and various forms of manipulation and gain.

All modern connected systems can be manipulated, corrupted, disrupted, and harvested for their data, regardless of whether they are consumer, commercial, or industrial grade. Whether the technology is a device, a component, or a digital service, hackers have proven they will rise to the challenge and find ways to compromise, misuse, or impact the availability of connected systems.

These very same systems can facilitate improved terror attacks and become direct weapons of warfare as we have seen with drones. Terror groups and violent extremists are looking to leverage such technology in pursuit of their goals. In many cases, they take technology and repurpose it. As a result, asymmetric warfare increases across the globe, as connected technology is an economical force-multiplier.

The international defense industry is also keen on connected technology that enables greater weapon effectiveness. Every branch of the U.S. military is heavily investing in technologies for better communication, intelligence gathering, improved target acquisition, weapons deployment, and sustainable operations.

Digital technology can connect and enrich the lives of people across the globe, but if used against them it can suppress, coerce, victimize, and inflict harm. Understanding the benefits as well as the risks is important if we want to minimize the potential downsides.

4. Pervasive Surveillance

With the explosive increase of Internet-of-Things (IoT) devices, growth of social media, and rise in software that tracks user activity, the already significant depth of personal information is being exponentially expanded. This allows direct and indirect analysis to build highly accurate profiles of people, which gives insights into how to influence them. The Cambridge Analytica scandal was one example, where a company harvested data to build individual profiles of every American, with a model sufficient to sell to clients intent on persuading voting choices. Although it earned significant press, the models were based on only 4 to 5 thousand pieces of data per person. What is available today and in the future will dwarf those numbers, allowing for much richer and more accurate behavioral profiles. AI is now being leveraged to crunch the data and build personality models at a scale and precision never previously imagined. It is used in advertising and sales, in politics, in government intelligence, on social issues, and in other societal domains because it can be used to identify, track, influence, cajole, threaten, victimize, and even extort people.

Governments are working on programs to capture all activity on every major social network and telecommunications network, along with sales transactions, travel records, and public cameras. The wholesale surveillance of people is commonplace around the world and is used to suppress free speech, identify dissidents, and persecute peaceful participants in demonstrations or public rallies. Such tactics were used during the Arab Spring uprising with dire consequences and more recently during the Hong Kong protests. In the absence of privacy, people become afraid to speak their minds and speak out against injustice. It will continue to be used by oppressive governments and businesses to suppress competitors and people who have unfavorable opinions or are seen as threats.

Cybercriminals are collating breach data and public records to sell basic profiles of people on the Dark Web to other criminals seeking to conduct attacks and commit financial and medical fraud. Almost every major financial, government, and healthcare organization has had a breach, exposing rich and valuable information about their customers, partners, or employees. Since 2015, the data market has surpassed the value of the illicit drug markets.

The increasing collection of personal data allows for surveillance of the masses, widespread theft and fraud, manipulation of citizens, and the empowerment of foreign intelligence gathering that facilitates political meddling, economic espionage, and social suppression. Widespread surveillance undermines the basic societal right to have privacy and the long-term benefits that come with it.

5. The Next Billion Cyber Criminals

Every year more people are joining the Internet and becoming part of a global community that can reach out to every other person that is connected. That is one of the greatest achievements of businesses, and an important personal moment for anyone who instantaneously becomes part of the digital world society. But there are risks, both to the newcomers and current users.

There are currently 4.4 billion Internet users, and that number is expected to reach 6 billion by 2022; in the next several years another billion people will join. Most of the current users reside in the top industrial nations, leaving the majority of new Internet members to come from economically struggling countries. It is important to know that most of the world earns less than $20 a day. Half of the world earns less than $10 a day, and over 10% live on less than $2 a day. These people struggle to put food on the table and provide the basics for their families. Often living in developing regions, they also deal with high unemployment and unstable economies. They hustle every day, in any way they can, for self-preservation and survival.

Joining the Internet for them is not about convenience or entertainment; it is an opportunity for a better life. The Internet can be a means to make money beyond the limitations of their local economy. Unfortunately, many of the ways available to them are illegal. The problem is socio-economic, behavioral, and technical in nature.

Unfortunately, cybercrime is very appealing to this audience, as it may be the only opportunity to make money for these desperate new internet citizens. Organized criminals recognize this growing pool of cheap labor willing to take risks, and make it very easy for them to join their nefarious activities.

There are many schemes and scams that desperate people may fall into: ransomware, money mules, scam artistry, bot herding, crypto-mining malware distribution, telemarketing fraud, spam generation, CAPTCHA and other authentication-bypass jobs, and the list goes on. All enabled by simply connecting to the Internet.

Ransomware-as-a-service (RaaS) jobs are by far the biggest threat to innocent people and legitimate businesses. Ransomware is estimated to triple in 2019, causing over $11 billion in damages. In RaaS, the participant simply connects to potential victims from around the world and attempts to convince them to open a file or click a link and become infected with system-impacting malware. If the victim pays to get their files and access restored, the referrer gets a percentage of the extortion.

There are no upfront costs or special technical skills needed. Organized criminals do all the back-end work: they create the malware, maintain the infrastructure, and collect the extorted money. But they need people to do the legwork to ‘sell’ or ‘refer’ victims into the scam. It is not ethical, but it can be a massive payday for people who only earn a few dollars a day. The risk of being caught is negligible, and in relative terms the reward may enable them to feed their family, put their children in school, or pay for life-saving medicine. For most in those circumstances, the risks are not worth considering when weighed against potentially life-changing new revenue.

Many people are not evil by nature and want to do good, but without options, survival becomes the priority. One of the biggest problems is the lack of choices for earning money legitimately.

A growing percentage of the next billion people to join the Internet will take a darker path and support cybercrime. That is a lot of new attackers, highly motivated and creative, ready to victimize others online and putting the entire community at risk. In the next few years, the cybercrime problem is going to get much worse for everyone.

6.      Technology Inter-Dependence House of Cards

Innovation is happening so fast that it builds upon other technologies that are not yet fully vetted and mature. This can lead to unintended consequences when layers upon layers of strong dependencies are built on weak foundations. When a failure occurs in a critical spot, a catastrophic cascading collapse can follow. The result is significantly larger impact and longer recovery times for cybersecurity incidents.

Application developers are naturally drawn to efficient processes. Code is reused across projects, repositories are shared, and third-party dependencies are a normal part of modern application programming. Such building blocks are often published for anyone to use. This makes perfect sense: it cuts down on in-house development and testing and increases the speed with which products can be delivered to customers. But there is a downside. When a vulnerability exists in widely used code, it can be distributed across hundreds or thousands of different applications, impacting their collective user base. Many developers don't check for weaknesses during development or after release, and even fewer force updates to third-party code once it is in the hands of customers.
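
To make the risk concrete, here is a minimal sketch of the kind of dependency audit many teams skip: checking pinned package versions against a list of known-vulnerable releases. The package names, versions, and advisories below are hypothetical illustrations, not references to real software or any particular auditing tool.

# Hypothetical advisory list: package -> versions with a published flaw.
ADVISORIES = {
    "examplelib": {"1.0.2", "1.0.3"},
    "fastparser": {"2.1.0"},
}

def audit(requirements_text: str) -> list[str]:
    """Return 'name==version' entries that match a known advisory."""
    flagged = []
    for line in requirements_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue  # skip comments and unpinned entries
        name, version = line.split("==", 1)
        if version in ADVISORIES.get(name.lower(), set()):
            flagged.append(line)
    return flagged

print(audit("examplelib==1.0.2\nfastparser==2.2.0\nother==0.9.1"))
# -> ['examplelib==1.0.2']

Even a simple check like this, run in a build pipeline, catches the class of problem described above; the harder part is keeping the advisory data current and pushing fixes to customers who already have the software.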

The same goes for architectures and programs that enhance or build upon other programs or devices. Think of the number of apps on smartphones and personal computers, applets on home internet media devices, or extensions running within web browsers. Cloud computing is another area where many technologies operate in close proximity: the entire world of virtual machines and software containers relies on common infrastructure. We must also consider how insecure the supply chain might be for complex systems such as servers, supercomputers, defense systems, business services, voting machines, healthcare devices, and other critical infrastructure. And entire 'smart cities' are already in the planning and initial phases of deployment.

The problem extends well past hardware and software; people represent a significant weakness in the digital ecosystem. A couple of years ago I pulled together top industry thought leaders to discuss what the world would look like in a decade. One prediction was a concerning trend of unhealthy reliance on technology: we humans may stop passing down how to do basic things, and over time the skills needed to fix complex technology could distill down to a very small set of people. Greater dependence with less support capability leads to lengthier recovery efforts.

Imagine a world where Level 5 autonomous cars have been transporting everyone for an entire generation. What happens if the system experiences a catastrophic failure? Who would know how to drive? Would there even be manual controls installed in vehicles? How would techs get to where they needed to be to fix the system? Problems like this are rarely factored into products or the architecture of interconnected systems.

All this may seem a bit silly and far-fetched, but we have seen it before on a smaller scale. Those of you who remember the Y2K (Year 2000) problem, also called the Millennium Bug, will recall that old software running many major computer systems stored years as two digits and had to be modified to accept dates from the year 2000 onward. The fix itself was not terrible, but many of those systems were written in the aging COBOL programming language, and there simply weren't many people left who knew it. The pool of human skills had dwindled to a very small number, which caused tremendous anxiety and a flurry of effort to avoid catastrophe.
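
For readers who never encountered it, here is a minimal sketch of the class of bug behind Y2K: a record format that stores only the last two digits of the year, with the century hard-coded. The function and field names are hypothetical.

from datetime import date

# Classic two-digit-year bug: many legacy systems stored only the last
# two digits of the year to save memory, implicitly assuming century 19.
def years_since_purchase_buggy(yy: int, today: date) -> int:
    purchase_year = 1900 + yy          # hard-coded century: the Y2K flaw
    return today.year - purchase_year

# A record written in 1999 as yy=99, read back on January 1, 2000:
print(years_since_purchase_buggy(99, date(2000, 1, 1)))   # 1   (correct)
# A record written in 2000 as yy=0, read back the same day:
print(years_since_purchase_buggy(0, date(2000, 1, 1)))    # 100 (wrong: should be 0)

The bug is trivial once seen; the crisis came from how deeply the assumption was buried in millions of lines of code that few remaining specialists could read.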

We are 20 years past that problem, and the pace of technology innovation has only increased. Similar risks continue to rise as each sprint of advancement stands on recently established technologies to reach further upward. When such hastily built structures come crumbling down, it will be in spectacular fashion. Recovery times and overall impacts will be far greater than what we have seen in the past with simple failures.

7.      Loss of Trust in Technology

Digital technology provides tremendous opportunities for mankind, but we must never forget that people are always in the loop. They may be users, customers, developers, administrators, suppliers, or vendors, but they are part of the equation. When technology is used to victimize those people, especially when it could have been prevented, there is an innate loss of trust.

As cyber-attacks worsen, Fear, Uncertainty, and Doubt (FUD) begin to supplant trust. Such anxiety toward innovation stifles adoption, impacts investment, and ultimately slows the growth of technology. FUD is the long-term enemy of managing risk: to find the optimal balance between security, costs, and usability, there must be trust.

As trust falls past the tipping point while dependency remains, governments are pressured to react quickly and accelerate restrictive regulations that burden the process of bringing products to market. This compounds the slowdown in consumer spending and drives developers to seek new domains in which to ply their trade. Innovation and adoption slow, which affects both upstream and downstream areas of supporting technology. The ripple effects grow larger, and the digital society misses out on great opportunities.

Our world would be far different if initial fears about automobiles, modern medicine, electricity, vaccines, flight, space travel, and general education had stifled the advances that pushed mankind to where we are today. The same could hold true for tomorrow's technology if rampant and uninformed fears take hold.

Companies can make a difference. Much of the burden is, in fact, on the developers and operators of technology to do what is right and make their offerings secure, safe, and respectful of users' rights such as privacy and equality. If companies choose to release products that are vulnerable to attack, difficult to recover, or contribute to unsafe or biased outcomes, they should be held accountable.

We see fears run rampant today around emerging technologies such as artificial intelligence, blockchain, quantum computing, and cryptocurrencies. Some concerns are justifiable, but much of the distress is unwarranted and propagated by those with personal agendas. Clarity is needed; without intelligent and open discussion, uncertainty takes hold.

As an example, governments have recently expressed significant concerns about the rise of cryptocurrencies because they threaten the ability to control monetary policy, such as managing the amount of money in circulation, and because they can contribute to crime. There have been frantic calls by legislators, some of whom openly admit to not understanding the technology, to outlaw or greatly restrict decentralized digital currencies. The same distrust and perceived loss of control greeted electricity and automobiles when they were introduced. The benefits, then and now, are significant; it is the uncertainty that drives fear.

Fear of the unknown can be very strong among the uninformed. In the United States, people and communities have been able to barter and create their own local currencies since the birth of the nation. It is true that cryptocurrency is used by criminals, but the latest statistics show that cash remains king for largely untraceable purchases of illicit goods, as the preferred reward of massive financial fraud, and as a tax-evasion tool. Cryptocurrency has the potential to institute controls that deliver the benefits of fiat currency far more economically than cash, with added advantages in suppressing criminal use.

Unfounded fears represent a serious risk, and the trend of governments and legislators seeking to ban technology before understanding the balance between its opportunities and risks is getting stronger. Adopting new technologies will always bring elevated dangers, but it is important that we take a pragmatic approach to understanding them and choose a path forward that makes society more empowered, stronger, and better prepared for the future.

As we collectively continue our rapid expedition through the technology revolution, we all benefit from the tremendous opportunities innovation brings to our lives. Even in our bliss, we must not ignore the risks that accompany dazzling new devices, software, and services. The world has many challenges ahead, and cybersecurity will play an increasing role in addressing cyber-attack, privacy, and safety concerns. With digital transformation, the stakes grow over time. Understanding and managing the unintended consequences is essential to maintaining trust in, and adoption of, new technologies.

– Matthew Rosenquist is a Cybersecurity Strategist and Industry Advisor

Originally published in HelpNetSecurity 12/10/2019


Semiconductor Metrology Inspection Outpacing Overall Equipment Market in 2018

Semiconductor Metrology Inspection Outpacing Overall Equipment Market in 2018
by Robert Castellano on 12-31-2018 at 7:00 am

Although uncertainties are mounting about the near-term semiconductor industry, from warnings by companies in Apple's supply chain to the significant drop in memory chip prices, the semiconductor industry has grown every year since the great recession of 2009. Semiconductor revenues have consistently outpaced semiconductor equipment revenues, as I discussed in a November 27 SemiWiki article entitled "The Disconnect Between Semiconductor and Semiconductor Equipment Revenues."

Historically, sales of process control tools have not mirrored sales of the overall front-end equipment market. Chart 1 graphs the year-over-year change in revenues for the total equipment market against the total process control market. Over the eight years of this chart, the cyclicality of capacity-oriented capital spending by logic and memory chip manufacturers is obvious.

In 2017, we witnessed a ramp in memory spending: wafer front-end equipment revenues from memory suppliers reached $27.8 billion, up 63.5% from $17.0 billion in 2016.

2012 saw an industry-wide slowdown in memory-related semiconductor capital spending, which decreased 44.7% from 2011, followed in 2013 by a 20.9% decrease in equipment spending from logic and foundry companies.

For the first three quarters of 2018, wafer front-end equipment revenues increased 19.4%, compared to 26.7% for metrology/inspection companies, according to The Information Network's report "Metrology, Inspection, and Process Control in VLSI Manufacturing."

A Paradigm Shift in Metrology/Inspection Demand

In the past, semiconductor companies attempted to ensure quality and reliability by using the statistical analysis and data analytics capabilities of semiconductor yield-management systems and software. Statistical Process Control (SPC) for semiconductor manufacturing lets a company maximize yield and quality by sampling only a small number of the thousands of wafers processed daily. Thus, revenue growth in metrology/inspection systems has often lagged growth in overall equipment, as shown in Chart 1.
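
As a rough illustration of the sampling idea, here is a minimal sketch of a Shewhart-style control chart: limits are established from a historical baseline, and sampled wafers falling outside them trigger further inspection. This is standard SPC practice in outline only, not any vendor's actual yield-management system, and all defect counts and limits are hypothetical.

import statistics

# Phase I: establish control limits from a historical baseline of
# per-wafer defect counts (values are hypothetical).
baseline = [4, 6, 5, 7, 5, 4, 6, 5, 6, 5]
center = statistics.mean(baseline)
sigma = statistics.stdev(baseline)
ucl = center + 3 * sigma              # upper control limit
lcl = max(0.0, center - 3 * sigma)    # defect counts cannot be negative

# Phase II: monitor a small sample of today's wafers against those limits.
todays_sample = {"wafer_03": 6, "wafer_17": 5, "wafer_42": 19}
for wafer, defects in todays_sample.items():
    if not lcl <= defects <= ucl:
        # Flags wafer_42 (19 defects) as out of control.
        print(f"{wafer}: {defects} defects outside [{lcl:.1f}, {ucl:.1f}]"
              " -> flag lot for full inspection")

The appeal is economic: a handful of measurements stands in for thousands of wafers, which is exactly why metrology/inspection spending historically trailed the broader equipment market.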

However, as semiconductor design rules shrink, yield becomes more sensitive to the size and density of defects. In addition, new manufacturing techniques and device architectures in production, including 3D finFET transistors, 3D NAND, advanced self-aligned multiple patterning, and EUV lithography, are creating a paradigm shift in metrology/inspection demand.

Semiconductor manufacturers decide to purchase metrology/inspection systems based on a number of factors which, taken together, define a "Best of Breed" supplier. These factors include technological innovation, cost of ownership, price, product performance, throughput, reliability, quality, and customer support.

Large companies such as KLA-Tencor and Hitachi High-Technologies face competition from smaller and emerging semiconductor equipment companies, which (1) address specialized markets and (2) use innovative technology to win customers.

For example, Rudolph Technologies' CEO Michael Plisinski noted his focus on a specialized market in his Q3 2018 earnings call:

“Over the years, we’ve steadily grown our position in the RF communications market expanding our customer base to include 4 of the top 5 RF filter manufacturers as well as multiple leading module manufacturers. In fact, this quarter, we sold systems for RF process control to 7 different customers. The majority of these systems were to support investments in the manufacturing of sub-6 gigahertz devices for the initial build-out of 5G infrastructure.”

RF communication devices currently constitute a $10 billion market (the total semiconductor market is $450 billion), but demand will mushroom with the introduction of 5G networks in 2019.

As an example of a smaller company using innovative technology, Rudolph Technologies (RTEC) recently introduced a new product, NovusEdge, for bare-wafer edge and backside inspection. Edge-die yield is becoming even more critical as semiconductor fabs attempt to save costs by reducing the wafer edge exclusion to produce a larger number of yielding die per wafer.

Rudolph estimates the total available market for edge and backside inspection to be roughly 15% to 20% of the overall unpatterned inspection market, which according to The Information Network was $435 million in 2017, implying an opportunity of roughly $65 million to $87 million.

Several startups are gearing up to compete against market leader KLAC. FemtoMetrix (Irvine, CA) uses Optical Second Harmonic Generation (SHG), a non-destructive, contactless optical characterization method, to characterize surfaces, interfaces, thin films, and bulk properties of materials. FemtoMetrix has already completed its first round of equity financing, in a deal led by Samsung's venture division and SK Hynix Ventures, and has announced a license agreement with Boeing. This type of new technology will eventually compete against KLA-Tencor.

Metrology/inspection equipment companies will benefit from the growth of semiconductors in general, and from the need for greater chip quality and reliability as the industry moves to 3D logic and memory chips and as advanced technologies such as EUV lithography become more commonplace.


Designing a fully digitally controlled DC-DC buck converter

Designing a fully digitally controlled DC-DC buck converter
by Tom Simon on 12-31-2018 at 7:00 am

One of the unsung heroes of our digital world is the modest voltage converter. Batteries and wired power sources rarely match the supply needs of advanced ICs. Leading-edge ICs have multiple voltage domains and very often, as in the case of processors, use dynamic voltage scaling to help conserve power. Looking back at where power converters came from, we can see a lot of progress over the years. Certainly, no mobile device could live with the fully analog converters of the past.

The job of converting DC voltages moved away from linear voltage regulators to switching-based buck converters. Mentor has published a case study that illustrates how the digital content of DC-DC converters has grown as the needs for these converters have changed. The original analog-controlled buck converter, based on an analog PID, used off-chip passives to regulate the output. However, the need to shrink designs, reduce BOMs, and even integrate buck converters into large SoC packages led to the search for alternative techniques to improve output quality.

Replacing the analog PID with a digital control circuit eliminates the need for external passives, helps compensate for transistor imperfections, and allows more control over the stability of the converter. According to the Mentor white paper, there are a number of tradeoffs to consider in the choice of ADC resolution and DPWM (digital pulse-width modulation) resolution. These tradeoffs become more complicated when improving the converter's load transient response during activation of aggressive power-saving modes in the chips it supplies. Mentor cites systems like Intel Speed Shift technology as a source of rapid transitions from high-power to low-power states that can lead to voltage droop in the supply circuits. To combat this, Mentor evaluated the use of digital feed-forward compensation to improve load transient behavior.
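
To make the control concepts concrete, here is a minimal sketch of a discrete PID loop with a load feed-forward term, written in Python for readability. It is emphatically not Mentor's design: all gains, resolutions, and the load-step example are hypothetical, and a real controller would run in fixed-point hardware at the switching frequency.

class DigitalBuckController:
    """Discrete PID plus load feed-forward for a buck converter (sketch)."""

    V_REF = 1.1        # target output voltage (V)
    ADC_LSB = 0.005    # ADC resolution: 5 mV per code
    DPWM_LEVELS = 256  # 8-bit digital PWM

    def __init__(self, kp=0.8, ki=0.15, kd=0.05, kff=0.02):
        self.kp, self.ki, self.kd, self.kff = kp, ki, kd, kff
        self.integral = 0.0
        self.prev_error = 0
        self.prev_load = 0.0

    def step(self, v_out: float, load_current: float) -> int:
        """One control cycle: returns the DPWM duty command (0..255)."""
        # Model the ADC: quantize the voltage error to integer LSB counts.
        error = round((self.V_REF - v_out) / self.ADC_LSB)
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error

        # Feed-forward acts on the load step itself, before the error
        # fully develops, which is what limits the output droop.
        feed_forward = self.kff * (load_current - self.prev_load) * self.DPWM_LEVELS
        self.prev_load = load_current

        duty = (self.kp * error + self.ki * self.integral
                + self.kd * derivative + feed_forward)
        return max(0, min(self.DPWM_LEVELS - 1, round(duty)))

ctrl = DigitalBuckController()
ctrl.prev_load = 1.0            # steady state at 1 A
print(ctrl.step(1.06, 5.0))     # 40 mV sag during a 1 A -> 5 A load step: prints 28

The design point the sketch illustrates is that the feed-forward term reacts to the load step directly rather than waiting for the voltage error to accumulate through the PID, which is why it helps limit droop during rapid power-state transitions.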

While these approaches are likely to be effective at solving the design problem and helping the system meet its specifications, they also introduce new design and verification challenges. Foremost among these is the smaller simulation time step needed to capture the circuit behavior; simulation runtimes explode with rule-based analog-to-digital simulation integration. Mentor's new Symphony AMS simulation environment uses its Boundary Elements (BEs) to connect the digital and analog domains and speed up simulation times.

In the white paper, Mentor simulated a fully digitally controlled buck converter on a 16-core Xeon E5-2682 v4 CPU, comparing runtimes between Symphony and one of the incumbent AMS simulation environments. Symphony running on one thread was 10.31X faster, reducing 187.8 hours of simulation to 18.2 hours. With eight threads, the advantage grows to 42.2X, with a runtime of only 4.45 hours. Along with this impressive performance gain, the BEs also provide improved visualization of the analog and digital portions of the design.

Mentor also looked at the effectiveness of the proposed design in limiting Vout droop during load transients. In simulation, the droop was reduced by 250mV at a supply voltage of 1100mV. They also examined how well the converter compensated for higher internal loss by increasing the duty cycle; in an ideal buck converter the output is roughly the duty cycle times the input voltage, so covering internal losses requires running at a slightly higher duty cycle. Because this effect is usually observed over a longer time interval, it had previously proven difficult to model. In their simulations, Symphony showed a 25mV improvement in voltage stability over a period of 1.4us, with a residual droop of only 5mV. To validate the results, a test chip was fabricated, and the Mentor white paper goes through the silicon measurements to illustrate the accuracy of the simulation results.

I review a lot of white papers from various vendors, and I have to say that this one was particularly informative and backed up with meaningful real-world data. More information about Mentor Symphony is available on the Mentor website.