Synopsys and Synaptics Talk About Securing the Connected Home
by Tom Simon on 07-23-2019 at 10:00 am

Like many people, I have been adding automation to my home, and the number of connected devices I use has slowly but steadily increased. These include light bulbs, cameras, switches, a thermostat, a voice assistant, etc. Between them, they know when I am home or away, and have the ability to record images and sound. In addition to privacy concerns, if they were hacked, they could turn off my security system, hijack my heat or air conditioning, and potentially control home appliances such as ovens and refrigerators. It should be clear that connected home devices can cause real harm if they are compromised. As a consumer I need to trust that the embedded systems in the devices are secure.

 

What are the best practices for designing home automation hardware so it is secure? According to Synopsys and Synaptics in their recent webinar on “Securing Connected Home Devices Using OTP NVM IP”, it starts with creating a trusted execution environment (TEE). Krishna Balachandran, Product Marketing Manager for NVM IP at Synopsys, and Jingliang Li, ASIC Architect in the IoT Department at Synaptics, describe how one-time programmable (OTP) non-volatile memory (NVM) is an excellent choice for creating the foundation for a TEE.

Indeed, they point out that more is at stake here than just the security of your home: copyrighted media in the form of music, video, and images also passes through connected home devices. In the webinar they discuss how the starting point for a TEE is firmware in ROM plus secure keys, so that the firmware can be validated and the keys can be verified. The unique keys are created and then stored in the device using OTP NVM that is programmed when the device is manufactured.

OTP NVM can be programmed at the time of manufacture using an externally supplied programming voltage, or it can optionally be programmed later using the chip's supply voltage with a built-in charge pump. It is also possible to permanently disable programming once the desired contents are written, adding further security.
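To make the root-of-trust idea concrete, here is a minimal sketch of how boot code might check a key against a digest held in OTP before trusting a firmware image. It is an illustration only: the structure layout, lock-bit convention, and digest function are assumptions made for the sketch, not the Synopsys OTP NVM IP interface, and a real TEE would use a proper cryptographic hash plus a signature check.

```cpp
#include <cstdint>
#include <cstddef>
#include <cstring>

// Hypothetical view of a locked OTP region; layout and names are invented.
struct OtpSecureRegion {
    uint32_t lock_bits;      // programmed, then locked, at manufacture
    uint8_t  key_digest[8];  // digest of the firmware-signing public key
};

// Stand-in for a real cryptographic hash (a production TEE would use SHA-256
// from ROM); FNV-1a is used here only to keep the sketch self-contained.
static void toy_digest(const uint8_t* data, size_t len, uint8_t out[8]) {
    uint64_t h = 1469598103934665603ULL;
    for (size_t i = 0; i < len; ++i) { h ^= data[i]; h *= 1099511628211ULL; }
    std::memcpy(out, &h, sizeof h);
}

bool key_is_trusted(const uint8_t* pubkey, size_t key_len,
                    const OtpSecureRegion& otp) {
    // Refuse to proceed if the OTP region was never locked: an unlocked part
    // could have its root of trust reprogrammed in the field.
    if ((otp.lock_bits & 0x1u) == 0) return false;

    // Compare the digest of the key shipped with the firmware against the
    // digest burned into OTP when the device was manufactured.
    uint8_t digest[8];
    toy_digest(pubkey, key_len, digest);
    return std::memcmp(digest, otp.key_digest, sizeof digest) == 0;
}
```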

A big advantage of NVM OTP IP from Synopsys is that they have implemented critical features to ensure security. Krishna talked about how the data is stored in complementary bit cells so that it is not possible to detect data values by monitoring fluctuations in supply voltage. They have also implemented detection for supply voltage tampering, which is sometimes used to compromise on-chip security.

During the webinar Jingliang talked about how Synaptics has used Synopsys NVM OTP IP for many generations of products. Krishna reviewed the process nodes that are supported, and those that are in the qualification process. Even though OTP NVM works on standard CMOS processes using foundry design rules, there is an extensive qualification process, where Synopsys works with the foundry to verify the yield and operation of the OTP NVM.

What was also interesting to learn is that Synopsys OTP NVM IP is useful for more than just key storage. They have several families, each targeted at a specific application. For up to 128Kb, they offer the XBC family. For larger sizes, which would be suitable for code storage, they offer the XHC family for 256Kb up to 1Mb. They have a specialized security-oriented family called XCS. In all they cover 180nm to 7nm, with operating voltages from 1.8V up to those of BCD and HV nodes.

Webinars that include product users are always much more informative. Synaptics has a long and deep history of working in the smart home area. They are the main supplier of Android-based TV platforms, and they also work with Google on a range of connected home products, so it is good to hear their perspective on effective methods for adding security. If you want to watch the entire webinar, the replay is available on the Synopsys website.


PSS and Reuse: Great Solution But Not Hands-Free
by Bernard Murphy on 07-23-2019 at 6:00 am

If you’re new to PSS you could be forgiven for thinking that it automagically makes stimulus reusable, vertically from IPs to systems, horizontally between derivatives and between hardware-based and software-based testing. From a big-picture point of view these are certainly all potential benefits of PSS.

What PSS does provide is a standardized way to describe test intent declaratively. The declarative part is important; this means that you describe what should be tested rather than the detailed mechanics of how it should be tested. This abstracts the test intent from the test implementation, providing that standardized mechanism which should cover the needs of many possible types of testing.

This is not unlike the benefits that derive from object-oriented programming. Classes hide the details of how something is implemented, say rotating a graphical image, so that an object based on such a class can easily be rotated without a user of that class having to worry about the details. Object-oriented programming (OOP), in languages like C++ or Java, gives you the mechanisms to do that hiding, but there's nothing magical in delivering the underlying methods. You (or someone) still have to flesh out the implementation; it's just more easily reusable across applications. If you don't think carefully about reusability, there can be limited gain in using OOP. You may benefit in your own code but no one else will.
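As a minimal illustration of that hiding, here is a toy C++ class for the rotation example; the class and its point-based representation are invented for this sketch.

```cpp
#include <cmath>
#include <utility>
#include <vector>

// Toy example of complexity-hiding: callers state intent ("rotate by N degrees")
// without knowing how the coordinate transform is implemented.
class Image {
public:
    explicit Image(std::vector<std::pair<double, double>> points)
        : points_(std::move(points)) {}

    // Public intent: rotate the image about the origin.
    void rotate(double degrees) {
        constexpr double kPi = 3.14159265358979323846;
        const double r = degrees * kPi / 180.0;
        for (auto& p : points_) {
            const double x = p.first * std::cos(r) - p.second * std::sin(r);
            const double y = p.first * std::sin(r) + p.second * std::cos(r);
            p = {x, y};
        }
    }

private:
    // Hidden implementation detail: the representation (and the math) can
    // change without breaking any code that calls rotate().
    std::vector<std::pair<double, double>> points_;
};
```

Anyone can reuse Image::rotate(), but only because someone wrote and maintains the trigonometry behind it; the same is true of PSS test content.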

The same applies to PSS, but hardware verification is an inherently very collaborative exercise, so reuse across multiple applications is even more important: vertically, horizontally, and between hardware-driven and software-driven testing. This means that the implementation layer – the connection between PSS models/scenarios and the various testing targets (IP/subsystem/SoC, UVM/C++) – must be designed with reuse in mind, tuned to your verification methodology and shared between these targets.

Matthew Ballance (PE and PSS technologist at Mentor) has written a very nice white-paper on this topic. As a sidebar, I write quite a lot so I’m always working at improving my skills. I also read a lot of material written by others. Quite a lot of what I read gets the job done but can be challenging to absorb because it feels like it was written by someone for whom writing does not come naturally. I have to tip my hat to Matthew as a polished writer, easy to read and to get his point across in one pass.

But I digress. Advice in the paper breaks down into a few main sections: identifying opportunities for reuse in existing testing code (SV or C++), building reusable PSS libraries, reusable data generation and checking, and making test realization reusable. In the first section he suggests looking at constraints, particularly SV constraints of a global nature, such as those for configuration classes and operating modes. He also suggests looking at memory maps, which can equally provide a source of PSS constraints. In test realization (the implementation topic above), you're very likely to have standard UVM routines that perform basic operations on an IP such as configuration, reads, and writes. These can be reused in the PSS test realization. He also shows an example of a similar C++ access function.
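As a sketch of what such a shared access routine might look like (the IP name, register offsets, and bus stub below are invented for illustration, not taken from the paper), a C++ helper can be written once and then called from a standalone test or referenced from PSS test realization, for example through PSS's import-function mechanism or a thin exec-block wrapper:

```cpp
#include <cstdint>
#include <map>

// Hypothetical register map for an illustrative IP block; offsets are invented.
namespace my_ip {
constexpr uint32_t kBase      = 0x40000000;
constexpr uint32_t kCtrlReg   = kBase + 0x00;
constexpr uint32_t kStatusReg = kBase + 0x04;

// In a real environment these would drive a UVM register adapter or bare-metal
// pointer accesses; a simple map stands in for the bus so the sketch runs.
inline std::map<uint32_t, uint32_t>& bus() {
    static std::map<uint32_t, uint32_t> regs{{kStatusReg, 0x1u}};
    return regs;
}
inline void     write32(uint32_t addr, uint32_t data) { bus()[addr] = data; }
inline uint32_t read32(uint32_t addr)                 { return bus()[addr]; }

// The reusable access function: written once, callable from a standalone C/C++
// test and from PSS test realization, so the low-level sequence lives in one place.
inline void configure(uint32_t mode) {
    write32(kCtrlReg, (mode & 0x7u) | 0x1u);                         // set mode bits, assert enable
    while ((read32(kStatusReg) & 0x1u) == 0) { /* poll for ready */ }
}
} // namespace my_ip
```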

(Tricky question here. Do you move that function to PSS? Probably not, because you may still need to use it in standalone UVM. So you have a wrapper in PSS and reference the UVM function. But then how do you ensure someone doesn’t change the function in UVM and break the PSS dependency? Clearly you need careful configuration management here.)

On building reusable PSS libraries, he points out that the standard is too new to already provide a library of predefined types and other objects. Those will presumably come in time; for now he suggests some basic types you should consider. For checking he suggests something that seems self-evident but will no doubt require care in design: when you're building checks for an IP, separate those that will be globally meaningful from those that only have local relevance. The global checks are the ones you will want to use at the SoC level.

The last and longest section is on reusable test realization. Here he suggests you first focus on a common API that can be reused at the SV level and at the C++ level. Mentor offers a Micro-Executor (UEX) API at the C-level to simplify this task. The goal of this package is to provide a standardized realization interface to memory management, interrupt handling and thread handling, with realizations for SV/UVM and C at bare-metal and OS levels. This makes sense to me – not trying to bridge the whole gap between hardware-driven and software-driven interfaces in a standard library but rather bridging part of that gap.

This is obviously a complex topic that can’t be boiled down to a short how-to guide, but this seems like a good starting point. You can download the white-paper HERE.


eSilicon’s Latest SerDes Solution is Here – And It Took A Village
by Randy Smith on 07-22-2019 at 10:00 am

I recently watched a webinar given by eSilicon about its project to enhance its licensable 56 and 112 Gigabit per second PAM4 & NRZ DSP-based SerDes family in 7nm. Coordinating a webinar with a host, a moderator, and three different technical presenters is complicated enough – but compared to delivering 56 gigabits per second and beyond, the webinar is a walk in the park. In some systems the number of SerDes lanes is approaching 300, system power is exploding, and legacy backplanes simply won't work. This was never going to be easy.

Mike Gianfagna, eSilicon's VP of Marketing, was the host and opened the webinar. Dan Nenni of SemiWiki then acted as moderator and gave some opening comments. Following that was Al Neves, CTO of Wild River Technology, who was responsible for designing the test board to measure and deliver the needed performance and compliance. Then came Matt Burns, Product Marketing Manager–High Speed for Samtec, who discussed what was done to deliver the required connectivity. After that was Tim Horel, Director of Field Applications at eSilicon, who talked about putting it all together with eSilicon's SerDes. To wrap it up there was a Q&A session; I imagine not all questions could be handled during the webinar, but I would expect eSilicon, given their high customer service standards, to follow up on any remaining questions offline. Though the session was not short, it moved quite quickly given all the information to cover.

Wild River Technology implemented the test board design. Al Neves spoke about the challenges to be faced in this area. Al and Wild River are the world’s experts when it comes to signal and power integrity for high-speed designs. They have been delivering high-speed designs to many companies for a long time and they have not lost their touch. Moreover, prior experience with these exotic waveforms is critical to solving the challenges that come up in this area, especially if you intend to deliver on time. Clearly, Wild River Technology had delivered.

Samtec also brought a lot to the table. eSilicon had determined to build the core of the design around Samtec’s Bulls Eye® Test Point System. This advanced cabling solution met the necessary high signal and power integrity requirements. Importantly, Samtec and Wild River made a great team as the project required a close working relationship between the cable solution provider (Samtec) and the board designer (Wild River). Samtec’s models also proved to be quite accurate, enabling a more predictable schedule.

As Tim Horel got into the specifics of the project, including utilizing some ideas Al Neves had proposed on how to run the project to develop this solution, I must admit I was having a hard time keeping up. This stuff is simply hard. Experts are needed. It shows how semiconductor IP has accelerated the semiconductor market. Few companies, maybe none really, can have all the expertise in every area in-house. Buying or licensing proven IP is the best solution for both making schedule and staying under budget.

If this stuff interests you, or if you simply want to learn more about this bleeding-edge technology, the webinar replay and a white paper are available by going here. eSilicon has edited the replay down to 30 minutes, so you can learn a lot with a small investment of time. The product description of eSilicon's 56G & 112G PAM4 & NRZ DSP-Based Long-Reach SerDes Family in 7nm can be found here. Interesting and educational – have fun!


Fairchild’s Death March
by John East on 07-22-2019 at 6:00 am

Death of Fairchild

The “20 Questions with John East” series continues

How did it end for Fairchild?  Badly!!!

In 1966 Fairch was the number one supplier of integrated circuits.  That was as it should have been.  After all,  Fairch had invented the IC.  But in 1967 TI passed them.  Still, Fairch remained a strong #2.  By the time that the mid-seventies arrived,  though, they were fading.  Motorola and some others had passed them in sales by then. Fairch was clearly struggling and beginning to look like an acquisition target.

After some near-deals, Schlumberger bought Fairchild in 1979 for $425 million. Schlumberger was a very successful supplier to the oil and gas exploration industry.  Over the years I've been really impressed with the way they've done business, but in this instance their past successes led to a terminal case of hubris. They put Tom Roberts, a financial type with no experience as a CEO and no semiconductor background either, in charge.  He fared badly.  Very, very badly!!  The death march had begun! In 1985 Don Brooks, a well-regarded TI executive, replaced Roberts as CEO, but the damage had already been done.  Revenues continued to decline. Eventually, Schlumberger decided to sell.  A Schlumberger spokesman explained,  "Silicon Valley ain't the oil business!"   In came an offer from Fujitsu.  The offer was for only $245 million – a small amount for a company with sales of $400 million annually – but Fairch jumped at it.  Terms were agreed to — all that remained was government approval.  It never came.  The US government refused to approve the deal, arguing that the sale of a technology company to a foreign entity was not in the best interests of the United States.  In the end, Fairch agreed to sell themselves to National Semiconductor at the shockingly low price of $122 million.  To put that in perspective, today TI is valued at $110 billion and Intel at $210 billion — on its death bed, Fairchild was worth about 1/2000 the value of Intel today.

Fairchild started out as the king of the hill.  The darling of Wall Street. They ended up virtually worthless.

Note:  National spun out a “new” Fairchild in 1997.  It wasn’t the real Fairchild.  They got out of the traditional IC rat race and into new product categories.  Power devices. Power discretes. Power analog. High voltage. Opto couplers etc.  They were quite successful — this new “Fairchild” was a winner.  The new management did a wonderful job!!  But it was Fairchild in name only.  It wasn’t even close to “our Fairchild”. The traditional IC inventor and powerhouse Fairchild was dead.  It had died a slow and painful death. 

Why?  What happened?  In my view there were three major causes.

#1.  The exodus

Fairchild could never keep their most important people.  Not long after the invention of the integrated circuit, internal strife broke out between some of the traitorous eight.  The result was that four of them left in 1961 to found Amelco.  Then,  of course,  Moore and Noyce left in 1968 to found Intel.  The last of the eight to leave was Julius Blank in 1969.  You’d think that there would have been great fanfare.  There wasn’t.  One day he was just gone.

The real personnel pirate when I first got there, though, was National Semiconductor.  In 1966 Charlie Sporck left his job at Fairchild to head up National.  A short while later Charlie recruited a trio of top Fairchild managers including Pierre Lamond.  (Pierre eventually became a huge success in the venture capital field.  Today he's a partner at Eclipse Ventures.  Pierre is 88 years old,  but has a ton of energy!!) Over the next couple of years, many key managers and engineers left Fairchild to go to National.  So — Intel wasn't public enemy number one when I got to Fairch — National was.  But then Intel took their turn at raiding — and they did an excellent job of it!!  Volumes of wonderful engineers and scientists made the jump to Intel.  Eventually even AMD took a turn.  Jerry Sanders and John Carey, of course, had been fired when Les Hogan came in — victims of "Off with their heads."  They went on to found AMD and, I'd imagine, took great delight when their turn came.

The bottom line — Fairchild just couldn't hang on to their most important employees.  The key Fairchild engineers usually ended up making huge contributions — but not at Fairchild.

#2 MOS happened

Fairchild started out using bipolar transistor technology.  No surprise there. MOS was technically conceivable in the early days, but in the real world it couldn't be made profitably.  The potential benefits of MOS were known, but always just out of reach.

In those days Fairch didn't really understand mobile ion contamination.  Or work functions.  Or surface states.  Or oxides.   To grossly oversimplify, no one knew how to control the thresholds, which moved substantially during life test. You couldn't reliably turn off N-channel transistors.   So — MOS at Fairchild in the early days was P-channel.  And sadly,  P-channel MOS was slow! There was one beautiful thing, though.  PMOS was a five-mask process.  PMOS wafers were really cheap and easy to make so long as you didn't mind bad sort yields and slow, unreliable parts.

CMOS had been conceptualized,  but it seemed totally out of reach in those days. It was widely recognized at the Fairchild R&D facility in Palo Alto that there were solutions to these problems and that the upside of MOS (particularly CMOS) greatly eclipsed that of bipolar.  The roadmap was pretty clear:  Get rid of the contamination. Make silicon gate work. Switch to CMOS. And finally, scale like crazy!!  Scaling helps MOS greatly but helps bipolar only a little.  The good news:  that roadmap was followed and the problems were solved.  The bad news:  It didn't happen at Fairchild.  It happened at Intel. And AMI. And Micron. And Mostek. And many other companies.  But Fairchild never really succeeded on the MOS battleground.

Somewhere around 1974, though, Fairchild came up with a counter-punch.  The Isoplanar process.  A team working under Doug Peltzer developed a new process for making bipolar ICs.  The new process used oxide sidewall isolation instead of the traditional reverse-biased junctions.  That would make the die a lot smaller and the parasitic capacitance a lot lower if only they could get the yields to a respectable point.  After some tough battles with Iceo – the traditional bane of bipolar transistors – they did it.  Ergo — faster and cheaper!!  By quite a bit! Isoplanar bipolar technology staved off the MOS hordes for probably ten years longer than would otherwise have been the case.  But the fabs kept scaling.  Two microns went to 1.2, then to 1.0, then .8, then .5, etc.  MOS kept getting better and better.  Bipolar couldn't keep up.  Today, for the most part, bipolar is a thing of the past.

(Note:  I was a bipolar transistor circuit designer.  Guess that explains why I’m having such a hard time getting a job.)

#3.  Product planning

In the late 90s we hired a consulting company at Actel.  (Name withheld to protect the guilty.)  After the normal lengthy and expensive consulting process, the consultants concluded that Actel was too focused on products. I grudgingly accepted that at the time, but I was wrong.  The fact was, we weren’t focused enough on products.  In the non-commodity IC world, your product is all that matters.  Branding works well for Apple!!  People will go out and buy a product just because it’s an Apple product.  But, when you’re selling to Cisco, what they want is the best product for the job they’re trying to do.  What does this have to do with Fairchild?  Fairchild never put together a product planning system that really worked.  Other than Isoplanar bipolar memories and a relatively small line of ECL products, they never seemed to innovate products that customers needed.

Proof of that came when Schlumberger took over Fairchild.  They put a ton of capital into the company —  rumors had it that Schlumberger invested the better part of a billion dollars after they bought the company.   They bought better fab equipment and better testers.  They improved their assembly lines. They also put more money into marketing and selling.  But they didn’t have a significant product that the world needed.  The capital spending was to no avail.  Sales didn’t rise at all.  In fact, they fell.  No good products – no sales!

Long story short?  Fairchild invented the integrated circuit and kicked off an industry that today hovers at around $400 billion annually.  They were at the root of the creation of probably hundreds of successful companies and many thousands of millionaires.  Along the way they helped create probably a few dozen billionaires as well.  But,  when the dust settled,  Fairchild  had failed.  They were worth next to nothing and the dregs that had some minuscule value lay in the hands of their once most despised competitor.

And Sherman Fairchild turned over in his grave.

Next week:  AMD and some Jerry Sanders stories.  (TJ Rodgers too!!)

See the entire John East series HERE.


Great, early signs of a recovery in logic, not memory
by Robert Maire on 07-21-2019 at 12:00 pm

A "logic-led" recovery confirmed – memory still mired. 3400C = "Third time's a charm." EUV finally accelerates as all ducks are now in a row.

ASML posted a good quarter with great orders and capped off with a strong outlook for the current quarter.  Logic demand is sparking a recovery while memory remains essentially dead in the water.

ASML reported EUR2.6B in sales and EUR1.13 EPS, easily beating street earnings estimates even though revenue was in line. More importantly, guidance is for Q3 revenues of EUR3.0B with GM of 43-44%. Logic was a strong 61% of business, doubling from earlier levels. Memory demand, which was expected to be down 20%, is now expected to be down 30%.  Bookings were EUR2.8B and included 10 EUV systems. Two thirds of bookings were for logic.

Logic comes to life first ….. Memory still dead

In our note last week from Semicon West, we said that logic would likely lead the recovery, led by 5G demand, while memory would remain dead for the foreseeable future. That is exactly what ASML said today.  ASML's report is proof of our view of a different kind of recovery and a new normal of weaker memory.

ASML said that memory is now expected to be down 30% rather than the earlier 20% which suggests it may get worse before it gets better.

We had suggested that 5G would be the tip of the spear of a logic-led recovery, and ASML also pointed to 5G as the driver of new technology nodes. (Maybe ASML is reading our notes…)

3400C – Alpha, Beta now “production”….

"Third time's a charm"

We think that, aside from the view that EUV is finally accelerating in the market, ASML coincidentally now has a tool that's truly ready for prime time, and that is the 3400C.

The first 3400 version was likely more of an "alpha" stage tool, followed by an improved "beta" tool, and now the 3400C has incorporated many fixes and improvements that make it a true "production" tool that customers can use in high-volume production.  The increase in orders seems to support this view, and customers we have spoken to seem to view the 3400C as a production-ready tool with key, needed improvements.  It's likely that this version could turn out to be a big seller.

It seems to us that EUV now has most all of its ducks in a row to accelerate into production.  This does not mean that there isn't a ton of work still to be done, but we think it's safe to plow ahead and go into full production, as we are past the point of no return.

A longer, deeper, memory downturn???

We think there is one very key piece of evidence that points to a deeper and different memory down cycle compared to prior down cycles.  In prior downturns, memory makers just idled in place, turning out chips while waiting for demand to pick up, suck up excess capacity, and restore the supply/demand balance, thus restoring pricing.

This cycle is very different in that memory makers are actively cutting wafer starts to artificially reduce supply to try to bring back a supply/demand balance rather than hope that demand recovers.

This means that memory makers will have a lot of idle semiconductor equipment sitting in their fabs, turned off, waiting for memory demand to get back in balance.  It also means that memory makers will have a lot of excess capacity sitting off line that can be brought on line very quickly, which could further dampen a recovery.  It also means that it will take much, much longer for memory makers to start buying capacity-related tools, as they are already sitting on idle tools.  We would point out that this is not necessarily true of litho purchases, as litho purchases are technology driven, not capacity driven, with the memory industry looking at transitioning to EUV at some point in the future unrelated to the glut in capacity.

Winners are different in a logic-led recovery

As we clearly pointed out in last week's note, a logic-led recovery does not equally raise all boats in the semiconductor equipment industry. Lam is the poster child of the memory industry spending spree, followed not too far behind by Applied.  ASML is clearly the leading indicator of logic, as EUV isn't currently used in memory (even though ASML still sells a lot of DUV tools into memory). KLA has historically been a logic/foundry house that closely follows the litho lead into new geometries.

Just as important we would point out the timing differences.  ASML has lead times of up to 18 months on EUV systems while process equipment makers tend to be a “turns” business. This means that the orders being seen currently by ASML may not be seen by Lam or Applied for another year, until the litho systems get installed and working.  KLA will likely slightly lag ASML tools as KLA also has longer lead times and you need to get your process worked out before you order a lot of process tools to fill out the fab.

The bottom line is that this early, logic-driven recovery signaled by ASML does not immediately translate into better business for all in the equipment industry… but it's not a bad omen… it's a good start.

The stocks

We think this news is more ASML specific but will obviously have some positive collateral impact across the chip sector. The good ASML news is coupled with the somber fact that memory still sucks and the Sword of Damocles that is China trade is still swinging over the industry's head.  The fact that 5G is both at the center of the logic recovery and at the center of the Huawei dispute makes for an interesting dynamic.

In general, we would want to be long ASML and KLAC while potentially flat or short LRCX and AMAT. This could be an interesting “pair trade”.  ASML said in essence that their view of memory had deteriorated from down 20% to down 30% and that type of report would certainly be a net negative for AMAT and LRCX in their upcoming earnings.


Could Trump Slump Give Mobility a Bump?
by Roger C. Lanctot on 07-21-2019 at 6:00 am

Reports from Reuters last week suggested that the European Union member states “should brace for U.S. tariffs on several fronts in the coming months,” based on the comments of a senior German official who had met with “U.S. officials and lawmakers” in Washington. Among other sectors in the Trump Administration’s crosshairs vis-a-vis the EU was automobiles.

https://tinyurl.com/y2y58yfp – “Europe Should Brace for Tariffs on Several Fronts – German Official” – Reuters

This was only the latest twist in the ongoing automotive industry disruption emanating from Washington, which has coincided with a global downturn in vehicle sales – as reflected in recent reports from LMC Automotive and other industry forecasters. While LMC and its peers lay the blame for the decline on many causes, including changing emissions regulations in China, the largest global auto market, the Trump Administration factors into diminished growth calculations alongside Brexit and softness in demand elsewhere in Asia.

Some analysts also perceive a hit to vehicle sales arising from signs of declining vehicle ownership in both developed and developing markets – as consumers shift their dollars away from the expensive proposition of owning cars. It’s too early to say for sure, but a collateral outcome of tariff-based warfare between the U.S. and the rest of the world could stimulate mobility services like ride hailing and car sharing. It is easy to see how in China, Russia, Africa and Latin America, companies such as DiDi, Yandex and Uber are presenting a very real alternative to car ownership.

To be sure, forecasters have been reducing forecasted sales volumes for cars on an annual basis for several years. The latest adjustments, however, reflect a more dramatic downturn signaling an overall decline while also becoming something of a self-fulfilling prophecy: the market looks like it is declining, so the industry prepares for that decline.

One thing the Trump Administration has indisputably introduced into auto makers’ planning is a huge helping of uncertainty. Threats of import tariffs have stalled production and plant construction plans in the U.S. for some importers. And they have failed, in the short term, to stem auto industry job losses or stimulate hiring.

In effect, the ongoing confrontations over imports with China, Japan, Canada, Mexico, and the E.U. have frozen auto makers like deer in the headlights. At a time when the industry is in desperate need of maximum flexibility and nimbleness to respond to rapidly changing technology, the Trump Administration has instituted a global condition of strategic vapor-lock. Auto makers simply don’t know how matters beyond their control will sort out in the short or the long run.

This uncertainty is now telegraphed throughout the entire supply chain. Major suppliers to the auto industry are more than 12 months into their efforts to shift production away from tariff-impacted sources if waivers cannot be obtained. The global downturn has one certain impact, which is to shave several percentage points off market growth – while threatening to take growth entirely off the table.

So car makers reduce their production targets. Suppliers reduce their expectations. Investment in plants, equipment and personnel is stalled. Progress comes to a standstill.

Trump’s slump?

Is this Trump's slump? Concluding that might be going too far. One thing is clear, though: the auto industry and investors have little tolerance for uncertainty, and in the current environment the only certainty is uncertainty. That is not good for any car company, regardless of where they make or sell their cars.

Another certainty is at risk in this equation. The automotive industry was once a reliable source of unmitigated growth thanks to steady demand in developed markets and growing demand from the developing world. That, too, is in question as mobility alternatives (ride hailing, car sharing) emerge in the developing world as an alternative to vehicle ownership and as developed countries use rules and congestion charges to limit access of privately owned vehicles to city centers. The trade wars of the Trump slump may actually end up giving a bump to mobility operators. While it may be good news – MAYBE – for mobility operators, it will be a bitter pill to swallow for auto makers.


5G Faith & the Automotive Industry
by Roger C. Lanctot on 07-19-2019 at 6:00 am

As the automotive industry slowly comes to terms with the implications of 5G connections in cars, there are few outside the industry who understand the significance of this technology transition. It isn't just a technology turning point. It is an emotional turning point for the industry as well.

Announcements are beginning to appear from telematics suppliers such as Harman International regarding program wins for 5G hardware – modules and antennas – indicating that 5G is indeed coming to embedded automotive connections. This kind of news arrives even as 5G skeptics express their concerns regarding radiation, wireless network coverage and the need for thousands of micro-cells, and the varying allocation of spectrum around the world.

It's a clear case of ready-or-not, here it comes! For car makers it is a case of deja vu all over again. Therein lies the rub. Wireless carriers giveth and wireless carriers taketh away.

Wireless carriers brought the automotive industry low-speed, analog 2G connections. Years later those analog networks were replaced by incompatible digital systems, disconnecting hundreds of thousands of connected vehicles with automatic crash notification applications. It was years before the impacted car companies had resolved the resulting class-action lawsuits in the U.S.

Within the past two years AT&T in the U.S. shut down its 2G network, once again impacting multiple auto makers, many of whom offered their formerly connected customers aftermarket hardware upgrades. The onset of 5G raises the specter of network transitions once again.

The issue boils down to the fact that wireless carriers simply don’t understand car companies. Conversely, auto makers tend to hate wireless carriers. One measure of the disconnect between wireless carriers and car companies is the fact that many wireless carriers talk about connected cars as being part of the Internet-of-things (IoT) universe.

IoT in the wireless industry is code for low bandwidth applications – sometimes referred to as M2M – machine-to-machine. The IoT world, though, is dominated and characterized by monitoring or diagnostic systems. Automobiles are on the cusp of becoming some of the highest bandwidth using devices on the planet – with the exception of video streamers.

Connected cars are poised to become quite lively users of wireless connectivity. Few wireless carriers are preparing properly for a world where connected cars are not only the largest source of new device connections – a reality that has existed globally for the past few years – but also some of the most intense consumers of bandwidth. That is the real revolution that is sneaking up – not so slowly – on both the wireless and automotive industries. It is also a reason for the introduction of 5G – existing wireless demand is straining capacity.

The fact that the auto industry and wireless carriers have come together under the auspices of the 5G Automotive Association – a 120-member organization spanning carriers, car companies, infrastructure, semiconductor and hardware providers – to sort out technical issues and standards for 5G implementation is a monumental turn for the industry. But one question remains, even as car makers are suspending their disbelief and committing to 5G: This time around, can the wireless carriers make a truly long-term commitment to this latest network architecture – preserving forward compatibility… indefinitely?

The depth of historical frustration and jaded skepticism among long-time automotive engineers as to the reliability of wireless carrier promises is difficult to plumb. The fact that so many have already taken the plunge and begun baking in their 5G plans with all of the technical challenges yet to be overcome is a testament to their fortitude and faith.

Car makers, with few exceptions, are embracing the 5G revolution. That revolution promises to enable and support autonomous vehicles and new, transformative crash avoidance technology while opening the door to richer infotainment experiences in cars. There is no doubt, though, that the adoption of 5G in the automotive industry is a massive leap of faith with fingers crossed in corporate boardrooms from Munich and Stuttgart to Detroit, Tokyo, Seoul, and Shanghai. Hopefully, the industry will avoid a Charlie Brown moment this time around.


Chapter 2 – Constants of the Semiconductor Industry
by Wally Rhines on 07-19-2019 at 6:00 am

In the mid-1980s, Tommy George, then President of Motorola's Semiconductor Sector, pointed out to me that semiconductor revenue per unit area had been a constant throughout the history of the industry, including the period when germanium transistors made up a large share of semiconductor revenue.  I began tracking the numbers at that time and continue to do so today.  So far, it's still approximately true.  If you are making a decision about a capital investment in semiconductor manufacturing, or even an investment decision for the development of a new device, this is a remarkably useful parameter to test the wisdom of your investment.  Figure 1 shows revenue per unit area data for the last twenty-five years (since I didn't keep my records before that time).

There are many possible explanations for why this empirical observation should be approximately correct. One of them is the fact that semiconductor revenue and the costs of semiconductor manufacturing equipment, materials, chemicals, and even EDA software all follow learning curves based upon the number of transistors cumulatively produced through history. Semiconductor revenue follows a learning curve that is parallel to the learning curves for all input costs to the design and production of semiconductors and is decreasing on a per-transistor basis by more than 30% per year (FIGURES 2 through 6). The cost per transistor and the cost to process a fixed area of silicon therefore decrease at a constant rate with the same slope, which is also the same decreasing slope as the revenue per transistor. The ratio between revenue and area therefore stays approximately the same.

Figure 1.  Revenue per unit area of silicon or germanium has been a long term constant of the semiconductor industry

FIGURE 2 provides another observation that most customers of electronic design automation (EDA) software find surprising. I’ve found that most EDA customers think that the EDA industry charges too much for its software and doesn’t feel the same pressure to reduce costs that is felt by its customers, the providers of chips. The learning curve for EDA software refutes this. The number of transistors sold by the semiconductor industry is a published number each year. So is the total revenue of the semiconductor industry and the EDA industry. When the EDA total available market (TAM) is divided by the number of transistors produced, we obtain the EDA software cost per transistor. This then shows that the EDA industry is reducing the cost of its products at the same rate as the semiconductor industry. That is as it must be. If the EDA industry doesn’t keep its learning curve parallel to the semiconductor industry learning curve, then the cost of EDA software as a percent of semiconductor revenue would increase and there would have to be cost reductions elsewhere in the semiconductor supply chain to offset it. As it is, EDA software costs are about 2% of worldwide semiconductor revenue (FIGURE 7) and this percentage has been relatively constant for the last twenty-five years.  This is also a fixed percentage of worldwide semiconductor research and development (FIGURE 8) which has been a relatively constant 14% for more than thirty years.

FIGURE 2.  Learning curve for transistors and EDA software

FIGURE 3.  Learning curve for front end fabrication equipment

FIGURE 4.  Learning curve for lithography and photomask making equipment

FIGURE 5.  Learning curve for semiconductor assembly equipment

FIGURE 6.  Learning curve for semiconductor automated test equipment

FIGURE 7. EDA revenue as a percent of semiconductor revenue

FIGURE 8. Semiconductor Research and Development as a percent of semiconductor industry revenue

Figure 9 shows the annual production of silicon measured by area.  This measurement follows a long term predictable curve.  Actual data moves above and below the trend line as companies over-invest in capacity when demand is strong and under-invest in periods of market weakness.  Investing counter-cyclically seems like a brilliant strategy but it’s very difficult to execute because semiconductor recessions force companies to squeeze capital budgets and to under-invest just when they should be investing.  Even so, this graph is useful because silicon area production is one thing that is predictable at least one year ahead.  We know approximately how much silicon area the existing wafer fabs are capable of producing and we are aware of the new wafer fabs that will be starting production in the coming year.  Wafer fabs that are pulled out of service are a small percentage of the total, especially during strong market periods, so next year’s silicon area is known fairly accurately.  Since market demand is not known, shortages and periods of excess capacity lead to magnified price changes as the capacity grows monotonically. But the growth or decrease of revenue in the coming year tends to be predictable when market supply and demand are reasonably balanced.  We know the revenue per unit area of silicon.  We also  know the area of silicon that will be produced next year.  Multiplying these two numbers gives us the revenue for next year.  That’s a useful number. Figure 10 shows how the annual semiconductor revenue correlates with a calculation based upon silicon area multiplied by the predicted ratio of revenue per unit area of silicon.  I find the correlation to be both remarkable and very useful.

Figure 9.  Area of silicon produced each year

FIGURE 10. Integrated circuit revenue vs calculation from silicon area and the revenue per unit area ratio
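As a quick sketch of the forecasting arithmetic described above, the calculation is simply projected silicon area multiplied by the long-term revenue-per-area ratio; the numbers below are placeholders for illustration, not actual industry data.

```cpp
#include <cstdio>

int main() {
    // Projected silicon area for next year, known fairly accurately from
    // existing fab capacity plus fabs coming on line (hypothetical value,
    // in billions of square inches).
    const double projected_area_bsi = 12.0;

    // Long-term revenue per unit area of silicon (hypothetical value,
    // in dollars per square inch).
    const double revenue_per_sq_inch = 35.0;

    // Next year's revenue estimate is the product of the two:
    // 12.0e9 in^2 x $35/in^2 = $420B.
    const double forecast_billions = projected_area_bsi * revenue_per_sq_inch;
    std::printf("Forecast semiconductor revenue: $%.0fB\n", forecast_billions);
    return 0;
}
```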

Semiconductor units shipped per year is also predictable (Figure 11).  This data from VLSI Technology covers the period since 1994.  While modest deviations do occur in years of severe recession or accelerated recovery, the long term trend is apparent and predictable. If you are purchasing capital for the long term, especially for assembly and test equipment, this data is particularly useful. Unit volume served as a cornerstone of semiconductor forecasts at the time I joined the industry in 1972. As far as I know, the unit volume had grown every MONTH since the start of the industry and it continued to do so until December of 1974, when the oil shock caused an implosion of the market and semiconductor volume fell precipitously.

FIGURE 11. Integrated circuit annual unit volume of sales

One of the most common errors in semiconductor forecasting occurs when forecasters look only at revenue, ignoring the variability of price in the long term trend.  The unit volume is stable and predictable but the price is not. At the Symposium on VLSI Technology in Hawaii in 1990, Gordon Moore and Jack Kilby were present and we all commiserated about the death of Bob Noyce (who was Chairman of SEMATECH at the time) the day before the conference started. Despite his grief, Gordon went ahead with his presentation the next day highlighting what might be referred to as Moore's Second Law, although it never caught on (for good reason). Gordon graphed the average selling price (ASP) of semiconductor components over their lifetimes, especially memory components.  His conclusion was that semiconductor components that start out at higher prices will eventually cost $1.00.  Figure 12 shows the data since 1984.  While the current trend and the distant history suggest that Gordon may have been right, this trend reveals major interruptions, the most notable of which was the DRAM shortage that occurred when Windows 95 was introduced in 1995.  That drove up ASPs and we have been slowly trending down ever since toward the $1.00 asymptote.  The $1.00 price point should never be reached because there will always be newer components coming into the market, but Gordon's hypothesis is certainly interesting if not compelling.

One more interesting statistic is the number of transistors produced per engineer each year (Figure 13).  This is a quasi-measure of design productivity that reflects both the growing number of transistors per chip as well as the increasing volume of chips that have been sold each year. By this measure, productivity has increased five orders of magnitude since 1985.

FIGURE 12.  Average selling prices (ASP’s) of semiconductor components

FIGURE 13.  Transistors produced per electronic engineer

Read the completed series


Semicon West 2019 – Day 2
by Scotten Jones on 07-18-2019 at 10:00 am

Tuesday, July 9th, was the first day the show floor was open at Semicon. The following is a summary of some announcement briefings I attended and some general observations.

AMAT Announcement

My day started with an Applied Materials (AMAT) briefing for press and analysts where they announced “the most sophisticated system they have ever released”.

There are two versions of the system: the Endura Clover system is targeted at MRAM and the Endura Impulse PVD system is targeted at PCRAM and ReRAM. The fact that AMAT has developed a platform specifically for these emerging memories really speaks to their potential in the market. MRAM, for example, is now available as embedded memory from GlobalFoundries, Intel, Samsung and TSMC. It is particularly useful at the edge, where memory spends 99% of its time in standby, and MRAM is lower power than Flash and has no standby power draw. PCRAM and ReRAM are more targeted at the cloud.

MRAM requires stacks of 30+ layers with 10 different materials, with some layers only a few angstroms thick. The new Endura Clover has 7 deposition chambers, each of which can deposit up to 5 materials with sub-angstrom uniformity; there are also a pre-clean chamber and an oxidation chamber. A typical MRAM has a bottom electrode, a reference layer, an MgO barrier, a free layer and a top electrode.  The system offers heating to crystallize layers and cryogenic cooling for sharp interfaces. AMAT claims a +20% performance gain and greater than 100x better endurance. The critical MgO barrier is RF sputtered from a ceramic MgO target.

Figure 1. Applied Materials Endura® Clover™ MRAM PVD System. Photo courtesy of Applied Materials.

PCRAM has fewer layers, but the compound materials are very sensitive to contaminants. The Impulse PVD system is claimed to offer tight composition control along with thickness and uniformity control.

Figure 2. Applied Materials Endura® Impulse™ PVD System. Photo courtesy of Applied Materials.

The systems offer on-board metrology with a spectrograph with 1/100 nanometer resolution for on-die measurement in vacuum.

AMAT has 5 customers for the MRAM configuration and 8 customers for the PCRAM and ReRAM version. They are expecting a hundred million dollars in system business this year.

Touring the Show

I spent some time walking around the show and I thought it looked busy for the first day. One thing that struck me this year is that the trend for the major equipment and materials companies not to have booths any more is continuing. Many years ago the major equipment manufacturers such as ASML, AMAT, TEL, Lam and KLA would have huge booths; they all stopped having booths years ago. Over the last two years I have also seen fewer and fewer of the large materials companies on the show floor. I am not sure what the long-term impact on the show will be; there were certainly a lot of booths for the smaller companies. It may just reflect the industry's consolidation to fewer, well-known customers.

For an industry going through a downturn, I thought the mood was pretty good. I did talk to one person who thought the equipment companies were surprised by the downturn in capital equipment spending. Apparently, some of the companies geared up production assuming that last year's spending levels would continue. Personally, I find this surprising; clearly Samsung spending over $20 billion a year on capital wasn't sustainable.

Leti-Fraunhofer Announcement

Tuesday afternoon Leti and Fraunhofer held a joint press conference to announce a new joint initiative.

The goal of the initiative is to work together on solutions for neuromorphic computing. They want to achieve a hardware platform that is like open source for software while addressing trust, safety and security. In the US a lot of data goes into the cloud to be processed and people lose control over their data. In Europe there are a lot more regulations and people own their data. They see the need for AI to run at the edge so people can keep their data on local devices. For example, medical data stays on your own device.

They looked at what each organization can do, including Imec: they see Imec for FinFETs, Leti for FDSOI, and Fraunhofer and Leti for 3D packaging. "If FD12 can't provide the compute power, look to combine FDSOI with FinFET with Imec".

They are approaching this with an Airbus kind of model, having Fraunhofer work with Imec and Leti rather than duplicate what the others are doing. Although Imec wasn't present at the press conference, they were present at the Leti-Fraunhofer technical symposium that night.

They want to find solutions to opportunities in the market. Other entities can join, it is “an open house built on strategy”.

They have gotten positive feedback from the European Commission and there may be national programs as well.

Figure 3. The Leti-Fraunhofer press conference. Pictured from left to right Emmanuel Sabonnadière (CEO of CEA-Leti, France), Patrick Bressler (Managing Director Fraunhofer Group for Microelectronics), and Jorg Amelung, Head of research for Fab Microelectronics (Germany) within the FMD group of Fraunhofer.


Location Indoors: Bluetooth 5.1 Advances Accuracy
by Bernard Murphy on 07-18-2019 at 5:00 am

OK, so you're in a giant mall, you want to find a store that sells gloves and you want to know how to get there. Or you're in a supermarket and you need some obscure item, say capers, that doesn't really fall under any of the main headings they post over the aisles. If you're like most of us, certainly like me, this can be a frustrating experience. Modern stores know how to provide up-to-the-minute information, they usually provide wireless internet, and we have our phones. But there's been no way to provide the kind of service we take for granted with GPS – where is the thing I'm looking for and how do I get there? So we have to ask, or check mall maps, try to find the right store, try to figure out where we are on the map and try to figure out how we are oriented relative to the map. What is this, the Dark Ages?

Mega-mall

The problem for indoor positioning is that GPS is quite literally dark indoors; GPS devices can't see satellites in those cases. That's pretty limiting, not just for personal convenience but also for enhanced automation. In Industry 4.0 we want to track assembly progress through a production line, or locate machines having problems – all indoors. In a hospital, nurses want to find a crash cart when a patient goes into cardiac arrest – also indoors. You can't afford to waste time finding out where a cart was last left. Less dramatically (Paddy McWilliams, Engineering Director at CEVA, gave me this one), where the *#!@ did I leave my left earbud (maybe down the side of the sofa)?

Helping find something in these cases is an example of indoor positioning services (IPS). Tracking something which will frequently move (such as the crash cart) is classified as real-time location services (RTLS) and Bluetooth beaconing is a major player in both spaces. When you consider the size of the GPS market and the scope for similar services indoors, it’s not surprising that the Bluetooth beacon market has been estimated at $58B by 2025.

A challenge for Bluetooth in this kind of application has been accuracy, since location has been determined primarily by signal strength. This gives an indicator of distance but not direction, and triangulation to multiple sources doesn't help much given limited range and the need to consider noise and multi-path effects. Wi-Fi with triangulation suffers from similar problems, as well as unpredictable latencies. Another technology, ultra-wideband (UWB), claims accuracy to centimeter levels but is expensive in cost and power, and interference is a concern near critical equipment. Frankly, most of the time we don't need that kind of location accuracy. Get me in the ballpark and I can find it from there.

Bluetooth is a strong contender for IPS and RTLS for multiple reasons. It's already in every mobile phone and virtually every other kind of mobile device. It leads in low power (especially BLE) and low cost. This already makes it a good bet for mass deployment, if only we could fix that accuracy problem. That's what the Bluetooth SIG has delivered with the 5.1 release of the standard, adding angle-of-arrival (AoA) and angle-of-departure (AoD) detection. These add directional information to the distance estimate derived from signal strength, allowing the location of a device to be calculated with much higher accuracy – to within tens of centimeters to a meter in everyday conditions. That's good enough for me to find where the capers are.

Two methods for angle detection let a product developer choose the optimum IPS approach to be used for the target application; AoA is aimed at very low cost tracking tags with location calculations performed by the infrastructure system, whereas AoD allows the location calculations to be performed at the mobile device, allowing greater privacy for the user.
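To make the geometry concrete, here is a minimal sketch of how a single locator might combine a signal-strength-derived distance with a measured angle of arrival to place a tag in 2D. The path-loss constants are illustrative calibration values, the angle is assumed to have already been extracted from the antenna-array measurements, and a real deployment would fuse several locators and filter the estimates.

```cpp
#include <cmath>
#include <cstdio>

struct Position { double x, y; };

// Log-distance path-loss model: convert received signal strength (RSSI) into a
// rough range estimate. tx_power_dbm is the RSSI expected at 1 m and n is the
// path-loss exponent; both are deployment-specific calibration values.
double rssi_to_distance_m(double rssi_dbm, double tx_power_dbm = -59.0, double n = 2.0) {
    return std::pow(10.0, (tx_power_dbm - rssi_dbm) / (10.0 * n));
}

// With Bluetooth 5.1 direction finding, the locator's antenna array also yields
// an angle of arrival; distance plus angle gives a 2D fix relative to the locator.
Position locate(double rssi_dbm, double aoa_radians) {
    const double d = rssi_to_distance_m(rssi_dbm);
    return { d * std::cos(aoa_radians), d * std::sin(aoa_radians) };
}

int main() {
    constexpr double kPi = 3.14159265358979323846;
    // Example: -71 dBm received, tag seen 30 degrees off the array boresight:
    // roughly 4 m away, at about (3.4 m, 2.0 m) from the locator.
    const Position p = locate(-71.0, 30.0 * kPi / 180.0);
    std::printf("tag at (%.1f m, %.1f m)\n", p.x, p.y);
    return 0;
}
```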

Bluetooth 5.1 IPS accuracy is a perfect match in smart manufacturing, healthcare, proximity services in retail, way-finding in airports, shopping malls, and hotels, for all of which, sub-meter accuracy should be just fine. And using an already widely-established standard seems like an obvious advantage in scaling to these levels of deployment.

For RTLS, adding angular information to distance-based estimation can improve tracking of rapidly-moving objects, so it becomes easier to track assets moving around a factory floor. RFID is the classic solution here but is obviously very short-range, whereas Bluetooth 5.1 can reach several hundred meters indoors, making (semi-)continuous monitoring much more practical.

Bluetooth 5.1 comes with some cost. The transmitter and/or receiver need an array of antennae to support angle-detection, depending on whether you want to support AoD or AoA methods. But this is modest compared to UWB and provides an opportunity for solution providers to add further differentiation to their products.

CEVA provides Bluetooth 5.1 compliant IP, for both BLE and Bluetooth Dual-Mode, under the RivieraWaves family, adding to its existing support for location services in GNSS and Wi-Fi-based location. Check them out next time you circle the supermarket three times trying to find those capers.