The real race for superiority is TSMC vs Intel
by Robert Maire on 10-07-2018 at 7:00 am

Recent talk of AMD vs. Intel market share is misguided; the real race for superiority is TSMC vs. Intel, and underlying that, tech dominance between the US and China.

There has been much discussion of late about market share between Intel and AMD and how much share AMD will gain at Intel’s expense due to Intel’s very late 10NM technology node. On the surface this may be a minor symptom of a deeper conflict between Intel and TSMC, and ultimately between the US and China, for technology dominance.

The real root cause of AMD’s resurgence may not be only Intel’s stumble but GlobalFoundries’ stumble as well. GloFo’s inability to deliver to AMD allowed AMD to go outside the relationship and hook up with TSMC, which “leapfrogged” it into a truly competitive position.

At some point in time, it’s not a question of if but rather when, TSMC will be subsumed by China (along with Taiwan). This makes the AMD versus Intel story really all about future technology dominance between the US and China.

Right now TSMC (China) appears to be in the lead even though Intel (USA) appears to be finally recovering from its large 10NM stumble. TSMC long ago won the foundry wars, as it makes Apple’s chips as well as communications and video chips. On the logic side, the only thing left for TSMC to win is CPU/PC/server chips, for which AMD will serve as the vehicle.

Although it can be argued that Samsung is still the number one chip maker by dollar volume, that is due to its dominance of memory; it doesn’t compare to TSMC in the more technologically important foundry market.

Although memory is also extremely important in today’s data-obsessed world, we think China can more easily copy both NAND and DRAM than logic in the future. A number of memory fabs in China have already started on that quest.

With GloFo (owned by Abu Dhabi) out of the race, Intel is the only real US participant in the semiconductor race. Intel in recent years has been focused on everything and anything but core semiconductors: software, AI, AR, VR, drones… and of course Mobileye. One could argue that BK took his eye off the ball, distracted by the lure of shiny new toys. We would hope that Intel’s new CEO returns focus to its core technology heritage and doubles down on it.

Maybe Jerry Sanders’ (the founder of AMD) quote “real men have fabs” is even more important today, as the real “fab king,” TSMC, owns the foundry industry and is the barbarian at the gates of the CPU industry.

TSMC will make more money from AMD than AMD will
We have pointed out in previous articles that TSMC is the dominant partner in the relationship with AMD. AMD needs TSMC more than TSMC needs AMD. Now that the good ship GloFo has been burned to the ground by its 7NM abandonment, AMD has no escape from TSMC island (Samsung is not a viable rescue). TSMC can control AMD’s profit margins by controlling its chip supply costs. It can also control AMD’s market share by dropping AMD’s costs to increase share versus Intel, or raising them to tighten the competition. TSMC is at the controls…

AMD is now largely a captive puppet of TSMC
What is the correct market valuation of a captive puppet? We suggested in previous notes that AMD was getting overly expensive. The market appears to finally, belatedly, be figuring this out. AMD’s stock seems to have discounted huge share gains and profitability, both of which are still well in the future.

We think analysts would be mistaken to assume that AMD’s supply costs from TSMC will fall sharply or even remain constant as TSMC ramps up supply. AMD’s model is not that of a fab owner with high fixed costs and marginal incremental costs. TSMC controls AMD’s financial model going forward.

The relationship between AMD and GloFo was much more even-handed, as both sides needed one another more equally than in the one-sided TSMC/AMD relationship.

Design versus Fabrication
Although modern chip design is more complex and critical than ever, the fabrication of chips in silicon (Moore’s Law) has become exponentially more complex and has hit many more costly and technically difficult barriers. Just compare the revenues of EDA companies versus chip tool companies, or look at the cost of building a fab or a mask set.

While AMD has great CPU designs and great video and AI capabilities, those designs have no value without the ability to fab them. TSMC brings that to the table and at a level equal to and better than Intel can.

At the end of the day, it’s the fab’s capability that wins the race… a great design is worthless without a fab that can make it a reality…

10NM finally out of the woods?
We were the first to break the news of Intel’s 10NM delay years ago. At the time we never would have imagined that the delay would be this long. This was far from normal as previously Intel’s “Tick Tock” strategy was as precise as a Swiss watch. Something happened, not clear what, but we think it was beyond just a technical barrier.

High on the rumor mill of causes of the delay are discussions of cobalt’s insertion into the manufacturing process. This sounds somewhat similar to the industry’s switch from aluminum to copper, only a few orders of magnitude harder.
It is interesting to note that TSMC has not done cobalt yet, so perhaps it has not suffered the same pain but may in the future. It is also interesting to note that GloFo was trying cobalt as well.

In any event it seems that Intel has “broken the code” and is now yielding better or at least enough to ramp production starting next year.

As most investors know, Intel’s 10NM and TSMC’s 7NM are rough equivalents in geometry and performance. Intel ramping in Q1 2019 would put them about 9 months behind TSMC’s leading edge which has already ramped.

It sounds like TSMC is already pushing hard at 5NM so Intel is going to have to go even harder to make up for lost time. So far TSMC has not gotten tripped up but it has yet to do cobalt and/or EUV. Perhaps if TSMC hits some hurdles Intel may have a chance to catch up but I would not count on it.

The US versus China in the fight for technology dominance
China has already proven to be a formidable competitor in software, the Internet, AI, etc. It has a $100B checkbook to advance its semiconductor industry. So far it has made good (perhaps not great) progress. TSMC alone is a big enough prize in the technology race to justify getting even more aggressive on Taiwan. TSMC coupled with some memory fabs on the mainland would be a lethal combination, way bigger than Samsung.

Maybe the US government will wake up and support the US semiconductor industry more. Maybe encourage Intel and Micron to merge. Maybe seek a buyer for GloFo Malta that will pick up the baton. Maybe put export restrictions on key US tool technology to delay China’s ascent.

So far, there is no such support from the government for the industry that holds the key to future technology dominance. There have been some very small import tariffs on Chinese goods related to semiconductors but nothing more.

Even though China is obviously no longer a communist country, we think a Lenin quote is appropriate: “The capitalists will sell us the rope with which to hang them.” In other words, we will continue to supply China with the technology with which they will eventually dominate us.

The Stocks
As we have previously suggested, we think AMD is overvalued, as investors have stampeded into the stock on a good story without digging much deeper. While we think there is a lot of upside in AMD, we think much of it is already in the stock, leaving not a lot of room for any negative news.

As for Intel, ramping 10NM is obviously good news, but it won’t really impact things until at least Q1 2019 at best, and realistically Q2. In the meantime Intel is still short of 14NM capacity, which has caused it to perform unnatural acts to serve unexpected demand. It has also ticked up capex to try to help, but this won’t fix the 14NM shortage for at least a couple of quarters, as equipment has to roll in. This suggests that AMD will get some opportunistic share gains near term.

Semi Equipment Stocks – News remains negative…RTEC preannounces
As for the equipment companies, the news flow continues to be negative. RTEC pre-announced a roughly 10% shortfall in revenue that will cut EPS by a third or so.

The weakness for Rudolph appears broad based, not just Samsung and not just memory, and sounds like a fair number of tools have either slipped into next year or been canceled.

This also fits with our view of another 10% down leg for the quarter, following on KLAC’s reset of Q4 expectations. The chip equipment flu that started with Samsung has spread to Micron, GloFo, TSMC and others. Intel seems to be the only chip company increasing capex near term, but obviously not enough to make up for all the other cuts.

It has been a while since we last had a negative pre-announcement in the chip equipment industry. Things have been that good for too long. RTEC’s pre-announcement underscores that we are in a standard cyclical downturn. At this point H2 2018 is all downhill, and the real question is when we will hit bottom (trough): Q1 2019, Q2 2019 or further out? Looking at previous down cycles and current demand, it feels a lot like a 3 or 4 quarter downturn.

We would not be surprised to see further negative news either in pre-announcements or quarterly reports as the weakness will show up in revenues and EPS.

We may have another leg down in the stocks. We are also seeing analysts downgrade the stocks, as we did earlier this week, as we get closer to a bottom. This obviously fits the “locking the barn door after the horse has bolted” pattern, as the stocks are way down from their peaks, but many bought into the one-quarter air-pocket fantasy.

Buckle your seat belts for a bumpy earnings season…..


AVANTI: The Acquisition Game
by Daniel Nenni on 10-05-2018 at 7:00 am

This is the eighteenth in the series of “20 Questions with Wally Rhines”

Gerry Hsu’s departure from Cadence to form Avanti (originally named ArcSys) is chronicled in legal testimony, as accusations of software theft were followed by legal battles, financial awards and even prison terms. Mentor and Synopsys were simply onlookers as the drama unfolded, but both had an interest in the outcome. The outcome of the trial pointed to substantial civil damages that Avanti would have to pay Cadence. Mentor went to work with some of the top legal advisors at O’Melveny & Myers to estimate just how much those damages would be. Synopsys was reluctant to engage but was worried that, if Mentor acquired Avanti, the EDA balance of power could shift.

Gerry Hsu had taken up residence in Taipei, having avoided criminal charges for which some of his employees were not so lucky. Chi-Foon Chan, then EVP of Synopsys, suspected that Mentor was negotiating with Gerry Hsu to buy Avanti and Chi-Foon has since told me that he called every major hotel in Taipei to see if I was registered as a guest. In reality, we were much more serious about buying Avanti than Chi-Foon imagined. I rented an apartment in Taipei and spent more than a month living there and regularly meeting with Gerry. Meanwhile, Greg Hinckley, who was then Mentor CFO but effectively becoming COO, conducted meetings with the investment bankers to determine how we could put together a successful proposal to buy Avanti.

The bankers paid a lot of attention to two issues: 1) negotiating how much they would be paid for the transaction, and 2) removing the absurd benefit in Gerry Hsu’s contract as Avanti CEO that would pay him $10 million if he left Avanti for any reason. Why would a Board of Directors approve such a condition? The Board of Avanti at that time consisted of five people, four of whom were employees who reported to Gerry; the fifth was a forestry major whose knowledge of semiconductors and EDA was very limited. Securing approval for this condition couldn’t have been very difficult for Gerry, even though it seemed to fly in the face of most responsible corporate governance. Greg, who is one of the best “out-of-the-box” thinkers I’ve ever known, addressed the bankers with a different question. “Why don’t we triple the amount,” suggested Greg, “and offer to pay Gerry $30 million instead of $10 million?” The bankers were aghast. Why would we do that? Greg’s response: “There’s obviously only one decision maker for the sale of the company, so why don’t we appeal to his self-interest?” The bankers were skeptical, but we put together a proposal that incorporated this feature. As justification, we asked that Gerry extend his non-compete agreement from one year to three years in trade for tripling the severance payment.

I arranged to have dinner with Gerry in Taipei. He brought his son along and I presented the proposal. When I highlighted the change in severance arrangement for Gerry, he quickly became suspicious and began arguing with me that he was entitled to the $10 million severance payment. I had to repeat twice that I didn’t dispute his right to the payment; I just wanted to extend his non-compete agreement to three years and triple the severance payment. Once Gerry understood, he became enthusiastic about the proposal and asked how quickly we could close an agreement. I cautioned Gerry that the terms of the agreement must be confidential and I had Gerry approve the letter of intent and confidentiality agreement. We shook hands on the deal and I called the Mentor team to join us in Taipei to finalize the agreement.

I can’t be sure how Gerry communicated with Synopsys but, by the time the Mentor negotiating team arrived, Gerry was already expressing second thoughts about his agreement to be purchased by Mentor. It became apparent that he was talking to another potential buyer despite his commitment to Mentor. So we returned to the U.S. with no deal. Subsequently, Gerry’s team contacted our bankers to re-start negotiations but we held firm, responding that we didn’t feel we could trust him based upon our previous experience. We didn’t engage again. Negotiations between Synopsys and Avanti continued and a deal was announced on December 3, 2001. A long period of review by the International Trade Commission ensued. After more than six months, the transaction was approved. Details were then published in a joint S4A filing by Synopsys and Avanti – https://www.sec.gov/Archives/edgar/data/883241/000095012302004502/0000950123-02-004502-index.htm Among the most interesting details for me were:

  • Synopsys hired attorneys to estimate the cost of the civil damage award that would likely be incurred, just as Mentor had done, and the answer came out nearly the same as the estimate that Mentor had received. This was somewhat remarkable when you consider the uncertainty of outcomes in the U.S. legal system for disputes in high technology.
  • The agreement between Synopsys and Avanti included a $30.6 million cash payment to Gerry Hsu for his employment agreement. He didn’t ever thank me.

There was a benefit for Mentor, however. Cirrus Logic was one of the first to detect anomalies in the Avanti software that led them to believe that the Cadence accusation of theft was credible. Under certain conditions, wavy lines appeared on the screen with the Avanti place and route software in the same manner as Cirrus had experienced with Cadence place and route software. Mike Hackworth, CEO of Cirrus Logic, became concerned and talked with Joe Costello, CEO of Cadence, about switching back to Cadence for place and route. Mike’s condition was that Cadence would have to develop a tighter integration with Mentor’s Calibre design rule checking software, which Cirrus had adopted. We had a three-way conference call among Mike, Joe and me where I insisted that we needed to obtain detailed specifications for Cadence’s LEF and DEF standards. Joe readily committed and assigned Bob Wiederhold, previously CEO of HLD, a company that had been acquired by Cadence, to effect the transfer of information. That’s when we found out that DEF was not one standard, even within Cadence. There were many versions and interpretations. Despite all this, we were able to work together and Calibre became tightly integrated with Cadence, and also Synopsys, making it successful in most of the design flows in the industry.

    The 20 Questions with Wally Rhines Series


Hiring has been strong this year so why is hiring so difficult?
by Mark Gilbert on 10-04-2018 at 12:00 pm

Let’s start with hiring through Q3 and what to expect for Q4…

Hiring for Q3 (and the year as a whole) has been strong and robust. The EDA/Semi hiring needs are indeed stronger this year than they have been in a long time, yet more exacting than ever. We have had an exceptionally strong year even though it has been so much more difficult to find the right candidates. Even with so much demand for good people, companies continue to be extremely picky about their preferred candidate requirements.

I have said it before and it bears repeating: if a company finds a smart, talented, capable engineer who is eager to learn whatever is needed and has several of the primary prerequisites, it is smarter to hire the (proverbial) bird in the hand and help get them up to speed now than to hope a better option comes along. While a better candidate might follow, they also MIGHT NOT! When passing on a candidate who could fit, in pursuit of the perfect fit, companies must think about the valuable time lost searching and interviewing for that more perfect candidate (with no guarantee of how long that might take), when they could have had someone already up and running, learning what is necessary. The time it takes to search, find, interview, and (hopefully) hire can be considerable, especially in today’s environment. I have seen companies pass on a relatively decent candidate in hopes of finding someone who is a closer fit, only to still be looking months, or even a year, later…it is not uncommon.

As we move into Q4, it is looking to be yet another robust quarter and hiring will remain strong. Here is what I know: all the economic numbers for EDA/Semi are strong, targets are being met and exceeded, and growth is occurring across a wide array of sectors. As I said earlier and it bears repeating, it continues to get harder and harder to find the right candidates for the overly exact specifications that exist today. People are not leaving quite as fast as in years past, and that makes it harder to “recruit” them out; harder, but not impossible. Comps seem to be going up, which is good news and a direct result of a strong market, and should hopefully entice more people to consider alternative opportunities.

Because hiring is so difficult, companies need to ask if they can afford to wait for exactly the right candidate. I realize that, for the most part, hiring managers want someone who can come in and make an immediate contribution with the least amount of training and resources. Certainly that makes sense on paper. The reality is, that is rarely the case. Even with the best of hires, ramp-up time can be considerable and more of a drain on internal resources than contemplated. All tech companies work in general domains of one category or another, but each domain has a specialty, shall we say a new frontier, that they are tackling. That newness inherently requires a learning curve. Strong internal training and support for the new hire is mandatory for them to learn the specialized domain. Sometimes the new hire requires more bring-up-to-speed time than anticipated; in fact, that is the norm more often than not. That fact brings to light this question: is it worth the risk to pass on the decent, fits-most-of-the-specs candidate, or wait and hope that a better one comes along? This is a big question, and one every company should consider when they have pressing, critical hiring needs. Certainly I am not saying hire someone who MIGHT be able to do the job, but I am saying that if they have most of what you need, you should be weighing your options carefully.

Candidates, too, need to learn how to impress hiring managers during the interview process with their willingness to learn. You need to be compelling and convincing enough that hiring managers are confident about your commitment to excel. It is essential to convince the team that you have what it takes and will do what is necessary to get up to speed quickly and succeed. Even on your own time, after hours!

ARM TechCon is right around the corner and has a good mix of technology and a decent attendance. I will be there in my famous white jacket, walking the aisles, seeing clients all day. I already have several off-site interviews scheduled with both new and existing clients. It seems like both ARM TechCon and my quick visit in and out will be quite busy…I hope to see you there and you should always feel free to call me with any questions you may have. Perhaps we can meet during the conference.

http://eda-careers.com/


Accellera Tackles IP Security
by Bernard Murphy on 10-04-2018 at 7:00 am

I recently learned that Accellera has formed an IP security working group. My first reaction was “Great, we really need that!”. My second reaction was “But I have so many questions.” Security in the systems world is still very much a topic in its infancy. I don’t mean to imply that there isn’t good work being done in both software and hardware domains. But it still mostly feels reactive and ad hoc. Where’s the ISO 26262 for security? How do we quantify strong security versus weak security? And so on. Here, in no particular order, are some questions that I hope the working group will eventually answer.


How does IP security tie to SoC security and then to system security? In part this feels like the safety element out of context (SEooC) topic in ISO 26262. How can you demonstrate security in a sub-component when you don’t know how it will be used in the larger system? Moreover, we still don’t have a good handle on defining security for the whole stack. Even if we have a well-defined measure for the IP, how do we compose such measures into a system-level measure?

Which raises a scope question. I see the chair is from Intel, which is a great start; they probably know more about security than most, despite their recent stumbles. And Synopsys is involved, which is also good, not just for their IP expertise but also for their software security expertise. I hope Rambus will join, and maybe someone from Google Project Zero (you see where I’m going with this). I hope Accellera will become a regular presenter at Black Hat. Meantime, it would be good to know how the WG plans to connect with existing compliance requirements from PCI, NSA and others.

But even given a WG loaded with experts from the industry, how much will they share, and will that be enough to build an effective standard? Security through obscurity is still important and likely always will be. What you don’t share is harder to attack, because it is harder to guess at vulnerabilities. So how much can be shared in a standard? Mechanisms almost certainly not, because that would limit innovation and differentiation, which hackers would love and the industry would hate. Measures of security seem more likely, as long as they’re fairly general. Targeted metrics might be clues to likely weak areas. Or maybe these could be a good way to demonstrate strengths against a spectrum of possible attacks? (I said I had questions, not answers.)

Back to the element-out-of-context point: how effective can security measures at this level be? Consider timing-channel attacks. I can run these from inside a VM nowhere near the IP, as long as I have access to an accurate timer; I just have to launch an operation that will use the IP. You could argue that attention to such attacks is out of scope for this work and should be the responsibility of a different standard. But that raises the question: how useful will this standard be if it does not consider such attacks? Answering that requires a way to compare, at least approximately, the class of attacks that will be covered versus the class of all likely attacks (as anticipated within the lifetime of a device using the IP).
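To make the timing-channel concern concrete, here is a toy Python sketch (all names and values are hypothetical; it illustrates the general principle, not an attack on any particular IP or standard). A naive early-exit comparison runs slightly longer the more of the secret matches, and an attacker who can only time the operation from a distance can recover that difference statistically:

```python
import time

def naive_compare(secret: str, guess: str) -> bool:
    # Early-exit comparison: runtime grows with the length of the
    # matching prefix, which is exactly the data-dependent timing a
    # remote attacker can observe.
    if len(secret) != len(guess):
        return False
    for a, b in zip(secret, guess):
        if a != b:
            return False
    return True

def median_time(fn, *args, trials=5000):
    # Median over many trials averages out scheduler noise, the same
    # statistical trick an attacker uses from inside a VM with an
    # accurate timer.
    samples = []
    for _ in range(trials):
        t0 = time.perf_counter()
        fn(*args)
        samples.append(time.perf_counter() - t0)
    samples.sort()
    return samples[len(samples) // 2]

secret = "hunter2!"  # hypothetical secret
t_early = median_time(naive_compare, secret, "Xunter2!")  # mismatch at position 0
t_late = median_time(naive_compare, secret, "hunter2X")   # mismatch at position 7
print(f"early-mismatch median: {t_early:.2e}s, late-mismatch median: {t_late:.2e}s")
```

Note that nothing in the comparison's functional interface reveals the secret; the leak lives entirely in timing, which is why a standard scoped only to an IP's functional boundary can miss this class of attack.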

I could go on, but I do want to stress that, despite all my questions, I am very much a fan of this effort. Certainly the people contributing on the WG will know far more about security than I do and must see further and more clearly than I can. And frankly security is a huge problem, so every possible angle is worth exploring. I look forward to learning more as this develops. You can learn more about the Accellera WG HERE.


How technology is hacking love
by Vivek Wadhwa on 10-03-2018 at 12:00 pm

Go into any bar in New York City or San Francisco (or increasingly Mumbai and Sydney) popular with the younger crowd and you will find a curious transformation. The majority of the patrons spend at least as much time checking their phones as they do checking out potential mates or talking to people they are with. Why? They are on Tinder. The wildly popular dating app has changed the mating game, in ways that we believe are toxic. A growing body of research associates Tinder use with less romantic satisfaction, less happiness, and even diminished sense of self-worth – particularly among men.

Let’s be clear: online dating isn’t itself bad. This new way of finding mates has broken down plenty of barriers. We can now meet people from different parts of the country, from diverse social groups. Websites such as Match.com and eHarmony are good at bringing together people who want to have relationships.

But Tinder brings a fundamental change to online dating. In the past, online dating was an intentional act. People logged on to a dating website to look for partners. The website was separate from other online activity and wasn’t just focused on inducing addictive behavior.

Tinder used swiping and other clever user-interface tricks that foster the actions of rating, comparing, and selecting potential mates. This made dating an omnipresent activity —swipe left, swipe right—that Tinder users could play in bars, in elevators, on the subway. Tinder’s innovation made online dating more addictive and comparative in an unhealthy way – a form of never-ending shopping that focuses on the shallowest of qualities.

The effects of dating apps on happiness are complex. On the one hand, online dating exposes people to a far wider set of options and allows filtering by criteria of the user’s choosing. On the other hand, the paradox of choice makes a decision difficult for many, and when they do make a decision, they tend to be less happy with it, possibly because that style of online dating promotes a mentality that views people and relationships as commodities to shop for.

Tinder promotes a winner-take-all effect, wherein everyone seeks the most attractive people. This eliminates selection of mates by other variables that may be more predictive of compatibility, leading to frustration all around. Evaluating choices side by side tends to encourage daters to emphasize factors and characteristics that are unlikely to determine compatibility. Whether someone is fairer or taller is highly unlikely to reflect compatibility over time, far less so than more innate traits such as empathy, intelligence, or humor.

Particularly useless in this regard are superficial physical traits that tend to be overemphasized due to reliance on photos as the primary basis upon which to choose a date. Psychologists have long known that humans are bad at predicting compatibility. Tinder makes that bad prediction far more common and replaces other modes of interaction that might lead us to better matches.

This rating culture and mindset may also lead to diminished appreciation of people before we even meet them. Scientists are coming to believe that physical attraction is not fixed. We change what we think about people’s attractiveness based on our interaction with them. Funny people or clever people or extremely empathetic people may become more attractive to us after we talk with them or spend time with them.

Kansas University researchers documented this effect, calling it “the Tinder trap.” In a lab setting, they showed subjects pictures of potential mates and asked them to rate their attractiveness. The researchers then introduced some of the subjects to the people they had rated face to face. The scientists found, curiously, that potential partners they had rated as less attractive or moderately attractive were far more likely to get increased ratings after a face-to-face meeting than were potential partners they had rated as attractive. So evaluating a potential partner solely on visual attractiveness is a poor predictor of what you will think of that person once you meet in real life.

Perhaps most importantly, rating people’s attractiveness prior to meeting them tends to diminish the rater’s evaluation of that person afterward, probably because the rater is comparing their conversation partner to all the other potential partners they saw online. In other words, the apparently endless choice that online dating offers may cheapen and undermine our perceptions of people in real life.

More concerning is that some online-dating applications have been linked with low self-esteem. In a survey of Tinder users and nonusers, those who used the swiping app recorded lower levels of self-worth and, along with other negative impressions, said that they were less satisfied with their own face’s appearance. Curiously, this effect was stronger in male users.

In our new book, Your Happiness Was Hacked: Why Tech Is Winning the Battle to Control Your Brain—and How to Fight Back, Alex Salkever and I look at how technologies are actually diminishing our well-being. Tinder is one of the most troubling developments we have seen, but it is in a long line of efforts by tech companies to addict users using techniques perfected in Las Vegas casinos and fine-tuned by armies of scientists and user experience experts in Silicon Valley.

The fact is that the tech industry is working overtime to steal our happiness and we must wrestle it back.

For more, please visit my website, www.wadhwa.com and follow me on Twitter: @wadhwa


AI and the Domain Specific Architecture
by Daniel Nenni on 10-03-2018 at 7:00 am

Last month I attended the 2018 U.S. Executive Forum, where Wally Rhines was one of the keynote speakers. I was also lucky enough to have lunch with Wally afterwards and talk about his presentation in more detail, and he sent me his slides, which are attached at the end of this blog.

The nice thing about Wally’s presentations is that they are not company specific, while a lot of keynotes are company pitches in disguise. The other thing is that his slides are very detailed and tell a story, so reading them really is the next best thing to being there.

When I first started in Silicon Valley in the 1980s, we all designed and manufactured our own CPUs, which I consider domain specific architectures. Intel then came along with a more general architecture and the personal computing revolution began. Fabless semiconductor companies then restarted domain specific computing with GPUs and SoCs that are now replacing Intel chips at an alarming pace. System companies (Apple) then took the lead with custom SoCs, and now even software companies (Google) are making their own domain specific chips (TPU). There are also IoT and automotive domain specific chips flooding the markets.

We have a front row seat to this transformation on SemiWiki because we see the domains that read our site. The first IoT blogs started in 2014 and we now have over 400 that have been read close to 2 million times. Automotive also started for us in 2014 and we now have more than 300 blogs that have been read more than 1 million times. AI started for us in 2016 and now we have close to 100 blogs that have been read more than 250 thousand times. IoT wins but AI has just begun.

Wally has some interesting slides on AI, Automotive, VC Funds, and the China semiconductor initiative. Definitely worth a look. Here is his summary slide for those who are short of time:


Here are the other keynotes. I have access to the slides and will blog about them when I have time but since Wally sent me his slides he goes first. I will end this blog with the perilous thoughts I had on this subject during my long and dark drive home.

Opening Keynote: Looking To The Future While Learning From The Past
Presentation by Daniel Niles / Founding Partner / AlphaOne Capital Partners

Keynote: Convergence of AI Driven Disruption: How multiple digital disruptions are changing the face of business decisions
Presentation by Anthony Scriffignano / Senior Vice President & Chief Data Scientist / Dun & Bradstreet

AI and the Domain Specific Architecture Revolution

Presentation by Wally Rhines / President and CEO / Mentor, a Siemens Business

AI Led Security

Presentation by Steven L. Grobman / Senior Vice President and CTO / McAfee

AI is the New Normal – 3 key trends for the path forward
Presentation by Kushagra Vaid / General Manager & Distinguished Engineer – Azure Infrastructure / Microsoft

Innovating for Artificial Intelligence in Semiconductors and Systems
Presentation by Derek Meyer / CEO / Wave Computing

The Evolution of AI in the Network Edge
Presentation by Remi El-Ouazzane/Vice President and COO, Artificial Intelligence Products Group / Intel

GSA Expert Panel Discussion
Moderated by Aart de Geus / Chairman and Co-CEO / Synopsys

Keynote: Long Term Implications of AI & ML
Presentation by Byron Reese / CEO, Gigaom / Technology Futurist / Author

The semiconductor industry (EDA included) has posted some very nice gains in the past two years but how long can that continue? Take a look at this graph and you will see a pattern that will no doubt repeat itself but the question is how low will we go?

One thing I can tell you is that EDA is definitely in a bubble. Look at the VC money and all of the fabless startups that are buying tools, especially the ones in China. At some point in time money will run out and only a fraction of these companies will continue to expand and buy more tools. Someone else can run the numbers but my bet is that the EDA bubble will pop in 2019, absolutely.


Synopsys Seeds Significant SIM Segue

by Tom Simon on 10-02-2018 at 12:00 pm

It turns out that consumers are not alone in their love-hate relationship with SIM cards. SIM cards save us from increasingly widespread cellphone cloning. However, if your experience is anything like mine, it seemed that with every new phone, a new SIM card format was needed. Furthermore, people travelling overseas who want to avoid roaming charges often find themselves trying to buy SIM cards in foreign countries (and languages) and then juggling these new cards with their original card to make calls, read texts and, more recently, pay for purchases.

Consumer gripes aside, there are bigger drawbacks with SIM cards as we know them. Manufacturers encounter higher costs due to the additional parts and the reliability issues associated with the slot and removable card. Also, the IoT has changed the needs for subscriber verification. It’s not just for phones anymore. Expect to see cars, watches, appliances, sensor hubs and almost all manner of connected devices needing unique and secure subscriber identification. Already, it should be clear that these things cannot have SIM card slots that require manually adding or changing cards every time they are put into service or undergo a service provider change.

So what is a SIM card anyway? It is really a software application running on a secure processor and a set of security identifiers in the form of a subscriber ID number and an associated encryption key. There is more to it, but for the purposes of this discussion this will suffice. The physical hardware in the card is known as a Universal Integrated Circuit Card (UICC). It contains a CPU, memory and interface hardware. The software that runs on the UICC is called Universal Subscriber Identification Module (USIM) application.
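The subscriber ID plus secret key described above is the heart of what a SIM actually does: the network issues a random challenge and the SIM proves possession of the key without ever revealing it. Here is a conceptual Python sketch of that flow; real USIMs run MILENAGE/AES inside tamper-resistant hardware, so the HMAC-SHA256 below is only an illustrative stand-in, and the class and field names are invented for this example:

```python
# Conceptual sketch of SIM challenge-response authentication.
# HMAC-SHA256 is a stand-in for the real MILENAGE/AES algorithms.
import hmac
import hashlib
import os

class SimProfile:
    def __init__(self, imsi: str, ki: bytes):
        self.imsi = imsi   # subscriber identity, sent to the network
        self._ki = ki      # secret key; in a real UICC this never leaves the chip

    def respond(self, rand: bytes) -> bytes:
        # The network sends a random challenge (RAND); the SIM returns a
        # short response proving it holds Ki, without disclosing Ki itself.
        return hmac.new(self._ki, rand, hashlib.sha256).digest()[:8]

# The network keeps its own copy of Ki and checks the response.
ki = os.urandom(16)
sim = SimProfile("001010123456789", ki)
rand = os.urandom(16)
expected = hmac.new(ki, rand, hashlib.sha256).digest()[:8]
assert sim.respond(rand) == expected
```

Everything else a SIM does (OTA updates, applets, payment credentials) layers on top of this basic proof-of-key-possession.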

In order to eliminate the need to insert, and subsequently replace, SIM cards, a method is needed to securely bootstrap a permanently built-in UICC and perform a secure Over the Air (OTA) transfer of the subscriber and security information. More than just handset side software and hardware is required to accomplish this. The GSMA has developed a standard for remote SIM provisioning. This has enabled the birth of the eSIM, which uses a UICC that is soldered to the mobile/remote device circuit board. However, a separate component may still be an issue for some applications, because it takes up valuable board real estate and adds to the BOM.

The next logical step in this evolution is the incorporation of the UICC hardware and software into the system SOC. Synopsys has announced their iSIM, which is a UICC and USIM that can be integrated into any SOC design. Synopsys already has a large portfolio of secure and security related IP, so the addition of this innovative offering will dovetail nicely with their related IP. I recently spoke to Rich Collins, Senior Marketing Manager at Synopsys, about this.

He emphasized that the demands of IoT require moving from a standalone SIM card. IoT designs are already combining the application processor and modem, so including SIM functionality fits the requirements of these IoT systems. To ensure they are offering a complete solution he outlined for me their partnership with Truphone. Rather than leave it to their customers to find and integrate carrier side software to handle the network and OTA provisioning, Synopsys is offering a complete solution that includes the integration with Truphone’s worldwide network. Importantly, end users will have the freedom to choose any local carrier they wish, and Truphone will enable the handoff.

Synopsys is combining many of its mission critical IP components to facilitate iSIM functionality. Among these is their True Random Number Generator (TRNG) which is a vital element to avoid compromised security. Synopsys is integrating Truphone’s eSIM software stack (which includes Javacard OS) in a new tRoot HSM implementation for iSIM.
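A TRNG's raw output is typically gated by continuous health tests before it is trusted for key generation. As a purely illustrative example (not Synopsys's implementation), here is a sketch of the repetition-count test from NIST SP 800-90B, which flags a stuck or failing entropy source when any raw sample repeats too many times in a row:

```python
# Sketch of the NIST SP 800-90B repetition-count health test: reject the
# raw entropy stream if any value repeats `cutoff` or more times in a row,
# which would indicate a stuck or degraded noise source.
def repetition_count_ok(samples, cutoff=5):
    run = 1
    for prev, cur in zip(samples, samples[1:]):
        run = run + 1 if cur == prev else 1
        if run >= cutoff:
            return False  # health-test failure: stop using this output
    return True
```

The cutoff in a real design is derived from the source's assessed entropy and an acceptable false-alarm rate; the value 5 here is arbitrary.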

With the proliferation of IoT devices, the advent of 5G combined with the many benefits of completely integrated SIM hardware, it is clear that the next year or two will see big changes in SIM implementations. Synopsys is well positioned to help SOC and device designers navigate the process. End users and IoT deployment operators should be pleased with the end results. For more information on the complete Synopsys iSIM offering, visit the product page on their website.


GLOBALFOUNDRIES Pivot Explained

by Scotten Jones on 10-02-2018 at 7:00 am

GLOBALFOUNDRIES (GF) recently announced they were abandoning 7nm and focusing on “differentiated” foundry offerings in a move our own Dan Nenni described as a “pivot”, a description GF appears to have embraced. Last week GF held their annual Technology Conference and we got to hear more about the pivot from new CEO Tom Caulfield including why GF abandoned 7nm and what their new focus is.

GLOBALFOUNDRIES Pivoting away from Bleeding Edge Technologies

Background
GF was created in 2008 in a spin-out of the fabs formerly owned by AMD. In 2010 GF acquired Chartered Semiconductor, the number three foundry in the world at that time and in 2015 GF acquired IBM’s microelectronics business. Figure 1 illustrates the key milestones in GF’s history.


Figure 1. GLOBALFOUNDRIES Milestones

GF is owned by Mubadala Development Company (MDC). MDC financials include the technology segment made up of GF. Based on Mubadala financial disclosures, from 2016 to 2017 GF grew revenues by 12.4% and saw their operating loss widen from 8.0% of revenue in 2016 to 27.2% of revenue in 2017 calling into question the sustainability of GF’s business model.

On March 9, 2018 Tom Caulfield became the new CEO of GF with a mandate to build a sustainable business model.

7nm History
In the early 2010s GF was working on development of their own 14nm process technology but realizing they were falling behind their competitors ultimately abandoned their in-house development and licensed 14nm from Samsung. The licensed 14nm process was launched in 2014 in Fab 8 (see figure 1). GF has continued to improve on that process adding process options and more recently launching a shrunk 12nm version. The 14nm and newer 12nm version have been utilized by AMD for microprocessors and graphics processors, by GF for their FX-14 ASIC platform and by other customers.

With the IBM Microelectronics acquisition in 2015, GF received a significant infusion of researchers including Gary Patton who became the CTO of GF. Beginning around 2016, the combined GF and IBM research teams started to develop their own in-house 7nm technology. The initial version was planned to be based on optical exposures with GF also planning an EUV based follow-on version.

By all account’s development was proceeding well. In a July 2017 SemiWiki exclusive, GF disclosed their key 7nm process density metrics and at IEDM in December 2017 GF disclosed additional process details. My write up of GF’s process density metrics is available here and a comparison of GF’s 7nm process to Intel’s 10nm from IEDM is available here. GF’s 7nm process appeared to be a competitive process. I have also written about the leading-edge 7nm and beyond processes here.

One concern I have had about GF 7nm for a long time is scale. GF was reportedly installing only 15,000 wafers per month (wpm) of 7nm capacity. The average 300mm foundry fab had 34,213 wpm capacity at the end of 2017 and is projected to reach over 40,343 wpm by the end of 2020, and 43,584 wpm by the end of 2025 [1]. Newer leading-edge fabs are even larger and are what is driving up the average. At the leading-edge, wafer cost is roughly 60% depreciation, and larger fabs have better equipment capacity matching and therefore higher capital efficiency and lower costs. Figure 2 illustrates the wafer cost versus fab capacity for a wafer fab in the United States running a 7nm process calculated using the IC Knowledge – Strategic Cost and Price Model – 2018 – revision 03 for a greenfield fab.


Figure 2. Wafer Cost Versus Fab Capacity for 7nm Fab in the United States

Even though 15,000 wpm is past the steepest part of the curve, there is still a cost advantage of several hundred dollars per wafer for larger capacity wafer fabs.
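The scale effect can be illustrated with a toy cost model. Every number below is an assumption chosen for illustration, not a value from the IC Knowledge model; the key idea is that depreciation dominates leading-edge wafer cost and capex per wafer falls as a fab grows, because discrete tool counts match demand more efficiently at scale:

```python
# Toy model of per-wafer cost versus fab capacity (illustrative assumptions only).
def wafer_cost(capacity_wpm: float) -> float:
    variable = 1800.0  # per-wafer materials, labor, overhead (assumed)
    # Capex grows sub-linearly with capacity: a larger fab rounds up its
    # discrete tool counts less wastefully (better capacity matching).
    capex = 5.0e9 * (capacity_wpm / 35_000) ** 0.85  # assumed scaling law
    monthly_depreciation = capex / 60  # 5-year straight-line depreciation
    return variable + monthly_depreciation / capacity_wpm

# Under these assumptions a 15,000 wpm fab pays several hundred dollars
# more per wafer than a 35,000 wpm fab, consistent with the curve above.
```

The exact gap depends entirely on the assumed scaling exponent and capex, but the shape of the curve, steep at small capacities and flattening out, is robust.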

Tom Caulfield also mentioned GF needed $3 billion of additional capital to get to 12,000 wpm. They could fund only half of it through cash flow; they would have to borrow the other half, and the projected return wasn’t good.

Customer Inputs
When Tom took over as CEO he went out on the road and visited GF’s customers. What he found was a lack of commitment to GF’s 7nm process in the customer base. Many customers were never going to go to 7nm and of the customers who were, GF wouldn’t have enough capacity to meet their demands. There was also concern in the customer base that 7nm would take up all the R&D and capital budgets and starve the other processes they wanted to use of investment.

What Did GF Give Up?
By exiting the 7nm and smaller wafer market GF has given up some opportunity. Figure 3 illustrates the total available market (TAM) for foundry wafers in 2018 and 2022. Even in 2022 the forecast is for 7nm to be less than 25% of the market, and the TAM for >=12nm to increase from $56 billion in 2018 to $65 billion in 2022.


Figure 3. Foundry Market

In terms of specific markets, GF is conceding some of the computing, graphics processing and data center opportunity. Currently AMD is GF’s largest customer and long term that business will presumably shrink as AMD moves to smaller geometries.

What Now?
GF will be focused on four major “differentiated – feature rich” offerings going forward.


  • FinFET – GF will continue to offer 14nm and 12nm FinFET based processes and they are continuing to add to these offerings with RF and analog capabilities, improved performance (10-15%) and density (15%), embedded memory options, enhanced MIM capacitors and advanced packaging options.
  • RF – this is a segment where GF has a clear leadership position as I discussed in another article available here. With the pivot away from 7nm GF is increasing investment in this segment with more capacity. At the Technology Conference GF said “If you think RF, think GF” and I agree that is an apt slogan.
  • FDSOI – GF’s FDX processes, 22FDX today and 12FDX to follow, are the industry leaders in the emerging FDSOI space as I discussed in another recent article available here. FDSOI shows great potential in the IoT and automotive markets. If FDSOI really takes off this could be a huge win for GF, and they have already announced $2 billion of design wins for the 22FDX process.
  • Power/AMS (Power, Analog and Mixed Signal) – this segment combines Bipolar/CMOS/DMOS (BCD), RF, mmWave, embedded non-volatile memory and Micro-Electro-Mechanical-Systems (MEMS) for the consumer space such as high-speed touch interfaces.

    Conclusion
    GF’s pivot away from 7nm has aligned the company’s R&D and capital spending more closely with their customers’ needs. Whether GF can build a sustainable business model on the four business segments they are now focused on remains to be seen, but more closely aligning your company’s focus with your customers’ needs certainly appears to be a step in the right direction.

    [1] IC Knowledge – 300mm Watch Database – 2018 – revision 02



    Make Versus Buy for Semiconductor IP used in PVT Monitoring
    by Daniel Payne on 10-01-2018 at 12:00 pm

    As an IC designer I absolutely loved embarking on a new design project, starting with a fresh, blank slate, not having to use any legacy blocks. In the early 1980s we really hadn’t given much thought to re-using semiconductor IP because each new project typically came with a new process node, so there was no IP even ready for re-use, at least not at the IDM that I worked at. In 2018, by stark contrast, we now have a thriving IP economy providing IC designers with everything from simple logic functions at the low-end all of the way up to processors at the high-end, plus every kind of AMS function that you can imagine. My former Intel co-worker Chris Rowen once famously stated, “The processor is the new NAND gate.”

    So let’s say that your next SoC has a power budget, timing specifications and thermal reliability metrics, so you naturally want to have PVT (Process, Voltage, Temperature) monitors placed around your chip in strategic locations so that you can measure and control everything, but should you create your own IP from scratch or just buy something off the shelf? Great question.

    Let’s make a quick list of what it might take to develop your own PVT blocks and start using them:

    • Analog IC design skills (do we need to hire someone?)
    • Expertise to achieve high accuracy from in-chip monitors, with smallest die size and robust operation
    • Awareness of legal positioning (patents, design rights, trademarks)
    • Budget for EDA tools for analog IC design
    • Design time and effort
    • Budget for test chip and mask costs
    • Understanding of fabrication and device packaging timescales
    • Awareness of silicon validation and debug time
    • Contingencies for test chip iteration before production use, allowance for refining the design to meet specifications
    • Understanding and expertise to know where to place each monitor and how many monitor instances to place on an SoC for a given application

    According to the website www.payscale.com the total pay for an analog IC design engineer ranges between $74,378 and $182,461 per year, with a median pay at $105,979.


    You also want that Analog IC design engineer to have experience making PVT monitors from 40nm down to FinFET process nodes, with working silicon as proof.

    Each PVT instance has to be accurate enough to feed back data to a digital controller that then makes decisions about DVFS (Dynamic Voltage Frequency Scaling), clock throttling, or changing VDD levels to meet specs and enhance reliability. If the accuracy of the sensor is off, then the control decisions will be inefficient or, worst case, will harm the chip operation or fail to meet the specs.
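To make that feedback loop concrete, here is a minimal sketch of a thermal DVFS policy. The operating-point table, thresholds, and function names are invented for illustration and are not any vendor's actual API; the point is only to show how a temperature reading drives a frequency/voltage decision, and how sensor error corrupts it:

```python
# Hypothetical thermal DVFS policy driven by a PVT temperature reading.
OPERATING_POINTS = [       # (thermal limit C, freq MHz, VDD V) -- assumed values
    (70.0, 2000, 0.90),    # fastest point, allowed only on a cool die
    (85.0, 1500, 0.80),
    (100.0, 1000, 0.70),
]

def select_operating_point(reported_temp_c: float):
    """Return the fastest (freq_mhz, vdd) whose thermal limit the reading allows."""
    for limit, freq, vdd in OPERATING_POINTS:
        if reported_temp_c < limit:
            return freq, vdd
    return 0, 0.60         # over-temperature: clock-gate at retention voltage

# A sensor that reads low picks the wrong point: a die actually at 80 C whose
# sensor reads 15 C low is granted the 2 GHz point meant for dies under 70 C.
assert select_operating_point(80.0) == (1500, 0.80)
assert select_operating_point(80.0 - 15.0) == (2000, 0.90)
```

A real controller would add hysteresis and voltage/frequency sequencing, but even this sketch shows why sensor accuracy directly bounds how aggressively the chip can be run.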

    Other IP vendors have created their own PVT monitors and may have patented them, so you need to ensure that your novel IP designs and techniques aren’t infringing an existing patent. There were 141 semiconductor patent suits filed in US District Courts in 2013.


    Source: Jones Day

    The lawyers get rich in patent disputes while both parties drain their financial reserves until a victor emerges. In many cases the patent victor is able to batter the losing company down enough to either bankrupt it or force an acquisition, not a pretty sight.

    EDA tools for an analog IC designer include:

    • Schematic Capture
    • Circuit Simulation (SPICE)
    • Layout Tool
    • Schematic-driven Layout
    • DRC/LVS
    • Parasitic Extraction
    • Reliability analysis
    • Transistor Sizing
    • Design centering with Monte Carlo analysis
    • IR Drop analysis
    • Electromigration analysis

    For the digital controller portions you’ll likely need:

    • HDL entry
    • Logic synthesis
    • Static Timing Analysis
    • Place & Route
    • DFT tools

    The PVT sensors themselves are largely analog while the controller is digital, so more tools for co-simulation will be required:

    • AMS simulation and verification

    Getting all of the PVT monitor blocks designed and implemented is going to take time, probably on the order of many man-years of effort, so add that figure up in your total calculations for making the IP.

    Mask costs are highly dependent on the process node that you’re at: 40nm masks are about $900K while 28nm masks are about $1.5M, and the costs get steeper from there. You’re going to need a test chip with the new PVT monitors on it to really be certain.


    Source: AnySilicon

    Test chip costs depend on the process node, die size and foundry partner that you choose. Contact your local account manager and get a figure to work with.

    Fabrication time depends on the foundry, their capacity at the moment, and whether you are using a multi-project wafer or not. Think weeks to months of time just waiting, when you could certainly be doing something more productive with your engineering staff.

    The magic moment comes when packaged parts or a raw wafer are delivered and you get to debug your first silicon. Engineers and test engineers huddle around, make frantic measurements and debug the test program, and it may take several days to determine whether your new IP works properly across the voltage and temperature range. Worst case, you’ll find functional bugs or see that the design doesn’t meet your on-chip monitor accuracy requirements, so a re-spin is required, sending you back through more weeks of design, verification and fabrication.

    Even when your PVT IP is working, you now have to be judicious in where to place each sensor and the controller portion in order to optimize SoC-level performance or to fully enhance device lifetime.
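The pieces above can be rolled into a rough "make" estimate. Every figure below is an assumption for illustration; only the median salary and 40nm mask figures echo the numbers quoted earlier in this article, and real projects will differ widely:

```python
# Back-of-the-envelope cost of the "make" option (illustrative assumptions only).
def make_cost_estimate(engineer_years=4.0,          # total design effort, assumed
                       loaded_salary=106_000 * 1.5, # median pay plus overhead burden
                       eda_cost_per_year=250_000,   # analog + digital seats, assumed
                       mask_set=900_000,            # ~40nm mask set, as quoted above
                       test_chip_runs=2):           # budget for one re-spin
    labor = engineer_years * loaded_salary
    tools = engineer_years * eda_cost_per_year
    silicon = test_chip_runs * mask_set
    return labor + tools + silicon

# Roughly $3.4M under these assumptions, before fab, packaging and test charges.
```

Whatever numbers you plug in, the exercise makes the trade-off explicit: the make option carries multi-million-dollar fixed costs plus schedule risk, which is what the buy option amortizes across many customers.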

    If the process outlined above sounds laborious, error-prone, engineering-heavy, expensive and at odds with your corporate time-to-market goals, then know that the alternative is to purchase your PVT in-chip monitoring IP from a trusted vendor like Moortec. I’ve been talking with these folks over the past year and am rather impressed because of these factors:

    • 8 years of commercial experience with PVT monitoring subsystems
    • 60+ customers to date using their IP

      • Consumer Electronics (Digital TV, Mobile, Notebooks, SSD)
      • Datacenter (AI, Networking, Enterprise, Cloud Computing, HPC)
      • IOT (Wearables, Smart Home, Smart City)
      • Automotive (Infotainment, Collision Avoidance, Autonomous Driving)
      • Crypto-currency Mining (Bitcoin, Litecoin, Ethereum)
    • IP working in 40nm down to 7nm

    Here’s a high-level view of their PVT subsystem:

    Source: Moortec

    Summary
    Every SoC design project has the same decision to make about using PVT in-chip monitoring, make or buy. Hopefully you do some back of the envelope calculations on the make side, then give the folks at Moortec a call to help complete the comparison. Most markets are moving so fast that efficient deployment of your internal design teams, combined with the ever-present time to market pressures, dominate business decisions. So, using trusted IP from a vendor like Moortec sounds like the lowest risk, fastest route to market.

    Related Blogs


    SURGE 2018 Silvaco Update!

    by Daniel Nenni on 10-01-2018 at 7:00 am

    The semiconductor industry has been very good to me over the past 35 years. I have had a front row seat to some of the most innovative and disruptive things like the fabless transformation and of course the Electronic Design Automation phenomenon, not to mention the end products that we as an industry have enabled. It is truly amazing that my iPhone X has more processing power than NASA and Neil Armstrong had when landing on the moon, absolutely.
    Continue reading “SURGE 2018 Silvaco Update!”