Chapter 5 – Consolidation of the Semiconductor Industry

by Wally Rhines on 08-09-2019 at 6:00 am

For the last decade, semiconductor industry analysts have been writing articles and giving presentations that predict the increasing consolidation of the industry to the point where a few large companies dominate worldwide sales of semiconductor components.  In recent years there has been some justification for this view as the combined market share of the top five companies in the industry has increased, as has the combined market share of the top ten.

The general thesis of these discussions of semiconductor industry consolidation is the widely accepted model of growth and maturation of an industry.  Industries like steel, automobiles and others that have propelled decades of economic expansion in the world should grow rapidly in their youth and then slow down as their markets saturate and stabilize.  During this period approaching maturation, revenue growth is not large enough to drive increased profit and enterprise value so the focus becomes cost reduction.  By becoming more efficient, these mature industries reduce their labor and material costs, acquire competitors to achieve better economies of scale and reduce their research and development expenses since their industry is no longer evolving rapidly and there are fewer opportunities for new product and technology innovations. The acquisition process eventually leads to an oligopoly of a few large surviving companies that can achieve the required economies of scale to prosper despite their slow or declining revenue.

There are at least two problems with this kind of analysis.  First, the assumption that industries mature and consolidate down to a few large enterprises may be the exception rather than the rule.  Second, the analysis of the semiconductor industry as a candidate for this model in 2016 is probably premature, since we’re seeing new growth in revenue, profits and innovation despite the sixty-year age of the semiconductor electronics industry.

Consider first the assumption that most industries eventually consolidate.

Figure 1. Steel industry consolidation in the U.S.1,2

While consolidation certainly occurred in the U.S. steel industry in the 1960s, and employment has since been reduced by nearly 85%, the number of steel companies was only reduced by 50%.  New technology in the form of mini-mills created a set of new competitors in the industry.  Worldwide, consolidation of the steel industry has left us with far more than the classical oligopoly of companies (Figure 2). The five largest steel companies in the world account for only 18% of the revenue of the industry, and it takes forty companies to account for half of worldwide steel production.

Figure 2. Competitive state of the worldwide steel industry

The case of the automobile industry, though different, also provides insight into the maturation process of industries. Figure 3 shows the growth of the automotive industry, reaching a peak of 272 companies in 1909 and consolidating down to GM, Ford and Chrysler with 91% U.S. market share in the 1960s.  This oligopoly was temporary, however, as foreign manufacturers from Europe, Japan and Korea gained market share in the U.S., their combined share passing that of GM, Ford and Chrysler in 2007. The emergence of electric cars and the evolution of technology for driverless cars have prompted over 400 new companies to announce plans to produce electric cars and light trucks in the near future, and nearly 200 to plan driverless cars.

Figure 3.  Growth of the automobile industry

Are there any industries that consolidate down to an oligopoly and remain that way?  The answer is, “yes, but…”.  The well-accepted model of consolidation seems to work in industries that operate in relatively free worldwide markets, largely free of regulatory and tariff barriers, with a low cost of transport so that products can flow easily from one region to another. Two examples are the hard disk drive and DRAM (dynamic random-access memory) businesses.

Figure 4. Market shares of the leading hard disk drive manufacturers in 2017

The number of competitor companies in the hard disk drive industry peaked at 85.  Figure 4 shows the current state of that industry, with three participants controlling almost 100% of its revenue.  But as in most industries, technical discontinuities change the game.  The emergence of solid-state storage to replace rotating-media hard disk drives is changing the market share outlook (Figure 5). Samsung is emerging as the new leader, partly because of its leading position in the NAND flash component business.

Figure 5. Solid state storage changes the competitive landscape

The other example of the consolidation of an industry is the DRAM business.

Figure 6. DRAM worldwide market share. Combined share of the three largest companies grew from about 35% in 1994 to 95+% in 2016.

In 1997, the top three producers of dynamic RAM had less than 40% of the market. By 2014, they had 95%.  Both DRAMs and hard disk drives satisfy the requirement of low cost of transport. They are also industries that have relatively free market design, production and distribution worldwide.
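The combined-share figures quoted throughout this chapter are concentration ratios (CR_n): the summed market share of the n largest firms. A minimal sketch of the calculation, using hypothetical shares rather than the actual data behind these figures:

```python
def concentration_ratio(shares, n):
    """CR_n: combined market share of the n largest firms."""
    return sum(sorted(shares, reverse=True)[:n])

# Hypothetical revenue shares (fractions of the total market) for illustration.
shares = [0.45, 0.30, 0.20, 0.03, 0.02]
print(round(concentration_ratio(shares, 3), 2))  # 0.95, i.e. a 95% top-three share
```

The same function with n=5 or n=10 reproduces the kind of top-five and top-ten trend lines discussed below.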

How does all this relate to the broader semiconductor industry? Will it consolidate down to a dominant few companies and remain there, as the analysts suggest?  It’s doubtful, at least for the near term.  Let’s look at the history of semiconductor industry consolidation, or more accurately, its “deconsolidation”.

Since 1965, the semiconductor industry has been “deconsolidating” (Figure 7). In 1966, three companies, TI, Fairchild and Motorola, shared about 70% of the total semiconductor market.

Figure 7. Semiconductor industry deconsolidation from 1965 to 1972

Over the next seven years, that share dropped to 53%, driven by new entrants like National Semiconductor, Intel, AMD, LSI Logic and about 25 more.  Over the following 40 years, the market share of the top semiconductor company remained near 15%, although the name changed from TI in 1972 to NEC and then to Intel.  Combined market shares of the top five and top ten semiconductor companies decreased or remained flat during this period (Figure 8).

Figure 8.  Combined market share of the five and ten largest semiconductor companies

During 2016 through 2018, the combined market share of the top ten semiconductor companies increased modestly, partly due to an unusual increase in DRAM unit prices as well as a very strong computer server market that favored Intel.  The most remarkable piece of data is shown in Figure 9: throughout the industry’s history, the combined market share of the fifty largest semiconductor companies has been decreasing.

Figure 9.  Combined market share of the fifty largest semiconductor companies from 2003 through 2014

This observation says a lot about the character of the semiconductor industry, both now and throughout its history.  Company leadership in the industry is continuously changing as new technologies emerge and new companies secure the leading market share in these new technologies.  Figure 10 shows the top ten ranking of semiconductor companies over a fifty-year period. The company names shown in green are ones that have dropped out of the top ten and, except for NXP, never reappeared.  The number of companies that have retired from the top ten is greater than half of all those that have ever been in it. Only Texas Instruments has remained in the top ten throughout the fifty-year period, and even it is probably destined to drop out as it focuses its business on analog and power and further disengages from the high volume “big digital” chips that, along with memory, constitute so much of semiconductor revenue today.

It’s difficult for semiconductor companies to reinvent themselves as new growth markets emerge.  The large semiconductor companies tend to grow at about the overall semiconductor market average growth rate while the new entrants grow much faster, albeit from a smaller revenue base. Gradually, these small companies climb the ranks on their way to the top ten.

Figure 10. Top ten semiconductor companies change with time. Companies shown in green fell out of the top ten

Will the wave of merger mania in 2016 and 2017 continue into the future as the semiconductor industry finally matures and consolidates? Surely the competitive advantage of scale will lead to more mergers and a more difficult environment for small companies competing without the scale of the big ones? The recent slowing of merger activity, although significantly affected by government regulatory disapprovals, suggests that we may not have reached that stage of consolidation (Figure 11). The actual numbers place 2017 and 2018 among the weakest years for major mergers in recent history, both in number of deals and in enterprise value. The recent increase in the semiconductor industry revenue growth rate, to 22% in 2017 after two years of no growth, also suggests that the announcement of industry maturity may have been premature.

Figure 11.  Value of semiconductor industry mergers by year

In the next chapter, we will examine the factors behind the consolidation that has been occurring. A reasonable conclusion would be that the limited amount of consolidation that is occurring in the semiconductor industry is not motivated by size or broad economies of scale but by specialization.  Profitability in the semiconductor industry is driven by market share in very specific specialties and the industry is in a transition to increased specialization which is also increasing overall profitability.

1https://www.nwitimes.com/business/local/steel-ceo-more-consolidation-inevitable/article_c407cc83-7d1b-59eb-a838-f1ea1723845c.html

2https://247wallst.com/investing/2010/09/21/americas-biggest-companies-then-and-now-1955-to-2010/

Read the completed series


WEBINAR: The Brave New World of Customized Memory

by Randy Smith on 08-08-2019 at 10:00 am

The need to design low power devices is not new, but lowering the power consumption of chip designs has never been more important than it is now. In 1989, I purchased one of the first consumer cell phones produced by Panasonic. The battery was the size of a brick, but only about a third of the thickness. If the battery were half that size, it would not have mattered much to me since it was still like carrying a purse. Or sometimes I clipped it into a docking station under the passenger seat of my car. Today, in cell phones and a myriad of IoT devices, battery size is critical, as is the total runtime available to the system on a single charge. While battery technology is important, it is even more important to reduce the power required to operate a device. Founded in 2011 with just this focus in mind, sureCore Limited will be presenting at a SemiWiki Webinar Series event to discuss the technologies they have available to help chip designers and chip architects substantially reduce chip power.

For at least the past 20 years, memory blocks have dominated on-chip real estate. That larger area has also consumed the largest portion of the chip power budget. sureCore has developed an arsenal of low power memory IP services and products that enable designers to build customized low-power SRAM memory blocks – SureFIT™, PowerMiser™, and EverOn™. These are not simple memory compiler solutions, as they integrate very advanced features supporting low power. For example, EverOn has been built to support DVFS (dynamic voltage and frequency scaling).  sureCore’s “SMART-Assist” technology allows robust operation down to the retention voltage, critical for ‘keep-alive’ specifications.

Memory compilers have been used for a couple of decades now, though not all have been successful when deployed in low power processes. These tools generate SRAM memory blocks over a huge number of memory configurations and memory specification options for a specific process. They need to be quite robust in the face of low voltage thresholds, process variation, and a large number of possible option choices. To do this, the memory architecture, as well as the generators, needs to embed special knowledge beyond simply repeating bit cell patterns. Self-timing chains or circuit tricks may be used to get the memories to work based on the options selected. These designs seem to be built using engineering, science, and art.
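As an illustration of the kind of configuration space a memory compiler covers, here is a minimal sketch. The option names and the row/column arithmetic are hypothetical and greatly simplified; this is not sureCore’s API, just the shape of the problem:

```python
from dataclasses import dataclass

@dataclass
class SramConfig:
    """Hypothetical user-facing options for one generated SRAM instance."""
    words: int          # number of addressable words
    bits: int           # word width in bits
    mux: int = 4        # column-mux factor (columns shared per sense amp)
    dvfs: bool = False  # whether the instance must tolerate voltage scaling

    def rows_cols(self):
        """Physical bit-cell array dimensions implied by the configuration."""
        rows = self.words // self.mux   # mux folds words into fewer rows
        cols = self.bits * self.mux     # ...at the cost of more columns
        return rows, cols

cfg = SramConfig(words=4096, bits=32, mux=8)
print(cfg.rows_cols())  # (512, 256)
```

Even this toy model shows why option count explodes: every (words, bits, mux, feature) combination yields a different physical array, and each must still work across voltage and process corners.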

The event will be moderated by SemiWiki founder Daniel Nenni. The presenter will be Paul Wells. Paul has worked in the semiconductor industry for over 30 years. He co-founded sureCore in 2011 and has kept it focused on the market for low power embedded SRAM. His previous experience in design and management at several respected companies, such as Pace Networks, Jennic Ltd., Plessey Semiconductors and Fujitsu Microelectronics, has enabled sureCore to build this broad yet focused portfolio of low power memory IP solutions while also offering a Low Power Mixed Signal Design Service.

This webinar, “The Brave New World of Customized Memory”, will be held on Wednesday, August 28, 2019, from 10:00 am to 10:45 am PDT. To sign up for the webinar, register using your work email address HERE. A replay URL will be sent to all registrants in case you miss the live version.

About sureCore
sureCore Limited is an SRAM IP company based in Sheffield, UK, developing low power memories for current and next-generation, silicon process technologies. Its award-winning, world-leading, low power SRAM design is process independent and variability tolerant, making it suitable for a wide range of technology nodes. This IP helps SoC developers meet challenging power budgets and manufacturability constraints posed by leading-edge process nodes.


Tortuga Webinar: Ensuring System Level Security Through HW/SW Verification

by Bernard Murphy on 08-08-2019 at 6:00 am

Jason Oberg

We all know (I hope) that security is important, so we’re willing to invest time and money in this area, but there are a couple of problems. First, there’s no point in making your design secure if it’s not competitive, and making it competitive is hard enough, so the great majority of resources and investment will go toward that objective. Security might get one person or a small team, working with the hardware root of trust and providing guidance and review for the rest of the design team, which will address and prioritize that guidance as best it can among the thousand other things it has to do.

Then there’s that nagging question – how much work do I have to put into security verification to know my design is truly secure? Does this mean lots of new simulation and formal testbenches have to be built, and what should they be stimulating and checking? Even more important, a lot of emerging hardware hacks leverage a combination of software and hardware (Spectre, for example). How are you going to check for these? Formal is out, and for simulation, how would you even stimulate, much less check for, these classes of problem? Even if you could answer these questions for specific threats (each probably a research project in its own right), is it feasible to cover a realistic set of potential threats?

That’s a lot of questions without easy answers in traditional design verification. You want to do a thorough job, but you don’t want to have to create masses of new and highly complex testbenches. Fortunately there’s a good answer. Tortuga has been developing technology in this area for several years and is now partnered with Synopsys and Cadence and licensed by Xilinx. They have a very interesting approach that both builds confidence that you are comprehensively covering a range of threats and reuses existing verification testbenches in the analysis. That’s exactly what you need, and I have to believe this stuff works, based on their industry references.

Check out their upcoming webinar on applying these techniques to your hardware root of trust.

WEBINAR REPLAY

Summary

In this webinar, we will discuss common hardware security concerns in many market verticals, including IoT, datacenter, and aerospace/defense, that are often centered around a hardware root of trust. We will then discuss common hardware security verification techniques, along with their benefits and drawbacks. Next, we will present best-in-class techniques and methodologies for understanding the system security ramifications of a mixed hardware/software system. Lastly, we will present an example security analysis of a real-world hardware/software system using the discussed techniques.

All attendees will receive a copy of the white paper, “Detect and Prevent Security Vulnerabilities in your Hardware Root of Trust.”

The presenter will be Jason Oberg, co-founder and Chief Executive Officer of Tortuga Logic. Jason oversees the technology and strategic positioning of the company. He is the founding technologist and has brought years of intellectual property into the company. His work has been cited over 700 times, and he holds 6 issued and pending patents. Dr. Oberg has a B.S. degree in Computer Engineering from the University of California, Santa Barbara and M.S. and Ph.D. degrees in Computer Science from the University of California, San Diego.

About Tortuga Logic
Founded in 2014, Tortuga Logic is a cybersecurity company that provides industry-leading solutions to address security vulnerabilities overlooked in today’s systems. Tortuga Logic’s innovative hardware security verification platform, Radix™, enables system-on-chip (SoC) design and security teams to detect and prevent system-wide exploits that are otherwise undetectable using current methods of security review. To learn more, visit www.tortugalogic.com.


5G Auction without Action​ = Fraud

by Roger C. Lanctot on 08-07-2019 at 10:00 am

“We don’t need 10 Mbit/s, but rather basic bandwidth and guaranteed latency. We need coverage!” Thus spoke BMW Senior Vice President of Electronics Christoph Grote at the recent Automobil-Elektronik Kongress in Ludwigsburg, Germany. Grote was making this plea in the context of the onset of 5G technology. For Grote, 5G isn’t about higher capacity or faster speeds – 5G is a new way of thinking about wireless.

Local wireless carrier and BMW partner, Deutsche Telekom, won an auction in June for 5G spectrum.  The company has been pumping out press releases describing the installation of new base stations throughout Germany.

Simultaneously, DT has chosen to highlight – with a unique Web page – approximately 1,000 locations where it has been working for months or years to receive approvals from local municipalities or land owners to build towers and base stations where gaps exist in its network. The months and years-long delays for approvals raise serious questions regarding the value of newly won spectrum.

Challenges to Network Expansion – https://tinyurl.com/y56l46ye – Deutsche Telekom

More importantly, the delays raise questions regarding the obligation of the Federal government to step in and arbitrate or accelerate these deliberations. What is the value of the spectrum if it can’t be tapped?

Wireless carriers like DT pay billions of dollars for spectrum and then must spend billions more to deliver on the promise of enhanced wireless technology. Normally, average citizens couldn’t care less. Billions of Euros spent on spectrum are far from the daily concerns of the man and woman on the street.

But things are different this time around. This is not “your father’s” wireless network. Wireless networks built around 5G are promising entirely new value propositions designed to transform factories and transportation networks in ways that are likely to save thousands of lives and mitigate ills such as congestion, emissions, and the overall inefficiency of the economy.

5G promises to enable self-driving cars, smarter cities and enhancements to efficiency from the factory floor to highways and city streets. But that won’t happen without a lot more base stations and micro-cells.

BMW’s Grote is not alone in seeking comprehensive wireless coverage – at least encompassing most major roadways. Today’s reality falls far short of this expectation or requirement, as evidenced by DT’s coverage gaps Website.

DT is not alone. Wireless carriers across the world have Swiss-cheese-like coverage maps riddled with gaps that indeed represent the gap between the promise and the reality of cellular technology in the 5G age.

With better coverage – requiring the deployment of hundreds of base stations and thousands of micro-cells – transportation authorities will be able to consider the elimination of proprietary and often incompatible roadside infrastructure. But to realize this promise carriers require the support of the very organizations that are auctioning off the spectrum in the first place.

If public authorities fail to step in to assist carriers in deploying network equipment capable of supporting commercial applications as well as safety and public service applications, then perhaps the spectrum should simply be free. How can spectrum be auctioned with no guarantee of access?

To be clear, the vast majority of new connections to existing wireless networks have been coming from connected cars for the past five years. Numerous 5G awards have already been allocated by multiple car makers. Crazily enough, many of these awards involve two or more connectivity devices – to enhance connectivity and target different vehicle-related applications.

The European Union saw fit several years ago to mandate embedded connections in cars. What are those connections worth if the network is not available at the time and location of a vehicle crash?

As the wireless industry and the transportation industries prepare to leverage new capabilities enabled by both LTE and 5G connectivity for saving lives and removing friction from people-moving systems, it is time for more assertive support from Federal authorities. It is irresponsible, fraudulent, and disingenuous to auction off spectrum and require embedded connections (in the EU) without any quality of service guarantee. If Federal authorities across the world won’t step in to assist carriers with 5G rollouts, then they should be required to refund their ill-gotten billions. Lives are at stake.


Insurers Not Ready to Discount Premiums on ADAS

by Bernard Murphy on 08-07-2019 at 6:00 am

Think because your new car is loaded with ADAS your insurance company should give you a break on premiums? Think again. The purpose of all those fancy features is to reduce the risk of an accident or damage to your car, either of which could be costly to your insurance company and quite possibly to you also. If you’re paying extra for that car to reduce the likelihood of such claims, why won’t insurance companies reflect that in your premiums?

For those of us who can afford it, there are other good reasons to pay extra for those car features. From a purely financial point of view, if you make a claim your premiums may well go up. If ADAS buys you a longer period without accidents, and therefore without premium increases, that may justify the cost. More importantly, ADAS gives you and your family a higher assurance of safety, which for most of us is worth a whole lot more. So for those of us who can afford it, the added cost is amply justified. I know that since I got the full ADAS package on my current car, I will never go back to a lower level of support.

Insurance companies already offer safe driver discounts, based on a device plugged into your car’s diagnostic port which tracks things like mileage, braking and acceleration. The discounts they offer drivers under these plans can be substantial – up to 40% of your premium. So why shouldn’t ADAS earn you an even better deal? A recent Reuters article provides some insight.

Bottom line, insurers don’t have enough data yet to accurately price the impact of these safety features on risk. According to the article, auto insurance is a low margin business (though it may not feel that way at times). If the insurer assessment of reduced risk is even a little bit wrong, they lose money. And there are a lot of other factors. Other vehicles may be involved in an accident and maybe not similarly equipped (short of force fields, you can’t stop other vehicles crashing into you). Or perhaps you’re driving so fast and close to another car that collision avoidance systems will be unable to help if that car stops suddenly.
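To see how a thin-margin business amplifies small pricing errors, consider a toy underwriting calculation. All of the numbers below are hypothetical, chosen only to illustrate the sensitivity; none come from the article:

```python
def underwriting_profit(premium, claim_rate, avg_claim):
    """Expected profit per policy: premium minus expected claim cost."""
    return premium - claim_rate * avg_claim

premium = 1000.0     # annual premium (hypothetical)
avg_claim = 10000.0  # average cost of a claim (hypothetical)

# Priced assuming a 9.5% annual claim rate: a small positive margin.
print(underwriting_profit(premium, 0.095, avg_claim))

# If ADAS reduces risk less than assumed and the true rate is 10.5%,
# a one-percentage-point estimation error flips the margin negative.
print(underwriting_profit(premium, 0.105, avg_claim))
```

With margins this thin, even a modest misjudgment of how much ADAS actually reduces claim frequency turns a profitable discount into a loss, which is consistent with insurers waiting for empirical data.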

Automakers are apparently not very helpful in sharing detailed features information by model with insurers. The article adds that there’s also a lack of standards, making it more difficult to calibrate such statistics as are available.

One more thing. A lot of accidents are fender-benders, which didn’t necessarily require costly repairs in the pre-ADAS days. But now a damaged fender has to be replaced with a new fender with all those fancy smart sensors, turning what had been maybe a $300 repair into a $1,500 tab.

So the insurers are waiting on more data. Even after the automakers have figured out how to share their data effectively, I am completely confident that insurers will ignore all theoretical information on what various ADAS features ought to be able to deliver and will instead rely on the empirical evidence they see in day-to-day use – the same way good engineers would.


The China trade issue is back with a vengeance!

by Robert Maire on 08-06-2019 at 10:00 am

Watching the boats go by in Shanghai-

As I write this note I happen to be looking out my hotel window over the Bund onto the brightly lit party boats cruising the Huangpu river that meanders through Shanghai.  All is well here in China and the parties on the boats with millions of LEDs go on……

The view from China is that the trade issue is the US’s problem, not theirs, and they don’t seem overly worried about it here. Not so in the US, especially for chip stocks, which are bearing the brunt of the trade issues and getting trashed.

We have been talking about China trade issues for several years now, long before it became popular and it appears that we are coming to some sort of denouement in which there will likely be a resolution….but not likely a good one.

China issue won’t die…like Jason in cheesy horror movies

Like a dumb teenager in a Friday the 13th movie, we believed the monster would be dead after the China issue was kicked down the road several months ago and the administration had new topics to tweet about.  But it seems to have been resurrected in an even more serious form as both sides appear even more dug in and have already escalated their positions. I don’t think we will be able to kick the can down the road again….there may be blood in the trade wars.

Chip stocks recent bubble just burst….

Chip stocks had been on a tear, going up for no visible reason at all, while reality remained ugly and uninspiring even without adding in the hibernating China issue. That bubble has just burst, as China has brought us back to a reality that is not pretty.

China will make for a longer slower chip recovery

We have been saying that China will cause the recovery from the current chip downcycle to be longer and slower.  Demand will remain muted due to China, and memory makers will have to idle even more capacity, as balance will be restored not by increasing demand but only by falling supply….not a good recovery scenario…especially for equipment.

June was the worst month for DRAM pricing in over ten years….prices continue to crater….how were memory-related stocks going up in the face of that reality?

The newly invigorated trade war will certainly slow not only memory demand but demand across the spectrum of the chip industry.

Huawei & rare earth issues bound to come back as well

As the trade tit-for-tat escalates, the collateral issues will start up again as well.  We already see this kind of collateral issue coming up between South Korea and Japan, as the chip industry is so important that inflicting pain on the other side through it seems an easy thing to weaponize.

So far, semiconductor equipment sales have not been weaponized, but we may not be far off, as cutting China off from chip making equipment could be the coup de grace or the start of mass destruction….depending on how you look at it.  Neither side seems to want to push that button, but things are twitchy……

The stocks

We had advised investors to take money off the table in Lam as the stock had clearly gotten ahead of itself in a weak market, which has clearly worked.

We think we could break through some important support levels in chip stocks as the China news rolls out.

It’s not like the fundamental news is strong enough to offset the dangerous China news….memory still sucks.

We could be at the beginning of a longer unstable period, as the China issue will not go away overnight.  This time we will likely see a more sustained period of concern related to China, which could run through a seasonally slow August, with the only real hope coming in the fall September/October iPhone and holiday sales season, which could also be muted due to China.

We would likely stay on the sidelines until things blow over one way or another….not likely to be a pretty or clean ending as the escalation has been quick….


Adding CDM Protection to a Real World LNA Test Case

by Tom Simon on 08-06-2019 at 6:00 am

In RF designs, Low Noise Amplifiers (LNAs) play a critical role in system operation. They need to be extremely sensitive and noise free, yet must also be able to withstand strong input signals without distortion. LNA designers often struggle to meet device performance specifications. Their task is further complicated by the need to add ESD protection to these highly tuned and sensitive circuits. HBM protection shields circuits from pin-to-pin discharge events, and the methodology for adding these protections is fairly straightforward.

CDM events are typically more difficult to characterize and prevent. CDM events occur much more rapidly than HBM events, which means that HBM protections will not respond quickly enough to be useful. To make matters worse, LNA circuits often use thin-film NFETs, which are more likely to be damaged at lower voltages by CDM events.

In an upcoming webinar on CDM protection network analysis, Magwel will discuss a real-world case involving Qorvo LNA test chips. Qorvo has shown that Magwel’s CDMi solution for evaluating the effectiveness of CDM protections correctly predicted over-voltage damage in a test chip.

The challenge is to add sufficient protection without adding parasitics that would impair circuit performance. In the Qorvo case study, it was shown that designers can determine the optimal protection diode sizing that offers adequate protection and preserves LNA performance.

Initially, the CDM simulation was run with insufficient protection diodes. The error report from Magwel’s CDMi tool showed over-voltage on one of the thin-film NFETs. Qorvo tested physical parts and performed imaging to pinpoint the location of the failure, which agreed with the analysis.

Attendees of the webinar replay will receive a copy of the case study for their own reference. The webinar should provide insight into an effective solution to the challenges of designing and verifying CDM protection for a range of circuit types.

Webinar Abstract:

Failures during manufacturing and assembly, or in the field, caused by charged device model (CDM) ESD events are a serious concern for IC design teams. CDM failures are generally caused by charge build-up on device packages, which capacitively charges large internal nets such as GND or VSS. Once a device pin contacts a current path, the charged internal net can discharge through triggered devices to the pin. ESD protection devices allow this to occur harmlessly. However, if the ESD protection network does not work as intended, dangerously high voltages and currents can affect protected devices in the IC.

The only reliable method of determining whether ESD protections will be effective is simulation. However, conventional circuit simulation is difficult to set up, too slow, and produces results that are hard to interpret for CDM events. Magwel has developed a simulation-based solution specifically designed to address CDM discharge events.

In this webinar you will learn how Magwel’s CDMi efficiently models the complex behavior of a CDM event in an integrated circuit. CDMi uses vf-TLP models in conjunction with 3D solver-based resistive network extraction and dynamic simulation to predict device triggering. The result is comprehensive reporting of discharge-event voltages and current flows. We will show how CDMi enables CDM ESD signoff before tape-out to ensure high product quality and improved yields.

Webinar Replay

About Magwel
Magwel® offers 3D field solver and simulation-based analysis and design solutions for digital, analog/mixed-signal, power management, automotive, and RF semiconductors. Magwel® software products address power device design with Rdson extraction and electromigration analysis, ESD protection network simulation/analysis, latch-up analysis, and power distribution network integrity with EMIR and thermal analysis. Leading semiconductor vendors use Magwel’s tools to improve productivity and avoid redesigns, respins, and field failures. Magwel is privately held and is headquartered in Leuven, Belgium. Further information on Magwel can be found at www.magwel.com.


eFPGA – What a great idea! But I have no idea how I’d use it!

by Daniel Nenni on 08-05-2019 at 10:00 am

eFPGA stands for embedded Field Programmable Gate Array.  An eFPGA is a programmable device like an FPGA, but rather than being sold as a finished chip, it is licensed as a semiconductor IP block. ASIC designers can license this IP and embed it into their own chips, adding the flexibility of programmability at an incremental cost.

We covered the history and importance of FPGAs in our book “Fabless: The Transformation of the Semiconductor Industry”. In fact, you can get the 2019 updated version of Fabless at our upcoming webinar HERE, but I digress…

Flex Logix landed on SemiWiki.com in 2016 as the first eFPGA company. From our first blog on 2/12/2016:

Nearly 30 years after the FPGA debuted, Flex Logix was formed in March 2014 based on programmable logic technology described in an ISSCC paper from UCLA alumni Cheng Wang and Fang-Li Yuan. CEO Geoff Tate (of Rambus fame) set a course away from competing with FPGA companies, instead adopting an IP strategy and aiming to embed reconfigurability in high-volume SoCs for mobile, IoT, wearable, server, and other applications. Flex Logix begins 2016 with 1 patent issued and 6 more applications pending, a recent $7.4M round of Series A1 financing, a new VP of silicon engineering in Abhijit Abhyankar, and a new VP of sales in Andy Jaros.

A lot has changed in the eFPGA business over the last three years, but not Andy Jaros. Andy is still VP of Sales at Flex Logix, and he will be presenting at our upcoming webinar “eFPGA – What a great idea! But I have no idea how I’d use it!” Here is the abstract in case you are interested in talking to Andy and learning more about eFPGAs:

For decades, chip designers have thought, “wouldn’t it be great to have RTL flexibility in our ASICs?” Decades have come and gone, and there have been many failed attempts at providing this type of technology. Now that viable, usable FPGA IP is available, the challenge is for designers to take advantage of it. This webinar will discuss why FPGA IP is viable now. It will also provide some ideas for where designers may be able to take advantage of this programmable technology on their next ASIC.

Embedded FPGA (eFPGA) Overview handout for attendees included!

Andy has decades of semiconductor experience to share and has been championing eFPGA use for the last 3+ years, so he knows where the bodies are buried. I first worked with Andy when Virage Logic acquired ARC, and we have been friends ever since. Before ARC, Andy was at ARM and Motorola, so he knows the processor core business. After Synopsys acquired Virage, Andy spent 5+ years with the Synopsys IP group before joining Flex Logix in early 2016. Andy and I are neighbors, so I literally know where he lives. I remember talking about his job offers with him at our local coffee shop. I remember voting for Flex Logix with both hands.

The upcoming eFPGA webinar is offered in three different time zones for your convenience or you can register without attending and the replay will be sent to you automatically.  Either way, I hope to see you there!

Flex Logix Company Website

Flex Logix on SemiWiki


Intel, Motorola, and the IBM PC

by John East on 08-05-2019 at 6:00 am

Wikipedia  …   “In chaos theory, the butterfly effect is the sensitive dependence on initial conditions in which a small change in one state of a non-linear system can result in large differences in a later state”.  In other words, a butterfly bats its wings in Argentina and the path of an immense tornado in Oklahoma is changed some time later.

In 1980, IBM undertook a very secret project.  They had decided to develop a personal computer.  Apple Computer was making a killing in the personal computer market.  (See my upcoming weeks #13 and #14 dealing with Apple). IBM owned the big computer market.  They weren’t about to allow upstart Apple to horn in on their territory! Normal IBM policy was to design their products in a central design group in New York and to use primarily IBM manufactured ICs.  They recognized that sticking to this policy would slow things down.  They didn’t want to go slowly.  They wanted to announce the product in the summer of 1981.   They formed a task-force group in Boca Raton, Florida working under a lab manager named Don Estridge.  The task:  get a personal computer on the market and do it by August 1981.  Use outside ICs.  Use outside software.  Do whatever it takes, but get it out on time!!!   And keep it secret!!!

Meanwhile, Intel was in a tough place.  The memory market was already extremely competitive.  (See my week #6.  “Intel let there be RAM”). The microprocessor market was becoming so as well.  Seemingly every company was offering their own version of a microprocessor. (At AMD we were a microprocessor partner of Zilog who was offering a 16 bit microprocessor called the Z8000.)   Over the past decade Intel had gone from a place where — having introduced the first commercially successful DRAM and microprocessor — they controlled the market to a place where they had  become just one of the pack.  They didn’t like that!  They created Operation Crush  — a massive project  aimed at regaining domination in the microprocessor space.  Bill Davidow managed the effort.  Andy Grove supported it strongly via a message to the field sales organization saying essentially,  “If you value your jobs,  you’ll produce 8086 design wins”.

Paul Indaco (now the CEO of Amulet Technologies) was a young kid just out of school. He was working at Intel in the Applications Department. As part of a rotational program (common in those days), he was sent out into the field to learn the selling side of the business.  As luck would have it,  he ended up in the Intel sales office in Fort Lauderdale, Florida.  The custom was (and I’d imagine still is) to give the new guy the account scraps that didn’t much matter while the experienced guy kept the important accounts.  So —  Earl Whetstone,  the existing salesman in the office, took the accounts to the south of Ft. Lauderdale and Indaco got the less important ones to the north.  One of the accounts that “didn’t matter” was IBM Boca Raton.  How could IBM “not matter”?  Because Boca Raton was not a design site.  That is, it wasn’t where decisions regarding what parts to use were made.  Those decisions always came down from Poughkeepsie — or so everyone thought.

One day not long after Indaco had moved to Florida, he happened to be talking with a salesman from his distributor (Arrow).  “Oh.  By the way.  An IBM guy asked me today for some info on the 8086.  He works in some secretive new group. He didn’t say why he wanted to know.”  With nothing better to do, Paul got the name and number and called the guy.

Yes.  It turned out that IBM was up to something.  They wouldn’t say what it was. That was top secret.  But  —  they said they were in a huge hurry trying to make a very short deadline.  They said that they had more or less decided to go with a Motorola processor (Probably the 68000) but they conceded that they might be willing to take a quick look at the Intel 8086 along the way.  That wasn’t good for Intel.  It was generally acknowledged that the Motorola 68000 was technically superior to the 8086.   It looked like a longshot for Intel,  and they weren’t even sure what they were shooting at.

Intel had a few advantages though.  The first was their development system  — the 8086 in-circuit emulator.  It was better than what Motorola had to offer.  That would be helpful in speeding up the design and software debugging process.  Given the tight deadline, that could be important!  Paul loaned them one.  Then came good news.  The IBM engineer soon said something like, “Hey, I like this development system, would you loan me another one?”   The Intel policy was one loaner to a customer. The issue was clear though:  “Any work they do on an Intel development system applies to Intel only, so let’s help them do a lot!”  So Paul talked with Arrow who happily agreed to loan three more.   IBM often needed help on site from the Intel FAE.  The project was so secretive, though, that when the FAE went to help with the emulation work, the emulator was separated from the rest of the lab by curtains.  All he could see was the door, the emulator, and the curtains.  IBM would escort him in, he would solve the problem, and then IBM would escort him out.

Intel had three other advantages:  Bill Davidow, Paul Otellini, and Andy Grove.  Those were good advantages to have!! Bill ran Intel’s microprocessor division, Paul ran Intel’s strategic accounts, and Andy ran Intel.  They wanted this win!  Operation Crush was in full force!  Any number of issues had to be solved.  Among them was the ever-present issue of needing to beat Motorola.  And of course, there was the issue of pricing.  IBM wanted a price that was in the neighborhood of one half the current 8086 ASP.  Then, the Intel team had an epiphany!  Why not switch from the 8086 to the 8088?  (The 8088 was an 8 bit external bus version of the 8086.) Pricing would be less of an issue with the 8088 and IBM might like it because it would speed up the design cycle.  Why?  Because Intel had a complete family of 8 bit peripherals which would eliminate the time required to design the functions that the peripherals handled.  The available peripherals would not only speed up the project, they’d also reduce the number of components required to do the job. Neither the 68000 nor the 8086 had a complete family of peripheral chips at that time.   In the end the Indaco/Whetstone/Otellini/Davidow/Grove team pulled out a victory.  Even after they won, though, they didn’t know what they had won until the day IBM announced.  The design win report that Indaco filed listed a win in a new IBM “Super intelligent terminal”.

It ended up being the most important design win in semiconductor history.

What does this have to do with butterflies and chaos theory? ……    Intel is the biggest semiconductor company in the world.  To a great extent that is due to the IBM design win.  I wonder what company would be biggest if Indaco hadn’t happened to be talking with the Arrow salesman that day?  What if the Arrow guy had happened to talk with a Zilog salesperson or an AMD salesperson instead? Or one from National or Motorola or Fairchild?!!!  The world might be very, very different!

Grove went on to be Time Magazine’s Man of the Year in 1997.  Otellini went on to be CEO of Intel for a decade.  Davidow went on to become a very successful venture capitalist with the distinction of leading one of Actel’s financing rounds.  They all ended up well.

But Indaco has them topped.  He went on to become Actel’s Vice President of Sales!!

Next week:   Steve Jobs

Picture #1. The Plaque awarded to Paul Indaco for winning the IBM PC design

Picture #2.  Paul Indaco holding his plaque earlier this year.


See the entire John East series HERE.


Chapter 4 – Gompertz Predicts the Future

by Wally Rhines on 08-02-2019 at 6:00 am

In 1825, Benjamin Gompertz proposed a mathematical model for time series that looks like an “S-curve”.1  Mathematically, it is a double exponential (Figure 1): y = a·exp(b·exp(−ct)), where t is time and a, b, and c are adjustable coefficients that modulate the steepness of the S-Curve.  The Gompertz Curve has been used for a wide variety of time-dependent models, including the growth of tumors, population growth, and financial market evolution.
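As a quick illustration, the function is trivial to compute. The coefficient values below are made up for the example; note that b must be negative for the curve to rise from near zero toward its saturation level a:

```python
import math

def gompertz(t, a, b, c):
    """Gompertz curve y = a * exp(b * exp(-c*t)).
    With b < 0, y starts at a*exp(b) when t = 0 and saturates at a as t grows."""
    return a * math.exp(b * math.exp(-c * t))

# Illustrative coefficients: saturation level a, shape b, steepness c.
a, b, c = 100.0, -5.0, 0.5
print(gompertz(0, a, b, c))   # slow start, well below a
print(gompertz(50, a, b, c))  # essentially at saturation
```

Varying c stretches or compresses the curve in time, while b sets how far below saturation the curve begins.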

FIGURE 1. The Gompertz Curve

S-Curves are common in nature.  In any new business, or in biological phenomena, we start out small with an embryonic business or a tiny cell; it reproduces slowly, but the percentage growth rate is large. As time goes on, the growth accelerates until it finally slows as it approaches saturation.  A new product takes a significant period of time for early adopters to spread the word of its benefits, but it then goes viral, saturates the market, and then declines (Figure 2).  On the right half of Figure 2, we see the same phenomenon when the vertical axis of the graph is the cumulative number.  An example would be the freezing of water in a pond.  It starts with a few water molecules and then grows to a critical nucleus, which grows rapidly until the pond is mostly frozen.  Then the last bit of water freezes over a longer period of time.  Expressed mathematically, the cumulative function is the area under the rate curve, and it increases until the S-Curve finally flattens.

FIGURE 2. Typical product life cycle or life cycle of an industry

Figure 3 shows the stages of growth of the S-Curve.  It starts out slow, but the highest percentage growth comes early in the S-Curve evolution.  The curve bends upward until about 37% of the time on the horizontal axis is completed.2  After that, it bends downward.  Mathematically, we would say that the second derivative of the Gompertz function is positive until about 37% of the time is completed, zero at that inflection point, and negative thereafter, so the growth rate is less each year after that point.
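For the form above, the inflection point can be checked directly: the second derivative vanishes where b·exp(−ct) = −1, i.e. at t* = ln(−b)/c, where the curve sits at a/e, roughly 37% of its saturation value (the same 1/e constant behind the 37% figure). A small sketch with illustrative coefficients:

```python
import math

def gompertz(t, a, b, c):
    return a * math.exp(b * math.exp(-c * t))

a, b, c = 100.0, -5.0, 0.5  # illustrative coefficients, b < 0

# y'' = a*b*c^2 * exp(-c*t) * exp(b*exp(-c*t)) * (1 + b*exp(-c*t)),
# which is zero where b*exp(-c*t) = -1, i.e. at t* = ln(-b)/c.
t_star = math.log(-b) / c
print(round(gompertz(t_star, a, b, c) / a, 3))  # 0.368, i.e. 1/e of saturation
```

Before t* the curve is convex (accelerating growth); after t* it is concave, matching the life-cycle stages in Figure 3.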

FIGURE 3. Gompertz Curve Life Cycle

I first became acquainted with the Gompertz Curve while managing a design project that TI was doing for IBM.  IBM wanted us to report the number of simulated transistors that we had completed in our design each week.  They then plotted them as a Gompertz Curve (Figure 4).  Inexperienced project managers would have been frustrated by the fact that progress was initially very slow.  The specification for the design project kept changing, new architectural approaches were tested and the number of simulated transistors remained small for some time.  Then, things took off.  The number of transistors completed each week grew linearly.  Our inexperienced design manager would have been delighted and would have extrapolated this progress to an early completion as shown in Figure 4.  With more experience, he would realize that the last fifth of the project would take more than one third of the total time.

FIGURE 4. Use of Gompertz Curve for Project Management

While the Gompertz Curve is useful for project management, it provides even more insight when forecasting the future success of an embryonic product.  Figure 5 shows the evolution of worldwide sales of notebook PCs.  Using the actual shipments of PC notebooks in the years up through 2001, we can solve for the Gompertz coefficients a, b, and c.  We could then have used these coefficients to predict the future evolution of the growth curve for cumulative units of PC notebooks shipped.  Figure 6 shows the Gompertz prediction versus the actual results reported in 2016.  The results are nearly identical.  If you were an aspiring competitor in the PC notebook business in 2001, or even an investor in the personal computer business, accurate knowledge of the future market for PC notebooks over the next fifteen years could have been very useful.
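As a sketch of how the coefficients might be solved for (using synthetic data in place of the actual shipment numbers, which aren't reproduced here): since ln y = ln a + b·exp(−ct) is linear in exp(−ct) once c is fixed, a simple scan over c plus linear least squares recovers all three coefficients, which can then be extrapolated forward:

```python
import numpy as np

def gompertz(t, a, b, c):
    return a * np.exp(b * np.exp(-c * t))

# Synthetic stand-in for "cumulative notebook shipments through 2001".
t = np.arange(0.0, 12.0)                      # years since launch
y = gompertz(t, a=500.0, b=-4.0, c=0.35)      # cumulative units (e.g. millions)

# For each candidate c, fit ln(y) = ln(a) + b * exp(-c*t) by least squares
# and keep the c with the smallest residual.
best = None
for c in np.linspace(0.05, 1.0, 200):
    X = np.column_stack([np.ones_like(t), np.exp(-c * t)])
    coef, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
    err = np.sum((X @ coef - np.log(y)) ** 2)
    if best is None or err < best[0]:
        best = (err, np.exp(coef[0]), coef[1], c)

_, a_fit, b_fit, c_fit = best
forecast = gompertz(np.arange(0.0, 27.0), a_fit, b_fit, c_fit)  # 15-year extrapolation
```

With clean data the recovered coefficients land close to the true values; with real, noisy shipment data a general nonlinear least-squares fit would be the more robust choice.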

FIGURE 5. PC Notebook Shipments through 2001 provide data for Gompertz forecast

FIGURE 6. Actual PC Notebook shipments through 2016 (shown in green) versus Gompertz prediction in 2001 (shown in yellow)

Finally, Gompertz Curves can be used to predict the future of an industry.  A good choice would be the future of the silicon transistor since lots of research dollars have been devoted to developing an alternative to the silicon switch and we don’t even know how soon we need it.  Or do we?  Gompertz analysis provides an opinion.  It’s shown in Figure 7.  Although the semiconductor industry and silicon technology may seem mature to some, we are in the infancy of our production of silicon transistors.  The cumulative number of silicon transistors produced thus far is almost negligible compared to the future, as shown in Figure 7. The actual RATE of growth of shipments of silicon transistors is predicted to increase until about 2038.  At that time, the Gompertz Curve suggests that the increase in the RATE of growth will become zero and the RATE of increase will be less each year until we reach saturation, sometime in the 2050 or 2060 timeframe.  By then, we should have developed lots of alternatives.

Figure 7. Future of the silicon transistor

1https://en.wikipedia.org/wiki/Benjamin_Gompertz

2https://arxiv.org/ftp/arxiv/papers/1306/1306.3395.pdf

Read the completed series