
Intel,  Motorola, and the IBM PC
by John East on 08-05-2019 at 6:00 am

Wikipedia  …   “In chaos theory, the butterfly effect is the sensitive dependence on initial conditions in which a small change in one state of a non-linear system can result in large differences in a later state”.  In other words, a butterfly bats its wings in Argentina and the path of an immense tornado in Oklahoma is changed some time later.

In 1980, IBM undertook a very secret project.  They had decided to develop a personal computer.  Apple Computer was making a killing in the personal computer market.  (See my upcoming weeks #13 and #14 dealing with Apple). IBM owned the big computer market.  They weren’t about to allow upstart Apple to horn in on their territory! Normal IBM policy was to design their products in a central design group in New York and to use primarily IBM manufactured ICs.  They recognized that sticking to this policy would slow things down.  They didn’t want to go slowly.  They wanted to announce the product in the summer of 1981.   They formed a task-force group in Boca Raton, Florida working under a lab manager named Don Estridge.  The task:  get a personal computer on the market and do it by August 1981.  Use outside ICs.  Use outside software.  Do whatever it takes, but get it out on time!!!   And keep it secret!!!

Meanwhile, Intel was in a tough place.  The memory market was already extremely competitive.  (See my week #6.  “Intel let there be RAM”). The microprocessor market was becoming so as well.  Seemingly every company was offering their own version of a microprocessor. (At AMD we were a microprocessor partner of Zilog who was offering a 16 bit microprocessor called the Z8000.)   Over the past decade Intel had gone from a place where — having introduced the first commercially successful DRAM and microprocessor — they controlled the market to a place where they had  become just one of the pack.  They didn’t like that!  They created Operation Crush  — a massive project  aimed at regaining domination in the microprocessor space.  Bill Davidow managed the effort.  Andy Grove supported it strongly via a message to the field sales organization saying essentially,  “If you value your jobs,  you’ll produce 8086 design wins”.

Paul Indaco (now the CEO of Amulet Technologies) was a young kid just out of school. He was working at Intel in the Applications Department. As part of a rotational program (common in those days), he was sent out into the field to learn the selling side of the business.  As luck would have it, he ended up in the Intel sales office in Fort Lauderdale, Florida.  The custom was (and I'd imagine still is) to give the new guy the account scraps that didn't much matter while the experienced guy kept the important accounts.  So — Earl Whetstone, the existing salesman in the office, took the accounts to the south of Ft Lauderdale and Indaco got the less important ones to the north.  One of the accounts that "didn't matter" was IBM Boca Raton.  How could IBM "not matter"?  Because Boca Raton was not a design site.  That is, it wasn't where decisions regarding what parts to use were made.  Those decisions always came down from Poughkeepsie — or so everyone thought.

One day not long after Indaco had moved to Florida, he happened to be talking with a salesman from his distributor (Arrow).  “Oh.  By the way.  An IBM guy asked me today for some info on the 8086.  He works in some secretive new group. He didn’t say why he wanted to know.”  With nothing better to do, Paul got the name and number and called the guy.

Yes.  It turned out that IBM was up to something.  They wouldn’t say what it was. That was top secret.  But  —  they said they were in a huge hurry trying to make a very short deadline.  They said that they had more or less decided to go with a Motorola processor (Probably the 68000) but they conceded that they might be willing to take a quick look at the Intel 8086 along the way.  That wasn’t good for Intel.  It was generally acknowledged that the Motorola 68000 was technically superior to the 8086.   It looked like a longshot for Intel,  and they weren’t even sure what they were shooting at.

Intel had a few advantages though.  The first was their development system  — the 8086 in-circuit emulator.  It was better than what Motorola had to offer.  That would be helpful in speeding up the design and software debugging process.  Given the tight deadline, that could be important!  Paul loaned them one.  Then came good news.  The IBM engineer soon said something like, “Hey, I like this development system, would you loan me another one?”   The Intel policy was one loaner to a customer. The issue was clear though:  “Any work they do on an Intel development system applies to Intel only, so let’s help them do a lot!”  So Paul talked with Arrow who happily agreed to loan three more.   IBM often needed help on site from the Intel FAE.  The project was so secretive, though, that when the FAE went to help with the emulation work, the emulator was separated from the rest of the lab by curtains.  All he could see was the door, the emulator, and the curtains.  IBM would escort him in, he would solve the problem, and then IBM would escort him out.

Intel had three other advantages:  Bill Davidow, Paul Otellini, and Andy Grove.  Those were good advantages to have!! Bill ran Intel's microprocessor division, Paul ran Intel's strategic accounts, and Andy ran Intel.  They wanted this win!  Operation Crush was in full force!  Any number of issues had to be solved.  Among them was the ever-present issue of needing to beat Motorola.  And of course, there was the issue of pricing.  IBM wanted a price that was in the neighborhood of one half the current 8086 ASP.  Then, the Intel team had an epiphany!  Why not switch from the 8086 to the 8088?  (The 8088 was an 8 bit external bus version of the 8086.) Pricing would be less of an issue with the 8088, and IBM might like it because it would speed up the design cycle.  Why?  Because Intel had a complete family of 8 bit peripherals which would eliminate the time required to design the functions that the peripherals handled.  The available peripherals would not only speed up the project, they'd also reduce the number of components required to do the job. Neither the 68000 nor the 8086 had a complete family of peripheral chips at that time.   In the end the Indaco/Whetstone/Otellini/Davidow/Grove team pulled out a victory.  Even after they won, though, they didn't know what they had won until the day IBM announced.  The design win report that Indaco filed listed a win in a new IBM "Super intelligent terminal".

It ended up being the most important design win in semiconductor history.

What does this have to do with butterflies and chaos theory? ……    Intel is the biggest semiconductor company in the world.  To a great extent that is due to the IBM design win.  I wonder what company would be biggest if Indaco hadn't happened to be talking with the Arrow salesman that day?  What if the Arrow guy had happened to talk with a Zilog salesperson or an AMD salesperson instead? Or one from National or Motorola or Fairchild?!!!  The world might be very, very different!

Grove went on to be Time Magazine’s Man of the Year in 1997.  Otellini went on to be CEO of Intel for a decade.  Davidow went on to become a very successful venture capitalist with the distinction of leading one of Actel’s financing rounds.  They all ended up well.

But Indaco has them topped.  He went on to become Actel’s Vice President of Sales!!

Next week:   Steve Jobs

Picture #1. The Plaque awarded to Paul Indaco for winning the IBM PC design

Picture #2.  Paul Indaco holding his plaque earlier this year.

 

See the entire John East series HERE.


Chapter 4 – Gompertz Predicts the Future
by Wally Rhines on 08-02-2019 at 6:00 am

In 1825, Benjamin Gompertz proposed a mathematical model for time series that looks like an "S-curve".1  Mathematically it is a double exponential (Figure 1), y = a·exp(b·exp(−ct)), where t is time and a, b and c are adjustable coefficients that modulate the steepness of the S-Curve.  The Gompertz Curve has been used for a wide variety of time-dependent models, including the growth of tumors, population growth and financial market evolution.
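For readers who want to play with the curve, here is a minimal Python sketch of that double exponential. The coefficients are illustrative placeholders of my own choosing, not values fitted to any data in this chapter.

```python
import numpy as np
import matplotlib.pyplot as plt

def gompertz(t, a, b, c):
    """Gompertz curve y = a * exp(b * exp(-c * t)).

    a sets the saturation level; b (negative here) and c (positive)
    control where the S-curve rises and how steeply it does so.
    """
    return a * np.exp(b * np.exp(-c * t))

# Illustrative coefficients only -- not fitted to any market data.
t = np.linspace(0, 50, 200)
y = gompertz(t, a=100.0, b=-5.0, c=0.15)

plt.plot(t, y)
plt.xlabel("time t")
plt.ylabel("cumulative quantity y(t)")
plt.title("Gompertz S-curve (illustrative coefficients)")
plt.show()
```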

FIGURE 1. The Gompertz Curve

S-Curves are common in nature.  In any new business, or in biological phenomena, we start out small with an embryonic business or a tiny cell, and it reproduces slowly but the percentage growth rate is large. As time goes on, the growth accelerates until it finally slows down as it reaches saturation.  A new product takes a significant period of time for early adopters to spread the word of its benefits, but it then goes viral, saturates the market and then declines (Figure 2).  On the right half of Figure 2, we see the same phenomenon when the vertical axis of the graph is the cumulative number.  An example would be the freezing of water in a pond.  It starts with a few water molecules and then grows to a critical nucleus which grows rapidly until the pond is mostly frozen.  Then the last bit of water freezes over a longer period of time.  Expressed mathematically, the cumulative quantity is the integral of the rate curve (the area under it), and it increases until the S-Curve finally flattens.

FIGURE 2. Typical product life cycle or life cycle of an industry

Figure 3 shows the stages of growth of the S-Curve.  It starts out slow, but the highest percentage growth is early in the S-Curve evolution.  The curvature of the "S" is upward until about 37% of the time on the horizontal axis is completed.2 Then the curvature is downward.  Mathematically we would say that the second derivative of the Gompertz function is positive until about 37% of the time is completed, then it passes through zero and becomes negative, so the growth rate is smaller each year after that point.
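A quick numerical check of that sign change is sketched below. It is self-contained, reuses the same illustrative coefficients as the earlier snippet (nothing here comes from the cited reference), and simply reports where the curvature of the cumulative curve flips from positive to negative.

```python
import numpy as np

def gompertz(t, a, b, c):
    return a * np.exp(b * np.exp(-c * t))

# Same illustrative coefficients as in the earlier sketch.
a, b, c = 100.0, -5.0, 0.15
t = np.linspace(0, 50, 5001)
y = gompertz(t, a, b, c)

# Numerical second derivative of the cumulative curve. It is positive
# (upward curvature) early on and negative after the inflection point,
# which is where the year-over-year growth rate starts to shrink.
d2y = np.gradient(np.gradient(y, t), t)
inflection_index = np.argmax(d2y < 0)   # first grid point with downward curvature
print("curvature turns negative near t =", round(t[inflection_index], 2))
```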

FIGURE 3. Gompertz Curve Life Cycle

I first became acquainted with the Gompertz Curve while managing a design project that TI was doing for IBM.  IBM wanted us to report the number of simulated transistors that we had completed in our design each week.  They then plotted them as a Gompertz Curve (Figure 4).  Inexperienced project managers would have been frustrated by the fact that progress was initially very slow.  The specification for the design project kept changing, new architectural approaches were tested and the number of simulated transistors remained small for some time.  Then, things took off.  The number of transistors completed each week grew linearly.  Our inexperienced design manager would have been delighted and would have extrapolated this progress to an early completion as shown in Figure 4.  With more experience, he would realize that the last fifth of the project would take more than one third of the total time.

FIGURE 4. Use of Gompertz Curve for Project Management

While the Gompertz Curve is useful for project management, it provides even more insight for forecasting the future success of an embryonic product.  Figure 5 shows the evolution of worldwide sales of notebook PCs.  Using the data available to us with the actual shipments of PC notebooks in the years up through 2001, we can solve for the Gompertz coefficients a, b and c.  We could then have used these coefficients to predict the future evolution of the growth curve for cumulative units of PC notebooks shipped.  Figure 6 shows the Gompertz prediction versus the actual results reported in 2016. The results are nearly identical.  If you were an aspiring competitor in the PC notebook business in 2001, or even an investor in the personal computer business, accurate knowledge of the future market for PC notebooks over the next fifteen years could be very useful.
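The actual notebook-PC unit numbers behind Figures 5 and 6 are not reproduced here, so the sketch below fakes a "history" from a known Gompertz curve plus noise, fits a, b and c using only the points available through 2001 with scipy.optimize.curve_fit, and then extrapolates to 2016 the way the text describes. Every number in it is invented purely to show the mechanics.

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, a, b, c):
    return a * np.exp(b * np.exp(-c * t))

# Synthetic "ground truth" -- invented coefficients, not the real PC data.
true_a, true_b, true_c = 1500.0, -6.0, 0.18
years = np.arange(1990, 2017)
t = years - years[0]
rng = np.random.default_rng(0)
actual = gompertz(t, true_a, true_b, true_c) * (1 + 0.03 * rng.standard_normal(len(t)))

# Fit the coefficients using only the data "available" through 2001 ...
known = years <= 2001
(a, b, c), _ = curve_fit(gompertz, t[known], actual[known],
                         p0=(1000.0, -5.0, 0.1), maxfev=10000)

# ... then extrapolate the fitted curve out to 2016 and compare.
forecast = gompertz(t, a, b, c)
print("fitted a, b, c:", round(a, 1), round(b, 2), round(c, 3))
print("2016 forecast vs simulated actual:",
      round(forecast[-1], 1), "vs", round(actual[-1], 1))
```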

FIGURE 5. PC Notebook Shipments through 2001 provide data for Gompertz forecast

FIGURE 6. Actual PC Notebook shipments through 2016 (shown in green) versus Gompertz prediction in 2001 (shown in yellow)

Finally, Gompertz Curves can be used to predict the future of an industry.  A good choice would be the future of the silicon transistor since lots of research dollars have been devoted to developing an alternative to the silicon switch and we don’t even know how soon we need it.  Or do we?  Gompertz analysis provides an opinion.  It’s shown in Figure 7.  Although the semiconductor industry and silicon technology may seem mature to some, we are in the infancy of our production of silicon transistors.  The cumulative number of silicon transistors produced thus far is almost negligible compared to the future, as shown in Figure 7. The actual RATE of growth of shipments of silicon transistors is predicted to increase until about 2038.  At that time, the Gompertz Curve suggests that the increase in the RATE of growth will become zero and the RATE of increase will be less each year until we reach saturation, sometime in the 2050 or 2060 timeframe.  By then, we should have developed lots of alternatives.

Figure 7. Future of the silicon transistor

1https://en.wikipedia.org/wiki/Benjamin_Gompertz

2https://arxiv.org/ftp/arxiv/papers/1306/1306.3395.pdf

Read the completed series


Lam beats reduced guide as 2019 is done and 2020 is just a hope!
by Robert Maire on 08-02-2019 at 4:00 am

Nice house in a neighborhood in decline

Lam posted EPS and revenues ahead of reduced expectations, but guided the current quarter below street current estimates.

Is a "beat" really a "beat" if it's against greatly reduced numbers? We would remind investors that we are looking at EPS cut more or less in half from a year ago.

Execution has been great as Lam management has done a great job of cutting opex and doing everything in their control that they can possibly do, but it doesn't make up for a market continuing to decline quarter after quarter. The $1.1B in stock buybacks this quarter, added to previous buybacks, has taken 15% of Lam's shares off the street, also propping up EPS.

Management did not "call a bottom" as there is no bottom in sight but was hopeful for a better 2020. Right now 2020 is nothing more than a hope that the down cycle will be over and memory will recover, but it's a "hope" with no hard evidence.

Memory is still two thirds of Lam's business – and in decline

While memory is no longer 85% of business it is still the vast majority, as Lam remains the most exposed to memory.  As we had pointed out in our Semicon West report, and Lam echoed, foundry is recovering based on 5G early production. While this clearly helped ASML and will also help KLAC, Lam has less exposure than average to the foundry segment. Even Applied has more foundry exposure. While Lam has some unique applications that are not memory specific, it's not enough to offset the weak memory market….at the end of the day Lam is still a memory driven company…..

Memory buys are technology not capacity driven

We have said for many years that there are two cycles that underlie buying: technology buying cycles and capacity buying cycles. Technology buys obviously follow Moore's law and the progress in memory/logic technology. Capacity buys, which are higher in overall volume, are based on market demand, such as the switch to SSDs. Right now capacity buys are zero as equipment is still being idled to artificially reduce supply, while technology buys to increase NAND layer count or migrate from 1Z to 1A continue.

Lam needs capacity related buying to come back before it can recover but capacity related buying will take a longer time to come back as the idled capacity will come back on line long before memory makers will need to buy new equipment for additional capacity purposes.

Still searching for a bottom….

Setting the “Limbo Bar” lower….

As we continue in a slow downward spiral, estimates continue to be lowered, resetting the bar ever lower so that the company can "beat" the new lower number and say it beat expectations; meanwhile the stock is up 50% on the year in a declining memory sector with EPS cut in half….go figure….  It might be reasonable if there were a clear or even a murky recovery coming together, but right now there is no difference as most analysts continue to kick the can of recovery down the road another quarter or 6 months or talk about an "ethereal" recovery in 2020.  We would remind investors that the vast majority of so-called analysts, who also thought the industry was no longer cyclical or that the downturn was a one-quarter "air pocket," are now calling for a 2020 recovery after previously calling for a 2019 recovery.

The problem is that no one really knows…..

Niche technology is nice but not impactful

Lam talked about some new areas of business outside of the core wheelhouse of advanced etch and dep.  Given the huge number of steps in chip manufacturing and many types of process there is a lot of fertile ground for new business which Lam is doing a good job of rounding up.  This too helps cushion but not offset the downturn in mainstream memory tools.  We think some of these applications could potentially be larger in a recovery scenario.

One that we find interesting, though not publicly mentioned, is "cryo" etch (or a "cold" etch) for buttery soft materials used in new memory types such as Intel's Optane. We remain a fan of the upside of these types of memory devices.

The stocks

If we put a 15X market multiple on Lam's current outlook for the year we get a $210 price target…which is where we are today.  The problem is that the EPS outlook continues to come down, and we think Lam should trade at a discount to the market as its business is in contraction mode, not expansion mode. If we take a haircut to the EPS outlook and discount the multiple we get a target well below the current stock price.

All this is beside the point that the stock is up 50% on the year in a declining business.  We still have a lot of time to buy the stock prior to an upcycle because the upcycle is still far off in the future.  Add to all this the uncertainty of China still out there.

We would consider taking money off the table as the downside beta is clearly higher than the upside at this point.

Other Stocks

ASML was driven by logic/foundry and an earlier recovery of litho tools.  KLAC has always been a logic/foundry driven company and not a memory company like Lam.  Applied has strong ties to the foundry market.

However, we need to be clear that the memory industry has grown so large and so fast that no tool company can remain immune to its weakness or escape the gravitational pull of the weak memory sector.  We would also caution investors that a "stabilizing" memory market does not mean that memory companies will rush out and start ordering tools in bulk again.  It could be a long slow climb of using up the idled capacity before we start buying new again.  Even if memory stabilizes in H2 2019 there is no guarantee of increased equipment purchases in 2020.


GPU-Powered SPICE – Understanding the Cost
by Daniel Nenni on 08-01-2019 at 10:00 am

To deploy a GPU-based SPICE solution, you need to understand the costs involved. To get your hands on this new report analyzing this specific issue, all you need to do is attend Empyrean’s upcoming webinar, “GPU-Powered SPICE:  The Way Forward for Analog Simulation,” which will be held on Thursday, August 8, 2019, at 10:00 am (PDT). This webinar is the first webinar in the SemiWiki Webinar series. Click here to sign up using your work email information.

SPICE (Simulation Program with Integrated Circuit Emphasis) was initially developed at the Electronics Research Laboratory of the University of California, Berkeley by Laurence Nagel in the early 1970s. SPICE1, the version of SPICE first presented at a conference in 1973, was largely an effort to have a circuit simulator without ties to the Department of Defense, essentially keeping the electrical engineering department in step with the rest of the anti-war movement at UC Berkeley. SPICE1 was coded in FORTRAN. I believe it ran on the IBM mainframe computer available to the department.

In 1989, SPICE3 was released, the first version of SPICE written in C. As the code has long been available under the standard BSD license, SPICE has been available via open-source, and it was ported to many CPU-based systems as commercial versions started popping up from the emerging EDA industry during the 1990s. There are still many versions of SPICE out there, commercial, academic, and proprietary.

Unfortunately, SPICE being ported to newer computers was still insufficient to keep up with the increasingly large circuits engineers wanted to simulate. Simulation times kept getting longer. Less accurate versions of SPICE were then invented, generally referred to as fast-SPICE. They traded off some accuracy for improved speed of analysis. This has continued to be the state of the market until just recently.

For the last 15 years or so, companies have been trying to find ways of harnessing the incredible computation power of GPUs (Graphics Processing Units) as an alternative to CPUs. CPU-based computers typically have one to eight processors at their disposal. GPUs have hundreds of processors, though they cannot do general-purpose computing well. The idea when programming a GPU is to give it a large sequence of calculations to do with little branching (e.g., avoid IF statements). GPUs are data throughput engines. Think of them as the dragsters of processing units – they go extremely fast but do not turn very well. So, as long as you can keep feeding the GPU data and let it just calculate, it runs fast. When you ask a GPU to branch, it slows down. It takes experienced GPU programming skills to re-write code written for a general-purpose processor (CPU) and make it perform well on the GPU. Not all types of algorithms can be ported to a GPU and run faster, but matrix solving, which is critical in SPICE, is one of them. I have seen this before in photo-lithography simulations. It makes sense that it can be done with SPICE as well.
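As a rough illustration of why this matters for SPICE, the sketch below times the same dense matrix solve (the workhorse operation inside a SPICE time step) on the CPU with NumPy and, if the optional CuPy package and a CUDA GPU happen to be available, on the GPU. It is only a toy comparison under those assumptions, not a description of Empyrean's implementation, and real circuit matrices are sparse rather than dense.

```python
import time
import numpy as np

# Build a well-conditioned dense system Ax = b as a stand-in for the
# matrix solve at the heart of a SPICE time step (real matrices are sparse).
n = 4000
rng = np.random.default_rng(1)
A = rng.standard_normal((n, n)) + n * np.eye(n)   # diagonally dominant
b = rng.standard_normal(n)

t0 = time.perf_counter()
x_cpu = np.linalg.solve(A, b)                      # CPU path: a handful of cores
print(f"CPU solve: {time.perf_counter() - t0:.2f} s")

try:
    import cupy as cp                              # optional GPU path
    A_gpu, b_gpu = cp.asarray(A), cp.asarray(b)
    t0 = time.perf_counter()
    x_gpu = cp.linalg.solve(A_gpu, b_gpu)          # hundreds of cores, no branching
    cp.cuda.Stream.null.synchronize()              # wait for the GPU to finish
    print(f"GPU solve: {time.perf_counter() - t0:.2f} s")
except ImportError:
    print("CuPy not installed -- skipping the GPU comparison")
```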

Empyrean is working on a paper comparing the cost differences between its GPU-accelerated ALPS-GT™ SPICE simulator and CPU-based simulators. Keep in mind that Empyrean already boasts the fastest CPU-based SPICE simulator, Empyrean ALPS™, which was voted "Best of DAC 2018" for being the fastest SPICE simulator with True SPICE accuracy.  Empyrean ALPS™ has displayed 3X – 8X faster performance than the next fastest SPICE simulator in the market. Empyrean claims ALPS-GT is an order of magnitude faster with the same True SPICE accuracy. There are so-called fast-SPICE simulators that sacrifice some accuracy to achieve faster throughput. That is not what Empyrean's tools are. They provide true SPICE accuracy.

About Empyrean Software
Empyrean Software provides electronic design automation (EDA) software, design IPs and design services, including analog and mixed-signal IC design solutions, SoC design optimization solutions, and Flat Panel Design (FPD) solutions and customized consulting services.

Empyrean Software is the largest EDA software provider in China, and its research and development team has more than 30 years of experience in technology and product development. Our company has comprehensive cooperation with many corporations, universities and research laboratories.

Empyrean’s core values are dedication, collaboration, innovation, and professionalism. Our company is committed to working with customers and partners through win-win collaboration to achieve our common goals.


Arm Gets More Creative with Licensing
by Bernard Murphy on 08-01-2019 at 6:00 am

Without a doubt, RISC-V is generating a lot of buzz and I'm sure a lot of new designs, especially in spaces that are super cost-competitive or demand added differentiation in the processor. I doubt this is having a meaningful impact on Arm's business in dollars, as opposed to press coverage. It takes a long time to replace an ecosystem of that size and the confidence markets have in Arm products. It's not even clear it would make sense to displace Arm in the foreseeable future, any more than it would make sense for Arm to displace Intel in servers. In both cases, there are subset markets that can be better served by an alternative, but there's no apparent (to me) reason to switch most applications.

Still, I’m sure Arm is feeling some heat. I’m guessing they are also under pressure from customers needing to respond to highly fluid demand, such as system builders moving into SoC design, where what IPs they need and how many they need may not be very clear until relatively late in the development cycle. Perhaps some of those design teams might also wonder if life would be a lot easier if they could instead work with other platforms.

After all, the Arm business model for development wasn't very flexible; you had to decide up front which IPs you wanted to license and pay for them. Many RISC-V implementations are open-source and free to use as a starting point, or available under attractive terms compared to Arm options. And, no doubt, there is some appeal to the thought that you might be able to get processor IP for free, with no up-front payment and perhaps no royalties, even if you have to do more design work yourself.

Arm now offers Flexible Access, a new engagement model intermediate between DesignStart ($0 for access to Cortex-M0, M1 and M3, software trial for 90 days, royalties when you go to production) and the standard licensing model where fees vary with IP you want to use, single or multi-use access, levels of access and so on.

In Flexible Access you get access to a much wider range of core and support IP (e.g. system and security IP) and can choose between one tapeout per year or multiple tapeouts per year at (per the current website details) $75k or $200k annually. Again you pay royalties on production volumes. Arm already has several partners active in this engagement model, including AlphaICs, Invecas and Nordic Semi.

IPs included under the plan include most Cortex-M, -A and -R processors, TrustZone and CryptoCell IP, a number of Mali GPUs, system IP such as the AMBA fabric generators and other tools and models for design and software development. Global support and training are also included.

OK, so it's not free, but it definitely is more flexible. A lot of customers don't know exactly upfront which IP they are going to need or how many instances they are going to need. The big systems houses – hyperscalers, communications equipment makers and similar – won't particularly care about cost, but they do need flexibility. Smaller and more cost-sensitive ventures needing to react quickly to updated spec demands from their customers should definitely appreciate this new model. And I would imagine for all of these customers, easier access to this range of Arm IP has to be a more attractive and safer option than launching a RISC-V adventure.

Not everyone has the stomach for the inevitable risk in embracing open hardware standards or the need to differentiate on the processor. WD knows exactly what they want from RISC-V and has years of experience and large teams building similar designs around Arm cores; their work with RISC-V must feel like a relatively incremental step for them. But IMHO (and I’m not alone) this step will be a big and unnecessary unknown and risk for many. That said, more heat on Arm (or any near-monopoly) is never a bad thing. They’ll work harder and we’ll all benefit.


56th DAC – In Depth Look at Analog IP Migration from MunEDA
by Tom Simon on 07-31-2019 at 10:00 am

Every year at DAC, in addition to the hubbub of the exhibit floor and the relatively short technical sessions, there are a number of tutorials that dive in depth into interesting topics. At the 56th DAC in Las Vegas this year, MunEDA offered an interesting tutorial on Analog IP migration and optimization. This is a key issue for large and small companies. Digital IP migration is a fairly well bounded problem, making digital IP reuse a common activity. Though no less important, analog IP has been more difficult to adapt to new processes and new foundries. The 4 hour MunEDA tutorial was rich with technical content and real life case studies from Fraunhofer, inPlay Technologies, Rohm and STMicroelectronics. As always it is way more interesting to hear about design tool experiences from customers.

Michael Pronath from MunEDA started off the tutorial by discussing tools and methodologies for analog IP migration. MunEDA addresses issues in full custom design, which includes memories, custom cells, RF, and of course, analog. There are many challenges in this domain, including difficult design trade-offs and design for yield & aging. Their migration flow includes SPT, for schematic porting. It handles many of the tedious and error-prone steps involved in moving a design. The ported schematic can then be sized and tuned for the new technology with MunEDA's WiCkeD sizing and tuning tools. WiCkeD stands for their Worst Case Distance optimization and analysis techniques. The final step is applying the WiCkeD-based analysis and verification tools to ensure proper operation and performance of the finished design. This can include accounting for process parameters, global and mismatch variation, reliability and more.

I found the portion of the tutorial given by Rohm's Hidekazu Kojima particularly interesting. Rohm is a company that has been innovating semiconductor products since the late 1950s. Their IP group uses MunEDA products to tailor their IP to the individual product group needs. They rely heavily on MunEDA WiCkeD tools for this. Hidekazu compared the flow they use to a traditional optimization flow. The main problem he highlighted with other flows is that there can be many iterations, where each small change requires reverification of all the performance specifications. MunEDA's WiCkeD Deterministic Nominal Optimizer (DNO) pretty much handles the whole process and only requires a few automatic iterations to reach closure.

Hidekazu then talked about finding the worst-case operating condition and corner. The WiCkeD tools can detect a worst operating condition between a min and max range. He also mentioned the easy to understand output graphs produced by the tools. The next part of his presentation was discussion of several case studies, including an AMP circuit, memory circuit, a comparator, and a logic circuit. For the AMP circuit, the optimization time went from 160h to ~6h. The area was also reduced by 60% compared to the original circuit with improved operating characteristics.

He closed with an overview of what he felt were the most useful features. Naturally the schematic porting features were included. He said it made it easy to replace devices with the new ones from the target PDK. It also automates any necessary rewiring. The Worst Case Analysis (WCA) algorithm significantly reduced the number of simulations needed for high sigma verification. This is useful for designs intended for automotive applications. WCA was also very useful for helping improve robustness for process variation and mismatch, with higher accuracy in fewer simulations.  Lastly, they were easily able to produce corner models based on PCM data from typical models. They came to within 1% of their target in only ~15 minutes.

Over time companies develop an array of design IP, which comes to represent significant value. Having the ability to easily and predictably migrate this IP means that this value can be effectively leveraged for future projects. MunEDA tools make this a reality. The tutorial was filled with a rich variety of applications for MunEDA WiCkeD tools. If you were not able to attend the tutorial, the comprehensive slides are available on the MunEDA website. I expect that, just as I did, you will find the contents very informative.


IP Provider Vidatronic Embraces the ClioSoft Design Management Platform
by Randy Smith on 07-31-2019 at 6:00 am

Having worked at several semiconductor intellectual property (SIP) companies, I know how important it is to have a strong design data management platform for tracking the development and distribution of SIP products. Everyone doing semiconductor design should care about design data management. But for an IP company, it is imperative. Life gets complex quickly when you start giving your customers different versions of the same IP. So, it got my attention when Vidatronic, a provider of energy-efficient analog and power management unit (PMU) IP, said they were willing to talk about their use of ClioSoft’s SOS Design Platform to develop their IPs.

First, a bit about Vidatronic. They have been around since 2010 and have a couple of interesting outside board members amongst some of the biggest names in semiconductors, including Hector Ruiz, Ph.D. (former CEO of AMD) and Mike Bartlett, M.S.E.E. (former Texas Instruments VP). Recently, Vidatronic announced that they will provide PMU and analog IP cores to ARM for use in their solutions and have also teamed up with Samsung Foundry to provide analog IP core designs for licensing through SAFE™, Samsung Advanced Foundry Ecosystem. They sport two primary engineering locations, one in Austin, Texas and one in Egypt. Their analog SoC IP portfolio includes power-management solutions including LDO linear voltage regulators, DC-DC switching converters, bandgap voltage references, and other support circuitry. They also provide radio-frequency solutions, including CMOS transmitters.

Based on this basic description of Vidatronic, we can see that they need to support many SIPs across a large number of design process nodes, where they also need to develop customized versions for certain customer/process node combinations. But when diving in deeper with Vidatronic, we find even stronger reasons to deploy ClioSoft’s SOS Design Platform:

  1. Reduced complexity and improved efficiency while supporting multiple sites
    1. Used in Texas and Egypt
    2. Supports real-time sharing of design data between the sites
    3. Performance needs for auto-synchronization and secure, efficient data transfer
    4. Easy to control/restrict design access
    5. Optimized disk usage using SOS smart caching (with links to cache work areas to optimize network storage)
    6. Read-only local copy work areas with exclusive or concurrent locking
  2. Support for Cadence Virtuoso platform
    1. Ability to manage complex hierarchical cell views
    2. Integrates well with Cadence Virtuoso
  3. Critical features for tracking multiple versions of each IP
    1. Easy to take and label design snapshots of the designs which helps in efficient collaboration between the teams
    2. “Revert back” feature for recovering to a stable version of the design data, if necessary
    3. Design teams can record important milestones, plus review and track open issues
    4. Use “visual design diff” to identify differences between two revisions of the schematic or layout of an IP

That is a lot of strong reasons for Vidatronic to utilize ClioSoft’s SOS Design Platform.

When I asked Moises Robinson, President and Co-Founder of Vidatronic about his company’s experience with ClioSoft, he told me “We selected ClioSoft’s SOS Design and IP Management Platform for our design needs about four years ago and we have been extremely happy with it ever since. The number one reason we chose ClioSoft was for its design collaboration features. Operating on a global scale is not without its challenges, but ClioSoft allows our engineers across the world to work seamlessly together on the same projects while maintaining tight control over revision histories so we never lose any of our work. Effective management of the different versions of our IP and efficient collaboration among our designers is integral to our success, and ClioSoft plays an important role in us ultimately delivering the highest quality IP to our customers.”

That is an impressive endorsement. Indeed, ClioSoft’s SOS Design Platform seems like a perfect tool for companies developing IPs over multiple sites worldwide.

Also Read

56thDAC ClioSoft Excitement

A Brief History of IP Management

Three things you should know about designHUB!


Semicon West 2019 – Day 3 – Global Foundries
by Scotten Jones on 07-30-2019 at 10:00 am

On Wednesday, July 10th I got to sit down with Gary Patton, CTO and SVP of worldwide research and development of Global Foundries and get an update on how the company is doing.

We started with a discussion of Global Foundries' (GF) general business health.  Revenue for the year is expected to be around $6 billion.  They are focused on profitability and will generate over $600 million in free cash flow after $700 million of capital expenditure and $600 million of R&D spending, not including any transactions.  In the past GF was cash flow negative, so becoming self-funding is a huge accomplishment.

The first key decision in achieving this was pivoting away from 7nm. 7nm is fine for TSMC and Samsung but is messy with EUV, etc. and the R&D and IP investments are very high. According to a graphic they showed me from Gartner, 7nm and smaller nodes are only expected to represent about 20% of the total available market in 2023 so GF is not missing out on a lot of opportunity.

A second key decision has been rationalizing their fabs. GF had three 200mm fabs, with large fabs in Burlington and Singapore and a small fab doing MEMS (Singapore Fab 3E). They have now sold Fab 3E. They also had four 300mm fabs, with large fabs in Malta, Dresden and Singapore and a small fab in Fishkill. They have now sold the Fishkill fab to On Semiconductor. On's products have far fewer masks than GF's, making Fishkill a more appropriately scaled fab for them. Fishkill will transition to On over three years, with 45RFSOI and Silicon Photonics transitioning to Malta and 130nm RFSOI going to Singapore; Dresden has 22FDX and 40nm RF SOI. Even after the fab sales GF still has plenty of space available for growth. Dresden is only about 50% full, Fab 7 in Singapore has some space, and after moving 7nm out of Malta there is about 40% available space there. GF can grow revenue by 40% with current cleanroom space.

GF has also sold their ASIC business to Marvell making them a clean provider of foundry services. They are focusing on being a manufacturing service provider, not a product provider.

GF wants to focus on key market segments that are growing, mobile, automotive and IOT (smart devices). In mobile the BOM is switching to more FEM (Front End Module) where GF is strong (I have previously written about GF’s broad portfolio of RF solutions here) and they are the only foundry with turn-key RF.

Some examples of applications for GF technologies are shown in the following slides.

Figure 1 illustrates GF technologies in wireless base stations.

Figure 1. Wireless infrastructure applications.

Figure 2 illustrates GF technologies in smart phones.

Figure 2. Smartphone applications.

Figure 3 illustrates GF technologies in automotive.

Figure 3. Automotive applications.

22FDX will have double the design wins this year, and GF has restarted work on 12FDX. 12FDX is being developed in Malta on a slow ramp; they aren't being pressured by customers for 12FDX yet. They have had two new $1 billion opportunities in the last year for 45RFSOI in Dresden. They think their embedded MRAM solution is more flexible than other suppliers' and they have started to get design wins on MRAM and also mmWave.

After years of questions about GF’s long-term survival they appear to be carving out a sustainable position in some key markets.

 

 

 

 


Mentor Highlights HLS Customer Use in Automotive Applications
by Bernard Murphy on 07-30-2019 at 6:00 am

I've talked before about Mentor's work in high-level synthesis (HLS) and machine learning (ML). An important advantage of HLS in these applications is its ability to very quickly adapt and optimize architecture and verify an implementation to an objective in a highly dynamic domain. Designs for automotive applications – for example an intelligent imaging pipeline such as you might find for object detection in a forward-facing sensor – present all of these challenges.

Evolving Demands in Automotive Design

Certainly they can be on very tight deadlines; one example mentioned below required a team to develop three designs in a year. But two other constraints are even more challenging. First, verification suites are naturally based on images, often at 4K resolution, with 8-12-bit color depth and 30 frames per second. On top of this, ML inference test suites using images of this complexity can be huge, since correct detection in these applications needs to be near-foolproof.

Finally, the ecosystem from SoC developer to module maker to auto OEM has become much more tightly coupled, especially to meet the tighter requirements of ISO26262 Part 2 and now also SOTIF (safety of the intended function, another emerging ISO standard). Part 2 and SOTIF demands have placed more burden on the value chain as a whole, from IP suppliers through SoC integrators to Tier1s and the automotive OEMs, to ensure that the final product can meet safety requirements. For example Part 2 now requires a confirmation review to “provide sufficient and convincing evidence … to the achievement of functional safety”. This is a matter of judgment, not just meeting metrics; a tier 1 or a chip maker can require additional support from lower levels to meet that objective, which means that design specs can continue to iterate until quite late in the design schedule.

Under these constraints RTL-based design flows would be impossibly challenging; there simply wouldn’t be enough time to experiment with enough architecture variations, verify over huge reference image databases and respond to and re-characterize and re-verify late-stage changes from Tier1s or OEMs.

This is where HLS shines. You can develop code in C/C++ and experiment with architectures at least an order of magnitude more efficiently than you can at RTL since these are algorithmic problems most easily represented in that format (or in MATLAB or the common ML frameworks, to which the Mentor HLS solutions can connect). You can also run verification of those giant datasets at this level, multiple orders of magnitude faster than RTL-based verification. (I believe this should even be faster than emulation since C-modeling is close to virtual prototyping which runs at near-real time performance.) And in response to late changes, you can incorporate those changes at the C-level and re-verify and re-synthesize pretty much hands-free, limiting impact on your schedule.

Case Studies

Mentor recently released a white-paper (see below) on outcomes for three of their customers using their Catapult flow for designs in the automotive imaging pipeline. Bosch, a well-known mobility Tier1, are finding it valuable to enhance their own differentiation by building their own IPs and ICs for image recognition. This was the example where a design team had to produce three designs in a year. Using the Mentor flow they were able to pull this off and deliver a 30% power reduction because they could easily experiment with and refine the architecture for power. They also commented that it will be much easier to migrate the C-based model to new designs and evolving standards than it would have been with an RTL model.

Chips and Media, a company providing hardware IP for video codecs, image processing and computer vision (CV), also used the Mentor flow to develop a new CV IP. This was their first time using the HLS flow and they ran an interesting experiment with two teams, one developing with HLS, the other with hand-coded Verilog. The Verilog team took 5 months to complete their work, with little experimentation on architecture, whereas the HLS team took 2.5 months. This was from a cold start – they had to train on the tools first, then develop the C code, synthesize and so on. Apparently they were also able to experiment quite a bit in this period.

Finally, ST are well-known for their image signal processing (ISP) products, commonly used in automotive sensors. They have seen comparable improvements in throughput for such designs, delivering (and this is pretty awe-inspiring) more than 50 different ISP designs in two years, ranging in size from 10K gates to 2 million gates. Try doing that with an RTL-based flow!

You can learn more about these user examples and more detail on the Catapult HLS flow HERE.


Virtuoso Adapts to Address Cyber Physical Systems
by Tom Simon on 07-29-2019 at 2:00 pm

LIDAR is a controversial topic, with even Elon Musk weighing in on whether it will ever be feasible for use in self-driving cars. His contention is that the sensors will remain too expensive and potentially be unreliable because of their mechanical complexity. However, each of the sensors available for autonomous driving has its strengths and weaknesses. LIDAR offers many of the advantages of camera-based sensors, plus it can work in the dark. The other advantage it has over cameras is that it can provide object speed detection.

At DAC I had a chance to talk to Ian Dennison, Senior Group Director at Cadence, about innovations occurring in sensor technology and their integration into cyber physical systems. For instance, there are potential developments in LIDAR technology that could eliminate the need for mechanical elements, replacing them with a transmitter optical phase array. According to Ian, a major roadblock for this kind of development was the difficulty of combining optical and electronic design and analysis into a single integrated platform.

Beyond the elimination of mechanical elements, electro-optical design can help expand the application areas of a given technology.  It is well understood that LIDAR is not suitable for close range sensing. For automotive applications an accuracy of 10 cm is fine. However, for industrial robotics this will not suffice. Ian believes that with more accurate laser modulation this resolution could be improved. One method could be adding additional electro-optical elements to create a PLL.

Cadence has been working hard on developing a photonics solution that extends design capabilities to solve problems that are holding back what Ian describes as a gold rush in cost reductions for sensor systems. Cadence has established partnerships with Lumerical, Coventor and Mathworks to develop Virtuoso integrations that can accelerate design and integration of these systems.

Cadence has developed features and products to facilitate these integrations. An excellent example of this is their CurvyCore for creating curvilinear structures in Virtuoso. They have a SKILL API that allows symbolic curvilinear layout and discretization. It enables waveguide creation and model property calculation. Other useful tools can be added through SKILL IPC calls.

LIDAR is not the only application that is benefiting from Cadence's enhanced Virtuoso solutions. In the RF space Cadence has announced Spectre X, which offers up to a 10X speed improvement coupled with up to 5X capacity while maintaining Spectre's golden accuracy. At high frequencies, such as 122 GHz, it is possible to include the antenna on the chip with the LNA and PA. Designs such as this need EM simulation for transmission-line accuracy. Cadence has recently announced new EM solver solutions that can address all elements of a 122 GHz FMCW radar system.

From my conversation with Ian, it is pretty clear that Cadence is attacking the design issues in sensor design across the board. Indeed, there was a lot to digest. Nevertheless, as designers pick and use these new features, it is sure to change the landscape in autonomous vehicles, robotics, etc. While I am often a big fan of Elon Musk, I would not bet on LIDAR remaining unfeasible for automotive applications. History is full of examples of unforeseen advances due to improving technology. If anything, the rate of change is accelerating. A full description of the many developments in the Cadence solutions for sensor system development is available on their website.