
Tensilica DNA 100 Brings the AI Inference Solution for Level 2 ADAS ECUs and Level 4 Autonomous Driving

by Randy Smith on 08-27-2019 at 10:00 am

I recently wrote about Tensilica’s HiFi DSPs, which played a significant role at Cadence’s Automotive Design Summit held on the Cadence San Jose campus at the end of July. That article focused on infotainment while briefly touching on Advanced Driver-Assistance Systems (ADAS). ADAS is NOT synonymous with autopilot. Level 2 automotive ADAS features help the driver drive more safely and complement the safety features already present in the current fleets of deployed cars, buses, and trucks. Viewed as a continuum, the technologies used in ADAS are merely one step along the way to driverless or full self-driving (FSD) vehicles. We define the progression to driverless or FSD vehicles like this:

Level 0: No automation
Level 1: Driver assistance
Level 2: Partial automation
Level 3: Conditional automation
Level 4: High automation
Level 5: Full automation

The most advanced ADAS features available in production cars today are Level 2/3. There are some early taxi-like pilot services operating now at Level 4, with limited range and use cases, subject to NHTSA guidelines. Cadence is doing a great job of making the EDA tools and IP available to build and enable these types of safe vehicles. But before we get into those details, I wanted to point out two important statements that stuck in my mind from the event: (1) security is more important than safety; and (2) several legal issues regarding autonomous vehicles have yet to be addressed.

The explanation for the higher priority of security over safety is simple. If someone with evil thoughts can take over control of your vehicle, you are no longer safe. If someone else has control of your vehicle, they can determine where it goes, or they can crash it. Or they can be very annoying and blast music at full volume while leaving your wipers running continuously. If they can control one car, why not millions all at once? Insecure autonomous driving systems would be unsafe and likely deadly. We must put security first, though safety is also critical. The crucial requirement of security is why Cadence’s relationship with Green Hills is so important. It will take software and hardware working together to implement the needed levels of security in autonomous FSD vehicles.

Autonomous vehicles are expected to reduce traffic fatalities dramatically. However, not all fatalities will be avoided. According to the World Health Organization (WHO), there were 1.25 million road traffic deaths globally in 2013. I think it would be fantastic if by 2032, through the increasing use of autonomous vehicles, we can cut that number in half. But that still means there would be over 600,000 traffic fatalities. Would we blame the car manufacturers? Would they be sued over every collision? Let’s face it, even if they reduce the number of traffic fatalities by 90%, when people get injured, they still want to blame someone. How will this work?

Sorry for the detour, but these things are important. Tensilica is enabling the drive from Level 2 ADAS up to Level 5 FSD by supplying a crucial piece of the autonomous driving system IP: artificial intelligence (AI). The Tensilica DNA 100 processor IP can handle the demands of the latest features in ADAS, such as neural network-based emotion detection using the Facial Action Coding System (FACS), gesture tracking, and occlusion detection. The number of new AI features in the pipeline for our cars is staggering. Beyond all that, Level 5 systems will have many AI processes monitoring a huge hub of sensors and communication networks/protocols while also providing navigation, mapping, and trajectory planning with a focus on safety. AI neural networks will be key to all of this, and Tensilica will be a key IP supplier of AI-based systems.

I came away from the Cadence Automotive Summit impressed with Cadence’s overall grasp of the needs of the designers of autonomous driving systems. They seemed to have expertise in every field, and partners where the required expertise is well outside the Cadence domain. My thanks to Pradeep Bardia, Cadence Product Marketing Group Director for AI, for his presentation on “Automotive – AI Processor Solutions.” And thanks also to Robert Schweiger, Cadence’s Director of Automotive Solutions, for the timely and insightful event.


Build More and Better Tests Faster

by Bernard Murphy on 08-27-2019 at 5:00 am

Breker has been in the system test synthesis game for 12 years, starting long before there was a PSS standard. Which means they probably have this figured out better than most, quite simply because they’ve seen it all and done it all. Breker is heavily involved in and aligned with the standard, of course, but it shouldn’t be surprising that they continue to leverage their experience in extensions beyond the standard. This is evident, for example, in their support for path constraints to prevent or limit creation of certain paths and to bias towards other paths during test synthesis.

The value of path constraints becomes apparent when you look at the coverage view the Breker tools generate following simulation, emulation, etc. In the image, a red action was not tested, orange was tested once, green multiple times, and blue many times. You could imagine running multiple times, randomizing each time, to see if you could hit all the actions with sufficient frequency. You really want an equivalent to constrained random, but it needs to be more than simply randomizing fields (what PSS currently provides). Path constraints provide a better way for you to steer randomization to improve hit frequency for more comprehensive coverage along paths that are meaningful to real system usage.
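Mechanically, this kind of path-biased randomization can be pictured as a weighted random walk over a graph of test actions. Here is a minimal Python sketch of the idea; the action names and weights are invented for illustration, and real PSS tooling such as Breker’s is far richer than this:

```python
import random
from collections import Counter

# Hypothetical action graph: each node is a test "action", each edge a
# legal follow-on action. The weights act like path constraints, biasing
# synthesis toward some paths over others (all names here are invented).
GRAPH = {
    "init":    [("cfg_dma", 3), ("cfg_cpu", 1)],
    "cfg_dma": [("xfer", 4), ("idle", 1)],
    "cfg_cpu": [("xfer", 1), ("idle", 2)],
    "xfer":    [("check", 1)],
    "idle":    [("check", 1)],
    "check":   [],
}

def synthesize_test(start="init"):
    """Walk the graph from start, picking successors with weighted randomness."""
    path, node = [start], start
    while GRAPH[node]:
        succs, weights = zip(*GRAPH[node])
        node = random.choices(succs, weights=weights)[0]
        path.append(node)
    return path

# A crude coverage view: count how often each action was hit over many
# synthesized tests, analogous to the red/orange/green/blue coloring.
hits = Counter()
for _ in range(1000):
    hits.update(synthesize_test())
```

Raising the weight on an edge makes paths through it more likely, so actions that the coverage view shows as red or orange can be steered toward without abandoning randomization.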

Eileen Honess, tech specialist at Breker, recently delivered a SemiWiki-hosted webinar on the objective of these flows, together with a demo. I can’t blog demos, so instead I’ll tell you what really stood out for me from this session. What Breker is doing is all about accelerating high-value test generation. Whether or not you are enamored with PSS, you certainly get the value of generating 10k high-value tests in 2 weeks, getting you to 95% coverage (because you can better control how to get to high coverage). This is versus manually creating 500 tests in 2.5 months, which only got you to 87% coverage (stats based on a real customer case).

Along similar lines, Breker supports accelerated testing through pre-packaged apps. The app concept was most notably introduced by Jasper as a method to pre-package formal verification for dedicated use-models. Breker has built on that idea in four domains: cache coherency checking, power verification, security, and an ARMv8 app focused on typical processor test issues (there is a rumor of a RISC-V app on the horizon also). I’ll touch on a couple of these. For cache coherency checking, you’ve probably heard that formal methods are extensively used. These play an important role in verifying control logic sequencing, but that alone isn’t enough. Directed/randomized testing also plays a big role in verifying inter-operation of the caches and the bus together with the controller. This class of testing is a perfect application for the Breker approach, which provides a mechanism to capture the potential graph of actions, then generate randomized tests over that graph. This has to be substantially easier than building such tests from scratch in UVM. It should also be much easier to review how effectively you are covering the problem.

Verifying power management is another area where an app can accelerate test development, and it is also a natural for a PSS approach. After all, the power state machine is just that: a state machine. It’s a graph defining the various possible states and what drives transitions between those states. You can build that model in UVM, but why not start with a model already built from the UPF power state table?
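As a rough illustration of why this is such a natural fit, here is a toy Python sketch of a power state machine derived from a UPF-style power state table. The states and transitions are invented for illustration; a real PST, and the tests a PSS tool would synthesize from it, would be far more involved:

```python
import random

# Toy power-state table in the spirit of a UPF PST: each state maps to
# the set of states it may legally transition to (illustrative only).
TRANSITIONS = {
    "OFF":    ["BOOT"],
    "BOOT":   ["ACTIVE"],
    "ACTIVE": ["SLEEP", "OFF"],
    "SLEEP":  ["ACTIVE", "OFF"],
}

def legal(seq):
    """Check that a sequence of power states uses only legal transitions."""
    return all(b in TRANSITIONS[a] for a, b in zip(seq, seq[1:]))

def random_power_test(length=6, start="OFF"):
    """Generate a random legal walk through the power state machine, the
    kind of sequence a PSS-style tool could synthesize from the PST."""
    seq = [start]
    for _ in range(length):
        seq.append(random.choice(TRANSITIONS[seq[-1]]))
    return seq
```

Because the graph comes straight from the power state table, every generated sequence is legal by construction, which is exactly the property you would otherwise have to encode by hand in a UVM sequence.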

That Breker is enjoying success with this approach is apparent in their customer list, which includes companies like Broadcom, Altera, IBM, and Huawei, and 8 out of the top 10 semis, I’m told. They’re also proven with pretty much every tool I know of in the functional verification flow:

    • Simulation: Questa, VCS and Xcelium
    • Emulation: Palladium, Veloce and ZeBu
    • FPGA prototyping: HAPS, Virtualizer and custom FPGA boards (curiously no mention of Protium, maybe an oversight)
    • Virtual prototyping: Imperas and QEMU
    • Debuggers: Verdi naturally plus the other standard platforms
    • VIP: Synopsys, Cadence and Avery

So if you’re into PSS, you should check these guys out because they have a pretty advanced solution. If you’re a PSS sceptic, you should check them out because you can build more tests faster to get to higher coverage. Just ignore that it’s PSS-based. You can watch the webinar HERE for more details.

Also Read

Taking the Pain out of UVM

WEBINAR: Eliminating Hybrid Verification Barriers Through Test Suite Synthesis

Breker on PSS and UVM


Achieving Functional Safety through a Certified Flow

by Daniel Payne on 08-26-2019 at 10:00 am

Methodics: Tool and process certification

Functional safety (FuSa) is a big deal, especially when driving a car. My beloved 1998 Acura RL recently exhibited a strange behavior at 239K miles: after making a turn, the steering wheel would stay tilted in the direction of the last turn instead of straightening out. The auto mechanic pinpointed the failure to the ball joints, so I ended up selling the vehicle to someone who was willing to make the repairs. Today our cars are filled with complex electronics, semiconductor IP, firmware, software, sensors, actuators, and a variety of propulsion systems: internal combustion engines, hybrids, and electric motors. How should we meet functional safety requirements for industries like automotive, medical, and other life-critical domains?

Complying with functional safety requirements is an emerging field, and in the automotive industry we see the quest for Advanced Driver Assistance Systems (ADAS) inch towards autonomous vehicles. A recent White Paper written by Vadim Iofis of Methodics brought me up to speed on the challenges, and their approach of using an IP-centric flow to reach compliance by tracing down to the hardware and software component levels.

If each component in a system is designed and developed with certified tools and processes, then it becomes functional safety compliant. Here are examples of tools that are certified for FuSa compliance:

Your functional safety team ensures that each new version of a tool is certified before it’s used in a FuSa-compliant design and development process. Even the design and development process itself becomes FuSa compliant, adopting rules for approved tools and listing out the steps for verifying components so that requirements and specifications are met. Both tools and process are version controlled. Workflows are described using the Business Process Model and Notation (BPMN) 2.0 standard.

Even when your team has defined FuSa-compliant tools, processes and workflows, there’s still the issue of tracing the certifications down to each hardware block and software component. Where does this traceability come from?

Methodics has an approach to this traceability concern, and that is by defining each project as an IP hierarchy, where IP can be either a hardware design block or a software component. The Methodics Percipient tool provides IP management, versioning IP, and linking an IP object to hardware or software. Each IP also has metadata to enable traceability of certificates, process documentation and workflows.

Project Example

OK, time to get specific and look at a typical ADAS application for an Automatic Collision Avoidance System (ACAS), breaking up the project into hardware and software components as shown below.

Shown in yellow are Containers, a way to use hierarchy with groupings of other IP blocks. The green blocks represent hardware blocks, the blue blocks are software components, and finally the purple blocks are the FuSa artifacts for each component, certifying that the design and development process meets your team’s compliance standards.

The Sensor Package on the left-hand side has sensors, hardware, and FuSa artifacts (tool certifications and a Design Workflow Model as a BPMN 2.0 diagram). The ACA Module on the right-hand side is all software components and FuSa artifacts, and you could use Git or Perforce as the code repository.

Using the Percipient tool it only takes a few steps to create the Sensor Package hierarchy.

You can define hierarchy for both hardware blocks and artifacts. Each artifact has certification IPs attached; in this case they came from Word documents. Your team can update each new certification version, add new project components, and certify each component.
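To make the traceability idea concrete, here is a minimal Python sketch of an IP hierarchy with FuSa artifacts attached and a query over it. The class and field names are invented for illustration; Percipient’s actual data model will differ:

```python
from dataclasses import dataclass, field

# Illustrative model of an IP hierarchy in the spirit of the Methodics
# flow: every node is an "IP", whether container, hardware block,
# software component, or FuSa artifact (all names here are invented).
@dataclass
class IP:
    name: str
    kind: str          # "container", "hardware", "software", or "artifact"
    version: str = "1.0"
    children: list = field(default_factory=list)

    def artifacts(self):
        """Recursively collect FuSa artifacts for a traceability report."""
        found = [c for c in self.children if c.kind == "artifact"]
        for c in self.children:
            found.extend(c.artifacts())
        return found

# A toy Sensor Package: a container holding a hardware block with its
# tool certification, plus a workflow artifact at the package level.
sensor_pkg = IP("SensorPackage", "container", children=[
    IP("RadarSensor", "hardware", children=[
        IP("RadarToolCert.docx", "artifact")]),
    IP("DesignWorkflow.bpmn", "artifact"),
])
```

A functional safety manager could then walk any project node and enumerate the certifications beneath it, which is the essence of tracing compliance down to each hardware block and software component.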

Summary

It’s much easier to reach FuSa compliance in a system design by using an automated approach for hierarchy and traceability like that found in the Percipient tool from Methodics. Both hardware and software components are linked with their FuSa artifacts as proof of compliance. The functional safety manager on your team can now inspect the version history and know which releases belong to which tool certification, even see when and where anything was changed.

Why use manual, error-prone methods when a vendor like Methodics has helped automate so much of the FuSa process? Request the full 11-page White Paper here.

Related Blogs


What Fairchild, AMD, Actel and Jurassic Park Have in Common

by John East on 08-26-2019 at 6:00 am

Me.

In the early stories of this series (weeks three through six), I talked about what I believe were the three seminal events in the history of the semiconductor industry: Shockley’s invention of the transistor, Noyce’s invention of the integrated circuit, and Intel’s 1971 introductions of the first commercially successful DRAM, EPROM, and microprocessor. I was taking some poetic license when I talked about Intel because that wasn’t really an “event”. It was a series of events that took place over a year or two.

(It should be said here that, as is true in pretty much all inventions and product introductions, if the whole story is told,  scores of people and companies would join in getting credit.  But — I have to stay within my word limit or Dan won’t print my stories.)

Given that I’ve already gotten away with a loose definition of “event”, I’m now going to talk about a fourth seminal “event”:  A decade long event  — one that changed the industry and left us old guys scratching our heads wondering what had just happened.

When I first joined AMD the marching orders were: “Building blocks of ever increasing complexity!!” It made sense. For most of the ‘60s the IC market comprised TTL “small scale integration” products — gates and flip flops. (See week #8 of this series: Texas Instruments and the TTL Wars.) We called those “SSI”. In the late ‘60s “Medium Scale Integration” (MSI) emerged. Companies that bought gates and flip flops invariably used them to make a generally accepted array of larger elements: multiplexors, decoders, register files, adders, shift registers, counters, and ALUs were at the top of that list. Those functions became the industry’s standard MSI products. Every company that was designing electronic equipment would use products out of that group. So, as we progressed along the Moore’s Law curve, we knew what to do with the additional gates. Instead of selling gates and flip flops, we sold decoders and multiplexors. More progress along the density curve led to “Large Scale Integration” (LSI). AMD’s building block concept called for defining generic blocks that could be used by a variety of different customers in a variety of different applications. When AMD got to the Large Scale Integration point on the Moore’s Law curve, they took their best shot with the AMD 2901. The 2901 was a four-bit-wide slice of a processor’s Arithmetic-Logic Unit. It hit the bull’s eye! Thanks, John Mick. Thanks, Tom Wong! Thanks, John Springer!!

The 2901 was a big success commercially.  Just as was true of the MSI building blocks, you could pretty much use the 2901 in any kind of electronic system.  But, would it always be that easy? Were there plenty of products like the 2901 at hand? And  — looking ahead —  was there a commonly accepted set of VLSI  (Very large scale integration)  products that could be used anywhere when we got to that point on the curve?  Not really.  There were a few products, but it was a much shorter list.  Microprocessors and memories could be used everywhere.  They were VLSI.  Gate arrays as well —  but gate arrays were custom products by the time they shipped. What else?  Not much.  “Building blocks of ever increasing complexity” was reaching the end of its rope.  As time passed and IC densities increased, customers ceased wanting to use building blocks to build their own systems.   When densities got to the point that the entire system could be put on a single chip, then that’s what the customers wanted.  Why go through the hassle of designing the system when you could just go out and buy it?

So the game had changed. From the IC house’s point of view, instead of designing general purpose building blocks that could be used for pretty much any system, IC houses had to pick a particular system and then design chips specifically for that. One chip for switches. One for ISDN. One for ethernet. One for FDDI. Etc. That was problematic on two fronts. First, which systems would you pick? If you picked FDDI and ISDN (as I did) and those markets never went anywhere (as those two didn’t), then you’d eat the development costs and your sales would languish because you wouldn’t have the products that the customers wanted. Second, how would you learn enough about the specialized market you were going after to be able to define and design a good product? After all, networking experts, for example, by and large preferred working at networking companies, not at chip companies.

From the customers’ point of view, by the time 1988 rolled around they were demanding microprocessors,  memories,  and gate arrays.  Customers could use gate arrays to design their own LSI chips.  They liked that!! And virtually all systems use memories and microprocessors.  The Japanese saw this coming. They coined the term “master slice”  (for gate arrays).  Then they embarked on an effort to conquer the market for the three M’s:  Memories,  Microprocessors,  and Master Slices.  It seemed for a time that they might succeed.  Basically, their plan was to dominate the business of manufacturing ICs.  The good news is that they didn’t succeed.  (The bad news is that Taiwan and Korea eventually did.)

Coincident with this was the advent of the foundry. To really understand this, you should read Daniel Nenni’s Fabless: The Transformation of the Semiconductor Industry. Semiconductor companies no longer had to understand Iceo, work functions, mobile ions, minority carrier lifetimes, or any of the many other time-honored problems that had existed since the days when I was a hands-on engineer. Those problems would be handled by some man or woman in Taiwan whom you had never met and would never need to meet. The world had changed. In 1980 the typical successful semiconductor CEO understood semiconductor physics, fab processing, and transistor-level circuit design. By 1990 that set of knowledge was already rendered nearly useless. By 1990 you needed to understand the architectures of very specialized, complex systems and the end markets for those systems. Were there exceptions besides the three M’s? Yes: a few analog functions and programmable logic.

Today I’m spending quite a bit of my time with semiconductor start-ups. Over the past three years I’ve been working with Silicon Catalyst, an incubator that works exclusively with semiconductor companies. That has allowed me to get a good look at dozens of start-ups in the field. My take? Successful semiconductor CEOs today often know little or nothing about semiconductors per se. Moreover, they don’t even care!! Nor should they!!! They know about the market they’re trying to serve. They understand the hardware, firmware, software, and applications related to their chosen market. Building blocks of ever increasing complexity went out the window a long time ago. Today the call is for a complete solution to an existing problem. Let TSMC worry about how to make the things. The change in the attitudes of customers caused a corresponding change in the strategies of the traditional IC companies. In the case of AMD, the thrust moved away from “Building blocks of ever increasing complexity” to “High performance computing is transforming our lives”. It was a big change. Companies that don’t embrace change wither away. In the case of AMD, they made the changes well. The stock market now pegs their value at north of $30 billion!!!

Congratulations to Lisa Su,  her staff, and the entire AMD team!!!  Great work!!!

Next week: Joining Actel

Pictured:  A trio of dinosaurs. (There would have been four, but I couldn’t find a picture of Pete)

See the entire John East series HERE.



Tesla Stakes Insurance Claim!

by Roger C. Lanctot on 08-25-2019 at 2:00 pm

In its most recent shareholders’ meeting, Tesla CEO Elon Musk noted that the Tesla Model 3 is the highest-revenue-producing car in the U.S. and the fourth best-selling by unit volume. He also noted Tesla’s vehicles’ industry-leading efficiency and range.

Musk took the further step of emphasizing the lower cost of ownership of Tesla’s vehicles versus internal combustion engine vehicles such as the Toyota Camry or Honda Accord. What he neglected to mention was the potentially higher cost of insurance.

Tesla owners have been wrestling with escalating insurance costs as a troubling factor has emerged in the Tesla ownership experience: the challenge of obtaining timely repairs resulting from the limited availability of replacement parts and supporting service organizations. Without directly addressing the issue during the shareholder meeting, Musk noted the company’s addition of direct repair services and roadside assistance for everything from tire replacement to bumper repair. “We just did our first bumper repair,” Musk said.

This is but one contributing factor to an enduring multi-year effort on the part of Tesla to provide its own car insurance. Also in the shareholder meeting Musk mentioned that an insurance offering from the company would involve “a small acquisition and a bit of software to write.” There were no follow-up questions on either the direct repair services or the potential acquisition of an insurance company.

Tesla has partnered with State National Insurance to handle underwriting as a response to an 18% escalation in the cost of repairs for insurers such as Liberty Mutual, GEICO, and Progressive. Musk went a little further in a more recent earnings call suggesting that the company was one month away from launching its own insurance product and implicating Tesla drivers as the cause of rising insurance rates.

Said Musk:  “(The company’s insurance offering) will be much more compelling than anything else out there. We have direct knowledge of the risk profile of customers, and based on the car, and then if they want to buy a Tesla insurance, they would have to agree to not drive the car in a crazy way. Or they can, but then their insurance rates are higher.”

The comment is aligned with Musk’s frequent claims regarding the superior performance of Tesla vehicles in surviving and avoiding crashes. Musk got involved in a dispute with the National Highway Traffic Safety Administration (NHTSA) last month when he claimed, using NHTSA’s own data, that drivers and passengers were least likely to be killed or injured in a Tesla vehicle. This claim followed Musk’s assertion from two years prior, that Tesla vehicles equipped with the company’s autopilot technology were approximately 40% less likely to be subject to an insurance claim resulting from a crash.

The bottom line appears to be that Tesla’s insurance cost problem is less related to vehicle performance or driver behavior than to the logistics of replacement parts availability. Glass for cracked or shattered windshields and damage to large sunroofs appear to be among the many chokepoints for replacement parts and warranty repairs.

It is possible, though, that the frequency of claims is an issue as well but the jury is out and the data is not in. Anecdotally, reports continue to come in from around the world of Tesla vehicles operating in autopilot mode plowing into emergency and service vehicles parked on the shoulders of highways.

Nevertheless, Musk stands by his safety claims for Tesla vehicles’ ability to survive and avoid crashes. Still, the company does need to address the repair issue.

For now, it appears that Tesla intends to offer its own insurance and is taking on its own repairs. The former problem seems easier to solve than the latter. Either way, it’s a problem Tesla will have to solve for it to continue to be able to tout a lower cost of ownership relative to Toyota and Honda. If there is one thing Toyota and Honda owners can count on, in addition to having among the lowest costs of ownership, it is the broad availability of replacement parts and repair organizations.


Memory Manifesto – Examining the “new (slower) normal” of the memory industry

by Robert Maire on 08-25-2019 at 12:00 pm

  • The perfect storm that drove memory is over
  • The “new normal” goes back to older model
  • The capex & tech pendulum swings to logic

We try to examine the future of the memory segment of the semiconductor industry given the current market and technology drivers combined with historical behavior.

Passing the Baton
The memory market over the past few years has been the largest variable in the semiconductor industry and has determined the leader of the overall chip industry as well as the fortunes of the semiconductor equipment industry.

In the past, the chip industry was driven both technologically and economically by logic devices. Over the past few years, that changed to a memory-based industry, which saw a primarily memory-driven company, Samsung, surpass Intel, a logic company, in overall semiconductor revenue, even if only for a brief time. The fortunes of the semiconductor equipment industry followed suit as memory manufacturers became the largest purchasers of semiconductor equipment tools.

A new yardstick
If we use the old yardstick of Moore’s Law as a measure of technology, as it relates to planar transistor density, logic has always led the race. But if we look at true three dimensional transistor density, memory has grown at a faster rate.

In logic we have changed transistor shape and structure but remain in a primarily two-dimensional world. There are some new developments around stacking logic transistors, but nothing is yet mainstream, so logic is still stuck in a 2D world.

Will the pendulum swing back?
In our view, the fundamental drivers of the semiconductor industry that favored the memory segment over the past few years have changed such that we will not likely see the same high levels of technology or revenue growth in memory, and the industry will transition back to a more balanced, maybe even logic-driven, state. We think memory benefited from a confluence of events that came together in a “perfect storm” of growth which may not be repeated, at least not in the near term.

Waiting for Godot?
Are we waiting for a memory recovery which will never come? At least not at the previous rip-snorting pace of the memory industry.

Most importantly, we have to look at demand because without demand, there is no recovery. On the NAND side of the memory industry, the single biggest driver was the change from rotating magnetic media to SSDs. In essence, the semiconductor NAND memory industry had to build enough capacity to replace much of an already huge disk drive industry. This was a gigantic step function of demand, itself driven by NAND memory cost reduction, which catalyzed a price-elastic explosion of demand.

Although there are other growing uses of NAND, such as smart phones, AI, the cloud, etc., SSDs soak up huge capacity and were the key variable in the outsized growth.

Near term memory plateau
The problem is that now that the industry has put this monstrous capacity in place, the incremental drivers from here are not as strong as the step-function implementation of SSDs. There are lots of laptops with 128GB, 256GB, and even 1TB SSDs now that 1TB has fallen below the $100 price point. It’s not like laptop storage is going to grow in the near term to 2TB or 4TB SSDs, as there is a diminishing return of usefulness for the additional capacity.

We are also likely at a near-term plateau for NAND in smart phones. We bought a 256GB iPhone last year and have yet to use more than 64GB of storage despite installing many, many apps and taking lots of pictures. 5G doesn’t exist yet, and AI and VR have not come to smart phones yet. On top of this plateau is the fact that the smart phone market is seeing a near-term decline.

While servers and the cloud continue to suck up memory, both DRAM and NAND, laptops and end user devices are clearly slower.

Shutting down tools and lines in memory fabs
The clear and obvious response to slowing demand is to slow production. The industry has been taking capacity offline by shutting down lines and tools that were previously in production, much as OPEC stops pumping when oil prices and demand decline. This has been going on for several quarters now and is still going on, according to recent earnings conference calls.

This is a very clear indication that supply and demand in the memory market are still not in a reasonable balance despite slowing of price declines.

Turning tools off is a sign of a very desperate industry as the variable cost of making memory devices is very low as compared to the fixed (capital equipment) costs which are sitting idle.

Slow to turn off… slower to turn back on?
A lot of off-line capacity to turn on before buying new production tools

Given that the industry has spent several quarters (at least 3 or 4) turning off capacity, it’s clear that they are not going to flip the switch, turn on all that idled capacity overnight, and ruin a newly stabilized market with a flood of supply. We expect the idled capacity to come back on line slowly, perhaps at the same speed as it was taken off, over a number of quarters, assuming demand recovers enough (which is in and of itself a big assumption).

Technology buys but not capacity buys??
The other factor that many have not thought about is that there is a constant increase in supply due to technology improvements, even without bringing the idled tools back on line. Shrinking geometries, stacking more layers, etc., can increase output without a lot of new tools.

What happens if the new, slower memory growth rate can be satisfied with technology improvements alone, and we don’t need “capacity buys”?

Even if we assume we need new “capacity” related tools, the industry has a huge amount of them, already installed, waiting to be powered back on.

It could be a very, very long time before the memory makers buy semiconductor equipment for “capacity” reasons. If we assume the idled tools get turned back on at the rate at which they were turned off (which, again, is an optimistic assumption), then it will be well into 2020, if not the end of 2020 or into 2021, before we see “capacity buys” again. All this assumes good demand growth and no external factors like China, etc.

Summary:
We think the “perfect storm” of 2D to 3D conversion coupled with the SSD revolution caused an “abnormal” memory cycle that will not be repeated in the current cycle, as the demand and technology drivers for such “outsized” growth do not exist.

This will impact not only memory makers but also capital equipment suppliers, as the mix of equipment and consumers will change as we shift back to a more balanced equation between memory and logic.

The stocks
Obviously, Lam has the highest exposure to memory spending and to Samsung. Applied is not far behind, and perhaps this is the reason the company is not “calling a bottom,” is remaining conservative, and is talking more about AI and other future topics.

KLA and ASML will likely fare better, as KLA has always been more logic-driven and the move to EUV, for ASML, will continue to be driven by logic-only uses, since memory does not use EUV (for the foreseeable future).

Total capex will remain under pressure as logic/foundry cannot make up for the decline in memory spend we are seeing.

While we hope the down cycle will not be as long as the up cycle was, we have seen multi-year down cycles in the past and are likely in one now.

Time will tell… how long until the next “perfect storm”…


WEBINAR: How ASIC/SoC Rapid Prototyping Solutions Can Help You!
by Daniel Nenni on 08-23-2019 at 10:00 am

If you are considering an FPGA prototype for an ASIC or SoC as part of your verification strategy, as more and more chip designers are doing today to enhance verification coverage of complex designs, please take advantage of this webinar replay:

How ASIC/SoC Prototyping Solutions Can Help You!

Or to get a quick quote from S2C click here.

S2C has been developing FPGA prototyping platforms since 2003, and, over the years, they have figured out what users want for a complete FPGA prototyping solution.

If you are new to FPGA prototyping, the S2C website provides a good overview of what to consider as you begin your project: your Application type, your Design Stage, and your Design Classification.  If you are an experienced FPGA prototype user and you know what you need, then you can skip the preliminaries and go straight to the Products and Services tabs.  Here you will find a comprehensive display of the hardware and software products that S2C has available to support your FPGA prototyping project, along with custom FPGA prototyping and verification services.

S2C supports both Xilinx and Intel FPGAs, as well as different size FPGAs from each of these two leading FPGA providers.  To further support a wide range of design complexity options, S2C offers Single, Dual, and Quad FPGA platforms.  For Enterprise-class prototyping, S2C offers the Prodigy Cloud Cube, which supports up to eight Quad Logic Modules with a total of up to 32 FPGAs.

S2C prototyping platforms are available in two configurations: Logic Modules and Logic Systems.  Logic Modules take a “component” approach to the FPGA hardware, with a separate FPGA Module and Power Supply.  Logic Systems are a newer, integrated, modular configuration that includes up to four FPGA Modules, a Power Control Module, and the Power Supply, all in a single low-profile enclosure.

S2C’s “complete FPGA prototyping solution” includes additional hardware and software products that complement their FPGA Logic Modules and Logic Systems:

  • Prodigy PlayerPro Software
    • Prodigy PlayerPro Runtime (Included)
    • Prodigy PlayerPro Compile (Optional)
  • Prodigy ProtoBridge (Optional)
  • Prodigy Multi-Debug Module (Optional)
  • Prodigy Prototype IP Daughter Cards (Optional)

PlayerPro Runtime software is included with S2C’s FPGA hardware; it configures the FPGA prototype, provides remote system management, and facilitates debug setup.  PlayerPro Compile software is an optional integrated GUI environment and Tcl interface that enables users to compile and partition a design into multiple FPGAs and generate the individual FPGA bit files.  If a single FPGA is all you need, PlayerPro Runtime provides everything required to set up, run, and debug your FPGA prototype.

ProtoBridge is an out-of-the-box hardware and software solution for applying large amounts of test data, in the form of bus traffic, communications traffic, video images, etc., to the FPGA prototype from your host computer.  ProtoBridge includes AXI master/slave and PCI bridge modules that are compiled and loaded into the FPGA with the prototype design, physical PCI cable hardware that supports up to 1GB/s transfers, PCI driver software, and a set of C-API function calls to drive AXI bus transactions from the host computer.
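To illustrate the general idea of host-driven bus traffic, here is a toy Python sketch of how a host buffer might be split into AXI-style bursts before being pushed across a bridge like this. The function name and parameters are hypothetical illustrations of the concept, not the actual ProtoBridge C-API:

```python
# Toy model of host-side streaming through a ProtoBridge-like AXI bridge.
# All names and sizes here are hypothetical; the real product exposes a
# C-API, which is not shown.
def split_into_bursts(data: bytes, beat_bytes: int = 8, max_beats: int = 16):
    """Split a host buffer into AXI-style bursts of at most max_beats beats."""
    burst_bytes = beat_bytes * max_beats  # 128 bytes per burst here
    return [data[i:i + burst_bytes] for i in range(0, len(data), burst_bytes)]

stimulus = bytes(1000)            # 1000 bytes of test data
bursts = split_into_bursts(stimulus)
print(len(bursts))                # 8 bursts; the last is a partial 104 bytes
```

The host-side driver then issues one bus transaction per burst, which is why throughput of the physical link (up to 1GB/s here) bounds how fast stimulus can reach the prototype.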

The Multi-Debug Module (MDM) supports very wide and very deep test-data traces, up to 32K signals per FPGA (8 groups of 4K probes) without recompiling, and can store 8GB of waveform data without consuming user memory, since the MDM hardware includes its own. It offers a rich selection of trigger event conditions and the unique ability to view waveforms from multiple FPGAs in a single viewing window.

Prototype Ready IP (a.k.a. “Daughter Cards”) rounds out S2C’s complete solutions for fast, reliable FPGA prototype project ramp-up.  With more than 80 Daughter Cards, S2C offers a rich off-the-shelf library of I/O interfaces (USB, PCI, GPIO, etc.), Clock Modules, Multimedia Interfaces (MIPI, HDMI, etc.), Memory Modules, ARM and Zynq Interface Modules, and Cable Adapters.  S2C’s Daughter Cards attach to the FPGA modules using high-quality, high pin-count connectors for quick, reliable, Tinkertoy-like assembly.

Then, if you have some unique FPGA prototyping requirement that is not met by S2C’s off-the-shelf products, they will also entertain discussions for joint development of custom Daughter Cards, FPGA Boards, and other FPGA prototyping services.

So, if you are considering FPGA prototyping for the first time, or you are looking to upgrade your FPGA prototyping capabilities for more complex designs, please watch this webinar replay and find out everything you need for FPGA prototyping:

WEBINAR Replay: How ASIC/SoC Prototyping Solutions Can Help You!

Or to get a quick quote from S2C click here.


Chapter 7 – Competitive Dynamics in the Electronic Design Automation Industry
by Wally Rhines on 08-23-2019 at 6:00 am

Electronic design automation, or EDA, became the term used for computer software and hardware developed to aid in the design and verification of electronics, from integrated circuits to printed circuit boards to the integrated electronics of large systems like planes, trains, and cars. As the EDA industry evolved, certain dynamics of change became apparent.

Three large companies have led the EDA industry in each of its eras of growth. Computervision, Calma, and Applicon were the three largest engineering workstation companies of the 1970s.  They provided special-purpose computer workstations for designers to capture the layout of integrated circuits and printed circuit boards and to edit that layout until the designers were satisfied. For all three companies, much of their business came from mechanical CAD (Computer Aided Design) applications, but electrical design applications grew as a part of their revenue.  The “GDS” standards of today’s IC design came from Calma.

In the early 1980s, automation was applied to more than just the physical layout of chips and printed circuit boards.  Circuit schematics were captured and simulated on special-purpose computers, in addition to being transformed into physical layouts of “wires” connecting the circuit elements. Daisy, Mentor, and Valid took over the lead during this next decade as Calma and Computervision faded (Figure 1). Large defense, aerospace, and automotive companies selected one of these three for standardization across their diverse operations.  Over time, Valid focused primarily on printed circuit board (PCB) design, while Daisy and Mentor did both PCB and integrated circuit (IC) design. Daisy and Valid developed their own computer hardware, while Mentor was the first to “OEM” third-party hardware, adopting the Apollo workstation and developing software to run on it.  Although this triumvirate had the leading market share through much of the 1980s, the cost and resources required to develop both hardware and software dragged Daisy and Valid down while Mentor survived.  Mentor was founded in 1981. Solomon Design Automation (SDA) emerged in 1984 and transformed into Cadence in 1988.  Synopsys emerged in 1988.

Figure 1. Successive oligopolies in EDA

Since the late 1980s, Mentor, Cadence, and Synopsys have been an oligopoly with a combined market share of 75%, plus or minus 5%, for most of the 1990s and the next decade.  More recently, that percentage has increased to nearly 85%.  While Mentor, Cadence, and Synopsys had 75%, the other 25% was shared by dozens of smaller companies (Figure 2). Mentor accelerated its market share gains after its acquisition by Siemens in 2017 and continued to grow much faster than the market in 2018.

Figure 2.  “Big 3” oligopoly with the largest combined market share

While three companies dominated the EDA business through most of its history, the products making up the revenue of the industry were diverse. GSEDA, one of the leading statistics organizations tracking the industry, reports revenue for 65 different types of products (Figure 3). Forty of these segments generated $1 million or more of revenue annually. One would think it would be very difficult for dozens of small companies with very specialized EDA products to survive when three big companies dominate. The big companies, however, dominate the big market segments, and the little companies dominate the little market segments.  From time to time, little companies are acquired by the big ones.

Overall profitability for the industry remains high because, within any one product category, there is a dominant supplier. Switching costs for a designer to move between EDA suppliers are very high, given the infrastructure of connected utilities and the extensive familiarization required to adopt a specific vendor’s software for one of the design specializations.  In the forty largest segments of EDA, the largest supplier in each category has a 67% market share on average.  Almost no product segment has a leading supplier with less than 40% market share in that segment (Figure 4).  As a result, an EDA company with two thirds of the market in any given segment can spend far more on R&D and support in that segment than its competitors.  This gives rise to stability.  Engineers are reluctant to change the design software they use because they are familiar with the intricacies of each tool.  Since one EDA supplier usually has a commanding market share in each tool category, that company tends to become the de facto supplier of the tool for that particular application. High switching costs drive stability and profit for the EDA industry, and most market share gains come from acquisitions.

Figure 3. 65 Product segments tracked in EDA. Big companies dominate the big segments and little companies dominate the little segments

Figure 4.  Largest product categories of EDA have a #1 supplier with 70% market share.  Minimum market share is about 40%

Companies that use the tools have the task of integrating design flows made up of different vendors’ tools.  Although sometimes difficult (when EDA suppliers make it so), this integration is worth the effort to have a “best in class” design flow made up of best-in-class tools. De facto standards abound, and most users find that life is too short to use the tool that few others use.  For decades, Synopsys has been the de facto logic synthesis supplier, Cadence for detailed physical layout, Mentor for physical verification, and so forth.

Figure 7 of Chapter 2 shows that EDA industry revenue has been 2% of semiconductor industry revenue for over 25 years. Why doesn’t it increase as needs and applications grow?  Or why doesn’t it shrink when R&D cost reduction becomes necessary in the semiconductor industry?  First, semiconductor industry R&D has been nearly constant at about 14% of revenue for over thirty years.  During the 1980s, EDA software costs rose to two points of that 14% by reducing other R&D costs such as labor.  Ever since then, EDA budgets have been set so that they average one seventh of the R&D expense of semiconductor companies, or 2% of total revenue.  I’m convinced that salespeople for the EDA industry work with their semiconductor customers to provide them with the software they need each year, even in times of semiconductor recessions, so that the average spending can stay within the budget.  As was discussed in Chapter 2, increasing the percent of revenue spent on EDA would require a reduction in some other expense.  Rather than do that, the semiconductor industry has unconsciously kept all suppliers on the same learning curve, parallel to the learning curve for semiconductor revenue. EDA software cost per transistor is decreasing approximately 30% per year, just as semiconductor revenue per transistor does.
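The budget arithmetic above is easy to verify: one seventh of a 14% R&D share works out to 2% of revenue.

```python
# Checking the budget arithmetic in the text: if semiconductor R&D runs at
# about 14% of revenue and EDA takes one seventh of that R&D budget, then
# EDA spend comes out to 2% of semiconductor revenue.
rd_share_of_revenue = 0.14
eda_share_of_rd = 1 / 7
eda_share_of_revenue = rd_share_of_revenue * eda_share_of_rd
print(f"{eda_share_of_revenue:.0%}")  # 2%
```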

What has all this automation done for the semiconductor industry?  Figure 13 of Chapter 2 shows the productivity growth per engineer.  The number of transistors manufactured each year per electronic engineer has increased five orders of magnitude since 1985.  I can’t think of another industry that has produced that level of productivity growth.
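A quick back-of-envelope calculation shows just how extreme that is. Assuming a span of roughly 33 years, from 1985 to about 2018 (the endpoint is my assumption), five orders of magnitude implies a compound annual growth rate of around 42%:

```python
# Back-of-envelope: five orders of magnitude (100,000x) of per-engineer
# productivity growth since 1985 implies a remarkable compound annual rate.
# The 1985-to-2018 span is an assumed endpoint, not stated in the text.
growth_factor = 1e5
years = 2018 - 1985  # 33 years
cagr = growth_factor ** (1 / years) - 1
print(f"{cagr:.0%} per year")  # roughly 42% per year
```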

Read the completed series


Speeding up Circuit Simulation using a GPU Approach
by Daniel Payne on 08-22-2019 at 10:00 am

ALPS-GT

The old adage that “Time is Money” certainly rings true in the semiconductor world, where IC designers are challenged to get their new designs to market quickly, and correctly in the first spin of silicon. Circuit designers work at the transistor level, and circuit simulation is one of the most time-consuming tasks for SPICE tools, so any relief is quite welcome. There is one caveat, though: engineers need fast and correct answers, not fast incorrect answers, so accuracy is a hard requirement.

Yes, there is a category of Fast SPICE simulators out there; however, they tend to work best for mostly digital circuits. So what choices do you have for the most challenging analog circuits?

One promising new development for SPICE circuit simulation is running jobs on a GPU instead of a general-purpose CPU. I attended a webinar this month, presented by Chen Zhao of Empyrean, where he talked about their approach to GPU-powered SPICE.

Most EDA software companies are located in Silicon Valley, Austin, Boston, Europe or Japan, however Empyrean started out in Beijing, China back in 2009 and has grown to some 300 people. I’ve seen them at DAC for the last couple of years, and they’re becoming more visible in the US with an office in San Jose.

The challenges for analog simulation are well known: FinFET devices have complex models that evaluate slowly, there are more parasitics with each new interconnect layer, process variations require more simulations, and all IP blocks must be verified.

The circuit simulator from Empyrean is called ALPS, an acronym for Accurate Large-capacity Parallel SPICE. With ALPS, they created a full-SPICE-accurate simulator that is 3X to 5X faster, has a capacity of 100M elements, and has been silicon-proven down to 7nm. Reaching this speed required a new approach to solving the matrices used in SPICE, a technology they call the Smart Matrix Solver (SMS).
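To see why the matrix solver is the target of this optimization, here is a minimal Python sketch of the nodal-analysis linear solve, G v = i, that a circuit simulator performs over and over, at every Newton iteration of every timestep. This is a textbook illustration of the kernel being accelerated, not Empyrean’s SMS algorithm:

```python
# Conceptual sketch of the linear-solve kernel at the heart of SPICE: nodal
# analysis builds a conductance matrix G and a current vector i, then solves
# G v = i for the node voltages. This tiny dense Gaussian elimination (no
# pivoting, demo only) stands in for the sparse solvers real simulators use.
def solve(G, i):
    n = len(G)
    A = [list(row) + [rhs] for row, rhs in zip(G, i)]  # augmented matrix
    for k in range(n):                       # forward elimination
        for r in range(k + 1, n):
            f = A[r][k] / A[k][k]
            for c in range(k, n + 1):
                A[r][c] -= f * A[k][c]
    v = [0.0] * n
    for r in range(n - 1, -1, -1):           # back substitution
        v[r] = (A[r][n] - sum(A[r][c] * v[c] for c in range(r + 1, n))) / A[r][r]
    return v

# Example: 1A source into node 1, a 1-ohm resistor from node 1 to node 2,
# and a 2-ohm resistor from node 2 to ground.
G = [[1.0, -1.0],
     [-1.0, 1.5]]
i = [1.0, 0.0]
print(solve(G, i))  # [3.0, 2.0] volts
```

In a real simulation this solve is repeated thousands to millions of times on matrices with millions of rows, which is why a faster (and GPU-parallel) matrix solver translates directly into faster circuit simulation.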

GPUs have a massively parallel architecture, so Empyrean created a version of their ALPS simulator called ALPS-GT to harness the GPU, which then provides up to a 10X speed improvement for circuit simulation run times.

With a 10X speed improvement you can now reach your tapeout goal more quickly, simulate more PVT corners and even simulate scenarios that weren’t feasible with older, slower simulators.

ALPS-GT accepts netlists in HSPICE and Spectre formats, handles all of the popular model files, accepts Verilog-A, and can even co-simulate with 3rd party Verilog simulators. Output file formats are the industry standard tr0 and fsdb, so you can keep using your favorite viewers.

The 10X speed improvement in ALPS-GT comes from the Smart Matrix Solver being optimized to run on a GPU; SMS-GT solves 5X faster than the standard NVIDIA-supplied CUDA matrix solver. Chen showed examples from customer circuits where ALPS-GT was much faster than a competitor:

ALPS-GT vs competitor SPICE with 16 CPUs

As the netlist size grows and you start to add extracted parasitics, the speed difference between ALPS-GT and the competition grows even larger. Here are two examples with millions of parasitics in the netlist:

ALPS-GT vs competitor SPICE with 16 CPUs

You may have noticed in this comparison that the competitor’s SPICE tool would have taken over 100 days to complete a single simulation, which is not even practical to consider.

Summary

The need for speed, capacity, and accuracy is ever-present for SPICE circuit simulators, and the engineers at Empyrean have harnessed the capabilities of the GPU to speed up run times while maintaining SPICE accuracy. If you’d like to view the recorded webinar, here’s the link.

Related Blogs


Webinar: Using Embedded FPGA to Improve Machine Learning SOCs
by Tom Simon on 08-22-2019 at 6:00 am

By its very definition, machine learning (ML) hardware requires flexibility, and each ML application has its own fine-grained requirements. Specific hardware implementations that include specialized processing elements are often desirable for machine learning chips. At the top of the priority list is parallel processing; however, to make effective use of parallel processing units, memory and network architecture are critical. There has been an explosion of different approaches to ML hardware, including dedicated processor arrays, FPGA-based solutions, etc. Early on, it became clear that FPGAs had a lot to offer, but their use was limited by the requirement to move data on and off chip.

Embedded FPGAs offer a way to connect all the storage and computational elements of a fully programmable ML solution inside a single die. Achronix has been working on embedded FPGA solutions that are specifically tailored for ML SOCs. By properly choosing processing elements and memory configurations, dramatic improvements in logic utilization and throughput can be realized. These advantages are important in cloud applications, and even more so when the target is a mobile device.

Achronix is offering a webinar to show how embedded FPGA can become the fabric for optimized machine learning solutions. The presenter will be Achronix Senior Director Mike Fitton, who has over 25 years of experience in system architecture, algorithm development and semiconductor design in wireless, network and ML.

The webinar will show how bringing programmable hardware into an SOC design can allow fine tuning of applications and data transfer. The webinar promises to include real benchmark data showing the value of embedded FPGA versus other approaches.

The webinar will be on Thursday August 29th at 10AM PDT. This should be an interesting and informative look at significantly better ways to implement ML for data center, edge, mobile, IoT and other areas where ML is proving useful.

About Achronix Semiconductor Corporation
Achronix Semiconductor Corporation is a privately held, fabless semiconductor corporation based in Santa Clara, California, offering high-performance FPGA and embedded FPGA (eFPGA) solutions. Achronix’s history is one of pushing the boundaries in the high-performance FPGA market. Achronix offerings include programmable FPGA fabrics, discrete high-performance and high-density FPGAs with hardwired system-level blocks, datacenter and HPC hardware accelerator boards, and best-in-class EDA software supporting all Achronix products. The company has sales offices and representatives in the United States, Europe, and China, and has a research and design office in Bangalore, India.