
Synopsys and Infineon prepare for expanding AI use in automotive applications

by Tom Simon on 10-03-2019 at 10:38 am

We all know that cars use processors for many tasks, but it is easy to underestimate just how many there are in a typical modern car. Browsing through the Infineon AURIX automotive processor application guide, you can start to see just how pervasive processors are. The AURIX processors are designed specifically for automotive and industrial applications. The subsystems they are found in include:

      • doors, alarm, windows, locks, seats and mirrors
      • transmission, traction control and braking
      • airbags and safety
      • lights and blinkers
      • fuel injection, emission control and engine monitoring
      • battery management and charging for conventional and EV systems
      • infotainment
      • driver assistance and autonomous driving
      • navigation and communication

Each one of these systems requires one or more processors. Interestingly, if asked where AI might fit in, most people would automatically choose driver assistance and autonomous driving. However, with the increasing power and utility of AI systems, there are many new applications for AI-based processing in cars. Systems like engine control or traction management typically need to process huge amounts of data and perform large numbers of computations to do their jobs. Applying AI to these and other systems can dramatically improve their efficiency and even reduce the number of sensors needed. Some of these sensors are expensive, difficult to service and prone to failure.
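One concrete flavor of this sensor-reduction idea is the "virtual sensor": estimating the reading of an expensive or failure-prone sensor from cheaper signals the car already measures. The sketch below is purely illustrative, with synthetic data and a simple least-squares fit standing in for whatever model a real ECU would use:

```python
import numpy as np

# Hypothetical data: 1000 samples of three cheap sensor readings
# plus one expensive sensor we would like to eliminate.
rng = np.random.default_rng(0)
cheap = rng.normal(size=(1000, 3))
expensive = cheap @ np.array([0.5, -1.2, 0.8]) + rng.normal(scale=0.05, size=1000)

# Fit a linear "virtual sensor" that estimates the expensive reading
# from the cheap ones (least squares; a real system might use a small NN).
coeffs, *_ = np.linalg.lstsq(cheap, expensive, rcond=None)
estimate = cheap @ coeffs

# Root-mean-square error of the virtual sensor's estimate
rmse = np.sqrt(np.mean((estimate - expensive) ** 2))
print(f"virtual-sensor RMSE: {rmse:.3f}")
```

If the fit is good enough for the control task, the physical sensor and its wiring can be dropped from the bill of materials.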

 

Infineon understands the value of adding AI capabilities to their proven AURIX family of processors. They have chosen to work with Synopsys to develop a Parallel Processor Unit (PPU) for the AURIX processor line, which integrates Synopsys’ ARC® EV processor IP. The addition of the PPU will greatly enhance the AURIX processors’ real-time data processing capabilities. Since both Infineon and Synopsys are well versed in ISO 26262 and other automotive safety standards, end users can rest assured that safety and reliability will be paramount.

The PPU will support a wide range of AI algorithms, such as Recurrent Neural Networks (RNN), Multi-Layer Perceptrons (MLP), Convolutional Neural Networks (CNN), and Radial Basis Functions (RBF). AI use can be expanded to applications such as intrusion detection and system monitoring, and more new uses for AI will surely be developed in the automotive space. During the ARC Processor Summit in September, Infineon gave a presentation on “System Modelling for Real-Time Automotive Applications using Deep Learning and Complex Data Processing.”
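Of the algorithms listed, the MLP is the easiest to sketch. The snippet below is only a toy forward pass with arbitrary layer sizes, meant to show the multiply-accumulate-heavy workload a PPU would accelerate; it is not Infineon's or Synopsys' implementation:

```python
import numpy as np

def mlp_forward(x, weights, biases):
    """Forward pass of a small multi-layer perceptron (ReLU hidden layers).

    Only an illustration of the kind of workload the PPU would
    accelerate; layer sizes and activations here are arbitrary.
    """
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(0.0, x @ W + b)       # hidden layers: ReLU
    return x @ weights[-1] + biases[-1]      # linear output layer

# Hypothetical 8-input, 16-hidden, 4-output network with random weights
rng = np.random.default_rng(1)
weights = [rng.normal(size=s) for s in [(8, 16), (16, 4)]]
biases = [np.zeros(16), np.zeros(4)]
out = mlp_forward(rng.normal(size=8), weights, biases)
print(out.shape)  # (4,)
```

Every layer is dominated by matrix-vector multiply-accumulates, which is exactly what a parallel processing unit is built to do in hardware.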

By integrating the ARC EV processors in their PPU, Infineon AURIX customers will have immediate access to the comprehensive Synopsys MetaWare EV Development Toolkit for Safety, which should speed up software development for these applications.

Today we look back at cars that were developed before microprocessors and wonder at how they even worked. In the future, we’ll look back at cars built without AI in their systems and wonder how difficult it must have been to build them. This is part of a larger shift in computing in general, but the automotive segment is the tip of the arrow for applying many of these new technologies. Yet, while being on the leading edge, it is also an area where there can be no compromise on safety and reliability. This makes the automotive market a fascinating crucible for creating the systems and software that will be pervasive in the future. More information about the new AURIX PPUs powered by Synopsys DesignWare ARC EV Processor IP can be found on the Synopsys website.

 


AI Hardware Summit, Report #3: Enabling On-Device Intelligence

by Randy Smith on 10-03-2019 at 6:00 am

This is the third and final blog I have written about the recent AI Hardware Summit held at the Computer History Museum in Mountain View, CA. Day 1 of the conference was more about solutions in the data center, whereas Day 2 was primarily around solutions at the Edge. This presentation from Day 2 was given by Dr. Thomas Anderson, Head, Machine Learning and AI, Design Group at Synopsys. Thomas started his presentation with an analysis of the types of AI/ML applications that are particularly difficult to implement today and how Synopsys is helping designers solve this challenge. From there, the presentation became increasingly interesting.

When we look at some of the current and near-future AI/ML challenges, we see huge scaling issues. Thomas pointed to the massive numbers shown in the diagram above. Granted, these are challenges for the data center, but there are similar problems at the Edge as well. Next, Thomas mentioned some recent breakthrough advances in AI. He first pointed to Natural Language Processing, a problem typically solved using supervised learning, as an application that must run (on TPU2) with a 100ms response time. As another example, Generative Adversarial Networks use two neural nets – one generates images and the other analyzes them – to learn how to detect fake images. AI breakthroughs are coming at a furious pace.

By now we have all seen the progression of AI solutions in the Cloud migrate from CPU, to GPU, and now to FPGA and application-specific processors. Synopsys EDA tools have played a large role in building these solutions. Putting together these advanced technologies for computing at the Edge is just as difficult. Synopsys supports these efforts in many ways through software, tools, IP, and other advanced technologies. One example is the Synopsys DesignWare EV6x and EV7x Vision Processors. In a 16nm process, the EV6x convolutional neural network (CNN) engine delivers a power efficiency of up to 2,000 GMACs/sec/W, and the EV7x delivers an industry-leading 35 TOPS. Both processors support a comprehensive software programming environment based on existing and emerging embedded vision and neural network standards. Using these software and hardware building blocks can save a tremendous amount of time when building your AI chip. Read more about the EV Processor family here.

The presentation by Thomas went on to discuss many other tools and the IP Synopsys has available for these types of designs. I won’t go into detail here, but Platform Architect Ultra looked like a very useful tool for architectural exploration, a topic that came up repeatedly at the summit.

Rather than go into the other tools that were discussed in the presentation, I want to take a quick look at where the conversation went next – using AI in the chip design process itself. A couple of things we know AI can do well is search a large solution space and self-training. So, the question is, “Can we train machines to build ICs?”

The AI program, AlphaGo Zero, went from zero to world champion in 40 days! It did this by teaching itself. By using a technique called reinforcement learning, the program played against itself to learn how to play better. It learned very quickly. This type of learning is quite interesting since it doesn’t rely on human knowledge or experience. Applied to chip design, we might find AI solutions that are completely different from previous human-designed chips. However, chip design is a lot more difficult than Go. According to Thomas, the estimated number of states in Go is 10^360, while the number of states in a modern placement solution likely exceeds 10^90,000.

So, if the design space is so huge, how can we use AI to design chips in the near future? The likely answer is a combination of reinforcement learning with Neural Acceleration Search, focusing for now on one functional design area at a time, most likely physical design problems such as placement. Neural Acceleration Search, announced by MIT earlier this year, provides a way to speed up learning by about 200x. While it is unlikely that AI techniques will be designing entire chips from a functional specification in the next decade, we may see tremendous advances by applying AI to several chip design tasks. It is good to know that Synopsys is researching these new technology advancements from non-EDA areas to apply them to difficult EDA problems.
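To make the "search a large solution space" idea concrete, here is a toy placement problem with a greedy swap search standing in for the learned move policy an RL agent would supply. Everything here (netlist, cost function, search strategy) is invented for illustration:

```python
import itertools

# Toy placement: assign 6 cells to 6 slots on a line, minimizing total
# wirelength over a made-up netlist. Even this tiny instance has
# 6! = 720 states; real placement spaces are astronomically larger.
nets = [(0, 1), (1, 2), (2, 3), (0, 4), (4, 5)]  # pairs of connected cells

def wirelength(placement):
    # placement[i] = slot of cell i; cost = sum of slot distances per net
    return sum(abs(placement[a] - placement[b]) for a, b in nets)

placement = list(range(6))
best = wirelength(placement)
# Greedy improvement by pairwise swaps -- a stand-in for the learned
# move policy a reinforcement-learning agent would provide.
improved = True
while improved:
    improved = False
    for i, j in itertools.combinations(range(6), 2):
        placement[i], placement[j] = placement[j], placement[i]
        cost = wirelength(placement)
        if cost < best:
            best, improved = cost, True
        else:
            placement[i], placement[j] = placement[j], placement[i]
print("best wirelength:", best)  # prints: best wirelength: 5
```

An RL agent would replace the exhaustive swap loop with a policy that has learned which moves pay off, which is what makes the astronomically larger real spaces tractable.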


Debugging SoCs at the RTL, Gate and SPICE Netlist Levels

by Daniel Payne on 10-02-2019 at 10:00 am

Concept Engineering - auto schematic

Debugging an IC is never much fun because of all the file formats used, the levels of hierarchy and just the sheer design size, so when an EDA tool comes around that allows me to get my debugging done quicker, I take notice and give it a look. I was already familiar with debugging SPICE netlists using a tool called SpiceVision Pro, but hadn’t spent much time learning about Gate and RTL debugging automation tools. Perfect timing, because EDA Direct organized a webinar recently on this very topic, so I signed up and will share what I learned.

Our webinar presenter was an AE named Sujit Roy from EDA Direct, and he has plenty of hands-on experience debugging designs at the RTL, Gate and SPICE netlist levels. The four products from Concept Engineering that were discussed and shown included:

  • RTLvision Pro – RTL debugging (Verilog, SystemVerilog, VHDL)
  • GateVision Pro – gate-level netlist debugging: Verilog, EDIF 2.0.0, LEF/DEF, Liberty, VCD, SDF
  • SpiceVision Pro – SPICE netlist debugging: HSPICE, Spectre, CDL, Calibre, Eldo, SPICE, SPEF/DSPF
  • StarVision Pro – All three of the previous tools, combined

By debugging, I mean the act of reading in netlist files, then graphically viewing, filtering and examining portions of a design throughout its hierarchy to understand the connectivity. Sujit first demonstrated RTLvision Pro by reading a digital design from Open Cores, then traversing the hierarchy using a tree widget in the left pane, while showing the auto-generated schematic in the right pane:

Sure, using a text editor you can view an RTL design, but you won’t understand the connectivity or be able to trace a signal path as quickly as a graphical tool. In the above diagram a net called RXD was selected and highlighted, in order to understand which block drives RXD, and which blocks read RXD. The tool allowed us to create a cone of logic starting from RXD and going any direction we wanted to explore.
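A minimal sketch of what such a cone trace does under the hood: walk the driver relation backwards from a net and collect every contributing cell. The toy netlist and names below (including RXD's driver) are invented; StarVision's actual data model is of course far richer:

```python
# net -> (driving cell, nets that cell reads); a tiny invented netlist
drivers = {
    "RXD":  ("ff_rxd", ["D_in", "Clock"]),
    "D_in": ("and_g1", ["A", "B"]),
    "A":    ("inv_u2", ["Reset"]),
}

def fanin_cone(net, seen=None):
    """Return the set of cells in the fan-in cone of `net`."""
    if seen is None:
        seen = set()
    if net not in drivers:
        return seen            # primary input: no driver in this netlist
    cell, inputs = drivers[net]
    if cell in seen:
        return seen            # already visited (handles reconvergence)
    seen.add(cell)
    for n in inputs:
        fanin_cone(n, seen)
    return seen

print(sorted(fanin_cone("RXD")))  # ['and_g1', 'ff_rxd', 'inv_u2']
```

The fan-out direction is the same traversal over the inverse relation, which is why a schematic debugger can grow a cone in whichever direction you explore.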

Logic Cone

RXD was driven by a FF cell, so we looked inside the FF cell and could view the source code in another pane. The Clock signal to the FF was selected and we could view all of the cells that used Clock, while hiding other signals to improve clarity.

Source Code

The drop-down menus had plenty of useful commands, and you can even run any of the 100 or so pre-built Tcl scripts. CAD groups can extend or even modify these Tcl scripts to automate design-specific debugging. SPICE netlists can be loaded, viewed and traversed:

SPICE Netlist

During the demo we saw a mixed-signal design that had an Analog block called Parity, along with a digital block called CPU:

Mixed-Signal Design

Even netlists that have extracted parasitics from a SPEF file can be quickly loaded, viewed and followed:

Parasitic Netlist

We saw even more cool, time-saving features of the StarVision tool in action on real designs, but I’ve covered the high-level features.

Summary

Most design engineers use simulators and some formal tools, so adding a tool like StarVision Pro will complement your debug flow and make your debug process go much quicker, because now you can really see where all of the signals go in a design; how all of the cells, blocks and modules are connected in the hierarchy; and what comes before and after any signal. Design capacity is more than adequate: customer designs of 20B gates have been run.

You can start out using StarVision Pro in GUI mode first, then start to run batch scripts for anything repetitive.

To view the recorded webinar, follow this link, or contact EDA Direct to schedule a demo of StarVision Pro to learn more.

Related Blogs


Acceleration in a Heterogenous Compute Environment

by Bernard Murphy on 10-02-2019 at 5:00 am

Acceleration

Heterogenous compute isn’t a new concept. We’ve had it in phones and datacenters for quite a while – CPUs complemented by GPUs, DSPs and perhaps other specialized processors. But each of these compute engines has a very specific role, each driven by its own software (or training in the case of AI accelerators). You write software for the CPU, you write different software for the GPU and so on. Which makes sense, but it’s not general-purpose acceleration for a unified code-set. Could there be an equivalent in heterogenous compute to the multi-threading we use every day in multi-core compute?

Of course we need to think outside the box on how this might work; you can’t just drop general code on a mixed architecture and expect acceleration, any more than you can drop general code on a multi-core system with similar expectations. But given sufficient imagination, it appears the answer is yes. I recently came across a company called CacheQ which claims to provide a compelling answer in this space.

The company was founded just last year and is headed by a couple of senior FPGA guys. Clay Johnson (CEO) was VP of a Xilinx BU for a long time before going on to lead security ventures, and Dave Bennett (CTO) has a similar background, leading software dev at Xilinx for many years before again joining Clay in the security biz and now in CacheQ. Funding comes from Social Capital (amount not disclosed).

Given their background, it’s not surprising they turned first to FPGAs as a resource for acceleration. FPGAs are becoming more common as resources in datacenters (just look at Microsoft Azure) and in a lot of edge applications, I’m guessing for flexibility, easy field updates, and their appeal for relatively low-volume applications. They are also starting to appear as embedded IP inside SoCs.

Back to the goal. CacheQ’s objective is to let software developers start with C-code (or object code) and to be able to significantly accelerate that code by leveraging a combination of CPU and FPGA resources while speeding and simplifying implementation and partitioning between processor and FPGA. At this point I started to wonder if this was some kind of high-level synthesis (HLS) play. It isn’t (the object code option is perhaps a hint). They position their product as an ultravisor (think amped-up hypervisor). They build a virtual machine around an application which then goes through an optimization and partitioning phase, then into code generation and mapping across any of several possible targets: x86 servers or desktops, FPGA accelerators, embedded Arm devices or heterogenous SoCs.

Still, doesn’t mapping onto FPGAs require going through RTL with all its concomitant challenges? Here the company provides some detail though they are understandably cagey about providing too much. So here is what I can tell you. Part of what they’re doing is unrolling complex loops in the code and mapping these, with pipelining, into the FPGA. They also automatically create the stack to manage CPU to FPGA communication and they manage memory allocation transparently across these domains.
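Loop unrolling is easy to picture in software terms. The sketch below shows a dot product unrolled by a factor of four; in an FPGA mapping the four multiplies per iteration would run concurrently and the additions would be pipelined. This is a conceptual illustration, not CacheQ's actual transformation:

```python
def dot(a, b):
    """Plain dot product: one multiply-accumulate per iteration."""
    total = 0.0
    for i in range(len(a)):
        total += a[i] * b[i]
    return total

def dot_unrolled4(a, b):
    """Same dot product, unrolled by 4. An FPGA mapping would evaluate
    the four products concurrently and pipeline the additions."""
    total = 0.0
    n4 = len(a) - len(a) % 4
    for i in range(0, n4, 4):
        total += (a[i] * b[i] + a[i + 1] * b[i + 1]
                  + a[i + 2] * b[i + 2] + a[i + 3] * b[i + 3])
    for i in range(n4, len(a)):   # leftover elements
        total += a[i] * b[i]
    return total

a = [1.0, 2.0, 3.0, 4.0, 5.0]
b = [5.0, 4.0, 3.0, 2.0, 1.0]
print(dot(a, b), dot_unrolled4(a, b))  # 35.0 35.0
```

On a CPU both versions run serially; the win comes when a compiler can hand the unrolled body to parallel hardware, which is the essence of the CPU-to-FPGA partitioning described above.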

The “aha” here is that they’re providing a way for software developers to get acceleration, not a way for hardware developers to build a design. This is a quite different intent from HLS and a goal that many have been chasing for a while. They don’t have to map everything to the FPGA, they just have to provide significant net speedup in critical pieces of code. They show some impressive numbers for key functions on their website.

I asked about active applications today. Clay mentioned use in weather simulation, industrial and government applications. I also asked about support for other potential accelerators (GPU, DSP, …). He said that these are in long-term planning; each can offer acceleration in its own way, I would guess for big matrix operations as an example.

This looks like an interesting challenge to the long-standing problem of making FPGAs (and ultimately other platforms) more accessible to the general-purpose programmer. Worth a closer look. The website is HERE.


Webinar: OCV and Timing Closure Sign-off by Silvaco on Oct 10 at 10AM

by Daniel Nenni on 10-01-2019 at 10:00 am

The old adage that the one constant you can always count on is change could easily be reworded for semiconductor design: the one constant you can count on is variation. This is doubly true. Not only is variation, in all its forms, a constant factor in design; the methods of analyzing and dealing with it are continuously changing as well. For both of these reasons it is necessary to stay current with the latest developments in the role of variation in timing closure.

Designers are often faced with a tradeoff between less pessimistic results and extraordinarily long runtimes and large data sets. There is a long history of innovation that has brought us to what are considered the best approaches for massive designs. The foundation of the entire process relies on library characterization to produce models that support effective chip level analysis. Advanced nodes have further complicated the process.

Fortunately, Silvaco is offering a free webinar that will provide a useful update on both the history and state of the art for how on-chip variation can affect the sign-off timing flow. Silvaco has longstanding expertise at the process and cell level, as well as flows for chip level timing verification.

The October 10th webinar will be presented by Bernardo Culau, Director of Library Characterization at Silvaco. He has in-depth experience developing tools for library characterization. The topics that will be covered include:

    • What variation means in the context of library characterization
    • What on-chip variation is and its different causes
      • Inter-chip variation
      • Intra-chip variation
    • Review of different industry approaches to account for on-chip variation
      • OCV, AOCV, LVF
      • Their advantages and limitations
    • Current industry standards for variation-aware libraries
    • The improvements needed to handle leading-edge technology nodes
    • The characterization challenges involved in creating variation-aware libraries
    • Silvaco solutions for variation-aware library characterization

The free webinar will be offered at 10AM on October 10th. This seems like a good opportunity to stay current with the latest trends in a critical area of chip design. It also looks like it might provide a glimpse of what is ahead in this area. Should be interesting.

About Silvaco, Inc.
Silvaco Inc. is a leading EDA tools and semiconductor IP provider used for process and device development for advanced semiconductors, power IC, display and memory design. For over 30 years, Silvaco has enabled its customers to develop next generation semiconductor products in the shortest time with reduced cost. We are a technology company outpacing the EDA industry by delivering innovative smart silicon solutions to meet the world’s ever-growing demand for mobile intelligent computing. The company is headquartered in Santa Clara, California and has a global presence with offices located in North America, Europe, Japan and Asia.


Webinar: Finding Your Way Through Formal Verification

by Bernard Murphy on 10-01-2019 at 6:00 am

Finding your way through formal book

Formal verification has always appeared daunting to me and I suspect to many other people also. Logic simulation feels like a “roll your sleeves up and get the job done” kind of verification, easily understood, accessible to everyone, little specialized training required. Formal methods for many years remained the domain of academics and one-time-academics performing an essentially black-box service to solve really hard problems unreachable in simulation. That’s changed, because those hard problems are becoming much more common and because verification tool providers have made it much easier to attack many cases without needing a PhD or induction into the formal priesthood. Better yet, there are excellent books to introduce novices to the domain and walk them through their first steps in using those tools.

REGISTER HERE to watch the recorded webinar.

One thing we felt was missing was a higher-level introduction, for a verification engineer, manager or director who is curious about formal but not yet ready to commit. They don’t want a tutorial; they want to know first “is this right for my organization?” “Is it really going to improve our verification quality or throughput?” “What are we going to have to do differently?” We wrote “Finding Your Way Through Formal Verification” for them. “We” here is myself (Bernard Murphy), Manish Pandey and Sean Safarpour. The book was published by SemiWiki and is available for download in the handouts section of this webinar.

Speakers:

Bernard Murphy – SemiWiki

Bernard Murphy is a freelance blogger and author, content marketing/messaging advisor for several companies and serves on the board of Mother Lode Wildlife Care in the California Gold Country. In a previous life he held down a real job as CTO at Atrenta. Earlier still, he held technical contributor, management, sales and marketing roles variously at Cadence, National Semiconductor, Fairchild and Harris Semiconductor. In his re-invention as a writer, Bernard has published well over 400 blogs between SemiWiki and EETimes. He received his BA in Physics and D. Phil in Nuclear Physics from the University of Oxford.

Manish Pandey – Synopsys

Manish Pandey is a Fellow at Synopsys, and an Adjunct Professor at Carnegie Mellon University. He completed his PhD in Computer Science from Carnegie Mellon University and a B. Tech. in Computer Science from the Indian Institute of Technology Kharagpur. He currently leads the R&D teams for formal and static technologies, and machine learning at Synopsys. He previously led the development of several static and formal verification technologies at Verplex and Cadence which are in widespread use in the industry. Manish received the IEEE Transactions on CAD Outstanding Young Author award and holds over two dozen patents and refereed publications.

Sean Safarpour – Synopsys

Sean Safarpour is the Director of Application Engineering at Synopsys, where his team of specialists support the development and deployment of products such as VC Formal, Hector and Assertion IPs. He works closely with customers and R&D to solve their current verification challenges as well as to define and realize the next generation of formal applications. Prior to Synopsys, Sean was Director of R&D at Atrenta focused on new technology, and VP of Engineering and CTO at Vennsa Technologies, a start-up focused on automated root-cause analysis using formal techniques. Sean received his PhD from the University of Toronto where he completed his thesis entitled “Formal Methods in Automated Design Debugging”.

 

 


AI Hardware Summit, Report #2: Lowering Power at the Edge with HLS

by Randy Smith on 09-30-2019 at 10:00 am

I previously wrote a blog about a session from Day 1 of the AI Hardware Summit at the Computer History Museum in Mountain View, CA, held just last week. From Day 2, I want to delve into this presentation by Bryan Bowyer, Director of Engineering, Digital Design & Implementation Solutions Division at Mentor, a Siemens Business. This conference brought together many companies involved in building artificial intelligence and machine learning hardware solutions. Naturally, there were several discussions around AI software and applications as well. Day 1 of the conference was more about solutions in the data center, whereas Day 2 was primarily around solutions at the Edge.

Most solutions at the Edge have power restrictions. They are often battery-powered or energy-harvesting devices such as remote cameras, robots, cell phones, and many other sensor-carrying devices. A different class of edge devices is always on, such as smart appliances, which raises power concerns for a different reason: the device never rests. Of course, higher power typically leads to higher heat dissipation, which again pushes us toward lower power. At the Edge, power is critically important.

When designing for low power, we have gained many new tools and techniques over the past few years. Some come from process technology and the power savings achieved by moving to new process nodes, though these savings have become less dramatic at each node. There are also circuit innovations, such as new memory techniques. But the biggest savings come from architectural decisions, for two important reasons: (1) a more efficient algorithm saves energy; and (2) if you can implement the system as a set of blocks that lets you turn off a large portion of the system resources when they are not in use, you can save substantial power. The challenge is how best to achieve this.
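The payoff from point (2) is easy to quantify with back-of-envelope arithmetic. All the numbers below are invented, but the shape of the result is typical: duty-cycling a power-gated block cuts average power by more than an order of magnitude:

```python
# Back-of-envelope illustration of point (2): average power when a block
# can be powered down between bursts of work. All numbers are invented.
active_power_mw = 200.0   # block fully on
sleep_power_mw = 2.0      # block power-gated
duty_cycle = 0.05         # active 5% of the time

avg_always_on = active_power_mw
avg_duty_cycled = (duty_cycle * active_power_mw
                   + (1 - duty_cycle) * sleep_power_mw)
print(f"always on: {avg_always_on:.1f} mW, "
      f"duty-cycled: {avg_duty_cycled:.1f} mW")  # 200.0 mW vs 11.9 mW
```

The architecture has to be partitioned into blocks that can actually be gated off for this math to apply, which is why these decisions must be made early.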

To explore the algorithmic space, you need to be able to work at a sufficiently high level to make a difference. By far, the simplest way to do this is with high-level synthesis (HLS). Companies are doing this today, particularly in the areas of video and image processing, and in machine learning. HLS can be applied to ASIC or FPGA design. It can even make it possible to make late functional changes without severely impacting the project schedule. Mentor’s Catapult HLS has been deployed by many top companies in this area, as shown in the diagram above. You can get more information on Catapult HLS here.

One of the significant challenges of building custom hardware solutions is trying to explore multiple architectural choices to find the best combination of power, performance, and area (PPA). One aspect of this is estimating the impact of different levels of precision at different stages of the design. Reducing accuracy at an earlier stage may have little impact on the final result, yet greatly reduce power or area. Exploring these architectural choices in RTL is impractical. Instead, designers are turning to HLS to survey these custom solutions. HLS provides several high-level optimizations such as automatic memory partitioning for complex memory architectures needed by the PE array, interface synthesis of AXI4 memory master interfaces for easily connecting to system memory, and synthesis of arbitrary precision data types for tuning the precision of the multiple hardware architectures. Since the source language for HLS is typically C++, it can easily plug back into the deep-learning framework where the network was created, allowing verification of the architected and quantized network.
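The precision-tuning idea is easy to demonstrate: quantize a set of weights onto a fixed-point grid and measure the error introduced at each bit width. This sketch uses invented data and a plain round-to-nearest quantizer, not Catapult's arbitrary-precision types:

```python
import numpy as np

# Sketch of precision tuning: quantize weights to a hypothetical
# fixed-point format and measure the error each bit width introduces.
rng = np.random.default_rng(2)
weights = rng.uniform(-1.0, 1.0, size=1000)

def quantize(x, frac_bits):
    """Round to a signed fixed-point grid with `frac_bits` fractional bits."""
    scale = 2.0 ** frac_bits
    return np.round(x * scale) / scale

for frac_bits in (7, 4):
    err = np.max(np.abs(quantize(weights, frac_bits) - weights))
    print(f"{frac_bits} fractional bits: max error {err:.4f}")
```

If the network still meets its accuracy target at the narrower width, every multiplier, register and memory in that datapath shrinks accordingly.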

For those working on AI/ML accelerators, Mentor has made it even easier to get started by providing Catapult HLS Toolkits. There are currently four toolkits available, as described in the figure above. These toolkits seem especially well suited to AI/ML designs in Edge devices. Mentor’s participation in the AI Hardware Summit, the number of customers already using Catapult HLS in production, and the release of these toolkits specifically targeting designs people need for AI and ML have me convinced that Mentor is quite serious about this area and designers in this area need to consider them.

Below are some additional resources on the Mentor website about this topic:

Chips&Media: Design and Verification of Deep Learning Object Detection IP

Bosch Visiontec Case Study

NVIDIA Case Study on High-Level Synthesis (HLS)


Crashing the Mars Rovers!!! Actel and Aerospace Corp

by John East on 09-30-2019 at 6:00 am

In early 2003 Actel announced a new product family:  RTSX-A.  It was a family of antifuse FPGAs aimed at the satellite market.  Customers had known for a long time that it was coming and there had been prototypes available for many months.  Our space customers loved the product.  This was going to be a big win for us!  One of the first programs to use the product was the Mars Rover program. There were four Mars Rovers made:  two went to Mars and two lived in a huge sandbox in Pasadena which allowed scientists to emulate conditions on Mars.  The two that went to Mars were named the Spirit and the Opportunity.  NASA’s plans were that the Rovers would live for three months before succumbing to the treacherous Mars environment. The Spirit launch was on June 10,  2003.  It would take the Spirit about eight months to make the trip to Mars.  The Opportunity was launched a month later.

Shortly after the launch we got reports from potential customers of a few RTSX-A burn-in failures in their labs. What’s burn-in?  It’s a test that assures the user that a part that starts out good will stay that way after being in actual use for some time.  Testing parts for one week at extremely high temperatures is the normal way to assure that they’ll last a long time at normal temperatures. We hadn’t seen failures with our first tests, but when we got those reports from our customers, we looked harder.  When we looked harder, we saw some failures on occasion.  We would burn-in around 100 units at a time.  Sometimes we would get 1 or 2 failures.  Sometimes none.

A small number of failures is worse than it sounds! A typical satellite would cost in the neighborhood of 100 million dollars in those days.  The Mars Rover project cost much, much more than that. The Mars Rover project used the RTSX-A.  Uh oh!!

Satellites can’t be repaired. If one IC fails, the cost of the entire project may well be flushed down the toilet.  Worse, there wasn’t just one RTSX-A part in each Rover.  There were, as I recall, 38.  That meant if there was a 1% chance that one particular Actel part would fail, there was a 38% chance that one of our parts somewhere in the Rover would fail.  (My math isn’t quite correct here, but you get the point.) It was clear something wasn’t quite right, but we couldn’t figure out what.
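For the record, the exact version of that back-of-envelope math: with 38 parts, each with an independent 1% chance of failure, the probability that at least one fails is 1 - 0.99^38, a bit under the quoted 38%:

```python
# Exact version of the back-of-envelope math above: probability that at
# least one of 38 independent parts fails, each with a 1% failure chance.
p_part = 0.01
n_parts = 38
p_any_failure = 1 - (1 - p_part) ** n_parts
print(f"{p_any_failure:.1%}")  # 31.7%
```

Close enough to 38% that the conclusion is the same: with dozens of parts per Rover, even a tiny per-part failure rate was intolerable.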

The Mars Rover program was by no means the only program planning to use our parts.  There were many others! Word got around the space community. Many customers weren’t sure if they dared to launch their satellites. They looked to us to tell them it was OK.  We couldn’t. We just didn’t know. We couldn’t figure it out. A few skeptics thought we were covering something up. We weren’t. It was a very tricky problem. We were working hard on it, but we just didn’t understand what was going on.

Then, I got a call from Bill Ballhaus, the CEO of Aerospace Corporation. Aerospace Corporation operates a federally funded research and development center. They provide technical guidance and advice on all aspects of space missions to military, civil, and commercial customers.  Dr Ballhaus asked me to come to the Aerospace headquarters in El Segundo to discuss “the reliability problem with Actel FPGAs.”

I, of course, accepted the invitation, but if you had offered me a choice of going to this meeting or getting a root canal on every tooth, my oral surgeon would be a richer man today.  The invitation appeared to be for a one-on-one meeting between me and Dr Ballhaus.  I planned on going by myself, but our VP of Technology,  Esmat Hamdy, saw it differently.  He didn’t quite trust my technical savvy.  He thought that, if the meeting turned out to have a lot of technical content, I might not be able to answer all the questions well.  So —  Esmat insisted on coming with me. Bless his heart!

We flew to LAX and then took a quick cab ride to Aerospace.  It’s about a mile from the airport. A secretary led us to a conference room  — except it was more like a sports arena.  There was a long rectangular table that probably sat 15-20 people.  Then there was an aisle circling those seats.  But on the other side of the aisle, there was an elevated set of chairs circling the table below.   There were maybe another 20-25 chairs in that set.  In total I would guess 30 or 40 chairs.    All but four were full.  Not full of just anybody   —  but full of PhDs.  Full of experts in any aspect of integrated circuits that you could think of.  Full of technical wizards who all had at least double my IQ. One of the empty chairs was for Dr Ballhaus.  One for Esmat.  One for me.  We took our chairs and were ready to start the meeting but Dr Ballhaus said that we’d have to wait.  The last chair was for a high ranking Air Force general who had invited himself to the meeting.   The general was running late.  When he got there, there was plenty of bowing and scraping done.  He was the head honcho!!

They hammered into me just how important this was.  The Rover program cost about one billion dollars.  The Rovers had been launched. There would be no calling them back.  No fixing them. If they went bad, that would be a billion dollars flushed.  And worse, the Mars Rover project was by no means the only satellite project with plans to use Actel.  There were several military satellite programs related to our national defense as well.  Those folks were even more worried than the Mars Rover people.  They were not happy campers!!!  As you would expect — the general had zero interest in putting up military satellites critical to the nation’s defense that were likely to fail!!!!!  (That unhappiness earned me an invitation to meet later with Peter Teets, the Under Secretary of the Air Force, in the most secure area of the Pentagon.  Mr Teets wasn’t a happy camper either, but that’s another story.)

Back in Mountain View, we were breaking our picks.  We didn’t always see failures. When we saw them, there weren’t many.  That makes the problem harder to solve.  We suspected what we called the programming algorithm. The oversimplified explanation of programming an antifuse is this —  put a high voltage across it,  the dielectric will rupture and the antifuse will be a conductor for the rest of time. — In fact, it’s much trickier than that.  There are a lot of knobs to turn.  How high should the voltage be?  How much current should flow through the fuse?  How many times should you repeat what you’ve done?  How long should you apply the voltage?  How much should you “soak” the fuse? We would twist some of these knobs, come up with a new algorithm, and voila.  No failures!!!   Problem solved,  right?  Wrong!!!  The next time we’d run exactly the same test, there would be one or two failures.  We were completely perplexed!
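One reason the tuning was so hard is that the knobs multiply. As a purely illustrative sketch (the parameter names and values below are hypothetical, not Actel's actual algorithm), even a handful of settings per knob produces dozens of candidate algorithms, each of which would have to be qualified against a failure that only shows up rarely:

```python
import itertools

# Hypothetical "knobs" of an antifuse programming algorithm, as described
# in the text. The names and values here are illustrative only.
knobs = {
    "voltage_v":  [14.0, 15.0, 16.0],  # how high should the voltage be?
    "current_ma": [5.0, 10.0],         # how much current through the fuse?
    "pulse_ms":   [1.0, 5.0],          # how long to apply the voltage?
    "repeats":    [1, 3],              # how many times to repeat?
    "soak_ms":    [0.0, 10.0],         # how much to "soak" the fuse?
}

# Every combination is a candidate algorithm needing qualification.
candidates = [dict(zip(knobs, vals))
              for vals in itertools.product(*knobs.values())]
print(len(candidates))  # 48 combinations from just a few settings per knob
```

With one-or-two-failure events that don't reproduce, no amount of knob-twisting gives a clean signal, which is exactly the bind described above.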

The first Rover (the Spirit) was launched prior to suspicions that we might have a reliability problem. Then came the reliability worries. And then, around New Year’s, when the reliability concerns had become rampant, the Spirit reached Mars. It was a big deal in the press.

The Spirit Lands on Mars!

It’s working fine!!

It was on the front page of every newspaper. Boy.  Did we ever feel good.  The Spirit was working perfectly!!!   …..……   But then  —  one week later —-

The Spirit Fails!!

The Spirit went bad.  That was on every front page too, but in bigger letters.  Was it the Actel parts? We didn’t know, but in my judgement … it could well have been. I was terrified!  I could picture the headlines when it was determined that Spirit failed because of an Actel part.  I could picture the lawyers lining up to file lawsuits against us.  I could picture the process servers skulking in the bushes waiting to spring out and serve me with subpoenas.  It wasn’t pretty!!  In fact, it was really, really ugly!! When I went home after work that night, I opened a bottle of wine and drank the whole thing by myself.  I like wine ……  but not a whole bottle.  My advice?   —  Don’t do that!!!  It didn’t work out well!!

Luckily, I was wrong. The Actel parts were not at fault.  After a week or so, NASA figured it out.  It was a software problem that was fixable by uploading new software.  They fixed the Rover, and independently we tracked down our problem and fixed it. To my knowledge Actel (now part of Microchip) has never experienced a failure in space.

Scientists had planned for the Rovers to live for three months.  When did they actually die?  The Spirit lived for 6 years.  The Opportunity 14.

Epilogue

Somewhere around the year 2000 our board of directors asked the question, “John, how long do you plan to stay on as CEO?”  My answer: “I’ll retire no sooner than my 65th birthday and no later than my 66th.”  When you’re 55, 65 seems really old, doesn’t it?  Well, ten years later along came my 65th birthday.  January 20, 2010.  Funny thing.  By then 65 didn’t seem so old.  Still – all in all it was the right thing to do. We released an 8-K (the document that public companies use to disclose relevant information) saying that we were beginning a search for a new CEO immediately and that we planned to complete the search and appoint a new CEO within a year.

We spent several months on the search.  There were some ups and downs, but bottom line, we made zero progress.  We were back to square one.  Then came a surprise. We received an unsolicited and unexpected offer to buy us out from Microsemi, an Orange County-based semiconductor company that I was barely aware of.  We discussed it at length!

Why would we be interested in selling?  We were in the middle of trying to transform ourselves from an antifuse company to a flash company.  I firmly believed that it could be done (In fact,  it was done!!!  See my week # 16, “From AMD to Actel”),  but it was obvious that it was going to take a long time before we finally started to see the results on our bottom line.  The bottom line matters!!  Shareholders want high stock prices!! And — for a company our age, the stock price is determined by the bottom line.

Dan McCranie (whom we had recently appointed chairman of the board) went to Orange County and met with the Microsemi CEO, James Peterson.  After some negotiations, Peterson offered a price that was higher than we believed we would be able to command consistently in the foreseeable future.  After a few vigorous board discussions, we decided that we owed it to our shareholders to accept the offer.  We did. On November 3, 2010, Actel became part of Microsemi Corporation and I rode off into the high tech sunset, unemployed for the first time in 45 years.

This is the last episode in the series.  I hope you’ve enjoyed reading them as much as I’ve enjoyed writing them.

See the entire John East series HERE.

# Mars Rovers, Spirit, Opportunity, Actel, Microchip, Aerospace Corporation, Bill Ballhaus, Esmat Hamdy, Microsemi, Dan McCranie

 


Fossil Fuels in the Crosshairs

by Roger C. Lanctot on 09-29-2019 at 11:00 am

In the heat of a presidential campaign, especially one with 19 competing candidates, the contenders may get carried away in the interest of getting attention and, presumably, attracting supporters. Beto O’Rourke might be accused of such rhetorical excess for his call, in the third Democratic Party debate, for a mandatory Federal assault rifle buyback.

But fellow Democrat Senator Bernie Sanders may have beaten Beto with his plan to pursue criminal charges against fossil fuel executives for, in the words of truthdig.com, “knowingly accelerating the ecological crisis while sowing doubt about the science to the American public.”

Sanders’ comments came during an MSNBC climate town hall hosted by news anchor Chris Hayes at Georgetown University last Thursday. Truthdig.com quotes Sanders:  “Duh, of course I would. They knew that it was real. Their own scientists told them that it was real. What do you do to people who lied in a very bold-faced way, lied to the American people, lied to the media? How do you hold them accountable?”

Over the subsequent din of shredding machines going into overdrive at the offices of the major oil companies could be heard the clacking of worry beads at the headquarters of the major car companies. Knowledge of climate change is one thing. Building a product collectively responsible for 1.2M highway fatalities globally and hundreds of thousands of premature deaths from emissions every year – with an aggregate estimated negative societal impact of more than $1T – is enough to give even Senator Sanders and former Congressman O’Rourke pause to consider their options.

President Donald Trump’s efforts to roll back emissions standards and fuel efficiency requirements are pulling back the curtain on some very unpleasant issues the automotive industry would rather not highlight. While the industry is wrestling with the challenges of connecting cars and the onset of autonomous driving and electrification, the specter of premature death and injury caused by motor vehicles with internal combustion engines is an ugly open secret the industry would prefer remain out of the spotlight.

President Trump has dragged the issue to the center of the stage in the interest, so he says, of helping to jumpstart automobile production. The subtleties of managing emissions and fuel efficiency while enhancing vehicle safety – a high wire act which the automotive industry has ably executed thus far – have clearly eluded the commander in chief.

Young people and old around the world have made their concerns known regarding climate change. Their ire is mainly focused, today, on legislators and politicians. It won’t be long before they turn on the fossil fuel producers. Car makers could well be next – as was clear from the presence of protesters demonstrating once again outside the 2019 Frankfurt Auto Show. The President isn’t helping.


Micron Mired in Murky Memory Market – Cutting Capex 30%- 2020 Challenging

by Daniel Nenni on 09-29-2019 at 10:00 am

  • Solid quarter but soft outlook
  • Recovery slow; future cost downs harder
  • Demand slightly ahead of supply; shelf stuffed?
  • Bouncing along the bottom of the cycle

Results ahead of expectation but guide behind expectation
Quarterly results were slightly better than street expectations at $0.56 EPS and $4.87B in revenues. However, guidance is for $5B ±$200M in revenues and $0.46 ±$0.07 in EPS, which is well below expectations. While there may be some normal “sandbagging” of forward-looking guidance, even with that assumption it’s an unimpressive guide.

Disappointment that we are not yet at low tide
Investors are clearly not happy that the guidance does not yet indicate a bottoming of business. The stock was priced to perfection, and a guide suggesting we were past the bottom of the cycle was also baked into the high valuation.

We have been saying for a long time that this will be a longer, slower, shallower recovery. Investor and analyst hopes had gotten way ahead of reality. The reality is that we still have excess supply and demand is lukewarm with a number of potential risks in the market that add to uncertainty.

Cutting Capex 30% in 2020, front end cuts are deeper than back end
The company said what we have heard before and have been talking about for a while now: capital spending will be down by 30% in 2020 versus 2019. Worse yet, the spending will be more focused on back end (assembly and test) as well as buildings, and less on front end equipment.

If we had to guess we would bet that front end equipment purchases are down at least 40% maybe as much as 50% while back end may only be down 20% or less.

This is obviously very negative for Lam and Applied and to a lesser extent ASML and KLAC. This additional data point of front end being cut more than 30% is obviously incrementally a lot more negative than some analysts and investors had been hoping for.

Wafer starts still being cut, bit growth will come from technology advancement
Micron made it clear that they are still idling capacity and that wafer starts continue to come down as machines are taken off line. They said that bit growth will come from density (technology) increases, not wafer start increases, and that technology gains are enough to keep up with bit demand growth in the teens (percent).

This is something we have been repeating for a while now, that bit growth can be met with Moore’s Law. So we can read this as selected technology only purchases, focused on pushing Moore’s law forward.
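To see why flat or falling wafer starts are compatible with healthy bit growth, it helps to split supply growth into its two factors. A quick illustrative model (the percentages below are hypothetical, not Micron's figures):

```python
# Illustrative only: bit supply growth decomposed into wafer-start growth
# and bits-per-wafer (density/technology) growth.
def bit_growth(wafer_start_growth: float, density_growth: float) -> float:
    """Combined growth rate: (1 + a) * (1 + b) - 1."""
    return (1 + wafer_start_growth) * (1 + density_growth) - 1

# Slightly falling wafer starts plus ~20% density gains can still deliver
# teens-percent bit growth:
print(round(bit_growth(-0.05, 0.20), 2))  # 0.14, i.e. 14% bit growth
```

That is the sense in which technology-only purchases can satisfy demand without adding capacity.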

Further cost reductions will be more difficult in 2020
The company was also very clear that the aggressive cost of manufacturing reductions seen in 2019 will not be repeated in 2020. It sounds as if we are past the “easy” technological advancements and are now into more difficult technology changes that will be slower, harder and more costly.

We would read this as the company telegraphing that gross margins will be harder to come by in 2020 as costs will not come down as quickly as pricing.

Is Channel Stuffing going on?…yes
We have been warning of potential channel stuffing, as Chinese buyers may be stocking up on fears of being cut off, or Samsung’s customers may be concerned about Japan cutting Samsung off from critical materials.

Micron said this was going on but could not quantify how much of demand was this “mirage demand” due to stocking up.

Continued progress on 1Z DRAM and 96/128-layer NAND
Micron has made excellent progress in moving the technology ball forward and executing on all these new fronts.

These many advancements are clearly why Micron has been able to keep costs coming down ahead of falling prices. In past cycles, price drops always got ahead of cost drops but Micron has done a better job in this down cycle.

This has kept the company in much better shape on a competitive basis than in previous down cycles. Micron is likely a lot more competitive with the industry leader, Samsung, and claims to be ahead in some aspects.

Huawei is still “No Way”
Huawei is still on the “verboten” list even though Micron has applied for permission. News coming out today makes it seem as if the odds of Huawei being taken off the entity list or waivers being granted seem very low.

We would assume little to no business from Huawei going forward. If somehow this changes we would consider it a lucky break.

Still in wait and hope mode…maybe less hopeful
Micron’s stock has been in “high hopes” mode as expectations for a recovery got well ahead of themselves along with the stock price. We will obviously see a return to the reality of a slow recovery out of a murky bottom with an ill defined turning point.

We could see the run up in overall semi stocks reverse a bit as the wind comes out.

The stocks
There will likely be a significant correction in Micron’s stock price which was well ahead of where it should have been.

If we are at a run rate of $2 per year in EPS and were closing in on a $50 stock price, we are at a 25 multiple, which is obviously hard to support even at bottom numbers. There is likely support at the $40 level, but we would not be interested in buying unless and until we got back to a “3” handle (a price in the $30s).
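The multiple math is simple enough to check directly; a minimal sketch using the article's own numbers:

```python
# Price-to-earnings multiple: price divided by annual EPS run rate.
def pe_multiple(price: float, eps: float) -> float:
    return price / eps

print(pe_multiple(50.0, 2.0))  # 25.0 -- the multiple called hard to support above
print(pe_multiple(35.0, 2.0))  # 17.5 -- a "3 handle" price implies a far lower multiple
```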

Collateral Damage
We don’t expect any better report coming out of Samsung as they are in the very same memory market with similar dynamics, plus the additional worry of the Japanese embargo and now what looks like yield issues on their 7nm logic side as well. Samsung obviously gets the same pricing as Micron, and even though Samsung’s costs are generally lower, there is less of a differential in the current down cycle.

Applied and Lam could see a 40-50% cut in business from Micron in 2020, making an equipment recovery all that much harder. While ASML and KLAC are more associated with technology advances than capacity advances, they will also see weakness, though less so, as KLA has always been more foundry/logic driven.

It would not be unreasonable to expect a 10% haircut in Micron’s stock price from its recent peak and AMAT and LRCX perhaps a 3-4% cut.