
Speeding up Circuit Simulation using a GPU Approach
by Daniel Payne on 08-22-2019 at 10:00 am


The old adage that “Time is Money” certainly rings true in the semiconductor world, where IC designers are challenged to get their new designs to market quickly, and correctly in the first spin of silicon. Circuit designers work at the transistor level, and circuit simulation is one of the most time-consuming tasks for SPICE tools, so any relief is quite welcome. There is one caveat, though: engineers need fast and correct answers, not fast incorrect answers, so accuracy is a hard requirement.

Yes, there’s a category of Fast SPICE simulators out there; however, they tend to work best for mostly digital circuits. So what choices do you have for the most challenging analog circuits?

One promising new development for SPICE circuit simulation is running jobs on a GPU instead of a general-purpose CPU. I attended a webinar this month presented by Chen Zhao of Empyrean, who talked about their approach to GPU-powered SPICE.

Most EDA software companies are located in Silicon Valley, Austin, Boston, Europe or Japan; Empyrean, however, started out in Beijing, China back in 2009 and has grown to some 300 people. I’ve seen them at DAC for the last couple of years, and they’re becoming more visible in the US with an office in San Jose.

The challenges for Analog simulation are well known:  FinFET devices have complex models that evaluate slowly, there are more parasitics with each new interconnect layer, process variations require more simulations, and all IP blocks must be verified.

The circuit simulator from Empyrean is called ALPS, an acronym for Accurate Large capacity Parallel SPICE. With ALPS they created a full-SPICE accurate simulator that is 3X to 5X faster, has a capacity of 100M elements, and has been silicon proven down to 7nm. Reaching this speed required a new approach to solving the matrices used in SPICE, a technology they call the Smart Matrix Solver (SMS).
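At its core, each Newton iteration of each SPICE timestep solves a linear system G·v = b assembled from per-device "stamps". The toy Python below illustrates that inner loop on a three-resistor divider with a tiny dense solver; it is a sketch of the general idea only, and reflects nothing of Empyrean's actual (proprietary, sparse, GPU-optimized) SMS.

```python
# Toy nodal analysis: the kind of G*v = b solve at the heart of SPICE.
# Illustrative only -- real simulators use sparse matrices, nonlinear
# device models and Newton iteration.

def stamp_resistor(G, n1, n2, r):
    """Stamp a resistor between nodes n1 and n2 into G (node -1 = ground)."""
    g = 1.0 / r
    for a, c in ((n1, n2), (n2, n1)):
        if a >= 0:
            G[a][a] += g
            if c >= 0:
                G[a][c] -= g

def solve(G, b):
    """Tiny dense Gaussian elimination, standing in for the sparse solver."""
    n = len(b)
    A = [row[:] + [b[i]] for i, row in enumerate(G)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    v = [0.0] * n
    for r in range(n - 1, -1, -1):
        v[r] = (A[r][n] - sum(A[r][c] * v[c] for c in range(r + 1, n))) / A[r][r]
    return v

# 1 V source through 1 kOhm into node 0 (as a Norton equivalent),
# then 1 kOhm from node 0 to node 1, and 1 kOhm from node 1 to ground.
G = [[0.0, 0.0], [0.0, 0.0]]
b = [0.0, 0.0]
stamp_resistor(G, 0, -1, 1e3)   # source resistance as a Norton conductance
b[0] += 1.0 / 1e3               # Norton current from the 1 V source
stamp_resistor(G, 0, 1, 1e3)
stamp_resistor(G, 1, -1, 1e3)
v = solve(G, b)
print(v)                        # node voltages: 2/3 V and 1/3 V
```

The GPU win comes from doing factorizations like this, across many matrix blocks and timepoints, massively in parallel.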

GPUs have a massively parallel architecture, so Empyrean created a version of their ALPS simulator called ALPS-GT to harness the GPU, which then provides up to a 10X speed improvement for circuit simulation run times.

With a 10X speed improvement you can now reach your tapeout goal more quickly, simulate more PVT corners and even simulate scenarios that weren’t feasible with older, slower simulators.

ALPS-GT accepts netlists in HSPICE and Spectre formats, handles all of the popular model files, accepts Verilog-A, and can even co-simulate with 3rd party Verilog simulators. Output file formats are the industry standard tr0 and fsdb, so you can keep using your favorite viewers.

The 10X speed improvement with ALPS-GT comes from the Smart Matrix Solver being optimized to run on a GPU; this SMS-GT solver is 5X faster than the standard NVIDIA-supplied CUDA matrix solver. Chen showed examples from customer circuits where ALPS-GT was much faster than a competitor:

ALPS-GT vs competitor SPICE with 16 CPUs

As the netlist size grows and you start to add extracted parasitics, the speed differences between ALPS-GT and the competition grow even larger. Here are two examples with millions of parasitics in the netlist:

ALPS-GT vs competitor SPICE with 16 CPUs

You may have noticed in this comparison that the competitor SPICE tool would have taken over 100 days to complete a single simulation, so that’s not even practical to consider.

Summary

The need for speed, capacity and accuracy is ever present for SPICE circuit simulators, and the engineers at Empyrean have harnessed the capabilities of the GPU to speed up run times while maintaining SPICE accuracy. If you’d like to view the recorded webinar, then here’s the link.

Related Blogs


Webinar: Using Embedded FPGA to Improve Machine Learning SOCs
by Tom Simon on 08-22-2019 at 6:00 am

By its very definition, machine learning (ML) hardware requires flexibility. In turn, each ML application has its own fine grain requirements. Specific hardware implementations that include specialized processing elements are often desirable for machine learning chips. At the top of the priority list is parallel processing. However, to make effective use of parallel processing units, memory and network architecture are critical. There has been an explosion of different approaches to ML hardware, including dedicated processor arrays, FPGA based solutions, etc. Early on it became clear that FPGAs had a lot to offer, but their use was limited by the requirement to move data on and off chip.

Embedded FPGAs offer a way to connect all the storage and computational elements of a full programmable ML solution inside of a single die. Achronix has been working on solutions for embedded FPGAs that are specifically tailored for ML SOCs. By properly choosing processing elements and memory configurations, dramatic improvements to logic utilization and throughput can be realized. These advantages become important in cloud applications, and even more so when the target is a mobile device.

Achronix is offering a webinar to show how embedded FPGA can become the fabric for optimized machine learning solutions. The presenter will be Achronix Senior Director Mike Fitton, who has over 25 years of experience in system architecture, algorithm development and semiconductor design in wireless, network and ML.

The webinar will show how bringing programmable hardware into an SOC design can allow fine tuning of applications and data transfer. The webinar promises to include real benchmark data showing the value of embedded FPGA versus other approaches.

The webinar will be on Thursday August 29th at 10AM PDT. This should be an interesting and informative look at significantly better ways to implement ML for data center, edge, mobile, IoT and other areas where ML is proving useful.

About Achronix Semiconductor Corporation
Achronix Semiconductor Corporation is a privately held, fabless semiconductor corporation based in Santa Clara, California and offers high-performance FPGA and embedded FPGA (eFPGA) solutions. Achronix’s history is one of pushing the boundaries in the high-performance FPGA market. Achronix offerings include programmable FPGA fabrics, discrete high-performance and high-density FPGAs with hardwired system-level blocks, datacenter and HPC hardware accelerator boards, and best-in-class EDA software supporting all Achronix products. The company has sales offices and representatives in the United States, Europe, and China, and has a research and design office in Bangalore, India.


Low Power Design – Art vs. Science
by Daniel Nenni on 08-21-2019 at 10:00 am

I have heard many times before that low power and mixed-signal design is more Art than Science. I believe this is a misconception. Science is a field that builds upon previous experiences and discoveries. Art primarily seeks out creative differences, things we have not seen before that evoke emotion. The most successful designers of low power and mixed-signal devices are those with the most experience. The best of them likely have also worked on a very broad range of designs – they have seen everything. They rely a lot on what they have seen before.

sureCore, with design centers in Sheffield, England, and Leuven, Belgium, delivers custom low power design services. They do this down to the latest design nodes (7nm) and have specific experience in network, machine learning/AI, and IoT devices (e.g., medical, wearables, etc.). sureCore will be discussing their service engagement methodology and their unique mix of technology expertise at a SemiWiki Webinar Series event on Wednesday, August 28, 2019, from 10:00 am to 10:45 am PDT. To sign up for the webinar, register using your work email address HERE. We did a run through of the presentation last week and it is worth 20 minutes of your time, absolutely.

The breadth of sureCore’s experience cannot be overstated as they have handled all types of mixed-signal designs including ADC, DAC, amplifiers, regulators, PLLs, oscillators, etc. over process nodes ranging from 180nm down to today’s latest and greatest nodes. They have built their own proprietary automated design environment around proven industry tools from Cadence, Mentor, and Solido. Being an expert at Solido Variation Designer is a key skill in these types of designs. I know sureCore from my time at Solido Design Automation. They were one of many happy Solido customers, absolutely.

It is generally understood that most modern designs have a large portion of the layout area dedicated to memory, specifically SRAM. Any memory power savings gets multiplied due to the size of the memory. sureCore’s sureFIT custom memory design service delivers memory instances specifically tuned to the needs of the client’s application. At the webinar, they will be discussing an example of a custom-built, high capacity low power SRAM which delivered more than a 40% power savings compared to the typical memory compiler options available to their customer. You should attend the webinar to hear more about this example.

Beyond design services, sureCore also offers two other services: characterization and verification. sureCore’s characterization solution is built using the Cadence Liberate tool suite. They characterize both SRAMs and standard cell libraries. The results include the generation of a .lib file for modeling across extreme PVT corners. As their focus is low power design, they emphasize characterization at low operating voltages and support all the necessary views (NLDM, CCS (timing and noise), and LVF). The resulting models are validated using Monte-Carlo simulation on extracted netlists, with the margins adjusted for improved design robustness.

Getting back to my original point, success in producing robust designs that meet the challenges of today’s low power specification does not require ‘art’; it requires science, experience, and expertise – and lots of it. sureCore’s low power design team has more than 200 person-years of combined lower power design experience. They have proven design experience down to 7nm, including custom SRAMs, layout, characterization, and more. Learn more about sureCore at their upcoming webinar.

About sureCore
sureCore Limited is an SRAM IP company based in Sheffield, UK, developing low power memories for current and next-generation, silicon process technologies. Its award-winning, world-leading, low power SRAM design is process independent and variability tolerant, making it suitable for a wide range of technology nodes. This IP helps SoC developers meet challenging power budgets and manufacturability constraints posed by leading-edge process nodes.

Also Read:

WEBINAR: The Brave New World of Customized Memory

Custom SRAM IP @56thDAC

Low Power SRAM Compiler and Characterization Enable IoT Applications


Semicon West 2019 – Day 4 – Soitec
by Scotten Jones on 08-21-2019 at 6:00 am

Last year at Semicon I sat down with Soitec and got an update on the company. You can read my write-up from last year here. A key point last year was that Soitec was continuing to be profitable and grow after several years of financial struggles.

On Thursday, July 11th I got to sit down with Soitec’s CEO, Paul Boudre and get an update on Soitec’s continued progress.

Soitec is a key supplier of SOI substrates for both RFSOI and FDSOI processes. In my previous Semicon write-up from this year I discussed GLOBALFOUNDRIES’ (GF) commitment to SOI and the related business opportunities; the GF article is available here. Soitec sees GF and Samsung offering FDSOI foundry services and ST Micro using FDSOI internally, and the platform is now being adopted for IoT products, with FDSOI used by Microsoft and NXP for voice recognition. GF and Samsung have platform roadmaps covering memory, RF, etc., not just nodes, and TSMC now makes RFSOI. The market for SOI in mobile, automotive and IoT is expected to continue to grow in the years ahead.

Smart phones had 20mm2 of SOI content two years ago; today it is 50 to 60mm2, and in two more years Soitec expects 100 to 150mm2.

But Soitec doesn’t just see themselves as an SOI provider but rather as a provider of engineered substrates and they are applying their core competencies to broaden their market. For example:

  • Piezo On Insulator (POI) provides piezoelectric materials on an insulator for filter applications. This is a huge market, as big as the front-end-module market for cell phones, and Soitec currently has 0% market share.
  • Silicon Carbide (SiC) is a rapidly growing material for power semiconductors but the wafers are very expensive and supply constrained. Soitec can use their SmartCut technology to make 10 SiC wafers out of one wafer.
  • They acquired EpiGan of Belgium for GaN on silicon for 5G base station power amplifiers. They have no market share in PAs today and are not in base stations, so this is also a big opportunity and a new market for them.

In terms of capacity:

  • Singapore has just been qualified and can ramp up to 1 million 300mm wafers per year.
  • France is ramping from 650 thousand 300mm wafers per year to 1 million wafers per year.
  • The France R&D factory is being converted to POI and will have 400 thousand wafers per year of capacity. Soitec’s R&D has been moved to a joint lab at Leti.
  • 200mm capacity is now full in France and the overflow is going to their China partner who is ramping up from 150 thousand wafers per year to 375 thousand wafers per year.

Soitec will spend €130 million on capital this year and self-fund it while generating approximately €180 million of EBIT. In 2015 they were in Chapter 11 with a €130 million valuation; today they are valued at €3 billion! They have been growing at 30% to 40% per year over the last few years and are shooting for 30% this year.

About Soitec
Soitec (Euronext Paris) is an industry leader in designing and manufacturing innovative semiconductor materials. The company uses its unique technologies and semiconductor expertise to serve the electronics markets. With more than 3,500 patents worldwide, Soitec’s strategy is based on disruptive innovation to answer its customers’ needs for high performance, energy efficiency and cost competitiveness. Soitec has manufacturing facilities, R&D centers and offices in Europe, the U.S. and Asia. For more information, please visit www.soitec.com


Making pre-Silicon Verification Plausible for Autonomous Vehicles
by Daniel Payne on 08-20-2019 at 10:00 am


I love reading about the amazing progress of autonomous vehicles, like when Audi’s A8 sedan became the first to reach Level 3 autonomy, closely followed by Tesla at Level 2, although Tesla gets way more media attention here in the US. A friend of mine bought his wife a car that offers adaptive cruise control with auto-braking, as needed. All of these cool and life-saving features from the ever-increasing levels of automotive automation are made possible only through advanced silicon, sensors, radar, lidar, firmware and software.

For SoC chip design teams the risks and rewards are huge, but the biggest rewards likely will go to the companies that get their new IC designs into silicon first, so the race is on, and one big part of the engineering schedule continues to be pre-silicon verification. The only way to verify is to model and then extensively run simulations for all features and scenarios.

 

Software-based simulation is always a good starting point for verification, because it’s a mature technology and familiar to the majority of engineers. The downside is that the larger the SoC and the more scenarios required, the longer the runtimes, until the approach becomes infeasible because of time constraints. Our industry has an answer for meeting those verification time constraints: a hardware-based approach using emulation.

Attending DAC in 2019, plus daily reading of news headlines and my LinkedIn feed, I have learned three mega-trends in the automotive market segment:

Increased complexity in automotive systems is creating bigger verification demands for SoC verification. To reach Level 5 autonomy requires car makers to have systems using multiple CPUs, AI engines and image processors. These systems are controlled by sophisticated software, requiring verification for functional correctness plus meeting the safety standards described by ISO 26262.

Car companies will need to debug the source of hardware and software bugs during testing and verification, so the quicker the better.

There are three verification phases to consider in designing SoCs for automotive use:

  • Model-in-the-loop (MiL), using high-level models
    • Plus: speed, capacity
    • Minus: accuracy
  • Software-in-the-loop (SiL), using logic simulation
    • Plus: accurate, debugging
    • Minus: speed, capacity
  • Hardware-in-the-loop (HiL), using emulation
    • Plus: speed, capacity

Siemens PLM Software already has a vehicle design and verification program called PAVE360, and within that the IC design part uses the Veloce emulator. Attending DAC this summer I noticed that Mentor and other vendors all prominently displayed their emulators, because of how important they are to verification of the most complex systems. The more that a system relies on software, the more likely it is that you will adopt emulation for verification.

Why emulation? Well, mostly speed and capacity: running 1,000X to 10,000X faster on a specialized emulator compared to a logic simulator is a compelling methodology. The emulator gets this speed advantage by using a hardware fabric that can be reconfigured for each new SoC design to be verified, enabling engineers to actually run software and an operating system in a reasonable amount of time.
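For a rough feel of what those multipliers mean in wall-clock terms, here is a back-of-envelope calculation; the cycle count and simulator throughput below are my own assumptions for illustration, not Mentor's numbers.

```python
# Back-of-envelope only: every number here is an assumption, chosen to
# show why a 1,000X-10,000X speedup matters for running software on RTL.
SIM_CYCLES = 5_000_000_000   # design clock cycles needed to boot an OS (assumed)
SIM_RATE_HZ = 100            # logic-simulator throughput, cycles/second (assumed)

def wall_clock_hours(speedup):
    """Hours of wall-clock time at a given speedup over plain simulation."""
    return SIM_CYCLES / (SIM_RATE_HZ * speedup) / 3600

for name, speedup in [("simulation", 1),
                      ("emulation, 1,000X", 1_000),
                      ("emulation, 10,000X", 10_000)]:
    print(f"{name:>18}: {wall_clock_hours(speedup):10.1f} hours")
```

Under these assumptions an OS boot drops from over a year of simulation to an overnight or lunchtime emulation run, which is the whole argument in one line of arithmetic.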

Siemens tools for system-level verification

So what’s attractive about the emulation from Veloce?

  • Maturity, many years of improving results
  • Integration
    • Vista/SystemC (multi-physics, high-level modeling)
    • Amesim (electro-mechanical)
    • PAVE360 (scenarios)
  • Verification IP (VIP) library
  • Capacity of 15 billion logic gates
  • Debug visibility
  • Automotive specific applications
  • Power verification
  • Meeting functional safety requirements

Summary

I eagerly await Level 5 autonomous driving, and between now and then will closely watch the growth in this exciting industry because of the safety and wow factors. Behind the scenes I expect more design companies in this segment to adopt hardware emulation like Veloce in order to meet their time window, produce the safest vehicle controls and enable both hardware and software debug. Mentor has assembled a compelling collection of tools to serve the automotive design market.

Read the full White Paper here.

Related Blogs


Xilinx on ANSYS Elastic Compute for Timing and EM/IR
by Bernard Murphy on 08-20-2019 at 5:00 am


I’m a fan of getting customer reality checks on advanced design technologies. This is not so much because vendors put the best possible spin on their product capabilities; of course they do (within reason), as does every other company aiming to stay in business. But application by customers on real designs often shows lower performance, QoR or whatever metrics you care about than you will see in ideal claims, not because the vendor wants to mislead you but because they can’t quantify the inefficiencies inherent in live design environments. That’s why customer numbers are so interesting; they reflect what you’re most likely to see.

ANSYS has already hosted webinars with customers like NVIDIA and others talking about the benefits of big data/elastic compute in optimizing power integrity, EM and other factors for their designs. In a recent webinar Xilinx added their viewpoint in looking for scalability in analysis for their designs at 7nm and beyond, particularly in use of SeaScape, RedHawk-SC and Path FX. Why they felt the need to do this becomes apparent when you see some of the stats below.

The customer presenter for this webinar was Nitin Navale, CAD Manager at Xilinx responsible for timing analysis and EM/IR. He started by explaining a little about their architecture to give context for the rest of the discussion. The top-level of a die is made up of between 100 and 400 fabric sub-regions (FSRs), each of which contains between 2500 and 5000 IP block instances. A packaged part may contain one or more die in a 2.5D configuration on an interposer. In this context, analysis for timing, EM and IR is first at the die level and then across die in a multi-die package.

SeaScape

The first application Nitin discussed was STA. Surely this will all be managed by the standard signoff timing tools? It seems the days of full-flat STAs are behind us and would be pointless anyway for the kind of analyses Nitin and colleagues need. So you have to strip away all but a select set of logic for analysis, but he told us that even after stripping back to a single FSR an STA run would not complete. The problem here is apparently that in Xilinx designs you may have to blackbox (BB) millions of instances; all those BB boxes and pins still consume too much memory.

What they wanted was not just to BB millions of instances but remove them and floating nets completely. Xilinx programmed this operation in Python on top of standalone SeaScape. Remember their first test kept a single FSR, with everything but blocks of interest BBed, and that wouldn’t complete in STA. They ran their SeaScape script (taking just over 6 hours) then STA completed in 12 hours per corner. A medium-sized design (33 FSRs retained) after stripping back in SeaScape ran through STA in 4 days per corner. (Nitin also talks about running Path FX trials here, which ran much faster than the equivalent STA. I’ll touch on that tool below.)
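The webinar didn't show Xilinx's actual script, but the strip-back operation it describes (keep only the instances of interest, then delete nets left floating) might look something like this toy in-memory Python. The data structures here are invented, and the real job ran distributed across machines via SeaScape rather than in a single process.

```python
# Toy version of the strip-back idea: keep only the instances of interest,
# drop everything blackboxed, then delete nets with fewer than two
# remaining pins. Invented data structures, single-process stand-in.

def strip_netlist(instances, nets, keep):
    """instances: {name: set of net names}; nets: {net: set of instance names}."""
    live_nets = {}
    for net, conns in nets.items():
        remaining = conns & keep
        if len(remaining) >= 2:          # a net with <2 pins left is floating
            live_nets[net] = remaining
    # keep the interesting instances, minus references to deleted nets
    kept = {i: pins & set(live_nets) for i, pins in instances.items() if i in keep}
    return kept, live_nets

instances = {"u1": {"n1", "n2"}, "u2": {"n2", "n3"}, "bb1": {"n3", "n4"}}
nets = {"n1": {"u1"}, "n2": {"u1", "u2"}, "n3": {"u2", "bb1"}, "n4": {"bb1"}}
kept, live = strip_netlist(instances, nets, keep={"u1", "u2"})
print(kept)   # {'u1': {'n2'}, 'u2': {'n2'}}
print(live)   # {'n2': {'u1', 'u2'}}
```

The point of SeaScape is that a per-net filter like this maps cleanly onto map-reduce, so millions of blackboxed instances can be dropped in parallel instead of serially in Tcl.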

Now you could do all this fancy stripping in the reference timing tools but not very quickly. There’s nothing intrinsically parallelizable about Tcl scripts and even if you do some very clever scripting you would have to make sure those scripts wouldn’t get confused on overlapping paths. SeaScape takes care of all this by directly managing map-reduce and compute distribution. Pretty neat that Xilinx were able to use SeaScape to make 3rd party tool runs viable.

RedHawk-SC

Next up, Nitin talked about their trials with RedHawk-SC (the SeaScape-based version of RedHawk) for EM and IR analysis. Here they have the same scale problems as for STA except that RedHawk-SC has SeaScape built-in so can natively work at full-chip scale. He mentioned that when they were doing analysis on Ultrascale back in 2015, they had to break the design into 7 partitions. It took one person-month to do the initial analysis and one person-week to do an iterative run with ECOs. On the Versal product (2018) this jumped to five person-months on the initial run and five person-weeks per iteration. Clearly this won’t be scalable to larger designs which is why they started looking at the ANSYS products.

Nitin shared preliminary data here. Run-time was reduced by 3X on the small job and by 10X on the bigger job, and they saw good correlation between RedHawk and RedHawk-SC results. Nitin said they’ve seen enough already – they’ll be deploying RedHawk-SC on their next chips. Also interesting, the RedHawk (not SC) runs need a machine with almost 1TB of memory, while RedHawk-SC distributed the work to worker machines needing only 30GB of memory per machine. Worth considering when you’re thinking about requesting more expensive servers.

Path FX

Nitin wrapped up with a discussion on their trials on Path FX, ANSYS’ Spice-accurate path timer. In the SeaScape trials I mentioned earlier, Path FX was running 4X faster than STA on the mid-size design, already notable. Of course you don’t switch to a different reference signoff of that importance based on a couple of tests, but Xilinx have another application where Path FX looks like a very interesting potential fit. In Vivado (the Xilinx design suite) it would be impossibly expensive to re-time a compile each time so the tool uses lookup tables for combinational paths. Populating those lookup tables is something Xilinx calls timing capture, and requires that paths be timed independently. While some parallelism is possible through grouping, this can become complicated on a reference STA tool and still only runs on a single host even with multi-core and multi-threading. Apparently resolving path overlaps further reduces performance.
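The timing-capture idea (time each pin-to-pin path once, in parallel, and let later compiles read a lookup table instead of re-timing) can be sketched as below. The pin names, the delay model of summing per-stage arc delays in picoseconds, and the threading are all invented for illustration; Vivado's real capture flow is far more involved.

```python
# Sketch of "timing capture": paths are independent, so they parallelize
# trivially; results land in a (source, sink) -> delay lookup table.
from concurrent.futures import ThreadPoolExecutor

def time_path(path):
    """Stand-in for a per-path delay calculation (sums arc delays in ps)."""
    src, arc_delays_ps, dst = path
    return (src, dst), sum(arc_delays_ps)

# (source pin, per-stage arc delays in ps, sink pin) -- all invented
paths = [
    ("ff1/Q", (100, 250), "lut3/I0"),
    ("ff2/Q", (50, 300, 120), "lut3/I1"),
    ("ff1/Q", (400,), "lut7/I2"),
]
with ThreadPoolExecutor() as pool:
    delay_lut = dict(pool.map(time_path, paths))

# later compiles just read the table instead of re-timing:
print(delay_lut[("ff1/Q", "lut3/I0")])   # 350
```

The hard part the sketch glosses over, and the part elastic compute addresses, is doing this at scale when paths overlap and must still be timed independently.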

Path FX however can take advantage of the same underlying elastic compute technology, intelligently distributing paths to workers and calculating pin-to-pin delays simultaneously, even on conflicting paths. And it’s more accurate than Liberty-based timing since it’s closer to Spice. They again ran a trial on a single FSR-based design. Compile was 4X faster and timing was 10X faster, compared with STA running fully parallelized. Correlation was also pretty good, though Nitin cautioned they (Xilinx) made a mistake in setting up the libraries. They have to correct this and re-run to check correlation and run-time. However he likes where this number is starting, even if it might be a bit wrong.

Overall, pretty compelling evidence that the elastic compute approach is more widely effective than parallelism in accelerating big tasks. You can register to watch the full webinar HERE.


Digging Deeper in Hardware/Software Security
by Bernard Murphy on 08-19-2019 at 10:00 am

When it comes to security we’re all outraged at the manifest incompetence of whoever was most recently hacked, leaking personal account details for tens of millions of clients, and everyone firmly believes that “they” ought to do better. Yet as a society there’s little evidence beyond our clickbait Pavlovian responses that we’re becoming more sensitized to a wider responsibility for heightened security. We’d rather ignore security options in social media, in connecting to insecure sites or in clicking on links in phishing emails, because convenience or curiosity provide instant gratification, easily outweighing barely understood and distant risks which maybe don’t even affect us directly.

This underlines the distributed nature of security threats and the need for each link in the chain to assume that other links may have been compromised, often by the weakest link of all – us. Initial countermeasures, adding a variety of security techniques on top of existing implementations, proved easy to hack because the attack surface in such approaches is huge and it is difficult to imagine all possible attacks much less defend against them.

Hardware roots of trust are the new “in” technology, stuffing all security management into a tightly guarded center. Google now has their Titan root of trust for the Google Cloud and Microsoft has their Cerberus root of trust, both implemented in hardware. These aren’t marketing gimmicks. A security flaw discovered this year in baseboard management controller firmware stacks and hardware allows for remote unauthenticated access and almost any kind of malfeasance following an attack. When a major cloud service provider is hacked and it wasn’t clearly a user problem, the reputational damage could be unbounded. Imagine what would happen to Amazon if we stopped trusting AWS security.

However – just because you built a hardware root of trust (HRoT) into your system, that doesn’t automatically make you secure. Tortuga Logic recently hosted a webinar in which they provided a nice example of what can go wrong even inside an HRoT. This is illustrated in the opening graphic. An AES (encryption/decryption) block inside the HRoT first reads the encrypted key, decrypts it and stores it in a safe location inside the HRoT in preparation for data decryption. To decrypt data for use outside the HRoT two things have to happen: the data has to be run through the AES core and the demux on the right has to be flipped from storing internally to sending outside. Makes sense to flip the switch first then start decrypting, right?

But realize that the state from the key decryption persists on that path until other data is run through the AES. If you flip the demux switch first, the plaintext key can be read outside the HRoT. Oops. A seemingly reasonable and harmless software choice just gave away your most precious secret. Why not hardwire this kind of thing instead? Because for most embedded systems users expect some level of configurability even in the HRoT (which areas of memory should be treated as secure for example). You can’t hardwire your way out of all security risks and even if you tried, you’d just replace possibly firmware-fixable bugs with definitely unfixable HW bugs.
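To make the ordering hazard concrete, here is a toy software model of the scenario; every name and the structure are invented for illustration, and the real HRoT is of course hardware, not a Python class.

```python
# Toy model of the demux-ordering leak described above. The AES output
# path holds its last value until new data flows through, so flipping the
# output demux before the next operation exposes whatever is parked there.

class ToyHRoT:
    def __init__(self):
        self.path = None             # value sitting on the AES output path
        self.demux_external = False  # False: route to internal key store
        self.keystore = None
        self.external_bus = []       # what the outside world can observe

    def aes_decrypt(self, data):
        self.path = f"plain({data})"   # stand-in for the real AES core
        self._route()

    def flip_demux_external(self):
        self.demux_external = True
        self._route()                  # the stale path value gets routed too

    def _route(self):
        if self.demux_external and self.path is not None:
            self.external_bus.append(self.path)
        else:
            self.keystore = self.path

hrot = ToyHRoT()
hrot.aes_decrypt("wrapped_key")    # key decrypted, stored internally -- fine
hrot.flip_demux_external()         # BUG: flipped before new data arrives...
print(hrot.external_bus)           # ['plain(wrapped_key)'] -- the key escaped
```

Swap the last two operations (run fresh data through the AES first, then flip the demux) and the external bus only ever sees decrypted payload data, never the key.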

Bottom-line, to run serious security checks, you have to check the operation of the software on the hardware. Like for example booting Linux on the hardware. But how do you figure out where to check for problems like this? And how do you trigger such cases? A standard software test probably won’t trigger this kind of problem. Exposing the problem likely depends on some unusual event which might happen almost anywhere in the hardware + software stack, making it close to impossible to find.

The Tortuga approach, using their Radix tool, is different. It runs within your standard functional simulations/emulations, looking for sensitizable paths representing potential security problems. These are captured in fairly easy to understand security assertions, not SVA but assertions unique to the Tortuga tools (they can help you develop these if you want the help).
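As a flavor of what such an information-flow check does conceptually, here is a minimal taint-tracking sketch in Python. To be clear, this is not Radix's assertion language or engine; the signal names and trace format are invented purely to illustrate the "secret must never reach an external port" idea.

```python
# Minimal taint-tracking sketch of an information-flow assertion such as
# "the decrypted key must never reach an external port", checked against
# a recorded trace from an ordinary functional test. All names invented.

def propagates(trace, source, sink):
    """Replay a trace of (destination, input signals) assignments and
    report whether taint from `source` ever reaches `sink`."""
    tainted = {source}
    for dst, srcs in trace:
        if tainted & set(srcs):
            tainted.add(dst)
    return sink in tainted

trace = [
    ("aes_out", ["key_reg"]),     # key decrypted onto the AES output path
    ("keystore", ["aes_out"]),    # stored internally -- allowed
    ("ext_bus", ["aes_out"]),     # demux flipped too early -- the leak
]
print(propagates(trace, "key_reg", "ext_bus"))   # True: assertion fires
```

Note the declarative flavor: the check names a source and a forbidden sink, and the existing testbench supplies the activity that might connect them.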

I like a couple of things about this approach. First, the collection of assertions represents your threat model for the system. Which means that once you understand the assertions, which in my view have a declarative flavor, you can easily assess how complete that model is, rather than trying to wrap your brain around all the details of the RTL implementation and how it might be attacked.

Second, this runs with your existing testbenches. You don’t need to generate dedicated testbenches, so you or your assigned security expert can start testing immediately and regress right alongside your functional regressions. A common question that comes up here is how complete the security signoff can be if it is simulation-based. Jason Oberg (the Tortuga CEO) answered this in the webinar. It’s not a formal guarantee, but then no known method (including formal) can provide a guarantee for most security threats. However, if your testbench coverage is good enough for functional signoff, Radix routinely finds more problems than other methods such as directed testing.

Tortuga is already partnered with Xilinx, Rambus, Cadence, Synopsys, Mentor and Sandia National Labs, so they’ve obviously impressed some important people. You can register to watch the WEBINAR REPLAY HERE .


More Steve Jobs, Apple, and NeXT Computer
by John East on 08-19-2019 at 6:00 am

My first meeting with Steve Jobs was in early 1987 when he was running NeXT Computer.  I was a VP at AMD and was hunting for potential customers.  I visited him in the NeXT Palo Alto facility with the objective of selling him some existing AMD products.  He had a different objective:  to get me to produce a new product that we had no plans to make but that he felt he needed for his NeXT machine.

Have you ever noticed that dogs can always tell if a person likes them?  Somehow we humans give off some sort of emanation that every dog can pick up.  You can’t fool a dog.  People are the same in concept,  but less sensitive.  Sometimes we can pick up the emanation,  sometimes not.  It depends on how strong the emanations are.  Well  — Steve Jobs could really emanate!!  You didn’t have to be a dog to pick up Steve’s emanations.  About two seconds after I shook Steve’s hand,  I picked up — this guy doesn’t like me.  He thinks I’m stupid. —  He didn’t say it.  He was cordial enough in a stand-offish sort of way. But there was no doubt!!  But don’t feel badly for me. I didn’t feel alone.  In the course of that meeting I found out that Apple was stupid too. And John Sculley.   And anything to do with IBM/Microsoft.  So  — at least I was in good company.  But here’s my take: he was very, very smart.  He’d gone to a liberal arts college for one semester and then dropped out.  He’d never had a day of formal training as an engineer.  Yet  — I could barely keep up with him as he was describing the technical job that he was trying to get done.

The second meeting was not that pleasant.  I told him that we had decided not to make the part that he wanted.  That didn’t please him and he said so.  He let me know in clear terms that I was not making a smart decision.  He also let me know that stupid decisions were made by stupid people.  QED.  He might have been right.  In fact, as I look back on it,  he probably was right.  Rich Page was in those meetings too.  Rich was once a Fellow at Apple and was now one of the NeXT technical gurus.  Rich and I had lunch the other day  (See the attached picture). We couldn’t remember exactly what Steve was asking for,  but we could guess well enough to agree that it probably would have been a good product.  Oh well. Still — even though I was never comfortable with Steve, I was in awe of the guy.  He was smart, rich, and good looking.  When it suited him,  he could really turn on the charm! I thought that if he could come up with a little better way of dealing with people, he could own the world.  You know what?  He did.

Sculley ran Apple until 1993.  It wasn’t easy!  Selling personal computers was nothing like selling mainframes, but it was also nothing like selling soda pop.  It was a tough combination of the two.  The company languished and Sculley was eventually fired. Mike Spindler took over briefly.  Spindler was replaced by Gil Amelio, who had been running National Semiconductor.  One of the movies made about Apple gave a not-very-favorable depiction of Gil.  That was wrong!  Gil and I worked together at Fairchild back in the 70s.  I love Gil Amelio.  I think he’s a really, really good guy.  Smart.  Hard working.  Good with people.  Unfortunately Steve Jobs didn’t see it that way.  Why did that matter?  Because Amelio decided to buy NeXT Computer, and when he did, he brought Jobs back as an advisor.  No good deed goes unpunished.  Within a year Gil was out and Steve was back in power.

Steve Jobs was more than just a technologist.  In fact, he wasn’t really a technologist at all.  But,  as Laurene Jobs said, Steve could see what wasn’t there and what was possible.  A great example of this is the iPod.  In 2000, Toshiba announced a new, very small form factor hard drive with a 1.8 inch diameter platter.  Jobs saw it at roughly the same time that everybody else did,  but his mind put the pieces together better and faster than anyone else.  When he put the pieces together, he saw a music player.  A year later, in 2001,  the iPod was born. Soon it seemed as though every teenager in America had an iPod.  I had one too, and I was nowhere near to being a teenager.  The iPod was a huge win!!

And yet a third example?   The Mac,  the iPod  and then????   —  The iPhone!!  People wanted to be on the web from wherever they happened to be – not just from their office.  And they wanted to do it without having to work hard at it  — that is,  they wanted a really simple GUI on a big,  easy to read screen.  It wasn’t there.  But it could be.  Apple didn’t invent multi-touch technology.  Apple didn’t invent capacitive sensing. Apple didn’t invent the ARM 11 processor.  But Steve saw what wasn’t there and what could be.  He acted on it.  He won again.  In the neighborhood of two billion iPhones have been sold.

Over a decade or so Apple introduced iTunes,  the iPod,  the iPhone, the iPad and finally the Apple Watch.  All really easy to use.  So easy that even a CEO could do it.  (My administrative assistant used to use that line all the time.)  What difference did those products make?  Apple went from a company that was hemorrhaging cash at the time of Gil Amelio’s appointment to a trillion-dollar market-cap company. Pretty big difference,  wouldn’t you say?!!

To repeat Laurene Jobs:  “Steve had the ability to see what wasn’t there and what was possible.”  How hard could that be? Anybody could do that,  right?  Steve got it right almost every time.  I tried like crazy to do it,  but could never quite pull it off.  One of us must have been a very special person.

I hope it was him!

Next week: the decade that changed the industry.

Pictured:  Above – Rich Page next to Steve Jobs with the other founders of NeXT Computer.

Below – Rich and I having lunch at Don Giovanni restaurant a couple of months ago. If you enlarge the picture you can barely make out some diagrams sketched on the (paper) tablecloth.  That’s what happens when a couple of old engineers share lunch.

See the entire John East series HERE.


Designing Connected Car Cockpits

by Roger C. Lanctot on 08-18-2019 at 4:00 pm

Creating engaging, though not distracting, in-vehicle experiences that enhance driving and maximize safety represents an intimidating and inspiring opportunity for automotive designers – who have already made great strides. The widespread adoption of connectivity means that artificial intelligence, in the form of increasingly sophisticated digital assistants, is transforming that experience.

Nuance, Affectiva, and Strategy Analytics have proposed a SXSW panel discussion on this topic focused on the revolutionary impact of AI on in-vehicle experiences. We’d like your vote in favor of our presentation proposal.

On Monday, August 5, all ideas received during the open Panel Picker application process for the 2020 SXSW event were posted for the online community to vote on.

Community votes make up 30% of the final decision of what makes the stage at SXSW. Input from SXSW Staff (30%) and the SXSW Advisory Board (40%) is also part of the decision making process and helps ensure that lesser-known voices have as much of a chance of being selected as individuals with large online followings. Together, these percentages help determine the final programming lineup.  
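To make the weighting above concrete, here is a minimal sketch (not SXSW’s actual algorithm — the function name and normalized inputs are assumptions for illustration) of how the three inputs described in the text could combine into a single proposal score:

```python
# Illustrative only: combine the three inputs the article describes using
# its stated weights -- community 30%, SXSW staff 30%, advisory board 40%.

def panel_score(community: float, staff: float, advisory: float) -> float:
    """Each input is a normalized score in [0, 1]; returns the weighted total."""
    return 0.30 * community + 0.30 * staff + 0.40 * advisory

# With the advisory board carrying the largest weight, a proposal with a
# big online following alone cannot dominate the outcome:
grassroots = panel_score(community=0.9, staff=0.5, advisory=0.4)  # ~0.58
insider = panel_score(community=0.4, staff=0.6, advisory=0.9)     # ~0.66
```

The point of the weighting is visible in the example: strong community support helps, but the staff and advisory-board shares keep any single constituency from deciding the lineup on its own.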

How to Vote

To participate in the voting process, visit panelpicker.sxsw.com/vote and log in or create an account. Each voter can vote once per proposal, selecting “arrow up” for yes or “arrow down” for no.

Nuance, Affectiva, and Strategy Analytics’ session can be found here: https://panelpicker.sxsw.com/vote/104568

Thank you, in advance, for your interest and support. The goal is to make driving safer, more pleasant, more intelligent, and more human.


Hopes of a 2020 recovery but nothing solid yet

by Robert Maire on 08-18-2019 at 6:00 am

An in line quarter

Applied reported a quarter just above the midpoint of guidance and analyst numbers (which mimic guidance), with revenues of $3.56B and EPS of $0.74, and guided to revenues of $3.685B +/- $150M and EPS of $0.72 to $0.80, also in line with current expectations. All in all a fairly boring quarter, with business bouncing along a soft cycle bottom.

Memory still in decline

Management pointed out the same thing we have heard from others, including the memory companies themselves: memory makers continue to reduce capacity and shipments.  This means that existing tools are being taken off line and sit idle in the fab in order to reduce the excess supply in the market.  All this idled capacity will act as a long buffer on any upturn, delaying purchases of new equipment until the idled capacity comes back on line first.  We expect this idled capacity, which continues to pile up, to be a significant impediment to new equipment purchases when memory does indeed recover.

Logic/Foundry is better

As we have also heard and reported from Semicon West, the foundry industry has kicked up spending, most notably at TSMC. The biggest driver there is the rollout of 5G.  However, it’s clear that the strength of logic alone is not enough to either offset memory weakness or spark a logic-driven up cycle.

Display is weaker

The display business has been weaker and is expected to move down a bit more.  We don’t expect a near term recovery in display given the weakening overall consumer market.

Execution and returns remain good

Overall financial performance remains very disciplined, and we see good shareholder value returned through buybacks.  Gross margin remains better than expected and better than in previous down cycles.

The stock

The lack of any inspiring new news, with no clear bottom and no clear indication of a recovery (other than pure hope), suggests that there is no real reason to run out and buy the stock.

The story was more or less in line with what we heard from Lam and KLAC, so it’s not like Applied is outperforming in any significant way.

In line performance is certainly better than a downward revision to the numbers, but management is still not ready to call a “bottom” to the downcycle, which suggests that they, too, are not confident that things will get better any time soon.

The China trade issue just adds to the overall weakness and adds more potential downside versus upside.

The stock is certainly not cheap… with EPS down by almost 50% from roughly a year ago, the stock is not down anywhere near that amount and thus remains at a high valuation for the bottom of the cycle.