
Low Power Design – Art vs. Science

by Daniel Nenni on 08-21-2019 at 10:00 am

I have heard many times before that low power and mixed-signal design is more Art than Science. I believe this is a misconception. Science is a field that builds upon previous experiences and discoveries. Art primarily seeks out creative differences, things we have not seen before that evoke emotion. The most successful designers of low power and mixed-signal devices are those with the most experience. The best of them likely have also worked on a very broad range of designs – they have seen everything. They rely a lot on what they have seen before.

sureCore, with design centers in Sheffield, England, and Leuven, Belgium, delivers custom low power design services. They do this down to the latest process nodes (7nm) and have specific experience in networking, machine learning/AI, and IoT devices (e.g., medical, wearables). sureCore will discuss their service engagement methodology and their unique mix of technology expertise at a SemiWiki Webinar Series event on Wednesday, August 28, 2019, from 10:00 am to 10:45 am PDT. To sign up for the webinar, register using your work email address HERE. We did a run-through of the presentation last week and it is absolutely worth 20 minutes of your time.

The breadth of sureCore's experience cannot be overstated; they have handled all types of mixed-signal designs including ADCs, DACs, amplifiers, regulators, PLLs, oscillators, etc., over process nodes ranging from 180nm down to today's latest and greatest. They have built their own proprietary automated design environment around proven industry tools from Cadence, Mentor, and Solido. Expertise with Solido Variation Designer is a key skill in these types of designs. I know sureCore from my time at Solido Design Automation; they were one of many happy Solido customers.

It is generally understood that most modern designs have a large portion of the layout area dedicated to memory, specifically SRAM. Any memory power savings gets multiplied by the sheer size of the memory. sureCore's sureFIT custom memory design service delivers memory instances specifically tuned to the needs of the client's application. At the webinar, they will discuss an example of a custom-built, high capacity, low power SRAM that delivered more than a 40% power savings compared to the typical memory compiler options available to their customer. You should attend the webinar to hear more about this example.
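The leverage of a memory-level saving is simple arithmetic. As an illustrative sketch: the 40% figure is from the example above, while the assumption that SRAM accounts for half of total chip power is invented purely for illustration.

```python
# Illustrative arithmetic only. The 40% SRAM saving comes from the article;
# the fraction of total chip power consumed by SRAM is an assumed number.
def chip_level_saving(sram_power_fraction, sram_saving):
    """Fraction of *total* chip power saved by reducing SRAM power alone."""
    return sram_power_fraction * sram_saving

# If SRAM burns half the chip's power (assumption) and the custom memory
# cuts SRAM power by 40% (from the article), the whole chip saves 20%:
print(chip_level_saving(0.5, 0.40))  # -> 0.2
```

The point is simply that the bigger the memory's share of the power budget, the more a memory-only optimization moves the chip-level number.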

Beyond design services, sureCore also offers two other services: characterization and verification. sureCore's characterization solution is built on the Cadence Liberate tool suite. They characterize both SRAMs and standard cell libraries. The results include the generation of a .lib file for modeling across extreme PVT corners. As their focus is low power design, they emphasize characterization at low operating voltages and support all the necessary views (NLDM, CCS timing and noise, and LVF). The resulting models are validated using Monte Carlo simulation on extracted netlists, with the margins adjusted for improved design robustness.
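For readers who have not handled one, the sketch below shows the general shape of a Liberty (.lib) model at a single low-voltage corner. All names and values here are invented for illustration; real files are generated by the characterization flow, not written by hand, and carry full NLDM/CCS/LVF data.

```
/* Hypothetical, heavily abbreviated sketch -- not a real deliverable */
library (sram_lp_ss_0p60v_m40c) {        /* slow corner, 0.60 V, -40 C */
  delay_model     : table_lookup;        /* NLDM */
  nom_process     : 1.0;
  nom_voltage     : 0.60;
  nom_temperature : -40.0;

  cell (sram_512x32) {
    pin (CLK) {
      direction : input;
      clock     : true;
    }
    /* timing arcs, power tables, CCS and LVF data would follow here */
  }
}
```

One such library is produced per PVT corner, which is why low-voltage corners get special attention in a low power flow.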

Getting back to my original point, success in producing robust designs that meet the challenges of today's low power specifications does not require 'art'; it requires science, experience, and expertise – and lots of it. sureCore's design team has more than 200 person-years of combined low power design experience. They have proven design experience down to 7nm, including custom SRAMs, layout, characterization, and more. Learn more about sureCore at their upcoming webinar.

About sureCore
sureCore Limited is an SRAM IP company based in Sheffield, UK, developing low power memories for current and next-generation, silicon process technologies. Its award-winning, world-leading, low power SRAM design is process independent and variability tolerant, making it suitable for a wide range of technology nodes. This IP helps SoC developers meet challenging power budgets and manufacturability constraints posed by leading-edge process nodes.

Also Read:

WEBINAR: The Brave New World of Customized Memory

Custom SRAM IP @56thDAC

Low Power SRAM Compiler and Characterization Enable IoT Applications


Semicon West 2019 – Day 4 – Soitec

by Scotten Jones on 08-21-2019 at 6:00 am

Last year at Semicon I sat down with Soitec and got an update on the company; you can read my write-up from last year here. A key point last year was that Soitec was continuing to be profitable and grow after several years of financial struggles.

On Thursday, July 11th I got to sit down with Soitec's CEO, Paul Boudre, for an update on Soitec's continued progress.

Soitec is a key supplier of SOI substrates for both RFSOI and FDSOI processes. In my previous Semicon write-up from this year I discussed GLOBALFOUNDRIES' (GF) commitment to SOI and the related business opportunities; the GF article is available here. Soitec sees GF and Samsung offering FDSOI foundry services and ST Micro using FDSOI internally, and now sees the platform being adopted for IoT products, with FDSOI used by Microsoft and NXP for voice recognition. GF and Samsung have platform roadmaps – not just nodes – spanning memory, RF, etc., and TSMC now makes RFSOI as well. The market for SOI in mobile, automotive and IoT is expected to continue to grow in the years ahead.

Smartphones had 20mm2 of SOI content two years ago; today it is 50 to 60mm2, and in two more years Soitec expects 100 to 150mm2.

But Soitec doesn’t just see themselves as an SOI provider but rather as a provider of engineered substrates and they are applying their core competencies to broaden their market. For example:

  • Piezo On Insulator (POI) provides piezo materials on an insulator for filter applications. This is a huge market – as big as the front-end module market for cell phones – and they currently have 0% market share.
  • Silicon Carbide (SiC) is a rapidly growing material for power semiconductors but the wafers are very expensive and supply constrained. Soitec can use their SmartCut technology to make 10 SiC wafers out of one wafer.
  • They acquired EpiGaN of Belgium for GaN-on-silicon aimed at 5G base station power amplifiers (PAs). They have no PA market share today, so this is both a big opportunity and an entirely new market for them – they are not currently in base stations.

In terms of capacity:

  • Singapore has just been qualified and can ramp up to 1 million 300mm wafers per year.
  • France is ramping from 650 thousand 300mm wafers per year to 1 million wafers per year.
  • The France R&D factory is being converted to POI and will have 400 thousand wafers per year of capacity. Soitec’s R&D has been moved to a joint lab at Leti.
  • 200mm capacity is now full in France and the overflow is going to their China partner who is ramping up from 150 thousand wafers per year to 375 thousand wafers per year.

Soitec will spend €130 million on capital this year and self-fund it while generating approximately €180 million of EBIT. In 2015 they were in chapter 11 with a €130 million valuation; today they are valued at €3 billion! They have been growing at 30% to 40% per year for the last few years and are shooting for 30% this year.

About Soitec
Soitec (Euronext Paris) is an industry leader in designing and manufacturing innovative semiconductor materials. The company uses its unique technologies and semiconductor expertise to serve the electronics markets. With more than 3,500 patents worldwide, Soitec’s strategy is based on disruptive innovation to answer its customers’ needs for high performance, energy efficiency and cost competitiveness. Soitec has manufacturing facilities, R&D centers and offices in Europe, the U.S. and Asia. For more information, please visit www.soitec.com


Making pre-Silicon Verification Plausible for Autonomous Vehicles

by Daniel Payne on 08-20-2019 at 10:00 am

AV diagram

I love reading about the amazing progress of autonomous vehicles – like when Audi's A8 sedan became the first to reach Level 3 autonomy, closely followed by Tesla at Level 2, although Tesla gets far more media attention here in the US. A friend of mine bought his wife a car that offers adaptive cruise control with auto-braking as needed. All of these cool, life-saving features from the ever-increasing levels of automotive automation are made possible only through advanced silicon, sensors, radar, lidar, firmware and software.

For SoC chip design teams the risks and rewards are huge, but the biggest rewards will likely go to the companies that get their new IC designs into silicon first. So the race is on, and one big part of the engineering schedule continues to be pre-silicon verification. The only way to verify is to model the design and then run extensive simulations across all features and scenarios.


Software-based simulation is always a good starting point for verification because it is a mature technology familiar to the majority of engineers, but the downside is that the larger the SoC and the more scenarios required, the longer the runtimes – until the approach becomes infeasible due to time constraints. Our industry has an answer for meeting those verification time constraints: a hardware-based approach using emulation.

From attending DAC in 2019, plus daily reading of news headlines and my LinkedIn feed, I have observed three mega-trends in the automotive market segment:

Increased complexity in automotive systems is creating bigger demands for SoC verification. Reaching Level 5 autonomy requires car makers to build systems using multiple CPUs, AI engines and image processors. These systems are controlled by sophisticated software, requiring verification for functional correctness plus compliance with the safety standards described by ISO 26262.

Car companies will need to find the source of hardware and software bugs during testing and verification, and the quicker the better.

There are three verification phases to consider in designing SoCs for automotive use:

  • Model-in-the-loop (MiL), using high-level models
    • Plus: speed, capacity
    • Minus: accuracy
  • Software-in-the-loop (SiL), using logic simulation
    • Plus: accurate, debugging
    • Minus: speed, capacity
  • Hardware-in-the-loop (HiL), using emulation
    • Plus: speed, capacity

Siemens PLM Software already has a vehicle design and verification program called PAVE360, and within that the IC design part uses the Veloce emulator. Attending DAC this summer I noticed that Mentor and other vendors all prominently displayed their emulators, because of how important they are to verification of the most complex systems. The more that a system relies on software, the more likely it is that you will adopt emulation for verification.

Why emulation? Mostly speed and capacity: running 1,000X to 10,000X faster on a specialized emulator compared to a logic simulator is a compelling methodology. The emulator gets this speed advantage by using a hardware fabric that can be reconfigured for each new SoC design to be verified, enabling engineers to actually run software and an operating system in a reasonable amount of time.
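To make the 1,000X to 10,000X range concrete, here is a back-of-envelope sketch. The simulated workload size is an invented assumption; only the speedup range comes from the text above.

```python
# Back-of-envelope only: SIM_HOURS is an assumed workload; the speedup
# range (1,000X-10,000X) is the figure quoted for emulation vs. simulation.
SIM_HOURS = 5000  # hypothetical logic-simulation time for a scenario suite

for speedup in (1_000, 10_000):
    hours = SIM_HOURS / speedup
    print(f"{speedup:>6}X emulation -> {hours:g} hours")  # 5 hours / 0.5 hours
```

A half-year of simulation collapsing into hours is what makes full software bring-up on pre-silicon hardware plausible at all.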

Siemens tools for system-level verification

So what’s attractive about the emulation from Veloce?

  • Maturity, many years of improving results
  • Integration
    • Vista/SystemC (multi-physics, high-level modeling)
    • Amesim (electro-mechanical)
    • PAVE360 (scenarios)
  • Verification IP (VIP) library
  • Capacity of 15 billion logic gates
  • Debug visibility
  • Automotive specific applications
  • Power verification
  • Meeting functional safety requirements

Summary

I eagerly await Level 5 autonomous driving, and between now and then will closely watch the growth in this exciting industry because of the safety and wow factors. Behind the scenes I expect more design companies in this segment to adopt hardware emulation like Veloce in order to meet their time window, produce the safest vehicle controls and enable both hardware and software debug. Mentor has assembled a compelling collection of tools to serve the automotive design market.

Read the full White Paper here.

Related Blogs


Xilinx on ANSYS Elastic Compute for Timing and EM/IR

by Bernard Murphy on 08-20-2019 at 5:00 am

RedHawk-SC

I’m a fan of getting customer reality checks on advanced design technologies. This is not so much because vendors put the best possible spin on their product capabilities; of course they do (within reason), as does every other company aiming to stay in business. But application by customers on real designs often shows lower performance, QoR or whatever metrics you care about than you will see in ideal claims, not because the vendor wants to mislead you but because they can’t quantify the inefficiencies inherent in live design environments. That’s why customer numbers are so interesting; they reflect what you’re most likely to see.

ANSYS has already hosted webinars with customers like NVIDIA and others talking about the benefits of big data/elastic compute in optimizing power integrity, EM and other factors for their designs. In a recent webinar Xilinx added their viewpoint in looking for scalability in analysis for their designs at 7nm and beyond, particularly in use of SeaScape, RedHawk-SC and Path FX. Why they felt the need to do this becomes apparent when you see some of the stats below.

The customer presenter for this webinar was Nitin Navale, CAD Manager at Xilinx responsible for timing analysis and EM/IR. He started by explaining a little about their architecture to give context for the rest of the discussion. The top-level of a die is made up of between 100 and 400 fabric sub-regions (FSRs), each of which contains between 2500 and 5000 IP block instances. A packaged part may contain one or more die in a 2.5D configuration on an interposer. In this context, analysis for timing, EM and IR is first at the die level and then across die in a multi-die package.

SeaScape

The first application Nitin discussed was STA. Surely this will all be managed by the standard signoff timing tools? It seems the days of full-flat STA runs are behind us, and they would be pointless anyway for the kind of analyses Nitin and colleagues need. So you have to strip away all but a select set of logic for analysis, but he told us that even after stripping back to a single FSR, an STA run would not complete. The problem is apparently that in Xilinx designs you may have to blackbox (BB) millions of instances, and all those BBs and their pins still consume too much memory.

What they wanted was not just to BB millions of instances but remove them and floating nets completely. Xilinx programmed this operation in Python on top of standalone SeaScape. Remember their first test kept a single FSR, with everything but blocks of interest BBed, and that wouldn’t complete in STA. They ran their SeaScape script (taking just over 6 hours) then STA completed in 12 hours per corner. A medium-sized design (33 FSRs retained) after stripping back in SeaScape ran through STA in 4 days per corner. (Nitin also talks about running Path FX trials here, which ran much faster than the equivalent STA. I’ll touch on that tool below.)

Now you could do all this fancy stripping in the reference timing tools but not very quickly. There’s nothing intrinsically parallelizable about Tcl scripts and even if you do some very clever scripting you would have to make sure those scripts wouldn’t get confused on overlapping paths. SeaScape takes care of all this by directly managing map-reduce and compute distribution. Pretty neat that Xilinx were able to use SeaScape to make 3rd party tool runs viable.
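SeaScape's API is not public, so the snippet below is only a toy sketch of the operation Xilinx scripted: dropping blackboxed instances entirely, then pruning any nets left floating. The data model (plain Python dicts) and all names are invented for illustration.

```python
# Toy model of "strip blackboxes and floating nets" -- NOT SeaScape code.
# `instances` maps an instance name to the set of nets it connects to.
def strip_netlist(instances, nets, keep):
    """Retain only the instances of interest; drop the rest and any net
    that no longer connects to a retained instance (i.e. is floating)."""
    kept = {name: pins for name, pins in instances.items() if name in keep}
    connected = set().union(*kept.values()) if kept else set()
    return kept, nets & connected

insts = {
    "fsr0_blk": {"n1", "n2"},   # block of interest
    "bb_a":     {"n2", "n3"},   # would have been blackboxed
    "bb_b":     {"n3"},         # would have been blackboxed
}
kept, live = strip_netlist(insts, {"n1", "n2", "n3"}, keep={"fsr0_blk"})
print(sorted(kept))  # -> ['fsr0_blk']
print(sorted(live))  # -> ['n1', 'n2']
```

In the real flow, SeaScape shards this kind of operation across workers via map-reduce; the logic per shard is as simple as the sketch suggests, which is why the whole pre-pass finished in about six hours.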

RedHawk-SC

Next up, Nitin talked about their trials with RedHawk-SC (the SeaScape-based version of RedHawk) for EM and IR analysis. Here they have the same scale problems as for STA except that RedHawk-SC has SeaScape built-in so can natively work at full-chip scale. He mentioned that when they were doing analysis on Ultrascale back in 2015, they had to break the design into 7 partitions. It took one person-month to do the initial analysis and one person-week to do an iterative run with ECOs. On the Versal product (2018) this jumped to five person-months on the initial run and five person-weeks per iteration. Clearly this won’t be scalable to larger designs which is why they started looking at the ANSYS products.

Nitin shared preliminary data here. Run-time was reduced by 3X on the small job and by 10X on the bigger job, and they saw good correlation between RedHawk and RedHawk-SC results. Nitin said they've seen enough already – they'll be deploying RedHawk-SC on their next chips. Also interesting: the RedHawk (not SC) runs need a machine with almost 1TB of memory, while RedHawk-SC distributed to worker machines needing only 30GB of memory each. Worth considering when you're thinking about requesting more expensive servers.

Path FX

Nitin wrapped up with a discussion on their trials on Path FX, ANSYS’ Spice-accurate path timer. In the SeaScape trials I mentioned earlier, Path FX was running 4X faster than STA on the mid-size design, already notable. Of course you don’t switch to a different reference signoff of that importance based on a couple of tests, but Xilinx have another application where Path FX looks like a very interesting potential fit. In Vivado (the Xilinx design suite) it would be impossibly expensive to re-time a compile each time so the tool uses lookup tables for combinational paths. Populating those lookup tables is something Xilinx calls timing capture, and requires that paths be timed independently. While some parallelism is possible through grouping, this can become complicated on a reference STA tool and still only runs on a single host even with multi-core and multi-threading. Apparently resolving path overlaps further reduces performance.

Path FX, however, can take advantage of the same underlying elastic compute technology, intelligently distributing paths to workers and calculating pin-to-pin delays simultaneously, even on conflicting paths. And it is more accurate than Liberty-based timing since it is closer to Spice. They again ran a trial on a single-FSR-based design. Compile was 4X faster and timing was 10X faster, compared with STA running fully parallelized. Correlation was also pretty good, though Nitin cautioned that they (Xilinx) made a mistake in setting up the libraries; they have to correct this and re-run to check correlation and run-time. However, he likes where these numbers are starting out, even if they might be a bit off.
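The "time each path independently, in parallel" idea can be sketched in a few lines. The delay model and path data below are invented; real path timing needs transistor-level accuracy, which is the whole point of Path FX.

```python
# Toy illustration of distributing independent path-delay calculations to
# workers, in the spirit of (but in no way equivalent to) Path FX.
from concurrent.futures import ThreadPoolExecutor

def time_path(path):
    """Sum per-stage (cell_delay, wire_delay) pairs -- a fake delay model."""
    return sum(cell + wire for cell, wire in path)

paths = [  # invented paths, delays in ns
    [(0.10, 0.02), (0.08, 0.01)],
    [(0.12, 0.03)],
    [(0.05, 0.01), (0.05, 0.01), (0.05, 0.01)],
]

with ThreadPoolExecutor() as pool:   # each path is timed independently
    delays = list(pool.map(time_path, paths))
print([round(d, 3) for d in delays])  # -> [0.21, 0.15, 0.18]
```

Because each path's delay depends only on its own stages, the work distributes with no coordination between workers – exactly the property that makes elastic compute effective here.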

Overall, pretty compelling evidence that the elastic compute approach is more widely effective than parallelism in accelerating big tasks. You can register to watch the full webinar HERE.


Digging Deeper in Hardware/Software Security

by Bernard Murphy on 08-19-2019 at 10:00 am

When it comes to security, we're all outraged at the manifest incompetence of whoever was most recently hacked, leaking personal account details for tens of millions of clients, and everyone firmly believes that "they" ought to do better. Yet as a society there's little evidence, beyond our clickbait Pavlovian responses, that we're becoming more sensitized to a wider responsibility for heightened security. We'd rather ignore security options in social media, connect to insecure sites or click on links in phishing emails, because convenience or curiosity provide instant gratification, easily outweighing barely understood and distant risks which maybe don't even affect us directly.

This underlines the distributed nature of security threats and the need for each link in the chain to assume that other links may have been compromised, often by the weakest link of all – us. Initial countermeasures, adding a variety of security techniques on top of existing implementations, proved easy to hack because the attack surface in such approaches is huge and it is difficult to imagine all possible attacks much less defend against them.

Hardware roots of trust are the new “in” technology, stuffing all security management into a tightly guarded center. Google now has their Titan root of trust for the Google Cloud and Microsoft has their Cerberus root of trust, both implemented in hardware. These aren’t marketing gimmicks. A security flaw discovered this year in baseboard management controller firmware stacks and hardware allows for remote unauthenticated access and almost any kind of malfeasance following an attack. When a major cloud service provider is hacked and it wasn’t clearly a user problem, the reputational damage could be unbounded. Imagine what would happen to Amazon if we stopped trusting AWS security.

However – just because you built a hardware root of trust (HRoT) into your system, that doesn’t automatically make you secure. Tortuga Logic recently hosted a webinar in which they provided a nice example of what can go wrong even inside an HRoT. This is illustrated in the opening graphic. An AES (encryption/decryption) block inside the HRoT first reads the encrypted key, decrypts it and stores it in a safe location inside the HRoT in preparation for data decryption. To decrypt data for use outside the HRoT two things have to happen: the data has to be run through the AES core and the demux on the right has to be flipped from storing internally to sending outside. Makes sense to flip the switch first then start decrypting, right?

But realize that the state from the key decryption persists on that path until other data is run through the AES. If you flip the demux switch first, the plaintext key can be read outside the HRoT. Oops. A seemingly reasonable and harmless software choice just gave away your most precious secret. Why not hardwire this kind of thing instead? Because for most embedded systems users expect some level of configurability even in the HRoT (which areas of memory should be treated as secure for example). You can’t hardwire your way out of all security risks and even if you tried, you’d just replace possibly firmware-fixable bugs with definitely unfixable HW bugs.
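To make the ordering hazard concrete, here is a toy software model – not Tortuga's model, and not real AES. The "AES output" register simply keeps its last value, so the order in which you flip the demux determines what escapes the HRoT.

```python
# Toy model of the webinar's example -- NOT real AES or Tortuga's model.
# It only shows why the ORDER of "flip the demux" vs. "run new data through
# the AES" matters: the AES output register keeps its last value.

class ToyHRoT:
    def __init__(self):
        self.aes_out = None        # whatever was last on the AES output path
        self.key_store = None      # safe register inside the HRoT
        self.external_bus = None   # visible outside the HRoT

    def decrypt_key(self, encrypted_key):
        # Stage 1: decrypt the key; the plaintext persists on aes_out
        self.aes_out = "plain:" + encrypted_key
        self.key_store = self.aes_out          # stored internally: fine

    def set_demux_external(self):
        # Route the AES output path off-chip -- exposes aes_out as-is
        self.external_bus = self.aes_out

    def decrypt_data(self, ciphertext):
        # Stage 2: new data overwrites the stale key on the output path
        self.aes_out = "data:" + ciphertext

# Unsafe ordering: flip the demux first, then decrypt -- the stale key leaks
h = ToyHRoT()
h.decrypt_key("0xSECRET")
h.set_demux_external()                 # oops: aes_out still holds the key
print(h.external_bus)                  # -> plain:0xSECRET  (leaked!)

# Safe ordering: run the new data through the AES first, then flip
h2 = ToyHRoT()
h2.decrypt_key("0xSECRET")
h2.decrypt_data("ciphertext")
h2.set_demux_external()
print(h2.external_bus)                 # -> data:ciphertext (key never leaves)
```

The bug lives entirely in the sequencing, which is why it cannot be caught by inspecting the hardware or the firmware in isolation.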

Bottom-line, to run serious security checks, you have to check the operation of the software on the hardware. Like for example booting Linux on the hardware. But how do you figure out where to check for problems like this? And how do you trigger such cases? A standard software test probably won’t trigger this kind of problem. Exposing the problem likely depends on some unusual event which might happen almost anywhere in the hardware + software stack, making it close to impossible to find.

Tortuga's Radix tool takes a different approach. It runs within your standard functional simulations/emulations, looking for sensitizable paths representing potential security problems. These are captured in fairly easy-to-understand security assertions – not SVA, but assertions unique to the Tortuga tools (they can help you develop these if you want the help).

I like a couple of things about this approach. First, the collection of assertions represents your threat model for the system. Which means that once you understand the assertions, which in my view have a declarative flavor, you can easily assess how complete that model is, rather than trying to wrap your brain around all the details of the RTL implementation and how it might be attacked.

Second, this runs with your existing testbenches. You don't need to generate dedicated testbenches, so you or your assigned security expert can start testing immediately and regress right alongside your functional regressions. A common question that comes up here is how complete the security signoff can be if it is simulation based. Jason Oberg (the Tortuga CEO) answered this in the webinar: it's not a formal guarantee, but then no known method (including formal) can provide a guarantee for most security threats. However, if your testbench coverage is good enough for functional signoff, Radix routinely finds more problems than other methods such as directed testing.

Tortuga is already partnered with Xilinx, Rambus, Cadence, Synopsys, Mentor and Sandia National Labs, so they've obviously impressed some important people. You can register to watch the WEBINAR REPLAY HERE.


More Steve Jobs, Apple, and NeXT Computer

by John East on 08-19-2019 at 6:00 am

My first meeting with Steve Jobs was in early 1987 when he was running NeXT Computer.  I was a VP at AMD and was hunting for potential customers.  I visited him in the NeXT Palo Alto facility with the objective of selling him some existing AMD products.  He had a different objective:  to get me to produce a new product that we had no plans to make but that he felt he needed for his NeXT machine.

Have you ever noticed that dogs can always tell if a person likes them?  Somehow we humans give off some sort of emanation that every dog can pick up.  You can’t fool a dog.  People are the same in concept,  but less sensitive.  Sometimes we can pick up the emanation,  sometimes not.  It depends on how strong the emanations are.  Well  — Steve Jobs could really emanate!!  You didn’t have to be a dog to pick up Steve’s emanations.  About two seconds after I shook Steve’s hand,  I picked up — this guy doesn’t like me.  He thinks I’m stupid. —  He didn’t say it.  He was cordial enough in a stand-offish sort of way. But there was no doubt!!  But don’t feel badly for me. I didn’t feel alone.  In the course of that meeting I found out that Apple was stupid too. And John Sculley.   And anything to do with IBM/Microsoft.  So  — at least I was in good company.  But here’s my take: he was very, very smart.  He’d gone to a liberal arts college for one semester and then dropped out.  He’d never had a day of formal training as an engineer.  Yet  — I could barely keep up with him as he was describing the technical job that he was trying to get done.

The second meeting was not that pleasant.  I told him that we had decided not to make the part that he wanted.  That didn’t please him and he said so.  He let me know in clear terms that I was not making a smart decision.  He also let me know that stupid decisions were made by stupid people.  QED.  He might have been right.  In fact, as I look back on it,  he probably was right.  Rich Page was in those meetings too.  Rich was once a fellow at Apple and was now one of the NeXT technical gurus.  Rich and I had lunch the other day  (See the attached picture). We couldn’t remember exactly what Steve was asking for,  but we could guess well enough to agree that it probably would have been a good product.  Oh well. Still — even though I was never comfortable with Steve, I was in awe of the guy.  He was smart, rich, and good looking.  When it suited him,  he could really turn on the charm! I thought that if he could come up with a little better way of dealing with people that he could own the world.  You know what?  He did.

Sculley ran Apple until 1993.  It wasn't easy!  Selling personal computers was nothing like selling mainframes, but it was also nothing like selling soda pop.  It was a tough combination of the two.  The company languished and Sculley was eventually fired.  Mike Spindler took over briefly.  Spindler was replaced by Gil Amelio, who had been running National Semiconductor.  One of the movies made about Apple depicted Gil in a not very favorable light.  That was wrong!  Gil and I worked together at Fairchild back in the 70s.  I love Gil Amelio.  I think he's a really, really good guy.  Smart.  Hard working.  Good with people.  Unfortunately Steve Jobs didn't see it that way.  Why did that matter?  Because Amelio decided to buy NeXT Computer and when he did, he brought Jobs back as an advisor.  No good deed goes unpunished.  Within a year Gil was out and Steve was back in power.

Steve Jobs was more than just a technologist.  In fact, he wasn't really a technologist at all.  But, as Laurene Jobs said, Steve could see what wasn't there and what was possible.  A great example of this is the iPod.  In 2000, Toshiba announced a new, very small form factor hard drive with a 1.8 inch diameter platter.  Jobs saw it at roughly the same time that everybody else did, but his mind put the pieces together better and faster than anyone else's.  When he put the pieces together, he saw a music player.  A year later, in 2001, the iPod was born.  Soon it seemed as though every teenager in America had an iPod.  I had one too and I was nowhere near to being a teenager.  The iPod was a huge win!!

And yet a third example?   The Mac, the iPod and then????   —  The iPhone!!  People wanted to be on the web from wherever they happened to be – not just from their office.  And they wanted to do it without having to work hard at it  — that is, they wanted a really simple GUI on a big, easy to read screen.  It wasn't there.  But it could be.  Apple didn't invent multi-touch technology.  Apple didn't invent capacitive sensing.  Apple didn't invent the ARM 11 processor.  But Steve saw what wasn't there and what could be.  He acted on it.  He won again.  In the neighborhood of two billion iPhones have been sold.

Over a decade or so Apple introduced iTunes, the iPod, the iPhone, the iPad and finally the Apple Watch.  All really easy to use.  So easy that even a CEO could do it.  (My administrative assistant used to use that line all the time.)  What difference did those products make?  Apple went from a company that was hemorrhaging cash at the time of Gil Amelio's appointment to a trillion dollar market-cap company.  Pretty big difference, wouldn't you say?!!

To repeat Laurene Jobs:  "Steve had the ability to see what wasn't there and what was possible."  How hard could that be?  Anybody could do that, right?  Steve got it right almost every time.  I tried like crazy to do it, but could never quite pull it off.  One of us must have been a very special person.

I hope it was him!

Next week: the decade that changed the industry.

Pictured:  Above – Rich Page next to Steve Jobs with the other founders of NeXT Computer.

Below – Rich and I having lunch at Don Giovanni restaurant a couple of months ago. If you enlarge the picture you can barely make out some diagrams sketched on the (paper) tablecloth.  That’s what happens when a couple of old engineers share lunch.

See the entire John East series HERE.


Designing Connected Car Cockpits

by Roger C. Lanctot on 08-18-2019 at 4:00 pm

Creating engaging, though not distracting, in-vehicle experiences that enhance driving and maximize safety represents an intimidating yet inspiring opportunity for automotive designers – who have already made great strides. The widespread adoption of connectivity means that artificial intelligence, in the form of increasingly sophisticated digital assistants, is transforming that experience.

Nuance, Affectiva, and Strategy Analytics have proposed an SXSW panel discussion on this topic, focused on the revolutionary impact of AI on in-vehicle experiences. We'd like your vote in favor of our presentation proposal.

On Monday, August 5, all ideas received during the open Panel Picker application process for the 2020 SXSW event were posted for the online community to vote on.

Community votes make up 30% of the final decision of what makes the stage at SXSW. Input from SXSW Staff (30%) and the SXSW Advisory Board (40%) is also part of the decision making process and helps ensure that lesser-known voices have as much of a chance of being selected as individuals with large online followings. Together, these percentages help determine the final programming lineup.  

How to Vote

To participate in the voting process, visit panelpicker.sxsw.com/vote and log in or create an account. Each voter can vote once per proposal – selecting “arrow up” for yes or “arrow down” for no.

Nuance, Affectiva, and Strategy Analytics’ session can be found here: https://panelpicker.sxsw.com/vote/104568

Thank you, in advance, for your interest and support. The goal is to make driving safer, more pleasant, more intelligent, and more human.


Hopes of a 2020 recovery but nothing solid yet

by Robert Maire on 08-18-2019 at 6:00 am

An in line quarter

Applied reported a quarter just above the midpoint of guidance and analyst numbers (which mimic guidance), with revenues of $3.56B and EPS of $0.74, and guidance of $3.685B ±$150M and EPS of $0.72 to $0.80, also in line with current expectations. All in all, a fairly boring quarter with business bouncing along a soft cycle bottom.

Memory still in decline

Management pointed out the same thing we have heard from others, as well as from the memory companies themselves: memory makers continue to reduce capacity and shipments. This means that existing tools are being taken off line, to sit idle in the fab, in order to reduce the excess supply in the market. All this idled capacity will act as a buffer on any upturn, delaying purchases of new equipment until the idled capacity comes back on line first. We think this idled capacity, which continues to pile up, will be a significant impediment to new equipment purchases when memory does indeed recover.

Logic/Foundry is better

As we also heard and reported from Semicon West, the foundry industry has kicked up spending, most notably TSMC. The biggest driver there is the roll out of 5G. However, it’s clear that the strength of logic alone is not enough to either offset memory weakness or spark a logic-driven up cycle.

Display is weaker

The display business has been weaker and is expected to move down a bit more. We don’t expect a near term recovery in display given the weakening overall consumer market.

Execution and returns remain good

Overall financial performance remains very disciplined, and we see good shareholder value returned through buybacks. Gross margin remains better than expected and better than experienced in previous down cycles.

The stock

The lack of inspiring news, with no clear bottom and no clear indication of a recovery (other than pure hope), suggests that there is no real reason to run out and buy the stock.

The story was more or less in line with what we heard from Lam and KLAC, so it’s not like Applied is outperforming in any significant way.

In line performance is certainly better than a downward revision to the numbers, but management is still not ready to call a “bottom” to the downcycle, which suggests that they are not confident that things will get better any time soon.

The China trade issue just adds to the overall weakness and adds more potential downside versus upside.

The stock is certainly not cheap. With EPS down by almost 50% from roughly a year ago, the stock is not down anywhere near that amount and thus remains at a high valuation for the bottom of the cycle.


Tensilica HiFi DSPs for What I Want to Hear, and What I Don’t Want to Hear

by Randy Smith on 08-16-2019 at 10:00 am

It seems every day we see a new article (or ten) on autonomous driving. It is an especially hot topic, and it will happen someday. For now, we can dream about it, and many people are working on it. But for the present, the technology in a car that commands my attention is audio. I’ve been a musician since 4th grade, and I still perform occasionally today. I love all types of music, and the one time I can listen to whatever I want is when I am alone in the car. So, when I attended Cadence’s Automotive Design Summit at the end of July, the session titled HiFi DSPs for Automotive Infotainment had my full attention. Larry Przywara, Cadence’s Product Line Group Director for Audio/Voice IP, Tensilica Products, in the Cadence IP Group, gave the presentation. So, thank you, Larry, you made my day!

If you live in Silicon Valley, you know we are in our cars a lot. I try to stay off the cell phone, but I need to stay connected, so I use CarPlay – a feature we didn’t even dream about ten years ago. While I was happy when higher fidelity sound first started appearing in cars (e.g., DTS, DVD-A, etc.), many cars no longer support physical media at all. My phone has over 16GB of music on it; why should I need to fumble with a disc? So, today’s infotainment system needs to support many audio sources, including Bluetooth, AM, FM, Digital Radio (HD Radio, DAB, DAB+, DRM…), CarPlay, Android Auto, Sirius XM, MP3 from hard disk, and the list keeps growing. Most importantly, Cadence HiFi licensees, of which there are well over one hundred, can get support for any of the digital terrestrial and satellite radio audio standards because they are all supported on the HiFi Audio DSP.

The presentation also pointed out that the new advanced audio/voice features coming to our cars very soon can all be supported on Tensilica HiFi DSPs. These new features include advanced noise cancellation to eliminate road and engine noise, improved speech recognition, audible directional warnings to let us hear the direction we need to be concerned about, and even improved in-cabin communications. In-cabin communications? What’s this? Well, today’s SUV drivers need to shout to be heard in the back row – but what if the car could pipe that voice back to the back row for you instead, no shouting needed? Or, how do you like this feature: “sound bubbles”? The adult driver relaxes to soft jazz, while the kids are watching a movie in the back, and the front row passenger is listening to their favorite podcast – and none of them hears the others’ content. Wow, that sounds nice! The power of Tensilica’s DSP technology will be doing all of it.

If you want a real-life example from the over 100 HiFi licensees, look no further than the Samsung Exynos Auto V9 automotive processor. While the chip does employ Arm processors for some of the infotainment features, the audio portion uses four Tensilica HiFi 4 DSPs, as seen here. DSP architectures are more suitable for processing audio and for speech recognition. The advantage of using DSPs is due in part to the low latency characteristics of DSPs in general, and the high performance of Tensilica DSPs in particular.

Finally, as with any DSP core, you need to have the proper software available as well. Indeed, Cadence’s list of 3rd-party Automotive Partners is impressive and complete. No matter which audio features the car manufacturer, or infotainment OEM wants to provide – a complete HiFi solution will be available.

“Sorry Honey, I guess I will be getting a new car again soon…”


Chapter 6 – Specialization in the Semiconductor Industry

by Wally Rhines on 08-16-2019 at 6:00 am

Recently, the combined market share of the top ten and top twenty semiconductor companies has been increasing, contrary to the trend of the last fifty years.  Given the acceleration in mergers and acquisitions that began in 2015, one might assume that, as the semiconductor industry approaches maturity, companies are consolidating to increase their competitive advantage through economies of scale.  After all, that’s what many industries, including disk drives and DRAMs, have done in the past.  Closer examination of this trend, however, indicates that semiconductor companies are moving toward specialization rather than just bulking up to increase their revenue.  Let’s look at the five largest semiconductor companies, where the consolidation is most evident.  The combined market share of these companies has been increasing in recent years as they grow at a 9% compound average growth rate (CAGR) versus a market that grew at 2% CAGR through 2017 (Figure 1). Did they grow by acquisition of other companies?  In general, “no”.
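As a quick aside, the CAGR figures cited above follow from the standard compound-growth formula. The sketch below uses made-up start and end revenues, chosen only to illustrate what a roughly 9% rate looks like; they are not actual industry data.

```python
def cagr(start, end, years):
    """Compound average growth rate: the constant annual rate that
    takes `start` to `end` over `years` periods."""
    return (end / start) ** (1.0 / years) - 1.0

# Hypothetical illustration (not actual figures): revenue growing from
# $100B to $168B over six years corresponds to roughly a 9% CAGR.
print(round(cagr(100.0, 168.0, 6) * 100, 1))
```

Note that a 9% CAGR against a 2% market CAGR compounds quickly: over six years the faster growers expand about 68% while the market grows about 13%, which is why the combined-share curve in Figure 1 rises.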

Figure 1.  Increasing combined market share of the five largest semiconductor companies

Despite acquisitions like Altera, Intel’s market share over the period from 2010 to 2016 was flat at about 15.5%. Samsung gained market share during the period, moving from 10.2 to 12.1% but this gain was not caused by acquisitions.  TSMC, the third largest semiconductor company by revenue, grew its market share substantially during the period, rising from 4.5 to 8.1% with no acquisitions.  And Qualcomm’s gain in market share from 3.1 to 4.2% was almost totally driven by the growth of its primary market, wireless telecommunications, rather than any acquisitions. Only Broadcom grew by acquisitions during the period, moving from 0.7 to 4.2% market share.

There were indeed companies that grew economies of scale through acquisitions during the period 2010 through 2016 but they are not a significant share of semiconductor industry revenue.  They include the TriQuint/RFMD merger to form QORVO, International Rectifier/Infineon, On Semiconductor/Fairchild, and Linear Technology/Analog Devices, to name some examples.  Overall data for the industry suggest that there is no correlation between operating profit and revenue, with a correlation coefficient of only 0.0706 (Figure 2).
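For readers who want to see what a correlation coefficient like 0.0706 means in practice, here is a minimal sketch of the Pearson calculation behind such a claim. The revenue and profit numbers below are invented placeholders, not the company data behind Figure 2.

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical (revenue in $B, operating profit %) pairs -- placeholders,
# NOT the actual company figures from Figure 2.
revenue = [60.0, 44.0, 29.0, 15.0, 13.0, 9.0]
profit_pct = [27.0, 45.0, 39.0, 11.0, 30.0, 22.0]
print(round(pearson_r(revenue, profit_pct), 4))
```

A coefficient near zero, as in Figure 2, means that knowing a company’s revenue tells you essentially nothing about its operating profit percentage – which is the chapter’s point that scale alone does not buy profitability.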

Figure 2.  Lack of correlation between semiconductor revenue and operating profit of the largest semiconductor companies 2010 through 2016

Figure 3. Texas Instruments operating profit percent

Why then was there an accelerated level of semiconductor mergers and acquisitions in 2015 and 2016?  It turns out that companies that used acquisitions and divestitures to specialize their businesses usually improved operating profit percent more than those that did not.  Texas Instruments is a good example (Figure 3).  When I worked at TI in the 1970s and ’80s, the company made almost every conceivable type of semiconductor component.  One could say that TI made everything in the semiconductor business except money. Through a series of acquisitions, divestitures, and business terminations since the year 2000, TI has focused its business on analog and power components. As a result, TI has progressed from profitability that averaged less than 10% operating profit to 40% operating profit in 2017, the highest of the major companies in the semiconductor industry.

Figure 4.  NXP Operating margin after adjustment for extraordinary items

NXP is another good example (Figure 4).  In 2014, nearly 30% of its revenue came from “standard products”.  Over the next five years, this percentage became negligible, and more than 90% of NXP’s revenue then came from two major areas: automotive and security.

AVAGO is a similar story although the specialization was achieved by an aggressive series of acquisitions (Figure 5). Along with the acquisitions came divestitures resulting in very strong market share in wireless communications and networking, a specialization that was particularly good as “East-West” traffic grew in data centers. In addition, the need for improved wireless communications filters in cell phones accelerated the growth of bulk acoustic wave devices.

Figure 5.  AVAGO specialization through acquisitions

What about companies that made acquisitions in order to grow and diversify their product mix? Intel is a good example of a company that had an extremely high concentration of revenue in the microprocessor business aimed at PCs and servers (Figure 6).  A series of acquisitions in new areas – McAfee for security, Wind River for embedded software, Altera for FPGAs – as well as an organic diversification thrust into the foundry business, added to revenue but not to profit.

Figure 6.  Intel diversification versus profitability

Finally, one might wonder whether this high correlation of specialization with profitability came as a result of reductions in research and development, especially when one examines cases like AVAGO where substantial cost reductions followed each acquisition.  If this did happen, it’s not evident for the overall semiconductor industry.  The total R&D investment of the semiconductor industry has grown almost every year in history (Figure 7).

Figure 7.  Semiconductor research and development expenditures with recessions shown in gray

Research and development spending of the semiconductor industry has been relatively constant at 13.8% of revenue per year (Figure 8 in Chapter 2).  It appears that the managers and investors in semiconductor companies don’t believe that their industry is consolidating into a slow growth, mature business.  Why would they invest nearly 14% of their revenue each year if they believed that the recent compound average growth rate below 3% was likely to continue?  The semiconductor industry has reinvented itself periodically through history as new applications have evolved.  These new applications have created opportunities for new companies to emerge and for the total industry revenue to grow.  That’s likely to be the case for the foreseeable future.

Read the completed series