
Intel Manufacturing Day: Nodes must die, but Moore’s Law lives!

by Scotten Jones on 03-29-2017 at 4:00 pm

Yesterday I attended Intel’s manufacturing day. This was the first manufacturing day Intel has held in three years and, according to Intel, their most in-depth ever.

Nodes must die
I have written several articles comparing process technologies across the leading-edge logic producers – GLOBALFOUNDRIES, Intel, Samsung and TSMC. Comparing logic technologies to each other requires a metric for process density.

Continue reading “Intel Manufacturing Day: Nodes must die, but Moore’s Law lives!”


When is "off" not really off?

by Tom Simon on 03-29-2017 at 12:00 pm

With the old-fashioned on-off power switch came certainty of power consumption levels. This was fine back in the days before processor-controlled appliances and devices. On was on and off was off: full current or no current. With the first personal computers you always had to wait for the boot process to complete before you could use them. This frequently was not quick, but it was tolerable when you were sitting at your desk and using the computer for a long stretch. And, of course, it was fine to have it running all day from a power consumption perspective because the computer was plugged into wall power.

Some PCs and most laptops had a sleep mode that eliminated the need to wait for a lengthy boot process before you could resume work. However, these were often buggy and problematic – restoring RAM contents from a hard drive, for instance, was time consuming. It might have been with the Palm Pilot, or perhaps the Apple Newton, that I first realized these devices were designed to usually be in sleep mode, not powered down, and ready to wake up and use at the press of a button. The first commercially prevalent device that featured this was the iPod – just push the button and it’s awake. Today this behavior is expected in everything from iPads to e-book readers, cameras, laptops, etc.

The early sleep modes for phones and similar devices were pretty simple compared to today’s requirements. Their main goal was to save state to reduce power until an external interrupt, such as a button press. People had low expectations about how long the battery would last in sleep mode. PDAs and iPods would lose all power in sleep mode after a few days.

Phones of course need to wake on incoming calls, so the RF stages need to stay awake in sleep mode. Computers and phones also need to monitor network connections such as Ethernet or WiFi. Sleep has become more sophisticated: devices often have different levels of sleep, each keeping additional circuitry always on, depending on the needed service level. The latest addition to the panoply of sleep-wake modes is sound, or more specifically voice, activation.
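The idea of service-level-dependent sleep states can be sketched as a simple lookup of which wake sources stay powered at each level. The levels and event names below are purely illustrative, not taken from any particular product:

```python
from enum import Enum, auto

class SleepLevel(Enum):
    LIGHT = auto()      # CPU halted; RF, network and audio front-ends still powered
    DEEP = auto()       # only RTC, wake logic and a voice-detect block powered
    HIBERNATE = auto()  # RTC and a button controller only

# Hypothetical mapping: which wake events each level can still respond to.
WAKE_SOURCES = {
    SleepLevel.LIGHT:     {"button", "rtc_alarm", "incoming_call", "wifi_packet", "voice"},
    SleepLevel.DEEP:      {"button", "rtc_alarm", "voice"},
    SleepLevel.HIBERNATE: {"button", "rtc_alarm"},
}

def can_wake(level, event):
    """True if `event` can wake the device from sleep level `level`."""
    return event in WAKE_SOURCES[level]
```

Deeper levels keep less circuitry always-on, trading wake capability for standby battery life.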

Adding more functionality to sleep modes risks increasing battery drain and shortening battery life, yet consumer demand keeps pushing for longer standby times. Techniques used to reduce power during wake mode include clock gating, power gating, voltage domains, block-level power management, multi-threshold libraries, etc. Sleep-mode power reduction presents its own challenges, especially when complex functions such as voice recognition are enabled. Google, Apple and Amazon all offer devices that sleep with voice-activated wake ability.

At the TSMC Technology Symposium in mid-March I had a chance to talk to Frederic Renoux with Dolphin Integration about their comprehensive offerings in the area of low power IP for managing sophisticated sleep modes. One of the topics he emphasized was the importance of selecting the best standard cell library for the always on (AON) portion of the chip. Dolphin has studied this topic extensively. Because they have much of the IP that would be used and have built demo chips, they have good technical basis for their observations.

The best choice for AON standard cell libraries depends on how much functionality is kept on. For instance, is it just an RTC and simple control logic, or is it more complex logic, like that needed for voice recognition? For minimal logic, it makes sense to use a library based on thick gate transistors. This offers lower leakage and in some cases can avoid the need for an external voltage regulator. The Dolphin Integration SESAME BIV library can operate up to 3.6V and is ideal for minimal logic AON designs.

For more complex AON regimes, especially where SRAM needs to be retained, the best way to save power is to use the lowest possible voltage – near threshold. For this Dolphin Integration offers their SESAME-NVT library. They also offer a high density standard cell library that is optimized for performance that uses HVT cells running at nominal voltage.

Dolphin Integration has an excellent write-up on their website that details their experience using each of these libraries in various AON configurations. In the paper they show the block diagrams for each scenario and cover the specifics, referencing the IP blocks used. It is clear to see why they are part of the TSMC partner ecosystem. They are in line with TSMC’s concept that the way to make significant improvements in performance is to focus on more than just one element, i.e., not just standard cells. Instead a system-level approach is needed, which in the case of Dolphin includes IP, standard cells, implementation know-how, etc.


A Formal Feast

by Bernard Murphy on 03-29-2017 at 7:00 am

It’s not easy having to deliver one of the last tutorials on the last day of a conference. Synopsys drew that short straw for their tutorial on formal methodologies at DVCon this year. Despite that, they delivered an impressive performance, keeping the attention of 60 attendees, who said afterwards it was excellent on technical content, substance and balance for a wide audience. This was their second content-rich, marketing-light tutorial in this conference (following low-power verification). Worth remembering for next year.


Sean Safarpour of Synopsys (another Atrenta alum) kicked off with a quick and pragmatic review of “Why Formal?”. These have become almost pro-forma now that more of us are getting formal (usage at over 30% in ASIC and an amazing 20% in FPGA last year). Sean got through these slides quickly – contribution to shift-left, different tools for different problems, complementary formal and simulation analysis, the pros and cons for formal and the greatly simplified on-ramp to formal use.

The second part of the tutorial, also presented by Sean, dug deeper into this formal for everyman/everywoman, enabled through pre-packaged applications. This isn’t a new topic, but it’s worth stressing how accessible and valuable these can be for covering important sections of a testplan and for exploring deep properties. You can quantify coverage contributions from formal analysis and can demonstrate that areas you can’t cover in simulation may be unreachable, so can be dropped from coverage analysis. Serious value and easily accessible – there’s no good reason not to use capabilities like these. General adoption in this area hadn’t yet reached property-checking levels last year but it is growing much faster.


A favorite of mine checks register properties; this is a no-brainer for formal analysis and for formal apps. IPs and designs host vast numbers of registers to configure, control and observe behavior, typically accessed through standard interfaces like AXI. But these aren’t just vanilla read/write registers. They can have a wide range of special properties, across the register or bitfield by bitfield. A bitfield may be readable or writeable or both, perhaps it may be read/written only once, read or write operations may set/clear the field, the register may mirror another register, and so on. Checking all these possibilities in simulation can be painful, at minimum to set up the testbench, and is often incomplete. Formal apps for register checking are easy to set up (needing just spreadsheet descriptions of expected bitfield properties) and you can be confident they are complete. Again, why wouldn’t you do this?
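As a toy illustration of the kinds of bitfield behaviors such an app must check, here is a minimal Python model of a register with read/write (RW), read-only (RO) and write-1-to-clear (W1C) fields. The spec rows stand in for the spreadsheet description mentioned above; the field names and layout are invented:

```python
# Hypothetical spreadsheet row: (field_name, lsb, width, access)
REG_SPEC = [
    ("enable", 0, 1, "RW"),   # software read/write
    ("status", 1, 1, "RO"),   # hardware-set, software writes ignored
    ("irq",    2, 1, "W1C"),  # writing 1 clears the bit
]

class Register:
    def __init__(self, spec):
        self.spec = spec
        self.value = 0

    def write(self, data):
        for name, lsb, width, access in self.spec:
            mask = ((1 << width) - 1) << lsb
            if access == "RW":
                self.value = (self.value & ~mask) | (data & mask)
            elif access == "W1C":
                self.value &= ~(data & mask)  # 1 clears, 0 leaves alone
            # RO fields ignore writes entirely

    def read(self):
        return self.value
```

A formal register app effectively proves properties like these for every field of every register, rather than spot-checking a few write/read sequences the way a simulation testbench would.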

Getting a bit more complex, Vigyan Singhal of Oski, a formal guru with always interesting ideas, presented on end-to-end formal verification. Here his objective was to fully verify an IP using only formal methods (an idea I’m hearing a lot more). Vigyan made a case that serious progress can be made in this direction, if we are willing to work harder. He cited blocks like MACs, USB, DMA and memory controllers, bridges, and GPIO blocks as candidates for this kind of proving.

Of course, today this isn’t as simple as an app. Abstraction becomes important, as do symbolic approaches to verification which can, through clever techniques, fully verify key aspects of the functionality of say a data transport block by checking just a limited set of possibilities. Very interesting concepts – maybe not something most of us would try unassisted, but this points to what we might expect to see eventually in packaged solutions. Will it completely replace simulation on these blocks? I would guess not, but it does seem possible that formal will eventually do more of the heavy lifting.

Mandar Munishwar from Qualcomm followed with a neat sequel. Maybe you did what Vigyan suggested but you’re still not convinced from a signoff perspective. How can you more thoroughly check coverage to make sure you didn’t miss hidden problems?

He started with a very interesting concept – the proof-core. When we think about proving an assertion we think about checking within the cone of influence (COI) leading up to the assertion. But a proof for an assertion doesn’t necessarily have to look at the whole cone of influence; the prover only extends out through just the piece of logic (the formal core) required to prove the assertion – which may be a lot smaller than the COI. This means that any potential bugs beyond the formal core may be missed. He suggested logic mutation to expose such problems; change the logic slightly then re-run the proof, and repeat with more mutations. Based on his experiments, he found at least one more problem on the DMA controller he used as his test DUT. What Mandar wants to get to is higher levels of formal coverage by pushing analysis, through mutation, beyond an initially limited set of formal cores.
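The mutation idea can be mimicked in miniature: if a “proof” still passes after the logic is deliberately altered, the altered logic was never examined by the assertion – a formal-coverage hole. The Python sketch below is only an analogy for what a real formal tool does (exhaustive enumeration of a 3-input toy stands in for the prover):

```python
import itertools

# Toy "design": out = a AND b, plus a debug leg c that the
# (deliberately weak) assertion never observes.
def design(a, b, c):
    return {"out": a and b, "dbg": c}

# Assertion only constrains `out`; logic feeding `dbg` is outside
# its formal core.
def assertion(inputs, outputs):
    return outputs["out"] == (inputs["a"] and inputs["b"])

def proves(dut):
    """'Prove' the assertion by exhaustive check over all inputs."""
    for a, b, c in itertools.product([False, True], repeat=3):
        if not assertion({"a": a, "b": b}, dut(a, b, c)):
            return False
    return True

# Mutation: invert the dbg leg. If the proof STILL passes, the
# mutated logic was never examined -> a coverage hole to investigate.
def mutated(a, b, c):
    return {"out": a and b, "dbg": not c}
```

Here `proves(design)` and `proves(mutated)` both return True, flagging that nothing checks `dbg` – exactly the kind of gap mutation is meant to expose.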


Pratik Mahajan from Synopsys wrapped up with a few future-looking areas. The first was around a goal that many formal users will welcome. As we know, formal depends on multiple engines – BDD, SAT, ATPG and more. Each has special strengths and weaknesses, each has a host of parameters, you need to know when you should selectively push deeper beyond nominal proof depths, and you need to know when you should switch from one approach to another. Knowing how to make these choices is a big part of why the full scope of formal property-checking has been viewed as a PhD-only topic; you must know how to orchestrate all these possibilities.

Pratik described an approach to putting all that orchestration in a box, along with managing distribution of jobs out to server farms (since many tasks can run in parallel). He didn’t get into too much detail but it sounds like this is an adaptive approach, starting from some well-known recipes, with ability to fine-tune as results start to come back. From an end-user perspective, this could make more complex and more complete proofs much more accessible to non-experts.

Pratik also covered dealing with inconclusive proofs; another sore spot for formal users. Anything he can do to help here will be greatly appreciated. And he discussed work being done on machine learning in formal and assistance in bug hunting which I covered in an earlier blog on FMCAD.

I must apologize to the presenters for my highly-abbreviated treatment of what they presented. I hope at least I conveyed a sense of the rich set of formal options they presented, both for today and for the future. You can learn a lot more from Tutorial 9 in the DVCon downloads.

More articles by Bernard…


Seven Reasons to Use FPGA Prototyping for ASIC Designs

by Daniel Payne on 03-28-2017 at 12:00 pm

Using an FPGA to prototype your next hardware design is a familiar concept, extending all the way back to the time that the first FPGAs were being produced by Xilinx and Altera. There are multiple competitors in the marketplace for FPGA prototyping, so I wanted to discern more about what the German-based company PRO DESIGN had to offer in their proFPGA systems by attending a joint webinar that they hosted last week with the ASIC services company Open-Silicon. SemiWiki blogger Bernard Murphy was the moderator and he was able to get things started by concisely listing seven reasons to use FPGA prototyping for ASIC designs:


  • Developing and debugging bare-metal software, accelerating the time to build a system
  • Hardware and software performance testing
  • Compliance testing with big use-cases and regressions
  • In-system validation
  • Functional simulation is too slow, and emulation is too expensive
  • Waiting for first silicon to start system testing is way too late
  • You need a quick proof of concept

    There are four options for you to consider when choosing an FPGA prototyping approach:


    Best practices include not starting FPGA prototyping while the RTL is still in flux and not expecting the prototype to be used for hardware debug; be aware, too, that partitioning challenges can be taxing. If you opt for a turnkey prototyping solution then you don’t have to spend time becoming an FPGA expert.

    Philipp Ampletzer from PRO DESIGN talked about six attributes of an ideal FPGA prototyping system:

    • Flexibility, adaptability (both Xilinx and Altera)
    • Performance and signal integrity
    • Scalability and capacity (latest FPGA devices)
    • Host interfaces
    • User-friendly
    • Price friendly, cost-performance ratio

    It turns out that PRO DESIGN has a series of FPGA prototyping products that use both Xilinx and Altera FPGAs, and you can start with a small configuration using a single FPGA, scale up to a system with four FPGAs, or ultimately combine five boards for a total of 20 FPGAs. Here’s a photo of the FPGA Module SG280 which provides:

    • Single Intel Stratix 10 FPGA providing up to 20M ASIC gates
    • Up to 1,026 user I/O
    • Up to 8 voltage regions
    • Up to 1.0 Gbps single-ended point to point speed


    The final presenter was Sachin Jadhav from Open-Silicon and he walked us through the typical ASIC design life cycle with 11 distinct steps showing where FPGA prototyping fits in:

    Based on actual experience with FPGA prototyping at Open-Silicon, they look at five challenges:

    • Selecting an FPGA Platform (capacity, I/Os, expected frequency, partitioning, turnkey or custom)
    • Design Partitioning (automatic, manual)
    • Optimum operating design frequency (choosing speed grade FPGAs, design IP placement, global clock, clock loading)
    • Custom PHYs
    • Debugging (integrated RTL debugging)

    Related blog – Open-Silicon Update: 125M ASICs shipped!

    One case study was shared by Open-Silicon where they designed an ASIC for use in a professional camera system and partitioned their design across two Virtex 7 FPGAs with the following IP blocks:


    This ASIC used 40 million gates, and the FPGA prototype used manual partitioning with IOs in one FPGA and logic in the second FPGA, with communication between the two over a SerDes link. By using an FPGA prototype the design team saved some 5 months from the schedule and achieved first-silicon success. Other achievements on this project included:

    • Production quality software developed on FPGA prototype
    • Custom PHYs (HDMI, LVDS-TX, HSIFB, UHS-II) using FPGA resources
    • Validated custom IP blocks with external devices prior to tape-out

    Related blog – ARM and Open-Silicon Join Forces to Fight the IoT Edge Wars!

    Summary
    ASIC design teams are under immense pressure to meet product requirements and develop software before silicon is fabricated, so using an FPGA prototyping approach can help you do that by enabling early software driver development and even producing a proof of concept to investors. Maybe it’s the right time for your next ASIC project to start using an FPGA prototyping methodology.

    To watch the archived webinar you can go here.


    eFabless Design Challenge Results!
    by Daniel Nenni on 03-28-2017 at 7:00 am

    Will community engineering work for semiconductors? Will anyone show up? Well, the efabless design challenge is complete and the results are both interesting and encouraging, absolutely!

    Efabless completed its low power voltage reference IP design challenge on Monday, March 13. This was a very interesting event that we followed closely here at SemiWiki. Its community-style development and challenge methodology was a first of its kind for the semiconductor IP industry.

    Designers from all over the world were invited to compete for cash prizes, recognition and the ability to earn revenues by licensing through efabless’ clever community-style marketplace. Our good friends at X-FAB sponsored the challenge to test out what they see as an innovative on-demand design enablement solution for their customers. I was intrigued and added to the cash prizes for winners that are SemiWiki members.

    So what happened? According to the results posted on the efabless website, 88 designers from 26 countries signed up for the challenge. They were broadly distributed geographically. Six designs passed customer requirements and ultimately the top three completed layout and were awarded cash prizes based on their relative standing in power consumption. First place went to Rishi Raghav (SemiWiki member). Second place went to Arsalan Jawed and third place went to Ibrahim Muhammed.

    According to Mohamed Kassem, CTO and co-founder of eFabless, there were a number of interesting takeaways. First of all, these were serious professionals. Two of the competitors represented small and, as shown by their designs in the challenge, very capable design firms. One, the eventual winner, Rishi, is an independent. Second, Mohamed was extremely impressed with the creativity and diversity of the designs and their architectures. All three winners took different approaches and delivered clean and interesting designs. The design of Arsalan employed a CMOS-only architecture, an unusual approach for a bandgap. There were also a number of designs that were not quite completed on time or were submitted outside the customer spec but with attributes that are intriguing. Mohamed said that we should expect to see one or more of these enter the marketplace in the coming weeks. I understand that winning designs will be processed on an MPW for silicon characterization and efabless will provide evaluation boards that community members can offer to their customers. This would be a real plus.

    I have taken a look at the newly released “gen 2” marketplace for efabless. It is a very interesting enabler for what Mike Wishart, CEO, sees as a new market for “on-demand” IP delivered by efabless community. The marketplace looks great, is easy to search and navigate, and provides very interesting information on both the IP and also the designer. The designer section has various designer supplied information as well as quantifiable certification based on designer’s success on the platform. I can see why independent designers and small firms would be very attracted to this. In Mike’s view of the world, a customer can come to the marketplace, find an IP design that closely matches his or her needs and simulate it at no cost in their design. If they like what they see but need some customization, they can check out the designer’s qualifications and history, and then engage the designer for final work.

    efabless says we should stay tuned for future design challenges and additional design capability for the efabless community. I am impressed with the community turnout on the project and the excitement for the platform. Apparently, the community is at over 1,000 members, up from 600 or so in November. The next step will be to see the depth of customer demand.

    About efabless
    efabless is the first online marketplace for community-developed, customized integrated circuits (ICs) that lets hardware system innovators turn product visions into market reality. The company applies the concepts of crowdsourcing and open community innovation to key aspects of IC development and commercialization. Specializing in the design of analog/mixed-signal ICs, power management ICs, MEMS and agile ASICs, the company gives designers all the means needed to define, develop and monetize their work. It has built up a community of over 700 members from 30 countries around the world. For information visit: www.efabless.com


    Who knew designing PLLs was so complicated?

    by Tom Simon on 03-27-2017 at 12:00 pm

    Well, it comes as no surprise to those who use and design them that PLLs are a world unto themselves, and very complicated indeed. With PLLs we are talking about analog designs that rely on ring oscillators or LC tanks. They are needed on legacy nodes, like the ones that IoT chips are based on, and they are crucial for high-speed advanced-node designs like those at 16nm and below. They are especially important on nodes that are typically considered ‘digital’, yet all the challenges of analog design must be addressed on these processes. Indeed, advanced SoCs often have numerous PLLs servicing internal clocking needs and those of external IO.

    PLL design involves making many trade-offs: for instance, is the precision of an LC tank, with a high-Q inductor, worth the area required, or can a much smaller ring oscillator deliver the needed performance? Jitter is the key parameter that needs to be controlled in PLL designs. Across the nodes from 180nm to 10nm there is a need for PLLs that operate anywhere from below 100 MHz up to the multi-GHz range with low jitter. Applications such as SerDes rely on low-jitter PLLs.
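For a feel of the numbers involved, a quick back-of-the-envelope in Python: integer-N synthesis sets the output frequency, and RMS jitter is often judged as a fraction of the output unit interval (UI). All values below are illustrative, not taken from any Silicon Creations datasheet:

```python
def pll_output_hz(f_ref_hz, n_div, r_div=1):
    """Classic integer-N synthesis: f_out = f_ref * N / R."""
    return f_ref_hz * n_div / r_div

def jitter_in_ui(rms_jitter_s, f_out_hz):
    """Express RMS jitter as a fraction of one output period (UI)."""
    return rms_jitter_s * f_out_hz

# Illustrative example: a 25 MHz crystal multiplied up to 2.5 GHz,
# with 500 fs RMS jitter -> about 0.125% of the unit interval.
f_out = pll_output_hz(25e6, n_div=100)
ui_fraction = jitter_in_ui(500e-15, f_out)
```

The same absolute jitter consumes a larger fraction of the UI as output frequency rises, which is one reason multi-GHz SerDes applications are so demanding on PLL design.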

    Last week at the 2017 Silicon Valley TSMC Technology Symposium I had a chance to talk to Andrew Cole and Randy Caplan with Silicon Creations. Their bread and butter is designing analog IP that is used widely across a broad range of process nodes. PLLs are part of their expertise. With TSMC rolling steadily on toward 7nm – even 5nm was mentioned during the symposium – IP providers such as Silicon Creations need to deliver high performance analog designs on these FinFET nodes. Andrew talked about one of their most popular PLL designs that is in over 100 unique designs and has been taped out at every imaginable node from 180nm to 16nm. Apparently, the run rate for this one PLL on one 28nm process is around one billion instances per year.

    They pointed me to a document on their website that details the specific challenges they faced as this design was moved and verified at successively smaller nodes. From 65nm to 10nm there has been nearly a 3.5X relative increase in the peak transition frequency (fT) of the transistors. At 10nm the fT will be in excess of 500GHz. The material on their website goes into some detail about the tradeoffs between 28nm polygate and 28nm high-K metal gate. Nonetheless, 10nm FinFET continues the progression of fT to higher frequencies.

    The real question is, how do analog designs scale as lambda decreases? To help answer this, Silicon Creations offers data comparing their relative PLL area from 180nm to 10nm. Refer to the diagram below to see how this has progressed.

    While it is not as dramatic as digital area scaling, it is enough to help lower costs. Each smaller node has presented its own challenges to analog designers, much as they have to digital designers. Silicon Creations has dealt with this by developing their own back end design flows. Even though an analog design schematic may look much the same, at FinFET nodes they are now dealing with increasing interconnect resistance, which requires anticipating parasitics earlier in the flow. Also, the quantized nature of W in tri-gate devices leads to changes in transistor parameter specifications.

    Silicon Creations covers the gamut when it comes to process node coverage. They are concurrently designing at all the nodes mentioned above – 180nm to 10nm, and have work under way for 7nm. There is more detailed information available, including the material they directed me towards, on their website. It is interesting to see how, on what most people consider to be digital nodes, they sustain delivery of essential building block analog IP for a wide range of designs.


    Virtual Modeling Drives Auto Systems TTM

    by Bernard Murphy on 03-27-2017 at 7:00 am

    The electronics market for automotive applications is distinguished by multiple factors. This is a very fast growing market – electronics now account for 40% of a car’s cost, up from 20% just 10 years ago. New technologies are gaining acceptance, for greener and safer operation and for a more satisfying consumer experience. Platforms to support these capabilities are becoming more complex, greatly increasing challenges in verification and validation. Safety, security and reliability expectations are much higher than in other consumer applications, making consumer-style field repair/upgrades impractical given $10M+ costs per recall. Finally, the supply chain – from chip maker to Tier1/OEM supplier to auto-manufacturer – has become much more interdependent in reaching these goals.

    All of this points to a common question in systems design – how can I check the integrity of the design earlier – but with a wrinkle. Now the checking must span the supply chain because all parties have a vested interest in proving out the system as early as possible.
    Synopsys captures this in a V-diagram (not pictured here, but easy to imagine), starting with architecture specification at the upper-left. First, how can the system architecture (software+hardware, quite possibly in the context of a larger system model) be optimized for system performance? This is where supply chain collaboration is important. The software to drive that hardware may not be written yet, but the Tier1/OEM has a sense of use models. Traditionally these would have been exchanged in static Word and Excel documents, but these static requirements are an imperfect and incomplete way to ensure a match in intent between supplier and systems provider.


    A better approach replaces static spec and requirements with a dynamic virtual model. The Tier1/OEM can start from an existing model (or develop one, perhaps with assistance), along with modeled application use-cases and workloads based on task-graph models. This dynamic specification becomes the driver for SoC design within the chip company. If iteration is required, to optimize power, performance, cost or other factors, the Tier1/OEM can adjust the virtual model and workloads to reflect updated requirements. All of this can be accomplished through Platform Architect MCO.
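A workload specification of this kind can be imagined, in highly simplified form, as a task graph with cycle budgets. The sketch below estimates earliest finish times assuming unlimited parallel cores; the task names and numbers are invented, and real Platform Architect MCO models are far richer:

```python
# Task graph: task -> (cycle cost, list of dependencies)
TASKS = {
    "capture": (1000, []),
    "filter":  (4000, ["capture"]),
    "detect":  (3000, ["capture"]),
    "fuse":    (2000, ["filter", "detect"]),
}

def finish_times(tasks, cycles_per_us=1000):
    """Earliest-finish schedule of a DAG, assuming every ready task
    can start immediately on its own core."""
    done = {}  # task -> finish time in microseconds
    while len(done) < len(tasks):
        for name, (cost, deps) in tasks.items():
            if name not in done and all(d in done for d in deps):
                start = max((done[d] for d in deps), default=0)
                done[name] = start + cost / cycles_per_us
    return done
```

Exchanging an executable model like this, instead of static Word and Excel documents, lets supplier and OEM agree on timing intent and re-run the analysis whenever requirements change.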


    The bottom vertex in the V addresses starting on software development and integration well before hardware is available. FPGA prototypes are great in the late stages of development, but virtual models are a better approach before that point. They run almost in real time and require very limited understanding of the target architecture. Here of course the intention is to run real software as it is being developed, rather than modeled workloads. And again, the virtual hardware model provides an ideal dynamic reference point between the hardware supplier and Tier1/OEM. Experience shows that software development can start 9-12 months in advance of silicon when using a virtual hardware model.
    The upper-right vertex of the V is test, where again virtual models play an important role. Ultimately test must run on the real silicon (hardware in the loop testing) but test development can start much earlier using a virtual hardware model. Particularly important here is the ability to integrate with the larger system test environment, including tools like Matlab Simulink, Saber (Synopsys tool for simulating electronic/mechanical systems), systems like CANoe for testing individual MCUs and networks of MCUs, through Lauterbach and other platforms.
    These types of testing are essential for building testbenches to validate mission-mode behavior and performance but ISO 26262 compliance requires additional testing to validate safe behavior in the presence of faults. The virtual model is equally important here, to model faults injected into the software or the virtual hardware model so that software and hardware designers can determine how/if faults are detected and how they will be managed.

    For both mission mode and fault testing, the deterministic nature of the virtual model is important in being able to replay and track down root-cause problems in the system design. Also very important is the ability to run virtual models in regression on server farms (since these models are software-based, something that would not be possible for true hardware-in-the-loop testing). This enables parallel testing of changes to the software system, and running on variants of software stacks, greatly increasing productivity and coverage in testing.

    Synopsys already provides virtual hardware models in the form of virtual development kits (VDKs) for the NXP MPC5xxx MCU Family, the Renesas RH850 MCU Family and for Infineon AURIX, so system developers are ready to go on some of the primary drivetrain platforms in the industry.

    To watch the Webinar, click HERE.

    More articles by Bernard…


    Talking Cars, Quiet Qualcomm

    by Roger C. Lanctot on 03-25-2017 at 12:00 pm

    Qualcomm is a technology titan standing astride both the automotive and wireless industries with tens of thousands of patents, 340M automotive-grade chipsets shipped and a leading position in the connected car industry. So it is fascinating to find executives at the company almost totally tongue-tied of late when it comes to talking about the car technology currently facing a government mandate: DSRC-based (dedicated short-range communication) V2V.



    Qualcomm ought to have a lot to say about DSRC. Qualcomm is a manufacturer of DSRC chipsets by virtue of its ownership of Atheros. Qualcomm also happens to control more than a third of the automotive semiconductor market, thanks to its acquisition of NXP Semiconductors. And, of course, there are all those patents.

    It is hard to overstate Qualcomm’s influence in the automotive industry so recently magnified by the NXP buy. Because Qualcomm is helping to connect both cars and smartphones the company is in the unique position of enabling a true vehicle-to-everything connectivity environment where cars will be able to talk to other cars, pedestrians and infrastructure – all in the interest of collision avoidance, and congestion and emissions mitigation.

    Further, car makers rely on Qualcomm – as they also rely on Renesas, Intel and Nvidia – to help them anticipate emerging trends in connectivity and wireless access including performance issues such as transmission speeds, reliability, coverage, latency and deep insights into network infrastructure. The scope of Qualcomm’s expertise extends to all forms of wireless communications and billions of people and millions of organizations around the world rely on Qualcomm technology for their safety, security and livelihoods.

    But thanks to a falling out with the automotive industry over the sharing of the spectrum to be used by DSRC, Qualcomm has suddenly gone quiet. Because Qualcomm’s interests extend beyond the automotive world, the company has a deeper appreciation of the importance of proper spectrum allocations for different applications. Qualcomm is also aware of the need for more spectrum across a broad range of use case scenarios impacting hundreds of millions of users and millions of enterprises.

    Last year, Qualcomm responded to a request for comment on DSRC made by the U.S. Federal Communications Commission. The FCC was preparing to launch testing of different spectrum sharing options in response to a bi-partisan request from three U.S. Senators to find a way to share the allocated DSRC spectrum for unlicensed uses in a manner that might satisfy both the automotive industry and the cable and Wi-Fi industries.

The core of Qualcomm’s comment was that “re-channelization” of the spectrum would not require the substantial, expensive and time-consuming testing that DSRC advocates within the automotive industry claim it would. (Separately, a Qualcomm senior vice president of engineering described DSRC technology as currently conceived as a “dead end.”)

    These were strong words indeed coming from the automotive semiconductor market leader. Qualcomm clearly stands to benefit mightily from a mandate of DSRC-based V2V technology – yet it was prepared to stop the progress of 19 years of study and development to challenge the basic assumptions behind the technology or at least to foster a recalibration to provide for wider use of the spectrum.

    Offsetting its automotive industry concerns were its investments in the cable and Wi-Fi industries, where commercial applications can be expected to drive substantial economic activity and revenue creation. Freeing up spectrum with a new spectrum sharing scheme will serve the interests of both the automotive industry and the cable and Wi-Fi industries.

    Car companies did not see it Qualcomm’s way. The Association of Global Automakers and the Alliance of Automobile Manufacturers responded to Qualcomm’s re-channelization claim by filing a response with the FCC questioning Qualcomm’s familiarity with DSRC technology and how it can be so sure of its conclusions. The two automotive associations also suggested that Qualcomm may understand wireless technology but that it doesn’t understand the requirements of automotive safety.

    A lot has changed in the 19 years since DSRC technology was first proposed. The cellular network has evolved such that direct communications between vehicles is now possible using advanced forms of LTE technology. In addition, the onset of radar, LiDAR and camera-based sensor technologies in support of automated driving has fundamentally altered the thinking behind connecting cars to other cars.

Qualcomm itself has been impacted by the evolution of wireless technology. The company now focuses its vehicle-to-vehicle comments on its research and development devoted to LTE and 5G cellular technologies. (“Please don’t ask us about DSRC,” is the unspoken sentiment.)

Automakers, too, have gone silent on DSRC. General Motors stands alone as the most prominent advocate and spokesperson for DSRC-based V2V tech. The company will begin shipping DSRC-equipped Cadillac CTS vehicles later this month. Not a single competitor has followed suit – and the average consumer has no idea what DSRC is or why it exists.

The silence from Qualcomm and from competing automakers speaks volumes as to the unanswered questions regarding cybersecurity, privacy, infrastructure and adoption of DSRC. GM is turning its DSRC-equipped vehicles into wireless broadcasters (sending a basic safety message of location, heading and speed ten times per second) only capable of communicating with other Cadillac CTSs. With no infrastructure supporting the launch, GM will bear full responsibility for the fidelity and reliability of those broadcasts – and for their vulnerability.
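The basic safety message GM describes is simple enough to sketch. The Python snippet below is purely illustrative – the field names and framing are hypothetical, not the actual SAE J2735 wire format – but it captures the idea of a position/heading/speed payload broadcast ten times per second:

```python
from dataclasses import dataclass, asdict

@dataclass
class BasicSafetyMessage:
    """Illustrative subset of a DSRC basic safety message.
    Field names are hypothetical, not the SAE J2735 encoding."""
    latitude: float   # degrees
    longitude: float  # degrees
    heading: float    # degrees clockwise from north
    speed: float      # meters per second

def broadcast(messages, rate_hz=10):
    """Yield (timestamp, payload) pairs at the nominal 10 Hz BSM cadence."""
    interval = 1.0 / rate_hz
    t = 0.0
    for msg in messages:
        yield t, asdict(msg)
        t += interval

# Three consecutive broadcasts, nominally 100 ms apart
frames = list(broadcast([BasicSafetyMessage(37.77, -122.42, 90.0, 25.0)] * 3))
```

Because every equipped vehicle repeats this broadcast continuously, the fidelity of each field – and its resistance to spoofing – is exactly the responsibility the article says GM is taking on.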

One of the core attractions of using cellular technology for V2V is the ubiquity of the technology along with its backward and forward compatibility. DSRC as currently conceived provides no evolutionary path and only limited interoperability – especially in regard to regional variations in the technology in Europe and Asia.

    Meanwhile, the FCC is continuing its testing of spectrum sharing alternatives. The Acting Legal Adviser for Wireless and International Issues at the Federal Communications Commission responded on behalf of recently-appointed Chairman Ajit Pai to my inquiry regarding that testing thus:

    “Chairman Pai asked me to respond to your email below on the status of spectrum sharing for unlicensed use between Wi-Fi and DSRC. Thank you for your interest in this issue. At this time, the Commission’s Office of Engineering and Technology continues phase 1 testing work. We have not set a conclusion date for this phase of testing.”

    I don’t understand what the automotive industry hopes to gain from shouting down one of its most essential technology suppliers and confidants. For Qualcomm to speak out the way it did reflects a careful consideration of commercial, legal, and technological factors. The automotive industry’s decision to question Qualcomm’s authority on the subject and therefore silence a critical source of technological insight is troubling to say the least. With 5G technology poised to disrupt and radically advance the nature of vehicle connectivity, the automotive industry ought to be putting a megaphone on Qualcomm, not a muzzle.

    For a more detailed discussion of DSRC V2V technology check out “Roadblocks to Implementing V2X Communications” a report commissioned by the International Telecommunications Union and prepared by Michael L. Sena Consulting AB. (Report link to come.)

Roger C. Lanctot is Associate Director in the Global Automotive Practice at Strategy Analytics. More details about Strategy Analytics can be found here: https://www.strategyanalytics.com/access-services/automotive#.VuGdXfkrKUk


    Mobileye’s Revenge

    by Roger C. Lanctot on 03-25-2017 at 7:00 am

    Tesla Motors CEO Elon Musk has famously dismissed hydrogen fuel (“fool”) cells and LiDAR and even parted company with Mobileye over automated driving strategy. Now it appears that Mobileye has gotten a measure of revenge with broad implications for the automotive industry, Tesla Motors and automated driving.

Mobileye’s revenge has come in Tesla’s delivery of a new version of its Autopilot software – referred to as Autopilot 2.0 or AP2 – that appears to be inferior to the original autopilot, which created a sensation upon its release more than a year ago. The original autopilot software – though released with warnings that it was a beta, that drivers had to remain attentive and that it should only be used for highway driving – led to multiple YouTube videos of owners sleeping at the wheel, climbing into the back seat or using the system on suburban or even city streets.

Even though at least one driver was killed while using the system, there was broad recognition and acceptance that autopilot 1.0 was truly an exceptional system capable of performing beyond its promised capabilities. Around the time of the fatal crash in Florida, Mobileye and Musk parted company – which some attributed to the crash and Musk’s apparently reckless marketing choices.

Whether Musk was motivated by animus toward Mobileye’s architectural and algorithmic decisions, or whether Mobileye preferred to disassociate itself from Musk’s reckless ways, was never made clear at the time or since. Nevertheless, Musk forged ahead with an update to the original autopilot – the aforementioned AP2 – which appears to have diminished the cleverness of the original feature.

    A video shared online by the New York Daily News captures the apparently addled autopilot 2.0 in action – repeatedly veering to the left across a double yellow line and into the path of oncoming traffic. All indications are that Mobileye is no longer a part of Tesla’s autopilot plans with what might be catastrophic results.

    http://tinyurl.com/zjq3vmo – Video: Tesla Model S on Autopilot 2.0 Struggles to Stay on Course

My colleague, Industry Analyst Angelos Lakrintis, notes that there were rumors that Tesla was using a Mobileye chip in combination with an Nvidia Drive PX platform for its original autopilot system. The expectation is that the Nvidia platform remains in place but that the Mobileye content is gone, replaced by a mystery processor and algorithm.

    Savvy Tesla owners have avoided updating their autopilot systems in order to preserve their access to the superior performance of the original system. Tesla finds itself in the position of Microsoft and Apple which have both, at various times, introduced operating system updates that consumers have refused to download and install.

This creates a scenario whereby Tesla, the global market leader in implementing over-the-air software updates to its vehicles, must now find a way to either pay its customers to download and install the software updates, coax them with attractive value propositions to be delivered by those updates, or scare them that their vehicles, their warranties or their lives are in danger if they do not allow the updates. Tesla’s software management strategy, a key competitive advantage, has suddenly been undermined by a dispute with a supplier.

    To add insult to injury, Intel has now offered to purchase Mobileye for about $15B. The acquisition guarantees that Tesla will be confronting Mobileye’s outsized presence in the market for the foreseeable future as the two companies vie for automated driving leadership.

The importance of that automated driving leadership was made clearer recently when an insurance company, for the first time, created a discount intended to reward Tesla drivers who use autopilot. Root Insurance is making the following proposition:

    How does the Tesla® Autopilot discount work?


  • During the test drive, Root’s app measures Autosteer-eligible highway miles.
  • We apply a tiered discount—above and beyond any good driver discount you’ve already earned! The higher the percent of highway miles driven, the higher the discount.
  • Good drivers of Tesla cars save a lot of $$$ with Root!

    More details: https://blog.joinroot.com/tesladiscount/
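Root's tiered scheme can be sketched as a simple function. The thresholds and discount rates below are invented for illustration – Root does not publish them in the passage above – but the shape of the calculation follows its description: the higher the share of Autosteer-eligible highway miles measured during the test drive, the higher the discount.

```python
def autopilot_discount(highway_miles, total_miles,
                       tiers=((0.50, 0.10), (0.25, 0.05))):
    """Hypothetical tiered discount in the spirit of Root's offer.

    tiers: (minimum highway-mile share, discount rate) pairs,
    sorted from highest threshold to lowest. Values are invented.
    """
    if total_miles <= 0:
        return 0.0
    share = highway_miles / total_miles
    for threshold, rate in tiers:
        if share >= threshold:
            return rate
    return 0.0

# A driver whose test drive was 60% Autosteer-eligible highway miles
# lands in the (hypothetical) top tier.
top_tier = autopilot_discount(600, 1000)
```

Note that the discount stacks on top of any good-driver discount already earned, which is why the function returns a rate rather than a final premium.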

    Root is not alone. At least one other insurer is pondering a similar offer. Both offers promise to fundamentally alter the interaction between vehicle technology and insurance.

    Root’s offer appears to fall in line with Tesla’s own estimates showing fewer crashes occurring when autopilot is engaged. Of course, Tesla’s data, created in connection with the investigation of the Florida crash by the National Highway Traffic Safety Administration, may have been focused on the use of autopilot 1.0. We don’t know for sure.

Tesla’s tinkering with autopilot may ultimately jeopardize this first insurance discount based on an advanced driver assistance system. For now, the Root offer is a huge endorsement of Musk’s marketing acumen.

    As much as I admire Tesla for its technology and temerity, alienating Mobileye – which is also aligned with HERE and a variety of other strategic partners – forces Tesla into the arms of Nvidia for the short-term – even as it hedges its bets by flirting with Samsung. We may never know precisely what was behind the parting of ways between Tesla and Mobileye, but by now it is clear that Mobileye has landed the first blow in the form of diminished Tesla autopilot performance and its acquisition by Intel. But with the gloves now on the ice, the battle is on. One can only hope that somehow, some way, the consumer is the winner and not the victim in the end.

Roger C. Lanctot is Associate Director in the Global Automotive Practice at Strategy Analytics. More details about Strategy Analytics can be found here: https://www.strategyanalytics.com/access-services/automotive#.VuGdXfkrKUk


China to become largest semiconductor producer
    by Bill Jewell on 03-24-2017 at 12:00 pm

    China has long been the largest market for semiconductors, accounting for over 50% of the global market for the last five years. China is now on track to become the largest semiconductor manufacturer in the next few years. The chart below shows China’s integrated circuit (IC) industry from 2010 to 2016, according to the China Semiconductor Industry Association (CSIA).

    The chart also shows China IC exports (purple line on left scale) and IC imports (red line on right scale), based on United Nations (UN) trade data. China IC imports surged 47% from $157 billion in 2010 to $231 billion in 2013. However, since 2013 imports have been flat in the $218 billion to $231 billion range. China IC exports tripled from $29 billion in 2010 to $88 billion in 2013. In the last three years, exports have dropped back to the $61 billion to $70 billion range.


China’s domestic IC industry has exploded over the last six years, tripling in size from $21 billion in 2010 to $65 billion in 2016. The fastest growing segment has been IC design, increasing fivefold from $5 billion in 2010 to $25 billion in 2016. The data shows China is increasingly furnishing its IC needs internally, becoming less dependent on non-Chinese IC companies.

China’s growing production of semiconductors is reflected by wafer fab equipment spending trends over the last ten years. According to data from SEMI and SEAJ, China’s purchases of fab equipment grew 180% from $2.3 billion in 2006 to $6.5 billion in 2016. Over the same period, fab equipment purchases declined 50% in Japan and 39% in both North America and Europe. South Korea grew 10% and Taiwan grew 67%. In 2016, China’s $6.5 billion still trailed Taiwan at $12.2 billion and South Korea at $7.7 billion. However, SEMI expects China will be the largest fab equipment market by 2019.
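The growth figures cited here are straightforward percentage changes; a quick sanity check on the China number (figures in US$ billions, taken from the text):

```python
def pct_growth(start, end):
    """Percentage change from start to end."""
    return (end - start) / start * 100.0

# China fab equipment purchases, 2006 -> 2016 (SEMI/SEAJ figures cited above)
china_growth = pct_growth(2.3, 6.5)  # ~183%, consistent with the cited ~180%
```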


    At the SEMI China conference last week, ASE Group COO Tien Wu predicted China’s fabless IC design industry will account for over 40% of the global fabless IC revenue in the near future. According to Digitimes, he also projected China’s wafer foundry industry will be 25% of the global market and Chinese integrated device manufacturers (IDMs) will be 20%.

China’s growth in wafer foundry services is reflected by the capital spending of Semiconductor Manufacturing International Corporation (SMIC). According to IC Insights, SMIC’s capital spending was $2.6 billion in 2016, up 87% from 2015 and the fastest growth rate of the top eleven spenders. IC Insights forecasts SMIC capital spending of $2.3 billion in 2017. While this is less than a quarter of the spending projected for foundry giant TSMC, it is more than the $2.0 billion each projected for the second- and third-largest foundries, GlobalFoundries and UMC.

In our January blog, we explained how Chinese electronics companies were moving from just assembly to fully integrated companies including design, marketing and sales. Thus China’s electronics industry is becoming less dependent on foreign electronics companies. In the same manner, China’s developing semiconductor industry will make the nation less dependent on foreign semiconductor companies. Chinese semiconductor companies will increasingly design and manufacture devices to support Chinese electronics companies. Eventually, the Chinese semiconductor companies will be serious competitors for business outside of China.