
Between Waze and a Thin Hard Place

by Roger C. Lanctot on 10-27-2016 at 4:00 pm

Car makers, semiconductor companies and wireless carriers are all excited these days about creating cars that can drive themselves. Billions of dollars are being spent on acquisitions and investments in companies and technologies that can make this happen. But there is a fly in the ointment by the name of Waze.

To create cars capable of automated driving, car makers and their supplier partners are having to replicate the functional capacities of the human brain and nervous system, including vision systems and networks. In essence, the industry is being forced to create something we might regard as a “thick client” – a sentient car with sufficient storage and processing power to deliver a safe and reliable automated driving experience.

The alternative to the thick client is the “thin client” – in this case a smartphone. The thin client gets the job done by accessing off-board (cloud) resources to facilitate its decision-making. The best search and speech recognition solutions in cars today are either entirely cloud-based or hybrid on-board/off-board systems.

When Audi, BMW and Daimler came together to acquire the map data company HERE from Nokia, the intent was clearly to build an automated driving capability upon the foundation of HERE maps. If cars were going to drive themselves sooner or later, they’d need maps on-board and HERE was the big dog of embedded maps in cars.

Much was made at the time of the desire of car manufacturers to preserve their independence from Google and Apple and any other tech industry interlopers. So the HERE acquisition was something of a turf war over ownership of the customer relationship and the technology going into the cars.

The key differentiation between HERE as a source of map data and Google – aside from the ever-present Googlian invasion of privacy – was the fact that HERE actually provides a data set that resides in the car. Arch-rival TomTom also provides a map data set to its significantly smaller customer base of car makers. Google does not provide an on-board map, although it enables chunks of map data to be downloaded on an ad hoc basis.

Much has been made in the press lately of the onset of Apple’s CarPlay and Alphabet’s Android Auto smartphone integrations for cars. The hullabaloo over these increasingly widespread systems in new cars and the aftermarket revolves around the fact that they are creating headaches for car dealers and consumers (according to J.D. Power and Consumer Reports) even as they proliferate and begin to make smartphone navigation projection from the phone into the dashboard screen a reality.

But whether you snap your phone into a mount on your dashboard or connect it to your on-board infotainment system, the prevalence of smartphone navigation generally and Waze in particular is putting pressure on the pricing of built-in navigation systems. More importantly, it will ultimately cause consumers to reconsider buying the built-in navigation system if the price delta is too great.

As car makers lay the groundwork for automated driving, an on-board map will become imperative; indeed, keeping that map up to date will matter more than ever.

If consumers abandon the idea of on-board navigation in favor of the good-enough navigation experience of Waze on a smartphone, progress toward automated driving will be severely impeded. And the risk is real: Waze is not only good-enough navigation, for many it is becoming the preferred source of traffic information and, a dirty little secret, speed traps – or safety zones, as they are euphemistically described in Europe. Some Waze users won’t go anywhere without the app.

Car makers like BMW and Daimler are taking steps to provide for real-time hazard notifications and incremental map updates to match Waze’s perceived advantages. But the challenge extends beyond Waze. Toyota launched an OpenStreetMap-based projectible smartphone navigation app in the U.S. last year. Apple is steadily replacing TomTom’s maps with OSM in dozens of countries around the world.

Meanwhile, Waze availability in dashboards continues to advance. Ford is expected to offer Waze via its SmartDeviceLink (SDL) app integration, and Waze is also accessible via MirrorLink.

Waze’s viral marketing and crowd-sourced platform represent a formidable pothole on the path to automated driving for the masses. There are steps auto makers can take to preserve their long-term automation objectives. I will explore this topic in more detail in my keynote at next week’s TU-Automotive Europe event in Munich – http://www.tu-auto.com/europe/.

Smartphones have given us so much. They have the power to save time, fuel and lives on the road – as long as drivers don’t get distracted. They can enable entirely new business models and market opportunities. But good enough solutions don’t cut it when you are trying to transform an industry. Self-driving cars will need an on-board map to get them through thick and thin.

Roger C. Lanctot is Associate Director in the Global Automotive Practice at Strategy Analytics. More details about Strategy Analytics can be found here: https://www.strategyanalytics.com/access-services/automotive#.VuGdXfkrKUk


Manufacturing Singularity is Coming!

by Daniel Nenni on 10-27-2016 at 12:00 pm

One of the many benefits of blogging is that you get to meet some very interesting people. This time I had the pleasure of speaking with Michael Ford of Mentor Graphics about Industry 4.0 and smart factories. In fact, Mentor has an excellent series of white papers titled “Is This a Manufacturing Revolution?” from their Valor Division, but first a bit about Michael.

Michael spent more than 25 years with Sony Electronics Manufacturing Systems which resulted in the spin out of Valor Computerized Systems Ltd. Valor was a recognized leader of productivity improvement software across the electronics design and manufacturing supply chains that “simulate, optimize, monitor and control the production lifecycle of electronics products, enabling companies to design and manufacture more efficiently, cost effectively and with better quality”. In 2010 Mentor Graphics acquired Valor and that is how Michael arrived at Mentor.

The first question I asked Michael was: when will electronics manufacturing come back to the United States? Given the advancement of robotics, artificial intelligence, and cloud computing, manufacturing is now a very small percentage of the electronic product cost equation. In fact, according to Michael, Apple’s percentage cost of manufacturing in China is now about 1%. The answer to my question is, of course, described in detail in the eight-part white paper series:

Part 1: Stop the Leaking Factory
Part 2: What Does the Industry 4.0 Factory Look Like?
Part 3: The Customers’ Perspective
Part 4: How to Get Started
Part 5: Making the Connections
Part 6: Staying Flexible
Part 7: The ROI of Change
Part 8: Risks and the Future

Michael and I also talked about Industry 4.0 and Mentor’s Open Manufacturing Language (OML) specification. You can get a good look at Industry 4.0 from Wikipedia, so let’s talk about OML. Earlier this year Mentor launched the OML initiative, which is really IoT for manufacturing.

“For some time now we have seen and heard the demand for a comprehensive shop-floor communication standard that is detailed enough to support the next generation of computerization such as Industry 4.0 solutions,” stated Dan Hoz, General Manager of Mentor Graphics Valor Division. “With this initiative, Valor contributes the first step and sets the pace for the revolution in manufacturing for PCB assembly.”

I found this YouTube clip which nicely encapsulates our discussion:

You can read Michael’s blog for more information about OML HERE. The OML community website is HERE.

The challenge of course is legacy manufacturing equipment which is why Mentor came out with a secure plug-and-play IoT device you can read about HERE.

“The Valor IoT Manufacturing solution with the Open Manufacturing Language (OML) should revolutionize today’s automated electronics assembly industry. OML will bring much needed interoperability to the PCB manufacturing industry,” stated Dick Slansky, senior analyst, PLM & Industry, ARC Advisory Group. “Mentor’s plug-and-play, comprehensive, secure networking and connectivity solution is a significant milestone for the mass customization of manufactured electronics.”

Bottom line: the ultimate goal, of course, is singularity, where machine intelligence surpasses human intelligence. On the manufacturing floor, Industry 4.0 is a step in that direction, absolutely.


The Ising on the Cake

by Bernard Murphy on 10-27-2016 at 7:00 am

Just when you thought you knew all the possible foundations for computing, along comes another one. Forget von Neumann: this approach models Ising machines, systems built on solving a statistical ensemble model of ferromagnetism. The concept is quite simple. Imagine a lattice of magnetic dipoles/spins, each of which can only be in a “north-up” or “north-down” state. Summing the couplings between adjacent magnets, in their various states, gives an energy, and the goal is to minimize this energy.

As you might expect, this model maps neatly to various kinds of optimization problem. The travelling salesman problem (TSP) is one example, encountered in chip place and route tools, network routing, aircraft and truck routing and many other problems. TSP is considered an NP-hard problem though heuristic solutions are known and widely used. But improving on any of these solutions is always challenging since scaling up parallelism is quickly defeated by exponential growth in combinatorial complexity as the problem size grows.
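The energy being minimized can be sketched directly in code. Below is a minimal classical baseline, a simulated-annealing search over the same kind of Ising energy the optical machine targets; the coupling matrix J, cooling schedule, and parameters are all illustrative, not drawn from the papers.

```python
import math
import random

def ising_energy(spins, J):
    """E = -sum over pairs i<j of J[i][j] * s_i * s_j (no external field)."""
    n = len(spins)
    return -sum(J[i][j] * spins[i] * spins[j]
                for i in range(n) for j in range(i + 1, n))

def anneal(J, steps=20000, t0=2.0, seed=0):
    """Simulated annealing over +/-1 spins; parameters are illustrative."""
    rng = random.Random(seed)
    n = len(J)
    spins = [rng.choice((-1, 1)) for _ in range(n)]
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-3          # simple linear cooling
        i = rng.randrange(n)
        # Energy change if spin i were flipped
        dE = 2 * spins[i] * sum(J[i][j] * spins[j] for j in range(n) if j != i)
        if dE <= 0 or rng.random() < math.exp(-dE / t):
            spins[i] = -spins[i]
    # Final greedy sweeps: accept only downhill flips
    improved = True
    while improved:
        improved = False
        for i in range(n):
            dE = 2 * spins[i] * sum(J[i][j] * spins[j] for j in range(n) if j != i)
            if dE < 0:
                spins[i] = -spins[i]
                improved = True
    return spins, ising_energy(spins, J)

# Ferromagnetic coupling on 4 fully connected spins: the minimum is
# all spins aligned, with energy -6 (six pairs, each contributing -1).
J = [[0 if i == j else 1 for j in range(4)] for i in range(4)]
spins, energy = anneal(J)
```

For a TSP-style problem the couplings J would instead encode tour constraints and city distances; that mapping is exactly what lets Ising hardware attack such optimization problems.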

While neural net and quantum annealing approaches are also being explored, advances based on Ising modelling have been reported recently. Rather than building spin lattices this approach uses laser pulses circulating through a ring cavity. Pulses are generated from a pumping laser as optical parametric oscillators (OPOs), a particular form of squeezed light. Each of these, above an oscillation threshold, splits into one of two possible phases which can model Ising spins. The ring cavity is designed to allow a fixed number of evenly-spaced OPOs to circulate through the ring at any one time, thus modelling a set of spins.

Next you have to model the coupling between the spins. In one approach, optical taps into the cavity take a small percentage of an OPO, delay it for an integral number of OPOs in the chain and couple that percentage back into the next OPO in line. There are N-1 taps for a cavity supporting N OPOs. This enables modeling a coupling between the i-th and j-th spins for any (non-equal) i and j. (The optical tap approach becomes very cumbersome with increasing N, so recent approaches have switched to electronic methods to add delay.)

Of course this ring of OPOs decays over time and the decay rate for any OPO depends on the phase states in the OPO. The decay rate of the system of OPOs can be engineered through couplings to model the energy profile (Hamiltonian) of a target Ising problem. (You’ll have to take that on trust or follow the third link below.) Now the OPO system is pumped by a laser, starting below the oscillation threshold. As the gain through pumping is slowly increased, it will eventually reach the lowest point in that profile which is a threshold for oscillation/resonance and that will self-reinforce. This, by the way, is known as the minimum gain principle, intuitively reasonable but not provably true. Finally, by observing the relative phase state of OPOs, resonances and hence minima in the problem space can be detected.

So there you have it. Technically, I don’t imagine other optimization techniques are in great danger in the near future. This is a very complex technique requiring some very sophisticated control of laser optics, probably not reducible (near-term) to a chip. Still, it does have one very interesting characteristic. All optimization techniques I know start on the problem curve and move around to (hopefully) find the lowest point. This technique starts below the problem curve and effectively moves a global metric (OPO system gain) up towards the curve. By construction the first thing it will hit is the global minimum (at least if the minimum gain principle is valid).

As a footnote, I checked a number of articles on this topic and found all the “popular” articles (Wired, a Stanford report, IEEE Spectrum) ducked any real attempt at explaining the method, which left me feeling cheated. This blog is my attempt to go at least a little deeper. Apologies in advance to experts in the field if I butchered the explanation – feel free to correct me. You can read a lightweight write-up HERE and the arXiv papers I relied on most HERE and HERE.

More articles by Bernard…


Automation for managed system-of-systems design

by Don Dingee on 10-26-2016 at 4:00 pm

Anybody who has done any bus & board system design knows the problem. Merchant boards typically have standardized pinouts (after years of haggling in standards organizations) for the backplane bus, and a group of user-defined pins for daughtercard I/O. Homegrown systems usually have a just-as-carefully defined proprietary backplane bus pinout. Once defined, changing a signal name or pin location requires an act of deity.

Or a mistake. Each board design team starts with the pinout table, or if they are lucky they get a connector model in a PWB design tool library. When systems were small, and there were fewer than 20 boards on a backplane, checking interconnects wasn’t too terrible. Every once in a while, someone would miss a pin, or transpose the direction of high-to-low order bits on a parallel signal group. Hopefully that was caught before the smoke test. The offending “odd-man-out” board would be sent back for cut and jump rework, and a revision scheduled.

Systems have gotten huge; in many cases, they are now systems-of-systems. It isn’t unusual to see hundreds of boards with tens of thousands of interconnects including backplane traces, backplane cabling (usually for the user-defined pins), and over-the-top cabling. Boards and cables are parceled out to separate teams, along with some control documents – typically an Excel spreadsheet with the signal tables, or maybe dumb graphics in Visio. The entire system relies on everyone reading the documentation and interpreting it correctly.

The risk of a disconnect somewhere in the system is also huge.
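The cross-check those control documents are supposed to enable can be sketched in a few lines: every shared backplane pin must carry the same signal name on every board that uses it. The board names, pins, and signals below are hypothetical; this is roughly what a tool (or a homegrown script) has to verify at much larger scale.

```python
def find_mismatches(boards):
    """boards: {board_name: {pin: signal_name}} for one shared connector.
    Returns a list of (pin, first_claim, conflicting_claim) tuples."""
    seen = {}     # pin -> (signal, first board that claimed it)
    errors = []
    for board, pinout in boards.items():
        for pin, signal in pinout.items():
            if pin in seen and seen[pin][0] != signal:
                errors.append((pin, seen[pin], (signal, board)))
            seen.setdefault(pin, (signal, board))
    return errors

# Hypothetical two-board example: B1 is transposed on the io_card.
boards = {
    "cpu_card": {"A1": "SYS_CLK", "A2": "RESET_N", "B1": "D0"},
    "io_card":  {"A1": "SYS_CLK", "A2": "RESET_N", "B1": "D1"},
}
for pin, first, second in find_mismatches(boards):
    print(f"pin {pin}: {first} disagrees with {second}")
```

A real system adds direction, voltage domain, and cable segments to each entry, but the core rule check is this same comparison.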

Not to mention there is little (ok, no) ability to do trade-off analysis. Mentor Graphics’ previous Xpedition update added the ability to optimize pins from the board through the IC package – but what happens if that optimization would benefit neighboring boards? That is a very hard change to drive given semi-manual methods such as Excel or Visio.

Dave Wiens, Xpedition Product Manager in the Board Systems Division at Mentor Graphics, sums this up as the contrast between the norm of split & fragment design and managed system-of-systems design. He’s been pursuing a new release of Xpedition for multi-board system design with several goals. Rather than holding projects together with desktop office tools, teams can collaborate on “correct by construction” system design.

Reentering data becomes a thing of the past, and design rules can easily check connectivity across boards, backplanes, and cables, eliminating errors. Change management is built-in with controlled synchronization; users also have the capability to accept or reject changes. Disparate teams on large projects that might talk to each other infrequently have fewer worries about the currency of their design information.


One of the more challenging topics in systems design is thermal verification. Instead of the conservative “rule of thumb”, a multi-board system can actually be modeled in this release of Xpedition with confidence, and recommended changes can be driven across boards quickly. The same goes for signal integrity or mechanical issues.

The strength of Xpedition is its underlying data management infrastructure. This is really about a single design flow for all boards, backplanes, and cables, with everyone working in the same project repository with the same tools. Weeks of manual synchronization are reduced to automation, complete with tips and color coding. Cables can be created in schematic form, then optimized for size and weight using 3D modeling.

Much more detail on this announcement is in the Mentor press release:

Mentor Graphics Launches Xpedition Multi-Board Systems Design Solution for Seamless Multi-Discipline Collaboration

Wiens says he completely understands existing processes are in place, especially on the change management front. He hopes that as users adopt the new Xpedition release, over time older system-of-systems design approaches yield to the increased efficiency of automation. The new ideas have been shaped with the help of lead customers such as ASML, factoring in a wealth of real-world experience. Any customer designing systems with boards, backplanes, and cables should benefit.


DFT Approaches for Giga-gate SoC Designs

by Daniel Payne on 10-26-2016 at 12:00 pm

In the early days of IC design there were arguments against using any extra transistors or gates for testability, because the extra silicon area would drive up the cost of the chip and the product. Today we are older and wiser, realizing that the ability to quickly test each new SoC, before packaging and even in the field, pays off in product pricing. The biggest annual event in the test world has to be the International Test Conference (ITC), coming up November 15-17 in Fort Worth, Texas. I was able to speak with Ron Press of Mentor Graphics by phone about what they are doing at ITC this year. Here’s an overview:

  • Keynote by Wally Rhines, The Business of Test: Test and Semiconductor Economics (Tuesday, Nov 15, 9AM)
  • Tutorial 5, Diagnosis-driven Yield Analysis (Sunday, Nov 13, 1PM – 4:30PM)
  • Tutorial 9, Mixed-signal DFT and BIST: Trends, Principles and Solutions (Sunday, Nov 13, 1PM – 4:30PM)
  • Session 2.1, Test point Insertion in Hybrid Test Compression/LBIST (Tuesday, Nov 15, 2PM – 4PM)
  • Session 2.4, Minimal-Area Test Points for Deterministic Patterns
  • Session 17.1, Automated Measurement of Defect Tolerance in Mixed-Signal ICs (Thursday, Nov 17, 1:30PM – 3:30PM)

Full details of Mentor at ITC are online here. By the way, Ron is the General Chair for ITC 2016. The big three themes that I learned from Ron about testability this year were:

  • Giga-gate designs require hierarchical DFT approaches
  • Automotive test is growing and demanding
  • FinFET designs have unique diagnosis requirements

SoC designs now reach billions of gates, so how are you going to test that kind of a chip in a reasonable amount of time? The answer is to divide and conquer, applying the concept of hierarchical test to reduce ATPG run times, minimize RAM usage, and generate ATPG results much earlier in the design flow. This methodology lets test engineers create and validate patterns for each block, then automation enables reuse of the patterns as each block is placed in an SoC.
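As a rough illustration (not Mentor’s actual flow), the divide-and-conquer idea amounts to generating patterns once per unique block and then retargeting them for every instance of that block in the SoC, instead of running flat ATPG on the whole chip. All names here are hypothetical.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def generate_block_patterns(block_type):
    """Stand-in for a per-block ATPG run; the expensive step in real life."""
    return [f"{block_type}_pat{i}" for i in range(3)]

def retarget(soc_instances):
    """soc_instances: list of (instance_name, block_type).
    Reuses each block's cached patterns for every instance of it."""
    top_level = []
    for inst, btype in soc_instances:
        for pat in generate_block_patterns(btype):   # cached after first call
            top_level.append(f"{inst}/{pat}")        # map pattern to instance pins
    return top_level

# Two cpu instances share one ATPG run; only cpu and dsp are generated.
soc = [("cpu0", "cpu"), ("cpu1", "cpu"), ("dsp0", "dsp")]
pats = retarget(soc)
```

The payoff scales with reuse: ATPG effort grows with the number of unique block types, not the number of instances, which is why giga-gate designs lean on this approach.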

Mentor also offers Embedded Deterministic Test (EDT) test points that work with test compression to further reduce pattern volume, delivering 2X to 4X additional compression on top of what the TestKompress approach offers.

In the automotive world the reliability of electronics is paramount, so the industry has created the ISO 26262 standard, and EDA and semiconductor IP vendors have responded with both memory and logic BIST approaches. Field return analysis (RMA) is another key requirement for automotive. Mentor has created much of the test technology needed to meet these strict automotive requirements.

The last of the three major points is diagnosing faults in FinFET technology, where the challenge is to identify the root cause of a particular type of physical failure. The technique, called Root Cause Deconvolution (RCD), uses statistical enhancement to pinpoint failures, starting from the logical diagnosis and ending at the layout location.

Transistor-level defects in FinFET designs can be located using cell-aware diagnosis for all of the pattern types (stuck-at, transition, cell-aware, etc.). The tools look at the FinFET layout and circuit schematic, then create a cell-aware fault model specific to FinFETs. Mentor has been using this sophisticated fault modeling at the 32nm node and below.

If you attend ITC this year, consider checking out the posters, where actual users of Mentor tools (Samsung, Teradyne, Spreadtrum) discuss their test experiences.


New Frontiers in the Storage System Market Call for the Best of ICE and Virtual Emulation

by Richard Pugh on 10-26-2016 at 7:00 am

The storage market has reached what Andy Grove once described as “…a strategic inflection point.”[1] This is the stage in the life of a business when its fundamentals are about to change.

Changing fundamentals in the storage market—where solid state drives (SSD) are now at the forefront of multiple storage applications, from enterprise-based datacenters to PCs—create both great opportunities and significant challenges. Both arise from new technical innovations, emerging standards, and the desire to reach higher performance, increase storage capacities, and meet the needs of system-level infrastructures—all at a lower cost per device.

Delivering solutions for these challenges is where the Mentor Veloce® emulation platform demonstrates its strength as the hub of a sophisticated and comprehensive functional verification solution, one that gives design teams a catalyst to differentiate their enterprise storage devices using SSD and NAND technologies.

Emulation can verify the hundreds of millions of gates and multiple protocols now seen in SSD controllers along with the complex software that drives them. It has the speed to support full-chip hardware/software co-verification so that the RTL IP and software specific to a particular SSD controller can be verified together.

Many protocols are still widely used in storage systems today (including SATA, SAS, PCIe, and NVMe). SSD device manufacturers need an emulation tool kit that offers solutions for them all. The Veloce emulation platform is without peer in this regard. Veloce provides in-circuit emulation (ICE) components (iSolve), virtual models (VirtuaLAB), and a unique new application (Veloce Deterministic ICE) to verify the design using the chosen host interface. Veloce also provides all the NAND, DDR, and NOR models used in conjunction with the SSD controller and software.

Figure 1: SSD controller architecture.

The Veloce iSolve library offers a full complement of hardware components to build a robust in-circuit emulation (ICE) flow, which is needed for many SoC verification scenarios.

The Veloce Deterministic ICE App complements and extends the usability of an ICE-based environment by delivering a repeatable and virtual debug flow. In addition to offline SW debug, the Veloce Deterministic ICE App enables advanced debug capabilities, power analysis, and coverage closure methodologies.

And the Veloce VirtuaLAB environment represents a new generation of verification solutions delivering high-speed verification for multiple host protocols and memory devices, HW/SW system-level debug, power analysis, and system performance analysis.

SSD on ICE
When an SSD SoC design needs to be connected to real devices or custom hosts, the DUT (instantiated in the emulator) must be connected to physical hardware. In this case, teams use an emulation platform to set up the test environment by connecting the required peripherals/hosts using speed adapters/bridges to communicate with the SoC design mapped to the emulator. Software teams make use of the ICE environment for firmware development and to run real applications. Verification teams use it to exercise various test methods to interact with the interfaces and verify the functionality of their SoC designs. Most designs today have an embedded CPU, and ICE is used for testing OS boot cycles as well. ICE is also used for connecting proprietary hardware or proprietary operating systems (OS) to reproduce issues found in prototype or in a post-silicon lab setup.

Making ICE Repeatable
There are significant challenges in using ICE in certain scenarios, many of which are found in SSD verification. These include limitations on trace depth, long and iterative debug cycles, random and asynchronous events, and inflexibility in how the emulator can be deployed and shared. In addition, advanced verification techniques, such as power estimation, are not the best fit for a traditional ICE environment.

The Veloce Deterministic ICE App addresses these challenges by creating a virtual debug model of an ICE run. Significantly, it adds determinism to the debug environment by making the test run repeatable cycle-by-cycle. It does this by generating a replay database to re-run or repeat the same test without the need for hooking up to ICE targets.
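Conceptually, a replay database is a record-and-replay log: responses from the live target are captured on the first run, then fed back on later runs so the same test becomes deterministic without any target attached. A minimal sketch of that idea follows, with a random byte standing in for a live ICE target (all names here are illustrative, not Veloce APIs).

```python
import random

class Recorder:
    """First (live ICE) run: capture every target response in a log."""
    def __init__(self):
        self.log = []

    def read_target(self):
        value = random.randrange(256)  # stand-in for a live ICE response
        self.log.append(value)
        return value

class Replayer:
    """Later runs: replay the log; no ICE target attached."""
    def __init__(self, log):
        self._log = iter(log)

    def read_target(self):
        return next(self._log)

def run_test(io):
    # Test behavior depends on target responses, so a live run is
    # nondeterministic; replaying the log makes it cycle-repeatable.
    return [io.read_target() ^ 0xFF for _ in range(4)]

rec = Recorder()
first = run_test(rec)                  # live run builds the "replay database"
second = run_test(Replayer(rec.log))   # offline re-run, identical behavior
assert first == second
```

Once a run is repeatable this way, expensive analyses (waveform dumps, power estimation, coverage) can be applied to the replay instead of tying up the live setup.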

Figure 2: ICE run.
Figure 3: Using the replay database without ICE targets.

While in replay mode, a user can choose to dump waveforms for an entire design for the duration of a test, or activate various other debug features (such as live monitoring of important signals using streaming waveforms, enable displays, and protocol monitors), or do both. Having the ability to stop and inspect both data and full waveforms provides a rich debug platform and increased productivity, in addition to efficient use of emulation resources.

The Veloce Deterministic ICE App makes the test environment portable to other emulators or teams located in different places, as the test is no longer dependent on the external ICE hardware. It also gives the flexibility to use other Veloce Apps, like power analysis, SW debug, coverage, and assertions, that would not have been feasible without this technology.

The Virtual Lab
Virtualization is the game changer for enterprise verification. Virtualization, which Mentor pioneered almost 10 years ago, allows emulators to be moved to centralized data centers to establish company-wide virtual platforms that support multiuser, software-driven SoC verification in a 24/7 enterprise server environment. Verification engineers can now access an emulator from their desktop, even from remote locations thousands of miles away from the emulator.

Peripherals are virtualized via the Veloce VirtuaLAB environment. VirtuaLAB provides the same host interfaces as iSolve, including NVMe, but instead of using external hardware that must be cabled to the emulator, VirtuaLAB uses software protocol models. And because VirtuaLAB uses the same IP as the ICE solutions, it delivers the same functionality as iSolve hardware peripherals.

Importantly, VirtuaLAB delivers emulation performance equivalent to that of ICE, making it an attractive alternative that is better suited to multiple users and applications.

VirtuaLAB allows an SSD controller designer to run and debug the same software applications in the VirtuaLAB environment as they would on the real hardware. VirtuaLAB also eliminates the need for hardware speed adapters/bridges, and supports third-party performance analysis and power analysis applications. This means designers can do all of the same statistics, analysis, and metrics that they would do on the real design, but pre-silicon.

The Best of Both Worlds
With the Veloce emulation platform, verification teams have access to the best of both worlds—ICE and virtual emulation—powered by the world’s most versatile and flexible emulation technology.

The Veloce emulation platform is radically recasting the emulation landscape, making emulation friendlier and more useful, while delivering all of the speed, visibility, and performance of traditional ICE-based emulation. The Veloce emulation platform is uniquely built with highly scalable hardware, an extensible operating system, proven virtual solutions, and a growing library of apps that solve application-specific verification challenges.

To learn more about the rise of SSD technology and Mentor’s emulation solutions for storage system designs, check out the new whitepapers Veloce Delivers Best of ICE and Virtual Emulation to the SSD Storage Market and Using the Veloce Deterministic ICE App for Advanced SoC Debug.

[1] Only the Paranoid Survive, Andy Grove, Crown Publishing Group, May 5, 2010


2016 semiconductor capex highest in 5 years

by Bill Jewell on 10-25-2016 at 4:00 pm

Global semiconductor capital expenditures (capex) are expected to return to the level of 2011 either this year or next. 2011 was the record year for capex as the industry returned to growth following the 2008-2009 recession. IC Insights’ August 2016 forecast called for 3.5% growth in capital spending to reach $67.1 billion, the highest level since $67.4 billion in 2011. Gartner’s October 2016 projection was a slight 0.3% downturn in 2016 followed by 7.4% growth in 2017 to reach $69.3 billion, surpassing the $65.8 billion of 2011.

Semiconductor manufacturing equipment accounts for about half of semiconductor capital spending. In August 2016, SEMI called for 4.1% growth in equipment in 2016, accelerating to 10.6% in 2017. Gartner’s October forecast has fab equipment growing in the 6% to 7% range in both 2016 and 2017. Neither SEMI nor Gartner has 2017 equipment returning to the 2011 levels of $44 billion to $45 billion.

Solid growth in semiconductor fab equipment in 2016 is supported by data from SEMI and the Semiconductor Equipment Association of Japan (SEAJ). Combined SEMI and SEAJ data shows third-quarter 2016 semiconductor manufacturing equipment billings were $8.5 billion, up 7% from the prior quarter and up 12% from a year ago. Bookings were $8.6 billion, up 27% from a year ago but down 2% from the prior quarter. The resulting book-to-bill ratio was 1.02. We at Semiconductor Intelligence are projecting full-year 2016 equipment billings will be up 12% from 2015. The slowing in bookings could indicate billings are close to a peak. However, even if billings remain at the third-quarter 2016 level through the end of 2017, 2017 growth would be around 8%.

    Which companies are driving semiconductor capital spending? For several years, capex has been dominated by microprocessor giant Intel, the largest memory company Samsung, and the major wafer foundry TSMC. These three companies have accounted for 44% to 56% of total capex for each of the last five years. The table below shows the largest companies in capital spending based on 2016 projections. Most of the 2016 numbers are based on company guidance. IC Insights estimates were used for Samsung, GlobalFoundries and total capex.

    Samsung should have the largest capex in 2016 at $11.0 billion, according to IC Insights. Samsung has had the largest capex for several years. Intel and TSMC are each projecting 2016 capex of about $9.5 billion. The four largest memory companies account for 37% of capex. Besides Samsung, Micron Technology capex is $5.4 billion (based on fiscal 2016 ended September 1) and SK Hynix projects $5.2 million. Micron should pass SK Hynix to become the second largest memory company in capex for the first time. Flash Ventures, a joint venture flash memory company between Toshiba and SanDisk (now part of Western Digital) should spend $3.5 billion.

    The four largest foundry companies total 26% of projected 2016 capex. After TSMC, the next largest companies are GlobalFoundries at $3.0 billion, SMIC at $2.5 billion and UMC at $2.2 billion in capex. GlobalFoundries has lowered capex each of the last two years after peaking at $5.0 billion in 2014. China-based SMIC has been aggressively increasing capex averaging 50% annual growth over the last four years to hit $2.5 billion in 2016.

    Other companies make up 23% of projected 2016 capex. This category includes major semiconductor companies such as Infineon Technologies, NXP Semiconductors (now including Freescale), Renesas Electronics, STMicroelectronics and Texas Instruments. As new wafer fabs have become more expensive (now costing several billion dollars), these companies are depending less on internal fabs and increasingly turning to foundries. Intel is an exception, since it has large economies of scale and sees its process technology as a competitive advantage. The memory companies also have economies of scale. In addition, the memory market is highly commoditized and price-sensitive, making it difficult for a company to compete without its own wafer fabs.

    The “other” category has been declining as a percentage of total semiconductor capex, from 39% in 2010 to 23% in 2016. The category will continue to decline in the future as capex is increasingly dominated by memory companies, foundry companies and Intel.


    End-to-End Secure IoT Solutions from ARM

    End-to-End Secure IoT Solutions from ARM
    by Bernard Murphy on 10-25-2016 at 11:30 am

    ARM announced today a comprehensive suite of solutions for IoT support, from IP optimized for applications in this space all the way to cloud-based support to manage edge devices in the field. Their motivation is to provide a faster path to secure IoT, from the chip to the cloud. One especially interesting component of this solution is a cloud-based software-as-a-service (SaaS) to manage, provision and update edge devices through a common platform. Simply put, ARM’s offering is about how little you now have to build or integrate yourself to get an end-to-end secure IoT solution up and running.

    Before we go to the cloud, let’s start with what ARM is doing for edge nodes. First, they have introduced two new microcontroller cores: Cortex-M33 and Cortex-M23. These cores build on the ARMv8-M architecture and are the first in the Cortex-M family with TrustZone built in. The M33 provides a lot of flexibility, with DSP and FPU on board and with a co-processor interface, yet is 80% smaller than the Cortex-A5. ARM anticipates that this platform will become the mainstream MCU core for secure embedded designs. The M23 is 75% smaller still and 50% more energy-efficient. To give an idea of how low-power this can be, think about pulling an insulin pen out of a holder: sufficient kinetic energy can be harvested from this action to support battery-free operation.

    TrustZone architecture, now available in these new M-class cores, provides similar capabilities to those available in other families. A Corelink SIE-200 fabric connects the processor to peripherals, mediating secure-world versus normal-world access under control of the processor which itself transparently time-slices between the two worlds with no need for programmer intervention. You get secure operation without needing an extra security CPU.

    Cryptocell-312 adds the security resources required to build a trusted execution environment (TEE), through faster and lower-power cryptography performance. It also offers symmetric and asymmetric ciphers, hashing and random number generation, lifecycle management and root-of-trust controls, along with many more features. And it’s configurable so you can dial area and power down to address just what you need in your solution.
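    To make the root-of-trust idea concrete, here is a minimal software sketch of what CryptoCell implements in silicon: each boot-stage image is accepted only if it authenticates under a key anchored in the root of trust. This is purely illustrative Python, not ARM's actual scheme or API, and the key name is hypothetical:

```python
# A minimal software sketch of the root-of-trust verification idea that
# CryptoCell-312 supports in hardware. Illustrative only: the key, the
# use of HMAC-SHA256, and the function names are all assumptions here.

import hashlib
import hmac

ROOT_KEY = b"device-unique-root-key"  # hypothetical key provisioned at manufacture

def sign_stage(image: bytes) -> bytes:
    """Producer side: authenticate a boot-stage image under the root key."""
    return hmac.new(ROOT_KEY, hashlib.sha256(image).digest(), "sha256").digest()

def verify_stage(image: bytes, tag: bytes) -> bool:
    """Boot-ROM side: accept the stage only if its tag checks out."""
    return hmac.compare_digest(sign_stage(image), tag)

bootloader = b"stage-1 bootloader image"
tag = sign_stage(bootloader)
assert verify_stage(bootloader, tag)             # genuine image boots
assert not verify_stage(b"tampered image", tag)  # modified image is rejected
```

In a real device the asymmetric ciphers and lifecycle controls listed above replace the shared key, but the chain-of-verification structure is the same.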

    Another very important aspect supported in this series is secure debug. Cryptocell allows you to define and control a debug policy allowing differing levels of debug access in manufacturing, to the OEM and (per OEM grants) to field-deployment and maintenance teams.

    Then there’s the Cordio radio. The latest release supports both Bluetooth 5 and 802.15.4 for ZigBee and Thread, covering the most popular choices for IoT. You can get the radio as a hard macro in TSMC 40LP/ULP or 55LP/ULP processes or in UMC 55ULP, or you can use the link-layer controller RTL and stack with a third-party radio front-end and process of your choosing. You can also have both Bluetooth and 802.15.4 in one Cordio-C50 macro with a modest increase in area and you can dynamically switch between modes. ARM mentioned that it was also feasible to operate Cordio on harvested energy, where appropriate.

    ARM also offers all of this together in the pre-packaged Corelink SSE-200 subsystem: an ARMv8-M core, CryptoCell 312, the Cordio radio, memories and peripherals, all tied together with the CoreLink fabric and built on top of the Artisan IoT physical IP optimized to a low-power IoT use-model and targeted to the TSMC 40nm ultra-low-power process. That subsystem gets you a fast, low-power, secure and low-risk solution ready-made, allowing you to focus on adding your own special sauce.

    Which brings me finally to mbed Cloud. A high security edge device isn’t very useful unless you also secure cloud-based management of those devices. Now think about trying to integrate a mix and match solution between multiple providers of devices and cloud access. I have a hard time imagining how you could avoid deploying a solution with security holes and power-wasting communication bugs. Third-party applications still have a role, but sitting on top of a secure, low-power foundation managed by one provider. ARM’s extension to provide the cloud part of this foundation through a SaaS solution is a new departure of course, but it seems to me unavoidable given security demands.


    mbed Cloud has four major objectives: to be multi-cloud capable, to cover any device (not just ARM-based systems), to be very energy-efficient in management of devices and to secure every transaction. ARM acknowledges it will have to fold in legacy networks – devices, OSes, gateways and more – so the management solution has to span all of these. For connectivity, mbed Cloud will communicate through CoAP and OMA LWM2M; for provisioning, it will take care of injecting security assets into a device and will manage access rights through the device lifecycle. And it provides fail-safe, secure updates through broadcast- and mesh-friendly packages.
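    The fail-safe update property is worth unpacking: a new image is committed only after it verifies, and a corrupt or partial download leaves the device on its previous firmware. A toy sketch of that commit-or-rollback logic (function names and the bare SHA-256 check are my own simplifications, not mbed Cloud's API):

```python
# A toy model of fail-safe firmware update: commit the new image only
# after its digest verifies; otherwise keep running the old one.
# Names and the plain-hash check are illustrative, not mbed Cloud's API.

import hashlib

def apply_update(current: bytes, new_image: bytes, expected_sha256: str) -> bytes:
    """Return the image the device should run after an update attempt."""
    if hashlib.sha256(new_image).hexdigest() == expected_sha256:
        return new_image  # verified: commit the new firmware
    return current        # corrupt or partial download: keep the old image

old = b"fw-1.0"
new = b"fw-1.1"
good_digest = hashlib.sha256(new).hexdigest()

assert apply_update(old, new, good_digest) == new           # clean update
assert apply_update(old, b"fw-1.\x00", good_digest) == old  # failed download
```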

    Together with mbed OS 5.2, mbed Cloud 1.0 was announced at ARM TechCon 2016. The solution is already open to a number of lead partners in smart factory, industrial IoT, asset tracking and healthcare applications. ARM expects the release to be more broadly available in Q1 2017. The business model for cloud support apparently will be similar to other SaaS models – an OEM subscribes to just the features it uses. You can learn more about ARM IoT solutions HERE.

    More articles by Bernard…


    Emergence of Segment-Specific DDRn Memory Controller

    Emergence of Segment-Specific DDRn Memory Controller
    by Eric Esteve on 10-25-2016 at 7:00 am

    The semiconductor industry is served today by memory devices supporting various protocols, like DDR4, DDR3, LPDDR4, LPDDR3, GDDR5, HBM, HMC, etc. The trend is clearly to define application-specific memory protocols and, in some cases, application-specific devices. But developing many different memory controller IPs is resource- and time-consuming and not the best option for a vendor. For the chip-maker developing a System-on-Chip (SoC), the goal is to select a memory protocol that allows integrating cost-optimized memory devices offering the best performance-to-cost ratio. Among the protocols listed above, the DDR3 (DDR4) and LPDDR3 (LPDDR4) devices offer by far the best cost/performance compromise. The DDRn or LPDDRn protocols support a wide range of applications, from low-power mobile application processors to high-performance, high-bandwidth infrastructure applications like servers, storage or networking.

    The question is how to define a single memory controller that can be optimized, through parameterization, to best support applications as varied as high-end consumer, mobile and infrastructure. The memory controller IP is the most crucial piece of design in an SoC: if this IP fails, the SoC is simply not usable. That’s why the SoC chipmaker will benefit from a single memory controller design that is more robust, stable and easier for the IP vendor to maintain than a variety of hard macros. This article summarizes a white paper from Cadence, “Emergence of Segment-Specific DDRn Memory Controller IP Solution”; the technical data relate to Cadence’s memory controller IP products developed in 28nm and 16nm.

    SoCs developed for networking applications must deliver high bandwidth while running high-performance computing workloads and providing large memory capacity. These bandwidth, capacity and performance requirements translate directly to the memory controller IP, which is expected to deliver more data per cycle at the highest possible frequency. SoCs targeting infrastructure segments also have to provide a rich set of enterprise-class RAS (reliability, availability, and serviceability) features.

    The supported protocols reduce to DDR3, DDR3L, and DDR4, as there is no need for LPDDRn support. The maximum data rate is 3200Mbps (note that overclocking is not supported, in order to preserve data integrity and reliability). The data bus is set to 72 bits by default (it can be 16, 32, 40, 64 or 72 bits wide), and the address bus is set to 18 bits by default, allowing the user to access the largest possible memory space. Several features have been specified to best fit the requirements of infrastructure applications, like the DQ-to-DQS ratio (set to 4:1, compared to 8:1 in the other two configurations) to minimize the maximum skew between strobe (DQS) and data (DQ). The memory controller supports per-rank leveling (PRL) as well as write leveling for x4 DRAM to maximize data integrity, the goal being to optimize system reliability.
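    The width and ratio numbers above imply a strobe count directly: a 72-bit bus at a 4:1 DQ-to-DQS ratio needs 18 strobe signals, versus 9 at the 8:1 ratio used in the other segments. A quick sanity-check sketch (the function is mine, not part of the Cadence IP's configuration interface):

```python
# The DQ-to-DQS ratio determines how many strobe (DQS) signals pace the
# data (DQ) bus. This helper is illustrative only; it just checks the
# arithmetic behind the infrastructure configuration described above.

VALID_WIDTHS = {16, 32, 40, 64, 72}  # supported data-bus widths per the text

def dqs_count(bus_width: int, dq_per_dqs: int) -> int:
    """Number of DQS strobes for a given bus width and DQ-to-DQS ratio."""
    if bus_width not in VALID_WIDTHS:
        raise ValueError(f"unsupported bus width: {bus_width}")
    return bus_width // dq_per_dqs

print(dqs_count(72, 4))  # infrastructure default, 4:1 ratio -> 18 strobes
print(dqs_count(72, 8))  # the other segments' 8:1 ratio -> 9 strobes
```

Halving the DQ group per strobe doubles the strobe count (and pin cost), which is the price paid for tighter skew control in the infrastructure configuration.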

    In the infrastructure segments, the CPU must be able to address as much memory as possible, and support for dual inline memory modules (DIMMs) is a must-have feature. The Cadence memory controller IP solution supports registered DIMM (RDIMM), unregistered DIMM (UDIMM) and load-reduced DIMM (LRDIMM). In fact, these DIMM configurations are not supported in the other segments; the feature is infrastructure-specific.

    The white paper describes the other two sets of configuration features, defined for the mobile segment and for high-end consumer applications. Developing 100% application-specific IP would be ideal, but it is not realistic. Based on a single architecture, this memory controller IP is highly configurable, and this configurability allows it to support the requirements of applications as different as servers/storage, mobile application processors and high-end consumer devices.

    The white paper “Emergence of Segment-Specific DDRn Memory Controller IP Solution” is available on the Cadence website: http://ip.cadence.com/uploads/1102/wp-dip-ipnest-ddr-for-apps-final-pdf

    By Eric Esteve from IPNEST


    FPGAs for a few thousand devices more

    FPGAs for a few thousand devices more
    by Don Dingee on 10-24-2016 at 4:00 pm

    An incredibly pervasive trend at last year’s ARM TechCon was the IoT, and I expect this year to bring even more of the same, but with a twist. Where last year was mostly focused on ultra-low power edge devices and the mbed ecosystem, this year is likely to show a better balance of ideas across all three IoT tiers. I also expect a slew of ADAS applications to hit the show.

    The two IoT tiers besides the edge – gateway, and infrastructure – have room for bigger, more capable chips with either power-over-Ethernet, or wall power available. ADAS applications have vehicle power to work with, and while they have thermal restrictions limiting power dissipation, we’re also seeing larger chips to handle tasks like embedded vision, radar, and lidar.

    Every time I bring up “IoT” and “FPGA” in the same sentence, people pounce. I get it, though my first response is that the IoT does not equal edge sensors, actuators and mobile devices. FPGAs don’t fit the power profile of most edge devices, but with fog computing taking on a larger role, things are starting to change.

    Economically, we have the “a billion is the new million” problem, and the lower volume applications don’t make sense for custom silicon starts. Somebody still has to take care of those applications needing a few thousand devices. In the past, that was often a merchant microprocessor on a COTS single board computer with daughtercard mezzanines to customize I/O requirements.

    We’ve also talked a lot about optimization making sense for IoT chip starts, and most FPGA designs don’t seem optimized versus an ASIC solution. Yet, these applications are ripe for solutions such as Xilinx Zynq, combining the benefits of dual core processing with programmable logic. For decades, FPGAs have succeeded at relatively low volume, heavily customized applications such as broadcast video solutions and defense signal processing. Industrial IoT solutions call for low to mid-range volumes in the gateway and infrastructure tiers.

    Optimization is an interesting discussion. It gets really hard to optimize things when what you really need is flexibility. With specifications moving around, consortia merging, and market forces still not indicating a clear winner for industrial IoT solutions, FPGAs present an opportunity. Designs can be completed in programmable, accelerated hardware, fielded, and changed quickly to respond to the next customer requirement.

    ADAS is a bit more complicated, because there are millions of cars out there and the volumes are attracting merchant chip starts. However, we are seeing the same fragmentation – he who owns the algorithms and the maps will ultimately win. Committing to a strategy, be it GPUs, CNNs, DSPs, or hardware-accelerated instructions, is expensive. It might win a particular customer and completely miss the wants of another. There are questions of differentiation and ecosystems and who is willing to make joint investment instead of demanding NRE.

    Experimentation is rife in the ADAS space. In a lot of ways, the algorithm scientists own the problem right now. This is close to what John Bruggeman pitched several years ago, where the silicon would self-organize around the software. We’re a long way from ASICs doing that, but development tools such as Xilinx SDSoC, which takes algorithms directly from C/C++ to FPGA hardware, can approximate at least the compute-intensive part of the solution.


    One of the first press releases to cross my desk for this year’s ARM TechCon is from Aldec, parlaying Zynq technology into both ADAS and IoT applications. They are demonstrating two embedded development kits (EDKs) based on their TySOM family:

    • Their ADAS setup has their TySOM2 module plus an FMC with four camera interfaces streaming four First Sensor Blue Eagle cameras, complete with edge detection, colorspace conversion, and frame merging in programmable logic.
    • For IoT gateway applications, the TySOM1 is showing off MQTT and Amazon Web Services (AWS) integration with sensors of various protocols connected to the gateway. Aldec has been working with hardware-accelerated encryption for this platform, as well as adding more sophisticated vision sensors.
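    The edge detection in Aldec's ADAS demo is exactly the kind of regular, per-pixel kernel that maps well onto programmable logic. As a rough illustration of the computation involved (in plain Python, nothing like an actual FPGA implementation, and not Aldec's code):

```python
# A Sobel-style horizontal-gradient pass over a grayscale image: the kind
# of per-pixel kernel typically moved into programmable logic. Plain
# Python here, purely to illustrate the computation, not Aldec's pipeline.

def sobel_x(img):
    """Horizontal gradient magnitude for an HxW list-of-lists image."""
    h, w = len(img), len(img[0])
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]  # Sobel x kernel
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):  # borders left at zero
        for x in range(1, w - 1):
            out[y][x] = abs(sum(kx[j][i] * img[y - 1 + j][x - 1 + i]
                                for j in range(3) for i in range(3)))
    return out

# A vertical edge: intensity steps from 0 to 255 at column 3.
img = [[0, 0, 0, 255, 255, 255]] * 4
edges = sobel_x(img)
print(edges[1])  # [0, 0, 1020, 1020, 0, 0] -- response peaks at the step
```

Because every output pixel depends only on a fixed 3x3 neighborhood, the whole pass pipelines naturally in hardware, one pixel per clock, which is why camera pipelines like this live in the FPGA fabric rather than on the ARM cores.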

    Also in booth #215, Aldec will be showing co-emulation using ARM Cortex-A15 fast models running SCE-MI. More on the Aldec presence at ARM TechCon:

    Aldec to Showcase Xilinx Zynq-based ADAS and IoT Gateway Development Platforms at ARM TechCon 2016

    Maker modules took the IoT world by storm because they are only twiddling a few bits with an MCU or dealing with a couple of standard I/O ports off a mobile SoC. For the next wave of industrial IoT and ADAS applications, where customization and hardware acceleration of code are differentiators, Aldec and other Zynq-based module suppliers have a better formula.