
Stressed out about Electrostatic Discharge (ESD) or Electrical Overstress (EOS)?

by bkeppens on 07-28-2016 at 12:00 pm

Do not lose sleep worrying that your integrated circuits might fail during EOS/ESD events. Join us for the 38th annual EOS/ESD Symposium in Anaheim, CA in September. Experts in the field will address the latest research on EOS and ESD in the rapidly changing world of electronics.

As electronics continue to become commonplace in every aspect of our lives, including medical applications, the control of our homes, and our cars, cost and reliability are of utmost importance. To accommodate these requirements and overcome challenges, progress has to be made in creative ESD design, in innovative, comprehensive, and predictive verification methods, and in factory control standards and methods.

The 2016 EOS/ESD Symposium addresses this and more through tutorials, workshops, technical sessions, invited talks, and through the products and services presented in the industry exhibits.

There are 13 technical sessions covering topics like factory and materials, advanced CMOS, high voltage and RF ESD challenges, EOS/ESD case studies, device physics and modeling, ESD EDA tools, system level ESD, and ESD testing.

Download the entire program on our website, register for the event and stop losing sleep over ESD issues.

ESD Fundamentals: A six-part series on Electrostatic Discharge (ESD) prepared by the ESD Association

History & Background
To many people, Electrostatic Discharge (ESD) is only experienced as a shock when touching a metal doorknob after walking across a carpeted floor or after sliding across a car seat. However, static electricity and ESD have been a serious industrial problem for centuries. As early as the 1400s, European and Caribbean military forts were using static control procedures and devices trying to prevent inadvertent electrostatic discharge ignition of gunpowder stores. By the 1860s, paper mills throughout the U.S. employed basic grounding, flame ionization techniques, and steam drums to dissipate static electricity from the paper web as it traveled through the drying process. Every imaginable business and industrial process has had issues with electrostatic charge and discharge at one time or another. Munitions and explosives, petrochemical, pharmaceutical, agriculture, printing and graphic arts, textiles, painting, and plastics are just some of the industries where control of static electricity has significant importance.

The age of electronics brought with it new problems associated with static electricity and electrostatic discharge. And as electronic devices become faster and their circuitry smaller, their sensitivity to ESD generally increases. This trend may be accelerating. The ESD Association’s “Electrostatic Discharge (ESD) Technology Roadmap”, revised April 2010, includes “With devices becoming more sensitive through 2010-2015 and beyond, it is imperative that companies begin to scrutinize the ESD capabilities of their handling processes”. Today, ESD impacts productivity and product reliability in virtually every aspect of the global electronics environment.

Despite a great deal of effort during the past thirty years, ESD still affects production yields, manufacturing cost, product quality, product reliability, and profitability. The cost of damaged devices themselves ranges from only a few cents for a simple diode to thousands of dollars for complex integrated circuits. When associated costs of repair and rework, shipping, labor, and overhead are included, clearly the opportunities exist for significant improvements. Nearly all of the thousands of companies involved in electronics manufacturing today pay attention to the basic, industry accepted elements of static control. ESD Association industry standards are available today to guide manufacturers in establishing the fundamental static charge mitigation and control techniques (see Part Six – ESD Standards). It is unlikely that any company which ignores static control will be able to successfully manufacture and deliver undamaged electronic parts.


SEMICON West – Leti FDSOI and IOT, status and roadmap

by Scotten Jones on 07-28-2016 at 7:00 am

On Tuesday, July 12th at SEMICON West I had an opportunity to sit down with Marie Semeria, the CEO of Leti, and discuss the status and future of FDSOI. Leti pioneered FDSOI 15 years ago and has been the leading FDSOI research organization ever since.

Two years ago Leti and ST Micro demonstrated products on 28nm that are cost competitive with bulk technology. For the first time the industry could consider two approaches to leading-edge requirements: FinFET for the high end and FDSOI for low-cost and flexible IOT designs. Both technologies can cover multiple technology nodes. Since then ST has licensed 28nm to Samsung and Global Foundries, and 14nm developed with Leti to Global Foundries. Global Foundries is now preparing to introduce a 22nm technology, 22FDX, based on the Leti-ST 14nm front end with a relaxed back end for cost. FDSOI has moved out of research into foundries and an IDM, and products are coming out.

In terms of scalability:

  • 14nm – demonstrated the technology is scalable to 14nm with ST Micro.
  • 10nm – they have completed modeling and some test devices. They have a full integration scheme and they have shown the modeling matches the actual results allowing them to have confidence when they use modeling to extrapolate to the next node. Strained SOI and silicon germanium are 10nm performance boosters but even with the current substrate they can meet 10nm requirements.
  • 7nm – modeling done.
  • 5nm – beyond 7nm Leti believes that at 5nm horizontal nanowires will be the next technology.

Author's note – the following table was added to the article on 8/10/2016

Leti defines the nodes mentioned above as follows where CPP = contacted poly pitch.

Node (nm)        14    10     7     5     3
CPP pitch (nm)   80    60    50    40    30
M1 pitch (nm)    64    48    40    32    24
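
As a back-of-envelope illustration (my arithmetic, not Leti's), CPP × M1 pitch is a common first-order proxy for standard cell area, and the pitch numbers above imply a roughly 0.56-0.69x area shrink per node:

```python
# Node-to-node scaling implied by the pitch table above.
# Cell area is approximated as CPP x M1 pitch -- a rough first-order
# proxy, not a figure from the article.
pitches = {  # node (nm): (CPP nm, M1 nm)
    14: (80, 64),
    10: (60, 48),
    7:  (50, 40),
    5:  (40, 32),
    3:  (30, 24),
}

nodes = sorted(pitches, reverse=True)  # [14, 10, 7, 5, 3]
area = {n: cpp * m1 for n, (cpp, m1) in pitches.items()}

for prev, nxt in zip(nodes, nodes[1:]):
    shrink = area[nxt] / area[prev]
    print(f"{prev}nm -> {nxt}nm: {shrink:.2f}x area")
```

Each transition lands in the 0.5-0.7x range, consistent with the traditional "node" expectation of roughly halving area every generation.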

Commercially, 28nm is running at ST and Samsung and 22nm is coming up at Global Foundries. Global Foundries plans a follow-on to 22nm, Leti has assignees in Dresden working with Global Foundries, and discussions are ongoing. The exact node for the follow-on technology hasn't been announced yet (author's note – in a recent interview with Samsung they also discussed a follow-on technology to the 28nm process they are running; they want to avoid multi-patterning for cost reasons, so it sounds like a relaxed 22nm technology at Samsung; at Global Foundries my guess is something in the 12nm to 16nm range will be next).

The ecosystem for FDSOI is completely established with fabless, foundries, IP companies and IDMs all supporting it. Leti has established the silicon impulse initiative as a gateway for designers to get trained and use multi-project wafers to evaluate FDSOI. In one year more than 20 companies have joined the initiative to assess the technology. There are over 60 tape-outs running at ST, Global Foundries and Samsung.

Marie expects to see many more FDSOI products in IOT due to low energy consumption and the ability to support RF and embedded memory. They have demonstrated RF to over 300 GHz! Leti is working with ST to develop back-end memory at 28nm or 20nm for a microcontroller; the memory may be PCM or OxRAM. Leti is also working with Spin Tech on magnetic memory; they have a European research grant and are focused on embedded memory and low-voltage operation.

IOT is a very fragmented market today and requires many different types of IP; FDSOI could be the IOT platform. In automotive IOT the technology plays at the connected-device level, plus data processing and security. More and more big companies are developing their own infrastructures and clouds to manage data. SOI has good radiation hardness, which is an advantage for automotive. At DAC Leti demonstrated a new driver-assistance system using an ST microcontroller. Automotive needs low cost and global environment coverage; Leti has a probabilistic approach that avoids floating-point operations and lowers computing requirements by 100x and power by 200-400x. In IOT you have to think about the specific requirements of the application, and then you can have tremendous impact on power and cost. You don't need a lot of computing capacity if you look at the whole system. Leti is working with automotive companies to optimize the system, keeping relevant information close to the sensors and tuning it for the type of operation.

In the late nineties IBM introduced partially depleted SOI (PDSOI) in their internal processor line. I suggested to Marie that because PDSOI required an expensive SOI substrate and yet didn't reduce process costs, it was an expensive solution that created an image of SOI as unaffordable, whereas FDSOI greatly reduces process complexity, making it far more affordable (author's note – IBM's processor needs were for high performance and cost wasn't really an issue). My belief is this created a perception of SOI as high cost that FDSOI is still working to overcome, and Marie agreed with me on this.

Today, with Global Foundries poised to ramp 22FD and Samsung and ST running 28nm, FDSOI is finally poised to take off. Global Foundries and Samsung are also both planning follow-on nodes, and FDSOI has a path to continue to scale for many years.


A Chinese smartphone drill in progress

by Don Dingee on 07-27-2016 at 4:00 pm

One of our astute readers caught what looks like a major gaffe in the Linley Group mobile conference presentations from this week. It’s another indication of the speed of change in mobile markets and the instability that is giving Apple and others heartburn.

Here’s the chart in question:


The point of contention is who, exactly, are the China tier 1 vendors? Linley lists Huawei, Lenovo, Xiaomi, Yulong, and ZTE. As it turns out, that is outdated info according to the IDC Worldwide Quarterly Mobile Phone Tracker:


Never heard of OPPO or vivo? I was flipping channels last night and saw some reality show where the participants were holding an OPPO phone. It turns out both brands are owned by BBK Electronics, and there’s a third brand coming soon called imoo. It’s insane how quickly these Chinese brands are appearing and disappearing on the top 5 mobile list, although OPPO and vivo have been out there for several years quietly building.

(For those not familiar with the title reference, a “Chinese fire drill” was a popular game among teenage drivers out on the town with their friends, where everyone would exit the car at a stoplight and run around it until the light turned green, and whomever was nearest the driver door jumped in and took control. Maybe we need to call it the “American fire drill” now.)

The importance of this list in the Linley argument is who is or may soon be doing their own LTE chipsets – a bullet on their slides says the top 3 plus “internal” make up 98% of mobile. Qualcomm still owns the high end, and MediaTek has surpassed the internal vendors: Apple, Huawei, and Samsung combined. We know Xiaomi has their soon-to-release “Rifle” chipset.

The days of premium mobile brands and high-end chipsets may be coming to a close, however, at least in terms of who makes the most money. Even Linley says that most of the remaining mobile growth is at the low end in developing countries, and that MediaTek and Spreadtrum are the primary beneficiaries of that trend.

In response, Qualcomm continues to push their offering lower, and made a compelling argument for a scalable LTE roadmap:


That’s why Qualcomm was all lathered up in their recent earnings report about unnamed Chinese companies not counting chips correctly – these numbers are starting to get pretty big. I suspect we’ll see more change in this in the coming quarters; the landscape has moved substantially since we published “Mobile Unleashed” about 8 months ago.


When Waze Comes to Town

by Roger C. Lanctot on 07-27-2016 at 12:00 pm

Waze’s Connected Citizens program, rolled out in October of 2014, was envisioned as a means for cities to create a two-way data exchange between Waze users and cities for communicating urgent traffic information as well as to facilitate the analysis of traffic patterns. In other words, Waze wanted to be part of the solution to the traffic woes plaguing cities all over the world.

Connected Citizens Fact Sheet: http://tinyurl.com/h6e7ste

The concept is clever and forward-thinking – nothing similar has been publicly pursued by competitors TomTom and HERE, which have built their businesses around auto makers, transportation departments and enterprise applications. Waze is unique as a business-to-consumer, application-based and crowdsourced traffic solution. The application has become so popular, in fact, that in recent years it has become part of the problem it was intended to solve.

Launched with 10 cities around the world, the program now claims 63 partners including city, state and country government agencies, nonprofits and first responders. Waze has become the de facto traffic and navigation app of choice in many cities where it is available. This pervasiveness has introduced a Waze Factor into local traffic management efforts.

From Washington, DC, to San Francisco, Los Angeles and Sao Paulo, Waze users (Wazers?) are following Waze’s traffic-influenced route guidance slavishly into secondary and tertiary streets not built for nor accustomed to large volumes of traffic. This scenario has forced potential Connected Citizen partners to invite Waze in for a chat to better understand how the app is influencing local traffic and how local authorities can work with Waze to find coping strategies.

In some instances, Waze has offered to adjust its algorithms to shift traffic away from troublespots identified by local agencies. But Waze’s willingness and ability to make these adjustments has exposed the fact that Waze isn’t so much basing its guidance on predictive models as it is sending its users to the nearest open routes.

The latest wrinkle is a lawsuit being brought by a toll road operator in Israel claiming that Waze is deliberately steering its users away from or at least not offering the option of using the company’s Fast Lane on Route 1 into Tel Aviv. (Waze Sued over Toll Road Rerouting – http://tinyurl.com/zr65gcv)
Depending on the outcome of the lawsuit, we may now be able to add to that list Waze's willingness to put its thumb on the routing scales for reasons known only to Waze. What are Waze's other shortcomings?


  • Multi-minute lag in identifying incidents
  • Multi-minute lag in identifying the conclusion of an incident
  • Distracting pop-up offers and notifications
  • Preference for secondary and tertiary roads
  • Guidance based on real-time data rather than predictive modeling
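
To make the last point concrete, here is a toy routing sketch (entirely illustrative; this is not Waze's algorithm). A router that optimizes over current travel times diverts onto side streets the moment the highway jams, while one using typical (predicted) travel times would stay the course:

```python
import heapq

def shortest_path(edges, weight, src, dst):
    """Dijkstra over directed (u, v) edges with a pluggable weight function."""
    graph = {}
    for u, v in edges:
        graph.setdefault(u, []).append(v)
    dist, prev = {src: 0}, {}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v in graph.get(u, []):
            nd = d + weight(u, v)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    # Walk the predecessor chain back from the destination.
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

# A = home, D = office; B is the highway, C is a residential side street.
edges = [("A", "B"), ("B", "D"), ("A", "C"), ("C", "D")]
current = {("A", "B"): 10, ("B", "D"): 2, ("A", "C"): 4, ("C", "D"): 4}  # highway jammed now
typical = {("A", "B"): 3,  ("B", "D"): 2, ("A", "C"): 4, ("C", "D"): 4}  # usual conditions

print(shortest_path(edges, lambda u, v: current[(u, v)], "A", "D"))  # ['A', 'C', 'D'] -- side street
print(shortest_path(edges, lambda u, v: typical[(u, v)], "A", "D"))  # ['A', 'B', 'D'] -- highway
```

Purely real-time weights push every user onto the residential route the instant the highway slows, which is exactly the neighborhood-flooding behavior cities are complaining about.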

As always in the case of Waze, the key caveat is: use whatever traffic/navigation app works best for you.

The Connected Citizens outreach to cities is a positive step to integrate local traffic and incident reports into the Waze app. The effort can certainly help improve traffic event identification.

The new traffic reality is that Waze's influence has become a factor in the very problem it is trying to solve. City, state and federal traffic agencies around the world are wise to heed Waze. The question remains whether Waze is a weed or a virus infecting the traffic landscape, or whether it will become the dominant and preferred means of communicating official traffic information to drivers.

Car makers, navigation software designers, system integrators and traffic information companies will do well to weigh Waze's influence and its clever marketing efforts. Waze has a growing roster of competitors gathering vehicle probe data, including transportation network companies and insurance companies running usage-based insurance programs. The question that remains: who are you going to call next time you're in a jam?

Roger C. Lanctot is Associate Director in the Global Automotive Practice at Strategy Analytics. More details about Strategy Analytics can be found here: https://www.strategyanalytics.com/access-services/automotive#.VuGdXfkrKUk


Qualcomm Demonstrates First Sub-6 GHz 5G New Radio Prototype

by Patrick Moorhead on 07-27-2016 at 7:00 am

You may not know this, but on-time delivery at expected performance with 5G is being determined right now in engineering labs across the globe. This is true even though end user deployments won’t be for another three to four years. You see, the ground is being laid right now with manufacturers, operators and standards bodies. With 5G being so far away, there is still time for companies like Ericsson, Huawei, Intel, Nokia, Qualcomm and Samsung Electronics to stake their claims on 5G. Qualcomm, at Mobile World Congress Shanghai, will be showing the first (to my knowledge) public demonstration of 5G NR at sub-6 GHz in a prototype platform for development and testing. This may not sound like a big deal, but it is. Let me explain with some background first.


Qualcomm 5G NR sub-6 GHz prototype system and trial platform (Photo credit: Qualcomm)

5G has many use cases and ways to use spectrum and data
Some companies have already performed demonstrations of certain 5G NR technologies like mmWave, some with very high throughput figures attached to relatively simple demonstrations of the technology. However, 5G is not just one multi-gigabit, multi-gigahertz new radio (NR) technology; it is much more complex than that and incorporates new ways of handling different types of spectrum and data.

5G needs to deal with flavors ranging from low-bandwidth fixed IoT installations all the way up to high-frequency, high-bandwidth infrastructure and even ultra-low-latency applications for drones and cars. Oh, and smartphones, too.


5G has different use cases and ways to incorporate spectrum and data (Image credit: Qualcomm)

5G encompasses wide variations of spectrum

This means that just having 5G mmWave technology and a 4G fallback will likely not be enough to fully address all the different types of connectivity expected to utilize 5G. Specifically, there is a pretty broad gap between the applications of mmWave and low frequency 4G (sub-1 GHz) that will need to be filled for the majority of 5G applications. This is actually where many expect that the “meat” of 5G’s usage will be experienced by the user. That’s why Qualcomm’s demonstration of a new sub-6 GHz 5G New Radio (NR) prototype at Mobile World Congress Shanghai is so important.


5G encompasses wide variations of spectrum (Image Credit: Qualcomm)

Qualcomm’s 5G NR sub-6 GHz prototype
Qualcomm’s 5G NR sub-6 GHz prototype system is not just a prototype for internal use; like the mmWave prototype they showed in late 2015, it also serves as a trial platform for their partners and for 5G design validation and 3GPP standard contribution. The testbed incorporates both ends of the link, not just the user equipment. This is extremely important for the development of 5G because it means that partners are able to work alongside Qualcomm and adopt the same technologies on the way to 5G. Because the first formal 3GPP 5G standard isn’t expected to be finalized until 2018, it is important for partners to work together and update their testing platforms to track the movement of the standard.

As such, Qualcomm’s sub-6 GHz platform will be extremely important for addressing the spectrum between current 4G LTE Advanced deployments and the expected mmWave deployments above 6 GHz, like Qualcomm’s tested 28 GHz. The prototype currently operates at 3-5 GHz but is designed to address frequencies below 3 GHz as well.

Benefits of 5G NR below 6 GHz include reusing macro cells
The benefit of operating 5G NR at sub-6 GHz is that you are able to incorporate all of the new 5G advanced wireless technologies while also reusing existing macro cell sites. These macro sites can theoretically transmit up to 1.7 km at 4 GHz using Massive MIMO, a 5G technology that allows for significantly increased capacity and throughput. Qualcomm is able to deliver multi-Gbps bandwidth using 100+ MHz of spectrum at very low latency (sub-10 ms), enabled by the new self-contained TDD sub-frame and the use of sub-6 GHz spectrum, which is crucial for latency-sensitive 5G applications like remote vehicles and augmented reality.
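
As a rough sanity check on those figures, the Shannon limit shows why both a wide channel and spatial multiplexing are needed to reach multi-Gbps; the bandwidth, SNR, and stream count below are my illustrative assumptions, not Qualcomm's numbers:

```python
import math

def shannon_capacity_bps(bandwidth_hz, snr_db, mimo_streams=1):
    """Ideal Shannon capacity; real links achieve some fraction of this."""
    snr_linear = 10 ** (snr_db / 10)
    return mimo_streams * bandwidth_hz * math.log2(1 + snr_linear)

# 100 MHz of sub-6 GHz spectrum at a healthy 20 dB SNR, single stream:
single = shannon_capacity_bps(100e6, 20)
# Massive MIMO spatial multiplexing (assume 4 streams to one user):
mimo = shannon_capacity_bps(100e6, 20, mimo_streams=4)

print(f"{single / 1e9:.2f} Gbps single-stream, {mimo / 1e9:.2f} Gbps with 4 streams")
```

A single 100 MHz stream tops out well under 1 Gbps at that SNR; stacking spatial streams via Massive MIMO is what pushes the link into multi-Gbps territory.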

Qualcomm first to publicly demonstrate 5G NR
This is the first time anyone has, to my knowledge, publicly shown 5G New Radio operating in the 3-5 GHz spectrum. This leadership from Qualcomm isn’t particularly surprising. Let’s run down the facts. Qualcomm:

  • led in 4G and arguably had a two-year advantage on every other vendor
  • delivered the first, and still the only, 1 Gbps smartphone modem and RF front-end for 4G
  • was first with smartphone MU-MIMO Wi-Fi and smartphone 802.11ad Wi-Fi

This matters for more than chest-beating because many aspects of the technologies above are integrated into 5G. This is not to forget all of the mission-critical, lower-bandwidth applications that 5G will enable, but it does give consumers an easier metric of overall performance.

Qualcomm Research’s 5G NR prototypes will allow them to continue to develop and revise their designs as the 3GPP study items are decided. Eventually, this revision process will land Qualcomm and their partners at the finalized 5G standard, hopefully sometime in 2017 when 3GPP Release 15 is announced.

I’m looking forward to seeing the competitive response as rising waters lift all boats.



More Details on the Smartphone and Wearables Market

by Daniel Nenni on 07-26-2016 at 4:00 pm

The Linley Mobile Conference opened today with a nice keynote overview of the mobile market evolution. In the media I see a lot of doom and gloom articles about smartphones and wearables, but if you look at it closely you will see a natural growth curve evolving.

The mobile semiconductor market is evolving as vendors split their focus between the massive smartphone market and the rapidly growing market for wearables such as fitness bands and smartwatches. The keynote discussed the end products that are driving new demand and the chip-level products that support them. Linley also covered technology trends in these mobile devices and provided market data, including 2015 market share and updated forecasts.


One of the most interesting observations Linley made was on the mobile application processor consolidation driven by systems companies. Apple started it all with their A4 SoC, which used the ARM Cortex-A8 core and Imagination Technologies PowerVR GPU. In the beginning, standard IP was used for the majority of SoCs, but now custom IP is prevalent. For example, Apple licenses the ARM architecture and creates their own custom cores. Apple does not, however, design its own GPU or modem yet, but it is coming, absolutely.

Side note: In 2013 Apple turned the SoC market upside down with the first 64-bit SoC based on the TSMC 20nm platform. This is well documented in our book “Mobile Unleashed”, but to make a long story short, Qualcomm first belittled Apple’s 64-bit masterpiece as a “marketing ploy” only to follow with a knee-jerk reaction that would send their stock and employee count tumbling. QCOM has since recovered with their latest 14nm Snapdragon 820/821 offering, which is best in class, for now.

Samsung and Huawei are also doing their own custom SoCs, which means the top three smartphone vendors do not have to buy merchant SoCs from the likes of Qualcomm and MediaTek. Xiaomi is the number four smartphone vendor, and I was told that they also licensed the ARM architecture and may stop using Qualcomm and MediaTek SoCs in the near future. If you add up the smartphone market share of Samsung, Apple, Huawei, and Xiaomi you will get a number greater than 90%, which is one reason why Intel, Marvell, and others are getting out of the merchant SoC business.


Wearables mostly use standard processors, with the exception of the Apple Watch, which uses recycled 28nm iPhone 5 technology. Most Android watches use Qualcomm Snapdragons, the Fitbit uses STMicro MCUs, and the Xiaomi Mi Band uses a Cypress part.

Given that the smartwatch dominates the wearables market and cost is critical, look for more custom silicon from systems companies (Apple, Samsung, Xiaomi, etc.) in the next generation of smartwatches and even fitness bands. I would also be willing to bet that FD-SOI will be used for wearables in the not-so-distant future given its FinFET-like performance and power efficiency at a much lower cost. FD-SOI today also has superior RF/analog capabilities over FinFETs, especially if you are looking at the GF 22FDX process (read more about 22FDX HERE).


The Appeal of a Multi-Purpose DSP

by Bernard Murphy on 07-26-2016 at 9:45 am

When you think of a DSP IP, you tend to think of very targeted applications – for baseband signal processing or audio or vision perhaps. Whatever the application, sometimes you want a solution optimally tuned to that need: best possible performance and power in the smallest possible footprint. These needs will continue, but there’s growing interest in more flexible solutions to address multiple signal processing objectives through common functions and to support evolving requirements.

Automotive and IoT markets in particular are driving this demand for flexibility. ADAS, infotainment and sensor fusion require multiple applications processing multiple data types, in floating point for codecs for example, in fixed point for other applications, supporting multiple word sizes, signed and unsigned. But systems development teams don’t have armies of DSP software developers ready to develop assembly code and floating point libraries per signal processing function as needs and standards change.

What’s more, there’s increasing pressure to easily port existing software to DSP functions, with the expectation that the compiler and the platform will take care of optimizations such as vectorization. In some ways, these multi-use DSP applications increasingly demand use-models we routinely expect in general-purpose computing, while still expecting high performance.

Cadence has developed the Tensilica Fusion G3 DSP specifically to address these needs. In switching to a multi-purpose platform, customers may be willing to accept some performance compromise, but not a lot. So Cadence has optimized the architecture to give best possible performance, along with flexibility. The G3 offers single and double precision floating point, along with fixed point and a range of word sizes. It has a finely-tuned high-performance architecture, balancing MAC, load/store and ALU functions. The Tensilica group has also added a set of specialized operations on top of the base Xtensa instruction set architecture, to support optimizations for specific applications.

The G3 provides its own DMA controller and supports multi-banking for memory, helping you get data in and out as fast as possible. And naturally, since you’re going to be using this for multiple purposes, it supports multi-core usage. Debug is supported through Xtensa Xplorer, and the G3 also connects to CoreSight debug and trace.


For DSP software developers, Cadence claims best-in-industry auto-vectorization through the compiler and an extensive library of IIR, FIR, FFT, 1D and 2D transform, math, statistics and other functions. This means it should be easy to port C or MATLAB code developed for other architectures and still get high performance, without needing to dive down into assembly code. (You can still go to the assembly level if you want, but it’s less likely you will run into that need.)
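
As a sketch of the kind of kernel such a library provides, here is a reference FIR filter in plain Python (purely illustrative; the Tensilica library is optimized C/assembly, and this is not its API). The inner multiply-accumulate loop is exactly the pattern a DSP compiler tries to auto-vectorize onto MAC units:

```python
def fir(x, coeffs):
    """Reference FIR filter: y[n] = sum_k coeffs[k] * x[n-k].
    The inner multiply-accumulate loop is what gets mapped onto
    a DSP's vector MAC units by an auto-vectorizing compiler."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k, c in enumerate(coeffs):
            if n - k >= 0:  # ignore samples before the start of the signal
                acc += c * x[n - k]
        y.append(acc)
    return y

# A 4-tap moving average smooths a noisy step input:
print(fir([0, 0, 4, 4, 4, 4], [0.25, 0.25, 0.25, 0.25]))
# [0.0, 0.0, 1.0, 2.0, 3.0, 4.0]
```

Porting code like this to the G3 would mean handing the C equivalent to the compiler, or replacing it with the corresponding optimized library call, rather than hand-writing assembly.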

The G3 was developed in close partnership with a customer who recently taped out their first G3-based design. While this is obviously a new release, Cadence is also seeing interest from other customers, especially around automotive radar applications. They expect the G3 will find a home in many applications looking for best-in-class, if not bleeding-edge, performance with reduced software development costs (and schedule) and a higher degree of future-proofing. In audio, they see demand in surround sound and active noise/echo cancellation (both of which require floating point), fingerprint recognition, communications (requiring complex floating point), image processing (scatter/gather on 8-bit data) and radar (requiring both floating point and fixed point).

The value proposition is pretty obvious. As DSPs become the go-to solution for more functions in a design, the industry is demanding more cost-effective solutions that are easier to adopt, easier to maintain and easier to adapt as requirements change. Simplifying total development costs through a common platform, without significant compromise in performance, is an obvious way to get there. You can learn more about the Fusion G3 HERE.



SMART sensors with OTP memory for the IIoT

by Don Dingee on 07-25-2016 at 4:00 pm

Long before IoT became the buzzword, the industrial automation community had been talking about “smart sensors” since the mid-1990s. The impetus for those discussions was IEEE 1451, a family of standards for adding intelligence and wireless communications to sensors so they could be incorporated into field networks.


Coming Up Next: ARM IoT ASICs!

by Daniel Nenni on 07-25-2016 at 12:00 pm

The history of ASICs is well documented in our book “Fabless: The Transformation of the Semiconductor Industry”, which illustrates the earliest forms of design start driven collaboration. The history of ARM is well documented in our book “Mobile Unleashed”, which illustrates an entire company culture based on design start driven collaboration.

That brings us to where we are today, with hundreds if not thousands of system companies cobbling together IoT solutions using off-the-shelf chips. Amazon Echo, Nest Dropcam, and Skybell are examples in my home.

The next phase of this transformation is what we call the IoT ASIC. Yes, the Internet of Things is a very fragmented market, but it is also ultra-competitive, so you will not survive by cobbling together systems. Take Apple, for example (Chapter 8 in “Mobile Unleashed”): they went from cobbler to ASIC to full-blown fabless semiconductor powerhouse in order to control their competitive destiny.

    The multi-billion-dollar question here is: Who is going to deliver the next big IoT thing? The answer of course is just about anybody, thanks to ARM and ASIC providers like Open-Silicon. In fact, just last month ARM selected Open-Silicon to join the ARM® Approved Design Partner program in conjunction with the ARM DesignStart™ portal:

    “The new ARM Approved Design Partner program enables a powerful and extensive network of global design houses,” said Chris Shore, training product manager, ARM. “Open-Silicon has a successful track record in custom SoC design and manufacturing services as well as ASIC projects, and it has made significant investments in its ARM-based product services roadmap. As a member of the program, Open-Silicon can now play a valuable role in helping to enable the easy and rapid development of new ARM-based devices.”

    This program builds on the ARM DesignStart™ portal, which offers SoC designers free access to ARM Cortex®-M0 processor IP for design, simulation and prototyping with the option to buy a simplified and standardized $40,000 fast track license. The design and ASIC houses selected to join the ARM Approved Design Partner program will provide expert support during development and manufacturing. They are experienced in developing custom SoCs using ARM processor IP, and have successfully completed a stringent ARM auditing process to ensure they meet the highest quality standards.


    Open-Silicon’s selection for the ARM Approved Design Partner program validates the company’s investments in its ARM TCoE (Technology Center of Excellence), established in 2011, and its recent Spec2Chip IoT ASIC Platform, which was developed for low risk and reduced schedule custom SoC development. This scalable platform is based on the ARM Cortex-M processor, TrustZone® CryptoCell hardware-accelerated security technology and ARM mbed™ SDK. This platform allows IoT ASIC designs to be evaluated at the system level.

    “ARM and Open-Silicon share the same vision for simplifying the path for system developers to deploy IoT platforms,” said Vasan Karighattam, VP of engineering, Open-Silicon. “Through this collaboration, both companies are paving the road to IoT innovation by facilitating the development of highly-differentiated custom SoC designs.”

    About Open-Silicon

    Open-Silicon transforms ideas into system-optimized ASIC solutions within the time-to-market parameters desired by customers. The company enhances the value of customers’ products by innovating at every stage of design — architecture, logic, physical, system, software, and IP — and then continues to partner to deliver fully tested silicon and platforms. Open-Silicon applies an open business model that enables the company to uniquely choose best-in-industry IP, design methodologies, tools, software, packaging, manufacturing, and test capabilities. The company has partnered with over 150 companies ranging from large semiconductor and systems manufacturers to high-profile start-ups, and has successfully completed over 300 designs and shipped over 120 million ASICs to date. Privately-held, Open-Silicon employs over 250 people in Silicon Valley and around the world. www.open-silicon.com


    Formally Crossing the Chasm

    Formally Crossing the Chasm
    by Bernard Murphy on 07-25-2016 at 7:00 am

    Formal verification for hardware was stuck for a long time with a reputation of being interesting but difficult to use, and consequently limited to niche applications. Jasper worked hard to change this, particularly with their apps for JasperGold, and I have been seeing more anecdotal evidence that mainstream adoption is growing. So I thought it would be interesting to ask Pete Hardee (marketing and product management for Jasper) what has changed in the industry, and why.

    Cadence now treats formal as one of the 4 legs of their verification strategy. They arguably have the market-leading solution in Jasper, but they wouldn’t make it a top-level component if the demand wasn’t there, so what’s different? According to Pete, virtually all of the top semis doing RTL design are now using formal, as are a lot of the fast-growing companies. And formal usage is growing within these companies. Adoption alone would suggest it’s no longer a niche application.

    The reason for this change all comes down to coverage. Full dynamic SoC coverage is already well out of reach (because of size, complexity, 3rd-party IP, software, …), but you still have to have high confidence by signoff. So verification engineers look for different ways to build confidence.

    One way is through connectivity checks – separate questions of whether the IPs function and communicate correctly from whether you have connected them together correctly. Can I prove that all the IP in the design are hooked up per a specification I am willing to provide (usually a connectivity spreadsheet)? If you can completely prove this aspect of the design is correct, you are able to signoff a whole class of functional checks more completely than would ever be possible in simulation. This makes formal checks a natural approach when they’re sufficiently simple to use. Apps make them simple and that is growing adoption in verification teams.
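
    At its core, a connectivity check reduces to comparing the design's actual connections against the spreadsheet spec. Real flows prove this formally over the RTL, but the idea can be sketched structurally in a few lines of Python; all block and port names below are hypothetical, purely for illustration:

```python
def check_connectivity(netlist, spec):
    """Toy structural connectivity check.

    netlist maps (block, port) -> (block, port) as extracted from the design;
    spec is the connectivity spreadsheet as a list of (src, dst) pairs.
    Returns a list of (src, expected_dst, actual_dst) mismatches.
    """
    return [(src, dst, netlist.get(src))
            for src, dst in spec
            if netlist.get(src) != dst]

# Hypothetical example: one correct connection, one mis-wired one.
netlist = {("cpu", "axi_m0"): ("noc", "s0"),
           ("dma", "axi_m0"): ("noc", "s2")}   # spec expects s1
spec = [(("cpu", "axi_m0"), ("noc", "s0")),
        (("dma", "axi_m0"), ("noc", "s1"))]
print(check_connectivity(netlist, spec))
# -> [(('dma', 'axi_m0'), ('noc', 's1'), ('noc', 's2'))]
```

    The point of doing this formally rather than structurally is that the proof holds through tie-offs, muxed paths and clock/reset logic, not just direct wires.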

    A different class of problem is proving certain things cannot happen – something essentially impossible to prove in simulation for any reasonable-size design. A good example is proving that an encryption key cannot leak out to an insecure IP (or an IP being used in insecure mode), equally that it can’t be overwritten and that it remains secure even in the presence of faults. This isn’t an area where “reasonably” confident is an acceptable signoff, so you have to use formal methods.

    Power management is a nightmare for coverage because you take an already massive mission-mode state space and exponentially expand it in switching between all the possible power variants. You can gain some confidence through dynamic verification, but complete proof that there are no gotchas in switching again requires formal, in this case supported by an app.

    Pete also noted that fear of assertions and constraints seems to be on the decline. Not every property you feel you must cover can be pre-packaged in an app; this used to be the point where you’d ship the problem off to your team of verification PhDs. Not so much anymore, apparently. Pete guesses that some verification teams bit the bullet on training (and maybe a little coercion); engineers at hot companies aren’t afraid of formal, and real expertise in this area looks increasingly valuable on a resume. “That stuff’s too hard” doesn’t seem to be something you want to be heard saying anymore.

    Getting to high coverage near the end of the verification cycle is another driver. We all know that the last mile in coverage is really hard. Maybe that’s because a lot of the uncovered cases are unreachable. Proving that is a real time-saver – if you formally can’t reach a state, you can safely drop it from coverage. And if the app proves you can reach it, it will provide you with an example that will help you build a test. Reachability analysis is an exploding area in formal because getting to maximum coverage is a must-have.
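
    To see why proving unreachability is safe, here is a toy illustration assuming an explicit-state FSM; real reachability apps work symbolically on the RTL, and the state names below are made up:

```python
from collections import deque

def reachable_states(init, transitions):
    """Exhaustive BFS over an explicit state graph.
    transitions maps a state to the set of its successor states."""
    seen, frontier = {init}, deque([init])
    while frontier:
        s = frontier.popleft()
        for nxt in transitions.get(s, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

# Hypothetical 4-state FSM: DEAD has outgoing edges but no incoming ones,
# so no simulation could ever hit it - its coverage bin can be excluded
# rather than chased with more stimulus.
fsm = {"IDLE": {"RUN"}, "RUN": {"IDLE", "DONE"},
       "DONE": {"IDLE"}, "DEAD": {"IDLE"}}
reach = reachable_states("IDLE", fsm)
unreachable = set(fsm) - reach
print(unreachable)   # -> {'DEAD'}
```

    Because the search is exhaustive, absence from `reach` is a proof, not a sampling result – which is exactly the guarantee simulation can never give.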

    Unsurprisingly, safety is driving more interest in formal, since safety is another area where “reasonable” coverage is not an acceptable goal. ISO26262 demands traceability of requirements, which fits well with formal and its well-documented properties and constraints. In fault analysis, formal helps both with efficiency (why test a fault if it can’t be observed?) and with completeness (maybe a given fault-sim didn’t propagate a fault to an output or a checker, but would that always be the case?). Demonstrating safety to ASIL-D requirements in ISO26262 is again a must-have – expect automotive safety to drive more growth for formal in multiple areas.


    Pete added that he’s also seeing growth in exploring design state-space for bug-hunting. This is an interesting domain where deep state-space bugs can be missed by constrained-random and you can’t conclusively catch the bug with direct formal if the bug is too deep. JasperGold has engines which support a concept they call “elastic bounded model checking”, letting you do a guided search progressively deeper into state space while skipping states you don’t feel are of interest. One user group paper reported multiple critical bugs found at 100-400 cycles deep and one case at nearly 3000 cycles deep, far beyond reasonable bounds for conventional model checking.
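
    For readers unfamiliar with bounded model checking, the core loop can be sketched explicitly; real BMC engines encode each unrolling step as a SAT/SMT query over the RTL rather than enumerating states, and the overflowing-counter example below is hypothetical:

```python
def bmc(init_states, step, bad, max_bound):
    """Explicit-state bounded model check: breadth-first unrolling up to
    max_bound steps, returning the first depth at which `bad` holds,
    or None if no counterexample exists within the bound."""
    frontier = set(init_states)
    seen = set(init_states)
    for depth in range(max_bound + 1):
        if any(bad(s) for s in frontier):
            return depth
        frontier = {t for s in frontier for t in step(s)} - seen
        seen |= frontier
    return None

# Hypothetical 3-bit counter: the "bad" state only appears 5 steps deep,
# so any bound below 5 would miss it.
hit = bmc({0}, lambda s: {(s + 1) % 8}, lambda s: s == 5, max_bound=10)
print(hit)   # -> 5
```

    The “elastic” part of JasperGold’s approach, as described, is being able to push this search progressively deeper while skipping uninteresting states, rather than paying for every cycle of a fixed unrolling.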

    Hopefully, if you’ve stayed with me this far, you’ll agree that formal (especially in JasperGold) is covering a lot of bases. It’s no longer a niche application for specialists; it really has become a primary pillar of verification. I found that a really useful way to understand how JasperGold is being used is to check out papers from the user group (JUG) conferences. You can get to papers from the last conference HERE (you’ll need an account with Cadence.com).

    More articles by Bernard…