
OpenVX Brings Power-Efficient Vision Acceleration to Mobile
by Eric Esteve on 01-06-2014 at 8:44 am

OpenVX is the next open standard to be launched by the Khronos Group, a consortium building a family of interoperating APIs for portable and power-efficient vision processing. A look at the OpenVX participant list shows that the major chip makers (Broadcom, Qualcomm, TI, Intel, Nvidia, Renesas, Samsung, ST and Xilinx), as well as major IP vendors like ARM and CEVA, have joined the Khronos Group. Before discussing the benefits of OpenVX, it’s worth commenting on this “Why do we NEED Standards?” slide:

These four bullets may look like obvious statements, but it’s worth remembering that the semiconductor industry needs standard interfaces for interoperability. A widely adopted standard interface grows the market opportunity until it turns into a mass market, so devices can be built cheaply enough to attract even more customers. This is a kind of virtuous cycle: industries cooperate to build a market (naïve?)… and then compete, which is more realistic. And because today’s largest market is by far the mobile industry, OpenVX has been developed to support computer vision for mobile and embedded devices.

Computer vision is extremely data intensive, and OpenVX brings sophisticated image stream generation:

  • Advanced, high-frequency burst control of camera and sensor operation
  • Portable support for more types of sensors – including depth sensors
  • Tighter system integration – e.g. sync of camera and MEMS sensors

OpenVX allows developing advanced imaging and vision applications, such as:

  • Image enhancement,
  • Object tracking and detection, and
  • Image manipulation

In fact, OpenVX can be implemented on CPU, GPU or DSP cores, but the goal is not only to accelerate performance but also to drastically minimize power consumption, as the primary target is mobile. Thus, implementing OpenVX on a CEVA DSP core is probably the best option, decreasing power consumption by 10X compared with a CPU. Remember that CEVA supports the Android Multimedia Framework (AMF), a system-level software solution allowing the offloading of multimedia tasks from the CPU/GPU to more efficient application-specific DSP platforms. The next picture illustrates the OpenVX development flow, in this example for Android OS using CEVA AMF:

Benefits enabled by AMF include:

  • Multimedia tasks are abstracted from the CPU and are physically running on the DSP. Furthermore, tasks can be combined (“tunneled”) onto the DSP – saving data transfer, memory bandwidth and cycles overhead on the CPU
  • Utilization of the PSU (Power Scaling Unit) available for CEVA DSPs to significantly lower power consumption further, when running multimedia tasks
  • Easy activation of CEVA-CV computer vision (CV) software library for the development of vision-enabled applications
  • Easy activation of yet to be officially released OpenVX pre-optimized software library

A chip maker developing ICs for mobile has to search for differentiation, and the leaders in the mobile market appear to be the companies investing strongly in differentiated solutions, as we have explained in this blog before. Is OpenVX adoption a drawback for differentiation? In fact, it’s not: as you can see in the OpenVX flow above, the OpenVX standardized APIs are available in open source. Thus, it’s possible to extend the OpenVX library to create differentiation while keeping the benefits of a standardized interface. This is also true for CEVA DSP customers: they benefit from the OpenVX standard interface supported by CEVA, and from the drastic reduction in power consumption that comes with using a DSP core instead of a CPU or GPU, and they can still build a differentiated solution by extending the OpenVX library with specifically developed APIs.

Eric Esteve from IPNEST


More Articles by Eric Esteve …..


The Future of Intel
by Scotten Jones on 01-06-2014 at 1:30 am

There have been a lot of articles and discussion on SemiWiki about Intel. These articles have all been written from the perspective of an outsider commenting on what Intel is doing, or should or shouldn’t be doing. I thought it would be interesting to take a look at how Intel got to where they currently are, what their current strengths and weaknesses are, and what Intel’s options are. I am not an Intel insider, nor do I profess to have inside knowledge of their situation. I am, however, someone who has worked in the semiconductor industry for over 30 years and run a semiconductor division, and I think Intel’s options are fairly clear.

Historical perspective

In 1971 Intel invented the microprocessor. As PCs became the driver for the semiconductor industry, the microprocessor grew into one of the largest and most profitable segments of the semiconductor industry. Intel built and maintained a market-leading position in microprocessors and rode that success to become the largest and most profitable semiconductor company in the world.

The tremendous profits generated by Intel’s microprocessor business funded the construction of a network of state-of-the-art wafer fabs and the development of what I believe to be the most advanced IC logic technology in the world.

I know there is some debate about whether Intel really has the most advanced logic process technology, but I think on a fundamental level they do, and my argument is pretty straightforward. In 2007 Intel introduced high-k metal gates (HKMG) at their 45nm node. It wasn’t until approximately 1-1/2 nodes and 3 years later that other logic IC producers introduced HKMG. In 2011 Intel introduced FinFETs at their 22nm node. Other leading logic producers are expected to introduce FinFETs later this year, three years after Intel. By the time the first-generation 14/16nm FinFETs are introduced by other leading producers, Intel will already be producing their second-generation 14nm FinFET technology, which I believe will include another major new innovation, 2 to 3 years ahead of everyone else. These are fundamental innovations, not convoluted with design and operating system issues.

The changing competitive landscape

While Intel has been dominating the microprocessor business, the semiconductor landscape has been changing under their feet.

The foundry/fabless models have been steadily developing in size and capability. Many fabless companies have sprung up focused on system-on-a-chip (SOC) development. SOCs are perfect for small form factor products such as smart phones and tablets, and companies such as Qualcomm, Broadcom, MediaTek and even Apple (who now designs their own SOCs) dominate the SOC market for communications. I don’t think Intel was blind to this developing market; Intel has invested billions of dollars in communications over the years. What I think is the key issue is that inside Intel, communications has always been second priority to microprocessors. All you have to do is look at Intel’s process technology: at each new node they first introduce the microprocessor version of a process, and then around a year later they introduce the communications version. You will know they are really serious about communications when a new node is introduced with the communications version of the process coming out first.

The other thing that I think has happened is that Intel was surprised by how rapidly tablets and smart phones have grown and how they have cannibalized notebook computer sales (in fairness to Intel I think this has surprised most everyone). In 2012 unit sales of PCs declined for the first time and in 2013 they declined again. In my opinion PC unit sales will likely continue to be flat to down for the next several years. At the same time, although server sales are growing to support all of these connected devices, unit volumes for servers are far lower than for notebooks and Intel is facing increasing competition from the ARM camp for server business. I suspect Intel may also have expected that Ultrabooks would help fend off the tide of tablets but so far that hasn’t happened.

I do want to add a side note here that the Microsoft Surface Pro 2 tablet uses Intel microprocessors. This Christmas I had no problem getting an iPad Air for my wife but couldn’t find a Surface 2 anywhere. The Surface 2 uses an NVIDIA Tegra processor, but the Surface Pro 2, which uses Intel microprocessors, was also in short supply. This could be due to a lot of reasons, including Apple just planning better, but I thought it was interesting.

Intel’s current position

Intel is still the world’s largest semiconductor company with industry leading margins and lots of cash. They have industry leading process technology and a huge network of state-of-the-art fabs. They have a strong brand, strong marketing organization and sales channels and lots of engineers.

On the other hand, only a few quarters ago there were widespread reports that utilization at Intel’s fabs was only 60%. In recent financial announcements Intel has claimed 80%, but that is still lower than they would like. Intel also has their 14nm process coming on-line in the ramping Fab 11x, and they have Fab 42 ready to turn on for 14nm production. Intel needs to find a way to fill these fabs and keep them full, or they will be facing a serious cost crisis and, longer term, have difficulty funding their continued lead in process technology. In short, Intel needs to find a way to resume growing their business.

Intel’s options

The way I see it Intel has three main options:


  • Increase Microprocessor Sales – Perhaps the simplest option from Intel’s perspective would be to rev up microprocessor sales. If Ultrabooks and Microsoft Surface Pro 2s were runaway successes, this would fix Intel’s problems pretty quickly. To date, the success of these two products does not appear to be sufficient to fix Intel’s problems. That could change, but it strikes me as unlikely. It is also something that is largely outside of Intel’s control.
  • Succeed in Communications – the communications applications are currently serviced by a formidable group of competitors that are very focused on this space. Displacing a company such as Qualcomm at a major account is a major challenge. Companies such as Qualcomm also have a wide variety of communications-targeted processes to choose from, produced by TSMC, GlobalFoundries, Samsung and UMC. Intel only produces a relatively narrow set of process options for communications. Still, if Intel can design competitive products on their own process, they will have one advantage in that they get to keep the margin that a fabless company has to pay to an outside fab. Yes, fabless companies avoid investing in process development, but on a percentage basis that is smaller than the typical foundry margin (TSMC spends 8% of revenue on R&D but commands a 48% gross margin). Of course this is complicated by foundry fabs being located in low-cost regions of the world while Intel’s fabs are generally located in higher-cost regions, but with microprocessors helping to absorb fab costs it should offer Intel a competitive cost structure. This is however an uphill battle for Intel against successful and entrenched competition, as I think Intel has learned over the last decade. It should also be noted here that margins on SOCs, while good in some cases, are not generally as good as for microprocessors, and Intel would see an overall margin decline if this strategy were successful. I do think if Intel wants to succeed here they need to change their mindset from microprocessors first to communications first.
  • Succeed in Foundry – the third option for keeping the Intel manufacturing engine humming is to make wafers for others as a foundry. I believe Intel is doing this as a hedge against declining microprocessor sales and a possible lack of success in communications. I also believe that if Intel had to rank these three options they would be pretty much in the same order as they are here. The problem from an Intel perspective is that microprocessors are declining and Intel has been trying to succeed in communications for a long time with nothing but losses to show for it. Success in foundry will be difficult for Intel to achieve. TSMC is a fierce and successful competitor with a portfolio of foundry processes, a huge state-of-the-art fab network in a low-cost part of the world and a vast, well-established design environment. Intel can also be viewed as a competitor by some potential customers. GlobalFoundries and even Samsung and UMC are well ahead of Intel on the foundry learning curve. This will be another uphill battle for Intel. I frankly doubt Intel wants to be a general foundry with dozens of customers, but I would bet they would love to have Apple or Qualcomm. Like the communications business, the foundry business is another space where Intel would have to accept lower margins.

    The bottom line of all this is that with a declining PC microprocessor market, Intel has to either prepare itself for being a mature, no-growth company or succeed in communications or foundry, or to some extent in both. I don’t think Intel can or will accept no growth, and so they are pursuing the only options they have. It has been argued on SemiWiki that Intel shouldn’t be in the foundry business, but I think the answer is: what choice do they have? Intel isn’t going to disappear, but they need to find a path to resumed growth or they will be managing a whole set of really difficult problems due to shrinking revenue or shrinking gross margins or both.

    Scotten W. Jones – President IC Knowledge LLC



    International CES: Day One
    by Bill Jewell on 01-05-2014 at 10:00 pm

    Semiconductor Intelligence will be attending the International CES this week in Las Vegas, Nevada. The Consumer Electronics Association (CEA) puts on the show each year. The CEA insists the meeting be called “International CES” and states “CES” no longer stands for Consumer Electronics Show. The show is now about “consumer technology” which is broader than just electronics. Anyway, it is a chance to get together with about 150,000 of my closest friends and see the latest in consumer gadgets. I will post daily updates with my impressions on the coolest stuff and observations of the chaos which is CES.

    Sunday, January 5, 2014
    For all of you suffering in heavy snow and sub-zero temperatures, it is sunny and in the 50s in Las Vegas. Today was CES Unveiled, an introduction to some of the CES exhibits. The items ranged from the usual suspects (lots of headphones and speakers) to the cool to the “why does anyone need this?”

    Among the cool: drones are not just for taking out Al Qaeda anymore. Several were flying around CES Unveiled. Supposedly they have practical applications in surveying and aerial photography, but are mainly a cool toy you can use to spy on your neighbors.

    Also among the cool: zepp.com has sensors which mount on a golf club, baseball bat or tennis racket. All the information about your swing (motion, speed, angles) is available to review on your PC or phone.

    Another cool device was from guard2me.com. This wristwatch-sized device is designed for people with dementia. The device can track where a person is, outside with GPS or inside with WiFi. It can also sense when a person falls and alert medical personnel.

    In the “why does anyone need this” category is a lighting system which allows you to turn off or dim your lights with your phone. A more practical lighting device was from SmartCharge. The LED bulb acts as a normal light bulb. When the power goes out, it will run for up to four hours using the regular light switch control.

    More from sunny Vegas tomorrow.

    Bill Jewell, www.sc-iq.com

    More Articles by Bill Jewell…..



    Mission Critical Role of Unmanned Systems – How to fulfill?
    by Pawan Fangaria on 01-05-2014 at 11:30 am

    Do we ever imagine what kind of severe challenges mission-critical unmanned systems in the air, on land and underwater face? They are limited in space and size; they have to be light in weight, flexible across different types of operations, and at the same time rugged enough to work in extreme climatic conditions. And that’s not enough: amid these severe environmental limitations, these systems have insatiable requirements for new state-of-the-art functionality that must work with less power from a constrained energy budget, to lengthen their duration of operation, as most of these systems run on batteries. Nevertheless, these systems need to provide precise operation with unmatched performance, quality and reliability, and therefore need to conform to various standards such as ESD (electrostatic discharge) and EMC (electromagnetic compatibility, per standards developed by the U.S. military). It’s evident that electronics plays a major, critical role in the control and operation of these unmanned systems.

    For the system to work perfectly, it’s essential that the boards, packages and the semiconductor designs within ICs are architected and implemented correctly from beginning to end, conforming to all standards, size, space, power, thermal tolerance and the like. How can such stringent requirements be estimated and then met before the systems go into the field? That is done by software simulation of each criterion; such software tools are grounded in the actual physical principles that govern the behavior of semiconductor elements, in isolation and in groups interfering with each other, in today’s complex chips, SoCs and systems. A study by the U.S. Department of Defense reveals that the ROI from such software infrastructure and simulation is about 7 to 13 times the investment.

    Now let’s look at the key criteria one-by-one. Power has taken the central spot among other issues as it affects the system’s reliability, thermal and electrical effects and endurance. It’s important that power management and reduction be considered from the architecture stage of the design. ANSYS Apache’s PowerArtist tool analyses the RTL description of a semiconductor design at the architecture level and identifies improvements in the design (without changing functionality) that can reduce power consumption considerably, thus reducing heat generation and improving the long-term reliability of the system.

    Once the design architecture is right, power integrity must be ensured by proper delivery of power, such that only the required parts of the circuitry draw necessary and sufficient power at a time while other parts are shut off to avoid wasting power. Apache’s RedHawk tool analyses power integrity for all components and suggests any required changes to ensure specified power to each component for its flawless operation. It uses the ANSYS extraction engine to generate an electrical model of the package and board that helps in power integrity analysis of the overall system.

    Thermal management is another challenging criterion for miniaturized systems carrying large control and surveillance capabilities. RedHawk can again predict the heat generation due to power dissipation in the system. And Icepak, another advanced thermal modelling tool from ANSYS can analyse heat transportation and distribution through the system for proper selection of the IC package and cooling mechanism.

    Electromagnetic interference and electromagnetic compatibility are at the core of any electronic system and semiconductor design because of the extreme proximity of wires and other electrical elements, which can easily be affected by transient switching currents and power delivery network resonance. While RedHawk analyses the activity of a component and simulates the electromagnetic excitation caused by this activity, Sentinel, another tool from Apache, simulates the impact of its electromagnetic interference on the system’s environment. This study helps in optimizing the design for near- and far-field electromagnetic field distribution and radiation by components to meet EMC compliance.

    Reliability is the first and foremost requirement for military systems that can be exposed to extreme temperature and other climatic conditions. Electromigration, which is caused by high current density, impacts interconnects, leading to device failure over the long run. High temperature further aggravates electromigration. RedHawk analyses electromigration requirements for the complete interconnect in the chip to ensure it keeps functioning for its lifetime under the specified extreme environmental conditions.

    Electrostatic discharge (ESD) is another concern for system reliability. PathFinder is another tool from Apache that can comprehensively analyse a device’s sensitivity to ESD and guide proper protection as per ESD standards.

    Dr. Robert Harwood, Aerospace and Defense Industry Director at ANSYS, and Ms. Margaret Schmitt, Application Engineering Director at Apache, have described these challenges of next-generation unmanned systems, and the tools to solve them, in great detail in a whitepaper at Apache’s website. As I read it, the comprehensive set of tools described in the paper not only caters to military systems but is very much appropriate for any electronic system and semiconductor design for automotive, aerospace, healthcare or personal technologies such as smartphones, notebooks or home appliances.

    More Articles by Pawan Fangaria…..



    Innovations that will change our lives in the next five years
    by Daniel Nenni on 01-05-2014 at 8:00 am

    The theme of our book “Fabless: The Transformation of the Semiconductor Industry” comes from the Steve Jobs quote, “You can’t really understand now if you don’t know what came before.” After chronicling the rise of the semiconductor industry and the transformation to the fabless business model, we ask in the final chapter, “What’s next for the semiconductor industry?” We now have more than two dozen passages from CEOs, luminaries, and pundits, which makes for an interesting read, absolutely.

    In the same vein, researchers from IBM publish an annual list called “5 in 5”, five technologies that will change our lives in the next five years:

    5 in 5 2007

    • It will be easy for you to be green and save money doing it
    • The way you drive will be completely different
    • You are what you eat, so you will know what you eat
    • Your cell phone will be your wallet, your ticket broker, your concierge, your bank, your shopping buddy, and more
    • Doctors will get enhanced “super-senses” to better diagnose and treat you

    5 in 5 2008

    • Energy saving solar technology will be built into asphalt, paint and windows
    • You will have a crystal ball for your health
    • You will talk to the Web . . . and the Web will talk back
    • You will have your own digital shopping assistants
    • Forgetting will become a distant memory

    5 in 5 2009

    • Cities will have healthier immune systems
    • City buildings will sense and respond like living organisms
    • Cars and city buses will run on empty
    • Smarter systems will quench cities’ thirst for water and save energy
    • Cities will respond to a crisis — even before receiving an emergency phone call

    5 in 5 2010

    • You’ll beam up your friends in 3-D
    • Batteries will breathe air to power our devices
    • You won’t need to be a scientist to save the planet
    • Your commute will be personalized
    • Computers will help energize your city

    5 in 5 2011

    • People power will come to life
    • You will never need a password again
    • Mind reading is no longer science fiction
    • The digital divide will cease to exist
    • Junk mail will become priority mail

    5 in 5 2012

    • You will be able to reach out and touch through your phone
    • A pixel will be worth a thousand words
    • Computers will hear what matters
    • Digital taste buds will help you to eat healthier
    • Computers will have a sense of smell

    5 in 5 2013

    • The classroom will learn you
    • Buying local will beat online
    • Doctors will routinely use your DNA to keep you well
    • A digital guardian will protect you online
    • The city will help you live in it

    The common theme of course is semiconductors enabling our future health and welfare. Speaking of semiconductor innovation, CES 2014 is next week, but first let’s look at previously announced CES innovations that changed our lives:

    • Videocassette Recorder (VCR), 1970
    • Laserdisc Player, 1974
    • Camcorder, 1981
    • Compact Disc Player, 1981
    • Digital Audio Technology, 1990
    • Compact Disc – Interactive, 1991
    • Mini Disc, 1993
    • Radio Data System, 1993
    • Digital Satellite System, 1994
    • Digital Versatile Disk (DVD), 1996
    • High Definition Television (HDTV), 1998
    • Hard-disc VCR (PVR), 1999
    • Digital Audio Radio (DAR), 2000
    • Microsoft Xbox, 2001
    • Plasma TV, 2001
    • Home Media Server, 2002
    • HD Radio, 2003
    • Blu-Ray DVD, 2003
    • HDTV PVR, 2003
    • HD Radio, 2004
    • IP TV, 2005
    • An explosion of digital content services, 2006
    • New convergence of content and technology, 2007
    • OLED TV, 2008
    • 3D HDTV, 2009
    • Tablets, Netbooks and Android Devices, 2010
    • Connected TV, Smart Appliances, Android Honeycomb, Ford’s Electric Focus, Motorola Atrix, Microsoft Avatar Kinect, 2011
    • Ultrabooks, 3D OLED, Android 4.0 tablets, 2012
    • Ultra HDTV, Flexible OLED, Driverless Car Technology, 2013

    Every January CES brings an onslaught of new products, most of which we never see again. I’m still waiting for a 65″ OLED flatscreen. Curved TVs? Smarter smartphones and 2-in-1 tablets? Wearables from head to toe? I decided to skip CES this year. Too much going on with work and family, and honestly I just did not see anything worth the drive down. The keynote by BK (the Intel CEO) will probably be the highlight of the conference and I can live stream that. If I’m missing something here, let me know.

    More Articles by Daniel Nenni…..



    Structured ASIC Dies…Again
    by Paul McLellan on 01-04-2014 at 11:53 pm


    There has always been a dream that you could do a design in a cheap, easy-to-design technology and then, if the design was a hit, press a button and instantly move it into a cheaper unit-price, high-volume design. When I was at VLSI in the 1980s we had approaches to make it easy to move gate arrays (relatively large die area) into standard cells almost automatically. Another approach was to try to get the design cost down and find a sweet spot between FPGAs and SoCs. LSI Logic had the RapidChip structured ASIC starting in the early 2000s, with pre-configured IP blocks and platforms that could quickly be programmed with just metal. Neither was successful.

    This was especially attractive to FPGA vendors. By their nature, FPGAs are not very efficient in their use of silicon, so FPGA vendors such as Xilinx and Altera felt pressure to offer a way, if designs went into high-volume manufacturing (HVM), to get the design into something more silicon-efficient so they didn’t lose the customer. Xilinx had a program called HardWire to do just this, but it was killed off over a decade ago. Apparently they didn’t lose the customers without such a program.

    Altera had a program called HardCopy. When John Daane first joined Altera as CEO he was very bullish about the role that HardCopy would have in Altera’s success and expected it to be a critical differentiator. He expected HardCopy would be 10% of their revenues by 2004. They even had a program called SiliconPro that was basically design services: take the netlist from the FPGA and do a full implementation using standard cells. They hired a group in Penang to support all these HardCopy designs but…they never materialized. By 2004 HardCopy was 1-2% of Altera’s revenue and maybe got as high as 5% in the end.

    And when I say in the end, that’s what I mean. Altera quietly dropped it from their product line:

    “Altera no longer offers HardCopy structured ASIC products for new design starts”

    Years ago, when I was at Ambit I think, I talked to the team that designed the Rio. This was the first (or the first reasonably successful) portable mp3 player, an iPod long before iPods. It was FPGA-based. I asked them why when it took off they didn’t immediately do a much-lower unit cost cell-based version. He told me that he could do one of two things with his design team. Cost-reduce the current Rio, or do a new FPGA-based Rio2. No prizes for guessing which option they chose. That is the heart of the problem.

    Another problem is that if Rio did produce the ultimate standard product for implementing mp3 players (let’s call it PortalPlayer just for fun) then why would they not want to make as much money as possible selling the chips to everyone (such as, say, Apple) rather than selling mp3 players. Eventually other companies would produce ASSPs in the space and Rio could use them (because if they didn’t their competitors would probably undercut them).

    The reality is that there seems to be a space for doing quick and easy designs even if the unit cost is high (FPGAs). Get to market fast, get traction. And there is a space for doing complex standard products that are sold to the general market. There is not a space for doing moderately hard designs at a moderately low unit price, especially if the volumes are low, nor for semi-automatically moving designs up the chain. It just doesn’t work smoothly enough and the next generation design is always more important than cost-reducing the last one.

    Apple is rich enough that they can bridge the strategy: expensive designs that they do not sell to the general market, a strategy that wouldn’t have worked for the much smaller Rio. But even they buy their modems from Qualcomm. And A7 is not a cost-reduced A6: it is the next generation.

    I think by now it is safe to say that there isn’t really a sweet-spot between FPGAs and SoCs. As in products from Xilinx (I know they will tell you they do much more than FPGAs these days, and they do) and products from Qualcomm/Broadcom/etc. It is the comparison in design methodology that is probably key: the FPGA methodology is fairly automatic, SoC requires $100Ms in design tools and the best designers in the world. There isn’t a gap in between.

    More details on Xilinx UltraScale are here.


    More articles by Paul McLellan…


    New Frontiers in Scan Diagnosis
    by Paul McLellan on 01-03-2014 at 8:10 pm

    As we move down into more and more advanced process nodes, the rules of how we test designs are having to change. One big challenge is the requirement to zoom in and fix problems by doing root cause analysis using test data alone, along with the rest of the design data such as detailed layout, optical proximity correction and so on, but without being able to create and run additional tests or put the chip under an electron microscope. Since (digital) test these days is all scan-based, this means analyzing scan test failures and “deducing” what the problem has to be, Sherlock Holmes style.

    One particularly problematic area is intermittent failures due to patterning and double patterning issues. The test doesn’t fail all the time because sometimes the design prints just fine and sometimes it doesn’t, or sometimes the two patterns are aligned well enough that there is no problem and sometimes they are not. But this shows up as an extended period of low yield until the problem is fixed, which is a financial issue. For example, the picture below is a GlobalFoundries 28nm test chip, and you can see an area where optical interference has not been completely handled (28nm is not double patterned; this is an 80nm spacing, I would assume).

    Traditional failure analysis results in narrowing the problem down to a single logical net. By adding layout awareness, it can be narrowed down to a physical segment. Further analysis can sometimes narrow this down to a specific failure (bridge of two metals, open via, break in metal and so on) or more often a limited number of possible failures and their associated probabilities.

    Root cause deconvolution (RCD) narrows things down further and gets rid of a lot of the noise, eliminating candidates that cannot be the root cause, based on analysis across a selection of failing die and Bayesian probability analysis. This then makes it possible to pick the best die for physical failure analysis (looking under an electron microscope, for example, to see what the layout actually looks like).
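    The Bayesian step can be illustrated with a toy sketch. The defect classes, priors and per-die likelihoods below are entirely made up for illustration; this is not Mentor's actual RCD algorithm, just the general idea of updating root-cause probabilities from diagnosis data and ranking die for physical failure analysis:

```python
# Toy illustration of Bayesian updating for root-cause ranking.
# All classes, priors and likelihoods are hypothetical.

# Prior probability of each defect class (assumed, e.g. from historical data)
priors = {"bridge": 0.3, "open_via": 0.5, "metal_break": 0.2}

# Hypothetical diagnosis output: per die, likelihood of each class
die_likelihoods = {
    "die_A": {"bridge": 0.7, "open_via": 0.2, "metal_break": 0.1},
    "die_B": {"bridge": 0.1, "open_via": 0.8, "metal_break": 0.1},
}

def posterior(likelihood):
    """Bayes' rule: posterior ∝ prior × likelihood, normalized to sum to 1."""
    unnorm = {c: priors[c] * likelihood[c] for c in priors}
    total = sum(unnorm.values())
    return {c: p / total for c, p in unnorm.items()}

# Rank the most probable root cause for each die; the die whose top
# candidate has the highest posterior is the best choice for SEM/TEM work.
for die, lik in die_likelihoods.items():
    post = posterior(lik)
    best = max(post, key=post.get)
    print(die, best, round(post[best], 2))
```

    In a real flow the likelihoods would come from layout-aware scan diagnosis across many failing die, but the normalization and ranking step is the same.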

    FinFETs and 14/16/20nm bring a new set of problems, one of which is that we are getting out of the resolution range of scanning electron microscopes (SEMs), so transmission electron microscopes (TEMs) are required. Plus a lot of the critical features are much smaller than before, making manufacturing defects much more likely.

    Mentor’s products that support this sort of analysis are Tessent Diagnosis and Tessent YieldAnalysis.

    Mentor has a webinar entitled New Frontiers in Scan Diagnosis, presented by Geir Eide. It is part of the ASM webinar series on electronic device failure and analysis. It goes into a lot more detail than here, with many more examples, including many more pictures of failures. The webinar is archived here.




    Mastering the Magic of Multi-Patterning

    by Daniel Payne on 01-03-2014 at 7:03 pm

    I’ve been quite impressed that modern ICs use a lithography process with 193nm light sources to resolve final feature sizes at 20nm and smaller dimensions. We’ve blogged about Double Patterning Technology (DPT), which enables 20nm fabrication, some 45 times in the past few years, so one big question for me is, “How does this affect my design and verification flow?”

    David Abercrombie of Mentor Graphics authored a 12-page white paper, “Mastering the Magic of Multi-Patterning”, which answers that question about the impact of multi-patterning on design and verification flows.


    David Abercrombie


    NoC, NoC: Your Chip May Be Under Attack

    by Paul McLellan on 01-03-2014 at 12:37 pm

    SoCs face a lot of issues related to security, and the Network-on-Chip (NoC) is in a good position to facilitate system-wide security services. SoCs are now so complex that one of the challenges is to make sure that the chip does what it is meant to do and doesn’t do what it isn’t meant to do. Just as in software, security used to be largely ignored when doing a chip design, but with all the latest revelations about the NSA (and others) you can’t just assume your blocks have no security holes and that it is impossible to run malicious code on your control processor.

    Chips are vulnerable to attack in all sorts of subtle ways as well as the obvious ones of violating security policies, such as writing out encryption keys to non-secure areas. Just as a website is vulnerable to a DDoS (distributed denial of service) attack, IP blocks on a chip are vulnerable to starvation, forced errors and unauthorized access.

    Sonics has a broad set of security features in its SSX and SGN products that can be used in conjunction with error management to enable content protection, core hijacking prevention and denial-of-service protection, and thus help ensure that an SoC cannot be compromised.

    The protection mechanism allows access restrictions to user-specified targets using flexible, user-defined protection regions. Initiator access can additionally be qualified with in-band qualifiers. Incoming requests at the target are qualified using the address, the user bits, the initiator ID and the type of command to decide whether to grant or refuse the request. Two tests are done: is the incoming request type (read, write or both) permitted by the permissions, and are the role bits of the incoming request within the pattern allowed by the user-defined network permission bits?

    The access rights are stored in a table of run-time configurable registers—these registers reside in a protected region. The request address and protection group are used to look up read, write, and role permissions from a table of protection regions.


    In a little more detail, access control at the target agent is based on the following attributes of each request:

    • The address is used to determine the protection region. Because the initiator agent can do address fill-in and/or multi-channel operations, the address received by a target agent may not be the same as the address sent by the initiator.
    • Initiator ID is used to determine protection groups.
    • Command is used to determine if a read or write is requested (the ReadEx command is evaluated as both a read and a write).
    • Separate portions of user-defined signals can be used to determine the protection group and the role associated with the request.
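    The two-test check over these attributes can be sketched in a few lines. Everything here is an illustrative assumption, including the names, the table layout and the bit encodings; Sonics' actual register interface is not public in this article:

```python
# Illustrative sketch of target-side access control in a protection-region
# table. All names and encodings are assumptions, not Sonics' actual design.
from dataclasses import dataclass

READ, WRITE = 0x1, 0x2  # request-type bit masks; ReadEx would be READ | WRITE

@dataclass
class ProtectionRegion:
    base: int    # start address of the region
    limit: int   # end address (inclusive)
    perms: dict  # protection group -> (rw_mask, allowed_role_bits)

def access_allowed(regions, addr, group, cmd_mask, role_bits):
    """Grant or refuse a request at the target agent.

    The address selects the protection region; the protection group
    (derived from initiator ID / user bits) selects the permissions.
    """
    for region in regions:
        if region.base <= addr <= region.limit:
            rw_mask, allowed_roles = region.perms.get(group, (0, 0))
            # Test 1: is the request type (read, write, both) permitted?
            type_ok = (cmd_mask & rw_mask) == cmd_mask
            # Test 2: are the role bits within the allowed pattern?
            role_ok = (role_bits & ~allowed_roles) == 0
            return type_ok and role_ok
    return False  # no matching region: refuse by default

# A region where group 0 may read and write, but group 1 may only read.
regions = [ProtectionRegion(0x1000, 0x1FFF,
                            {0: (READ | WRITE, 0x1), 1: (READ, 0x1)})]

assert access_allowed(regions, 0x1234, 0, READ | WRITE, 0x1)  # ReadEx: granted
assert not access_allowed(regions, 0x1234, 1, WRITE, 0x1)     # write refused
assert not access_allowed(regions, 0x3000, 0, READ, 0x1)      # outside region
```

    In hardware these checks would of course be combinational logic against run-time configurable registers rather than Python, but the grant/refuse decision has the same shape.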


    Another important aspect of security is to track issues and capture error conditions. Of course there are lots of reasons for errors on an SoC, and security is not necessarily the most likely. As Hanlon’s razor says, “Never attribute to malice that which is adequately explained by stupidity.” Well, OK, chip designers are not stupid, but the most likely reason for sending, say, an unsupported command is an error. But it might not be, so capturing errors is a very important part of security.

    The combination of the protection mechanism along with error management helps address many SoC security vulnerabilities such as information extraction, core hijacking (running malicious code) and DoS attacks.

    More information here.


    Somebody at the NSA has a sense of humor

    by Don Dingee on 01-02-2014 at 6:30 pm

    We have to go way back in the annals of entertainment history to find the origin of the word “Jeep”, not just a term of endearment hung on a WWII utility vehicle. Pictured is Eugene the Jeep, a mystical creature belonging to the 4th Dimension, who first appeared to torment Popeye the Sailor in 1936.