
GlobalFoundries Has a New CEO

by Paul McLellan on 01-06-2014 at 4:07 pm

Sanjay Jha is taking over as CEO of GlobalFoundries. His background is in mobile: he spent the early part of his career at Qualcomm, where he was COO from 2006 to 2008, before becoming co-CEO of Motorola and then, when the company was split, CEO of Motorola Mobility. That company was acquired by Google, and he stepped down after the acquisition closed.

He was born in India. His educational background is a little odd, with an engineering degree from the University of Liverpool (in England) and a PhD in EE from the University of Strathclyde (in Scotland, where, as it happens, I worked for four months between my undergraduate degree and my own PhD).

Ajit Manocha was always officially acting CEO, a safe pair of hands while a new CEO was found. He will return to his role as an advisor to the company’s shareholder (Abu Dhabi’s Advanced Technology Investment Corporation, ATIC). GlobalFoundries only has one shareholder. They used to have 2 but ATIC bought out AMD’s share in 2012.

GlobalFoundries also announced (actually last Friday) that they will deploy an additional $9-10B of capital over the next couple of years, primarily for the buildout of Fab 8, which is in Saratoga County, New York, and is GlobalFoundries’ most advanced fab. Some will also be invested in Dresden, Germany, and in the ex-Chartered fabs in Singapore.

GlobalFoundries is the second biggest foundry, but it is a long way behind TSMC both in terms of revenue and in its capability to deliver leading edge processes so Sanjay certainly has plenty of challenges. But the brass ring in the leading edge foundry business is the mobile industry and his connections there from Qualcomm and Motorola have to be an advantage. The biggest challenge is really to turn what seem to be almost unlimited amounts of money from ATIC into a business that is truly competitive with TSMC, Samsung (and maybe Intel). The other foundries such as UMC and SMIC seem to be struggling even more than GlobalFoundries (or are specialized analog fabs that don’t really compete in the same business). In the long run, if GlobalFoundries is going to be truly successful at the leading edge it needs to get some business from the likes of Apple, Broadcom, nVidia and…err…Qualcomm. He might be able to get them to return his phone calls having been COO.

I wish him luck. The foundry ecosystem needs a strong number 2 competitor.


More articles by Paul McLellan…


India Spearheading into Space Technology

by Pawan Fangaria on 01-06-2014 at 11:00 am

Success follows failure if your perseverance is high enough to achieve an arduous goal. This adage was borne out by the Indian Space Research Organisation (ISRO) successfully launching GSLV-D5, India’s first rocket powered by an indigenously developed cryogenic engine, which carried the GSAT-14 advanced communications satellite and placed it into orbit. Two earlier attempts had failed: one last August was aborted before launch due to a fuel leak in one of the rocket’s engines, and another in December 2010 (which employed a Russian cryogenic engine) broke apart just after lift-off. This exemplary success puts India into the small group of elite nations possessing this technology; to date only the U.S., Japan, France, the European Space Agency, China and Russia had mastered it. By entering such an advanced technology on the global stage, Indian scientists bring unprecedented pride to India.

The GSLV-D5 was indigenously developed by ISRO under its ambitious space programme. There was a need to put heavy satellites into high orbits, above 35,000 km from the Earth. India had long been trying to develop its own cryogenic engines because a cryogenic rocket provides higher thrust per kilogram of propellant. Cryogenic technology is extremely complex, however, because its propellants (liquid oxygen and liquid hydrogen) must be kept at extremely low temperatures: liquid hydrogen below -250 degrees centigrade and liquid oxygen below -150 degrees centigrade. This in turn raises thermal and structural challenges in the engine. The rocket itself weighed about 415 tonnes, and the GSAT-14 satellite it carried weighed about 2 tonnes. The launch took place from the Satish Dhawan Space Centre at Sriharikota in the Andhra Pradesh state of India on Sunday, 5th January.
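The advantage of cryogenic propellants comes down to specific impulse: the higher the specific impulse, the more velocity change a rocket extracts from each kilogram of propellant. A minimal Python sketch of the Tsiolkovsky rocket equation illustrates the point (the stage masses and specific impulse figures below are illustrative assumptions, not actual GSLV-D5 data):

```python
import math

G0 = 9.81  # standard gravity, m/s^2

def delta_v(isp_s, m_initial_kg, m_final_kg):
    """Tsiolkovsky rocket equation: dv = Isp * g0 * ln(m0 / mf)."""
    return isp_s * G0 * math.log(m_initial_kg / m_final_kg)

# Illustrative (not actual GSLV) stage numbers: 100 t of propellant
# pushing a 20 t dry stage plus a 2 t satellite.
m0, mf = 122_000.0, 22_000.0

dv_cryogenic = delta_v(450.0, m0, mf)  # LOX/LH2 upper stages reach roughly 450 s Isp
dv_storable = delta_v(300.0, m0, mf)   # storable propellants manage roughly 300 s

print(f"cryogenic: {dv_cryogenic:,.0f} m/s")
print(f"storable:  {dv_storable:,.0f} m/s")
```

The ratio of the two delta-v figures is simply the ratio of the specific impulses, which is why a cryogenic upper stage can place a heavier satellite into the same transfer orbit.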

Dr. K. Radhakrishnan, Chairman of ISRO, is extremely proud of this accomplishment by his ‘Team ISRO’. On the momentous day he said, “Team ISRO and project directors put their heart and soul in making this proud moment for the country.” Including the last 3-4 years of rigorous effort, it took about 20 years in all to develop and prove this cryogenic technology in the country. It makes India self-reliant in deploying its communication satellites; until now, France’s Ariane rocket was being used to launch them into geostationary transfer orbits. The GSAT-14 satellite will be used for telecommunication and telemedicine applications.

This technology is expected to give a large push to the Indian economy, saving foreign exchange on launching domestic satellites while earning foreign cash by launching other countries’ satellites in this lucrative global business, thus helping stabilize India’s CAD (Current Account Deficit). Today, in the international market, the charge for launching a satellite is approximately 80 to 90 million USD, and demand for communication infrastructure keeps growing globally.

It’s a great feat for India, which is struggling to balance its imports against exports. For the same reason, the Indian government has been pursuing two semiconductor foundries in India, so that chips for domestic demand can be manufactured at home rather than imported. Let’s see how the future unfolds for India with this kind of technological thrust.

More Articles by Pawan Fangaria…..



Imagination’s New GPU Cores

by Paul McLellan on 01-06-2014 at 9:00 am

This morning Imagination announced their latest GPU cores, including the world’s smallest fully-featured OpenGL ES3.0/OpenCL GPU core. More on that below. And it is the Internet…and graphics…so cats. Graphically rendered cats.

The first announcement is the high-end, new-generation PowerVR Series6XT architecture, which delivers up to 50% higher performance plus advanced power management. The Series6XT architecture features market-leading scalability, supporting implementations of up to eight compute clusters that scale linearly in GFLOPS and texturing rates. With OpenGL ES 3.0 support across the range, Series6XT provides among the highest-performance OpenGL ES 3.0 GPUs in the industry. Today they unveiled the first three cores in the Series6XT generation, with two, four and six compute clusters respectively.
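The claimed linear scaling with cluster count can be sketched as follows. The per-cluster throughput figures are made-up placeholders, since the announcement does not publish per-cluster numbers; only the linear relationship is the point:

```python
# Hypothetical per-cluster figures -- illustrative only, not Imagination data.
GFLOPS_PER_CLUSTER = 50.0
GTEXELS_PER_CLUSTER = 2.0

def series6xt_throughput(clusters: int):
    """GFLOPS and texture rate scale linearly with compute clusters."""
    if not 1 <= clusters <= 8:
        raise ValueError("Series6XT supports up to eight compute clusters")
    return clusters * GFLOPS_PER_CLUSTER, clusters * GTEXELS_PER_CLUSTER

# The three announced Series6XT configurations:
for n in (2, 4, 6):
    gflops, gtex = series6xt_throughput(n)
    print(f"{n} clusters: {gflops:.0f} GFLOPS, {gtex:.1f} Gtexels/s")
```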

The PowerVR series is already the market leader, at least according to Jon Peddie Research. There are now over 45,000 developers working on graphics applications using this portfolio of IP.

Here is how the whole family fits together for everyone who wants to see the whole (announced) roadmap:

This achieves up to a 50% performance increase on the latest industry benchmarks compared to their previous generation of cores, with the best performance in terms of both GFLOPS/mm² and GFLOPS/mW. Architectural enhancements include a streamlined instruction set for improved application performance and the next generation of hierarchical scheduling technology (HST) for higher resource utilization, sustained polygon throughput and improved pixel fillrate, along with other improvements for better parallel processing.

PowerGearing power management, combined with the world’s most efficient tile-based deferred rendering (TBDR), enables fine-grained control of all GPU resources and dynamic demand-based scheduling, routing power only to the resources that need it to deliver the best low-power performance.

The PVR3C triple compression provides lossy texture compression (down to around 1 bit per pixel), lossless image compression (about 2X) and lossless geometry compression. This reduces bandwidth requirements and thus power.


The new lower-end Rogue GPUs are targeted at entry-level segments, ideal for applications with limited area and bandwidth where the higher-powered cores are inappropriate. There are four cores in the family:

  • PowerVR Series6XE G6050: takes advantage of the latest Rogue architectural developments and extends scalability below one cluster to a half-cluster while maintaining full software compatibility, creating the smallest fully-featured OpenGL ES3.0- and OpenCL-capable GPU core available
  • PowerVR Series6XE G6060: This core augments the half-cluster design of the G6050 with the addition of second generation lossless image compression (PVRIC2), providing an optimal balance of small size and bandwidth reduction for products such as entry-level mobile products, tablets and full HDTVs and set-top boxes.
  • PowerVR Series6XE G6100: an updated and further optimized version of the original core and features a single Unified Shader Cluster (USC) combined with a high-performance texture mapping unit, enabling it to deliver the same raw fillrates as multi-processor GPUs from the previous generation.
  • PowerVR Series6XE G6110: single-cluster core extends the above G6100 design, adding PVRIC2 for improved throughput, reduced bandwidth, conserved power and improved system performance—key for products such as low-cost 4K UltraHD TVs and tablets where bandwidth is a limited, valuable resource.

All of these cores come with physical design optimization kits (DOKs) to optimize power, performance and area (PPA). These include reference flows, tuned libraries from partners, characterization data and more.

More details on Imagination’s website here.

More articles by Paul McLellan…


OpenVX Brings Power-efficient Vision Acceleration to Mobile

by Eric Esteve on 01-06-2014 at 8:44 am

OpenVX is the next open specification, with an open source sample implementation, to be launched by the Khronos Group, a consortium building a family of interoperating APIs for portable and power-efficient vision processing. If you look at the OpenVX participant list, you can see that the major chip makers (Broadcom, Qualcomm, TI, Intel, Nvidia, Renesas, Samsung, ST and Xilinx) as well as major IP vendors like ARM and CEVA have joined the Khronos Group. Before discussing OpenVX benefits, it’s worth commenting on this “Why do we NEED Standards?” slide:

These four bullets may look like obvious statements, but it’s worth remembering that the semiconductor industry needs standard interfaces for interoperability, and that a widely adopted standard interface grows a market opportunity to the point where it becomes a mass market, so devices can be built cheaply enough to attract even more customers. It’s a kind of virtuous cycle: industries cooperate to build a market (naïve?)... and then compete, which is more realistic. And because the largest market today is by far the mobile industry, OpenVX has been developed to support computer vision for mobile and embedded devices.

Computer vision is extremely data intensive, and OpenVX brings sophisticated image stream generation:

  • Advanced, high-frequency burst control of camera and sensor operation
  • Portable support for more types of sensors – including depth sensors
  • Tighter system integration – e.g. synch of camera and MEMS sensors

OpenVX allows developing advanced imaging & vision applications, like:

  • Image enhancement,
  • Object tracking and detection, and
  • Image manipulation

In fact, OpenVX can be implemented on CPU, GPU or DSP cores, but the goal is not only to accelerate performance; because the primary target is mobile, it is also to drastically minimize power consumption. Thus, implementing OpenVX on a CEVA DSP core is probably the best option, cutting power consumption by 10X compared with a CPU. Remember that CEVA supports the Android Multimedia Framework (AMF), a system-level software solution that offloads multimedia tasks from the CPU/GPU onto more efficient application-specific DSP platforms. The next picture illustrates the OpenVX development flow, in this example for Android OS using CEVA AMF:
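OpenVX itself is a C API, but its central idea, declaring a whole vision pipeline as a graph up front so the runtime can map it onto the most power-efficient engine (CPU, GPU or DSP), can be shown with a toy sketch. The Python classes below are hypothetical stand-ins, not the OpenVX API:

```python
# Toy illustration of a graph-based vision pipeline in the spirit of OpenVX.
# All class and function names here are hypothetical, not Khronos APIs.
class Node:
    def __init__(self, name, fn, inputs):
        self.name, self.fn, self.inputs = name, fn, inputs

class Graph:
    """Nodes are declared up front, in topological order, so a runtime
    could schedule the whole pipeline onto one engine and avoid
    round-trips through the CPU between stages."""
    def __init__(self):
        self.nodes = []
    def add(self, name, fn, *inputs):
        self.nodes.append(Node(name, fn, inputs))
    def run(self, source):
        results = {"input": source}
        for node in self.nodes:
            args = [results[i] for i in node.inputs]
            results[node.name] = node.fn(*args)
        return results

# A minimal enhancement pipeline: blur then threshold one 8-bit image row.
g = Graph()
g.add("blur", lambda img: [(a + b) // 2 for a, b in zip(img, img[1:] + img[-1:])], "input")
g.add("threshold", lambda img: [255 if p > 100 else 0 for p in img], "blur")
out = g.run([90, 120, 200, 40])
print(out["threshold"])
```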

Benefits enabled by AMF include:

  • Multimedia tasks are abstracted from the CPU and are physically running on the DSP. Furthermore, tasks can be combined (“tunneled”) onto the DSP – saving data transfer, memory bandwidth and cycles overhead on the CPU
  • Utilization of the PSU (Power Scaling Unit) available for CEVA DSPs to significantly lower power consumption further, when running multimedia tasks
  • Easy activation of CEVA-CV computer vision (CV) software library for the development of vision-enabled applications
  • Easy activation of yet to be officially released OpenVX pre-optimized software library

A chip maker developing ICs for mobile has to look for differentiation, and the leaders in the mobile market also appear to be the companies investing heavily in differentiated solutions, as we have explained in this blog for example. Is OpenVX adoption a drawback for differentiation? In fact it isn’t: as you can see in the above flow, the standardized OpenVX APIs are available in open source. Thus it’s possible to extend the OpenVX library to create differentiation while keeping the benefits of a standardized interface. This is also true for CEVA DSP customers: they benefit from the OpenVX standard interface supported by CEVA, and from the drastic reduction in power consumption of a DSP core compared with a CPU or GPU, and they can still build a differentiated solution by extending the OpenVX library with specifically developed APIs.

Eric Esteve from IPNEST


More Articles by Eric Esteve …..


The Future of Intel

by Scotten Jones on 01-06-2014 at 1:30 am

There have been a lot of articles and discussion on SemiWiki about Intel. These articles have all been written from the perspective of an outsider commenting on what Intel is doing, or should or shouldn’t be doing. I thought it would be interesting to take a look at how Intel got to where they currently are, what their current strengths and weaknesses are, and what Intel’s options are. I am not an Intel insider, nor do I profess to have inside knowledge of their situation. I am, however, someone who has worked in the semiconductor industry for over 30 years and run a semiconductor division, and I think Intel’s options are fairly clear.

Historical perspective

In 1971 Intel invented the microprocessor. As PCs became the driver for the semiconductor industry, the microprocessor grew into one of the largest and most profitable segments of the semiconductor industry. Intel built and maintained a market-leading position in microprocessors and rode that success to become the largest and most profitable semiconductor company in the world.

The tremendous profits generated by Intel’s microprocessor business funded the construction of a network of state-of-the-art wafer fabs and the development of what I believe to be the most advanced IC logic technology in the world.

I know there is some debate about whether Intel really has the most advanced logic process technology, but I think on a fundamental level they do, and my argument is pretty straightforward. In 2007 Intel introduced high-k metal gates (HKMG) at their 45nm node. It wasn’t until approximately 1-1/2 nodes and 3 years later that other logic IC producers introduced HKMG. In 2011 Intel introduced FinFETs at their 22nm node. Other leading logic producers are expected to introduce FinFETs later this year, three years after Intel. By the time the first-generation 14/16nm FinFETs are introduced by other leading producers, Intel will already be producing their second-generation 14nm FinFET technology, which I believe will include another major new innovation, 2 to 3 years ahead of everyone else. These are fundamental innovations not convoluted with design and operating system issues.

The changing competitive landscape

While Intel has been dominating the microprocessor business, the semiconductor landscape has been changing under their feet.

The foundry/fabless model has been steadily developing in size and capability. Many fabless companies have sprung up focused on system-on-a-chip (SOC) development. SOCs are perfect for small form factor products such as smart phones and tablets, and companies such as Qualcomm, Broadcom, MediaTek and even Apple (who now designs their own SOCs) dominate the SOC market for communications. I don’t think Intel was blind to this developing market; Intel has invested billions of dollars in communications over the years. What I think is the key issue is that inside Intel, communications has always been second priority to microprocessors. All you have to do is look at Intel’s process technology: at each new node they first introduce the microprocessor version of a process and then, around a year later, the communications version. You will know they are really serious about communications when a new node is introduced with the communications version of the process coming out first.

The other thing that I think has happened is that Intel was surprised by how rapidly tablets and smart phones have grown and how they have cannibalized notebook computer sales (in fairness to Intel I think this has surprised most everyone). In 2012 unit sales of PCs declined for the first time and in 2013 they declined again. In my opinion PC unit sales will likely continue to be flat to down for the next several years. At the same time, although server sales are growing to support all of these connected devices, unit volumes for servers are far lower than for notebooks and Intel is facing increasing competition from the ARM camp for server business. I suspect Intel may also have expected that Ultrabooks would help fend off the tide of tablets but so far that hasn’t happened.

I do want to add a side note here that the Microsoft Surface Pro 2 tablet uses Intel microprocessors. This Christmas I had no problem getting an iPad Air for my wife but couldn’t find a Surface 2 anywhere. The Surface 2 uses an NVIDIA Tegra processor, but the Surface Pro 2, which uses Intel microprocessors, was also in short supply. This could be due to a lot of reasons, including Apple just planning better, but I thought it was interesting.

Intel’s current position

Intel is still the world’s largest semiconductor company with industry leading margins and lots of cash. They have industry leading process technology and a huge network of state-of-the-art fabs. They have a strong brand, strong marketing organization and sales channels and lots of engineers.

On the other hand, only a few quarters ago there were widespread reports that utilization at Intel’s fabs was only 60%. In recent financial announcements Intel has claimed 80%, but that is still lower than they would like. Intel also has their 14nm process coming on-line in the ramping Fab 11X, and they have Fab 42 ready to turn on for 14nm production. Intel needs to find a way to fill these fabs and keep them full, or they will be facing a serious cost crisis and, longer term, difficulty funding their continued lead in process technology. In short, Intel needs to find a way to resume growing their business.

Intel’s options

The way I see it Intel has three main options:


  • Increase Microprocessor Sales – Perhaps the simplest option from Intel’s perspective would be to rev up microprocessor sales. If Ultrabooks and the Microsoft Surface Pro 2 were runaway successes, this would fix Intel’s problems pretty quickly. To date, the success of these two products does not appear to be sufficient to fix Intel’s problems. That could change, but it strikes me as unlikely. It is also something that is largely outside of Intel’s control.
  • Succeed in Communications – the communications applications are currently serviced by a formidable group of competitors that are very focused on this space. Displacing a company such as Qualcomm at a major account is a major challenge. Companies such as Qualcomm also have a wide variety of communications-targeted processes to choose from, produced by TSMC, GlobalFoundries, Samsung and UMC, whereas Intel produces a relatively narrow set of process options for communications. Still, if Intel can design competitive products on their own process, they have one advantage: they get to keep the margin that a fabless company has to pay to an outside fab. Yes, fabless companies avoid investing in process development, but on a percentage basis that is smaller than the typical foundry margin (TSMC spends 8% of revenue on R&D but commands a 48% gross margin). Of course this is complicated by foundry fabs being located in low-cost regions of the world while Intel’s fabs are generally located in higher-cost regions, but with microprocessors helping to absorb fab costs it should offer Intel a competitive cost structure. This is, however, an uphill battle for Intel against successful and entrenched competition, as I think Intel has learned over the last decade. It should also be noted that margins on SOCs, while good in some cases, are generally not as good as for microprocessors, and Intel would see an overall margin decline if this strategy were successful. I do think that if Intel wants to succeed here they need to change their mindset from microprocessors first to communications first.
  • Succeed in foundry – the third option for keeping the Intel manufacturing engine humming is to make wafers for others as a foundry. I believe Intel is doing this as a hedge against declining microprocessor sales and a possible lack of success in communications. I also believe that if Intel had to rank these three options they would be in pretty much the same order as they are here. The problem from an Intel perspective is that microprocessors are declining and Intel has been trying to succeed in communications for a long time with nothing but losses to show for it. Success in foundry will be difficult for Intel to achieve. TSMC is a fierce and successful competitor with a portfolio of foundry processes, a huge state-of-the-art fab network in a low-cost part of the world and a vast, well-established design ecosystem. Intel can also be viewed as a competitor by some potential customers. GlobalFoundries, and even Samsung and UMC, are well ahead of Intel on the foundry learning curve. This will be another uphill battle for Intel. I frankly doubt Intel wants to be a general foundry with dozens of customers, but I would bet they would love to have Apple or Qualcomm. Like the communications business, the foundry business is another space where Intel would have to accept lower margins.
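The margin arithmetic in the second option can be made concrete with a rough sketch. The 8% R&D and 48% gross margin figures are the TSMC numbers quoted above; the wafer cost is an illustrative assumption, not company data:

```python
# Rough sketch of the fabless-vs-IDM cost argument. The 8% R&D share and
# 48% gross margin are the TSMC figures quoted in the text; the wafer cost
# is an illustrative assumption.
wafer_cost_to_foundry = 3000.0   # assumed foundry manufacturing cost, $
foundry_gross_margin = 0.48
foundry_rd_share = 0.08

# What a fabless customer pays: cost marked up to the foundry's margin.
fabless_wafer_price = wafer_cost_to_foundry / (1 - foundry_gross_margin)

# What an IDM effectively pays: its own cost plus its own process R&D,
# charged here at the same 8%-of-revenue rate for comparability.
idm_effective_cost = wafer_cost_to_foundry + foundry_rd_share * fabless_wafer_price

print(f"fabless pays per wafer: ${fabless_wafer_price:,.0f}")
print(f"IDM effective cost:     ${idm_effective_cost:,.0f}")
```

Under these assumptions the IDM’s effective wafer cost comes out well below the price a fabless company pays, which is the cost-structure advantage described above, before accounting for fab location and utilization.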

The bottom line of all this is that with a declining PC microprocessor market, Intel has to either prepare itself for being a mature, no-growth company, or succeed in communications or foundry, or to some extent in both. I don’t think Intel can or will accept no growth, and so they are pursuing the only options they have. It has been argued on SemiWiki that Intel shouldn’t be in the foundry business, but I think the answer is: what choice do they have? Intel isn’t going to disappear, but they need to find a path to resumed growth or they will be managing a whole set of really difficult problems due to shrinking revenue or shrinking gross margins or both.

Scotten W. Jones – President IC Knowledge LLC


International CES: Day One
by Bill Jewell on 01-05-2014 at 10:00 pm

Semiconductor Intelligence will be attending the International CES this week in Las Vegas, Nevada. The Consumer Electronics Association (CEA) puts on the show each year. The CEA insists the meeting be called “International CES” and states “CES” no longer stands for Consumer Electronics Show. The show is now about “consumer technology,” which is broader than just electronics. Anyway, it is a chance to get together with about 150,000 of my closest friends and see the latest in consumer gadgets. I will post daily updates with my impressions of the coolest stuff and observations of the chaos which is CES.

Sunday, January 5, 2014
For all of you suffering in heavy snow and sub-zero temperatures, it is sunny and in the 50s in Las Vegas. Today was CES Unveiled, an introduction to some of the CES exhibits. The items ranged from the usual suspects (lots of headphones and speakers) to the cool to the “why does anyone need this?”

Among the cool: drones are not just for taking out Al Qaeda anymore. Several were flying around CES Unveiled. Supposedly they have practical applications in surveying and aerial photography, but are mainly a cool toy you can use to spy on your neighbors.

Also among the cool: zepp.com has sensors which mount on a golf club, baseball bat or tennis racket. All the information about your swing (motion, speed, angles) is available to review on your PC or phone.

Another cool device was from guard2me.com. This wristwatch-sized device is designed for people with dementia. The device can track where a person is, outside with GPS or inside with WiFi. It can also sense when a person falls and alert medical personnel.

In the “why does anyone need this” category is a lighting system which allows you to turn off or dim your lights with your phone. A more practical lighting device was from SmartCharge. The LED bulb acts as a normal light bulb; when the power goes out, it will run for up to four hours, still controlled by the regular light switch.

More from sunny Vegas tomorrow.

Bill Jewell, www.sc-iq.com

More Articles by Bill Jewell…..


Mission Critical Role of Unmanned Systems – How to fulfill?
by Pawan Fangaria on 01-05-2014 at 11:30 am

Do we ever imagine what kind of severe challenges mission-critical unmanned systems in the air, on land and underwater face? They are limited in space and size; they have to be light in weight, flexible across different types of operations, and at the same time rugged enough to work in extreme climatic conditions. That’s not all: amid these severe environmental limitations, these systems have insatiable requirements for new state-of-the-art functionality, which must work at lower power within a constrained energy budget to lengthen their duration of operation, as most of these systems run on batteries. Nevertheless, these systems need to provide precise operation with unmatched performance, quality and reliability, and therefore need to conform to various standards such as ESD (electrostatic discharge) and EMC (the electromagnetic compatibility standard developed by the U.S. military). It’s evident that electronics plays a critical role in the control and operation of these unmanned systems.

For the system to work perfectly, it’s essential that the boards, packages and the semiconductor designs within ICs are architected and implemented correctly from beginning to end, conforming to all standards for size, space, power, thermal tolerance and the like. How do you estimate these criteria and then meet such stringent requirements before the systems go into the field? That is done by software simulation for each criterion, with the software tools grounded in the actual physical principles that govern how semiconductor elements work, in isolation and in groups interfering with each other, in today’s complex chips, SoCs and systems. A study by the U.S. Department of Defense found that the ROI from such software infrastructure and simulation is about 7 to 13 times the investment.

Now let’s look at the key criteria one by one. Power has taken the central spot among these issues because it affects system reliability, thermal and electrical effects, and the endurance of the system. It’s important that power management and reduction be considered from the architecture stage of the design. ANSYS Apache’s PowerArtist tool analyses the RTL description of a semiconductor design at the architecture level and identifies improvements (without changing functionality) that can reduce power consumption considerably, thus reducing heat generation and improving the long-term reliability of the system.
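The kind of trade-off an architecture-level power tool evaluates rests on the first-order dynamic power relation P = alpha * C * V^2 * f: reduce the switching activity alpha (for example, by clock-gating an idle block) and dynamic power falls proportionally. A minimal sketch, with illustrative numbers that are not from any ANSYS/Apache tool:

```python
def dynamic_power_mw(activity, cap_nf, vdd, freq_mhz):
    """First-order dynamic power, P = alpha * C * V^2 * f.
    With C in nF and f in MHz, the result comes out in mW."""
    return activity * cap_nf * vdd**2 * freq_mhz

# Illustrative block: 1 nF of switched capacitance at 1.0 V and 500 MHz.
baseline = dynamic_power_mw(0.20, 1.0, 1.0, 500)
# Clock-gating the block while idle halves the effective activity factor:
gated = dynamic_power_mw(0.10, 1.0, 1.0, 500)

print(f"baseline: {baseline:.0f} mW, clock-gated: {gated:.0f} mW")
```

The quadratic dependence on V is also why voltage scaling is the single biggest power lever, which an RTL-level tool can only exploit if power is considered from the architecture stage, as the text argues.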

Once the design architecture is right, power integrity must be ensured by proper power delivery, such that only the required parts of the circuitry draw necessary and sufficient power at any time, and other parts are shut off to avoid waste. Apache’s RedHawk tool analyses power integrity for all components and suggests any changes required to ensure each component receives its specified power for flawless operation. It uses the ANSYS extraction engine to generate an electrical model of the package and board that helps in power integrity analysis of the overall system.

Thermal management is another challenging criterion for miniaturized systems carrying large control and surveillance capabilities. RedHawk can again predict the heat generated by power dissipation in the system, and Icepak, an advanced thermal modelling tool from ANSYS, can analyse heat transport and distribution through the system for proper selection of the IC package and cooling mechanism.

Electromagnetic interference and electromagnetic compatibility are at the core of any electronic system and semiconductor design because of the extreme proximity of wires and other electrical elements, which can easily be affected by transient switching currents and power delivery network resonance. While RedHawk analyses the activity of a component and simulates the electromagnetic excitation it causes, another Apache tool, Sentinel, simulates the impact of its electromagnetic interference on the system’s environment. This study helps in optimizing the design’s near- and far-field electromagnetic field distribution and radiation to meet EMC compliance.

Reliability is the first and foremost requirement for military systems that can be exposed to extreme temperatures and other climatic conditions. Electromigration, caused by high current density, degrades interconnects and leads to device failure over the long run; high temperature further aggravates it. RedHawk analyses electromigration requirements for the complete interconnect in the chip so that it keeps functioning for its lifetime under the specified extreme environmental conditions.
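Electromigration lifetime analysis of the sort described above is conventionally based on Black’s equation, MTTF = A * J^-n * exp(Ea / (k * T)). A small sketch showing how strongly temperature alone moves the predicted lifetime (the constants here are textbook-style assumptions, not values from any tool):

```python
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

def black_mttf(a_const, current_density, n, activation_ev, temp_k):
    """Black's equation for electromigration lifetime:
    MTTF = A * J^-n * exp(Ea / (k * T))."""
    return a_const * current_density**(-n) * math.exp(
        activation_ev / (K_BOLTZMANN_EV * temp_k))

# Same interconnect, same current density, at 85 C vs 125 C.
# A = 1, J = 1e6 A/cm^2, n = 2, Ea = 0.9 eV are illustrative assumptions.
mttf_85 = black_mttf(1.0, 1e6, 2.0, 0.9, 273.15 + 85)
mttf_125 = black_mttf(1.0, 1e6, 2.0, 0.9, 273.15 + 125)

print(f"lifetime ratio, 85C vs 125C: {mttf_85 / mttf_125:.1f}x")
```

The exponential temperature term is why the text stresses that high temperature aggravates electromigration: a 40-degree rise can cost an order of magnitude in predicted lifetime under these assumptions.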

Electrostatic discharge (ESD) is another reliability concern. Pathfinder, another tool from Apache, can comprehensively analyse a device’s sensitivity to ESD and guide proper protection per ESD standards.

Dr. Robert Harwood, Aerospace and Defense Industry Director at ANSYS, and Ms. Margaret Schmitt, Application Engineering Director at Apache, have described these challenges of next-generation unmanned systems, and the tools that address them, in great detail in a whitepaper on Apache’s website. As I read it, the comprehensive set of tools described in the paper caters not only to military systems but is very much appropriate for any electronic system and semiconductor design in automotive, aerospace, healthcare or personal technologies such as smartphones, notebooks and home appliances.

More Articles by Pawan Fangaria…..


Innovations that will change our lives in the next five years
by Daniel Nenni on 01-05-2014 at 8:00 am

The theme of our book “Fabless: The Transformation of the Semiconductor Industry” comes from the Steve Jobs quote, “You can’t really understand now if you don’t know what came before.” After chronicling the rise of the semiconductor industry and its transformation to the fabless business model, we ask in the final chapter, “What’s next for the semiconductor industry?” We now have more than two dozen passages from CEOs, luminaries, and pundits, which makes for an interesting read, absolutely.

    In the same vein, researchers from IBM publish an annual list called “5 in 5”, five technologies that will change our lives in the next five years:

    5 in 5 2007

    • It will be easy for you to be green and save money doing it
    • The way you drive will be completely different
    • You are what you eat, so you will know what you eat
    • Your cell phone will be your wallet, your ticket broker, your concierge, your bank, your shopping buddy, and more
    • Doctors will get enhanced “super-senses” to better diagnose and treat you

    5 in 5 2008

    • Energy saving solar technology will be built into asphalt, paint and windows
    • You will have a crystal ball for your health
    • You will talk to the Web . . . and the Web will talk back
    • You will have your own digital shopping assistants
    • Forgetting will become a distant memory

    5 in 5 2009

    • Cities will have healthier immune systems
    • City buildings will sense and respond like living organisms
    • Cars and city buses will run on empty
    • Smarter systems will quench cities’ thirst for water and save energy
    • Cities will respond to a crisis — even before receiving an emergency phone call

    5 in 5 2010

    • You’ll beam up your friends in 3-D
    • Batteries will breathe air to power our devices
    • You won’t need to be a scientist to save the planet
    • Your commute will be personalized
    • Computers will help energize your city

    5 in 5 2011

    • People power will come to life
    • You will never need a password again
    • Mind reading is no longer science fiction
    • The digital divide will cease to exist
    • Junk mail will become priority mail

    5 in 5 2012

    • You will be able to reach out and touch through your phone
    • A pixel will be worth a thousand words
    • Computers will hear what matters
    • Digital taste buds will help you to eat healthier
    • Computers will have a sense of smell

    5 in 5 2013

    • The classroom will learn you
    • Buying local will beat online
    • Doctors will routinely use your DNA to keep you well
    • A digital guardian will protect you online
    • The city will help you live in it

    The common theme of course is semiconductors enabling our future health and welfare. Speaking of semiconductor innovation, CES 2014 is next week, but first let’s look at previously announced CES innovations that changed our lives:

    • Videocassette Recorder (VCR), 1970
    • Laserdisc Player, 1974
    • Camcorder, 1981
    • Compact Disc Player, 1981
    • Digital Audio Technology, 1990
    • Compact Disc – Interactive, 1991
    • Mini Disc, 1993
    • Radio Data System, 1993
    • Digital Satellite System, 1994
    • Digital Versatile Disk (DVD), 1996
    • High Definition Television (HDTV), 1998
    • Hard-disc VCR (PVR), 1999
    • Digital Audio Radio (DAR), 2000
    • Microsoft Xbox, 2001
    • Plasma TV, 2001
    • Home Media Server, 2002
    • HD Radio, 2003
    • Blu-Ray DVD, 2003
    • HDTV PVR, 2003
    • HD Radio, 2004
    • IP TV, 2005
    • An explosion of digital content services, 2006
    • New convergence of content and technology, 2007
    • OLED TV, 2008
    • 3D HDTV, 2009
    • Tablets, Netbooks and Android Devices, 2010
    • Connected TV, Smart Appliances, Android Honeycomb, Ford’s Electric Focus, Motorola Atrix, Microsoft Avatar Kinect, 2011
    • Ultrabooks, 3D OLED, Android 4.0 tablets, 2012
    • Ultra HDTV, Flexible OLED, Driverless Car Technology, 2013

    Every January CES brings an onslaught of new products, most of which we never see again. I’m still waiting for a 65″ OLED flatscreen. Curved TVs? Smarter smartphones and 2-in-1 tablets? Wearables from head to toe? I decided to skip CES this year. Too much going on with work and family, and honestly I just did not see anything worth the drive down. The keynote by BK (the Intel CEO) will probably be the highlight of the conference and I can live stream that. If I’m missing something here let me know.

    More Articles by Daniel Nenni…..



    Structured Asic Dies…Again

    Structured Asic Dies…Again
    by Paul McLellan on 01-04-2014 at 11:53 pm


    There has always been a dream that you could do a design in a cheap, easy-to-design technology and then, if the design was a hit, press a button and instantly move it into a cheaper-unit-price, high-volume design. When I was at VLSI in the 1980s we had approaches to make it easy to move gate arrays (relatively large die area) into standard cells almost automatically. Another approach was to get the design cost down and find a sweet spot between FPGAs and SoCs: LSI Logic had the RapidChip structured ASIC starting in the early 2000s, with pre-configured IP blocks and platforms that could quickly be programmed with just metal. Neither approach was successful.

    This was especially attractive to FPGA vendors. By their nature, FPGAs are not very efficient in their use of silicon, so FPGA vendors such as Xilinx and Altera felt pressure to have a way to move designs going into high-volume manufacturing (HVM) into something more silicon-efficient, so they didn’t lose the customer. Xilinx had a program called HardWire to do just this, but it was killed off over a decade ago. Apparently they didn’t lose the customers even without such a program.

    Altera had a program called HardCopy. When John Daane first joined Altera as CEO he was very bullish about the role HardCopy would play in Altera’s success and expected it to be a critical differentiator, forecasting that HardCopy would be 10% of their revenues by 2004. They even had a program called SiliconPro that was basically design services: take the netlist from the FPGA and do a full implementation using standard cells. They hired a group in Penang to support all these HardCopy designs but…they never materialized. By 2004 HardCopy was 1-2% of Altera’s revenue, and it maybe got as high as 5% in the end.

    And when I say in the end, that’s what I mean. Altera quietly dropped it from their product line:

    “Altera no longer offers HardCopy structured ASIC products for new design starts”

    Years ago, when I was at Ambit I think, I talked to the team that designed the Rio. This was the first (or the first reasonably successful) portable mp3 player, an iPod long before iPods, and it was FPGA-based. I asked them why, when it took off, they didn’t immediately do a much-lower-unit-cost cell-based version. The answer: the design team could do one of two things, cost-reduce the current Rio or do a new FPGA-based Rio2. No prizes for guessing which option they chose. That is the heart of the problem.

    Another problem is that if Rio did produce the ultimate standard product for implementing mp3 players (let’s call it PortalPlayer just for fun), why would they not want to make as much money as possible selling the chips to everyone (such as, say, Apple) rather than selling mp3 players? Eventually other companies would produce ASSPs in the space and Rio could use them (because if they didn’t, their competitors would probably undercut them).

    The reality is that there seems to be a space for doing quick and easy designs even if the unit cost is high (FPGAs). Get to market fast, get traction. And there is a space for doing complex standard products that are sold to the general market. There is not a space for doing moderately hard designs at a moderately low unit price, especially if the volumes are low, nor for semi-automatically moving designs up the chain. It just doesn’t work smoothly enough and the next generation design is always more important than cost-reducing the last one.

    Apple is rich enough that they can bridge the strategy: expensive designs that they do not sell to the general market, a strategy that wouldn’t have worked for the much smaller Rio. But even they buy their modems from Qualcomm. And A7 is not a cost-reduced A6: it is the next generation.

    I think by now it is safe to say that there isn’t really a sweet spot between FPGAs and SoCs, meaning between products from Xilinx (I know they will tell you they do much more than FPGAs these days, and they do) and products from Qualcomm/Broadcom/etc. The comparison of design methodologies is probably key: the FPGA methodology is fairly automatic, while an SoC requires $100Ms in design tools and the best designers in the world. There isn’t a gap in between.

    More details on Xilinx UltraScale are here.


    More articles by Paul McLellan…


    New Frontiers in Scan Diagnosis

    New Frontiers in Scan Diagnosis
    by Paul McLellan on 01-03-2014 at 8:10 pm

    As we move down into more and more advanced process nodes, the rules of how we test designs are having to change. One big challenge is the requirement to zoom in and fix problems by doing root-cause analysis on test data alone, along with the rest of the design data such as the detailed layout, optical proximity correction and so on, but without being able to create and run additional tests or put the chip under an electron microscope. Since (digital) test these days is all scan-based, this means analyzing scan test failures and “deducing” what the problem has to be, Sherlock Holmes style.

    One particularly problematic area is intermittent failures due to patterning and double-patterning issues. The test doesn’t fail all the time because sometimes the design prints just fine and sometimes it doesn’t, or sometimes the two patterns are aligned well enough that there is no problem and sometimes not. But this shows up as an extended period of low yield until the problem is fixed, which is a financial issue. For example, the picture below is from a GlobalFoundries 28nm test chip, and you can see an area where optical interference has not been completely handled (28nm is not double patterned; this is an 80nm spacing, I would assume).

    Traditional failure analysis results in narrowing the problem down to a single logical net. By adding layout awareness, it can be narrowed down to a physical segment. Further analysis can sometimes narrow this down to a specific failure (bridge of two metals, open via, break in metal and so on) or more often a limited number of possible failures and their associated probabilities.

    Root cause deconvolution (RCD) narrows things down further and gets rid of a lot of the noise, eliminating root causes that cannot explain the failures, based on analysis across a selection of failing die and Bayesian probability analysis. This then makes it possible to pick the best die for physical failure analysis (looking under an electron microscope, for example, to see what the layout actually looks like).
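    As a toy illustration of the Bayesian idea behind RCD (this is a sketch I made up, not Mentor's actual algorithm): start with a prior over hypothetical defect classes, and for each diagnosed die take a likelihood of how well each class explains its scan failures; repeated Bayes updates across the population concentrate the posterior on the root cause most worth sending to physical failure analysis.

```python
# Hypothetical defect classes; real RCD works with far richer
# layout-aware defect models than this three-way split.
CLASSES = ["bridge", "open_via", "metal_break"]

def posterior(prior, likelihoods_per_die):
    """Bayes-update P(root cause) over a population of diagnosed die."""
    post = dict(prior)
    for lik in likelihoods_per_die:
        # Multiply prior by this die's likelihood, then renormalize.
        unnorm = {c: post[c] * lik.get(c, 1e-9) for c in CLASSES}
        total = sum(unnorm.values())
        post = {c: v / total for c, v in unnorm.items()}
    return post

prior = {c: 1.0 / len(CLASSES) for c in CLASSES}
# Each diagnosed die reports how well each defect class explains
# its observed scan failures (made-up numbers for illustration):
diagnoses = [
    {"bridge": 0.7, "open_via": 0.2, "metal_break": 0.1},
    {"bridge": 0.6, "open_via": 0.3, "metal_break": 0.1},
    {"bridge": 0.8, "open_via": 0.1, "metal_break": 0.1},
]
post = posterior(prior, diagnoses)
best = max(post, key=post.get)  # the candidate to prioritize for physical FA
```

    The point of the sketch is only that evidence which is ambiguous on any single die becomes decisive across a population, which is why picking the right die for the electron microscope gets so much easier.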

    FinFETs and 14/16/20nm bring a new set of problems, one of which is that we are getting beyond the resolution range of scanning electron microscopes (SEM), so transmission electron microscopes (TEM) are required. Plus a lot of the critical features are much smaller than before, making manufacturing defects much more likely.

    Mentor’s products that support this sort of analysis are Tessent Diagnosis and Tessent YieldAnalysis.

    Mentor has a webinar entitled New Frontiers in Scan Diagnosis, presented by Geir Eide. It is part of the ASM webinar series on electronic device failure and analysis. It goes into a lot more detail than I have here, with many more examples, including lots more pictures of failures. The webinar is archived here.


    More articles by Paul McLellan…