

Car Theft Making a Comeback
by Roger C. Lanctot on 07-24-2016 at 4:00 pm

In the U.K., where vehicle theft has been in a steep decline for the past 20 years, the most widespread advice given by police to car owners is: keep your car keys in your freezer. The most common source of vulnerability these days is the interception of RF signals between keyfobs and cars. For a time, several years ago, there was a rash of thefts that exploited car owners’ inclination to leave their keys near the front door.


The issue is timely as LoJack reminds us that July is National Vehicle Theft Protection Month. The company released an infographic to highlight its concerns: http://tinyurl.com/hxca92y

The proliferation of remote keyless entry and telematics is setting the stage for a renaissance in vehicle theft. Thieves are still interested in auto parts ranging from tires and catalytic converters to airbags, but grabbing the entire vehicle may be getting easier in some circles with the aid of code grabbers intercepting signals from keyfobs.
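A simplified sketch helps show what a code grabber exploits. A fixed-code fob can be defeated by straight replay, while a rolling-code scheme rejects reused transmissions. Everything below is illustrative only (the shared secret, counter width, window size and MAC are my own assumptions, not any manufacturer's actual design):

```python
import hmac
import hashlib

SECRET = b"shared-fob-secret"  # hypothetical key provisioned in both fob and car

def fob_code(counter: int) -> str:
    # Each button press authenticates the fob's current counter value.
    return hmac.new(SECRET, counter.to_bytes(4, "big"), hashlib.sha256).hexdigest()

class Car:
    def __init__(self, window: int = 16):
        self.counter = 0      # last counter value accepted
        self.window = window  # tolerance for presses the car never heard

    def try_unlock(self, code: str) -> bool:
        # Only counters *ahead* of the last accepted one are valid, so a
        # recorded transmission cannot simply be replayed later.
        for c in range(self.counter + 1, self.counter + 1 + self.window):
            if hmac.compare_digest(code, fob_code(c)):
                self.counter = c
                return True
        return False

car = Car()
captured = fob_code(1)           # attacker records a legitimate press
print(car.try_unlock(captured))  # True  - first use succeeds
print(car.try_unlock(captured))  # False - straight replay is rejected
```

Rolling codes defeat naive replay, but jam-and-capture attacks that steal a not-yet-used code can still work, which is why interception of fob signals remains a practical threat.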

In the U.S., vehicle theft has also declined, though not as steeply as in the U.K. The decline has been steep enough to make life difficult for stolen vehicle tracking and recovery companies like LoJack. Data from the FBI for the first half of 2015 suggests a significant uptick in vehicle theft in the U.S. and the U.K. has also seen a recent spike.

Some recent thefts of FCA Jeep vehicles in Texas suggest that hacking may be taking the place of smash-and-grab style thefts of and from vehicles. The Texas report points to hacking thanks to video captured by a homeowner of the car thief entering the vehicle and apparently using a laptop computer to start the car: http://tinyurl.com/j7pe7lm

The thefts in Texas are interesting and important from several perspectives. According to the news report, the police believe the thief tapped into the car’s on-board computer via the OBDII port and created his own key. FCA executives expressed their “concerns that the thieves may have gotten hold of a system used by dealers to pair the vehicles with a new key, one they already had in hand. That could be as simple as access to a dealer website where knowing a vehicle’s VIN, or unique identification number, can provide the necessary codes to marry car and key.”

Automotive Cybersecurity
The automotive industry has been wrestling with the issue of cybersecurity ever since IOActive analysts Chris Valasek and Charlie Miller hacked their way into a Toyota Prius and a Ford Escape two years ago. The findings from these analysts, presented at a Black Hat cybersecurity event, were that cars are now frequently equipped with both telematics systems and automated parking systems – a combination that makes taking control of the vehicle locally or remotely both fun and potentially profitable.

The National Highway Traffic Safety Administration (NHTSA) got involved after Valasek and Miller followed up their Toyota/Ford exploit (which involved a lot of dashboard disassembly) with the now-famous or infamous Jeep hack. The pair, who now work for Uber, demonstrated how they could remotely control the hacked Jeep – to the horror of FCA executives, regulators and Jeep owners.

Of course, the Jeep hack required some significant preparation and was not achieved without time spent reverse engineering code and penetrating the vehicle’s limited security preparations. In fact, the Jeep hack exposed a significant vulnerability which led to FCA initiating a recall and sending out USB software updates to owners of the affected vehicles.

Even after the Jeep hack, though, industry executives scratched their heads over why hackers would bother to hack cars. Up until recently, car makers were content with their “security by obscurity” approach – i.e., cars were just difficult enough to hack to make it not worth the effort.

But the prospect of vehicle theft combined with increasingly obvious security shortcomings may signal a turning point in the vehicle theft business. The latest data from the U.K.: vehicle theft is up 9.9%.

Software Updates
The Jeep hack exposed the seriousness of vehicle vulnerability and the extent to which car companies are ill prepared to respond. Valasek and Miller’s hack was intended as a wake-up call to FCA and the industry – but their methods pushed the boundaries of ethical hacking.

Ethical hackers, like Lab Mouse, seek to penetrate a broad range of consumer products in the interest of finding and fixing flaws in security systems. Once a vulnerability is found, the affected company is notified, and only after that are the details of the vulnerability published.

Valasek and Miller revealed that certain Jeep vehicles lacked a necessary firewall between the infotainment system and the vehicle’s safety and powertrain systems. This created a big problem for FCA. Like most car makers, with the possible exception of Tesla, FCA is vulnerable to the sieve-like recall system in the U.S., where car makers struggle to find current vehicle owners – and vehicle owners ignore recall messages from their dealers and the car companies.

It is entirely possible that the hacker/thieves in Texas are exploiting the same vulnerability identified by Valasek and Miller and taking advantage of the likelihood that the software-related recalls on affected models have not been carried out. We won’t actually know until the thieves are caught or stopped.

The thefts highlight the importance of over-the-air software update technology of the type used by Tesla Motors to add features and make code corrections in its Model S vehicles. FCA mailed out thumb drives with software updates – an approach widely frowned upon in the cybersecurity industry.

Dealers
There is yet another source of anxiety emanating from the Texas thefts. Dealers remain a weak link in the security chain. FCA’s suggestion that the Texas hacker/thieves might be accessing a dealer website to clone keys is but one potential source of vulnerability. Disgruntled dealer employees have been known to wreak havoc with vehicle security and telematics systems.

Dealers are also a source of poor security hygiene because of the entire industry’s blasé attitude toward recall work. At the recent national gathering of automobile dealers, incoming NADA Director Jeff Carlson ridiculed the recall system, suggesting that most recalls did not represent urgent safety issues, based on industry research conducted by the auto makers.

In the FCA instance, the missing firewall is a vehicle theft waiting to happen. Vlasek and Miller may have had fun taking control, remotely, of a Jeep – but the real issue is theft.

The Texas Jeep thefts point up the greatest threat of weak vehicle cybersecurity: the return of widespread vehicle theft as a challenge for law enforcement and car owners. There has been a lot of fear-mongering around identity theft, vehicle ransom and remote control terror – but maybe we’re missing the most obvious threat in a world of connected cars – simple theft.



After the fatal Tesla crash, I still feel safe in my self-driving car
by Vivek Wadhwa on 07-24-2016 at 12:00 pm

At first, the thought of letting my car drive itself seemed rather frightening. But the highway was almost empty and the lanes were clearly marked, so I took the risk and engaged the autopilot function in my new Tesla Model X. Yet I couldn’t let go of the steering wheel. I didn’t want to put my life in the hands of software. This was two weekends ago as I drove to Big Sur, Calif.

The fear lasted about five minutes. Curiosity got the better of me and I let go of the steering wheel to see what would happen. The car continued to drive just fine; it didn’t need me. After a couple of minutes, the car beeped and displayed a message on the dashboard asking me to put my hands back on the wheel — a feature the automaker added to ensure drivers were in the front seat and attentive.

But 20 minutes later, I had one hand on the wheel and I was checking email with the other as the car did the driving for me. I did take full control when the road was narrow or the terrain was uneven, but by and large, I became as comfortable with the car’s autosteer function as I am with cruise control.

Yes, self-driving cars pose new risks, as evidenced by the recent fatal crash in Florida, when a Tesla in autopilot mode hit a large truck that crossed its path. The Tesla software cannot handle local roads, intersections or extreme hazards yet. There are limits to every technology. It is the same scenario as using cruise control on local roads — you just shouldn’t do it.

Three out of four U.S. drivers have the same fears I did, according to a AAA survey. The same survey revealed that only one in five would actually trust a driverless vehicle to drive itself with them inside. I have no doubt, however, that once they get behind the wheel of one, they too will be checking email as I did. They’ll feel as comfortable with software driving their cars as they are with software flying their airplanes.

Tesla calls its software “autopilot,” but it really is nothing more than cruise control on steroids. The autosteer function keeps the car in its lane, reads road signs, drives as much over the speed limit as you ask, and slows down or stops if there is a slower vehicle or obstruction ahead. If you want to overtake someone, you engage the turn signal, and the car will move itself to the adjacent lane when it can. I found this to be safer than changing lanes myself because of the blind spots. The advantage the Tesla has is that it can see in all directions at the same time. It literally has eyes in the back of its head.

I also learned how self-driving cars could prevent accidents when a car from the right jumped into my lane just as the setting sun blinded me. My car automatically slowed down and gave way. No, it didn’t honk.

Self-driving cars will improve our lifestyles and make the world smaller. They will prevent tens of thousands of fatalities every year. The best part is that they will do to pesky, dangerous human drivers what the horseless carriage did to the horse and buggy: banish them from the roads. Software malfunctions will surely cause unfortunate accidents along the way. There will also be ugly public debates, efforts by incumbent businesses to create legislative barriers, and a lot of confusion.

But the technology is coming — whether we are ready or not. And I for one can’t wait to receive the software upgrades that will let the car do all of the driving. I look forward to enjoying the scenery or working during my commute.

If political leaders and lawyers in the United States try to stop progress — as is very likely — other countries will still adopt the new technologies and take the lead. We will end up playing catch-up with the rest of the world and miss out on the most amazing transition of our lifetimes: into an era in which we become the drivers in driverless cars.

For more follow me on Twitter: @wadhwa and visit my website: www.wadhwa.com



Brexit impact on semiconductors
by Bill Jewell on 07-24-2016 at 10:00 am

On July 1, Daniel Nenni posted his thoughts on the impact of Brexit (the vote by the UK to leave the European Union) on semiconductors. In general I agree with his points. Below is my take on the issue.

The long term impact of Brexit is uncertain. The UK will likely negotiate a trade agreement with the EU which includes free trade between the two entities but not the free movement of people. Scotland may vote to leave the UK in order to join the EU. A 2014 referendum in Scotland was close, with 45% voting to leave the UK. There is a possibility other EU member nations may vote to leave. Italy, France, Sweden, the Netherlands, Austria, Finland and Hungary are mentioned as potential candidates. However, opinion polls show a majority of people in each of these countries favors staying in the EU.

The International Monetary Fund (IMF) has lowered their outlook for World economic growth in 2016 and 2017 primarily due to Brexit. The July 2016 IMF forecast calls for World GDP growth of 3.1% in 2016 and 3.4% in 2017, each down 0.1 percentage points from the April 2016 IMF forecast. The most significant change is the UK forecast with 2016 GDP growth down 0.2 percentage points and 2017 down 0.9 percentage points compared to the April forecast. Other forecasters are more pessimistic. Scotiabank expects zero UK GDP growth in 2017. The table below compares recent forecasts for UK GDP growth with forecasts made prior to the Brexit vote.

United Kingdom GDP Growth Forecasts (annual GDP growth, %; change in percentage points)

| Source          | Date | 2016 | 2017 | Prior date | 2016 | 2017 | Change 2016 | Change 2017 |
|-----------------|------|------|------|------------|------|------|-------------|-------------|
| IMF             | July | 1.7  | 1.3  | April      | 1.9  | 2.2  | -0.2        | -0.9        |
| Scotiabank      | July | 1.3  | 0.0  | Feb.       | 2.0  | 1.9  | -0.7        | -1.9        |
| Focus Economics | June | 1.4  | 0.3  | April      | 1.9  | 2.1  | -0.5        | -1.8        |
| PWC             | July | 1.6  | 0.6  | March      | 2.0  | 2.2  | -0.4        | -1.6        |
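The change columns above are simply the post-referendum forecast minus the earlier one, in percentage points, which is easy to verify:

```python
# Recompute the change columns of the table above: revised forecast
# minus pre-Brexit forecast, in percentage points.
forecasts = {
    # source: ((new 2016, new 2017), (old 2016, old 2017))
    "IMF":             ((1.7, 1.3), (1.9, 2.2)),
    "Scotiabank":      ((1.3, 0.0), (2.0, 1.9)),
    "Focus Economics": ((1.4, 0.3), (1.9, 2.1)),
    "PWC":             ((1.6, 0.6), (2.0, 2.2)),
}

changes = {src: tuple(round(n - o, 1) for n, o in zip(new, old))
           for src, (new, old) in forecasts.items()}

for src, (c16, c17) in changes.items():
    print(f"{src}: 2016 {c16:+.1f}, 2017 {c17:+.1f}")
```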

The bigger question is whether Brexit is a sign of changing attitudes toward free trade. Trade has been a major issue in the United States presidential campaign. Republican nominee Donald Trump wants to renegotiate the North America Free Trade Agreement (NAFTA) among the U.S., Canada and Mexico. Trump is opposed to a U.S. trade agreement with Central America (CAFTA) and opposed to the Trans-Pacific Partnership (TPP) agreement among the U.S., Japan, Mexico, Canada, Chile, Peru, Australia, New Zealand, Singapore, Brunei, Vietnam and Malaysia. Trump wants aggressive trade negotiations with China, threatening punitive tariffs on U.S. imports from China. Expected Democratic nominee Hillary Clinton is also opposed to CAFTA and TPP, even though these are supported by President Barack Obama. Clinton has also questioned the effect of NAFTA, even though it was championed by her husband Bill Clinton when he was President.

How does all of this affect the semiconductor market? The direct effect of Brexit is not significant. The UK is not a meaningful producer or user of semiconductors. According to data collected by the United Nations (UN), UK imports of semiconductors were $2.4 billion in 2014 and exports were $4.7 billion, a small amount compared to the global market of $336 billion. An indirect effect is that Japan’s SoftBank Group has agreed to acquire UK’s ARM Holdings PLC for US$32 billion. ARM designs and licenses processors which are in 95% of the world’s smartphones, according to the Wall Street Journal. SoftBank said Brexit did not affect the ARM bid, but it did make it cheaper, as the Japanese yen has appreciated against the British pound since the Brexit vote.

If Brexit is a sign of moves toward more restrictive global trade, the impact on the semiconductor market could be substantial. Global trade has been important to semiconductors for decades. Semiconductor companies were among the first U.S. companies to move some manufacturing overseas. One of the pioneering companies, Fairchild Semiconductor, opened assembly sites in Hong Kong in 1961, South Korea in 1966 and Singapore in 1968. Fairchild was soon followed by Motorola opening a plant in South Korea and Texas Instruments opening plants in Taiwan and Singapore. Today wafer fab capacity is spread around the world. IC Insights shows capacity distribution at the end of 2015 as:

• Taiwan 22%
• South Korea 21%
• Japan 17%
• North America 14%
• China 10%
• Europe 6%
• Rest of World 10%

The importance of global trade to the semiconductor market is evident from imports and exports by country. UN data for 2014 semiconductor imports and exports by key countries are shown in the chart below. The data for Taiwan is from the World Trade Organization (WTO) since it is not a UN member. Import and export data can be misleading as some countries such as Hong Kong and Singapore are trading hubs, with most of the semiconductor imports later included as semiconductor exports.

China is by far the largest importer of semiconductors at $241 billion in 2014 (blue bars on right). Hong Kong and Singapore are the next largest sources of imports, but most of these are later exported. The U.S., Taiwan, South Korea, the European Union (EU) and Japan are all significant importers, ranging from $27 billion to $38 billion. The major semiconductor exporters (red bars on left) correspond with the major wafer fab locations – China, the U.S., Taiwan, South Korea, Japan and Europe (excluding the trading hubs of Hong Kong and Singapore). The total of the countries shown is $570 billion in imports and $482 billion in exports. The world semiconductor market was $336 billion in 2014 according to WSTS. Thus many semiconductors pass through multiple countries on their journey from the country of manufacture to the country of final consumption.
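Those totals also make the pass-through point concrete: imports summed across the listed countries far exceed the size of the actual market, implying the average chip is counted as an import more than once.

```python
# Rough pass-through check using the article's figures.
total_imports = 570e9   # sum of 2014 imports across the listed countries
world_market = 336e9    # 2014 world semiconductor market per WSTS

ratio = total_imports / world_market
print(f"Each dollar of semiconductors is imported ~{ratio:.1f} times on average")
```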

The U.S. Semiconductor Industry Association (SIA) supports the Trans-Pacific Partnership Agreement (TPP). The SIA states “International trade is vital to the U.S. semiconductor industry and the American economy as a whole.” The TPP would simplify trade among the member nations while including provisions to protect intellectual property.

Trade agreements are controversial. Opponents in high wage countries fear the movement of jobs to lower wage countries. Proponents argue increased trade between nations increases overall economic activity and adds jobs. Global trade has been an important driver of the semiconductor industry. Hopefully Brexit is not a sign of future trade barriers which may hamper the growth of the industry.



Why did Softbank pay so much for ARM? Because it’s worth it
by Gus Richard on 07-24-2016 at 7:00 am

Softbank’s acquisition of ARM Holdings was not only unexpected; the valuation was also astonishingly high. Softbank is acquiring ARM for $32.2B, or 23x CY15 revenue and 46x CY16 earnings – a 46% premium to the prior day’s closing price before the announcement of the acquisition. The questions to ask are: Why is Softbank buying ARM, and why are they paying so much?

Softbank is primarily a communications and media company. It is accustomed to making large investments with long-term payoffs that create an annuity. For example, investments in communication infrastructure and wireless spectrum are large up-front outlays that provide long-term dividends and cash flow. Softbank’s long-term goal is to become a technology company that focuses on information technology. The importance and growth of artificial intelligence and the automatic accumulation of knowledge frame the company’s long-term vision. Over time, ARM could have the potential of leveraging Softbank into a leadership role in the data economy along with the Magnificent Seven: Google, Facebook, Amazon, Microsoft, Alibaba, Baidu, and Tencent.

In the PC era, technology was dominated by Microsoft and Intel who together captured a majority of the profits. The hardware and software in the current mobile technology era is dominated by ARM and Android with Apple, Google and Facebook who monetize this generation of technology. I would argue that the next era will be the data era which will be dominated by connecting things commonly known as the “Internet of Things” (IoT). IoT’s dominant microprocessor architecture and operating system are yet to be determined; however, increasingly the value creation will be in the data generated by the large number of connected things. In the coming years, the data economy is estimated to be worth $1.6T. This includes value from productivity improvements, proactive maintenance, and ad placement on the web. ARM is the entry price for Softbank’s participation in the data economy.

In 2015, ARM had a 32% unit share in its served markets. According to the company, this is up from 22% in 2011. ARM-based chips’ market share as measured by revenue in 2015 was roughly 50%, surpassing Intel’s overall processor market share. Softbank has indicated that it would double ARM’s R&D budget, which would accelerate its roadmap and drive further market share gains, thus squeezing out all other architectures except Intel’s. This would result in a unit market share of processors in the 60-80% range over the next 10 years, equivalent to 60-80B ARM-based chips a year. I believe roughly 20-30B of these chips would have an IP address and would be connected to the Internet. With this level of dominance, it would take decades to displace ARM due to the massive amount of code written for ARM’s instruction set. This dominant position would potentially provide Softbank with tremendous leverage in the data economy. What is yet to be determined is how Softbank would monetize this market position.

ARM is the processor of choice in IoT applications, along with Synopsys’s ARC processor. ARM is also developing an operating system and platform for the IoT, called mbed. With control and/or oversight of the architecture, operating system, and platform specifications of the IoT, ARM would be positioned to extract more profit from the proliferation of connected things. In simple terms, if Softbank could generate $1 of recurring revenue for each ARM processor connected to the Internet in 2025, this would generate roughly $30B in revenue per year, far surpassing ARM’s current revenue of $1.5B.
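That back-of-envelope scenario is easy to check (the inputs are the article's estimates, not data of my own):

```python
# The hypothetical: $1/year of recurring revenue per Internet-connected
# ARM chip in 2025.
connected_chips_2025 = 30e9   # upper end of the 20-30B estimate
revenue_per_chip = 1.0        # hypothetical $1/year per chip
current_arm_revenue = 1.5e9   # ARM's current annual revenue, per the article

recurring = connected_chips_2025 * revenue_per_chip
print(f"${recurring / 1e9:.0f}B/year, "
      f"{recurring / current_arm_revenue:.0f}x ARM's current revenue")
```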

Here are a few possible avenues for monetizing ARM’s architecture:

• Embed blocks of circuitry that could be turned on remotely. One example would be a security functional block. Softbank could share the revenue generated from these features with chip and/or system providers, which would be very profitable for Softbank, the chip vendor, and the OEM.
• By 2025, ARM’s architecture will dominate network infrastructure, mobile phones, consumer products, automotive and IoT, potentially creating pathways to access vast amounts of data. Data is the grist for the artificial intelligence and data economy mill and will only increase in value.
• Alibaba Group and SoftBank are forming a joint venture to launch a cloud computing service in Japan. This might also provide an avenue for monetizing ownership of the ARM architecture.


Loving it when a Qualcomm plan comes together
by Don Dingee on 07-22-2016 at 4:00 pm

Corporate layoffs are always a touchy subject. I think that’s because there is skepticism that one round of layoffs can turn into two, and if business still doesn’t improve, the spiral accelerates into more rounds. Too many rounds indicate management didn’t really have a clue what was going on in the business and was instead trying to placate shareholders with action.

“Rightsizing” done right involves more of a strategy, with personnel actions combined with other financial steps plus sales and marketing actions to restart top-line growth. (Tom Peters: “You can’t shrink your way to greatness.”) The trick is to take out enough people, once, and realign the entire organization and incentives around that new figure.

About this time last year, instead of celebrating its 30th anniversary, Qualcomm dropped a 15% workforce reduction as part of its strategic realignment plan. It’s been hard to tell if that worked. New data in Qualcomm’s 3Q16 earnings release shows signs that the SRP initiatives are getting traction. A presentation released on July 20th outlines progress on the SRP:

1) On track for $700M cost savings in FY16, $100M more than the original estimate
2) Reaffirmation that the current corporate structure (with both licensing and product activity) delivers better value
3) $5.9B returned to shareholders in the first 9 months of FY16 (dividends plus repurchases)
4) Director turnover – 7 retirements, 3 additions – reduced average tenure to about 5 years
5) Performance-based executive compensation steps implemented
6) Focused investments, including completing the CSR acquisition and setting up a JV with TDK for RF front-end solutions

It always helps to issue conservative quarterly guidance and then outperform it. Qualcomm hit the top end of their revenue guidance at $6.0B, and exceeded unit shipment guidance by 6M parts at 201M. Combined with a better cost story, EPS outperformed guidance by 16 cents.

A few charts portray an interesting story. This is MSM chip shipments by calendar year:


Next is 3G/4G device shipment estimates. The fine print on this slide is a chilling note given the claims of international “cheating” heard in Cleveland last night; here it is verbatim:

“Global 3G/4G device shipments represent our estimate of CDMA-based, OFDMA-based and CDMA/OFDMA multimode subscriber devices shipped globally, excluding TD-SCDMA devices that do not implement LTE. We continue to believe that certain licensees in China are not fully complying with their contractual obligations to report their sales of licensed products to us, and certain companies, including unlicensed companies, are delaying execution of new license agreements. As a result, we do not believe that all global 3G/4G device shipments are currently being reported to us.”


The last table affirms that while device shipments are trending up, device ASPs (the basis for Qualcomm royalty payments) are headed down. That could represent a mix of lower priced smartphones in Asia, cheaper IoT devices, and a maturing of older product lines.


Qualcomm’s 4Q16 guidance has yearly revenue in the range of a 1% decrease to a 14% increase – and I’m guessing it will be closer to that top figure. EPS will come in anywhere from 15 to 26% improved. They resisted the calls to split the company and instead laid in a plan to fix the current business while still investing in the future; their R&D and SG&A expenses were $7.8B in FY15 and are expected to drop only 3 to 5% this year.

I’m not trying to sell stock here. (Again, I don’t own shares of QCOM, or anything else I write about.) To me this is a refreshing story about a firm that took decisive, thoughtful, multi-faceted action and may have turned the corner. We could have a conversation on near-term versus long-term, but it’s much easier to execute a long-term strategy when the building is not on fire and people aren’t fearful for their jobs. I’m sensitive to the human costs that came with a 15% reduction, but I’m also mindful of the economic benefits a healthy Qualcomm can deliver – for themselves and the electronics industry at large.

Do you have your copy of “Mobile Unleashed” yet? Chapter 9 is dedicated to the origin story of Qualcomm, and Chapter 10 delves into the Chinese contingent of ARM licensees.



Reducing Data Centre Cooling by 40%
by Daniel Payne on 07-22-2016 at 12:00 pm

Living in Oregon has many benefits, including access to cheap electricity thanks to the plentiful river systems that provide hydro power and a growing green power business fueled by wind and sun. Many of the world’s largest data centers are located in Oregon for access to this cheap electricity, and Google has a sizable investment in The Dalles, Oregon.

I’ve learned that the racks of servers found in a data center generate a lot of heat, so keeping all of that electronics cool itself takes a lot of energy. The bright engineers at Google decided that one way to reduce cooling costs would be to analyze the cooling data using an AI-based system known as DeepMind. What that yielded was a surprising 40% reduction in cooling costs.

Another unique decision by Google is to use renewable energy for their data centers as another way to reduce emissions into the environment. The cooling in a data center uses big industrial equipment:

• Pumps
• Chillers
• Cooling towers

What’s so difficult about cooling a large data center? It turns out that the data center responds dynamically to requests by users, so the actual servers don’t have a static profile; rather, the power and therefore the cooling bounce around a lot. The interaction between servers, cooling and demand by users is sophisticated and non-linear, so trying to use a traditional engineering formula or your own common sense doesn’t really help to optimize the cooling challenge. Cooling systems also don’t respond instantly; there is a certain lag time to get started, reach a level, or ramp down. The physical plant at one data center may be quite different from another data center, so an approach must be taken that is based on each unique location.

For the past couple of years Google has used a machine learning based approach to help operate data centers in a more optimal manner than before. The DeepMind system was used by researchers to improve the cooling efficiency by creating a system with neural networks that could understand the various operating scenarios and parameters that characterize a data center. An adaptive framework helped Google to learn the data center’s interactions.

Past data was already available for analysis from all of the sensors inside a data center:

• Temperatures
• Power
• Pump speeds
• Setpoints

This data was then used to train a collection of deep neural networks. Optimization was focused on Power Usage Effectiveness (PUE), which is the ratio of total building energy divided by the IT energy usage. Two additional neural networks were trained to predict the data center’s temperature and pressure over the next hour. These predictions helped to simulate what actions were recommended by the PUE model, so that the cooling system operated within specifications.
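Since PUE is just a ratio, the effect of a 40% cut in cooling energy is easy to express concretely. The numbers below are illustrative only, not Google's actual figures:

```python
def pue(total_building_kwh: float, it_kwh: float) -> float:
    # Power Usage Effectiveness: total facility energy over IT energy.
    # A PUE of 1.0 would mean zero cooling/overhead energy.
    return total_building_kwh / it_kwh

it_load = 100.0        # hypothetical IT energy, kWh
cooling_before = 12.0  # hypothetical cooling overhead, kWh
cooling_after = 7.2    # 40% less cooling energy than before

print(pue(it_load + cooling_before, it_load))  # ~1.12
print(pue(it_load + cooling_after, it_load))   # ~1.072
```

A 40% drop in cooling energy shows up as a much smaller drop in overall PUE, since the IT load itself is unchanged.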

Here’s a plot showing the PUE value as a function of time, where a lower number is better because it saves power:

When the Machine Learning (ML) starts we can see a very quick drop in PUE, which shows the 40% savings in energy used for cooling.

    Summary
    Google data centers are becoming even more energy efficient by using machine learning approaches and neural network modeling to reduce power consumption for cooling by 40%.

    Read the full blog about Google DeepMind and saving 40% on cooling costs here.


    IMEC Technology Forum at SEMICON – Coventor could save you billions!

    IMEC Technology Forum at SEMICON – Coventor could save you billions!
    by Scotten Jones on 07-22-2016 at 7:00 am

    The development of leading edge semiconductor technology is incredibly expensive, with estimates ranging from a few to several billion dollars for new nodes. The time to develop a leading edge process is also a critical competitive issue with some of the largest opportunities awarded based on who is first to yield on a new node.

    Being late to market can cost a semiconductor company billions of dollars in lost opportunities! Coventor produces SEMulator3D, a modeling platform that enables development engineers to simulate process flows in full 3D to test and refine them before running wafers, reducing development costs and speeding up time to market. David Fried is the CTO of Coventor and he presented at the IMEC Technology Forum (ITF) on Monday before SEMICON. I was at David’s presentation and I also had the opportunity to interview him on Wednesday during the show.

    Coventor SEMICON Presentation Slides

    Coventor was formed in 1996, making it approximately 20 years old. The original focus of Coventor was on MEMS simulation and EDA, which led to the development of a core competency in 3D modeling (MEMS devices typically have 3D structures). Around 2004/2005 Intel was a Coventor investor and suggested migrating to semiconductor simulation, and SEMulator3D was born.

    David Fried’s path to working at Coventor is an interesting one in that he was a user of the product before becoming CTO of Coventor. David was working at IBM and had just finished 65nm development. David was tasked with 22nm development and he recognized the need for a new development paradigm. David brought in Coventor to IBM; he helped Coventor understand what IBM needed and Coventor was very responsive to the needs. After he finished up his work on IBM’s 22nm SOI development, David joined Coventor as the CTO. Coventor has rolled out the SEMulator3D platform to semiconductor companies and they are now also seeing adoption at equipment and materials companies.

    The SEMulator3D platform is focused on process prediction and structural integrity. You feed a layout into the model using standard formats such as GDSII, define your process steps, and the platform produces a full step-by-step 3D model. You can step through the resulting model, rotate it and zoom in on the structure. More importantly, you can measure and perform 3D checking on the resulting models, allowing quantitative analysis of the impact of process changes or variations. The process flow is built up using functional models; for example, to define an etch process you put in etch rate, selectivity, time and lateral bias, and you can also input optional parameters such as pattern dependence, sputtering component, etc. The interface is Windows-based and easy to use with dropdown menus. Typically the platform is deployed across an entire development group. The platform has been used for process development, process documentation, metrology development and many other applications.
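The functional-model idea can be sketched as data: each step is a named model with parameters, and a flow is an ordered list you can step through. This is a hypothetical, simplified stand-in for illustration only; the step names and parameters are invented and do not reflect Coventor’s actual API.

```python
from dataclasses import dataclass, field

@dataclass
class ProcessStep:
    name: str
    kind: str                        # e.g. "deposit", "etch", "implant"
    params: dict = field(default_factory=dict)

def etch(name, rate_nm_per_s, selectivity, time_s, lateral_bias_nm=0.0):
    """Build an etch step from the parameters named in the text:
    etch rate, selectivity, time and lateral bias."""
    return ProcessStep(name, "etch", {
        "depth_nm": rate_nm_per_s * time_s,   # derived etch depth
        "selectivity": selectivity,
        "lateral_bias_nm": lateral_bias_nm,
    })

# A two-step toy flow; a real flow is built up step by step the same way
flow = [
    ProcessStep("STI oxide fill", "deposit", {"thickness_nm": 300}),
    etch("STI recess", rate_nm_per_s=2.0, selectivity=10.0, time_s=60),
]

# Stepping through the flow mirrors stepping through the 3D model
for step in flow:
    print(f"{step.name}: {step.kind} {step.params}")
```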

    David’s presentation was titled “Technology Development: The ‘In Between’”.

    In David’s presentation he showed the historical technology development method:
  1. Test chip design – characterization structures for known/expected targets and challenges.
  2. Run experimental lots – do splits, short loops, Front End Of Line (FEOL), Back End Of Line (BEOL) or full flows.
  3. Characterization – use inline metrology, offline physical testing and electrical testing.
  4. Feedback – run engineering analysis and change the process of record. You then go back to step 2. You repeat this loop every one to three months.
  5. New Test Chip – as you learn you may need to go back and redesign the test chip based on what you have learned (go back to step 1). This typically takes place approximately once a year.

    In my interview with David he noted that a development cycle through steps 1 to 4 listed above typically takes three months and costs $50 million!
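To make the stakes concrete, here is a back-of-the-envelope sketch using the figures David quoted (three months and ~$50M per learning cycle). The cycle counts below are hypothetical, chosen only to illustrate how quickly eliminated cycles add up.

```python
# Figures quoted in the interview: one learning cycle (steps 1 to 4)
# takes roughly three months and costs about $50M.
CYCLE_COST_USD = 50e6
CYCLE_MONTHS = 3

def development_cost(cycles):
    """Total cost (USD) and elapsed time (months) for a given cycle count."""
    return cycles * CYCLE_COST_USD, cycles * CYCLE_MONTHS

# Hypothetical node: 12 cycles historically, vs. 8 if virtual
# fabrication eliminates 4 of them.
baseline_usd, baseline_months = development_cost(12)
reduced_usd, reduced_months = development_cost(8)

savings_usd = baseline_usd - reduced_usd       # $200M not spent on wafers
months_saved = baseline_months - reduced_months  # a year earlier to market
```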

    Due to growing process complexity, the need to account for atomic-scale variation and the combination of new elements, development continues to get harder. ASML has noted that multi-patterning for the 7nm/5nm nodes increases lithography masks and steps by 5x and deposition, clean and etch steps by 5x, and that is not even counting additional inspection and metrology. Clearly some way to simplify, reduce cost and accelerate process development is needed.

    SEMulator3D addresses this problem by enabling virtual fabrication for:

    • Evaluation of research concepts – make the big branch decision about what technology directions to pursue.
    • Vetting of variations – look for problems lurking in your planned process flow.
    • Combine research elements – when you run into problems you can combine and reevaluate technological options.

    As you develop a technology there is a series of decisions:

    • Big branch – for example FDSOI versus FinFET.
    • Then, as you move out the branches: gate first or gate last.
    • Then further still: merged S/D or isolated S/D, NMOS or PMOS metallization first.

    As you move onto a branch and further out the branch, starting over becomes more and more expensive. With virtual fabrication you map out the branches before being forced to make decisions.

    David went on to show examples of the use of virtual fabrication:

    • IMEC and Applied Materials for 7nm lower gate resistivity.
    • IBM 22nm SOI yield optimization.
    • Global Foundries 14nm FinFET specification setting and variation reduction.
    • IMEC 5nm process capability evaluation.
    • TEL evaluating equipment requirements and patterning options for 10nm.

    In our interview he also mentioned work to be published where over a million virtual wafers were run through SEMulator3D for 5nm process evaluation.

    The use of virtual fabrication in the SEMulator3D platform can eliminate entire development cycles, reducing cost and speeding up development. It also makes possible things like the million-wafer virtual run to evaluate variation, which simply cannot be done by actually running wafers. Virtual fabrication runs take minutes to complete as opposed to months for test runs.

    In summary, virtual fabrication can:

    • Map the big branch decisions prior to running wafers.
    • Avoid alligators in the water due to process variation.
    • Direct trips back to the well earlier in the development schedule.

    The bottom line is the development of processes can be less expensive, faster and the resulting process can be more highly optimized. Engineers all across the industry have recognized the need for this new virtual fabrication paradigm. In today’s marketplace, winning the development competition can bring in billions of dollars in business opportunities that might otherwise go to a competitor. The massive financial impact of virtual fabrication is now being felt in the boardroom!


  • IoT Tutorial: Chapter 8 – Introducing Internet-of-Things (BigData) Streams and Analytics

    IoT Tutorial: Chapter 8 – Introducing Internet-of-Things (BigData) Streams and Analytics
    by John Soldatos on 07-21-2016 at 4:00 pm

    Introduction to IoT Analytics – IoT Analytics vs. BigData Analytics. In the previous chapter of the IoT tutorial, we explained the affiliation between IoT data and BigData, given that IoT data exhibit the Vs of BigData. We also illustrated the activities comprising IoT data processing applications, such as data selection, validation, semantic unification and more.
    Continue reading “IoT Tutorial: Chapter 8 – Introducing Internet-of-Things (BigData) Streams and Analytics”


    IMEC-Horizontal Nanowires for 5nm at the VLSI Technology Symposium

    IMEC-Horizontal Nanowires for 5nm at the VLSI Technology Symposium
    by Scotten Jones on 07-21-2016 at 12:00 pm

    At the VLSI Technology Symposium, IMEC presented a paper entitled “Gate-All-Around MOSFETs based on Vertically Stacked Horizontal Si Nanowires in a Replacement Metal Gate Process on Bulk Silicon Wafers”. I have wanted to blog about this paper since the symposium was held but also wanted to tie it in with an interview with someone from IMEC who worked on the technology. This last week I got a chance to speak with Dan Mocuta of IMEC about the work.

    The first question you may ask is why Horizontal Nanowire (HNW) technology is interesting. In my previous blog on An Steegen’s “Secrets of Semiconductor Scaling” presentation I discussed the looming limits on FinFETs. Basically FinFET scaling is expected to end at the 7nm node (real node; 5nm node at the foundries). To continue to scale, some type of new device structure is needed. HNW processing is very similar to a FinFET process, and HNWs provide improved electrostatics and scaling. Many researchers and leading technologists believe HNW will be the successor to FinFETs.

    You can read my blog about An Steegen’s paper HERE.

    In the IMEC work they created two 8nm diameter horizontal silicon nanowires stacked on top of each other. The pitches are slightly relaxed from what is needed for a 7nm node in order to demonstrate the devices.

    The process to fabricate the nanowires is as follows:
  1. Ground plane implant – this is used to dope the surface of the wafer and suppress leakage due to parasitic transistors – this step is also typically seen in bulk FinFET processes.
  2. Deposit a stack of Si/SiGe/Si/SiGe (Si = silicon, SiGe = silicon germanium) using an epitaxial reactor. This step is unique to HNW fabrication.
  3. Fin formation – mask and etch to create “fins” and shallow trench isolation trenches. Refill the trenches with oxide and etch back the oxide to expose the “fins” – this is very similar to FinFET processing except the temperature for the fill needs to be lower for HNW and you are etching a Si/SiGe stack instead of just Si.
  4. Dummy gate formation – deposit polysilicon, planarize and pattern it – same as FinFET fabrication.
  5. Extension implants and spacer formation – same as FinFET fabrication.
  6. Raised silicon source/drain – selective epitaxial growth of a raised silicon source/drain to make contact to the nanowire – for a typical FinFET process there would be Si raised source/drains for NMOS and SiGe raised source/drains for PMOS (more on this later).
  7. HDD implants – ion implants into the raised source/drains – same as FinFET fabrication.
  8. ILD0 – interlevel dielectric to cover the fins and planarization back to the tops of the dummy polysilicon gates – same as FinFET fabrication.
  9. Dummy gate removal – etch out the polysilicon dummy gate – same as FinFET fabrication.
  10. SiGe etch – a vapor phase HCl etch is used to etch out the SiGe in the “fin”; this releases the Si nanowires. This step is unique to HNW fabrication.
  11. WF – the metal work functions are deposited and the gate area is filled – similar to FinFET fabrication.
  12. Contact and Back End of Line – same as FinFET fabrication.
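The twelve steps above can be captured as a small table, tagged by their relationship to a standard FinFET flow. A sketch (step names paraphrase the list; the shared/similar/unique tags follow the text’s own annotations):

```python
# HNW process flow vs. a standard FinFET flow, per the step list above.
hnw_flow = [
    ("Ground plane implant",              "shared"),
    ("Si/SiGe superlattice epitaxy",      "unique"),   # step 2
    ("Fin formation + STI",               "similar"),
    ("Dummy gate formation",              "shared"),
    ("Extension implants and spacers",    "shared"),
    ("Raised Si source/drain",            "similar"),
    ("HDD implants",                      "shared"),
    ("ILD0 and planarization",            "shared"),
    ("Dummy gate removal",                "shared"),
    ("SiGe release etch (HCl vapor)",     "unique"),   # step 10
    ("Work-function metal and gate fill", "similar"),
    ("Contacts and BEOL",                 "shared"),
]

# Only two steps are truly unique to HNW fabrication
unique_steps = [i + 1 for i, (_, tag) in enumerate(hnw_flow) if tag == "unique"]
print(unique_steps)
```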

    There are of course a number of adjustments needed to accommodate HNW fabrication versus FinFET fabrication, but with the exception of steps 2 and 10 the process is essentially the same as a FinFET process.

    All of these details are available from the VLSI Technology paper. Looking at this flow I could see it was a single threshold voltage device and the lack of a raised SiGe source/drain for PMOS was a drawback. When I got to sit down with Dan Mocuta I asked him about the limitations in the work presented.

    Dan said it is a 7nm process but somewhat relaxed to focus on the device behavior as opposed to pushing the process. The current work is also single Vt but they have been successful at CMOS integration with dual work functions. With respect to raised SiGe source/drains for PMOS there are integration challenges because of the SiGe etch to release the Si nanowires but they are working on integrating it using spacers to protect the raised SiGe source/drain.

    The technology covered in this paper is also not a fully integrated process flow. For a full process you need ESD, I/O and multiple threshold voltages, and they are working on all of that. I asked him if they might integrate FinFETs as part of the flow for I/O and he said there are ways to do that. You could selectively grow the Si/SiGe/Si/SiGe superlattice in one area and silicon in other areas to form FinFETs. This work was a simplified version to demonstrate the technology and they are now working on smaller pitches and a fully integrated process.

    We also discussed the limits of FinFET technology. Ultimately, gate pitch limitations in FinFETs drive the need for nanowires. Fin widths can’t be less than 5nm because mobility collapses, and gate length can’t be less than approximately 18nm while keeping electrostatic control. Nanowires have better electrostatic control than FinFETs and can provide additional gate scaling, leaving more space for contacts. They also have simulation studies that show lower variability for HNW than FinFETs, so low-voltage performance is better.
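The two rule-of-thumb FinFET limits from that discussion can be stated as a simple check. The limit values come from the interview; the function name and example geometries are invented for illustration.

```python
# FinFET scaling limits per the discussion above: fin width below ~5nm
# collapses mobility; gate length below ~18nm loses electrostatic control.
FIN_WIDTH_LIMIT_NM = 5
GATE_LENGTH_LIMIT_NM = 18

def finfet_viable(fin_width_nm, gate_length_nm):
    """True if a FinFET geometry stays within both rule-of-thumb limits."""
    return (fin_width_nm >= FIN_WIDTH_LIMIT_NM
            and gate_length_nm >= GATE_LENGTH_LIMIT_NM)

# A 7nm-class geometry passes; pushing the gate shorter is where
# nanowires, with their better electrostatics, take over.
print(finfet_viable(6, 20))   # within both limits
print(finfet_viable(6, 14))   # gate too short for electrostatic control
```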

    One key area that still needs to be further evaluated for HNW is the vertical spacing between the wires. As the spacing between the wires gets smaller the process is simpler but if they get too close you lose electrostatic control.

    In terms of timing Dan believes HNW will be needed in the early 2020s as a true 5nm technology and he thinks it can be ready. It is built on FinFET replacement metal gate technology and if you get on the development train now you can be ready in 3 to 4 years! Many of the leading technologists I talk to also believe HNW is the 5nm solution!


  • 10nm Will Be an Epic Process Node!

    10nm Will Be an Epic Process Node!
    by Daniel Nenni on 07-21-2016 at 7:00 am

    In the history of the fabless semiconductor industry the foundries have always been a process node or two behind the leading semiconductor manufacturers. Starting in Q1 2017, for the first time in fabless semiconductor history, the foundries will have a process node advantage. This is horrible news for some but great news for others including myself.

    You can ignore the “my process is better than yours” PowerPoint nonsense. Silicon does not lie and what silicon says is that you must be able to design to a foundry process and yield chips that will succeed at the system level, right? That brings us back to 10nm and the mighty fabless semiconductor ecosystem.

    Clearly it takes an ecosystem to build a chip but it also takes an ecosystem with “relentless collaboration” to build a leading edge process that everyone can design to. A good example was illustrated at the Synopsys/Samsung “Ready to Design at 10nm” breakfast at #53DAC last month. Unfortunately I missed the free breakfast but the video is now up on the Synopsys website HERE and it is definitely worth your time. For full effect please load a plate up with your favorite breakfast food and grab a cup of coffee.

    One of my favorite foundry speakers, Kelvin Low from Samsung Foundry, was on board. Let’s not forget, during Kelvin’s tenure Samsung was the first foundry to hit FinFET high volume manufacturing last year with their 14nm Exynos SoC and modem (Galaxy S6) followed by the Apple A9 SoC (iPhone 6s).

    *SPOILER ALERT*

    In case you are interested, Kelvin talks in detail about Samsung’s 10nm and 7nm foundry plans which are quite different from TSMC’s and GlobalFoundries’. Before Samsung, I knew Kelvin at GlobalFoundries and Chartered Semiconductor so I can tell you he is a very skilled foundry guy, absolutely.

    The other guys on the panel were from Synopsys: JC Lin, who spoke on IC Compiler II Technology Strengths for 10nm FinFET, and Andy Potemski, who spoke on the 10LPE Reference Flow & ARM® Cortex®-A53 CPU Reference Implementation.

    I do not know JC or Andy personally so I have included their biographies:

    JC Lin
    Vice President, IC Compiler II R&D, Synopsys
    JC Lin has worked at Synopsys for more than 20 years. He has worked on multiple technologies and products, including logic synthesis, physical synthesis and place and route. His current focus is the IC Compiler II place and route product. JC holds a Bachelor’s degree in Electrical Engineering from National Taiwan University and a Ph.D. in Computer Science from SUNY Stony Brook.

    Andy Potemski
    Senior Director, Foundry Reference Flows and Lynx Design System R&D, Synopsys
    Andy brings 36 years of industry experience to the table. He has been focused on design flows and automation since joining Synopsys over 21 years ago. Presently he is responsible for the Lynx Design System product R&D as well as foundry reference flow development. Prior to Synopsys, Andy spent 13 years as an IC development engineer with IBM. He holds 8 US patents in chip design and automation.