
Clever IoT Devices are Coming!

by Daniel Nenni on 07-24-2014 at 4:00 pm

Paul McLellan and I spent last evening with Samsung at the Bentley Reserve in San Francisco. One thing I discussed with them in great detail was IoT devices. Samsung is investing heavily in IoT and the supporting infrastructure. In fact, there is a rumor that Samsung is acquiring home automation company SmartThings for around $200M, which they would neither confirm nor deny last night.

One of the IoT devices demonstrated was SkyBell, which is a poor man’s video security system. SkyBell is a smart video doorbell that allows you to see, hear and speak to the visitor at your door whether you’re at home, at work, or out running errands. SkyBell also includes a motion sensor, on-demand access from your phone, a movable camera, and night vision. Most of the home break-ins I read about are crimes of opportunity: burglars ring the doorbell to see if you are home before they break in and steal your stuff. So SkyBell is not just a way to detect unwanted relatives at the door, it is a serious security device.

This is a good example of existing products we use every day getting smarter, which is probably the largest IoT market segment we will see over the next 10 years. Here are some other IoT products that I found interesting:

Sense is a simple system that tracks your sleep behavior and monitors the environment of your bedroom. I’m thinking you will need to turn this one off during intimate times. With Sense’s Smart Alarm, it can even wake you up in the morning at the right point in your sleep cycle, to avoid that groggy feeling we sometimes get.

Sense Mother is at the head of a family of small connected sensors that blend into your daily life to make it serene, healthy and pleasurable. Translation: Track your children, pets, or spouse like they do caribou on the Animal Planet.

TrackDot tracks your luggage or other personal items while you are on the go. Not that you will ever get them back, but it sure would be nice to see how many frequent flier miles they earn.

SmartDiapers. Talk about big data! Here’s a way for new parents to monitor their baby’s health. I recently learned at a wearables conference that in Japan more adult diapers are sold than infant diapers. Learning what is wrong with your child or your parents, before an emergency room trip is required, is probably a good idea. Talking adult diapers? That’s not a conversation I really want to have.

Smart Contact Lenses. Besides health tracking, contact lens technology under development by Google and others could enable better night vision and augmented reality. This might even get me to stick my fingers in my eyes on a daily basis and forgo my signature spectacles but I was really hoping for an Iron Man type helmet.

SmartScales. I already have a smart scale that analyzes my weight, fat measurements, and Body Mass Index. Do I really need IoT here so my scale can talk to my refrigerator? No thank you.

One thing I can tell you is that Samsung really knows how to throw a VIP party. This was a follow-on to the “Samsung Voice of the Body” event held in May that Paul wrote about HERE. As soon as they send the presentation slides we can write in more detail about their Foundry, System LSI, Memory, and Display offerings.


Power Modeling and Simulation of System Memory Subsystem

by Daniel Payne on 07-24-2014 at 11:05 am

One great benefit of designing at the ESL level is the promise of power savings on the order of 40% to 70% compared to using an RTL approach. Since a typical SoC can contain a hierarchy of memory, this kind of power savings could be a critical factor in meeting PPA goals. To find out how an SoC designer could use such an ESL approach to power savings for a system memory subsystem, I interviewed Gene Matter of DOCEA Power by email this week. DOCEA’s company headquarters is in France, and Gene works out of their San Jose office. One thing that Gene and I have in common is our Intel alumni history; I started my design career with DRAM chips, while Gene spent 23 years at Intel working on x86 processors, chipsets, USB, PCI and memory technology.

Q&A

Q: Why model memory and at what level should that model be?

Memory subsystems have a huge impact on performance. In almost every processor and system performance simulator, excruciating detail is provided in modeling the core instruction set clock counts, internal bus interconnect bandwidth, buffering/posting, cache organization, main memory and storage to get an accurate performance prediction for application and benchmark performance.

What struck me was the die area dedicated to high-speed cache SRAM and the diversity of embedded memory types in modern SoCs, chipsets and processors, all of which are process technology, cell library and implementation dependent. The corresponding power models for most memories are either at the transistor level, which is far too detailed for functional simulation, or very abstract: simple bandwidth figures, percentages of traffic types, and approximations of the power. Major components affecting power, such as temporal/address-dependent behavior (cache hits/misses, snoops, DRAM page hits/misses, burst reads, posted writes, etc.), are available from the performance simulator, but you also need to annotate and parameterize the power models of the core, interconnect, memory controllers, IO and memory to estimate power as a function of task consumption/completion. I’ve seen many estimates try to predict power by just using statistical data from a characterized workload, plugging in power numbers for the memory blocks, and summing up the results.

Q: What are some considerations of power models for memory subsystems and SoC/systems?

  • You need a dynamic, realistic set of workloads or applications. You can use VCD or CSV data from a performance/functional simulator, or characterized workload data from performance analysis or software emulation. You can also co-simulate a performance/functional simulator with a power/thermal modeling and simulation tool.
  • You need to build the complete memory subsystem and account for the application/SW flow through the machine architecture. You can start with a simple block diagram and then the data flow for the major transaction types that occur.
  • Then parameterize all the interconnects, memory controllers, IO and memory blocks with corresponding power equations, values or scripts that provide the dynamic current and the static retention current (e.g., DRAM refresh or SRAM retention), using component data sheets and IP plus process/voltage/temperature data for each process technology used in the design.
  • Add the memory power states, the corresponding power in each state, and the power state transition events or triggers.
  • Now you can get power as a function of application behavior and solve for power as a function of time and temperature.
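As a thought experiment, the parameterization steps above can be sketched in a few lines of code. Everything here is invented for illustration: the block names, power states and milliwatt values are placeholders rather than data-sheet numbers, and this is not DOCEA’s actual model format.

```python
from dataclasses import dataclass

@dataclass
class MemPowerModel:
    """Power model for one memory block, parameterized per power state."""
    active_mw: float      # dynamic power while servicing transactions
    idle_mw: float        # clock-gated idle power
    retention_mw: float   # static retention current (e.g., DRAM refresh)

    def power(self, state: str) -> float:
        return {"active": self.active_mw,
                "idle": self.idle_mw,
                "retention": self.retention_mw}[state]

# Data-sheet-style values, made up for this example
lpddr = MemPowerModel(active_mw=120.0, idle_mw=15.0, retention_mw=4.0)
sram  = MemPowerModel(active_mw=35.0,  idle_mw=5.0,  retention_mw=1.5)

# Energy over a simple trace of (power state, duration in ms) events,
# i.e. power as a function of application behavior
trace = [("active", 2.0), ("idle", 8.0), ("retention", 90.0)]
energy_uj = sum(lpddr.power(s) * d for s, d in trace)  # mW * ms = uJ
print(f"LPDDR energy over trace: {energy_uj:.1f} uJ")
```

A real flow would annotate these state powers from component data sheets and process/voltage/temperature data, and drive the trace from a performance simulator rather than a hand-written list.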

Q: How do you create power models of memory subsystem?

One approach is to use Docea’s Aceplorer tool, which provides templates of the most common IP blocks and memories. Users can also build a library of templates using our scripts to parse an IP component data sheet, silicon compiler data per process technology, memory type, and organization. We also have automated scripts to read in Excel spreadsheet data, or data from an IP repository in a shared network database.

The 3 tools we recommend are:

  • The modeling kit, a set of worksheets in Excel to build each component power model
  • The assembly kit and Aceplorer Power Intelligence which is a front end to include the interconnects, clock and power intent
  • Python scripts/automation to build the models, view the configuration and manage/maintain all of the models

Q: How do you simulate power and performance for memory?

Once you have built power models for your system you can build scenarios, which can be based on time flow charts, message sequence charts or data flow diagrams of sequential events. A simple scenario can start with a reset and initialization phase, then boot, OS/application loading into memory, then execution of tasks. The accuracy of each phase/event can be mixed, where static or average values might be used for some phases while transaction-based or even cycle-based values are used for others. The scenarios can also have steps, time stamps, and concurrent tasks as well as delay and synchronization points.
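A minimal sketch of such a phase-based scenario, assuming made-up phase names, average-power values and durations (a real scenario would mix in transaction- or cycle-accurate data for the phases where accuracy matters):

```python
def scenario_energy(phases):
    """phases: list of (name, avg_power_mw, duration_ms).
    Returns total energy in uJ plus a per-phase timeline, i.e. power
    solved as a function of time for the whole scenario."""
    t = 0.0
    timeline = []
    for name, p_mw, d_ms in phases:
        timeline.append((t, name, p_mw * d_ms))  # mW * ms = uJ
        t += d_ms
    return sum(e for _, _, e in timeline), timeline

# Reset -> boot -> OS load -> task execution, per the scenario above
total, timeline = scenario_energy([
    ("reset/init", 50.0,   10.0),
    ("boot",       300.0,  500.0),
    ("os_load",    250.0,  800.0),
    ("app_exec",   400.0, 2000.0),
])
print(f"total scenario energy: {total / 1000:.1f} mJ")
```

Concurrent tasks, delays and synchronization points would extend this from a simple sequential sum to an event-driven simulation, which is where a dedicated tool earns its keep.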

Q: What type of analysis is important for memory subsystems?

Early power estimation is pretty much mandatory to meet product design targets for battery life and application-dependent performance. Many OEMs and OSVs will specify MP3 audio and H.264 or HD video playback in hours, baseband/wireless modem talk and standby time, and web-browsing and file upload/download power. So you need a library of representative workloads that can drive a dynamic analysis.

Early on in the meta-partitioning of the system you are evaluating tradeoffs, like:

  • Number of cores and whether they are homogeneous, heterogeneous or hybrid types of cores; then you need to see if you can thread or parallelize the code
  • Frequency, voltage min, typical and max/turbo ranges
  • Fixed function, DSP or offload engines
  • Bandwidth and concurrency of interconnects
  • And the critical part is whether you have over or under provisioned the memory speed, latency, working set size with your cost and power budget

Now you need to parameterize all the components with power data. You can build power models of all the current IP you may re-use from existing data, or annotate/update them from a shared repository. You can build quick “what-if” models from vendor or internal IP specs for new blocks. Then you can apply process technology vendor data for the transistor types used in each IP block, process corner, and temperature data.

Finally, you want to analyze power as a function of temperature, workload and system configuration.
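To illustrate the temperature dependence in that last step, here is a toy sweep. The exponential leakage model and its coefficient are generic textbook-style assumptions, not vendor data:

```python
import math

def static_power_mw(p25_mw: float, temp_c: float, k: float = 0.04) -> float:
    """Scale a 25C leakage value with a generic exponential temperature factor."""
    return p25_mw * math.exp(k * (temp_c - 25.0))

# Same block evaluated at three thermal corners: with these assumed
# coefficients, leakage is roughly 11x higher at 85C than at 25C
for t in (25, 55, 85):
    print(f"{t}C: {static_power_mw(10.0, t):5.1f} mW leakage")
```

The point of the sweep is that a workload which fits the power budget at a nominal temperature can blow it in a hot corner, which is why power and thermal need to be solved together.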

Q: You’ve thought out this memory subsystem modeling quite thoroughly, how would you summarize the DOCEA approach?

Memory subsystems are increasingly complex. In many systems they are “make or break” in terms of meeting performance and cost goals. Now that many systems are also power and thermally constrained, it is imperative to have power/thermal models that you can build early and that track the design as it progresses within the form factor and cost budgets. There is also variability in memory vendor power, and many tradeoffs as to where you spend your money: on caches/die cost, on stacked/PoP or embedded DRAM, or on low power/high density and fast SSD/mass storage. The ability to automate the creation of power models and parameterize them, from behavioral (VHDL/SystemC/SystemVerilog) to functional (RTL/gate level) to structural (fully placed-and-routed circuit and CFD) model data, is key to productivity and project management.

The memory and storage hierarchy and organization are critical not just for performance and functionality, but also for achieving the most robust thermal designs and the most energy-efficient ones. The ability to model and simulate thermal, power and performance behavior as early as possible can be a huge competitive advantage for chip designs, modules/subsystems and complete platforms.


Atmel Licenses Mali and Security from ARM

by Paul McLellan on 07-24-2014 at 7:01 am

Atmel (and ARM) announced yesterday that Atmel has licensed a portfolio of ARM IP for devices that require video, image and display capabilities. This portfolio includes Cortex-A7 (a 32-bit core), Mali-V500 (a video accelerator), Mali-DP500 (a display processor) and TrustZone technology (security technology). These can now be used to create devices for the Internet of Things (IoT), especially wearable devices, toys and other applications that rely on image processing. Cortex-A7 is the most energy-efficient microprocessor ARM has ever produced.

Of course this isn’t the first license that Atmel has taken out from ARM. They have a full line of ARM-based microcontrollers that complements their own AVR processors. According to the Atmel Microcontroller Selector Product Finder, there are 217 of them, from Cortex-M0 up to Cortex-A5. So this announcement adds graphics capability, the higher-end Cortex-A7 and ARM’s security technology, which is a mixture of hardware and software.

The energy efficiency and small die area advantages of ARM Mali-V500 and Mali-DP500 enable full HD 1080p60 resolution capabilities on a single core, which is ideally suited for cost-conscious applications. Additionally, both the ARM Mali-V500 and Mali-DP500 incorporate support for ARM TrustZone technology for hardware-backed content security from download to display.

Mali has been very successful in mobile and is currently the #1 GPU in Android devices (Apple uses Imagination Technologies’ GPU, designed down the road from ARM’s Cambridge home in Kings Langley. Must be some processor design hormones in the English water or something).

As Reza Kazerounian, senior vice president and general manager of the microcontroller business unit at Atmel, said: “As IoT and wearable devices become smaller, more sophisticated and integrated, the SoCs used in the devices will need to offer more features and functionality in smaller packages. The small area footprint of the ARM Cortex and Mali multimedia solutions will allow us to offer HD video and display processing in unprecedented sizes.”

With Mali-DP500, Atmel SoCs will have the capability to deliver UI functionality such as multi-layer composition, scaling and post-processing with support from ARM’s Frame Buffer Compression (AFBC) protocol. This technology is unique to ARM and is capable of delivering a 60 percent reduction in system bandwidth for video playback.

One of the big challenges that IoT is going to face as the market grows is implementing very strong security with a comparatively limited amount of processor and battery power. It is not the end of the world if somebody breaks into my FitBit and finds out that I only took 5,000 steps today. But nobody wants their car or their pacemaker hijacked by the bad guys. Pure software solutions are both more power hungry and less provably secure than a hybrid approach like ARM’s TrustZone.

ARM’s press release is here. Atmel’s blog on the deal is here.


Google Glass Fail!

by Daniel Nenni on 07-23-2014 at 4:30 pm

Not a total fail, but I would classify it as a Segway-type fail, meaning that while society as a whole will shun Google Glass, there will be a number of teksters who keep the technology in circulation. Teksters is a word I just made up and submitted to the Urban Dictionary. Kind of like hipsters, teksters are a subset of counter-culture people bound by technology for no good reason. For those of you, like my beautiful wife, who don’t know what Google Glass really does, here are the main features and applications according to Wikipedia, which I trust more than company websites:

  • Touchpad: A touchpad is located on the side of Google Glass, allowing users to control the device by swiping through a timeline-like interface displayed on the screen. Sliding backward shows current events, such as weather, and sliding forward shows past events, such as phone calls, photos, etc.
  • Camera: Google Glass has the ability to take photos and record 720p HD video.
  • Voice control: Google Glass can be controlled using “voice actions”. To activate Glass, wearers tilt their heads 30° upward (which can be altered for preference) or tap the touchpad, and say “O.K., Glass.” Once Glass is activated, wearers can say an action, such as “Take a picture”, “Record a video”, etc.
  • Google Glass applications are free applications built by third-party developers. Glass also uses many existing Google applications, such as Google Now, Google Maps, Google+, and Gmail.
  • Third-party applications announced at South by Southwest (SXSW) include Evernote, Skitch, The New York Times, and Path.
  • Many developers and companies have built applications for Glass, including news apps, facial recognition, exercise, photo manipulation, translation, and sharing to social networks, such as Facebook and Twitter, reminders from Evernote, fashion news from Elle, news alerts from CNN, TripIt, FourSquare and OpenTable, translation app Word Lens, a cooking app AllTheCooks, and an exercise app Strava, and notifications from Android Wear will be sent to Glass.

As interesting as I find Google Glass and today’s wearables in general, I will need a pretty strong ROI to integrate one into my daily life. For example, once my personal data heads into the cloud I would like to get paid in terms of lower medical or auto insurance. Right now my wife and I get reduced auto insurance rates for good driving. I would be happy to share the GPS details of just how good of a driver I really am for some cold hard cash. My wife and I also get reduced medical insurance rates for taking yearly preventative medical diagnostics. We also live a very healthy and fitness oriented lifestyle which I would happily share the details of for some serious ducats.

Let’s face it, Google, Amazon, Apple, FaceBook, and other cloud savvy companies stand to make trillions of dollars by collecting our online DNA and commercializing it. Enough is enough! It is high time we get paid for our lack of privacy! So I say to all wearable companies SHOW ME THE MONEY!



The Two Biggest Misses in Mobile

by Paul McLellan on 07-23-2014 at 9:50 am

There are some interesting parallels between Intel and Microsoft. Both of them missed mobile. Actually they didn’t completely miss mobile, both of them had programs from early days. But clearly they both regarded mobile as a much lower priority: the PC was where all the money was and where it would continue to be forever.

When I was at VaST, Intel had a huge program in mobile built around their own ARM processor, XScale (originally called StrongARM when it was first developed at DEC, before Intel purchased the semiconductor division). They never had any real success that I know of and eventually they sold the entire XScale development and the teams around it to Marvell. Their next attempt was to purchase Infineon’s wireless division. At the time, Infineon supplied the chipset for what we might now call iPhone 1, although of course it was just called iPhone at the time. Apple then switched to Qualcomm for modems and built its own application processors (Ax).

With those programs Intel pretty much managed to miss 2G (GSM for most of the world, CDMA too in the US and Korea) and 3G. For 4G, also known as LTE, Intel acquired Fujitsu’s LTE modem division (which actually had roots in Freescale/Motorola) to add to the LTE team they already had from Infineon. They now have working LTE modems shipping, although one of the not widely publicized aspects of this is that they don’t manufacture them in Intel fabs; they are built by TSMC. On their latest conference call they said the schedule for bringing those in house was late 2015 or early 2016.

I read a recent report that said “the mobile unit is a major drag on the company’s profitability and the management really needs to focus on this potential area of growth.” They are investing (losing) over $1B per quarter in this market, if that is not focusing on this area then I don’t know what is. The bigger question is how long they can go on hemorrhaging money if they don’t start to get real traction with customers. They make so much money in their mainline microprocessor business that in some sense the answer may be “forever” but to what end?

Microsoft has made forays into mobile in the past. First they had a mobile operating system called WindowsMobile. This was moderately successful and in 2004 (pre-iPhone, pre-Android) it had 25% of smartphone sales (Nokia’s Symbian pretty much all the rest). I actually had a Samsung Blackjack for a year or two that used it. But as iPhone and Android came along its share declined.

In 2010, Microsoft decided to enter the mobile hardware business with a phone called Kin. It turned out to be the shortest lived phone ever since it never got any traction with any of the major carriers, and a couple of months after introduction was quietly canceled.

Microsoft then developed WindowsPhone (WP). A few phone manufacturers at least experimented with using it. Then Stephen Elop went from Microsoft to be CEO of Nokia. Conspiracy theories abound as to whether he was some sort of trojan horse, sent to Nokia to deliver it into Microsoft’s clutches. The first thing that he did was to cancel all Nokia’s internal operating systems (Symbian, Meego and Meltemi) and standardize on…Microsoft’s WP. Nokia’s smartphone sales instantly cratered and went unprofitable, since there were no WP-based phones available when that announcement was made. It was never profitable again. Layoff after layoff ensued and then at the end of last year (closing in April) Microsoft acquired Nokia (this was still in the Ballmer era when the deal was done).

Now Microsoft has a new CEO, Satya Nadella, and he seems to see no future for Nokia (now rebranded under Microsoft). The recently announced layoff of 18,000 jobs (the 4th biggest tech layoff ever) falls mainly on the acquired Nokia divisions. If you look at this a certain way you could argue that Nadella thinks the acquisition was a mistake but can’t just stand up and say that without making the company look foolish. “You wasted how many billion?”


The smartphone market has finished its period of explosive growth. This is especially so at the high end, where iPhone and Samsung Galaxy dominate and suck all the profits out of the entire industry. Everyone in Europe, the US, Japan, Korea and even middle-class parts of China has a smartphone if they want one. The low end, in India, China, Africa and South America, is where the action is, but this plays to neither Intel’s strengths (their business model requires high margins) nor Microsoft’s (they want royalties from 3rd parties whereas Android is free). Oh, and talking of Android, to nobody’s surprise Microsoft just canceled the Android-based Nokia phones announced only in Q1 this year.

My own opinion is that it is too late for either of them. Intel will not displace Qualcomm, Mediatek and the other established players. Even companies like TI and Broadcom have exited the market. Microsoft really has no other licensees for WindowsPhone than Nokia, meaning that for now at least they themselves are their only licensee with about 3% of market share for operating systems. Microsoft and Intel will just hang in there for now and eventually quietly withdraw, to focus on other product areas where they can be successful.




NoCs for system-level power management

by Don Dingee on 07-23-2014 at 7:00 am

Most of the buzz on network-on-chip is around simplifying and scaling interconnect, especially in multicore SoCs where AMBA buses and crossbars run into issues as more and more cores enter a design. Designers may want to explore how NoCs can help with a more power-aware approach.


Xilinx: Revenue Down, Profit Up, FinFET on Schedule

by Paul McLellan on 07-22-2014 at 11:59 pm

Xilinx announced their results today and had their conference call this afternoon, which I listened to. For them this is Q1 of fiscal 2015, which means you have to be careful since there is a big difference between fiscal quarters and calendar quarters. Xilinx’s conference calls are interesting for a couple of reasons. Firstly, of course, they are the leader of the FPGA market, so the call provides insight into that market. Secondly, Xilinx FPGAs are one of the process drivers for TSMC, so there is some insight into exactly where the various processes are in the ramp to volume.

Revenue was $613M, down 1% from last quarter. However, profitability was up, with net income of $173M and gross margins of 69.1% (obviously higher than expected, since net income rose on lower-than-predicted revenue).

The reasons for the variance given on the call were primarily China LTE deployment, and aerospace & defense.

  • wireless revenue was slightly down due to lower than anticipated 28nm sales to China LTE base station manufacturers, caused by delays in the deployment of the third-phase LTE rollout there. LTE deployment outside China was healthy and there were higher than expected shipments of 40nm and 65nm parts
  • aerospace & defense was weaker due to the timing of some programs moving into September, which should see a rebound next quarter
  • wired communication sales (routers etc.) were up and higher than anticipated, driven by enterprise networking, optical (OTN) and access
  • industrial, scientific and medical were as expected

Going forward they are forecasting a slower 28nm ramp than previously, again driven by the slowdown in China LTE deployment and slower than forecast ramp in the wired communication. However, Xilinx are confident that 28nm will be the most successful node in their history as it continues to ramp and design wins go into volume over the next several years.

Kintex UltraScale, the industry’s first 20-nanometer device, has been in the market since November and, according to their largest customer, is providing them with more than a six-month advantage over you-know-who. During this last quarter Xilinx started shipping Virtex UltraScale, the industry’s first and only 20nm high-end device.

In the questions there was some more color on the issues in the China LTE rollout. Last quarter a lot of product was shipped but, due to incomplete kitting (other parts not being available, nothing to do with Xilinx), base stations were not deployed, leading to a slowdown this quarter which should work itself out over the next couple of quarters. China Mobile (the biggest operator in China, or indeed the world) was expected to deploy 400-500K base stations and so far has only deployed about 300K.

Xilinx reckon their share of 28nm products was over 70% last fiscal year, and should be in the high 60s as a percentage this quarter, at around $130M. The target for the (fiscal) year is $600M.

They are looking at possible belt-tightening but are committed to carry on their 20nm and 16nm development. Xilinx’s strategy remains to get new technology out as fast as possible.

Moshe was asked if they see risk in TSMC’s schedule and whether they would consider changing foundry vendor. Of course the correct answer is NFW, but what Moshe actually said was: “We are delighted with the progress of TSMC. As best we can tell, they’re on schedule and they have numerous other users of the technology who actually, in this case, will even be ahead of us. So, there really is no issue, in our mind, on the availability of the FinFET from TSMC. And, if anything, we’ve heard significant delays with other players. They’re doing 16 in two cycles. They’re doing a FinFET and they’re doing the FinFET Plus version, and we’re going to be using the FinFET Plus version. We’re benefiting from all of their development at this point in time.”

Xilinx said they are on-track to start taping out 16nm designs in Q1 (calendar) 2015.

Transcript of the conference call at SeekingAlpha is here.




The Leading Edge Foundry Landscape

by Scotten Jones on 07-22-2014 at 7:00 pm

There have been a lot of interesting announcements and presentations lately from the leading edge foundries. Looking at all of this information, a pretty interesting picture begins to emerge.

TSMC
TSMC is far and away the world’s largest foundry. In their 2014-Q2 conference call TSMC outlined their expectations for the balance of 2014 out through 2015.

At 28nm TSMC has LP (low power), HP (high performance), HPL (low power with HKMG) and HPM (high performance mobile). All four are bulk planar processes with gate-last HKMG, except LP, which uses polysilicon gates and SiON. At 28nm TSMC had the dominant market share; Samsung captured the Apple processor business and TSMC pretty much had everything else.

TSMC is now ramping 20nm (bulk planar process), they are expecting 20nm to be 10% of revenue in Q3 and 20% in Q4. Furthermore TSMC expects 20nm to be >20% of revenue for 2015. TSMC expects to dominate 20nm and discussed a major competitor skipping 20nm (without naming names, but we will get to who it is later). On TSMC’s web site they report that 20nm gives a 1.9x density improvement over 28nm.

At 16nm TSMC will not be ramping until Q3 of 2015 (a FinFET-on-bulk process). With competitors having 14nm in the market in the first half of 2015, TSMC expects to initially have no share and lower share for 2015 in total. Longer term, TSMC expects to catch up, and they expect that their combined 20nm/16nm market share will be higher than anyone else’s throughout 2014, 2015 and 2016. As a side note, the development and design cycles are long enough that TSMC has a lot of visibility on who their customers will be all the way through 16nm. The 16nm process is a 16nm FinFET-on-bulk front end with the 20nm back end. I haven’t seen any statements from TSMC on overall density improvements, but I have heard approximately 1.05x.

For 10nm TSMC is forecasting 2.2x density improvement (FinFET on bulk). They declined to give any specific guidance on timing.

Also read: The Great 28nm Debacle!

Samsung
Samsung is the world’s third largest foundry. Dan Nenni recently published an interesting article on Samsung on Semiwiki and that article includes a link to a Samsung foundry presentation. The presentation discusses 28nm and 14nm but not 20nm so presumably Samsung is the competitor TSMC is referring to as “skipping” 20nm. This is particularly interesting because I had seen reports not that long ago that Samsung was going to be making at least some 20nm processors for Apple. I can only guess, but perhaps Apple went 100% with TSMC at 20nm and Samsung abandoned 20nm due to lack of customers.

At 28nm Samsung has 28LPS (cost effective), 28LPP (low power RF enabled), 28LPH (high performance) all planar on bulk and 28FD-SOI high performance “20nm performance at 28nm cost”. Given that Samsung is skipping 20nm it looks very likely that Samsung has licensed 28 FD-SOI technology to fill the 20nm gap. Samsung is gate-first for HKMG.

At 14nm Samsung has 14LPE (FinFET on bulk) as their fast time-to-market product and 14LPP (FinFET on bulk) as their second-generation performance-boosted product. 14LPE is qualified now and 14LPP is due in Q1-2015. Dan Nenni has suggested that Apple will go sole source with TSMC at 20nm and then jump back to Samsung at 14nm. Samsung’s 14nm should ramp in late 2014 and early 2015 and represent the 14nm/16nm market share loss TSMC mentioned.

Samsung also reports that 14nm will have 0.55x the area of 28nm (for both LPE and LPP), that is a 1.82x density improvement. If TSMC sees a 1.9x improvement for 20nm over 28nm and another 1.05x at 16nm over 20nm, they would see a 2.00x density improvement for 16nm versus 28nm (please note TSMC’s 16nm process is really what everyone else is calling 14nm, the number 14 is apparently unfavorable in Taiwan). Assuming both companies have similar density at 28nm then TSMC could potentially have a density advantage at 16nm.
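For what it’s worth, the scaling arithmetic in that comparison checks out; a few lines make it explicit (the input factors are the vendor-reported numbers quoted above, nothing more):

```python
# Vendor-reported density factors quoted above (not independent measurements)
tsmc_20_vs_28 = 1.9     # TSMC: 20nm density vs 28nm
tsmc_16_vs_20 = 1.05    # TSMC: 16nm vs 20nm (reported approximate figure)
samsung_14_area = 0.55  # Samsung: 14nm area relative to 28nm

# Chain the TSMC factors; invert the Samsung area ratio to get density
tsmc_16_vs_28 = tsmc_20_vs_28 * tsmc_16_vs_20  # ~2.0x
samsung_14_vs_28 = 1 / samsung_14_area         # ~1.82x

print(f"TSMC 16nm vs 28nm density:    ~{tsmc_16_vs_28:.1f}x")
print(f"Samsung 14nm vs 28nm density: ~{samsung_14_vs_28:.2f}x")
```

The comparison of course assumes similar 28nm baseline density at both companies, as noted above.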

Also Read: Samsung Foundry Explained!

Global Foundries
Global Foundries is the world’s second largest foundry.

At 28nm according to the Global Foundries web site they have 28HPP high performance and 28SLP super low power. These appear to both be bulk planar processes. Global Foundries is also known to have FD-SOI technology but I don’t see it listed. I have heard Global Foundries is backing off of FD-SOI but I haven’t seen anything from the company one way or the other. It may be that their 28nm FD-SOI is supporting ST Micro (I have heard they have a manufacturing agreement with ST Micro) and they aren’t pushing it for general foundry usage. Global Foundries is gate-first HKMG.

At 20nm Global Foundries also has a bulk planar process, 20LPM, listed on their web site, but I have heard they are abandoning it.

At 14nm Global Foundries has licensed Samsung’s FinFET on bulk technology and the two companies will offer capacity from Global Foundries Fab 8 and Samsung fabs S1, S2 and S3.

Also read: Samsung ♥ GLOBALFOUNDRIES

UMC
In recent years UMC has fallen from second in the foundry rankings to fourth just behind Samsung (although some rankings still have them slightly ahead of Samsung).

According to UMC’s web site they have 28LP (low power), 28HLP (high performance low power) and 28HPM (high performance mobile). These are all bulk planar processes. UMC uses gate-last HKMG.

I see no signs of a 20nm offering from UMC. I have heard they are working on 14nm FinFET on bulk technology with partners.

SMIC
The fifth largest foundry, SMIC, is just now ramping its 28PS and 28HK 28nm bulk planar processes. Reportedly SMIC has gone to gate-last HKMG to be TSMC compatible. There are also rumors that SMIC is looking at 28nm FD-SOI, but since FD-SOI is gate-first, adopting it alongside a gate-last flow would be challenging. FD-SOI at 28nm does make a lot of sense to me for SMIC because it gives them a 20nm competitor without having to develop 20nm.

IBM

IBM is the eleventh largest foundry in the world. Traditionally IBM has offered leading-edge capability, but with the semiconductor unit rumored to be up for sale and shrinking, it isn’t clear how long IBM will remain available as a source.

Also Read: IBM and GLOBALFOUNDRIES Deal!

Intel
Intel is currently pretty far down the foundry ranking but with leading technology, the world’s largest semiconductor company has the potential to be a player. Intel currently has 22nm FinFET on bulk production and 14nm FinFET on bulk prototype foundry parts shipping to customers. We should see production 14nm foundry parts late this year or early next year.

Also read: Intel Custom Foundry Explained!

Discussion
Reviewing all this information there are several interesting observations I would like to make:

  • TSMC has had the leading market share at 28nm since the technology was introduced. 28nm options now include TSMC, Global Foundries, Samsung, UMC, SMIC and IBM, plus 22nm from Intel. I would expect significant price erosion at 28nm going forward as these companies compete for share. I do think TSMC, as the first company to ramp 28nm, has a lot of designs locked in and will be insulated from a lot of the competition due to the difficulty of moving to another foundry once a part is qualified. Samsung is also somewhat insulated due to the Apple business at 28nm. The rest of the competitors will likely end up fighting it out on price.
  • At 20nm I expect TSMC to again have the largest share. The only other options I currently know of are Global Foundries for 20nm bulk planar (if they are even still pursuing it) or a 28nm FD-SOI design at Samsung. Depending on whether you position Intel’s 22nm process against 28nm or 20nm, that is another possible option. The bottom line is that TSMC appears to be positioned to dominate this node.
  • At 14nm, Samsung, with Global Foundries as a second source, is first to market and will likely see some benefit from this. Later on TSMC should build share as their committed customers ramp up in the second half of 2015. Intel also has announced customers but they are lower volume. It looks like the big battle here will be TSMC versus Samsung, with Global Foundries and Intel playing smaller roles.
  • 10nm is a wide open battleground right now and the race will be interesting to watch.
  • I have not seen any of the foundries talk about or commit to 14nm FD-SOI. Clearly ST Micro is pursuing it, but I will be very interested to see whether any of the Semiwiki readers are aware of any of the foundries pursuing it.

Moderate growth and minor correction in semiconductors

Moderate growth and minor correction in semiconductors
by Bill Jewell on 07-22-2014 at 1:30 pm

At SEMICON West two weeks ago, Bob Johnson of Gartner presented the outlook for the semiconductor market, semiconductor capital spending, and wafer fab equipment spending. Thanks to Daniel Nenni for providing the link to the SEMI/Gartner Market Symposium presentations at https://sites.google.com/a/semi.org/market-symposium/home/speaker-presentations

Based on Bob Johnson’s presentation and Gartner’s capital spending forecast released the same week, the semiconductor market is expected to show moderate growth in the 2% to 7% range through 2018. The market is projected to pick up slightly to 6.7% growth in 2014 following 5% in 2013. Growth decelerates over the next two years, reaching a low of 2.3% in 2016, then picks up slightly to 3.5% in 2017 and 4.4% in 2018. Semiconductor capital expenditures should experience a more volatile cycle, peaking at 8.9% growth in 2015, declining 3.5% in 2016, and recovering to 8.6% growth in 2018. Spending on fab equipment bounces back to a strong 16% increase in 2014 after an 8% decline in 2013. Fab equipment tracks closely with overall capital spending in 2016 to 2018.


The key drivers of the semiconductor market in the Gartner forecast are some of the usual suspects: smartphones and ultra-mobile PCs (including tablets, clamshells and hybrids). Gartner also expects solid state drives will be a significant growth driver – resulting in NAND flash memory surpassing DRAM in market size in 2016.

Christian Dieseldorff and Daniel Tracy of SEMI presented their forecast for fab equipment and materials through 2015. SEMI expects the fab equipment market will be more volatile than in Gartner’s forecast, growing 20.8% in 2014 after a 13.8% decline in 2013. Silicon wafers are expected to be basically flat in 2014 after a 12.9% decline in 2013. Other semiconductor manufacturing materials are projected to show moderate growth in the 5% to 6% range.

Market Change in US$ (Source: SEMI, July 2014)

|                 | 2013   | 2014  | 2015  |
|-----------------|--------|-------|-------|
| Fab equipment   | -13.8% | 20.8% | 10.8% |
| Silicon wafers  | -12.9% | -0.2% | 2.2%  |
| Other materials | 3.4%   | 6.2%  | 5.0%  |
| Total Materials | -2.9%  | 4.0%  | 4.1%  |

SEMI also showed the major shift over the last 15 years in fab equipment spending by region. In 1999 the Americas, Japan and Europe/Mideast accounted for 71% of spending while Taiwan, South Korea and China accounted for only 26%. In 2015 the Americas, Japan and Europe/Mideast are forecast to make up only 37% of spending while Taiwan, South Korea and China should total 60%.


Improve Your Memory the Sonics Way

Improve Your Memory the Sonics Way
by Paul McLellan on 07-22-2014 at 7:00 am

There is never enough memory bandwidth. Well, occasionally there is, but many SoCs have lots of blocks that communicate through memory, typically off-chip DRAM. In 2001 Sonics created their first solution to this problem with MemMax technology, which was incorporated into their SonicsSX product. This has been used in over 100 designs, including the major video games and the largest manufacturer of digital TVs. These are flagship products.

Since then Sonics has released their SonicsGN product, and today they have incorporated their memory access technology into this newer product. I talked to Drew Wingard, Sonics’s CTO, about it.

The challenge with memory access is not just that you want to keep throughput to the memory high, but that there are lots of blocks communicating with memory that have different requirements. GPUs need high throughput or there will be flicker in the graphics, and communications blocks need timely access or packets will be dropped; these are quasi-real-time deadlines. With the main processor, latency is the big issue: by the time the processor tries to access DRAM, it has already missed in one or more caches and is probably stalled until the memory line arrives.

So there need to be quality of service (QoS) guarantees, not just at the level of overall access to memory but for individual blocks. The alternative is to add buffering so that, for example, instead of a graphics frame appearing right now it appears 10ms later. You won’t notice, but it wastes chip area and power. This is analogous, at the chip level, to the way video streaming services like Netflix buffer at least a few seconds of content to cope with unpredictable internet performance so that the screen hopefully never freezes. The key point is that different memory accesses have different requirements, and they are not issued in the order in which they need to be serviced.

To make things worse, the number of subsystems on an SoC that need access to memory has exploded. QoS is implemented through virtual channels between the blocks on the chip and the off-chip memory. Flow control is handled on a per-channel basis so that one virtual channel cannot block another; for example, bursty traffic on a PCIe interface shouldn’t make the video flicker.
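
Sonics hasn’t published the internals of its virtual channel implementation, but the per-channel flow control idea can be sketched generically. Everything below (class name, method names, credit counts) is hypothetical; the point is only that each channel owns its own credit pool, so back-pressure on one channel cannot stall another:

```python
from collections import deque

class VirtualChannel:
    """Generic sketch of per-channel credit-based flow control.

    Each channel tracks its own downstream buffer credits, so a
    stalled channel (out of credits) queues its own traffic without
    blocking any other channel.
    """
    def __init__(self, name, credits):
        self.name = name
        self.credits = credits       # downstream buffer slots available
        self.queue = deque()         # requests waiting for a credit

    def send(self, request):
        if self.credits > 0:
            self.credits -= 1        # consume a downstream slot
            return request           # forwarded immediately
        self.queue.append(request)   # back-pressure on this channel only
        return None

    def credit_return(self):
        """Downstream freed a slot; release a waiting request if any."""
        if self.queue:
            return self.queue.popleft()  # credit immediately reused
        self.credits += 1
        return None

# A burst on one channel stalls only that channel:
video = VirtualChannel("video", credits=2)
pcie = VirtualChannel("pcie", credits=1)
assert pcie.send("p0") == "p0"
assert pcie.send("p1") is None   # pcie out of credits, p1 queued
assert video.send("v0") == "v0"  # video traffic is unaffected
```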

There are three different traffic classes for channels with different QoS requirements:

  • high (low latency)
  • guaranteed latency (audio, video, isochronous data)
  • best effort

Of course, if there are too many requests then eventually the NoC is being asked to do the impossible. But when the hardware detects that a request is past its guarantee it doesn’t simply block it; it demotes it to best effort.
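
As a rough illustration of this demotion behavior (this is not Sonics’s actual hardware; the class constants and the dict-based request format are my own invention):

```python
# Hypothetical traffic classes, highest priority first
HIGH, GUARANTEED, BEST_EFFORT = 0, 1, 2

def arbitrate(requests, now):
    """Pick the next memory request to service.

    requests: list of dicts with 'cls', 'arrival', and, for
    guaranteed-latency traffic, a 'deadline'. A guaranteed request
    that has already missed its deadline is demoted to best effort
    rather than blocking the arbiter, mirroring the demotion
    behavior described above.
    """
    for r in requests:
        if r["cls"] == GUARANTEED and r.get("deadline", float("inf")) < now:
            r["cls"] = BEST_EFFORT       # past its guarantee: demote
    # Service the highest class first; FIFO within a class
    requests.sort(key=lambda r: (r["cls"], r["arrival"]))
    return requests.pop(0) if requests else None

reqs = [
    {"cls": BEST_EFFORT, "arrival": 0},
    {"cls": GUARANTEED, "deadline": 5, "arrival": 1},
    {"cls": HIGH, "arrival": 2},
]
assert arbitrate(reqs, now=10)["cls"] == HIGH  # low-latency class wins
# The guaranteed request missed its deadline (5 < 10), so it now
# competes as best effort with the original best-effort request.
assert all(r["cls"] == BEST_EFFORT for r in reqs)
```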

Of course doing QoS at the network level is equally valuable for other forms of shared memory. In some applications, where there is not enough bandwidth to off-chip memory, it becomes attractive to add embedded SRAM to offload some of that bandwidth to on-chip memory, but this still needs the same type of end-to-end QoS guarantees.

These new capabilities mean that SonicsGN supports the latest DDR4 and LPDDR4 memories with the highest performance. It also fully supports the multi-threading capabilities of the Open Core Protocol (OCP) interface that reduces contention and increases performance.

The lower power demands of SonicsGN mean that it scales down to very low-power, small-footprint chips for internet of things (IoT) applications such as medical devices and wearables.