
What makes the world smart?
by Pawan Fangaria on 11-25-2014 at 4:00 pm

The simple answer is when everything in the world is smart. But think a little deeper and you will find that it is the continuous drive to make life easier that makes the world smarter day by day – the sky is the limit. In the world of computing, consider the 17th-century era when the human brain served as the computer; it took roughly 200 years before Charles Babbage, regarded as the father of the computer, invented the first mechanical computer in the 19th century. Today we are far more advanced and the pace of innovation is rapid. Technology makes things smarter, life easier, and the pace of doing things faster.

Today we are talking about IoT, which makes the devices around us smart enough to sense and act as we program them, whenever and from wherever we want. What makes this possible? A sensor is not a synonym for smart, but it is the technology that enables smart things to be done. Various types of sensors can detect movement, temperature, pressure, light and more, and activate their devices to do something. We often hear talk of a world with a 'Trillion Sensors' associated with IoT, and we are getting there…

At the 2014 MEC (MIG's MEMS Executive Congress), Chris Wasden, Executive Director of the Sorenson Center for Discovery and Innovation at the University of Utah, talked about the number of internet devices in use: more than 5 billion today, expected to reach 18 billion by 2018, with the number of sensors crossing 1 trillion by 2025. He also talked about platform leaders (device, chip, MEMS, etc.) emerging and MEMS co-creating an industry platform to reach the 1-trillion target.
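
As a quick sanity check on those numbers, here is a back-of-the-envelope sketch of the growth rates they imply. The device figures are the ones quoted above; the present-day sensor count used as a starting point is purely an illustrative assumption.

```python
# Implied compound annual growth rates for the forecasts quoted above.
# Assumes simple annual compounding from 2014.

def implied_cagr(start, end, years):
    """Growth rate needed to go from 'start' to 'end' units in 'years' years."""
    return (end / start) ** (1.0 / years) - 1.0

# >5 billion internet devices in use today (2014) -> 18 billion by 2018
devices = implied_cagr(5e9, 18e9, 2018 - 2014)

# ~10 billion sensors today (illustrative assumption) -> 1 trillion by 2025
sensors = implied_cagr(10e9, 1e12, 2025 - 2014)

print(f"devices: ~{devices:.0%} per year")   # ~38% per year
print(f"sensors: ~{sensors:.0%} per year")   # ~52% per year
```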

Interestingly, foundry leaders are taking a great interest in MEMS. George Liu, Director at TSMC, talked about multiple technology drivers (Personal, Home, City, Automotive and so on) in the context of IoT, as opposed to mainly computers over the last several decades. He recognized the importance of sensors in making devices intelligent and smart, and also the gaps (material, architecture, low power, integration, packaging, capacity and price) that need to be filled to bring MEMS into the mainstream. And how can a foundry contribute to filling those gaps? Of course through supply chain, ROI, scaling and so on, but what caught my attention were sensor and MEMS PDKs and joint process and product development between design houses and the foundry. Wow! This can open up a big opportunity for fabless MEMS development. It reminds me of one of my blogs (What will drive MEMS to drive I-o-T and I-o-P?) which emphasized the standardization that can bring MEMS into volume production, and GLOBALFOUNDRIES pursuing the path of IC fab-like production discipline for MEMS.

Getting to 1T sensors is not a slam dunk; just as EDA enabled fabless IC development, we need highly sophisticated and integrated automation, including modeling, to accelerate MEMS development. In the days to come we will see ever newer MEMS devices, beyond what we can imagine today. But that reality has to be complemented by automated tools which can model MEMS accurately, integrate them at the system or IC level, and verify them accurately and as fast as possible.

Taking a look at David Cook's blog on the Coventor website, where he mentions CoventorWare and MEMS+ for MEMS+IC co-design, modeling, simulation and analysis, and SEMulator3D for virtual fabrication of MEMS devices to cut down on long build-and-test cycles through the fab and improve yield before production, I concur with him that these tools are very apt in today's environment for handling the complexity of a variety of MEMS while still meeting the shrinking time-to-market window. In fact this reminds me of another blog, written by Gunar Lorenz on the new capabilities in MEMS+ 5.0: Breakthrough MEMS Models for System and IC Designers.

In MEMS+ 5.0, Reduced Order Models (ROMs) of MEMS devices (which allow writing out snapshots of sophisticated nonlinear multi-physics models into Verilog-A while protecting the IP) can be exported into Simulink schematics for system designers and circuit schematics for IC designers. Verilog-A ROMs can run up to 100 times faster than full MEMS+ models in Cadence Virtuoso or MATLAB Simulink. Users can decide whether to write out ROMs in Verilog-A for circuit schematics or MROM (a new file format) for Simulink. The environment provides a good set of controls for trading off accuracy against speed. Simulation results from MROMs can be viewed and animated in 3D, just like results from full MEMS+ models.

Smart tools to develop smart MEMS, smart MEMS to develop smart devices, and smart devices to make a smart ecosystem are a must to create a smart world!

More Articles by Pawan Fangaria…..


Coverage Driven Analog Verification
by Paul McLellan on 11-25-2014 at 7:00 am

Ad hoc digital design verification approaches ran out of steam at least a decade ago, when designs became so intractably large that it was no longer feasible to keep track of everything with pen, paper and Excel. But analog design has remained largely ad hoc to this day: the designer runs SPICE, looks at the waveforms that come out, and decides whether or not they are acceptable. Now, even in analog design, this sort of undisciplined approach is starting to move away from the traditional methodology in the diagram below.


There are a number of reasons for this. One is that digital design (and the unit-test approach to software development) has such clear advantages that it would be silly for analog design not to piggyback on that experience. At the same time, more and more analog, especially in advanced processes, is relatively simple analog circuitry with very complex digital logic used to trim it, sometimes called digitally controlled analog or DCA, meaning that the analog and digital aspects of the design need to be verified together.

Digital and analog design are complementary in some ways. Digital is straightforward to design (at least from the RTL onwards we have a working methodology) but verification is very hard due to the impossibly large state space. Analog, on the other hand, is easy to specify, but actually designing the blocks is extremely hard. As a result, digital designers adopted coverage driven verification (CDV), based on assertions, property checking and verification planning, to ensure that verification cycles are not wasted on things that have already been verified.


With the increased complexity of analog designs and IP, driven by the growing number of features and functions as well as the large variation of device characteristics in nanometer technologies, verifying that a design meets all specifications across all process corners has become an intractable problem from the perspective of debugging, managing, tracking, and meeting verification goals. Implementing a CDV methodology for analog designs can turn analog design and verification into a standard, process-based method whose progress can be tracked and measured. The aim is to extend the common traits of CDV as used in digital verification to analog verification, bringing standardization to the analog verification process.
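
To make the idea concrete, here is a minimal sketch (the names and structure are illustrative, not taken from Questa, ICanalyst, or any other tool) of tracking which specification/corner combinations have been simulated and which pass, which is the kind of measurable coverage metric CDV brings to analog:

```python
# Minimal illustration of coverage tracking for analog verification:
# each (specification, process corner) pair is a coverage point that a
# simulation either hits (pass/fail recorded) or leaves as a hole.
# Spec and corner names are illustrative only.

specs = ["gain", "bandwidth", "offset", "psrr"]
corners = ["tt", "ff", "ss", "fs", "sf"]

results = {}  # (spec, corner) -> True if the check passed, False if it failed

def record(spec, corner, passed):
    results[(spec, corner)] = passed

# A few simulations done so far:
record("gain", "tt", True)
record("gain", "ss", True)
record("bandwidth", "tt", False)

total = len(specs) * len(corners)
holes = [(s, c) for s in specs for c in corners if (s, c) not in results]

print(f"simulated: {len(results)}/{total} coverage points")
print(f"passing:   {sum(results.values())}/{total}")
print(f"next targets for the verification plan: {holes[:3]} ...")
```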

A further driver is that standards for aerospace and automotive, such as ISO 26262, will no longer accept an ad hoc approach to requirements tracking. This too drives towards a much more formal approach:

  • specification of all requirements, whether analog or digital
  • a test plan and test cases to verify whether each requirement is met
  • metrics for measuring coverage of the tests in the test plan
  • a system for tracking requirements to ensure that they are all met and to avoid unnecessary duplicate tests


Mentor has a new white paper, A Complete Analog Design Flow for Verification Planning and Requirement Tracking, by Atul Pandey, Guido Clemens and Marius Sida. The white paper describes building a flow for CDV based on ICanalyst, Questa and ReqTracer.

You can download the whitepaper here.


More articles by Paul McLellan…


Where have all the semiconductor drivers gone?
by Bill Jewell on 11-24-2014 at 11:30 pm

Tablets and smartphones have been key drivers of electronics and semiconductor growth for the last few years. However, the growth rates for these devices are slowing as they become more prevalent. Tablet shipments are expected to reach 229 million units in 2014, according to Gartner, equal to 73% of PC units. IDC projects smartphones will exceed 1.2 billion units in 2014, accounting for about two-thirds of total mobile phones. As shown in the chart below, year-over-year growth in tablet shipments has slowed to about 11% for the last two quarters, from the 90% to 160% range in 2012 and early 2013. Smartphone growth has decelerated from the 40% to 50% range through 2012 and most of 2013 to the 20% to 30% range for the last four quarters.


Despite the slowing growth of tablets and smartphones, new categories of devices are emerging to drive growth. Gartner has identified a segment it calls ultramobile premium PCs: devices which have the functionality of PCs in lightweight, smaller packages similar to tablets. With 71% growth in 2015, Gartner expects these devices to drive 3.6% growth in total PCs in 2015 despite a 5.6% decline in traditional PCs. Gartner projects the tablet market will grow 19% in 2015. Combining tablets and ultramobile premium PCs results in 32% growth in 2015.

| Annual change in units | 2013-14 | 2014-15 | CAGR 2014-18 | Source |
|---|---|---|---|---|
| Traditional PC | -6.6% | -5.6% | | Gartner, Oct. 2014 |
| Ultramobile Premium PC | 75% | 71% | | Gartner, Oct. 2014 |
| Total PC | -1.1% | 3.6% | | Gartner, Oct. 2014 |
| Tablet | 10.6% | 19.1% | | Gartner, Oct. 2014 |
| Tablet + Ultramobile Premium PC | 23% | 32% | | Gartner, Oct. 2014 |
| Regular smartphone | 12.8% | | 3.7% | IDC, Sep. 2014 |
| Phablet | 210% | | 36% | IDC, Sep. 2014 |
| Total Smartphone | 23.8% | | 10.1% | IDC, Sep. 2014 |
| Tablet | 13% | | 6.8% | IDC, Sep. 2014 |
| Tablet + Phablet | 55% | | 22% | IDC, Sep. 2014 |

IDC has segmented out a high growth product area in smartphones which it calls phablets (combining phone with tablet, although I doubt any self-respecting supplier will use the term for its products). Phablets are smartphones with screens from 5.5 inches to 7.0 inches, thus displacing many of the smaller tablets. IDC forecasts the compound annual growth rate (CAGR) for phablets from 2014 to 2018 will be 36%, driving the total smartphone CAGR to 10.1% despite only a 3.7% CAGR for regular smartphones. IDC projects the CAGR for tablets from 2014 to 2018 will be 6.8%. Combining tablets with phablets drives a CAGR of 22%.
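
A quick illustration of how a small, fast-growing segment lifts the blended number: with the IDC CAGRs quoted above and an assumed 2014 unit mix (the mix below is my own illustrative assumption, not an IDC figure), the combined smartphone CAGR works out close to the 10.1% forecast.

```python
# Blended CAGR from two sub-segments. The 3.7% and 36% CAGRs are from the IDC
# forecast quoted above; the 2014 unit mix is an illustrative assumption.

def grow(units, cagr, years):
    return units * (1 + cagr) ** years

regular_2014, phablet_2014 = 0.86, 0.14   # assumed shares of 2014 smartphone units
years = 4                                  # 2014 -> 2018

total_2018 = grow(regular_2014, 0.037, years) + grow(phablet_2014, 0.36, years)
blended_cagr = total_2018 ** (1 / years) - 1   # total_2014 is normalized to 1.0

print(f"blended smartphone CAGR: {blended_cagr:.1%}")   # roughly 10% with this mix
```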

The ultimate winner in the merging of PCs, tablets and smartphones remains to be determined. It is likely that several categories of devices will continue to claim various segments of the market. Most business users will continue to need the full functionality of a PC, but may compromise with an ultramobile to get the portability and flexibility of a tablet. Many young consumers use their smartphones as their primary communication and computing device, but may like the tablet-like functions of a phablet. In many emerging markets, consumers cannot afford multiple devices and will choose the one device which best fits their needs.



SIM cards and avoiding stranded IoT assets
by Don Dingee on 11-24-2014 at 4:00 pm

Ever since pennants, drums, smoke signals, and horses fell out of favor to more advanced communication technology, network operators have struggled to find balance. With too few subscribers interested, infrastructure investments fail outright. With just the right number of paying users, revenue streams provide profit and the ability to invest in growth. With too many connections, the network clogs and subscribers curtail use or flee to alternatives.

In the early days of the electrical telegraph, three innovations provided the breakthrough. One-wire systems made pulling cable relatively inexpensive compared to early six-wire attempts. Relays provided the signal boost needed to span more than a few kilometers. Morse code created a compact, standard message format.

With popularity came the next challenge. Operators restricted telegraph messages to a 10 word limit, with overage charges for verboseness. It wasn’t entirely because they were money-grubbing capitalist pigs; there was a practical reason. Only one message could be on a given segment of wire, so longer messages meant increased wait times for network access. Advances in the harmonic telegraph – an early use of frequency division multiplexing – and switching stations providing multiple routes to get to a destination helped.

A similar problem arose with the early generation mobile telephone, “0G” in telecom parlance. There was one big transmit tower with 12 frequency slots for a given city. Perhaps there were a couple thousand people with radios, but only 12 could get on the network at a time. Waits were typically 30 minutes to place a call. Part of the problem was spectrum allocation, and part was computational power needed to encode and decode more connections.

Video content drove 4G, and even with cell tower proliferation, more spectrum, improvements in DSP, and denser LTE encoding, we still don’t have enough bandwidth to keep up. Wi-Fi offload saves users from hitting their data plan cap, but it also keeps the network from total congestion.

Now, all these IoT devices show up. Fixed sensor clusters can use wired gateways. Personal clusters based on smartphones hang on the 4G network. Agile clusters – think connected cars, trucks, buses, trains, airplanes, ships, anything that moves – also rely on a cellular M2M gateway. Wi-Fi does not address mobility, and it does not look like WiMAX will achieve wide scale deployment. With 2G networks sunsetting, individual IoT devices may not need 3G or 4G bandwidth for data, but nonetheless they consume spectrum for a connection to the cloud.

Devices resembling connected cars also need to be portable across markets, for original sale, use, and resale. The solution to that in the mobile phone space was the SIM card, and some now think we need a similar idea for IoT devices. SIMs carry the international mobile subscriber identity (IMSI) and an authentication key, plus other info.

courtesy GSMA

A SIM would keep unauthorized IoT devices off the network and enable services network operators can monetize. The user-installable SIM form factor seen in smartphones is less than ideal for IoT use. A solderable, embedded SIM form factor such as the embedded universal integrated circuit card (eUICC) can add SIM-style functions to IoT devices. For instance, there is a Gemalto implementation of an MFF2 machine identity module.

Gemalto MFF2 package for embedded SIM

One-time programmable (OTP) memory would play a key role in embedded SIMs, just as in mobile SIM cards. It would prevent tampering and enable secure provisioning of IoT devices. However, in cases such as the connected car, transition of service to a new owner is important. OTP can emulate multiple-time programmability by using replicated blocks and pointers, allowing keys to be reprogrammed. Or creative applications could emerge, such as a device locked for a contract period and then unlocked for its remaining life.
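
Here is a minimal sketch of that OTP trick (purely illustrative; a real secure element would add integrity checks and anti-rollback protection): keys are burned into replicated one-time slots, and the most recently written slot acts as the pointer to the active key.

```python
# Emulating multiple-time programmability (MTP) on one-time-programmable (OTP)
# memory: each slot can be written exactly once; the active key is the most
# recently burned slot. Illustrative only.

class OtpKeyStore:
    def __init__(self, slots=8):
        self.slots = [None] * slots          # unburned slots start empty

    def reprogram(self, new_key):
        """'Update' the key by burning the next unused OTP slot."""
        for i, value in enumerate(self.slots):
            if value is None:
                self.slots[i] = new_key
                return i
        raise RuntimeError("all OTP slots consumed; no further updates possible")

    def current_key(self):
        written = [k for k in self.slots if k is not None]
        if not written:
            raise RuntimeError("device not yet provisioned")
        return written[-1]

store = OtpKeyStore()
store.reprogram(b"operator-A-key")   # initial provisioning
store.reprogram(b"operator-B-key")   # e.g. the car is resold, service transfers
print(store.current_key())           # b'operator-B-key'
```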

The GSMA, in conjunction with Beecham Research Ltd., has produced a study on the use cases for eUICC in M2M and IoT markets. Their points on connected cars are well taken; proprietary solutions could slow adoption and reduce portability upon resale or service termination, perhaps to the point of leaving allegedly connected assets "stranded" off a network. The narrative and reasoning in this study are worth a look, even if the forecasts don't come to pass as shown. Also worth noting: Apple is all over eUICC.

If network operators are going to embrace IoT devices to the degree people are projecting, the use case for the entire lifecycle needs careful consideration. It would be a shame to lose the benefits of connectivity of a thermostat, car, or other long life IoT device over not incorporating ID/key reprogrammability or unlocking in some form. Embedded SIMs may be part of the answer.



Mentor Aims to Improve Yield and Production Ramp for PCBs
by Tom Simon on 11-24-2014 at 7:00 am

Getting a printed circuit board from design into production presents one of the biggest challenges in successfully launching a product. The designer's job is to anticipate issues that can adversely affect PCB fabrication and assembly. Design rules and component libraries go part of the way, but there is a thicket of things that determine how many iterations will be needed with the manufacturers and how quickly volume production can start at high yield.

I attended a presentation recently that was given by Julian Coates from Mentor’s Valor Division on their NPI (New Product Introduction) offering. Valor products are well known and widely used by PCB fab and assembly houses. But very often designers rely on their vendors to run it to provide feedback. This was confirmed by a friend of mine whose company does contract board design, “I just let them run it and tell me if there are any issues.” But he conceded that Valor provided much needed information to ensure manufacturability. An example of one such issue is where a net touches itself and creates clearance issues for reflow soldering. It’s not a short, but can cause bad solder connections. Valor can spot these kinds of issues easily.

Valor NPI is intended for designers and NPI engineers to run during the design phase. By pulling necessary changes forward in the design process, it reduces costs. Mentor’s Julian Coates cites a study that shows the use of Valor NPI can reduce the average number of design to fab house iterations from 2.8 to 1.5. Given that each turn will have associated costs, this represents a big savings in money and time.

Mentor’s challenge is to get designers and NPI engineers to run the tool themselves. This means leaving Allegro, for instance, and getting into Valor. Mentor explained that they have interactive integration to make this convenient.

Often there are multiple sources for parts and multiple fab and assembly providers. Valor NPI can help with each of these. The PCB editor parts library actually contains just the shapes for the copper and board, not actual part geometry. If you dig into the inventory parts that might be used for a given SMT device, you will see subtle variations in terminal geometry; not all 0402 10K resistors are exactly the same. Locking in to just one supplier could constrain the supply chain. Valor NPI addresses this by complementing the PCB editor library with its physical parts library containing thousands of actual device dimensions. This means that for all alternative devices it is possible to see exactly how the pins will contact the PCB, showing whether the pad geometry will work well for all possible inventory parts. The same goes for pad position as well as size. Badly positioned pins on pads cause bad solder joints and are a leading cause of PCB failure.

Lastly, one of the examples I appreciated most was the case where the solder stencil opening for a pin touched a nearby net. Valor flags this kind of issue, avoiding solder bridges that lead to design failure.

DFM rules vary between fab and assembly suppliers. Valor NPI can maintain different DFM rules for each vendor. One of the attendees at the event mentioned that they always use multiple vendors for manufacturing. Being able to run all the vendors’ DFM rules seems like a good capability to ensure manufacturability during design. Valor NPI also adds a qualitative aspect to DFM. It does not just provide pass/fail for DFM checks. It encourages practices to boost yield by advising when certain dimensions are reaching critical values. You might have a reason for 6 mil spacing in specific locations, but you probably want to avoid it where you can even if it is OK within the design rules. Valor NPI will give you a histogram showing red and yellow rule warnings, as well as black hard violations.
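
A rough sketch of that qualitative idea (the thresholds below are invented for illustration and are not any vendor's actual DFM rules): spacings are binned into hard violations, yield-risk warnings, and comfortable margins, and the bins can then be counted into a histogram.

```python
# Qualitative DFM binning: instead of a single pass/fail limit, each measured
# spacing is classified by how close it is to a critical value. Thresholds are
# illustrative only.

from collections import Counter

HARD_LIMIT_MIL = 4.0    # below this: hard violation ("black")
WARN_LIMIT_MIL = 6.0    # below this: yield-risk warning ("red")
WATCH_LIMIT_MIL = 8.0   # below this: worth reviewing ("yellow")

def classify(spacing_mil):
    if spacing_mil < HARD_LIMIT_MIL:
        return "violation"
    if spacing_mil < WARN_LIMIT_MIL:
        return "red"
    if spacing_mil < WATCH_LIMIT_MIL:
        return "yellow"
    return "ok"

measured_spacings_mil = [3.5, 5.0, 5.5, 6.0, 6.5, 7.9, 9.0, 12.0]
print(Counter(classify(s) for s in measured_spacings_mil))
# e.g. Counter({'yellow': 3, 'red': 2, 'ok': 2, 'violation': 1})
```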

Valor NPI also lets designers do their own panelization. It handles all the tricky issues with outline milling, mouse bites, and v-grooves. In addition to fiducials, rails can be added if they are deemed necessary. The Valor parts library is helpful here because it has the physical dimensions for connectors and potentially overhanging components that would cause issues during assembly.

PCB manufacturability is a huge issue. Mentor has impressive expertise in PCB supply chain tools. Valor NPI applies this expertise to provide a compelling solution for the design side of the business. The only question is if they can convince designers and NPI engineers to adopt their proposed process. It seems that the business case for doing so is strong.

They will let you try it free for 5 days. More information about Mentor’s Valor NPI can be found here.


Codasip and Coby and Czech
by Paul McLellan on 11-24-2014 at 12:00 am

At ARM TechCon I ran into Coby Hanoch, who has just been appointed VP of worldwide sales at a company I'd not previously heard of called Codasip. As the name implies, they supply code, and ASIPs. Well, actually IP source code and ASIP tools. The company is based in Brno (pronounced pretty much like Bruno) in the Czech Republic, with a sales presence in the US, EU, Israel, Japan, China and Korea. They have actually been working on the technology in an incubator since 2006 but were spun out as a venture-funded company in Q1 this year.

Coby was most recently at Jasper Design Automation where he ran worldwide sales. Of course Jasper was recently acquired by Cadence and so Coby was surplus to requirements.

ASIP stands for application specific instruction-set processor and they fill the gap between standard microprocessors such as ARM or MIPS, and writing RTL (or using HLS) to implement the functionality. You get close to the flexibility of a software-based solution with close to the performance of doing the RTL. ASIPs are typically used for doing very specific functions that require unique performance capabilities that a standard microprocessor cannot deliver, typically either ultra-low-power or else very high performance.


For example, the "OK Google" engine above. A very low-power, always-on ASIP with very limited detection capabilities fronts a second ASIP with a full speech recognition engine to understand the request. Then, depending on the request, other parts of the system are woken up (such as a high-powered multi-core processor) to perform the task.

Codasip have a language, CodAL, for processor description. It supports all processor architectures such as RISC, CISC, DSP and VLIW. This is then run through Codasip Studio to generate all the views required to actually use the processor:

  • Synthesizable RTL
  • UVM testbench
  • Compiler (using LLVM)
  • Assembler
  • Debugger
  • Virtual platform
  • Profiler
  • And more…

IoT, wearable, automotive and medical products require many specific processors which provide the best performance with minimal power consumption. Codasip's profiler enables the designer to tailor the architecture and optimize the power-performance-area equation. They also provide generic IP modules for RISC, DSP and VLIW which users can use to jumpstart a design, adding, removing or modifying them with total flexibility so they are optimal for their needs. They are focused on leveraging standard technologies such as LLVM, GNU, QEMU, etc., so the generated elements can be integrated with the rest of the customer's environment.


As an example, look at Sobel edge detection with grayscale output. This takes in a color picture, finds all the edges and outputs a black-and-white version with the edges highlighted. By introducing a 128-bit SIMD extension they immediately get a 4X speedup compared to optimization done entirely at the software level.
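
For reference, this is roughly the computation being accelerated; a plain NumPy sketch of the Sobel kernel (scalar inner loop, illustration only), which makes it easy to see why a wide SIMD datapath helps:

```python
import numpy as np

# Sobel kernels for the horizontal and vertical gradients
KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float32)
KY = KX.T

def sobel_edges(gray):
    """gray: 2-D intensity array; returns the gradient magnitude per pixel."""
    h, w = gray.shape
    out = np.zeros_like(gray, dtype=np.float32)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = gray[y - 1:y + 2, x - 1:x + 2]
            out[y, x] = np.hypot(np.sum(KX * window), np.sum(KY * window))
    return out

rgb = np.random.rand(64, 64, 3).astype(np.float32)   # stand-in for a real photo
gray = rgb.mean(axis=2)                               # naive grayscale conversion
edges = sobel_edges(gray)                             # bright where edges are
print(edges.shape, float(edges.max()))
```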

Once the architect defines the Instruction Accurate model, the SW/firmware team can immediately compile their code, run it on the emulator, and debug it, even before the hardware team has developed the microarchitecture and has any RTL. The architects can profile the model and add or remove instructions, registers and memory elements to optimize the architecture. This means that software development and SoC development can proceed in parallel, pulling in the design schedule significantly.

The Codasip website is here.


More articles by Paul McLellan…


Leakage Current TCAD Calibration in a-Si TFTs
by Daniel Payne on 11-23-2014 at 4:00 pm

Two weeks ago I blogged about amorphous silicon and how that material is well-suited for designing TFTs. Today I'm following up after watching the archived webinar presented by Nam-Kyun Tak of Silvaco. After clicking on that link you'll be brought to a brief sign-up page and can then watch the archived webinar in your web browser. This info is most appropriate for TCAD engineers who want to predict semiconductor behavior and gain insights before actually fabricating a new technology.


More Apple A9 Ridiculousness!
by Daniel Nenni on 11-23-2014 at 8:30 am

File this one under funny things journalists are paid to say. Last week the Korea Times reported that Apple had “designated” Samsung as the primary supplier of the next Apple SoC. In response, the Chinese Commercial Times reported that TSMC is to supply the Apple A9 chip despite competition from Samsung. Since SemiWiki readers already know the score on this let me just highlight the funny parts:

“Samsung Electronics agreed with Apple to produce application processors (APs) from next year for iPhones and iPads, sources said Monday. The agreement means Samsung will become a primary supplier of APs to Apple, pushing its chief Taiwanese rival TSMC back to second place. From 2016, the company will supply 80 percent of APs used in Apple devices, and TSMC the remainder.”

The Samsung and TSMC FinFET processes are not compatible, so I do not see Apple, or anybody else for that matter, splitting a bleeding-edge mobile chip among foundries. It would take a serious amount of experienced design effort, and with the current SoC time-to-market pressures those resources are not treated lightly.

Also Read: Who is REALLY Using TSMC 16FF+?

“TSMC replaced Samsung in 2013, becoming the main manufacturer of Apple’s A8 processor, used in the iPhone 6 and the iPhone 6 Plus. Samsung only produced 30% of A8 processors, a market insider said.”

Again, Apple did not split the manufacturing of these chips. According to teardowns, the Apple A8 (iPhone) and the A8X (iPad) are both TSMC 20nm. According to Samsung, they do not offer a 20nm foundry process, nor have I seen a Samsung 20nm SoC. In fact, I was told a while back that Samsung would skip the 20nm planar node to accelerate their 14nm development (which is 20nm FinFET).

“A US investment banker confirmed on Nov. 19 that TSMC is to manufacture a large portion of the A9 processors in 2015, although whether the process technology will be 20-nanometer or 16-nanometer is still unclear. Samsung will supply a smaller proportion of the processors, the paper reported.”

I wonder what investments this US banker has?

“The yield rate of TSMC’s 20-nanometer process technology has reached 80%, and its 16-nanometer FinFET process technology 90%. The two processes are expected to account for 1% of revenue in the first quarter of 2015 and 10% in the fourth quarter of 2015, said market analyst Randy Abrams from Credit Suisse Taiwan.”

I know Randy, I’m pretty sure this is a misquote. I will let you know after my next Taiwan trip. Unfortunately it has been cut and pasted around the internet already by those who need to believe it is true.

And finally, according to Barron’s:

Investors who have been put off by delays in production ramps for Intel’s latest chips should focus on the broader picture. Intel has a 3½-year lead over rivals like Taiwan Semiconductor Manufacturing, IBM, and Samsung Electronics in cutting-edge chip-making techniques, says Pitzer…

If you must give them a click:

http://www.koreatimes.co.kr/www/news/tech/2014/11/133_168259.html

http://www.wantchinatimes.com/news-subclass-cnt.aspx?id=20141121000059&cid=1206

http://online.barrons.com/articles/intel-has-30-upside-1416633838

More Articles by Daniel Nenni…..


Not Mobile, Automotive to See Max Semiconductor Growth!
by Pawan Fangaria on 11-22-2014 at 9:00 am

There is no denying that the mobile market has almost matured, so growth in the semiconductor industry has to pick up somewhere else. Although worldwide cellphone subscriptions are expected to exceed the world population in 2015 (they already do in many parts of Europe) and keep growing for some time (even as the CAGR in unique subscriptions declines to 3% by 2018, see report), the rise in revenue CAGR is going to be much higher in the automotive semiconductor segment.

According to an IC Insights forecast report, over 2013-2018 the automotive IC segment is expected to see a 10.8% CAGR, much higher than other segments including communications, industrial and consumer. While cars and other vehicles will get more and more infotainment, ADAS (Advanced Driver Assistance Systems) and safety systems (the National Highway Traffic Safety Administration has mandated backup cameras in all new vehicles), communication systems such as IoT devices and vehicle-to-vehicle links will also go into vehicles. Growth in automotive ICs has already picked up; in 2014 the segment is expected to grow by ~15% to $21.7B, compared to just 1% in 2013.

What do we infer from here? If I look at the top 3 automotive IC players in 2012 and 2013, they have remained in the same order: Renesas, Infineon and ST Micro. Interestingly, they are in the 2014 top 20 semiconductor revenue list as well, but there ST is above Renesas (ST at #10, Renesas at #11, although with a very minor difference in revenue) and Infineon is at #13, improved from #14 in 2013. Let's look at some other interesting data.

The top drivers in the automotive segment are analog ICs and MCUs, along with the growing presence of sensors and power management ICs. I see other automotive players in the top 20 list, such as Texas Instruments and NXP, putting greater emphasis on these automotive semiconductor areas. TI had the largest increase in automotive revenue in 2013, at 21%. In the 2014 top 20 list, TI has improved its rank to #7. Also, given that the APAC region is forecast to be the largest market for automotive ICs (at ~20% CAGR), UMC is stepping up its manufacturing of automotive chips and already has plans to supply chips to Japan's automotive industry.

Does that mean we are going to see some changing equations over the next couple of years? The automotive players among the 2014 top 20 semiconductor companies (Renesas, Infineon, ST, TI, NXP) have either improved their rank or held where they were in 2013. And the top 3 automotive IC companies have retained their order since 2012. This signifies that, due to the longer life cycle of automotive products, these companies are bound to stay in place and improve further with accelerated growth in that sector.

By the way, wireless and consumer sector players such as Qualcomm, Intel and NVIDIA will also benefit from car electronics and infotainment systems. The automotive memory IC market is also expected to more than double, from $2B to $4.2B, by 2018.

Although the automotive segment will grow from a lower revenue base than other segments, its high growth rate can help the already established semiconductor players in the top 20 renew their fortunes. Are we going to see any major change in rankings?

More Articles by Pawan Fangaria…..


Intel 2014 Investor Meeting and 14nm Status
by Scotten Jones on 11-21-2014 at 6:30 pm

Intel's investor meeting was held yesterday, and for me the most interesting presentation is Bill Holt's. The presentations are available on the Intel website: Intel Corporation – Presentations Material 2014. Here is the 2013 version of this presentation: Intel Corporation – Presentations Materials 2013. First off I want to vent a little: what is up with the European paper size? Does Intel have a secret plan to get everyone in the US to buy new printers?

On slides 3, 4 and 5, the 14nm yields are shown versus 22nm. The good news for Intel is the yields are finally looking pretty good; the bad news is it has taken a long time to get there. I find it interesting that TSMC is reportedly already getting good yields on their 16nm process suggesting their 16nm/14nm development has proceeded more smoothly than Intel’s. From what I have heard Samsung and Global Foundries continue to struggle with 14nm yields.

On slide 7, 14nm pitches of 42nm for STI, 70nm for gate (GP) and 52nm for M1 (M1P) are presented. This is in contrast to TSMC's pitches of 48nm for STI, 90nm for GP and 64nm for M1P as reported at IEDM 2013. This gives a GP x M1P of 3,640nm² for Intel and 5,760nm² for TSMC. I have two observations on this:

1. This compares Intel's 14nm to TSMC 16FF. At the 2014 IEDM on December 15, 2014, TSMC is scheduled to present what looks to be 16FF+. It will be interesting to see what, if any, pitch improvements they report for 16FF+ versus 16FF and how that compares to Intel. The TSMC 16FF GP and M1P are the same as 20SOC; at the 2014 TSMC technology symposium, 16FF+ was reported to offer a 15% improvement over 20SOC, so perhaps GP x M1P is something like 4,896nm². I should note here that I have had someone who should know what they are talking about tell me that 16FF+ does not improve density versus 16FF.

2. The BEOL pitches for Intel's 14nm process have started to come out. My understanding is there are 8 layers of 52nm-pitch metal produced with Self-Aligned Double Patterning (SADP), followed by 80nm- and 160nm-pitch layers with air gaps, and finally 3 layers of presumably large-pitch metal. The use of SADP for the first 8 metal layers means they are 1D metal layers and the design rules are very restrictive. It seems unlikely to me that a foundry could get away with such restrictive rules, and this is a key part of why Intel can produce smaller metal pitches than anyone else (more on the metal layers later).
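
For reference, the pitch arithmetic behind those numbers (the 16FF+ value simply applies the reported 15% improvement to the 16FF product, matching the estimate above; GP x M1P is only a rough density proxy):

```python
# GP x M1P as a crude logic-density proxy, using the pitches quoted above.

intel_14nm = {"GP": 70, "M1P": 52}   # nm, from Intel's slide 7
tsmc_16ff  = {"GP": 90, "M1P": 64}   # nm, as reported at IEDM 2013

def gp_x_m1p(p):
    return p["GP"] * p["M1P"]        # nm^2

print("Intel 14nm :", gp_x_m1p(intel_14nm), "nm^2")         # 3,640 nm^2
print("TSMC 16FF  :", gp_x_m1p(tsmc_16ff), "nm^2")          # 5,760 nm^2
print("TSMC 16FF+ :", gp_x_m1p(tsmc_16ff) * 0.85, "nm^2")   # ~4,896 nm^2 if 15% better
print("16FF / Intel 14nm ratio:", round(gp_x_m1p(tsmc_16ff) / gp_x_m1p(intel_14nm), 2))
```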

Slide 8 shows a 0.54x scaling in SRAM size, an impressive achievement!

Slides 9 through 14 present fin scaling and show scaling to a smaller pitch while simultaneously increasing the fin height. This is another impressive achievement.

Slide 15 presents Intel's leadership in introducing new process technologies to the industry. Once again these achievements are impressive, and they illustrate how much Intel has helped to drive the industry forward over the last decade. The key question this slide doesn't address is what comes next and whether Intel will maintain its lead. TSMC, Samsung and Global Foundries are all ramping up their FinFET processes and have essentially "caught up" on that innovation. In my opinion the next innovation will be Germanium or Indium Gallium Arsenide fins, and it will be interesting to see who gets there first.

Slides 18 and 19 present the 14nm interconnect. I have to say I am very surprised by the 13 layers of interconnect at 14nm (the number of metal layers isn't listed here and comes from other sources). Intel had 6 metal layers at 180nm and 130nm while transitioning from aluminum to copper metallization; at 90nm they had 7 metal layers, 8 metal layers at 65nm, and then 9 metal layers at 45nm, 32nm and 22nm. My expectation at 14nm was 10 metal layers. What I think happened is that the use of SADP to produce the 52nm critical metal pitches forced 1D metal and a lot of metal layers to accomplish the required interconnect. My guess is:

  • M1 through M8 are alternating x- and y-direction metal layers, all serving short signal runs.
  • M9 and M10 reportedly have air gaps, and presumably these are longer signal runs where the air gaps are needed to lower RC delay.
  • M11, M12 and M13 are presumably large-pitch metal runs for power and ground.

Slide 20 is a new version of the "infamous" slide showing Intel's density lead. In the past the x-axis was node, but it has now been switched to time. Now, instead of Intel lagging and then pulling ahead, they consistently lead. The following is my own version of this slide, comparing Intel and TSMC actual processes and then forecasting TSMC 16FF+ with a 15% shrink and 10nm with a 2.2x density improvement based on early guidance from the TSMC technology symposium (these are updated projections since my "Who will lead at 10nm" post). For Intel I used my own trend-projected 10nm numbers.

Intel versus TSMC GP x M1P by year of technology introduction.

As can be seen from this plot, Intel consistently leads in density. The problem I have with this analysis is that until recently Intel used its processes exclusively for microprocessors (MPUs), which have a much narrower set of performance requirements than processes for foundry use. Intel only had to focus on fast transistors, while TSMC has to provide processes that meet a wide variety of different requirements. At 22nm Intel's MPU and foundry processes have the same pitches for GP and M1P, but will that hold at 14nm, and if so, how many customers will accept the restrictive design rules required for SADP metal layers?

Slides 22 and 23 show Moore's Law is alive and well, at least at Intel. The cost per wafer goes up with each generation, but the die shrinks more than make up for it. As we have entered the multi-patterning era, wafer costs are rising faster than we have historically seen, but at least at Intel the die shrinks are overcoming this.

Some observers believe that at the foundries the increase in wafer cost at 20nm due to multi-patterning has overwhelmed the die shrink and that die costs have risen. I do not believe this; rather, I think the die cost reductions have slowed. At the 16nm/14nm node the foundries' wafer costs will again increase (although the use of 20nm back-end pitches mitigates this to some extent) and the shrinks are minimal, so 16nm/14nm die cost reductions will be minimal at best. At 10nm I expect foundries to deliver competitive cost-per-die reductions as we get back to full shrinks; in fact TSMC has guided a 2.2x increase in density, and wafer costs from 16nm to 10nm at TSMC are not going to go up anywhere near 2.2x!
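
To make the trade-off concrete, here is a minimal sketch with made-up numbers (none of the wafer costs or die areas below come from Intel, TSMC, or the slides): a node that raises wafer cost by 30% but delivers a ~0.55x die-area shrink still lowers cost per die.

```python
import math

def die_cost(wafer_cost, die_area_mm2, wafer_diameter_mm=300):
    """Crude cost per die: wafer cost divided by gross die per wafer (ignores edge loss and yield)."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    return wafer_cost / (wafer_area / die_area_mm2)

old = die_cost(wafer_cost=5000, die_area_mm2=100)   # previous node (hypothetical)
new = die_cost(wafer_cost=6500, die_area_mm2=55)    # +30% wafer cost, 0.55x die area

print(f"die cost change: {new / old - 1:+.0%}")      # about -28% with these assumptions
```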

All in all, Intel continues to deliver impressive technological progress and to do it economically. Comparing Intel with TSMC (or any foundry) on device area is not really a valid comparison until Intel is a substantial foundry player and the processes being compared are both used in the foundry space.

I am still going through all of the presentations, but I also wanted to comment on slide 51 of Stacy Smith's presentation, which shows Intel's fab capacity and demand coming back into balance. That is a really big deal after the low levels of loading seen in 2012 and 2013.