
Simulation of Novel TFT Devices

by admin on 01-27-2014 at 5:45 pm

Traditionally, logic devices built from thin-film transistors (TFTs) have used one device type: either an NMOS a-Si:H (hydrogenated amorphous silicon) TFT or a PMOS organic device. Recently, a-Si:H NMOS and pentacene PMOS TFTs have been integrated into complementary logic structures similar to CMOS. This, in turn, creates the problem of how to model and simulate these structures.

This is a special case of something that Silvaco does all the time, since it has a full line of TCAD products along with modeling and circuit simulation tools. The basic idea is to use the Silvaco Athena and Atlas TCAD tools to model the process used to build the TFT devices and to perform process and device simulation. The results are then converted into the Utmost IV data format, from which models can be extracted for use in circuit simulation to predict performance. The TCAD tools close the gap between technology development (TD) process engineers and designers, two worlds with very different knowledge bases.


TFT circuits with an all-NMOS (or all-PMOS) topology have large static power dissipation due to a direct path from supply to ground, just as in the days of NMOS and HMOS process technologies before the world went completely CMOS for logic. This power dissipation means such circuits cannot be used in battery-operated portable systems. So, just as the industry did with CMOS, an a-Si:H NMOS TFT can be integrated with a pentacene PMOS TFT in a complementary structure to form a hybrid inverter circuit. The TCAD data is converted to Utmost IV format and model extraction is done in Utmost IV. For the pentacene-based PMOS TFT a UOTFT model was used; for the NMOS a-Si:H TFT, an RPI a-Si TFT model. The extracted SPICE models are then used in the hybrid inverter circuit and in a ring oscillator containing five of them.


And yes, the ring oscillator oscillates.

So that is a lot of buzzwords and initials, but the important ideas are fairly simple. TCAD simulation of these novel devices was done and the output data was converted to Utmost IV format. From this data, SPICE models for the a-Si:H TFT (level=35) and the organic TFT (the UOTFT model, level=37) were extracted and used to successfully simulate a five-stage ring oscillator built from the hybrid inverter. Basically, starting from the details of the process, SPICE models are automatically generated and then used for circuit simulation and analysis.
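For a quick feel for what the ring-oscillator simulation should produce, the oscillation frequency of an N-stage ring follows directly from the per-stage delay: f = 1/(2·N·t_d). A minimal sketch, where the 10 µs stage delay is an assumed placeholder for slow TFT logic, not a value from the white paper:

```python
# Illustrative estimate of ring-oscillator frequency from per-stage delay.
# The stage delay below is an assumed placeholder, not a measured value
# for the hybrid a-Si:H / pentacene inverter.

def ring_osc_frequency(stage_delay_s: float, n_stages: int = 5) -> float:
    """An N-stage ring oscillator completes one period after the signal
    propagates through all stages twice (rising and falling edges)."""
    return 1.0 / (2 * n_stages * stage_delay_s)

t_d = 10e-6  # assumed 10 us per-stage delay (illustrative only)
print(f"{ring_osc_frequency(t_d):.0f} Hz")  # 5 stages -> 10000 Hz
```

In practice the per-stage delay would come out of the SPICE simulation itself, since the hybrid inverter's pull-up and pull-down devices have very different drive strengths.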

The full white paper is available on the Silvaco website here.

More articles by Paul McLellan…


TSMC projects $800 Million of 2.5/3D-IC Revenues for 2016

by Herb Reiter on 01-27-2014 at 11:00 am

At TSMC's latest earnings call, held in mid-January 2014, an analyst asked TSMC for a revenue forecast for its emerging 2.5/3D product line. C.C. Wei, President and Co-CEO, answered: "800 million dollars in 2016". TSMC has demonstrated great vision many times before, and for me, an enthusiastic supporter of this technology, this statement represents a big morale boost. I had the opportunity to drive Synopsys' support for the early TSMC reference flows and saw how that strategic move paid off very well for the entire fabless ecosystem. In my humble opinion, 2.5 and 3D ICs will have as great an impact on our industry as TSMC's reference flows have had.

TSMC's prediction for 2.5/3D revenues confirms what I see and hear: several large companies and an impressive number of smaller ones are starting to rely, or already rely, on 2.5/3D technology for products that will become available sometime between 2014 and 2016. Why rely on 2.5/3D technology? Because continued shrinking of feature sizes, even with FinFETs, is no longer economical for many applications. Likewise, wire-bonded multi-die solutions or package-on-package can no longer meet performance and power requirements.

How can busy engineering teams quickly evaluate and choose the best alternative between current and the new 2.5 or 3D-IC solutions?

Because this technology shifts a major part of the value creation into the package, packaging is becoming more important and must be considered PRIOR to silicon development. This new book captures much of the packaging expertise Professor Swaminathan has gained over the last 20 years working at IBM and teaching and researching at Georgia Tech. Together with co-author Ki Jin Han, he addresses most of the topics system and IC designers need to consider when using 2.5 and 3D-IC solutions. Professor Swaminathan is also accumulating hands-on 2.5 and 3D experience as CTO of E-System Design, an EDA start-up in this field. Their 2.5/3D book is available at Amazon.com.

The book explains in Chapter 1 why interconnect delays and the related power dissipation are constraining designers and how through-silicon vias (TSVs) help to finally break down the dreaded "Memory Wall". Either a 2.5D IC (dies side by side on an interposer) or a 3D IC (vertically stacked dies) can better meet performance, power, system cost, and other requirements. But before expensive implementation starts, the various options available in either approach need to be objectively evaluated. Both solutions increase bandwidth while lowering power dissipation, latency, and package height. In addition, they simplify integration of heterogeneous functions in a package, for example combining a large amount of memory with a multi-core CPU or adding analog/RF circuits to a logic die.

Chapter 2's primary target audience is developers of modeling and design tools. It explains how to accurately simulate the impact of TSVs, solder balls, and bonding wires on high-speed designs; this information is also useful for package and IC designers.

Chapter 3 dives into practical considerations for designing with the above-mentioned IC building blocks.

Chapter 4 focuses on signal integrity challenges, coupling between TSVs, and power and ground plane requirements. Both silicon and glass interposers are covered.

Chapter 5 addresses power distribution and thermal management and Chapter 6 looks at future concepts currently in development for solving 2.5/3D-IC design challenges.

The many formulas and examples in this book make it a great reference for experienced IC and package designers.

Herb@eda2asic



What will drive MEMS to drive I-o-T and I-o-P?

by Pawan Fangaria on 01-27-2014 at 5:45 am

By I-o-P, I mean Internet-of-People; I couldn't think of anything better to describe a technology that becomes your custodian for everything you do. You may consider it a good companion through life or an invariably controlling spy. This is obvious with embedded-sensor techno-products such as Kolibree, a smart toothbrush (which tells you about your brushing style and effectiveness, recording data to your smartphone so your dentist can examine it later); PulseWallet (which biometrically reads your palm and links to stored credit card information to make payments without the card); and Beddit (which tracks your sleep patterns). Others are on the way: Veristride is developing technology for your shoes that can tell you how you walk and how to improve, Netatmo is developing bracelets that monitor your exposure to sunlight and tell you when to put on goggles and apply sunscreen, and Cityzen Sciences is developing smart fabric for techno-shirts. These made headlines among the most disruptive innovations at CES 2014; a comprehensive report of disruptive innovations is published here.

I-o-T likewise has a long list of gadgets, and that list is going to expand tremendously in the near future through further proliferation into consumer, healthcare, mobile, automotive, aerospace, military, and industrial applications. We will undoubtedly see more technology products joining the bandwagon of the smartphone revolution in the coming years.

What enables them to gain such exciting acceptance in the market? MEMS are present in every such device, making them responsive to our environment, whether in the form of touch, motion, feel, sound, weight, pressure, or any other kind of change. There are countless ways to build technologies around our environment, and that will be driven by MEMS. Starting with actuators for automotive airbags, MEMS have expanded into several other areas with accelerometers, gyroscopes, microphones, resonators, switches, optical mirrors, and so on.

Despite many design and manufacturing challenges, the MEMS growth rate has been higher than the overall semiconductor industry average; sensors and actuators revenue was $8.41B in 2012 and was forecast to reach $9.09B in 2013, up 8.1%, according to the report at iSuppli. It is expected to climb at a double-digit growth rate in some years from there, reaching revenue of about $12.21B by 2017. While Texas Instruments, STMicroelectronics, HP, and Bosch are among the top MEMS players worldwide, Taiwan is seeing major volume growth in the MEMS market; Domintech and mCube are each estimated to ship 10 million units of accelerometers this year.
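As a quick sanity check of the quoted figures, the 2013 growth rate and the implied 2013-2017 compound annual growth rate can be computed directly from the revenue numbers above:

```python
# Sanity-check of the iSuppli figures quoted above (revenue in $B).
rev_2012, rev_2013, rev_2017 = 8.41, 9.09, 12.21

growth_2013 = rev_2013 / rev_2012 - 1              # ~8.1%, matching the article
cagr_2013_17 = (rev_2017 / rev_2013) ** (1 / 4) - 1  # implied 4-year CAGR

print(f"2013 growth: {growth_2013:.1%}")       # 8.1%
print(f"2013-2017 CAGR: {cagr_2013_17:.1%}")   # 7.7%
```

The implied average works out to roughly 7.7% per year, so the double-digit expectation presumably applies to the strongest years rather than the whole span.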

MEMS are rapidly moving into the mainstream of the modern semiconductor industry. What drives MEMS is their ability to be manufactured as tiny pieces that can be integrated with ICs in a package, giving a push to the niche devices we talked about. Other factors that will help MEMS proliferate are low power and low cost.

So, how do we scale up MEMS manufacturing to smaller sizes, lower power, better integration, and lower cost? Since MEMS devices involve more physical variables, such as motion, their process development is highly customized to the design of the device. Unlike ICs, there is no industry-standard process for MEMS, and therefore MEMS and ICs cannot be on the same die as of today; they must be put together in a package. My personal opinion is that 3D-IC, in its assembly of planes, could dedicate a few planes to MEMS as 3D-IC process technology matures.

Anyway, apart from more compact packaging, the stage is already set for many other developments toward better integration, accuracy, faster pace, and lower cost. As I indicated in my last article here, Coventor's SEMulator3D tool enables faster and more accurate process modeling and scaling for MEMS manufacturing through its Virtual Fabrication platform. That can reduce cost significantly by eliminating time-consuming and expensive build-and-test cycles. It can also help accelerate development of newer and more complex MEMS models to fuel the growth of I-o-T and I-o-P.

From a design standpoint, Coventor provides an integrated MEMS and IC co-design and verification environment through the new release of its MEMS+ 4.0 suite of tools. This enables MEMS components to be designed in a 3D design entry system and imported as symbols into MathWorks (MATLAB, Simulink) and Cadence (Virtuoso) schematic environments. The MEMS models can be automatically exported as Verilog-A, which can then be simulated together with the IC description in any environment that supports Verilog-A, such as Cadence Virtuoso or other AMS simulators. These models simulate extremely fast, up to 100X faster than full MEMS+ models. By automating the hand-off between MEMS and IC designers, this approach can eliminate design errors and thereby require fewer build-and-test cycles. More details can be found in another article here.

Although we may be some way from having MEMS and ICs on the same die, today we do have tools and infrastructure to design and verify them together, so they can be combined in a system accurately, quickly, and at reasonable cost. Coventor tools can be used by foundries as well as fabless design houses to exploit the large window of opportunity in the MEMS business.

It’s heartening to see GLOBALFOUNDRIES taking a lead in volume production of MEMS by pursuing the path of IC fab-like production discipline. Such moves can bring standardization of MEMS manufacturing closer, which will be key to boosting the business further.

More Articles by Pawan Fangaria…..


SPICE Circuit Simulator Gets a Jolt

by Daniel Payne on 01-25-2014 at 11:28 am

I've been using SPICE circuit simulators since 1978, both internally and commercially developed, and a lot has changed since the early days when netlists were simulated in batch mode on time-share mainframes. We used to wait overnight for our simulations to complete, and in the morning had to pick up our output as a thick stack of folded paper, but only if there were no syntax mistakes; if your output was only two pages long, you had a typo in your netlist. Today, however, we have fast workstations and interactive circuit simulation, so finding a typo takes a few seconds.

Silvaco has offered its circuit simulator, SmartSpice, for decades now, and the latest release brings two big improvements that are rather compelling to the circuit designer: parallelism and hierarchy.

Parallelism

The classic UC Berkeley SPICE circuit simulator, and many derivative simulators, read in a netlist and then build a single large matrix to solve for node voltages and branch currents. This simulation method is well understood and produces accurate results, although run times can be lengthy because of all the floating-point math.

SmartSpice has introduced a new command-line option, "-hpp", that instead automatically breaks the single large matrix into multiple smaller matrices that can be run in parallel on different cores, speeding up simulation dramatically. The good news for design engineers is that this step is fully automated: you don't have to identify partitions or make any decisions, just use the new option and start getting results back faster. The following chart shows that you can expect up to 14X faster results when running on 4 cores.
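The concept behind the partitioning can be illustrated with a toy block-diagonal system: each independent block can be solved separately (in a real solver, on a separate core) and yields the same answer as one big solve, in much less work. This is a sketch of the idea only; SmartSpice's actual -hpp partitioning algorithm is proprietary and handles coupling between partitions:

```python
# Toy illustration of matrix partitioning: when a circuit matrix is (nearly)
# block diagonal, each block can be solved independently. Diagonally dominant
# random blocks stand in for per-partition circuit matrices.
import numpy as np

rng = np.random.default_rng(0)
blocks = [rng.random((50, 50)) + 50 * np.eye(50) for _ in range(4)]
rhs = [rng.random(50) for _ in range(4)]

# Full-matrix solve: one big system, as classic Berkeley SPICE would build.
A = np.zeros((200, 200))
for i, b in enumerate(blocks):
    A[i * 50:(i + 1) * 50, i * 50:(i + 1) * 50] = b
x_full = np.linalg.solve(A, np.concatenate(rhs))

# Partitioned solve: four small independent systems, each parallelizable.
x_parts = np.concatenate([np.linalg.solve(b, r) for b, r in zip(blocks, rhs)])

print(np.allclose(x_full, x_parts))  # same solution from much smaller solves
```

Since dense LU factorization costs roughly O(n^3), solving four 50x50 blocks is far cheaper than one 200x200 system even before any cores run in parallel.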

Even with a single core, there’s up to 13X speed improvement versus a baseline of SmartSpice 2012.

Improvements have also been made to shorten the time it takes to load large netlists with millions of elements by using a domain decomposition method. Here's a chart showing that load times can be reduced by almost 10X when using 12 cores and the new Domain Decomposition Solver (DDS):

Hierarchy

Designers of DRAM, SRAM, flash, and TFT circuits often use massive amounts of hierarchy in their netlists. SmartSpice now takes advantage of that hierarchy with the DDS option and an isomorphism option, so that during circuit simulation identical hierarchical cells can share their results instead of each being simulated individually.

Summary

Silvaco has kept up with the times by adding new options to SmartSpice that speed up circuit simulation and expand its capacity. The benefit is that you can now run more simulations in the same amount of time, giving you more confidence that first silicon will be correct.



Stop TDDB from getting through peanut butter

by Don Dingee on 01-24-2014 at 6:00 pm

There are a few dozen causes of semiconductor failure. Most can be lumped into one of three categories: material defects, process or workmanship issues, and environmental or operational overstress. Even when all those causes are carefully mitigated, one factor limits reliability more and more as geometries shrink, and it sneaks up over time.

I found an interesting document from Panasonic titled "Failure Mechanism of Semiconductor Devices", a few years old (circa 2009) but a concise read and a handy reference on defect causes. Table 3.1 from that document nicely summarizes the causes and modes of failure.

courtesy Panasonic

The first failure mode described in detail in that document is our suspect of interest: time-dependent dielectric breakdown, or TDDB. As geometries get smaller and gate oxide films get thinner, the risk of long-term failure due to deterioration of the oxide film grows. There is a gory formula expressing TDDB in terms of electric field strength, temperature, and other variables, but one sentence in the explanation draws attention:

… dielectric breakdown occurs as the time elapses even if the electric field is much lower than the dielectric breakdown withstand voltage of the oxide film.

That is a bit disconcerting, because it suggests many designs may not have accounted for or analyzed TDDB, instead proceeding on the assumption that the materials in use are well within specification. To help designers improve their odds in combating TDDB and improving reliability, Mentor Graphics has taken a new look at spacing rules, with some surprising results.
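The "gory formula" is commonly some variant of the thermochemical E-model, t_BD = A * exp(-gamma * E_ox) * exp(Ea / kT): lifetime falls exponentially with oxide field and rises with activation energy over temperature. A hedged sketch with assumed illustrative parameters (the Panasonic document's exact coefficients are not reproduced here):

```python
# Sketch of the thermochemical "E-model" often used for TDDB lifetime:
#   t_BD = A * exp(-gamma * E_ox) * exp(Ea / (k * T))
# The prefactor and parameter values are illustrative assumptions, not data
# from the Panasonic document.
import math

K_BOLTZ_EV = 8.617e-5  # Boltzmann constant, eV/K

def tddb_lifetime(e_ox_mv_cm: float, temp_k: float, a: float = 1e-7,
                  gamma: float = 1.1, ea_ev: float = 0.7) -> float:
    """Time to breakdown (arbitrary units) at oxide field E_ox in MV/cm."""
    return a * math.exp(-gamma * e_ox_mv_cm) * math.exp(ea_ev / (K_BOLTZ_EV * temp_k))

# Lowering the field (i.e., derating via extra spacing) extends lifetime
# exponentially: here a 1 MV/cm reduction buys exp(gamma) ~ 3x more life.
ratio = tddb_lifetime(4.0, 378.0) / tddb_lifetime(5.0, 378.0)
print(f"{ratio:.1f}x")
```

The exponential field dependence is exactly why the "generous spread of spacing" described below works at all, and also why analytical, net-by-net derating beats applying it everywhere.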

As with so many design techniques for difficult-to-characterize problems, peanut-butter approaches are often used to mitigate TDDB: visual inspection or marker layers targeting congested areas thought to pose a problem. The "fix" is usually extra checking as indicated, followed by applying a generous spread of spacing. By creating more separation in critical areas of a design, in effect derating the oxide material, TDDB can be forestalled.

This empirical, experience-based approach may have worked at less aggressive geometries with fewer power domains, but as complexity increases, a more analytical approach is required to save engineering time and avoid unnecessary padding that wastes valuable space already in short supply. Mentor's approach to the problem sounds simple: analyze the nets and the voltage differences between them, and apply spacing rules accordingly.

Easier said than done. Such an analysis not only has to understand the layout and power domains but also account for physical implementation details. That is normally a job for SPICE, but creating enough test vectors to cover all the combinations is an extremely complicated exercise. Plus, every time a design changes, the analysis has to be rerun, since the prior results are no longer valid and brand-new problems may crop up. Did I hear someone say "more padding"?

The Mentor approach brings the capability of Calibre PERC to bear on TDDB. By performing rule-based checks on layout-related and circuit-dependent values, problem areas can be spotted quickly and accurately, without marker layers or tedious SPICE simulations. With the voltage differences across nets accurately known, rule-based spacing checks can run automatically and apply the minimum spacing needed. According to Mentor, the savings in time, space, and false alarms compared with peanut-butter approaches can be significant.
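The core idea can be sketched in a few lines: required spacing is a function of the voltage difference between adjacent nets, so higher-stress pairs get more room and low-stress pairs get none of the blanket padding. The rule values and net names below are invented for illustration; they are not Calibre PERC syntax or real design-rule numbers:

```python
# Illustrative voltage-aware spacing check: required spacing grows with the
# DC voltage difference between adjacent nets. The base spacing and slope
# are made-up example numbers, not an actual rule deck.

def required_spacing_nm(delta_v: float) -> float:
    """Minimum spacing for a given net-to-net voltage difference (assumed rule)."""
    base_nm, nm_per_volt = 50.0, 20.0
    return base_nm + nm_per_volt * abs(delta_v)

# Hypothetical adjacent nets from two power domains: 1.0 V core logic
# routed next to a 3.3 V I/O driver net.
nets = {"core_out": 1.0, "io_drv": 3.3}
spacing = required_spacing_nm(nets["core_out"] - nets["io_drv"])
print(f"{spacing:.0f} nm")  # 50 + 20 * 2.3 = 96 nm
```

The payoff over peanut-butter spacing is that two nets in the same domain (delta_v near zero) keep the minimum spacing, rather than inheriting worst-case padding everywhere.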


Mentor’s complete white paper, authored by Matthew Hogan, describing the motivation behind voltage-aware design rule checking and its benefits in mitigating TDDB, is here:

Improve Reliability With Accurate Voltage-Aware DRC

Is TDDB sneaking up on your design, waiting to cut its life short? Did your last design take more time and use more space than it had to just to be “safe”? Has the peanut butter approach let you down lately? Calibre PERC can help replace best-guess strategies with rules-based results.

More Articles by Don Dingee…..


Is Altera Leaving Intel for TSMC?

by Daniel Nenni on 01-24-2014 at 9:00 am

There is a rumor making the rounds that Altera will leave Intel and return to TSMC. Rumors are just rumors, but this one certainly has legs, and I will tell you why and what I would have done if I were Altera CEO John Daane. Altera is a great company, one that I have enjoyed working with over the years, but I really think they made a serious mistake at 14nm, absolutely. Moving to Intel was not necessarily the mistake; in my opinion, it is how they went about it.

The rumor started here:

“Altera’s recent move [contacting TSMC] is probably due to its worry of the recent Intel’s 14nm process delay causing delay in its new product will let Xilinx win”
China Economic Daily News, 12/2/13

It became more real when Rick Whittington, Senior Vice President at Drexel Hamilton, downgraded Intel stock (INTC) from buy to hold in a note titled "A Business Model in Flux". There are more than a dozen bullet points, but this one hit home:

While Altera’s use of 14nm manufacturing late this year wasn’t to ramp until mid-late 2015, it has been a trophy win against other foundries

A trophy win indeed; the question is why Altera allowed itself to become an Intel trophy. After working with TSMC for 25 years and perfecting a design ecosystem and early-access manufacturing partnership, leaving was like cutting off your legs before a marathon.

The EDA tools, IP, and methodology for FPGA design and manufacturing are not mainstream, to say the least. It is a unique application that requires a custom ecosystem, and ecosystems are not built in a day or even a year; they develop over years of experience and partnerships with vendors. FPGAs are also used by foundries to ramp new process nodes, which is what TSMC has done with Altera for as long as I can remember. This early access not only gave Altera a head start on design, it also helped tune the TSMC manufacturing process for FPGAs. Will Intel allow this type of FPGA optimization partnership for its "Intel Inside" centric processes? That would be like a flea partnering with a dog, seriously.

What would I have done? Rather than be paraded around like a little girl in a beauty pageant, Altera should have been stealthy and designed to both the Intel and TSMC FinFET processes. Seriously, what did Altera REALLY gain from all the attention of moving to Intel? Remember, TSMC 16nm is in effect 20nm with FinFETs. How hard would it have been to move their 20nm product to TSMC 16nm while developing the required Intel design and IP ecosystem? Xilinx will tape out at 16nm exactly one year after 20nm and exactly one year before Altera tapes out at Intel 14nm. Remember, Altera gained market share when it beat Xilinx to 40nm by a year or so.

Correct me if I'm wrong, but this seems to be a major ego fail for Altera. And if the rumor is true, which I hope it is for Altera's sake, how is Intel going to spin Altera going back to TSMC for a quick FinFET fix?

More Articles by Daniel Nenni…..



Parasitic Debugging in Complex Design – How Easy?

by Pawan Fangaria on 01-23-2014 at 9:00 am

When we talk about parasitics, we talk about a post-layout design expanded in terms of electrical components such as resistances and capacitances. In a semiconductor design environment where multiple parts of a design from different sources are assembled into a highly complex, high-density SoC, imagine how difficult it would be to debug that design at the parasitic level. We definitely need smart tools to analyze different parts of a design at different levels of hierarchy and at different levels of abstraction, such as transistor, gate, and RTL.

The good news is that such tools are available from Concept Engineering; they enable designers to do very fast design exploration, visualize the design at different levels, reduce complexity, and thus debug the design easily, precisely, and in less time. I was delighted to go through a webinar highlighting parasitic debugging using StarVision and SpiceVision. The webinar included a demo, conducted very nicely by Lokesh Akkipeddi at EDA Direct. Lokesh demonstrated features with live menus that help locate the exact problem area through navigation and cross-probing, simplify a portion of the design view to a desired level (e.g. modify symbols, move up to gate or down to transistor level, remove RC, etc.) to understand the problem, and review the SPICE netlist and fix it at any level as appropriate.

The design can be visualized at various levels, such as transistor, gate, or RTL, and those views can be mixed as required. Parasitics for different wires can be viewed in different colors for easy correlation. Similarly, the source code can be viewed for any module or component of the design.

Industry-leading SPICE and post-layout interfaces (including those from EDA majors Synopsys, Cadence, and Mentor) are supported; StarVision can read these formats and also write out SPICE netlists. Schematics can be exported to Cadence Virtuoso through SKILL.

During the demo, I could see smooth navigation through different levels of hierarchy connected through nets, signal distribution, and looking inside a module or at an individual pin, with a provision to hide unconnected pins to remove clutter, among many other features.

Cone extraction is a special feature that caught my attention. It can expose all inputs connected to a pin, as well as all outputs from it, for closer inspection.


[Circuit with RC and without RC; Parallel transistors merged to recognize gates]

Similarly, to view a circuit in simple form, there is an interesting netlist-reduction feature: RC elements can be filtered out of a circuit to view it simply as transistors. Also, parallel transistors can be merged to easily recognize CMOS gates. Large resistances and capacitances can be identified and their values observed, in case they call for any modification of the circuit.
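The parallel-merge step can be sketched in a few lines: transistors sharing the same drain, gate, source, and model behave as one device whose width is the sum of the individual widths. This is only an illustration of the concept, not Concept Engineering's implementation; the netlist below is a made-up inverter with a doubled-up PMOS:

```python
# Sketch of "parallel transistor merging": MOSFETs sharing the same drain,
# gate, and source nets act as one device with the summed width, which makes
# CMOS gates easier to recognize in a flattened post-layout netlist.
from collections import defaultdict

transistors = [
    # (name, drain, gate, source, model, width_um) -- hypothetical inverter
    ("M1", "out", "in", "vdd", "pmos", 2.0),
    ("M2", "out", "in", "vdd", "pmos", 2.0),  # in parallel with M1
    ("M3", "out", "in", "gnd", "nmos", 1.0),
]

merged = defaultdict(float)
for name, d, g, s, model, w in transistors:
    merged[(d, g, s, model)] += w  # parallel devices: widths add

for (d, g, s, model), w in merged.items():
    print(f"{model} d={d} g={g} s={s} w={w}u")
```

After the merge, the two parallel PMOS devices collapse into a single 4 µm device, and the two remaining transistors read directly as one CMOS inverter.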

Then there is cross-probing up to the source level, with the code highlighted in the same color as selected for the particular component.

SPICE code can be written out for any desired portion of the circuit and used for external partial simulation, analysis, and decision making.


An excellent extensibility feature is that APIs can be developed for customized functionality at any level (SPICE, gate, or RTL) using Tcl scripts. Over 100 example APIs have been developed; some of them are shown in the table.

I could go on to mention more features and still not do justice to their real essence through these pictures. Designers would do well to get the gist of these excellent features by seeing the live presentation and demo in the webinar. Go for it!

More Articles by Pawan Fangaria…..


Rekeying the IoT with eMTP

by Don Dingee on 01-22-2014 at 4:10 pm

For non-volatile storage in IoT devices, there is technology designed to be reprogrammed many times and technology designed to be programmed once. The many-times mode is for application code, while the once mode is for keying and calibration parameters. We are about to enter the IoT rekeying zone, in between these two extremes.


Wearables the Big Hit at CES

by Paul McLellan on 01-22-2014 at 3:00 pm

There were a number of trends discernible at CES this year, one of the big ones being wearables, especially in the medical and fitness areas. I wear a Fitbit Flex, and I have, but rarely wear, a Pebble watch that links to my iPhone. I would say that at this point they are promising but more gimmicks than truly useful. My Fitbit measures how much I walk, but it gets confused by cycling, and I have to tell it when I go to sleep and wake up, which I usually forget to do. It contains an ARM Cortex-M3, a Bluetooth interface, an accelerometer, and a power-control IC. The Pebble is getting better as they upgrade the software and it works with more apps. It contains a 120MHz ARM chip, a 3-axis accelerometer, and a Bluetooth 2.1 and low-energy 4.0 chip. So despite the very different applications, both the Fitbit Flex and the Pebble watch have the same IC functionality and could almost certainly use a single-chip SoC incorporating everything. The displays are obviously different.

One thing all these devices have is lots of IP that wasn't particularly designed to work together, or even to work at the same speed. One of the challenges is arranging everything so that fast devices can communicate with slow devices without overloading them. Often different blocks run at different voltages and in different clock domains, adding to the complexity of the interfaces. Level shifters are needed when voltages differ, and there is always plenty of opportunity to introduce subtle and not-so-subtle bugs at clock-domain-crossing boundaries.


Sonics has a solution to these problems: SonicsExpress. It provides a high-bandwidth bridge between two clock domains, with optional voltage-domain isolation. SonicsExpress supports the AXI and OCP protocols and is capable of crossing clock boundaries, power boundaries, and large spans of physical distance. In addition, it is optimized for high-bandwidth, low-latency communication, supports both single-threaded and multi-threaded configurations, and can operate in either blocking or non-blocking modes.

Because clock domain boundaries often coincide with power domain crossings, tactical cells are instantiated on the signals that cross the asynchronous boundary, addressing voltage level shifting and clock domain safety. Combined with the features of the Sonics NoC, this allows IP blocks from different suppliers, operating with different data rates, supply voltages, clock frequencies, and protocols, to operate seamlessly together in a working SoC.

More information on SonicsExpress is here.


Dan Niles: Strong Developed Markets, Weak Emerging

by Paul McLellan on 01-22-2014 at 2:15 pm

Yesterday Dan Niles gave the economic review he presents quarterly for the GSA. As always, he starts from the big macroeconomic picture and ends up looking at the implications for semiconductor end markets, and thus for semiconductors in general and the fabless ecosystem in particular.

The big picture is that the developed markets, such as the US, Japan, and some of Europe, seem to be recovering reasonably well, if not dramatically. But with bond yields essentially zero, hot money has moved into the developing markets, the so-called BRICs (Brazil, Russia, India, China), and they are struggling. Brazil, India, and Russia are all experiencing slowing GDP, high inflation, and falling currencies. China is much stronger, but not as strong as it was; it has to rebalance its economy toward domestic consumption and slow the growth of the shadow-banking credit markets (now up to 15%). Hopefully China can avoid a Lehman-type event, but there are lots of bad loans and malinvestments around. As the Fed starts to "taper", stops quantitative easing, and eventually lets interest rates increase, some of the hot money will return to the US from the developing countries, which will make their problems worse. Japan also has an issue as interest rates rise, since its debt is over 200% of GDP. Of course, if the US accounted properly for future entitlements, its position would look much worse.


The big semiconductor end markets are computing and mobile. Computing is finally showing some uptick, although that comes after two years of fairly major decline; in computing, flat is the new normal. The only bright spot is the build-out of datacenters. The traditional desktop market is largely gone, and a lot of the notebook market is being superseded by tablets (the iPad and the like) or just smartphones. More internet access happened on mobile devices last year than on traditional PCs.


Mobile will also slow, although it will still grow a lot, over 20%; that is down from 40% and more in the last few years. The smartphone and tablet markets are starting to mature, at least at the high end. Future growth will mostly be at the low end, much of it in China. There is now only one North American manufacturer in the top 10 (Apple at #2); Motorola, Palm, and Blackberry used to be there. Samsung is #1, but Huawei, Lenovo, ZTE, and probably Coolpad (Yulong), all from China, and LG from Korea are all there. Nokia (purchased by Microsoft) should just scrape into the top 10 for the year, Europe's only entrant.

After growing 4% last year, mostly due to a doubling in memory prices, and after basically being flat for several years before that, the semiconductor market should grow 6% this year. There has not been overspending on capital investment, so this time there is no excess capacity.

TL;DR: developed markets are in good shape; the BRICs not so much. Semis should break out and have a good year after three years of being flat.


More articles by Paul McLellan…