
Making Money With Cramer? Don’t Count on it!

by Daniel Nenni on 10-02-2011 at 11:16 pm

Investing with Cramer is a crap shoot. By Cramer, I mean the Mad Money TV show, and Action Alerts PLUS from thestreet.com. Cramer is certainly a smart guy and knows his stuff, but don’t think following his investment strategy is necessarily a winner. He constantly maintains that you can beat the averages by picking individual stocks and doing your homework. This just isn’t true.

Cramer did a piece on Cadence that I blogged about in 2009, Jim Cramer’s CNBC Mad Money on Cadence!, in which I concluded that Jim Cramer is in fact an “infotainer,” prone to pump-and-dump groupies. In regards to CDNS, however, he got lucky (CDNS has doubled since then).

Cramer may have made money as a hedge fund manager, but he also used many tools (such as options, shorting, etc.) that most Joe-on-the-street investors don’t utilize. He also had a staff and access to much better tools and information.

A good friend of mine is getting killed with his recommendations!

It drives me crazy when he talks on his show about how “I recommended this great stock much lower ….” He only occasionally mentions the ones that get crushed. Of course, if he did this as a matter of routine, people wouldn’t watch.

The most salient fact: His Action Alerts PLUS portfolio hasn’t beaten the S&P (when you include dividends) since 2007.

Several awful Cramer recommendations, many of which my friend has gotten creamed on, include Bank of America, Netflix, Limelight Networks, Juniper, Teva, GM, Ford, BP, Alcoa, Apache, Express Scripts, Freeport-McMoRan, Starwood Hotels, and Johnson Controls. There is probably a list of winners just as long, but I’m in no mood for that.

Sour grapes? Of course. Watching Cramer and subscribing to Action Alerts PLUS will make you more informed about the market. Will it make you money? Maybe. There is just as good a chance you’ll make more money with an S&P Index fund. In an up market, you’ll feel smarter about the investments you are making. In a down market you’ll be kicking yourself in the butt.

Cramer makes me think of a “get rich quick book” that’s a really good read. The only one that gets rich is the author, and that is from selling the book.

Note, my friend still watches the TV show, albeit with a lot of TiVo fast-forwarding. You can be interested in Cramer’s opinion on market direction. You can subscribe to Action Alerts. Is it worth $399/yr to be better informed? The hitch is that you’ll never be better informed than the pros on Wall Street, and more information will not necessarily make you money.

I think Cramer is a smart guy and a helluva entertainer. However, I think he does the average individual investor a disservice by leading them to believe that he can help them make above-average returns. I’ve not seen this Action Alerts PLUS service, but if it’s like most newsletters, it’s lacking in timing and exit strategies.

I have some possible explanations for his chronic underperformance despite his intellect, experience, and huge research staff:

1. He has to have three new ideas every day. If I have a good investment idea once a month I’m happy.

2. Time frame. A huge part of managing a portfolio has to do with investment horizon. If you have a 10+ year time frame, you don’t care what the Finance Minister of Germany is saying about Greece. But Cramer has to have a stock idea that is an answer to that news sound bite. For this type of recommendation over a short period of time you are going to get very random results.

I think that an active financial manager who uses a tactical asset allocation strategy, an industry sector strategy based on macroeconomic analysis, and individual stock selection based on sound fundamental analysis can outperform a passive benchmark index over the long term. However, all this work and strategy may only mean an extra 1.5-2.0% pickup in total return. Individual investors should use someone who is willing to execute this type of individual portfolio management for a reasonable fee of 0.75-1.0%.

The bottom line is that anyone who makes promises of “big money” returns (like Cramer) is either lying (Bernie Madoff) or taking on more risk with your money than you think.


Memory Cell Characterization with a Fast 3D Field Solver

by Daniel Payne on 09-29-2011 at 12:07 pm

Memory designers need to predict the timing, current and power of their designs with high accuracy before tape-out to ensure that all the design goals will be met. Extracting the parasitic values from the IC layout and then running circuit simulation is a trusted methodology; however, the accuracy of the results ultimately depends on the accuracy of the extraction process.

Here’s a summary of extraction techniques:

Extraction approach      Benefits                         Issues
Rule-based               High capacity                    Limited accuracy: 10% error on total capacitance, 15% on coupling capacitance
Reference-level solver   High accuracy                    Limited capacity, long run times
Fast 3D solver           High accuracy, fast run times    Relatively new approach

Bit Cell Design

Consider how a memory bit cell is placed into rows and columns using reflection about the X and Y axes:

The green regions in the figure show reflective boundary conditions applied to the cell, with an enclosure of 2 um in the X direction and 4 um in the Y direction.

For attofarad accuracy the field solver has to extract the bit cell in the context of its surroundings.

Mentor Graphics has a fast 3D field solver called Calibre xACT 3D that can extract a memory bit cell in just 4 seconds using this boundary condition approach, compared to a reference-level solver that requires 2.15 hours. I’ve blogged about xACT 3D before.

Accuracy Comparisons
A memory bit cell was placed and reflected in an array, then the entire array was extracted. The unit bit cell used boundary conditions as shown above and the results were compared against an actual array. The accuracy of the boundary condition approach in Calibre xACT 3D is within 1% of the reference-level field solver.

Another comparison was made for symmetric bit lines in a memory array using the boundary condition approach versus the reference-level field solver, with an accuracy difference within 0.5%.


Beyond the Bit Cell

So we’ve seen that Calibre xACT 3D is fast and accurate on memory bit cells, but what about the rest of the memory, such as the decoders and the paths to the chip inputs and outputs?

With multiple processors you can now accurately and quickly extract up to 10 million transistors in about one day:

Summary
Memory designers can extract a highly accurate parasitic netlist on multi-million transistor circuits for use in SPICE circuit simulation. Run times with this fast 3D field solver are acceptable and accuracy compares within 1% of reference-level solvers.

For more details see the complete white paper.


Introducing TLMCentral

by Paul McLellan on 09-29-2011 at 8:00 am

Way back in 1999 the Open SystemC Initiative (OSCI) was launched. In 2005 the IEEE standard for SystemC (IEEE 1666-2005, if you are counting) was approved. In 2008, TLM 2.0 (transaction-level modeling) was standardized, making it easier to build virtual platforms using SystemC models. At least the models should now play nicely together, which had been a big problem up until then.

However, the number of design groups using the virtual platform approach still only increased slowly. Everyone loves the message of using virtual platforms for software development, but the practicalities of assembling or creating all the models necessary continued to be a high barrier. Although there are lots of good reasons to use a virtual platform even after hardware is available, the biggest value proposition is to be able to use the platform to get software development started (and sometimes even finished) before silicon is available. And time taken to locate or write models dilutes that value by delaying the start of software development. In fact in a survey that Synopsys was involved with last year, the lack of model availability was one of the biggest barriers to adopting virtual platform technology.


Today, Synopsys announced the creation of TLMCentral. This is a portal to make the exchange of SystemC TLM models much easier. Synopsys is, of course, a supplier of both IP and virtual platform technology (Virtio, VaST, CoWare). But TLMCentral is open to anyone, and today there are already 24 companies involved. IP vendors such as ARM, MIPS and Sonics. Service providers such as HCL or Vivante. Other virtual platform vendors such as CoWare and Imperas. And institutes and standards organizations such as Imec and ETRI. The obvious missing names are Cadence, Mentor and Wind River, at least for now. Cadence and Mentor haven’t yet decided whether or not to participate. I don’t know about Wind River. Teams from Texas Instruments, LSI, Ricoh and others are already using the exchange.

As I write this on Wednesday, there are already 650 models uploaded, and more are being uploaded every hour. By the time the announcement hits the wire on Thursday morning it will probably be over 700. There are really three basic classes of model: processor models, interface models (what I have always called peripheral models) and environment models. A virtual platform usually consists of one or more processor models, a model for each of the interfaces between the system and the outside world, and some model of the outside world used to stimulate the model and validate its outputs. The processor models run the actual binary code that will eventually run on the final system, ARM or PowerPC binaries for example. By using just-in-time (JIT) compiler technology they can achieve extremely high performance, sometimes running faster than the real hardware. The interface models present the usual register interface on some bus on one side, so the device driver reads and writes them in the normal way, while interfacing in some way to the test harness. Environment models can be used to test systems, for example interfacing a virtual platform of a cell-phone to a cellular network model.

TLMCentral is not an eCommerce site for purchasing models. It is a central resource for finding models and their suppliers. Some models are free and available directly from the site; for others you must pay, and you are directed to the vendor. There is also an industry-wide TLM ecosystem allowing users to support each other, exchange models and so on.

There have been other attempts to make models more available, notably Carbon’s IP Exchange. But the scale and participation on TLMCentral, and the backing of the largest EDA company, mean that this is already the largest. The real measure of success, though, is not how many people sign up on day one, but whether it lowers the barriers to adoption of virtual platform based software development. And that will show up as growth, hopefully explosive, in the number of groups using virtualized software development.

TLMCentral is at www.tlmcentral.com


Analog IP Design at Moortec

by Daniel Payne on 09-28-2011 at 12:34 pm

Stephen Crosher started up Moortec in the UK back in 2005 with the help of his former Zarlink co-workers. They set to work offering AMS design services and eventually created their own analog IP, like the temperature sensor shown below:

We spoke by phone last week about his start-up experience and how they approach AMS design.




Samsung versus Apple and TSMC!

by Daniel Nenni on 09-28-2011 at 6:56 am

Apple will purchase close to eight BILLION dollars in parts from Samsung for the iSeries of products this year alone, making Apple Samsung’s largest customer. Samsung is also Apple’s largest competitor and TSMC’s most viable competitive foundry threat, so it was no surprise to see Apple and TSMC team up on the next generations of iProducts. The legal battle between Samsung and Apple did come as a surprise, however, and will change how we do business for years to come.

“Our mission is to be the trusted technology and capacity provider of the global IC industry for years to come.” TSMC Website

During the past 25+ years I have been to South Korea a dozen or so times working with EDA and SemIP companies in pursuit of Samsung business. South Korea is a great place to visit, but it is not a great place to do business (my opinion) due to serious ethical dilemmas. Let’s not forget the Samsung corruption scandal that engulfed the government of South Korea. Let’s not forget the never-ending chip-dumping probes. The book “Think Samsung” by an ex-Samsung legal counsel accuses Samsung of being the most corrupt company in Asia. So does it really surprise you that Apple is divorcing Samsung for cloning the iPad and iPhone?

I was never an Apple fanboy, always choosing “open” products for my personal and professional needs. If the IBM PC was “closed” and obsessively controlled like Macs, where would personal computing be today? The iPod was the first Apple product to invade my home and only after a handful of other MPEG players failed on me. Without iPod/iTunes where would the music industry be today?

iPad2s came to my house next. Would there even be a tablet market without the iPad? I looked at other tablets but since they were to be gifts to SemiWiki users I had a much more critical eye for quality. I even kept one of the SemiWiki iPad2s which I now use daily. We still have some iPad2s left so register for SemiWiki today and maybe you will win one!

A MacBook Air ALMOST came next, but I chickened out and bought a Dell XPS instead. The support burden of moving my family of six from Dell/HP/Sony laptops to Apple Town was just too much to fathom.

iPhone 5s for the entire family will be next; Santa is bringing them for Christmas. I’m tired of my BlackBerry and of being out-smartphoned by snot-nosed iPhone kids. I did look at the Samsung iPhone and iPad clones, and while they are less expensive, my professional experience with Samsung will not allow me to buy their products. I will wait for an Apple flat screen TV as well.

Paul McLellan did a nice write up of “The battle of the Patents” for the wireless business: Apple, Samsung, Microsoft, Oracle, Google, Nokia, and here comes a real threat to the mobile industry, Amazon (Kindle Fire Tablet)!

The Apple / Samsung legal debacle will most definitely change the semiconductor foundry business. Can Samsung or even Intel become “the trusted technology and capacity provider of the global IC industry for years to come”? Not a chance.


Battle of the Patents

by Paul McLellan on 09-27-2011 at 5:01 pm

What’s going on in all these wireless patent battles? And why?

The first thing to understand is that implementing most (all?) wireless standards involves infringing on certain “essential patents.” The word “essential” means that if you meet the standard, you infringe the patent, there is no way around it. You can’t build a CDMA phone without infringing patents from Qualcomm; you can’t build a GSM phone without infringing patents from Motorola, Philips and others.

The second thing to understand is that typically, if you are a patent holder, you want to license the last person in the chain. There are two reasons for this. Firstly, the further down the value chain, the higher the price, and so the easier to extract any given level of license fee. It is easier to get a phone manufacturer to pay you a dollar than a chip manufacturer, for example. The second reason is that often the patent is only infringed in the final stage of the product chain. Any patent that claims to cover phones that do something special is not infringed by chips, software or IP that might go into the phone to make that something special happen. Plus you can’t really embargo anything other than the final product if it is all assembled offshore.

Apple, presumably in a calculated way, didn’t worry about licensing anyone else’s patents. They pretty much invented what we think of as the smartphone and it is hard to build one without infringing lots of Apple patents on touch-screens, gestures, mobile operating systems, app stores and so on. So they figured that they had a good arsenal for cross-licensing to address their lack of patents on basic wireless technology.

Google seems to have been blindsided by this. They created Android, which in and of itself doesn’t infringe much. They didn’t patent much on their own and probably didn’t have any intention of suing anyone. “Don’t even be as evil as suing someone.” But when Android is put into a smartphone or tablet then that end product infringes lots of patents, most notably Apple’s. Google tried to fix this, first by offering $3.14159B for Nortel’s patents (which they lost) and then by buying Motorola’s mobile phone division for around four times as much (well, they got a mobile phone division too, which might turn out to be important).

Microsoft also has a lot of patents. In fact, it has been so unsuccessful so far in its mobile strategy that it reportedly makes more money licensing Android phone manufacturers (for patent licenses) than it does licensing Windows Phone 7 manufacturers (for software licenses, presumably including the patent licenses, since suing your customers tends to be bad for business).

Also in here somewhere is Oracle, which with its acquisition of Sun owns the patents on Java. And Android’s app development environment is Java (Apple’s is Objective-C, which they acquired with NeXT).

The most schizophrenic relationship is Apple and Samsung. Samsung builds the A4 and A5 chips that are in the current iPhone and iPad, and it supplies some of the DRAM and some of the flash. I wouldn’t be surprised if Apple is their largest customer. But they are suing each other, mainly over Samsung’s iPhone lookalikes, the Galaxy S and Galaxy SII, and its iPad lookalike, the Galaxy Tab. Samsung announced that it has already shipped over 10M Galaxy SIIs, which is an impressively large number. Samsung is probably the biggest threat (as a single manufacturer) to Apple, already #2 in profitability and, I think, #2 in unit volume behind Nokia.

Apple has also been suing some of the Android manufacturers but they are countering since Google is now licensing some of the Motorola patents to them (for free, I assume). Remember, Apple can’t sue Google directly since an OS doesn’t infringe a phone patent, only phones can do that, and so Google can’t counter Apple directly, it has to do it through its licensees.

Meanwhile, Nokia, which must have an enormous patent portfolio, is also suing Apple, although Apple has already settled (surrendered) some of this by paying a license fee. If Nokia is to be successful with its strategy du jour of relying on Microsoft for smartphones, then it will need to be able to defend itself against Apple. It also needs to get moving, since the latest Mango release of Microsoft’s WP7 is already coming to market through HTC and Fujitsu. If all Nokia has is a late-to-market, me-too WP7 implementation, they are doomed. Well, I think they are doomed anyway, although it may depend on how much the carriers want to keep Nokia and/or Microsoft WP7 alive to counter Android and Apple.

Oh, and Amazon’s Fire tablet comes to market tomorrow, supposedly. Don’t be surprised if Apple sues them. Amazon is probably the biggest threat to Apple leveraging content rather than basic tablet technology.

What will happen in the end? Probably not much. Nobody has a clue how much anyone infringes anyone else’s patents and nobody is going to put much effort into finding out. I expect that everyone will cross-license, with Apple and anyone else who lacks fundamental patents (the ones that are used in non-smart phones) having to make some balancing payments to cover the last couple of decades of investment that they are riding on, and anyone who hasn’t got their own smartphone patents having to make balancing payments to Apple who pretty much invented them as we now think of them.


Magma eSilicon One Keynote

by Paul McLellan on 09-27-2011 at 2:31 pm

I was at the first half of Magma’s Silicon One event yesterday. The first keynote was by Rajeev about the environment for SoC designs, especially fabless startups, and Magma’s role going forward. More about that later. The other keynote was by Jack Harding, CEO of eSilicon. As usual, Jack did his presentation without any PowerPoint slides, something I find very difficult to do without losing my thread.

Jack started off with some statistics about eSilicon. They have been in existence for just over 10 years now and have done over 200 parts. A 3rd party audited them for a customer and decided that they had a 98% first time hit rate. For those who don’t know eSilicon, their business model is to be an ASIC company although they don’t have a fab. But they are more than a design house. They deliver tested, packaged parts just like an ASIC company with a fab, except that they let you put your own logo on the parts. At VLSI, for example, we always put ours on (look at any pictures of motherboards of early Macs).

The big change that Jack wanted to talk about was the consumerization of SoCs, and the effect that this is having on the design chain. Design used to be “all” digital, with a smart group of slightly eccentric designers down the hall (or in a separate company) who used spice and a layout editor and their bare hands to wrestle analog to the ground. Analog was something nobody worried about.

In the 90s, the strategy was to make it a separate chip. That way it could be done in an older process. So a system might be 4 digital chips and an analog chip that was either a standard product or designed and manufactured in a separate process.

But now all that is condensed into a single SoC: all the digital, and lots of it, and all the analog, and lots of it, on a single chip. This is a problem and an opportunity. eSilicon has historically, for business reasons, been more focused on networking than consumer, and over half their chips had a SerDes on them. Today virtually every chip they do has hundreds of lanes of SerDes. There are now so many variables involved in making the analog work that it is incomprehensible to a human. So it is going to have to get more automated whether the designers like it or not, just like when place and route first arrived and designers figured they could do better. For a few gates, yes, but for thousands it is just impossible.

Even picking the IP is an intractable task. TSMC has 12 flavors of 28nm process and 40 different commercial libraries available (from them and 3rd parties), so that’s 480 combinations right there.

Jack was asked in the Q&A why EDA companies don’t design the chips for at least some of their customers. He thought that this made a lot of sense, but there are big problems with the way Wall Street values EDA companies (many types of companies with mixed product lines, such as HP, have this problem too). To combine an eSilicon-type business with a Cadence-type business (and remember, Jack was CEO of Cadence too) would mean going from 95% software margins to 45-50% semiconductor margins, and nobody knows how to value a company that mixes those two businesses (one reason VLSI spun out Compass when I was there was that Wall Street got confused by companies with mixed product lines like that). So right now it is good business for eSilicon, but clearly a potential slot for EDA to step up and fill themselves. But of course that would mean they only get paid if the tools work…

Which leads to the next question: why is EDA just a $4B business? Jack’s view is that it is a flaw in the EDA business model whereby EDA charges for a capability regardless of success. Everyone else is at risk. If a chip doesn’t go to production, no wafers are bought, no parts are packaged, nothing is tested and nobody makes any money. Except EDA. But the quid pro quo is that if EDA is not going to take that risk, then it is capped at $4B (in fact, excluding IP, it is probably shrinking). In the early days, EDA had a hardware business model (Calma, Applicon etc.) and this model made sense. The software was thought of almost as an add-on to sell the hardware. But 15-20 years ago that stopped making sense. Jack’s estimate is that if EDA had switched to taking risk on a variable basis, then it would be a $40B business. More chips could be made, probably with a higher percentage failure rate (as business lines, not necessarily technical) but much more volume in total.

But he wouldn’t want to be the first CEO to make the switch. Wall Street would punish you for 2-4 years until the first designs in a new process node went from EDA software development through design to volume production. The challenge is how to find a way to switch without blowing up the existing business model completely. Jack said that they have done a few small deals where eSilicon, an EDA company and a customer work together on the basis of a long-term royalty. Possibly a good candidate to grow.

So the takeaway is that things need to be looked at differently than before. The only way to get these designs done is to be very silicon aware and to work with EDA partners, silicon partners, test and assembly and so on. This is leading to a re-aggregation of supply chains, since someone needs to take responsibility for everything. After all, if a package is broken then the foundry isn’t getting any wafer orders even though it isn’t their fault, and vice versa. For the designs they do, eSilicon takes that responsibility, and they invest a lot in communication and staffing: process people, manufacturing people, test people, package experts and so on.


Cadence VIP Enables Users to be First-to-Market with Mobile Devices Leveraging Latest MIPI, LPDDR3 and USB 3.0 OTG Standards

by Eric Esteve on 09-27-2011 at 1:56 am

The mobile devices market is simply exploding, with smartphone shipments going through the roof and tablets emerging so fast that some people think they will replace the PC (but this is still to be confirmed…). This leads mobile SoC designs to integrate increasingly more features, to support customer needs for more computing power and sophisticated video, audio and storage. To support these new features, improving both performance and power, new interface standards have emerged that SoC designers need to integrate under ever increasing time-to-market pressure, opening the door for external sourcing of new functions (Design IP) and the need for solutions that can accurately test the functionality of their design and ensure manufacturing success (Verification IP). Cadence has defined the problem pretty well (see image): it’s too hard to verify from scratch, as it requires too much time, effort and expertise.

That’s why the Verification IP market is so dynamic these days, and that’s why both Synopsys and Cadence not only communicate but are also very active: almost every week brings major news, an acquisition, or the release of new VIP supporting emerging standards. The involvement of Cadence in Verification IP for SoCs developed for the mobile industry (wireless handsets, media tablets, and portable consumer electronic devices) clearly appears when looking at the list of VIP offerings for mobile applications, with support for the following standards:

  • LPDDR3: This low-power version of the pervasive DDR3 memory standard enables customers to meet the high bandwidth and power efficiency requirements of mobile systems.
  • MIPI CSI-3: Providing an advanced processor-to-camera sensor interface, MIPI CSI-3 enables mobile devices to deliver the bandwidth required to enable high resolution video and 3D.
  • MIPI Low Latency Interface (LLI): This interface cuts mobile device production cost by allowing DRAM memory sharing between multiple chips.
  • USB 3.0 On-The-Go (OTG): Providing 10x the performance of the previous USB specification, USB 3.0 OTG allows consumers to rapidly transfer data, such as video and audio content, as well as quickly and effortlessly charge devices.
  • Universal Flash Storage (UFS): A common flash storage specification for mobile devices, UFS, a JEDEC standard, is designed to bring higher data transfer speed and increased reliability to flash memory storage.
  • eMMC 4.5: Designed for secure, yet flexible program code and data storage, eMMC 4.5, a JEDEC standard, enables high bandwidth, low pin-count solutions that simplify system design.
  • cJTAG: With its support for reduced pin count, power management and simplified multichip debug, cJTAG enables efficient testing of mobile devices, a key requirement for delivering high volume, high quality mobile devices.

The secret sauce for supporting emerging standards while they are still in development is to actively participate in the standards committees, for example the MIPI Alliance. According to Joel Huloux, chairman of the board, “MIPI Alliance continues to advance mobile interface standards with processor and peripheral protocols that streamline system development and expand the sophistication of today’s mobile devices. By ensuring verification support for these protocols at the earliest stage possible, companies such as Cadence enable mobile designers to embrace the latest standards and deliver products that transform the consumer’s mobile experience.” Cadence was also the first company to add support for ARM Ltd.’s AMBA 4 Coherency Extensions protocol (ACE), speeding the development of multiprocessor mobile devices, and the DFI 3.0 specification, which defines an interface protocol between DDR memory controllers and PHYs.

Another important ingredient for cooking a successful recipe, at the other end of the spectrum, is collaboration with the system manufacturers. If you take a look at the member list on the MIPI Alliance website, you realize that this collaboration could be with companies like Ericsson, Nokia, Panasonic, RIM or Samsung, even if Cadence does not disclose this information. Being present at both ends of the spectrum, participating in a standard’s elaboration well before the protocol is released and working closely with the system integrators, the final users, is a good way to fine-tune the verification product and release it as early as possible, allowing SoC designers to cope with increasingly shorter time to market.

Eric Esteve from IPNEST


Apple Plays Saudi Arabia’s Role in the Semiconductor Market

by Ed McKernan on 09-27-2011 at 12:08 am

The retirement of Steve Jobs left most commentators wondering if Tim Cook could lead Apple ever onward and upward. In truth, Tim Cook’s contribution on the operations side has been just as instrumental in the destruction of Apple’s PC and consumer electronics competitors as Jobs’ product vision. Under Tim Cook’s guidance, Apple has increased its gross margins from 29% to 41% in the last five years, and it looks to increase them further. Cook is executing what I will call the “Swing Consumer” strategy, a takeoff on Saudi Arabia’s “Swing Producer” position within the OPEC cartel. What it means is that those companies competing with Apple will be operating off of high-priced leftovers.

Saudi Arabia naturally took the role of “Swing Producer” within the cartel based on its vast oil reserves and its ability to easily lift over 12M barrels of oil a day at a cost of $2 per barrel, in a country with a relatively small population. The net of it all is that Saudi Arabia for many years could financially support itself on less than half of its full capacity and therefore ensure a maximum oil price in the market for any given economic situation. Despite the OPEC cartel having a dozen members, and gross margins of over 90% in Saudi Arabia’s case, the oil market has not collapsed the way the DRAM market does in perpetuity off a base of less than a handful of suppliers. Furthermore, Saudi Arabia recognizes that the kingdom itself is valued more on what’s still in the ground than on its current cash flow. 200B+ barrels of oil at $80 is more than $16T of assets, which is why short-term price fluctuations don’t matter as much as maintaining the long-term oligopoly. They always adjust supply to keep oil in an upper range.

Apple first moved into "Swing Consumer" mode in semiconductors when it based all of its new, high-growth products on NAND flash combined with commodity ARM processors. By 2005, Apple was taking roughly 40-50% of Samsung's NAND capacity, and growing. To maximize the opportunity, Apple decided in July 2009 to write a $500M check to Toshiba as a prepayment for NAND flash capacity at a discounted price. As the largest consumer of flash, coupled with commodity ARM processors that could be built anywhere, Apple found itself all alone, able to dictate the lowest worldwide pricing. The only exception would be if Samsung subsidized its own smartphone and tablet groups. All of Apple's competitors were left negotiating the second-best price in a market prone to spot shortages, with no chance of overturning Apple's component cost lead.

Apple's premium brand combined with its "Swing Consumer" logistics has put it into a position similar to Saudi Arabia's. At a moment's notice it can shift not only NAND flash capacity but also DRAM suppliers, LCDs, wireless chips, etc. Last week, an article confirmed Apple's plan to source more NAND and DRAM from Toshiba and Elpida and away from Samsung (see Apple Looks to Japan for DRAM, NAND Flash Supply). The article speculated that this was a response to the legal battles in which Apple is embroiled with Samsung. It is that and more: it is a chance for Apple to diminish Samsung's prospects of being a serious competitor in the smartphone and tablet market, and a leveling of the playing field. Apple needs both Toshiba and Samsung around as competitors, always sharpening their pencils.

As a growth company with strong product margins, Apple is able to offer suppliers a guaranteed forecast that is always expanding, while competitors like Dell and HP must remain tentative in their outlooks in the face of worldwide economic turmoil. As I discussed in an earlier article (Apple Will Nudge Prices Down in 2012: PC Market Will Collapse), margins for the PC OEMs and the retailers (e.g. Best Buy) are so thin that they are forced to under-build and under-forecast to suppliers, fearing that inventory left over at the end of a selling season will wipe out any gains accrued in its first few months. PC OEMs are thus in a death spiral, losing more and more market share as time passes, and with it their cost leverage over suppliers.

Imagine the task at hand for new HP CEO Meg Whitman as she tries to salvage this quarter's revenue while first having to reassure customers that HP is staying in the PC business. At the time of the last earnings call, when Apotheker announced that the tablet would be cancelled and the PC group spun out, HP lost immediate leverage with suppliers, who expected to receive order reductions and cancellations.

The near-term bottom for tech stocks, and in particular Dell and Apple, came the following day: August 18th. I would speculate that Dell and Apple reached out to suppliers to take advantage of HP's debacle. Now, as Whitman restarts the PC group's engine, HP has to go back to suppliers and beg for parts at higher prices. Worse, HP has to re-enter the 10%-margin PC business in order to reassure customers who are considering its higher-margin servers, networking, and services. Apotheker's critical mistake was believing that dropping the PC group would not impact the other business units. The phones went quiet.

Apple's complete domination of its supply chain through the "Swing Consumer" strategy will allow it to continue squeezing suppliers and improving margins. It does, however, have one more threshold to cross: its processor strategy. Apple will be a dual-CPU house for a long time, but it needs cost leverage over Intel. The two are playing a slow elephant dance before the ultimate partnership is signed, one that could benefit both.

In the past year, both Intel and Apple have implemented communication and corporate strategies to try to gain the upper hand on each other. As everyone knows, Apple is good at keeping secrets about things it truly doesn't want the world to know. It goes to the extent of tracking down missing iPhone prototypes mistakenly left in bars, or cutting off suppliers who talk too much about upcoming and shipping products. In areas where it wants to frame public opinion, it communicates in ways that merely appear secretive: like the recent reports that TSMC has signed on to fab the A6 and future A7 processors at 28nm and 20nm respectively. It's anybody's guess what an A7 is, or whether it exists, but it magically appears in print. One wonders where Intel fits in the scheme of things.

Perhaps we have been offered a hint at what is coming.

Midway through Intel's IDF conclave, where its upcoming 22nm chips were on full display, a simple press release announced that Intel was issuing debt for the first time in over 20 years (see Intel Announces Senior Notes Offering). The initial press release made no mention of the amount but did say the purpose was stock repurchases. Given that Intel had just raised its dividend in August and had been aggressively buying back stock with its massive operating cash flow, the offering seemed out of place.

The Wall St. Journal later confirmed that the debt offering raised $5B in total: a number familiar to anyone looking to build an advanced 22nm or 14nm fab. Perhaps this is Intel's way of saying it is ready to build an additional fab for a new customer. More on how this can play out in another column.


Semiconductor equipment spending beginning to decline

by Bill Jewell on 09-25-2011 at 7:41 pm

Semiconductor manufacturing equipment shipments have leveled off after a strong rebound from the 2008-2009 downturn. August 2011 three-month-average shipments based on combined data from SEMI (North American and European companies) and SEAJ (Japanese companies) were $2.9 billion, down from a peak of $3.2 billion in May 2011. Three-month-average bookings have dropped significantly to $2.2 billion in August, down 31% from the peak of $3.3 billion in August 2010. The book-to-bill ratio dropped to 0.78 in August, indicating continuing declines in billings.
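The book-to-bill ratio is simply three-month-average bookings divided by three-month-average billings. A quick Python sketch using the dollar figures quoted above shows the arithmetic; note that because those figures are rounded to the nearest $0.1B, the result lands near, rather than exactly on, the reported 0.78:

```python
# Book-to-bill ratio from the SEMI/SEAJ three-month averages quoted above.
# Figures are rounded to the nearest $0.1B, so the result is approximate.
bookings = 2.2   # August 2011 three-month-average bookings, $B
billings = 2.9   # August 2011 three-month-average billings (shipments), $B

book_to_bill = bookings / billings
print(f"Book-to-bill: {book_to_bill:.2f}")
# prints: Book-to-bill: 0.76
```

A ratio below 1.0 means new orders are coming in slower than equipment is shipping out, which is why a sustained 0.78 points to falling billings ahead.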
SEMI’s September 2011 forecast for semiconductor manufacturing equipment calls for a 23% increase in billings in 2011 followed by a decline of 3% in 2012. One of the largest capital spenders, TSMC, plans to cut spending by 19% in 2012 after a 25% increase in 2011, according to Taiwan press reports. What will be the impact of decreased spending on semiconductor capital equipment on semiconductor capacity and utilization? The latest available data from Semiconductor Industry Capacity Statistics (SICAS) for 1st quarter 2011 showed industry IC capacity utilization of 94.2%, the fifth consecutive quarter with utilization above 90%. Semiconductor shipments are currently sluggish. Recent forecasts for the 2011 market range from a decline of 2% to growth of 5%. Forecasters agree the semiconductor market will pick up to stronger growth in 2012, ranging from 5% to 10%. Our forecast at Semiconductor Intelligence remains at 4% for 2011 and 10% for 2012.

The worldwide economic outlook is very uncertain, making semiconductor manufacturers very cautious. The downward trend in semiconductor manufacturing equipment spending will slow the rate of capacity growth. If the forecasts of improving semiconductor market growth in 2012 hold true, IC capacity utilization should remain above 90% through at least 2011 and probably into at least the first half of 2012. Utilization in the 90% to 95% range is generally healthy for the industry: high enough for semiconductor manufacturers to remain profitable but not high enough to cause significant shortages.

Semiconductor Intelligence, LLC can perform a variety of services to provide you and your company with the intelligence needed to compete in the highly volatile environments of the semiconductor and electronics markets.