
Captain Ahab Calls Out for the Merger of nVidia and AMD

by Ed McKernan on 08-16-2011 at 8:00 pm

Call me Ishmael. Some years ago – in the mid-1990s – having little or no money in my purse and nothing particular to interest me on shore, I thought I would sail the startup ship Cyrix and see the watery part of the PC world. Whenever I find myself grim about the mouth or pause before coffin warehouses, and bring up the rear of every funeral I meet, I think back to the last words that came from Captain Ahab before the great Moby Dick took him under: “Boy, tell them to build me a bigger boat!”

You see, the great Moby Dick is not just any whale; it is the $55B great white sperm whale that has been harpooned many times and taken many a Captain Ahab to the bottom of the ocean. It still lives out there unassailable, despite the ramblings of the many new, shiny ARM boats docked on Nantucket Island, a favorite vacation spot of mine from my youth.

Perhaps there could be a great whaling ship constructed out of the battered wood and sails of the H.M.S. nVidia and the H.M.S. AMD, because the alternative is that they must go down separately. Patience wears thin for ATIC (Advanced Technology Investment Company), the Abu Dhabi investment firm that has poured billions of dollars into GlobalFoundries and AMD with the hope of being the long-term survivor in the increasingly costly Semiconductor Wars. To be successful, the company needs a fab driver larger than what nVidia and AMD represent separately.

Jen Hsun Huang is the most successful CEO to ever challenge Intel in the PC ecosystem, and yet he is not strong enough to overcome the Moore’s Law steamroller that naturally seeks to integrate all the functions of a PC into one chip. Both AMD and Intel have integrated chipsets and “good enough” graphics into their CPUs, thus limiting his leading revenue generator. He made a strategic move with Tegra to get out in front of the more mobile platforms known as Smartphones and Tablets, but they may not ramp fast enough to allow him to make it to the other side of the chasm.

AMD has pursued Intel forever but is now without a leader who can stop the carnage of a strategy that seeks to be Intel’s me-too kid brother. It bleeds with every CPU sold into the sub-$500 market. Lately, Intel has been on allocation, which has given AMD a profitable reprieve, but don’t count on it lasting forever as Intel eventually moves to the next node and adds more capacity.

There are huge short-term and long-term benefits should Jen Hsun decide to merge with AMD. In the short term, nVidia and AMD are in a graphics price war where the AMD sales guy tells the purchasing exec, “whatever nVidia bids, mark me down for 10% less and see you at the golf links at 4 o’clock.” They have lost key sockets in Apple’s product line as well as at other vendors. Merging with AMD raises revenue and earnings in an instant. The merged company would eliminate the duplicate graphics and operations groups.

Next, nVidia could implement the ARM+x86 multicore product strategy for the ultrabook market that I outlined in Will AMD Crash Intel’s $300M Ultrabook Party?. The market offers high growth, ASPs, and margins, and is a close cousin of the tablet, which nVidia is already targeting with Tegra.

Third, nVidia has gained traction in the High Performance Computing (HPC) market with Tesla. But don’t confuse HPC with data center servers: the data center runs x86 all the time. Intel has a $10B+ business there going to $20B in the next 3 years, and they are raising prices at will with no competition in sight. nVidia and AMD could team up to offer customers an alternative platform with performance and power tradeoffs between x86 and Tesla.

The icing on the cake is that this can all be financed by ATIC. Back in January, when Dirk Meyer was let go as CEO of AMD and the stock was $9, I speculated to a semiconductor analyst that AMD would be bought when it went under $5. Why $5? It’s psychological. The wherewithal to do this is in ATIC’s hands, but they have little time to spare.

ATIC owns 15% of AMD and 87% of GlobalFoundries. Today nVidia is worth $8B and AMD is worth $4.2B. Combined, they would be worth significantly more than $12B because the graphics competition would end and the joint marketing and manufacturing operations would consolidate. It is logical for ATIC to take a 20% ownership in nVidia and finance the rest of the purchase in any number of ways. Back in the DRAM downturn of the 1980s, IBM bought a 20% stake in Intel to guarantee they would be around until the 386 hit the market.

Now that the ECB and the Fed have lowered interest rates to 0% and have the printing presses running overtime, why wouldn’t ATIC finance the new H.M.S. Take-No-Prisoners?


Solido – Variation Analysis and Design Software for Custom ICs

by Daniel Payne on 08-15-2011 at 7:11 pm

Introduction
When I designed DRAM chips at Intel I wanted to simulate at the worst case process corners to help make my design as robust as possible in order to improve yields. My manager knew what the worst case corners were based on years of prior experience, so that’s what I used for my circuit simulations.
Continue reading “Solido – Variation Analysis and Design Software for Custom ICs”


TSMC 28nm and 20nm Update!

by Daniel Nenni on 08-15-2011 at 3:00 pm

First, I would like to congratulate Samsung on their first 20nm test chip press release. Some will say it is a foundry rookie mistake since real foundries do not discuss test chip information openly. I like it because it tells us that Samsung is 6-9 months BEHIND the number one foundry in the world on the 20nm (gate-last HKMG) process node. Samsung gave up on gate-first HKMG? 😉

Unfortunately, the latest news out of TSMC corporate is that 28nm revenues will be 1% of total revenues in 2011 versus the forecasted 2%. Xbit Labs did a nice article here. The official word is that:

“The delay of the 28nm ramp up is not due to a quality issue, we have very good tape-outs. The delay of ramp up is mainly because of softening economy for our customers. So, customers delayed the tape-outs. The 28nm revenue contribution in the Q4 2011 will be roughly about 1% of total wafer revenue,” said Lora Ho, senior vice president and chief financial officer of TSMC.

TSMC’s competitors, on the other hand, are whispering that there is a 28nm yield problem, using the past 40nm yield ramp issues as a reference point. Rather than speculate and pull things out of my arse, I asked people who actually have 28nm silicon how it is going. The unanimous answer was, “TSMC 28nm yield is very good!” Altera and Xilinx are already shipping 28nm parts. The other markets I know of with TSMC 28nm silicon are microprocessors, GPUs, and MCUs.

“We are far better prepared for 28nm than we were for 40nm. Because we took it so much more seriously. We were successful on so many different nodes for so long that we all collectively, as an industry, forgot how hard it is. So, one of the things that we did this time around was to set up an entire organization that is dedicated to advanced nodes. We have had many, many tests chips run on 28nm, we have working silicon,” said Jen-Hsun Huang, chief executive officer of Nvidia.

It is easy to blame the economy for reduced forecasts after what we went through in 2009 and the current debt problems being over-reported around the world. The recent US debt debacle is an embarrassment to every citizen of the United States who votes. Next election I will not vote for ANY politician currently in office, but I digress….

So the question is: Why do you think TSMC is REALLY reporting lower 28nm revenues for 2011?

Consider this: TSMC is the first source winner for the 28nm process node, without a doubt. All of the top fabless semiconductor companies will use TSMC for 28nm including Apple, AMD, Nvidia, Altera, Xilinx, Qualcomm, Broadcom, TI, LSI, Marvell, Mediatek, etc. These companies represent 80%+ of the SoC silicon shipped in a year (my guess).

One of the lessons semiconductor executives learned at 40nm is that silicon shortages delay new product deliveries, which cause billions of dollars in lost stock valuation, which gets you fired. Bottom line is semiconductor executives will be much more cautious in launching 28nm products until there is excess capacity, which will be mid 2012 at the earliest.

Other relevant 2011 semiconductor business data points:


  • The Android tablet market is DOA (iPad2 rules!)
  • The PC market is dying (Smartphone and tablets, Duh)
  • Mobile phones are sitting on the shelf (Are we all waiting for the iPhone5?)
  • Anybody buying a new car this year? Not me.
  • Debt, debt, unemployment, debt, debt, debt…….

    Not all bad news though: last Friday was the 30th anniversary of the day I met my wife, and here is how great of a husband I am. First I went with my wife to her morning exercise class: 30+ women and myself dancing and shaking whatever we’ve got. It was a very humbling experience, believe me! Next was a picnic on Mt. Diablo recreating one of our first dates, then dinner and an open-air concert at Blackhawk Plaza. Life as it should be!



    Google buying Motorola

    by Paul McLellan on 08-15-2011 at 10:48 am

    So Google is buying Motorola Mobility for $12.5B. If you are a partner of Google using Android then this has both upside and downside. The upside is that Motorola, having been in wireless for longer than almost anyone, presumably has a pretty good patent portfolio that can be used to defend against Apple, Nokia, Microsoft et al. The downside, of course, is that now Google has its own in-house handset company competing with you, and although right now they claim they will keep the playing field level, in the long run things don’t always work out that way.

    For now the partners are putting positive spin on it. For instance, here is the CEO of Sony Ericsson: “I welcome Google‘s commitment to defending Android and its partners.”

    Everyone else is equally positive. But it could turn nasty if Motorola (don’t know if they are keeping the name) is either very successful (and therefore makes everyone else less successful) or unsuccessful (in which case Google will be tempted to give it an edge by getting a newer or better version of Android).


    Cadence VIP Seminar: next stop after Denali party, August 25th in San Jose

    by Eric Esteve on 08-15-2011 at 10:42 am


    If you did not have the chance to attend the famous Denali party at DAC 2011, you may want to go to the Cadence VIP seminar to be held on Thursday, August 25, 2011, from 1:00 – 4:15pm at Cadence Headquarters: 2655 Seely Avenue, San Jose, Building 10. To register, click here. The atmosphere could be slightly different: during the Denali party the VIPs from Cadence were the stars of the show, whereas at the seminar the stars will be the VIP (Verification IP) for AMBA4 ACE, PCI Express gen-3, USB 3.0 and DDR4, to mention a few. In this seminar Cadence will present case studies from experts in the field addressing the most challenging issues in verifying today’s most important interfaces, such as the four listed above.

    I have blogged about Cadence VIP, or VIP in general, in the past:

    Yalta in EDA: Cadence stronger in VIP territory…

    Interface IP: VIP wiki

    IP would be nothing without VIP…but what is the weight of VIP market?

    If you can make it (and you live in Silicon Valley…), and you need to know more about VIP or want to be updated on the latest products, you will certainly make a wise investment: Cadence is leading the VIP market and has built a wide portfolio covering most of the existing interfaces! To register for this VIP seminar, just go here.

    Or, if you cannot make it, have a look at Cadence’s impressive VIP portfolio (just click and see the product on the Cadence web site):


    OPC Model Accuracy and Predictability – Evolution of Lithography Process Models, Part III

    by Beth Martin on 08-15-2011 at 7:00 am

    Wyatt Earp probably wasn’t thinking of OPC when he said, “Fast is fine, but accuracy is everything,” but I’ll adopt that motto for this discussion of full-chip OPC and post-OPC verification models.

    Accuracy is the difference between the calibrated model prediction and the calibration wafer result. Accuracy depends on several factors, principally the intrinsic ability to represent the patterning trends through target size, pitch, and pattern shape for 1D and 2D structures at a given process condition. Calibration test pattern design coverage is important whenever model accuracy is in question.

    Additionally, because you judge OPC model prediction against experimental data, you must consider the experimental errors associated with the metrology data. For an ensemble of different test patterns, a model’s accuracy is limited by the experimental noise “floor.” Multiple repeat measurements (across wafer, across field) provide a better statistical representation and lower this noise contribution to the model. It is interesting to note that the standard error in the determination of the mean for typical OPC calibration structures is 0.5 nm for 1D and 1.5 nm for 2D.
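As a quick illustration of that noise floor, here is a minimal sketch (plain Python with invented CD values, not actual OPC metrology data) of the standard error of the mean, s/√n, and how pooling repeat measurements lowers it:

```python
import math
import random

def standard_error_of_mean(measurements):
    """Standard error of the mean: sample standard deviation / sqrt(n)."""
    n = len(measurements)
    mean = sum(measurements) / n
    variance = sum((x - mean) ** 2 for x in measurements) / (n - 1)
    return math.sqrt(variance) / math.sqrt(n)

# Simulated repeat CD measurements (nm) of a single test pattern:
# a true CD of 45 nm with ~1.5 nm of single-measurement noise
# (made-up numbers for illustration only).
random.seed(0)
few  = [45.0 + random.gauss(0, 1.5) for _ in range(4)]   # 4 repeats
many = [45.0 + random.gauss(0, 1.5) for _ in range(36)]  # 36 repeats

# Going from 4 to 36 repeats (3x in sqrt(n)) should cut the standard
# error roughly threefold, lowering the noise contribution to the model.
print(standard_error_of_mean(few))
print(standard_error_of_mean(many))
```

With across-wafer and across-field repeats pooled this way, the standard errors quoted above set the floor below which tightening the model fit is just chasing noise.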

    The degrees of freedom in the model will interact with the metrology noise such that it is possible to “over fit” the physical phenomena and start fitting the experimental noise. How can you quantitatively express the accuracy of a model? Metrics include maximum error, error range, chi-squared goodness of fit, and others. But one of the most useful is the “root mean square error value,” or errRMS (Equation 1), associated with the test pattern ensemble. The weighting (w) allows users to assign more importance to certain known critical design pitches. CDs may be used instead of EPE as well.

    Equation 1. errRMS = sqrt( Σ_i w_i (CDsim_i − CDmeas_i)² / Σ_i w_i ), where CDsim_i is the model mean for each point i (measurement location), CDmeas_i is the data mean for each point i, and w_i is the user-specified weighting for each point i.

    An interesting variant of RMS error (see Schunn and Wallach 2005), which accounts for sample metrology error directly, is the scaled RMS deviation (Equation 2). This objective function penalizes errors at precisely known data points more heavily than errors at data points whose CDs have larger uncertainties.

    Equation 2. scaled RMS = sqrt( (1/k) Σ_i (CDsim_i − CDmeas_i)² / (s_i²/n_i) ), where s_i is the standard deviation for each data mean i, n_i is the number of data values contributing to each measured mean, and k is the number of points i.
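Both fitness metrics take only a few lines to compute. The sketch below is a generic Python rendering of the weighted RMS error (Equation 1) and the scaled RMS deviation (Equation 2), written from the variable definitions in the captions; the function names and the toy CD values are my own, not from any OPC tool:

```python
import math

def err_rms(cd_sim, cd_meas, w):
    """Weighted RMS error (Equation 1):
    sqrt( sum_i w_i * (CDsim_i - CDmeas_i)^2 / sum_i w_i )."""
    num = sum(wi * (sim - meas) ** 2
              for sim, meas, wi in zip(cd_sim, cd_meas, w))
    return math.sqrt(num / sum(w))

def scaled_rms(cd_sim, cd_meas, s, n):
    """Scaled RMS deviation (Equation 2): each squared error is divided
    by the squared standard error of that data mean (s_i^2 / n_i), so
    precisely known data points are penalized more heavily."""
    k = len(cd_sim)
    total = sum((sim - meas) ** 2 / (si ** 2 / ni)
                for sim, meas, si, ni in zip(cd_sim, cd_meas, s, n))
    return math.sqrt(total / k)

# Toy ensemble: simulated vs. measured CDs (nm) for three test patterns.
cd_sim  = [45.2, 60.5, 32.0]
cd_meas = [45.0, 61.0, 31.0]
w = [2.0, 1.0, 1.0]   # weight a known critical pitch more heavily
s = [0.5, 0.5, 1.5]   # standard deviation of each data mean
n = [9, 9, 9]         # repeat measurements behind each mean
print(err_rms(cd_sim, cd_meas, w))
print(scaled_rms(cd_sim, cd_meas, s, n))
```

Note how the third point, with its looser 1.5 nm uncertainty, contributes less to scaled_rms, relative to the other points, than its raw 1 nm error would suggest.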

    A related, but ultimately more important characteristic is model predictability. The duty of the OPC or post-OPC verification model is to correctly predict the patterning for every possible layout configuration that can appear per the design rules in the full chip. The number of unique design constructs for low-k1 lithography is tremendous; several orders of magnitude more than could ever reasonably be used to train the model. If you divide a master set of patterns into two sets – use one half to train the model and the other half to verify – the errRMS fitness should be comparably low on both sets.
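The split-ensemble check is easy to demonstrate schematically. The toy below is deliberately not a lithography model – just a linear “process” with invented noise – but it shows the signature described above: a memorizing (over-fit) model scores perfectly on its calibration half and degrades on the verification half, while a model with appropriate degrees of freedom scores comparably on both:

```python
import math
import random

def rms(errors):
    """Plain RMS of a list of fit errors."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# Toy "process" (invented numbers): CD responds linearly to pitch,
# plus ~1 nm of metrology noise.
random.seed(1)
pitches = [80 + 10 * i for i in range(20)]
cds = [0.5 * p + 5 + random.gauss(0, 1.0) for p in pitches]

# Split the master ensemble in two: even indices calibrate, odd verify.
pairs = list(zip(pitches, cds))
cal = pairs[0::2]
ver = pairs[1::2]

# "Physical" model: least-squares straight-line fit to the calibration set.
n = len(cal)
sx = sum(p for p, _ in cal)
sy = sum(c for _, c in cal)
sxx = sum(p * p for p, _ in cal)
sxy = sum(p * c for p, c in cal)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

def line(p):
    return slope * p + intercept

# "Over-fit" model: memorizes the calibration points exactly and uses
# the nearest calibration point everywhere else.
def memo(p):
    return min(cal, key=lambda pc: abs(pc[0] - p))[1]

for name, model in (("line", line), ("memo", memo)):
    cal_rms = rms([model(p) - c for p, c in cal])
    ver_rms = rms([model(p) - c for p, c in ver])
    print(name, round(cal_rms, 2), round(ver_rms, 2))
# The memorizing model scores a perfect 0 on calibration but degrades
# badly on verification; the straight-line model scores similarly on both.
```

In a real flow the two halves would be test-pattern ensembles and the models OPC models, but the verification-versus-calibration comparison works the same way.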

    Another method involves including the complex 2D structures of the verification patterns, and then comparing the simulated contour with the experimental contour. If verification fitness is significantly worse than calibration fitness, the model is not sufficiently predictive. In addition, the model must account for CD variability arising from manufacturing process variability. Principal among these are focus, exposure, and mask CD variations (Figure 1).

    Figure 1. Example plot of model errRMS for various focus and exposure conditions. The model was calibrated at dose = Nom and focus = 0; model fitness was then characterized for various defocus and exposure conditions.

    As will be outlined in a future installment of this series, the ability of a model to faithfully predict various pattern failure modes is also important. These failures typically manifest more severely as the manufacturing process parameters vary.

    A final consideration related to predictability is model portability. Of course, if an entirely new photoresist material, PEB temperature, or etch recipe is implemented for manufacturing, you will need a new model calibration. But if some aspect of the exposure step is slightly altered, such as NA or illumination source intensity / polarization, you should be able to “port” the same resist model and change only specific optical parameters. This is particularly helpful in early process development, when an existing process model is used to simulate next-node printing with whatever new RET capabilities may become available. The degree to which the model can decouple optical exposure from resist processing is related not only to the details of the resist model, but also to the nature of the approximations “upstream” in representing the mask and optical system. The details of these mask and optical models will be the topic of my next installment in this series of articles. Stay tuned.

    –John Sturtevant, Mentor Graphics

    P.S. In case you missed them, go read Part 1 and Part 2 of this series. Then continue with Part 4.


    Will AMD Crash Intel’s $300M Ultrabook Party?

    by Ed McKernan on 08-14-2011 at 7:00 am

    Let’s face it, the ships are burning in the harbor and there is only one way out of here for AMD. It needs to crash Intel’s exclusive $300M Ultrabook Party in order to grab a slice of the future, more profitable PC market.

    Intel Capital Creates $300 Million Ultrabook Fund
    Continue reading “Will AMD Crash Intel’s $300M Ultrabook Party?”


    ANSYS/Apache

    by Paul McLellan on 08-13-2011 at 2:43 pm

    Last week I met with Andrew Yang, erstwhile CEO of Apache Design Solutions and now formally President of Apache Design Inc., a wholly owned subsidiary of ANSYS. The merger formally closed at the start of the month. Within ANSYS, Apache is positioned as Chip-aware System-level Engineering Simulation. ANSYS is pretty much completely focused on different kinds of simulation and on simulation-driven product development.

    ANSYS will keep Apache as a subsidiary and, in particular, the Apache name (and presumably the names of its products) will not be going away. The system design challenges that Apache is addressing fall into four main areas: power integrity, signal integrity, thermal/mechanical-stress integrity and electro-magnetic interference integrity. Most of these are dominated by various aspects of switching power.

    Anyone who has been through many mergers knows just how much time can be burned up in dealing with overlapping products, so this merger has it easy. There is no overlap at all. However, there is plenty of customer overlap.

    ANSYS is a very different company from even a big EDA company. It makes about 1/3 of its money on mechanical, 1/3 on fluid dynamics and 1/3 on electronics (now including Apache of course). Andrew was rightly proud that all 20 of the top 20 semiconductor companies used Apache, with over 100 total customers. ANSYS has over 40,000 customers (including 97 of the Fortune 100).

    In some ways, EDA is an easy industry: look at the semiconductor roadmap, find some effect that is currently second or third order but which will become important, and produce a solution that is ready just when designers need it. However, getting the timing right is very difficult and more companies/products fail from being too early than being too late. Apache has done a superb job of getting this timing just right so that as power and noise became more important they had the best (or sometimes only) products to perform the analysis.

    For example, a few years ago they decided that it was no longer possible to just look at the chip, they needed simultaneous analysis of chip, package and board. They acquired Optimum to jump start the package and board side of things and built a whole infrastructure for creating power models for chips and being able to analyze what came to be called chip-package-system (CPS). With the coming of 3D chips and through-silicon vias (TSVs) there are even more challenges in this area, especially the thermal issues once many die are stacked and it is hard to get the heat out.

    Based on that track record, I asked Andrew what is next. What second order effects are we going to have to start to worry about. He reckons that it is quantum effects and the impact on reliability. The margin on transistor thresholds is going away as voltages continue to decrease. Leakage will get even more out of control (and since leakage also increases with temperature there are very real possibilities of thermal runaway). It is going to get very hard to guarantee that a chip will work correctly in the system and that it will be possible to manufacture.

    ANSYS (with Apache) is extending chip-package-system to include other parts of the system such as multi-physics simulation. In many areas, most obviously automotive and aerospace, electronics is intimately tied in with mechanical and simulation-driven product development needs to combine these previously independent areas.




    Current State of Tablet Products

    by Daniel Nenni on 08-12-2011 at 12:57 pm


    Tablets are hot items these days. There is exuberance about the speed of application processors, size of the internal memory, capabilities of the operating systems, WiFi or 3G/4G connectivity, quality of the display, cameras megapixels, battery life, tablet weight, etc. All of these features are very important, no question about that; how else could one compare one product to the other? But let us step back for a moment and look at the bigger picture and see if there are some common threads.

    I believe several trends deserve our attention and should be spelled out. These are: the proliferation of the operating systems, proliferation of the form factors, and price point.

    The proliferation of the operating systems, as shown in Table 1, is evident. This is not exactly a testament to a harmonization effort; it looks more like the Wild West, where everyone is individually trying to position themselves as a serious contender by getting consumer attention in this newly created market segment, without much consideration for standardization. There are currently six major operating systems (Android, iOS, WP7, QNX OS, WebOS, and MeeGo) jockeying for the front position, and a few more in the works, recently announced by several companies; and every single one is creating its own ecosystem and applications.

    The proliferation of different form factors (anything from 5” to 10” displays), as shown in Table 2, is another characteristic of the current products, suggesting that the entire tablet market segment is in a state of ‘soul searching,’ trying to find out what the customers prefer. This is not necessarily a bad thing, but it suggests that tablets as products are indeed in their infancy and significant transformations of these products should be expected in the next couple of years before consumers cast their final judgment.

    Finally, here is the third important point related to the current tablet products. The selling price of the tablets from all major manufacturers is still high – clustering around $500. Thus, it should not come as a surprise that the second highest selling product after the iPad2 tablet in the first quarter of 2011 was Barnes and Noble’s 7 inch Nook Color priced at $250, according to the report from DigiTimes. This trend is expected to continue for the rest of the year. A price point of $250 is half that of the rest of the crowd, and certainly represents one of the best values.

    Here is the interesting part. Nook Color is not the most capable tablet, but it is a solid performer. It runs the Froyo (Android 2.2) operating system, tuned by Barnes & Noble for selling books and magazines. The application processor is TI’s OMAP 3621 (the same line of OMAP 3 processors found in Motorola’s Droid X and Droid2 smartphones). This is a single-core application processor, not exactly top of the line like the dual-core Tegra 2 from nVidia (which is found in the top-tier tablets). And yet Nook Color, the modest performer that it is, still outsells all other feature-rich Android tablets.

    Table 1: Operating Systems for Tablets and Smartphones

    Company Operating Systems Product Top Tier OEMs
    (manufacturers of smartphones and tablets)
    1. Google Android 2.2/2.3 (Froyo/Gingerbread) smartphones, tablets Google, MOT, Samsung, HTC, LG, Sony Ericsson, Dell, ZTE, Huawei, Lenovo, Asus, NEC, Sanyo, Sharp
    Android 3.1 (Honeycomb) tablets MOT, Samsung, HTC, LG, Sony, Dell, ZTE, Huawei, Lenovo, Asus, Sharp
    Android 2.4 (or 4x) (Ice Cream Sandwich) smartphones, tablets Expected: Google, MOT, Samsung, HTC, LG, Sony, Dell, ZTE, Huawei, Lenovo, Acer, Asus, NEC, Sanyo, Sharp
    2. Apple Apple iOS 4 iPhone, iPad Apple
    Apple iOS 5 iPhone, iPad Apple
    3. BlackBerry QNX Neutrino OS tablets RIM
    BlackBerry OS 6 smartphones RIM
    BlackBerry OS 7 smartphones RIM
    Blackberry Colt smartphones, tablets RIM
    4. MSFT Windows WP7.1 (Mango) smartphones, tablets Nokia, Samsung, HTC, LG, Sony Ericsson, Dell, ZTE, Huawei, Lenovo, Asus, MSI
    WP8 smartphones, tablets Expected: Nokia, MOT, Samsung, HTC, LG, Sony Ericsson, Dell, ZTE, Huawei, Lenovo, Asus, MSI
    5. HP webOS 2.0 smartphones, tablets HP
    webOS 3.0 tablets HP
    6. Intel MeeGo 1.1 smartphones, tablets Intel, Acer, Lenovo, MSI
    7. Nokia Symbian smartphones Nokia

    Note: Symbian is listed here since there still will be a number of legacy smartphones from Nokia

    But there is an additional reason for the popularity of Nook Color. Once launched, it got help from the Android cell phone and tablet community of developers, which developed a modded version of Google Android called CM7 specifically for Nook Color, all in an attempt to enrich this product. In collaboration with another group, XDA Developers, they created stable software. This software is basically Android 2.3.4 (Gingerbread) that can be booted on Nook Color via a micro SD card loaded with the software. The end result is that Nook Color can operate in dual mode, using either the Froyo that came with the original device from Barnes & Noble, or the advanced Gingerbread operating system loaded on a micro SD card. One can purchase a preloaded 8, 16, or 32 GB micro SD card from n2a SD Cards via Amazon. In essence, all together you can get a full-blown 7” tablet with the same capabilities as Samsung’s Galaxy Tab for under $300. And that is a real deal! This is a fine example of the importance that open source projects can play in shaping a product and a market.

    Table 2: Tablet Form Factor

    Screen Size Major OEMs Note
    5 Inch Dell, Archos, Samsung, Sony Sony-5.5 ” dual screen tablet
    7 Inch Samsung, Barnes & Noble, Dell, HTC, RIM, Acer, ViewSonic, Huawei, HP, Sharp, MSI, Mot Nook Color at $250; ViewSonic at $250
    8 Inch Vizio, Archos Vizio at $299
    9 Inch LG, Pandigital, Archos, Samsung, Amazon Includes 8.9″ size
    10 Inch iPad, Mot, Samsung, Dell, HP, MSI, Acer, Archos, HTC, Sony Size from 9.7″ to 10.1″

    Note: Listed OEMs are shown as examples only

    The market success of the Nook Color has not gone unnoticed. Other tablet manufacturers are getting the message about the critical role of tablet price for market penetration. ViewSonic has announced a new lower $250 price for its 7” tablet, and so did Vizio for its 8” tablet, dropping the price to $299. The latest company to follow the trend is HP, dropping the price for its 16 GB 10” TouchPad that runs webOS 3.0 to $399. Consumers will certainly love this trend.

    Lj. Ristic, Managing Director, Mobile Markets, Petrov Group, Palo Alto, CA



    nVidia: "30 Days From Going Out of Business"

    by Ed McKernan on 08-11-2011 at 9:36 pm

    Jen Hsun Huang, the CEO of nVidia, has a phrase he often repeats to his employees: “We are 30 days from going out of business.” With product cycles as short as 6 months, the troops are on a constant march to revenue. The earnings conference call on August 11th highlighted two critical pieces of information. First, is the success that they are having in growing their PC graphics revenue despite the Sandy Bridge onslaught. Second, and more important, there is an internal and now a publicly articulated goal that they will reach a $1B revenue run rate for Tegra in 2012.

    As outlined in an earlier SemiWiki blog titled “Intel’s Barbed Wire Fence Strategy” (http://www.semiwiki.com/forum/content/651-intel-s-barbed-wire-strategy.html), Intel is in the business of expanding its fence lines. Their current target is nVidia’s graphics business. It represents not only additional revenue but also the ability to deny nVidia the funds to develop Tegra chips that will be used competitively in Smartphones and Tablets using Android, as well as some Windows 8 mobiles.

    All this may sound like Intel is the one to be most at risk to market share loss in 2012, especially with the launch of Windows 8 but, as a matter of fact, it is the other way around. First recognize that nVidia has finally returned to its peak run rate of $1B a quarter that they reached in 2007. Meanwhile, Intel is on track to do $55B in revenue this year or nearly 50% higher than 2007. Intel has the additional profits to crank out more designs targeting more segments. But there are clouds of uncertainty.

    Windows 8, as understood by most people, unlocks the whole PC market to ARM-based processors. Untrue! To get Win 8 to be light on its feet – meaning a small memory footprint and fast boot/resume – Microsoft had to make compromises. First and foremost, Microsoft had to limit the hardware ecosystem. A closed system with support for fewer I/O devices makes life easier. No more Swiss Army Knife. How well will Win 8 mobile run? We won’t know until it ships. The reduced hardware ecosystem should be fine for tablets, but a “clamshell” design is open to interpretation. Are they reduced netbooks?

    Intel’s challenge is much different than nVidia’s but still attainable. Back in May, at their Analyst Meeting, Paul Otellini announced they were dropping their Thermal Design Point (TDP) for processors from 35W to 17W. We now know what drove them to this decision, or better yet, who drove them to this decision.

    A story broke yesterday in the Wall St. Journal where it was noted that Apple informed Intel that it better drastically slash its power consumption or risk losing Apple’s business. As an Intel exec said, “It was a real wake up call to us.” For more on the story – follow the link:

    http://blogs.wsj.com/digits/2011/08/10/intel-sets-300-million-fund-to-spur-ultrabooks/

    As I mentioned in an earlier blog, the MAC Air is driving the mobile market. It uses Intel ULV processors that have a TDP of 17W (50% lower than regular Intel mobile Sandy Bridge CPUs) and sell for a minimum of $220. The price of the CPU is based on the yield Intel gets per wafer – in this case <50%. Intel needs to get to 7W ideally to make Apple happy in the near term. And they need to offer an entry-level price closer to $75, so Apple can take the MAC Air to $799, thereby sucking the oxygen out of the notebook PC market. This is where I believe Intel will be one year from now with the 22nm Ivy Bridge ULV (a straight die shrink of Sandy Bridge at a similar MHz to today’s ULV). If this still seems high relative to ARM, it is, and Apple has probably informed Intel that they need a 3-5W TDP with Haswell as they go even more aggressive on a future MAC Air.

    Sound confusing? Consider this like two great armies rushing to the same spot at the front. Each has their own unique strengths and weaknesses. In the end, it is the market that lies just a tad above Apple’s entry-level $499 iPad. Call it $499+1.