
Improving Analog/Mixed Signal Circuit Reliability at Advanced Nodes
by glforte on 12-07-2011 at 3:52 pm

Preventing electrical circuit failure is a growing concern for IC designers today. Certain types of failures, such as electrostatic discharge (ESD) events, have well-established best practices and design rules that circuit designers should follow. Other issues have emerged more recently, such as how to check circuits for correct supply connections when there are different voltage regions on a chip, along with other low-power and electrical overstress (EOS) issues. While these topics are not unique to a specific technology node, for analog and mixed-signal designs they become increasingly critical as gate oxides get thinner at the most advanced nodes and circuit designers continue to put more and more voltage domains into their designs.

In addition to ESD protection circuit errors, some of the emerging problems that designers may face at advanced nodes include:

  • Possible re-spins due to layout errors

    • Thin-oxide/low-power transistor gates driven by the wrong (too high) voltage supply, causing transistor failures across the chip
    • Such errors can also cause gradual degradation over time, leading to reliability issues and product recalls
  • Chip reliability and performance degradation

    • Un-isolated ground/sink voltage levels between cells or circuits
    • High-voltage transistors operating outside saturation because of an insufficient (too low) supply voltage

TSMC and Mentor Graphics (MGC) have worked together to define and develop rule decks for Calibre® PERC™ that enable automated advanced circuit verification addressing many of these issues. For example, in TSMC Reference Flows 11 and 12, in AMS Reference Flows 1 and 2, and in standard TSMC PDK offerings, MGC and TSMC have collaborated to provide checks for ESD, latch-up, and multi-power-domain verification at the 28nm and 40nm nodes. The companies estimate that by using a robust and automated solution like Calibre PERC, users can achieve over 90% coverage of advanced electrical rules with no false errors and runtimes measured in minutes. This is a significant improvement over marker layers, which may achieve around 30% coverage and often produce false positives, and far better than visual inspection, which typically achieves only about 10% coverage and is extremely labor intensive.

Calibre PERC introduces a different level of circuit verification capability because it can use both netlist and layout (GDS) information simultaneously to perform checks. In addition, it can apply topological constraints to verify that the correct structures are in place wherever circuit design rules require them. Here is a representative list of the checks that Calibre PERC can be used to perform (a conceptual sketch of one such check follows the list):

  • Point to point resistance
  • Current density
  • Hot gate/diffusion identification
  • Layer extension/coverage
  • Device matching
  • DECAP placement
  • Forward biased PN junctions
  • Low power checking

    • Thin-oxide gate considerations, e.g., maximum allowable voltage
    • Voltage propagation checks, e.g., device breakdown and reverse current issues
    • Detect floating gates
    • Verify correct level shifter insertions and correct data retention cell locations
  • Checks against design intent annotated in netlists

    • Matched pairs
    • Balanced nets/devices
    • Signal nets that should not cross
    • MOS device guard rings
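
To make the flavor of these checks concrete, below is a minimal, purely illustrative Python sketch of a voltage-propagation style check over a simplified netlist model. It is not Calibre PERC rule-deck syntax; the supply values, device names, one-hop connection map, and the 1.0 V thin-oxide limit are all assumptions made up for the example.

```python
# Illustrative only: a toy voltage-propagation check, not Calibre PERC syntax.
# Netlist model, supply values, and the 1.0 V thin-oxide limit are assumptions.

from collections import namedtuple

MOS = namedtuple("MOS", "name gate source drain oxide")  # oxide: "thin" or "thick"

# Hypothetical supply nets and their nominal voltages.
SUPPLIES = {"VDD_CORE": 0.9, "VDD_IO": 1.8, "VSS": 0.0}

# Hypothetical resistive/short connections that propagate a supply onto a net.
CONNECTIONS = {"net_a": "VDD_IO", "net_b": "VDD_CORE"}

DEVICES = [
    MOS("M1", gate="net_a", source="VSS", drain="out1", oxide="thin"),
    MOS("M2", gate="net_b", source="VSS", drain="out2", oxide="thin"),
]

THIN_OXIDE_VMAX = 1.0  # assumed maximum allowed gate voltage for thin-oxide devices


def propagated_voltage(net):
    """Return the supply voltage reaching a net, if any (one-hop propagation)."""
    supply = CONNECTIONS.get(net, net)
    return SUPPLIES.get(supply)


def check_thin_oxide_gates(devices):
    """Flag thin-oxide gates tied to a supply above the assumed limit."""
    violations = []
    for dev in devices:
        v = propagated_voltage(dev.gate)
        if dev.oxide == "thin" and v is not None and v > THIN_OXIDE_VMAX:
            violations.append((dev.name, dev.gate, v))
    return violations


if __name__ == "__main__":
    for name, net, v in check_thin_oxide_gates(DEVICES):
        print(f"ERROR: thin-oxide gate of {name} on {net} sees {v} V")
```

A production check would, of course, propagate voltages through the full circuit graph, consider layout connectivity extracted from GDS, and take limits from the foundry rule deck rather than hard-coded constants.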

Customers are constantly finding new ways to employ Calibre PERC to automate new types of circuit checks. Leave a reply or contact Matthew Hogan if you would like to explore how Calibre PERC might be used to improve the efficiency and robustness of new or existing checks.

by Steven Chen (TSMC) and Matthew Hogan (Mentor Graphics)

Acknowledgements
The authors would like to recognize members of the TSMC R&D team, Yi-Kan Cheng and his teams (Steven Chen, MJ Huang, Achilles Hsiao, and Christy Lin), as well as the entire Calibre PERC development and QA team, for their support and dedication in making these outstanding capabilities possible.

This article is based on a joint presentation by TSMC and Mentor Graphics at the TSMC Open Innovation Platform Ecosystem Forum. The entire presentation is available online on the TSMC website.



How Fast (and accurate) is Your SPICE Circuit Simulator?
by Daniel Payne on 12-06-2011 at 6:17 pm

In my dad's generation they tweaked cars to become hot rods, while in EDA today we have companies that tweak SPICE circuit simulators to be crowned speed champions. The perennial question, though, is: "How fast and accurate is my SPICE circuit simulator?"


Methodics can Now use www.methodics.com after Domain Name Battle in EDA
by Daniel Payne on 12-06-2011 at 5:36 pm

Imagine trying to run your EDA business only to have a competitor squat on your domain name and then make disparaging remarks about you. This sounds like a match made for reality TV; however, it is quite real, and now this chapter in EDA has a happy ending because Methodics can use www.methodics.com as its domain name.


The Bad Guy
We've been blogging at SemiWiki about how Shiv Sikand of competitor IC Manage registered www.methodics.com in frustration after a sales employee left for Methodics. The content published at www.methodics.com raised the controversy considerably because it falsely implied that Methodics was out of business.

The Resolution
Fortunately, businesses with domain name disputes have recourse through the World Intellectual Property Organization (WIPO). On November 28, 2011, Methodics won the domain name www.methodics.com, which it now uses for its site, replacing www.methodics-da.com.

IC Data Management Companies
There are a few EDA companies offering tools that help manage the IC design process:

Conclusion
Don't cybersquat on your competitor's company name or product name; instead, compete based on the merits of your product and the skills of your sales force.


EDA mergers: Accelicon acquired by Agilent
by Daniel Payne on 12-06-2011 at 4:51 pm

Agilent acquired EEsof back in 1999, and now the EEsof group has acquired Accelicon, effective December 1, 2011. The terms of the deal were not disclosed.

SPICE circuit simulators are only as accurate as their models and algorithms. On the model side, Accelicon provides EDA tools to create SPICE models based on silicon measurements:

  • Model Quality Assurance
  • Model Builder Program
  • DFM-aware PDK verification
  • SPICE Model Services


Capacitance and IV curves for MOS devices

Accelicon has partnered with many other EDA companies to fit into standard flows:

  • Synopsys HSPICE
  • Cadence Spectre
  • Mentor Eldo
  • Berkeley DA AFS
  • IPL Alliance
  • Device models: BSIM3v3, BSIM4, BSIMSOI, BJT
  • DRC/LVS tools: Assura, Calibre, Hercules

The Advanced Model Analysis flow:

SPICE model services include a test chip design, measurements from silicon, then running through Model Builder Program (MBP) and Model Quality Assurance (MQA):

Competitors
We have several EDA companies competing in this space:

  • Silvaco – UTMOST IV
  • Agilent – IC-CAP
  • ProPlus – BSIMPro, BSIMProPlus
  • Synopsys – Aurora
  • Accelicon – MQA, MBP

Summary
I don’t see any disruption in the EDA business with this acquisition because we have so many sources for SPICE models. Accelicon founder Dr. Xisheng Zhang started the company in 2002 and hopefully received a fitting reward for building up the business over the past 10 years.

See the Wiki page of all known EDA mergers and acquisitions.


Mark Milligan joins SpringSoft
by Paul McLellan on 12-06-2011 at 2:01 pm

Mark Milligan recently joined SpringSoft as VP Corporate Marketing. I sat down with him on Monday to get his perspective on things.

He started life, as so many of us, as an engineer. He was an ASIC designer working on low-level microcode for the Navy Standard Airborne Computer at Control Data. It was actually the first ASIC that they had done. It was the early days of RTL level languages and Mark worked on the simulation environment to verify the ASICs.

Teradyne had bought several companies and built the first simulation backplane, so Mark switched to marketing and did the technical marketing for that product line.

Then he moved to the West Coast and joined Sunrise Test Systems, which Viewlogic acquired (and which eventually ended up in Synopsys). There he was an early advocate of DFT and scan. Funnily enough, one pushback they used to get on running fault simulation was that designers didn't want to know how bad coverage was: there wasn't time or good tooling to get coverage up, and it was typically too late to bite the bullet and switch to a full-scan methodology on the fly.

A spell at CoWare and then VirtualLogix gave him a new perspective on embedded software (as did my own tours of duty at VaST and Virtutech).

When Mark arrived at SpringSoft they had commissioned a customer satisfaction survey. He was pleased to discover that their customer satisfaction was 25% higher than any of Synopsys, Cadence, Mentor or Magma.

One challenge he feels SpringSoft faces is that products like Laker and Verdi have better name recognition than SpringSoft does itself.

From 2008 to the present, SpringSoft has continued to grow and has stayed profitable. In fact, he believes they are the most profitable public EDA company (as a percentage of revenue, of course; Synopsys certainly makes more total profit). They are around 400 people, about three-quarters of them in Taiwan.

This profile (profitable, growing, and medium-sized) allows them to focus on specific pain areas and bring innovation to customers. They are big enough to develop and deliver solutions, but small enough that they don't have to try to do everything.

One similarity Mark noticed with a previous life: in the past, people wanted to know how good their test solution was (fault simulation), and now everyone wants to know how good their testbenches are, which is a harder problem.

We talked a bit about innovation. Historically, most innovation has come from small startup companies, but these are no longer being funded in EDA. On the other hand, we have a few medium-sized EDA companies: SpringSoft, Atrenta, Apache (now part of Ansys but still run as an EDA company). In the areas they cover, which is certainly not everything, there has been a lot of innovation, broadening out from single products to portfolios that address a problem area.


Nvidia's Chris Malachowsky on "Watt's next"
by Paul McLellan on 12-06-2011 at 12:31 pm

The video and slides of the CEDA lunch from a month or two ago are now (finally) up here. Chris Malachowsky presented "Watt's next." Chris is one of the founders of nVidia and is currently its senior VP of research. He started by talking a bit about the nVidia product line but moved on to supercomputers and their power requirements. Of course nVidia builds graphics chips that go in PCs and phones, but the basic parallel compute engine in those chips can be harnessed for other tasks.

Given the title of the talk, you won't be surprised that he spent most of the presentation on the challenges of power. Unless you've been under a rock for the last decade, you know that power is one of the biggest challenges in chip design today. Computer architecture is one area that can make a big contribution, along with all the techniques that have been developed at the SoC level, and the microarchitecture can make an enormous difference. But moving data around is the really big problem: which uses more power, a 64-bit floating-point multiply-add, or moving one of the operands 20mm across the die? Moving the data is already 5 times as costly, and by 10nm it will be 17x as costly (not to mention hundreds of times as costly to move it off chip).

Science needs a 1000x increase in computing power, but without requiring a power station to provide the power and remove the heat. He ended by talking about the Department of Energy program to build a 1000-petaflop computer (1 exaflop) consuming "only" 20MW. By comparison, we are currently at 2 petaflops consuming 6MW, so that would be a 500x increase in speed for roughly a tripling of power. The entire talk was recorded on video and is synchronized with the slides; click on the thumbnail to get a graphic that is large enough to read the details.


HSPICE – I Didn't Know That About IC Circuit Simulation
by Daniel Payne on 12-05-2011 at 11:14 am

HSPICE is over 30 years old, which is a testament to how solid the circuit simulator is and how widely it is used. To stay competitive, the HSPICE developers have to innovate or the product will slowly lose ground to the many other simulator choices. I listened to the webinar last week to find out what is new with HSPICE.

Szekit Chan was the webinar presenter and his title is HSPICE Staff Corporate AE at Synopsys.

User-specified options can be carried in your netlists, but first look at your .st0 file to see what all of the settings are. You can probably just remove these settings, because the default values are now suitable for the vast majority of IC designs. I didn't know that you could safely remove the .options lines; they were usually set by some expert, and we were told, "Don't ever touch these settings or you will get the wrong results."

The .lis file shows how much time each phase takes: operating point, transient analysis, and so on.

Use a single option instead of all those individual options: .option RUNLVL = 1 | 2 | 3 | 4 | 5 | 6

1 – Fastest
3 – Default
6 – Most accurate

This approach of simplifying the speed-versus-accuracy tradeoff to a single number reminds me of exactly what HSIM, the hierarchical Fast SPICE tool, uses. It certainly lowers the learning curve for a tool and doesn't require an expert to experiment with arcane tool settings.

Remove the old convergence options; just let the built-in auto-convergence do its job instead.
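
As a rough illustration of that cleanup advice, here is a minimal Python sketch that flags .option lines in a netlist so stale legacy settings can be reviewed and removed. The keep-list below is an assumption for the example, not an authoritative list of current HSPICE options; check the HSPICE manual for your release before deleting anything.

```python
# Illustrative helper: list .option settings in a netlist so that stale legacy
# tweaks can be reviewed and removed. The KEEP set is an assumption for this
# example, not an authoritative list of current HSPICE options.

import re
import sys

KEEP = {"runlvl", "sim_la"}  # modern knobs mentioned in the webinar


def options_to_review(netlist_path):
    """Yield (line_number, option_names, line) for .option lines not in KEEP."""
    opt_re = re.compile(r"^\s*\.options?\s+(.*)", re.IGNORECASE)
    with open(netlist_path) as f:
        for lineno, line in enumerate(f, 1):
            m = opt_re.match(line)
            if not m:
                continue
            # Normalize "NAME = value" to "NAME=value" before splitting.
            stmt = re.sub(r"\s*=\s*", "=", m.group(1))
            names = {tok.split("=")[0].lower() for tok in stmt.split()}
            stale = sorted(names - KEEP)
            if stale:
                yield lineno, stale, line.rstrip()


if __name__ == "__main__":
    for lineno, stale, line in options_to_review(sys.argv[1]):
        print(f"line {lineno}: review {', '.join(stale)} -> {line}")
```

This is only a triage aid; whether a given option is still needed depends on the design and the HSPICE release.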

If you have extracted netlists with thousands or millions of RC elements, then consider using the new RC reduction: .option SIM_LA=PACT. This also works on files like SPF and SPEF.

Avoid probing all signals with v(*); be more selective instead. If you use v(*), you tend to fill up the disk with too much data.

Want Results Faster?
Try Multi-core with HSPICE Precision Parallel (HPP) technology. One license uses two threads.

For Monte Carlo or corner runs, use a compute farm. Use the "-dp" switch for distributed processing; both SGE and LSF are supported.
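
As a sketch of what a per-corner launch loop might look like, the Python below runs one HSPICE job per corner netlist using multithreading. The netlist names and thread count are made up for the example, and the distributed-processing (-dp) submission to SGE or LSF is left to the scheduler setup described in the HSPICE documentation.

```python
# Minimal sketch: launch one HSPICE run per corner netlist.
# File names and the "-mt 4" thread count are assumptions for illustration.

import subprocess
from pathlib import Path

CORNERS = ["tt_25c.sp", "ss_125c.sp", "ff_m40c.sp"]  # hypothetical corner decks

for netlist in CORNERS:
    prefix = Path(netlist).stem
    # hspice -i <netlist> -o <output prefix>; -mt enables multithreading.
    cmd = ["hspice", "-i", netlist, "-o", prefix, "-mt", "4"]
    print("running:", " ".join(cmd))
    subprocess.run(cmd, check=True)
```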

He also covered some other options to speed up run times, and summarized the ways you can reduce long circuit simulation run times using command-line options or netlist statements.

During the Q&A time:

Q: Does RC reduction only apply to post-layout?
A: No, you can use it for both pre and post-layout.

Q: Any limitation on circuit size for HPP?
A: Not really, we’ve seen up to millions of elements.

Summary
The single biggest thing that I learned was that speed versus accuracy is now a simple integer in HSPICE from 1 to 6, so goodbye to the old way of tweaking .options for every different netlist topology. Using up to 8 cores to simulate a design looks very efficient, returning speed improvements of up to 7.43x versus a single core.

Since the Magma acquisition was announced on Friday, after the webinar on Wednesday, I've been thinking about the product overlap in the SPICE and Fast SPICE categories:

Product       Synopsys       Magma
SPICE         HSPICE         FineSim SPICE
Fast SPICE    CustomSim      FineSim Pro

HSPICE has a much larger installed base because of its age; however, according to some, FineSim SPICE offers better speed than HSPICE.

On the Fast SPICE side, CustomSim offers hierarchical simulation and co-simulation with the Verilog simulator VCS, while FineSim Pro is a flat simulator with less efficient co-simulation through the Verilog API. If you have a hierarchical design or need Verilog co-simulation, I'd use CustomSim.

It will be interesting to learn how Synopsys sorts out the new product roadmap in SPICE and Fast SPICE; hopefully by DAC they will have a story to tell us that makes sense. Possible scenarios are:
  1. HSPICE users can swap for FineSim SPICE
  2. FineSim SPICE users can swap for HSPICE
  3. FineSim Pro users can upgrade to CustomSim
  4. HSPICE and FineSim SPICE are merged and given a new name
  5. FineSim Pro and CustomSim are merged and given a new name

We've started a lively discussion on the Magma acquisition here on the forums.

There's also a Wiki page listing all known SPICE and Fast SPICE tools.


Taiwan Trip Report: Semiconductors, EDA, and the ASIC Business!
by Daniel Nenni on 12-04-2011 at 7:00 pm

Just returning from my monthly trip to Taiwan, I find myself energized! Semiconductors, EDA, and the ASIC business have never been more exciting! The travel itself is not so exciting, but since I make frequent trips the airline and hotel treat me like a king. And let me tell you, it is good to be a king!

Speaking of royalty, I saw Dr. Morris Chang at the Royal Hsinchu Hotel on Wednesday night. He made it up the two flights of lobby stairs faster than I did! It truly would be an honor to write his autobiography; let's hope someone does it soon.

Speaking of autobiographies, the new book on Steve Jobs is a great read. He not only transformed the computer industry, the music industry, and mass media; most importantly, Steve Jobs transformed the semiconductor industry. Where would we, as semiconductor professionals, be without the iMac, iPod, iPhone, iPad, and Apple TV? If you have any doubts about the future of the semiconductor industry, read the book. It is one of the best books on innovation I have ever read. It also gives you an intimate look at who Steve Jobs really was, and yes, I read it on an iPad 2.

The big news in EDA last week is the $500M+ acquisition of Magma by Synopsys. There is a spirited discussion on the SemiWiki forum here. Please add your thoughts when you get a chance; it's important. Communication is the key to success in any industry, even more so for EDA.

I asked pretty much everybody I met with in Taiwan last week what they thought about the acquisition, which was one of the hot topics of the trip. The most common theme of EDA discussions is why our industry is a mere crumb of the total semiconductor pie. During a lunch conversation last week I proposed that ALL of the EDA software licenses be deactivated for one month so the semiconductor industry better appreciates EDA. With the recent acquisitions and lack of capital investment in new EDA companies, that scenario is now much more plausible!

Spending the day with the Global Unichip (GUC) team was the highlight of my week. As you may have read, GUC announced itself as the "Flexible ASIC Leader," taking direct aim at the traditional ASIC market led by the likes of IBM, ST Micro, TI, Renesas, and Samsung. GUC HQ is directly across the street from TSMC Fab 12, so I literally walked there.

After three very technical presentations on IP Solutions, High Speed/Low Power ARM Design, and SiP and 3D IC Services, a flash of marketing genius ran through my head. Above and beyond the technical elegance GUC offers the system houses around the world, GUC is selling insurance: insurance that your leading-edge SoC will arrive on time, within specifications, and at the expected cost (first-silicon success). GUC is really selling a NO RISK SoC SOLUTION.

Dinner with Jim Lai, President of GUC, and his team highlighted the point. It was an elegant Japanese dinner with French wine and the best service you could ask for. And the cost was much less than one would have expected. It is good to be a king in Taiwan, even for just a week!



Interoperability Forum
by Paul McLellan on 12-03-2011 at 3:19 pm


Earlier this week I went to the Synopsys Interoperability Forum. The big news of the day turned out to be Synopsys wanting to be more than interoperable with Magma, but that only got announced after we'd all gone away.

Philippe Magarshack of ST opened, reviewing his slides from a presentation at the same forum 10 years earlier. Back then, as was fashionable, he had his "design productivity gap" slide showing silicon complexity increasing at a 58% CAGR while design productivity increased at only a 21% CAGR. The things he was looking to in order to close the gap were system-level design, hardware-software co-design, IP reuse, and improvements to the RTL-to-layout flow and to analog IP.

We've made a lot of progress in those areas but, of course, you could put up almost the same list today. ST has been a leader in driving SystemC and virtual platforms and now has over 1000 users. But the platforms still suffer from a lack of standardization for modeling interrupts, specifying address maps, and other things.

One specific example that he went over was a set-top-box (DVR) chip called "Orly" that can do four simultaneous HD video decodes on a single chip. The software was up, with all their demos running, just 5 weeks after they received silicon.

Next up was John Goodenough of ARM, who also took the slides from a presentation by his boss (which he says he actually put together) and compared them to the situation today. "Everything has changed but nothing has changed" was the theme. Ten years ago they needed to simulate 3B clocks to validate a processor. Now it is two orders of magnitude bigger, doing deep soak validation on models, on FPGA prototypes and, eventually, on silicon. Back then they had 350 engineers; now 1400. They had 250 CPUs for validation; now they have tens of thousands of job slots for simulation, multiple tens of thousands of CPUs, multiple emulators, and FPGA prototype farms.

Jim Ho talked about standards living for a long time (as I did recently, but with different examples). He started from Roman roads and how the railway gauge came from that and so, in turn, the space shuttle boosters that had to travel by rail. That US railways are the same gauge as UK ones, 4′ 8.5″, is not surprising, since the first railroads were built to run British locomotives. But the original gauge in Britain was based on the gauge used in the coal mines, which was 4′ 8″ (arrived at by starting from 5′ and using rails 2″ wide). As with the Romans, they were choosing a width that worked well behind a horse, although there is no evidence that the Roman gauge was copied; in fact, in Pompeii the ruts are 4′ 9″ apart. And as for the space shuttle booster, it doesn't depend on the track gauge but on the loading gauge (how big a wagon can be and still clear bridges and tunnels). The US loading gauge is very large and the UK one is very small, which is why US trains, and even French ones, cannot run on UK rails despite the rails being the same distance apart.

Mark Templeton, who used to be CEO of Artisan before it was acquired by ARM, talked about making money. In almost all markets there is a leader who makes a lot of profit, a #2 who makes some, and pretty much everyone else makes no money and struggles to invest enough to keep up. So it's really important to be #1. He talked about going to a conference where John Bourgoin of MIPS presented and went into the many neat technical details of the MIPS architecture. Robin Saxby of ARM, at the time about the same size as MIPS, presented and talked not about processors but about the environment of partners they had built up: silicon licensees, software partners, modeling and EDA partners, and so on. For Mark it was a revelation that winning occurs through interoperation with partners. Today MIPS has a market cap of $300M and ARM is $11.7B.

Michael Keating talked about power ("just in time clocking, just enough voltage") and how, over the last few years, CPF and UPF (why do we need two standards?) have improved the flows so that features like multi-voltage regions, power-down, and DVFS are usable. But power remains the big issue if we are going to be able to use all the transistors that we can manufacture on a chip.

Shay Gal-On talked about multi-core, and especially programming multi-core. I remain a skeptic that high core count multi-core chips can be programmed for most tasks. They work well for internet servers (just put one request on each core) and for some types of algorithms (a lot of Photoshop stuff, for instance) but not for most things. Verilog simulation, placement, and the like all seem to fall off very fast in their ability to make use of cores. The semiconductor industry is delivering multi-core as the only way to control power, but making one big computer out of a lot of little ones has been a research project for 40 years. He had lots of evidence showing just how hard it is: algorithms that slow down as you add more cores, different algorithms that cap out at different core counts, and so on. But it's happening anyway.

And don't forget Coore's Law: the number of cores on a chip is increasing exponentially with process generation; it's just not obvious yet since we are on the flat part of the curve.

Shishpal Rawat talked about the evolution of standards organizations. There are lots of standards organizations. Some of them are merging. There will still be standards organizations in... I'm afraid it was a bit like attending a standards organization meeting.


Microsoft's New Tablet Strategy: Here, There and Everywhere
by Ed McKernan on 12-03-2011 at 10:33 am

As mentioned in a previous post, Microsoft has started to come clean on its software strategy as it relates to Windows 8 for PCs and tablets. The strategy has been changing quite rapidly since their first admission in September. Essentially, the Windows 8 O/S will be forked based on whether the mobile device runs on an x86 or an ARM processor, with legacy apps only being supported on Intel processors.

The change in Microsoft's strategy, I believe, is based as much on economics and profits as on the difficulty of porting apps to a new processor architecture. Intel and Microsoft are in the process of stringing along the world's longest divorce proceedings, because every now and then they wake up to see that they still need each other to remain heavily profitable. The common thread, or customer base, is the corporate world, which fears any breakup. In addition, the HP drama that unfolded this summer shows what happens when a big company cuts off its right arm.

Microsoft realizes that there can be two Windows 8 operating systems, and perhaps three. The consumer market, both PCs and tablets, is growing at an incredible rate as costs continue to drop. Emerging PC markets, as shown in every Intel presentation to analysts, are growing at a mid-teens rate because notebook prices continue to drop, making them more affordable for new purchasers. On the flip side, Apple has carved out a leading position in tablets and in greater-than-$1000 notebook PCs in retail. Microsoft will be under pressure to shave O/S prices in the consumer space to blunt Apple's charge.

When Meg Whitman rejoined HP, the immediate strategic recognition was that they would not catch Apple in the consumer tablet market and that they needed to create a firewall in the corporate market where, outside of printers, most of the revenue and profits are derived. Microsoft and Intel have a similar business model, where most of their profits come from corporate. If Microsoft issued one O/S across the board, they would be leaving a huge pile of corporate profits on the floor. In addition, Microsoft would not be able to follow the traditional model of upselling the security blanket. The right strategy called for Intel and Microsoft to join hands with HP and Dell to issue a corporate tablet that costs a few hundred dollars more than a consumer model but retains the x86 processor and full Windows compatibility.

These corporate tablets, arriving in 2H 2012, will in reality be very similar in hardware to the ultrabook platform being pushed by Intel. The key difference is that they will have a smaller, lower-power LCD and likely a slower, lower-power, and lower-cost Intel processor. The cost will end up being less than a similar ultrabook but closer in range to an iPad. However, unlike the consumer market, which demands that non-iPad tablets be in the $199 – $299 range, these tablets will find a home in the corporate world at prices closer to $500.

If Microsoft is not successful with the new corporate tablet model, then they run the risk of losing O/S and Office business to Apple in corporate PCs. It is still too early to tell how fast Apple will be able to penetrate the corporate world over the next few years. However, to make sure they have all the bases covered, Microsoft, according to this article, appears to be working on an Office suite for the iPad. There is no mention of whether this is specifically for ARM processors or whether it will be independent of the underlying hardware, and no mention of backward compatibility.

As the tectonic plates shift in the PC and tablet industry, one can see that, for all the changes, there is still a lot of money to be tapped in the corporate market. Existing suppliers like Dell and HP are trying desperately to hold onto hardware sales as they increase services and cloud-based computing. Apple has decisions to make as well. Corporate buyers are used to buying on 3-5 year cycles and will overbuy in terms of PC performance to get the best ROI over the life of the PC. Apple may decide that they will have to ramp the performance of their next-version ARM processor (A6) dramatically so as to stay within sight of Intel's 22nm tablet processor that arrives with Windows 8 next year. 2012 should be very interesting, indeed.

FULL DISCLOSURE: I am Long AAPL and INTC