
HP, Palm, tablets, PCs, smartphones
by Paul McLellan on 08-19-2011 at 2:54 pm

Hewlett-Packard purchased Palm last year for over a billion dollars, primarily to get its hands on the WebOS operating system to power its tablets and smartphones. It has turned out to be much too little, much too late. Despite WebOS being a new operating system with many attractive features, HP's tablet offering, the TouchPad, has been a major bust, selling in the hundreds and leaving major retailers complaining about their inventory and wanting HP to take it back. So HP announced yesterday that it is getting out of the tablet and smartphone business. WebOS may yet find a home; the most likely buyer would be someone who is currently betting on Android and worried that, now that Google has to make real money on Android to justify its $12.5B acquisition of Motorola Mobility, it should hedge that bet. But it will be an expensive hedge. I don't know why HP expected to be a big hit out of the gate with its WebOS strategy, and if it didn't have the stomach for a marathon and thought it was a sprint, I don't know why it bothered to get into the business in the first place.

HP, the largest PC manufacturer in the world, also announced that it may get out of PCs, presumably in the same way IBM did: by finding a home for the division in a company that is better geared to producing consumer products.

HP is also buying Autonomy, the largest software company in the UK, for around $10B, positioning itself more in services and servers and competing head to head with IBM and Oracle. Of course CEO Leo Apotheker would probably prefer to buy his old company SAP, but he can't afford it since it is worth as much as HP.

Analysts didn't like any of it, and many downgraded HP; as a result the stock is down 20%, destroying $12B or so of market cap. So forget SAP being worth as much as HP: it is now worth $10B more.

So what a story! The big fight by Carly Fiorina (against Walter Hewlett, Bill's son) to buy Compaq. Oh yes, and people's phones being bugged. Out she goes. In comes Mark Hurd. Weird sexual shenanigans and out he goes (and pops up at Oracle). In comes Leo Apotheker (whose prior experience was all in running software businesses such as SAP, and who for a time was in hiding to avoid being subpoenaed in a lawsuit with Oracle). I wonder how long he'll last.


Top 5 Reasons for Wasting Power
by Paul McLellan on 08-19-2011 at 2:27 pm

Traditionally, David Letterman style, we should really have the top 10 reasons for wasting power in semiconductor design, but here are the five big ones.

Starting with reason #5: Lack of a power gating strategy
Leakage power is a huge proportion of total power, and the only way to save leakage power (apart from using low-leakage cells where possible) is to turn off the power. Of course this doesn't just save leakage power, it saves dynamic power too: your cell-phone battery wouldn't last very long if the transmit/receive logic were kept powered up all the time, even when you weren't making a call. This is not something that can easily be automated. The design needs to be partitioned into power regions, and control signals created (usually under software control) to handle power-down and restore (and to retain register values if necessary). CPF and UPF devote a lot of their specifications to making sure the boundaries of blocks like this are correctly handled.
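To make the sequencing concrete, here is a minimal sketch of the kind of power-gating controller the paragraph describes, written in Verilog with hypothetical signal names. A real controller would wait for acknowledge/power-good handshakes between steps rather than advancing every cycle, and the domain boundaries, isolation cells and retention registers themselves would be specified in CPF or UPF.

    // Hypothetical power-gating controller for one switchable power domain.
    // Power-down order: stop the clock -> save state into retention
    // registers -> isolate outputs -> open the power switch.
    // Power-up reverses the order.
    module pg_ctrl (
        input  wire clk,        // always-on clock
        input  wire rst_n,      // always-on reset
        input  wire sleep_req,  // request, e.g. from a software register
        output reg  clk_en,     // enables the domain clock
        output reg  save,       // save into retention latches
        output reg  restore,    // restore from retention latches
        output reg  iso_en,     // clamp domain outputs to safe values
        output reg  pwr_en      // drives the power-switch chain
    );
        localparam ON = 3'd0, SAVE = 3'd1, ISO = 3'd2, OFF = 3'd3,
                   WAKE = 3'd4, RESTORE = 3'd5;
        reg [2:0] state;

        always @(posedge clk or negedge rst_n) begin
            if (!rst_n) begin
                state   <= ON;
                clk_en  <= 1'b1;
                pwr_en  <= 1'b1;
                save    <= 1'b0;
                restore <= 1'b0;
                iso_en  <= 1'b0;
            end else begin
                case (state)
                    ON:      if (sleep_req) begin
                                 clk_en <= 1'b0;   // quiesce the domain first
                                 save   <= 1'b1;
                                 state  <= SAVE;
                             end
                    SAVE:    begin save <= 1'b0; iso_en <= 1'b1; state <= ISO; end
                    ISO:     begin pwr_en <= 1'b0; state <= OFF; end
                    OFF:     if (!sleep_req) begin pwr_en <= 1'b1; state <= WAKE; end
                    WAKE:    begin restore <= 1'b1; state <= RESTORE; end // power assumed good
                    RESTORE: begin restore <= 1'b0; iso_en <= 1'b0;
                                   clk_en <= 1'b1; state <= ON; end
                    default: state <= ON;
                endcase
            end
        end
    endmodule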

Reason #4: Poor local register enable conditions
Synthesis tools will replace recirculating muxes with clock gates. But often a register can be gated much more of the time, because either the value in the register will never be used, or it is clear from some other aspect of the design that the value will not change. In both cases power is saved by gating the clock to the register. As always, the easiest way to waste power is to do work that doesn't need to be done.
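As a minimal illustration (hypothetical Verilog, with made-up signal names): synthesis already infers a clock gate from the functional enable, but a designer-supplied condition that the value cannot be consumed lets the clock stop far more often.

    // Reason #4 in miniature. Synthesis turns 'if (en)' into an integrated
    // clock-gating (ICG) cell; the win comes from strengthening the enable
    // with design knowledge ('unit_active' here: when low, the value is
    // never consumed downstream, so the register need not be clocked).
    module gated_reg #(parameter W = 32) (
        input  wire         clk,
        input  wire         load,         // functional enable
        input  wire         unit_active,  // design knowledge: unused when low
        input  wire [W-1:0] d,
        output reg  [W-1:0] q
    );
        wire en = load & unit_active;     // stronger, less frequent enable

        always @(posedge clk)
            if (en)                       // maps to an ICG cell in synthesis
                q <= d;
    endmodule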

Reason #3: Inefficient design architecture
It is widely known that tradeoffs made at higher levels of abstraction have the largest impact on performance, power and area. Choosing the number of pipeline stages in a datapath, for example, can have a major impact on power. Having one part of a block force the clock frequency higher than the rest of the block requires can waste a lot of power. Almost any aspect of memory organization (size, number, type) has a big impact on power.
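To put a number on the clock-frequency point, here is the standard dynamic power relation (a textbook formula, not taken from the original post):

    \[
    P_{\mathrm{dyn}} = \alpha \, C \, V_{dd}^{2} \, f
    \]

where \(\alpha\) is the switching activity, \(C\) the switched capacitance, \(V_{dd}\) the supply voltage and \(f\) the clock frequency. Dynamic power is linear in \(f\) at a fixed voltage, so clocking an entire block at twice the frequency that only one small piece of it needs roughly doubles the block's dynamic power; and if the higher frequency also forces a higher \(V_{dd}\), the quadratic voltage term makes the waste worse still.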

Reason #2: Inefficient design implementation
This is a combination of user problems and tool problems. There are many suboptimal ways to implement things, such as making high-frequency nets longer than necessary (and thus giving them excess capacitance). Excessively tight timing constraints during synthesis can cause higher-power cells than necessary to be selected. There is almost always a tradeoff between performance and power, and demanding unnecessarily high performance, or specifying tighter constraints than the design really requires, wastes power.

And, drum roll please, the top reason for wasting power: Missed global clock gating opportunities
Local register-level clock gating has been automated in synthesis tools (replacing recirculating muxes with a clock gate). But there are more opportunities than this, although they require that you understand the design intent and thus know when clocks must run and when they can be stopped. For example, redundant memory reads and writes (reading the same address again, or writing the same data to the same address) are huge wastes of power.
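A minimal sketch of the redundant-access example, in Verilog with hypothetical names, assuming a synchronous RAM with separate read and write enables that holds its read output when not enabled (as most compiled SRAMs do). Suppressing the enable saves the clock, decode and bitline power of the repeated access:

    // Skip a write that repeats the previous address+data, and skip a read
    // that repeats the previous address (the RAM output still holds the data).
    module mem_access_filter #(parameter AW = 10, DW = 32) (
        input  wire          clk,
        input  wire          rst_n,
        input  wire          req_we,
        input  wire          req_re,
        input  wire [AW-1:0] addr,
        input  wire [DW-1:0] wdata,
        output wire          mem_we,   // gated write enable to the RAM
        output wire          mem_re    // gated read enable to the RAM
    );
        reg [AW-1:0] last_waddr, last_raddr;
        reg [DW-1:0] last_wdata;
        reg          wvalid, rvalid;

        assign mem_we = req_we &&
                        !(wvalid && addr == last_waddr && wdata == last_wdata);
        assign mem_re = req_re && !(rvalid && addr == last_raddr);

        always @(posedge clk or negedge rst_n) begin
            if (!rst_n) begin
                wvalid <= 1'b0;
                rvalid <= 1'b0;
            end else begin
                if (mem_we) begin
                    last_waddr <= addr;
                    last_wdata <= wdata;
                    wvalid     <= 1'b1;
                    if (addr == last_raddr)  // write may change held read data
                        rvalid <= 1'b0;
                end
                if (mem_re) begin
                    last_raddr <= addr;
                    rvalid     <= 1'b1;
                end
            end
        end
    endmodule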

See Will Ruby’s more extensive discussion of these issues here.


Design Constraints
by Paul McLellan on 08-19-2011 at 2:12 pm

Design constraints, which express higher-level design intent, are among the pieces of ancillary data that are critical to the success or failure of a custom (in fact, any) design. Constraints aren't usually contained in layout files or library information, but without this data a design may not meet its specifications.

Today, most custom design teams manage constraints in an ad hoc, manual fashion. This has become a significant limitation when it comes to automating the custom design process, which in turn limits both design productivity and accuracy. Moving forward, the custom design community is starting to look to standards efforts to ease the burden of correlating and communicating design constraints throughout the design process.

Pulsic is committed to increasing interoperability in the EDA community, and so contributed its recommendation on custom design constraints to the IPL Alliance (www.iplnow.com) in August 2010. Pulsic is a member of the IPL working group creating an industry standard for custom design constraints, and is making its recommendation available for review and comment here under a click-through license agreement, proactively acting as a conduit for additional user feedback into the IPL standardization process. Please continue to check the IPL Alliance website for availability of a published standard on custom design constraints.

The white paper on design constraints is here.

Pulsic’s constraint recommendation is here.


Intel's Back to the Future Buy of Micron
by Ed McKernan on 08-19-2011 at 5:14 am


In an interview he gave in early 2000, Intel co-founder Gordon Moore recounted how the company abandoned the DRAM market in the early 1980s to exit an increasingly unprofitable business and focus on the promising, yet still young, x86 processor market. Intel was also home to EEPROM and NOR flash, two memory technologies that found their way into the embedded markets. Now an opportunity arises for Intel to jump back into memory with both feet by buying Micron. What would be the logic of Intel going Back to the Future?

I believe part of the answer can be found in the same interview. At one point the interviewer asked about the raging battle between CISC and RISC at the end of the 1980s and early 1990s. CISC stands for Complex Instruction Set Computing (i.e. Intel x86), while RISC stands for Reduced Instruction Set Computing (e.g. SPARC, MIPS, ARM). Moore said the reason RISC was widely embraced as the architecture of the future is that memory speeds caught up and matched processor speeds. The net result was that with RISC you could do a lot of simple operations going to memory very rapidly, whereas CISC tried to do a lot of the work on the chip. The CISC chip was bigger, but given Intel's high volume it did not turn out to be a cost penalty. Intel also thrived on the PC software ecosystem tied to x86.

Today Intel grapples with the opposite problem: memory is much slower than the processor, so it needs to develop new ways to closely couple DRAM and flash to its multi-core processors. Intel can generate extra value in the server space if it can demonstrate a total solution that improves performance per watt. In the consumer space, Intel may look to new packaging technologies to shrink the footprint in the rapidly evolving tablet and ultrabook space.

All these possibilities are bolstered by the state of the semiconductor industry and the way Wall St. currently values Intel. Intel trades at a P/E of less than 9, like an early-1980s Detroit auto company unsure whether survival is possible. The difference is that Intel is investing nearly $11B in CapEx, buying back $10B of stock this year, and paying over $4B in dividends. It is gushing with cash flow that has to be put to good use. All attempts to win over Wall St. have failed because of the fear that the PC market will collapse overnight, and HP spinning out its PC business adds to the climate of fear.

Whether PCs grow 10% or decline 10%, Intel has to continue moving forward with its Barbed Wire Fence strategy (see Intel's Barbed Wire Fence Strategy) in order to diminish competitors and increase its ownership of the platform $$$. This could be the final consolidation phase of the market, similar to how IBM eliminated the seven dwarfs in the 1970s and 1980s. Intel was able to increase its platform ASP over the past 12 months with the integration of graphics and the shift from desktops to mobile. If the PC market turns down, the pressure should be felt by nVidia and AMD first.

Assuming Intel goes ahead with a purchase of Micron, there has to be a manufacturing angle as well. A semiconductor industry analyst pointed out to me that he thought Samsung was in the lead in getting to 450mm, with Intel right behind. Samsung's first choice for 450mm is flash memory, in an attempt to separate itself from Toshiba, SanDisk and Micron. Intel may view NAND as a strategic asset that cannot be allowed to be dominated by Samsung; otherwise it becomes a thorn in Intel's platform side. With a Micron acquisition, Intel would have two drivers for 450mm: NAND flash and x86 processors.

The obvious conclusion is that Intel's future PC and server platforms will consist predominantly of x86 and SSDs (with DRAM and HDDs as minor commodity components), and that Intel does not intend to split the platform $$$ with Samsung or ARM.


Aug 25th in Fremont, CA – Hands on Calibre workshop: DRC, LVS, xRC, ERC, DFM
by Daniel Payne on 08-18-2011 at 10:30 am

I’ve blogged about the Calibre family of IC design tools before:

Smart Fill replaced Dummy Fill Approach in a DFM Flow
DRC Wiki
Graphical DRC vs Text-based DRC
Getting Real time Calibre DRC Results with Custom IC Editing
Transistor-level Electrical Rule Checking
Who Needs a 3D Field Solver for IC Design?
Prevention is Better than Cure: DRC/DFM Inside of P&R
Getting to the 32nm/28nm Common Platform node with Mentor IC Tools

If you want some hands-on time with the Calibre tools then consider attending the August 25th workshop in Fremont, CA.


MUSIC in Bangalore
by Paul McLellan on 08-17-2011 at 7:18 pm

When you think of Indian music you might think of ragas for the sitar. But Indian MUSIC is something else: the Magma user group meeting (Magma Users Summit for Integrated Circuits), coming up on September 7th in Bangalore (note: the date has changed from when it was originally announced). It is at the Vivanta by Taj on M G Road.

There is a guest keynote by Balajee Sowrirajan of Texas Instruments' OMAP business unit, "Trends and challenges in designing wireless application processors: what is the need of the day?", at 9.30am.

The second keynote is at 12.35pm, just before lunch, by Rajeev Madhavan, Magma's CEO.

There are other presentations by TI, Qualcomm, ARM, Netlogic, Microchip and Silicon One.

More information, including the complete agenda, is here.

To register, go here.


Fast Track your SoC Design
by Paul McLellan on 08-17-2011 at 5:24 pm

Atrenta has four seminars coming up on SoC realization. More and more, design is actually about finding IP and integrating it at the block level, then handing the result off to a standard RTL-to-GDSII flow. The three focus areas are:

  • finding quality IP faster
  • accelerating IP integration and SoC assembly
  • handing off RTL successfully.

The seminars are at:

To register, click on the city name above for the location you want to attend. And if you fill out the survey at the end of the event, you could win an iPad.


ANSYS Regional Conference
by Paul McLellan on 08-17-2011 at 3:15 pm

Next Tuesday, August 23rd, is the ANSYS Regional Conference for Silicon Valley. It takes place at the Techmart Network Meeting Center. Apache has three presentations during the day:

  • 9.25-9.45: Andrew Yang, Introducing Apache Design Solutions
  • 11.00-11.30: Methodology for delivering power-efficient designs from concept to silicon
  • 1.30-2.00: Utilizing chip macro modeling for chip-package-system simulation

The conference is free. A detailed agenda for the whole day is here.

Register here.

Details on ANSYS regional conferences for other regions are here.


Mr. TTL's Future is Analog: Time to Sell OMAP to Broadcom
by Ed McKernan on 08-17-2011 at 1:00 am

Mr. TTL (otherwise known as Texas Instruments, or TI) has had a great run in the cellular market, but it is time to decamp. The future is Analog, and OMAP must depart to one of the remaining players looking to win the smartphone and tablet market. TI is exiting cellular so it can focus on the high-volume analog business.

At first sight, the Google acquisition of Motorola would seem to bolster TI, since Motorola is a big user of OMAP parts. The new OMAP 5 architecture looks very impressive in terms of increased performance at reduced power over its Cortex-A9-based competitors, and it is the basis for Ice Cream Sandwich, the next version of Google's Android OS. So why leave the party just as things are looking up? The short answer is that Analog can be a 70% gross margin business, while the ARM processor market is headed for major price compression.

During the last downturn, Rich Templeton decided that enough was enough. TI, a company that prides itself on always having a big fab in Dallas, decided to move forward on building and outfitting the world's first 300mm Analog fab. TI would build in volume and sell at competitive pricing that still left lots of room for margin. The acquisition of National, its low-cost competitor, completed the plan. Think of TI as being where Walmart would be if it acquired Target and Costco. Of course, if you aren't the Walmart or TI type of shopper, you can always go to Linear Tech for your high-priced, custom analog.

To fill the big 300mm fab and generate high growth, Templeton has to get rid of OMAP. It is, in the eyes of Intel and Apple, a competitor, and if TI wants to own the Analog world it must not compete with the big processor players. You could say the same about nVidia, Qualcomm, Broadcom and Marvell. A little history is in order here: back in 1997 Brian Halla had dreams of conquering the world with cheap PCs, so he bought Cyrix. Intel congratulated him by redesigning its reference boards sans National analog. A major migraine swept through the sales ranks at National's headquarters.

But who would want OMAP? The current darlings of the ARM world are Qualcomm and nVidia; Broadcom and Marvell are lagging, and AMD has yet to figure out that it needs an ARM core to marry with x86. But of the three, Broadcom has the cash, and its core businesses are doing exceptionally well. Wireless is a major strength; the weakness is in application processors. Without both a strong baseband and a strong application processor, the long-term outcome is iffy.

If Broadcom acquires OMAP, it gets a share of the Android business, including Motorola, alongside nVidia: a huge step up from where it is today. However, within all this, one gets the sense that more tremors are on the way, and the price to play the next round will go up another notch.



Captain Ahab Calls Out for the Merger of nVidia and AMD
by Ed McKernan on 08-16-2011 at 8:00 pm

Call me Ishmael. Some years ago, in the mid-1990s, having little or no money in my purse and nothing particular to interest me on shore, I thought I would sail the startup ship Cyrix and see the watery part of the PC world. Whenever I find myself grim about the mouth, or pausing before coffin warehouses, or bringing up the rear of every funeral I meet, I think back to the last words that came from Captain Ahab before the great Moby Dick took him under: "Boy, tell them to build me a bigger boat!"

You see, the great Moby Dick is not just any whale; it is the $55B great white sperm whale that has been harpooned many times and has taken many a Captain Ahab to the bottom of the ocean. It still lives out there, unassailable, despite the ramblings from the many new, shiny ARM boats docked at Nantucket Island, a favorite vacation spot of mine from my youth.

Perhaps a great whaling ship could be constructed out of the battered wood and sails of the H.M.S. nVidia and the H.M.S. AMD, because the alternative is that they go down separately. Patience wears thin at ATIC (Advanced Technology Investment Company), the Abu Dhabi investment firm that has poured billions of dollars into GlobalFoundries and AMD in the hope of being the long-term survivor of the increasingly costly Semiconductor Wars. To be successful, the combined company needs a fab driver larger than what nVidia and AMD represent separately.

Jen-Hsun Huang is the most successful CEO ever to challenge Intel in the PC ecosystem, and yet he is not strong enough to overcome the Moore's Law steamroller that naturally seeks to integrate all the functions of a PC into one chip. Both AMD and Intel have integrated chipsets and "good enough" graphics into their CPUs, limiting his leading revenue generator. He made a strategic move with Tegra to get out in front of the more mobile platforms known as smartphones and tablets, but they may not ramp fast enough for him to make it to the other side of the chasm.

AMD has pursued Intel forever, but is now without a leader who can stop the carnage of a strategy that seeks to be Intel's me-too kid brother. It bleeds with every CPU sold into the sub-$500 market. Lately Intel has been on allocation, giving AMD a profitable reprieve, but don't count on that lasting forever as Intel moves to the next node and adds more capacity.

There are huge short-term and long-term benefits should Jen-Hsun decide to merge with AMD. In the short term, nVidia and AMD are locked in a graphics price war in which the AMD sales guy tells the purchasing exec, "Whatever nVidia bids, mark me down for 10% less and see you at the golf links at 4 o'clock." They have lost key sockets in Apple's product line as well as at other vendors. Merging with AMD raises revenue and earnings in an instant, and the merged company would eliminate the duplicated graphics and operations groups.

Next, nVidia could implement the ARM+x86 multicore product strategy for the ultrabook market that I outlined in Will AMD Crash Intel's $300M Ultrabook Party?. The market offers high growth, high ASPs and good margins, and it is a close cousin of the tablet market that nVidia is already targeting with Tegra.

Third, nVidia has gained traction in the High Performance Computing (HPC) market with Tesla. But don't confuse HPC with data center servers: the data center runs x86 all the time. Intel has a $10B+ business there going to $20B in the next three years, and it is raising prices at will with no competition in sight. nVidia and AMD could team up to offer customers an alternative platform with performance and power tradeoffs between x86 and Tesla.

The icing on the cake is that this could all be financed by ATIC. Back in January, when Dirk Meyer was let go as CEO of AMD and the stock was at $9, I speculated to a semiconductor analyst that AMD would be bought when it went under $5. Why $5? It's psychological. The wherewithal to do this is in ATIC's hands, but they have little time to spare.

ATIC owns 15% of AMD and 87% of GlobalFoundries. Today nVidia is worth $8B and AMD is worth $4.2B. Combined they would be worth significantly more than $12B, because the graphics price war would end and the joint marketing and manufacturing operations would consolidate. It would be logical for ATIC to take a 20% ownership stake in nVidia and finance the rest of the purchase in any number of ways. Back in the DRAM downturn of the 1980s, IBM bought a 20% stake in Intel to guarantee that Intel would be around until the 386 hit the market.

Now that the ECB and the Fed have pushed interest rates toward 0% and have the printing presses running overtime, why wouldn't ATIC finance the new H.M.S. Take-No-Prisoners?