Apple makes 2/3 of profits of entire mobile industry
by Paul McLellan on 08-02-2011 at 5:41 pm

This is an amazing picture. Apple now makes 2/3 of all the profit in the entire mobile handset industry. And that is the entire handset industry, not just smartphones, where it has also blown past Nokia to become number one (although there are more Android handsets than iOS handsets, they are spread across multiple manufacturers, each of which makes far less profit per handset).

In 2007 Nokia made over half the profit. They also made about 1/3 of all the phones manufactured, around one million phones per day. Now they aren’t even on the chart (you need to be profitable to have a fraction of the total profits). Samsung and HTC make another quarter of the profit, with RIM bringing up the rear.

Another interesting statistic: so far this year Microsoft has made less than 1% of its revenue from mobile. So the people who point out that it makes more from patent licenses to Android manufacturers than from its own phone software are right. Of course it is hoping for a big increase when Nokia finally ships WP7 phones, but my own opinion is that Nokia is doomed by a mixture of Chinese competition at the low end and Apple/Android at the smartphone end. But I could be wrong: carriers are very political and don’t want to be held hostage to Apple or Google. Microsoft has weakened to the point that carriers may now want to embrace it as a “safe” partner that can be rolled over if necessary.




Has IP moved to Subsystem? Will IP-SoC 2011 bring answers?
by Eric Esteve on 08-02-2011 at 11:21 am

I shared the most interesting things I heard at IP-SoC 2010 in two blogs: Part I was about the IP market forecast (apparently my optimistic view was quite different from the rather pessimistic vision shared by semiconductor analysts), and Part II, named “System Level Mantra”, was strongly influenced by a clever Cadence presentation. But that was before Cadence decided to drop “EDA360”, at least according to Dan Nenni in “CDNS EDA360 is dead”.

Today it’s time to talk about the future, as the next session of IP-SoC will be held in December in Grenoble, in the French Alps. As it has been since 1998, the conference will be design-IP-centric, but the interesting question is where the IP industry now stands on the spectrum that starts at a single IP function and ends at a complete system. Nobody would claim that we have reached the upper end of that spectrum, where you could source a complete system from an IP vendor. The death of EDA360 is a clear illustration of this status. Maybe it is because the semiconductor industry is not ready to source a complete IP system (what would be the added value of the fabless companies if/when that occurs?), and most certainly because the IP vendors are far from being able to do it: it would require a strong understanding of a specific application and market segment, the associated technical know-how, and, even more difficult to meet, adequate funding to support up-front development while accepting the risk of missing the target… This is why an intermediate step may be to offer IP subsystems. According to D&R, who organize IP-SoC, the IP market is already there: “Over the year IPs have become Subsystems or Platforms and thus as a natural applicative extension IP-SoC will definitively include a strong Embedded Systems track addressing a continuous technical spectrum from IP to SoC to Embedded System.” So IP-SoC 2011 will no longer be IP-centric only, but IP-subsystem-centric!

It will be interesting to hear the different definitions of what exactly an IP subsystem is. If I offer a PCI Express controller with an AMBA AXI application interface, may I call it a subsystem? I don’t think so! Should I add another IP function (for example Snowbush offering PCI Express plus SATA) to call it a subsystem? Or should I consider the application first, and pick – or design – the different functions needed to support this specific application? Then, how do I market the CPU, the memories, and probably other IP which belong to my competitors? The answer is far from trivial, and this is what will make the next IP-SoC conference worth attending! You probably should not expect to come back home with a 100% definite answer (if anybody knows the solution, he should start a company a.s.a.p.), but you will have the chance to share the experience of people who have explored different tracks, and learn from them. If you are one of these people, then you definitely should submit a paper and share your experience on how to design or market IP subsystems! See the “Important dates” below:

• Deadline for submission of paper summary: September 18, 2011
• Notification of acceptance: October 15, 2011
• Final version of the manuscript: November 6, 2011
• Working conference: December 7-8, 2011

If you are not yet involved in IP subsystems but are in IP design or marketing, don’t worry, as the “Areas of Interest” list is pretty long:

IP Best practice
• Business models
• IP Exchange, reuse practice and design for reuse
• IP standards & reuse
• Collaborative IP based design

Design
• DFM and process variability in IP design
• IP / SoC physical implementation
• IP design and IP packaging for Integration
• IP and system configurability
• IP platform and Network on Chip

Quality and verification
• IP / SoC verification and prototyping
• IP / SoC quality assurance

Architecture and System
• IP / SOC transaction level modeling
• Multi-processor platforms
• HW/SW integration
• System-level analysis
• System-level virtual prototyping
• NoC-based Architecture

Embedded Software
• Software requirements (timeliness, reactivity)
• Computational Models
• Compilation and code generation

Real-Time and Fault Tolerant Systems
• Real-time or Embedded Computing Platforms
• Real-Time resource management and Scheduling
• Real-time Operating system
• Support for QoS
• Real-time system modeling and analysis
• Energy-aware real-time systems

If you just want to attend, register here and send me a note; it will be a pleasure to meet you there!

By Eric Esteve from IPnest


PathFinder webinar: Full-chip ESD Integrity and Macro-level Dynamic ESD
by Paul McLellan on 08-01-2011 at 10:00 am

The PathFinder webinar will be at 11am Pacific time on Thursday 4th August. It will be conducted by Karthik Srinivasan, Senior Applications Engineer at Apache Design Solutions. Mr. Srinivasan has over four years of experience in the EDA industry, focusing on die, system, and cross-domain analysis. His professional interests include power and signal integrity, reliability and low-power design. He holds an MSEE from the State University of New York, Buffalo.

The industry’s first comprehensive layout-based electrostatic discharge (ESD) integrity solution provides integrated modeling, extraction, and simulation capabilities to enable automated and exhaustive analysis of the entire IC, highlighting areas of weakness that can be susceptible to ESD-induced failure. PathFinder also delivers innovative transistor-level dynamic ESD capabilities for validation of I/Os, analog, and mixed-signal designs.

Register for the webinar here.


MCU Performance Customers: The Cavalry is Coming Over The Hill
by Ed McKernan on 07-31-2011 at 7:30 pm

The under-the-radar, sleepy microcontroller market is about to undergo a rapid transformation over the next several years, with new entrants and the rise of 32 bit cores that will redefine the parameters for success. This will revive growth and result in new winners and losers. But lots of questions remain.

My first job out of college in 1984 was programming an 8 bit 8051 for a telephone handset. It took months to finish a programming task in assembly language that I thought I could do in a 16 bit microcontroller in a week or so. I begged my boss to allow us to switch. He declined – we couldn’t afford tacking on a couple extra bucks per telephone. Translation: I was underpaid. I swore then that the 8051 would surely be gone in a couple years. Missed that prediction!

It’s still here more than 25 years later. The 8051 along with the other 8 bit controllers are a $5B market and I am now convinced they will never make it to the Smithsonian Museum of History.

What is new is that 32 bit controllers have been on a tear for the last two years. They’re finally taking off. This year 32 bit MCUs should do roughly the same revenue as 8 bit. The magic ASP number for market liftoff is around $1 per chip – unbelievable.

The tragedy of the earthquake and tsunami that struck Japan highlighted not only how fragile life is but also how fragile the world economy is. Renesas, the company most severely hit by the earthquake, has revenue of $9B, 40% of which is based on sales to automotive customers. That 40% dimmed the lights in Toyota, Honda and Nissan factories around the world. Think about it – a $9B company, single-sourced, levered into a $1T customer base. Imagine what the leverage of the entire $15B MCU market is and you see where I am going.

JIT (just in time) manufacturing was exposed and shown to be – at the extreme – very risky. Auto companies will demand 6-9 months instead of 30-60 days of inventory stored around the world. Second sourcing, the curse of the semiconductor industry in the 1970s and 1980s, will be asked for but declined. What then are the alternatives that the automakers and others will pursue?

Renesas is the big dog at 30% of the MCU market, and they face the supreme challenge of winnowing down the extended number of architectures that resulted from two large mergers: first Hitachi and Mitsubishi, and more recently NEC. In their effort to support all legacy products, they risk losing the future. And there is no talk yet of adopting ARM at 32 bits. They appear to feel safe with their current customers, but the high end of the market is pushing for more performance.

ARM is the new love interest of many microcontroller vendors at 32 bits. The argument is that very low power and high performance markets can each be targeted with different cores. Then there is the common programming platform, which customers will appreciate. It can be compelling and seems to be working. Atmel, ST Micro, TI, Infineon and others are headed down this path. Microchip has licensed MIPS to attack the 32 bit market and, with its leadership in the 8 bit market, it seems to be leveraging its loyal customer base for future growth.

The second trend, which to me could be more impactful, is that Xilinx and Altera have announced plans to enter the market in the next year with families of FPGAs that include hard ARM Cortex-A9 processor cores running at up to 1 GHz with their associated caches, hard memory controllers, CAN and Gbit Ethernet controllers. All this with a sea of LUTs and hundreds of GPIOs. Ahh – yes, but it’s an FPGA and will probably cost more than an ARM and a leg (excuse the pun).

This is where I think it gets interesting. Xilinx and Altera are focused on 28nm process technology. Much of the 32 bit MCU world is at 130nm or 90nm. By being at least 3 nodes ahead and using hard blocks for the CPU and peripherals, there is a chance that these parts will be smaller in die size than current MCUs and therefore sell at or below price parity. One caveat to this – there is no integrated flash for code storage. I suspect they will both include a stacked-die arrangement in their product families. Perhaps, though this is an outside chance, Altera and Xilinx will try to be pin compatible with other ARM MCU vendors.
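
To make the “3 nodes ahead” argument concrete, here is a rough back-of-envelope sketch in Python. It assumes an idealized 0.5x area shrink per full process node and a made-up 10 mm^2 of MCU logic at 130nm; real logic, analog, I/O and flash scale very differently, so treat it as an illustration of the reasoning, not as die-size data.

# Back-of-envelope area scaling behind the "3 nodes ahead" argument.
# All numbers are illustrative assumptions, not measured die sizes.

def scaled_area(area_mm2, nodes_ahead, shrink_per_node=0.5):
    """Area of the same logic after a given number of idealized full-node shrinks."""
    return area_mm2 * (shrink_per_node ** nodes_ahead)

mcu_logic_at_130nm = 10.0  # hypothetical mm^2 of MCU logic at 130nm

for nodes in range(4):
    print(f"{nodes} node(s) ahead: ~{scaled_area(mcu_logic_at_130nm, nodes):.1f} mm^2")

# Three full nodes ahead, the same logic occupies roughly one eighth of the
# original area, which is why a 28nm FPGA with hard CPU and peripheral blocks
# could plausibly approach MCU die sizes and price points.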

Another aspect to watch is how analog fits into this strategy. Many MCU vendors have seen tight integration of the processor and analog on a single die as the winning formula. However, integrating analog below 90nm is difficult and doesn’t offer Moore’s Law savings. I presume the FPGA vendors are focusing on performance and will partner with leading analog companies like Analog Devices and Linear Tech for the platform solution.

The auto and industrial markets are the most likely first targets. Automakers are begging for more performance and the 1GHz solutions from Altera and Xilinx are likely to be a leap ahead of Renesas. Plus I would suspect Altera and Xilinx architected their offerings based on input from the smaller set of large auto and industrial customers instead of the thousands of total worldwide MCU customers. Remember 40% of Renesas sales is automotive.

For the end customer there will be more solutions to choose from. The ingredients are: an ARM standard architecture + two vendors at similar pricing + full temperature range (automotive, industrial, consumer). For Xilinx and Altera it is an interesting new market to pursue. At $5B in size, the 32 bit MCU TAM is larger than their current combined revenue.

A week ago I listened to the Altera earnings conference call. What was intriguing was that John Daane, the CEO, mentioned that a Japanese customer had come in requesting a one-time order for their high end Stratix 4 FPGA to replace an ASIC that they couldn’t source due to the tsunami. The revenue Altera would receive in the coming quarter would be over $15M – significant enough to tell Wall St. I started thinking: the customer had to have redesigned his PCB to support the Stratix 4. But in the future, a customer in a crunch may not have to redesign – just place an order.


Smart Fill Replaces Dummy Fill Approach in a DFM Flow
by Daniel Payne on 07-30-2011 at 7:11 pm

I met with Jeff Wilson, Product Marketing Manager in the Calibre product group at Mentor, to learn more about Smart Fill versus Dummy Fill for DFM flows. Jeff works in the Wilsonville, Oregon office and we first met at Silicon Compilers back in the 1990s.

Dummy Fill

This diagram shows an IC layout layer on the left as originally designed, then on the right we see the same layout with extra square polygons added in order to fill in the blank space. Source: AMD

IC layouts use multiple layers (metal, poly, diffusion, via, etc.) to interconnect transistors. Fab engineers know that if each layer is manufactured at a certain density, the yield will be acceptable. Dummy fill as shown above has worked well for many nodes; however, at 65nm and smaller nodes, digital designs require a new approach in order to keep yields high.

The dummy fill helps make each layer more planar, and so there are DFM rules that need to be followed.
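
As a rough illustration of the density idea behind fill (this is a naive sketch, not Calibre's algorithm), the following Python snippet measures the metal density of a window and drops dummy squares onto empty grid sites until an assumed 30% target is met. The window size, target density and fill-square size are invented for the example, and real fill rules also constrain spacing, alignment and data volume.

# Naive window-based dummy fill sketch. Illustrative only: the numbers are
# made up and no real DRC/DFM spacing rules are applied.

def layer_density(shapes, window):
    """Fraction of the window area covered by (non-overlapping) rectangles."""
    wx0, wy0, wx1, wy1 = window
    window_area = (wx1 - wx0) * (wy1 - wy0)
    covered = sum((x1 - x0) * (y1 - y0) for x0, y0, x1, y1 in shapes)
    return covered / window_area

def add_dummy_fill(shapes, window, target=0.30, square=2.0):
    """Drop dummy squares onto a grid of empty sites until the window
    reaches the target density (ignores real spacing rules)."""
    filled = list(shapes)
    wx0, wy0, wx1, wy1 = window
    x = wx0
    while x + square <= wx1 and layer_density(filled, window) < target:
        y = wy0
        while y + square <= wy1 and layer_density(filled, window) < target:
            # Only place a dummy square where it does not touch existing shapes.
            overlaps = any(x < x1 and x + square > x0 and y < y1 and y + square > y0
                           for x0, y0, x1, y1 in filled)
            if not overlaps:
                filled.append((x, y, x + square, y + square))
            y += square
        x += square
    return filled

# Example: one wide wire (10% density) in a 20x20 window, filled up to 30%.
wire = [(0.0, 9.0, 20.0, 11.0)]
result = add_dummy_fill(wire, (0.0, 0.0, 20.0, 20.0))
print(f"density after fill: {layer_density(result, (0.0, 0.0, 20.0, 20.0)):.2f}")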

Q: How popular is Calibre with the dummy fill approach?
A: Calibre serves about 80% of the dummy fill market now.

Q: Is fill only used on metal layers?
A: No, actually all layers can benefit from fill techniques.

Smart Fill

Q: Why do we need to change from dummy fill?
A: The DFM rules for digital and analog designs have become more complex and the dummy fill approach just isn’t adequate to meet them. With dummy fill you are going to have too many violations that require manual edits, which takes up precious time on your project.

The percentage of total thickness variation has increased at each node, making CMP variation a critical issue requiring analysis. Source: ITRS

Q: What is the new approach with Smart Fill?
A: It’s DFM analysis performed concurrently with the fill process, so that the layout is correct by construction.

Q: Do I need manual edits to my layout after running Smart Fill?
A: Our goal is to have zero edits after Smart Fill.

Q: At what node do I have to consider using Smart Fill?
A: Our experience with foundries and IDMs is that digital designs at 65nm and below, and analog designs at 250nm and below, will directly benefit from Smart Fill.

Q: What other issues are there to be DFM compliant with fill?
A: The size of the IC layout database needs to be reasonable and the run times kept short.


Dummy Fill on left, SmartFill on right. Source: AMD

Q: When I use the Calibre Smart Fill, do I need to learn to write new rules?
A: No, our approach has you write fewer rules.

Q: What kind of run time improvements could I see with Smart Fill?
A: One customer reported that dummy fill ran in 22 hours while Smart Fill ran in 40 minutes.

Q: What is the Mentor product name for Smart Fill?
A: We call it SmartFill and it’s part of Calibre Yield Enhancer.

Q: What other areas does YieldEnhancer automate?
A: Litho, CMP, ECD, Stress and RTA.

Q: What about my critical timing nets?
A: SmartFill can read in a list of your critical nets and then avoid interfering with their performance by using spacing.
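
As a toy illustration of that keep-out idea (the spacing value and data format below are assumptions for the example, not SmartFill's actual interface), a candidate fill site can simply be rejected when it falls within a given distance of any shape on a net listed as timing critical:

# Hypothetical keep-out check around critical nets; units and spacing are invented.

CRITICAL_SPACING = 1.0  # assumed extra keep-out distance around critical-net shapes

def too_close(candidate, critical_shapes, spacing=CRITICAL_SPACING):
    """True if a candidate fill rectangle comes within 'spacing' of a critical shape."""
    cx0, cy0, cx1, cy1 = candidate
    # Grow the candidate by the keep-out and test for overlap with each critical shape.
    gx0, gy0, gx1, gy1 = cx0 - spacing, cy0 - spacing, cx1 + spacing, cy1 + spacing
    return any(gx0 < x1 and gx1 > x0 and gy0 < y1 and gy1 > y0
               for x0, y0, x1, y1 in critical_shapes)

# A fill site 0.5 units from a critical wire is rejected; one 2 units away is kept.
critical = [(0.0, 0.0, 10.0, 1.0)]
print(too_close((3.0, 1.5, 5.0, 3.5), critical))  # True  - inside the keep-out
print(too_close((3.0, 3.0, 5.0, 5.0), critical))  # False - far enough away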

Source: Mentor Graphics

Q: Who would use a tool like SmartFill?
A: Foundries, IDMs and Fabless design companies that want a technology advantage.

Q: What layout databases does SmartFill support?
A: Milkyway (SNPS), OA (Cadence), LEF/DEF (Cadence), Oasis.

Q: How do you keep run times low?
A: Through cell-based fill (placing cells rather than single shapes), which helps keep the file size and run times more reasonable.

Summary
To keep yield levels acceptable there are new DFM rules that affect how fill is created. The old approach of dummy fill has given way to Smart Fill, which uses concurrent analysis during fill to ensure that DFM rules are not violated.


Totem webinar: Analog/Mixed-Signal Power Noise and Reliability
by Paul McLellan on 07-30-2011 at 5:26 pm

The Totem webinar will be at 11am on Tuesday 2nd August. This session will be conducted by Karan Sahni, Senior Applications Engineer at Apache Design Solutions. Karan has been with Apache since 2008, supporting the Redhawk, Totem, and Sentinel product lines. He received his MS in Electrical Engineering from Syracuse University in New York.

Totem is a full-chip, layout-based power and noise platform for analog/mixed-signal designs. Totem addresses the challenges associated with global coupling of power/ground noise, substrate noise, and package/PCB capacitive and inductive noise for memory components such as Flash and DRAM, high-speed I/Os such as HDMI and DDR, and analog circuits such as power management ICs. Integrated with existing analog design environments, Totem provides cross-probing of analysis results with industry standard circuit design tools. It also enables designers to create a protected model representing the accurate power profile of their IP for mixed-signal design verification. Totem can be used from early-stage prototyping, to guide the power network and package design, to accurate chip sign-off.

Register for the webinar here.


CDNS EDA360 is DEAD!
by Daniel Nenni on 07-30-2011 at 3:00 am

Hard to believe EDA360, the Cadence blueprint to battle the ‘Profitability Gap’ and counter the semiconductor industry’s greatest threat, is DEAD at the ripe old age of one. As you may have already read, John Bruggeman left Cadence after the company conference call last week. The formal announcement should go out on Monday after the SEC paperwork is complete. The question is why?

Richard Goering did a very nice anniversary piece, “Ten Key Ideas Behind EDA360 – A Revisit”, which is here. Points 1-9 are a good description of what Synopsys and Mentor already do today, but they call it revenue instead of a “vision”. Point 10 is the real reason behind EDA360’s failure and JohnB’s departure:

10. No one company or type of company can provide all the capabilities needed for the next era of design. EDA360 requires a collaborative ecosystem including EDA vendors, embedded software providers, IP providers, foundries, and customers. Cadence is committed to building and participating in that ecosystem…..

One of the school teacher comments that has followed me through life is that I “don’t play well with others”, which is absolutely true to this day. The same goes for Cadence: they do not play well with others. That wasn’t always the case of course, but it certainly is today. To borrow a phrase from another SemiWiki blog, Cadence has a barbed wire fence strategy, and EDA360 cannot survive inside barbed wire.

No one will grieve more than me, since EDA360 was great blogging fodder. My first blog, Cadence EDA360 Manifesto, caused quite a stir and got me beers with John Bruggeman. In turn I gave him an EDA360-monogrammed grey hoodie, which he actually wore. Calling it a “manifesto” was clearly a PR mistake, which they admitted and corrected.

My second blog, Cadence EDA360 Redux!, made fun of the tag line:

“Cadence Design Systems, Inc. (NASDAQ: CDNS), the global leader in EDA360………”

Of course, why wouldn’t Cadence be the global leader in something they just made up? Actually I typed: “Of course, why wouldn’t Cadence be the global leader in something they just pulled out of their corporate butts?” My wife/editor, however, did not like the mental image it created so I changed it. Butt now you know the truth! Cadence PR got rid of that tag line shortly thereafter.

I also blogged TSMC OIP vs CDNS OIP Analysis to point out the error of choosing the same name as TSMC for a similar program:

The TSMC Open Innovation Platform promotes timeliness-driven innovation amongst the semiconductor design community, its ecosystem partners and TSMC’s IP, design implementation and DFM capabilities, process technology and backend services……

Cadence Design Systems, Inc. (NASDAQ: CDNS), the global leader in EDA360, today announced the Cadence Open Integration Platform, a platform that significantly reduces SoC development costs, improves quality and accelerates production schedules…..

Cadence dropped that one as well. Lawyer letters may have been involved so I cannot take full credit. My Semiconductor Realization! blog was much more EDA360 supportive:

Per JohnB: EDA360 is a top down approach starting with System Realization – to SoC Realization – ending with Silicon Realization. The WHY of EDA360 makes complete sense, great vision, I’m on board, I even have an EDA360 shirt. The question I had was: exactly HOW was this going to work? I still do not know the answer.

My last blog, Cadence EDA360 is Paper! (the one year anniversary is paper, by the way, thus the title), was also a positive one:

I think EDA360 is an excellent road map for Cadence. The company seems to have focus and hopefully EDA360 products will continue to be developed and deployed.

Cadence centralized product marketing in support of EDA360 with JohnB as its leader. Now Cadence product marketing is back to decentralized reporting into engineering. Marketing driven versus engineering driven: I miss JohnB already! R.I.P. EDA360!



Cache Coherency and Verification Seminar
by Paul McLellan on 07-27-2011 at 5:45 pm

At DAC Jasper presented a seminar with ARM on cache coherency and verification of cache coherency. The seminar is now available online for those of you that missed DAC or missed the seminar itself.

Cache architectures, especially for multi-core architectures, are getting more and more complex. Techniques originally pioneered on supercomputers are now finding their way into complex SoCs. The difference in performance between making an off-chip memory reference versus finding the data in one of the caches already on the chip is so big that it is worth paying a price in additional complexity to add hardware that keeps caches coherent when data is written to one of them. But this complexity needs a good specification of exactly what the guarantees of coherency are, and a mechanism for verifying that those guarantees hold.
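
To make the guarantee concrete in the simplest possible terms, here is a toy write-invalidate model in Python with two private caches and one shared memory location. It only illustrates the property a verification flow has to check (a read observes the most recent write); it is not the ARM protocol or Jasper's methodology, and all names here are invented for the example.

# Toy write-invalidate coherency model: two private caches, one shared location.
# Illustrative only; real protocols (MESI, ACE, ...) track far more state.

class Bus:
    def __init__(self):
        self.memory = 0          # the single shared memory location
        self.caches = []

    def attach(self, cache):
        self.caches.append(cache)

    def invalidate_others(self, writer):
        for c in self.caches:
            if c is not writer:
                c.line = None    # snoop: stale copies are dropped

class Cache:
    def __init__(self, name, bus):
        self.name = name
        self.bus = bus
        self.line = None         # None means the line is invalid
        bus.attach(self)

    def read(self):
        if self.line is None:    # miss: fetch the current value from memory
            self.line = self.bus.memory
        return self.line

    def write(self, value):
        self.bus.invalidate_others(self)  # keep the other caches coherent
        self.line = value
        self.bus.memory = value           # write-through, for simplicity

bus = Bus()
cpu0, cpu1 = Cache("cpu0", bus), Cache("cpu1", bus)

cpu1.read()                # cpu1 caches the old value (0)
cpu0.write(42)             # cpu0 writes; cpu1's stale copy is invalidated
assert cpu1.read() == 42   # without the invalidation, cpu1 would still read 0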

To view the seminar register here.

To request the white paper on the subject register here.


Intel’s Mobile Deja Vu All Over Again Moment
by Ed McKernan on 07-26-2011 at 12:49 pm

We have been here before… and when I say “we” I do include myself. Back in 1997, I joined a secretive company called Transmeta. The company was two years old and working on a new x86 microprocessor to challenge Intel. The original focus of the company was not to build a lower power processor, but a faster one. As with many start-ups, things change and Rev 2 is what ships. The challenge the ARM camp is providing today is broader and more serious; however, it is similar in many ways to 10 years ago, and from my perspective it really is déjà vu all over again.

When Transmeta was formed, the venture investors were buying into a storyline that the new architecture would replace many legacy x86 transistors with a VLIW engine and a software layer that offered not only translation but also acceleration. You could count on a small subset of instruction groups being used over and over again, which theoretically could be made to run faster than the way Intel processes its instructions. In addition, the rarely used instructions in the x86 core were just eating up space and power.

The great discovery for the company during the late 1990s was not that VLIW and code morphing were a better way to build a processor; it was the fact that Intel made an architectural decision to pursue a very high MHz solution with the Pentium 4. It would once and for all outrun AMD, and as everyone knows from the 1990s, processor ASPs were based purely on MHz and not actual performance.

In the pursuit of high MHz, Intel was forced to come clean to mobile vendors that the next generation mobile parts were going to run much hotter, and both Intel and their customers scrambled to find cooling solutions to dissipate the heat. These took up space and were costly. The result was that average notebooks increased in size, thickness, and weight, moving in the opposite direction of one of my observations of computing: computing always moves in the direction of smaller and lighter.

The Pentium 4 move, at its worst, disenfranchised the Japanese mobile vendors building for the home market. It was a market that was mostly mobile and accounted for about 20% of the overall WW mobile market. But one has to remember that in 2000 desktop was still 80% of the market, so 20% of 20% is just 4%. As a result, Intel’s revenue stream was driven by 80% of the market leveraged off of high MHz.

Mobile computing did not surpass desktop until around 2006. What held back mobile was the high cost of LCD screens and the fact that WiFi wasn’t prevalent until after 2001. Therefore, Intel extended its fence lines with Centrino to include WiFi and to block AMD.

With today’s mobile challenge, there is no question that Intel got started late in answering the call; however, there are advantages and disadvantages to both ARM’s and Intel’s current standing, and I speak based on my experience.

The advantages for ARM are that they have a long history in phones and wide adoption rates with big OEMs carrying the product into the new tablet space. I don’t assume that Win 8 is going to naturally knock down the barriers in the PC space. It takes horsepower to get into PCs and ARM is not there. I have doubts they will get there even with nVidia in 2012/2013. Intel is best positioned to respond to this threat with their current processors and maneuvers like rolling out Thunderbolt, a clear barbed wire strategy. Secondly, I am not sure how much support ARM will get from MSFT in winning the PC space. For years, MSFT has implemented a high bar list of requirements to be considered MSFT ready (this includes minimum CPU and Graphics specs down to minimum DRAM etc…). Why would MSFT spend resources shoring up a market where they already control 95% market share? Sorry IDC, ARM is not going to have 13% of PC market share in 2015.

ARM’s weakness is twofold. First, they are trying to go after too much of the market at once, which dilutes their resources (they should drop the PC and server push for now). Second, they are going to see the field of customers naturally winnow down to 3 or 4, and their destiny will be tied to these large customers, who are going to be asking for big discounts on royalties. I will cover this in a follow-up article.

Back to my original focus on Intel and its current standing. The strengths of the company are servers and mobile PCs. Intel is gaining strength as AMD melts away and integrated graphics reduces nVidia’s presence. There is between $5B and $10B in available TAM that they are going to feed off of; this is 3-5X more TAM than if they owned Apple’s tablet and iPhone business today (120M units * $15 = $1.8B), and it is more profitable.

The winning solution in the tablet and smartphone space is a new x86 architecture using circuit design techniques that shut off more regions of the core, married to a right sized graphics engine, and leveraging their leadership with SRAM caches. All of this in a leading edge, 22nm or 14nm tri-gate process. The pieces are there.

Now back to my experience at Transmeta and why I see a repeat of the past. Transmeta won the Japanese vendors when we offered them a processor with a Thermal Design Point (TDP) of 7 watts and a standby power of 200mW. TDP is the worst case power, not the average power, that a mechanical engineer has to design a mobile system to. I believe the TDP that Apple designs the tablet to is 3-5W. And with tricks, Apple is able to accommodate Intel’s Sandy Bridge ULV with a 17W TDP in its MacBook Air. Ideally this should be closer to 7W. So the CPU design teams for Haswell, the 22nm mobile part due in Q1 2013, know what to shoot for as they design their CPU. Hitting these design points means the gap with ARM will close, and more importantly, 100% die yield to this TDP will allow Intel to start selling $60 parts instead of $225 parts in this space.

In the Analyst Meeting in May, Paul Otellini articulated this TDP message – it was one of the key takeaways. It went right over the heads of the press and analysts because they did not understand what he meant by moving their mobile designs from 35W TDP to 17W TDP. Couple this with the 22nm announcement, where Intel said they would reduce standby power by 10X, and you have a series of products coming that will go toe to toe with ARM competitors.

It’s déjà vu all over again. More to come….




Synopsys MIPI Webinar
by Eric Esteve on 07-26-2011 at 6:05 am

Synopsys MIPI Webinar: MIPI is really getting traction

Synopsys’ last two acquisitions of IP vendors, the former ChipIdea in 2009 (the mixed-signal product line of MIPS) and Virage Logic in 2010, have allowed it to build a stronger, more diversified IP portfolio. Amazingly, Synopsys found a MIPI IP product line in the basket in both cases. Until recently, Synopsys has been pretty discreet about this interface IP product, essentially used in the high end wireless phone segment – the smartphone – at least at the beginning.

To register for this MIPI webinar, just go here.

Now, MIPI protocols are increasingly being adopted in the market, primarily for interfacing an SoC to a camera, display and RFICs, while newer MIPI protocols are being promoted for storage, chip-to-chip connectivity and next-generation cameras and displays. Synopsys holding a webinar on MIPI is a good sign that the MIPI protocol is getting traction in the market. If you have a doubt, just go to the SemiWiki Industry Wiki page and have a look at the number of views for the different interface IP protocols. The ranking is very clear:
• MIPI IP: 1,192
• PCIe IP: 675
• USB 3.0 IP: 616
• DDR IP: 595
• SATA IP: 556

MIPI is generating more interest than the other protocols, almost two times more!

This webinar will be held by Hezi Saar, in charge of marketing for the MIPI PHY and controller IP product line. Coming from Virage Logic, he brings more than 15 years of experience in the semiconductor and electronics industries in embedded systems. He will explain the building blocks and integration challenges faced by designers while integrating MIPI protocols into SoCs. Hezi is a smart guy, no doubt about it! FYI, he is the Synopsys person who decided to publish the four-part serial blog “Interview with Eric Esteve: Interface IP trends”.

It is a good idea to do such evangelization work, as MIPI protocol adoption has suffered from the number and complexity of its connectivity protocols. But if you take some time to dig into MIPI, you realize that MIPI offers a solution for every type of connection (display, camera, RFIC, mass storage…), each one the best optimization for the type of chip/application you want to connect to. Don’t forget that MIPI was initially developed for the wireless handset market. Production volumes can reach dozens of millions of ICs (so each fraction of a square millimeter counts!) and power consumption is the key issue at the system level, so you must use a protocol which is exactly tailored to your needs; an interface to a display and an interface to an RFIC are necessarily different. Hezi will probably explain that, even though the protocols are different, the physical interface stays the same, and using the same type of PHY is a good way to minimize the learning curve for the SoC engineer and the risk at the production level.

To register for this MIPI webinar, just go here.

By Eric Esteve from IPnest