
CDNS EDA360 is DEAD!

by Daniel Nenni on 07-30-2011 at 3:00 am

Hard to believe EDA360, the Cadence blueprint to battle the “profitability gap” and counter the semiconductor industry’s greatest threat, is DEAD at the ripe old age of one. As you may have already read, John Bruggeman left Cadence after the company conference call last week. The formal announcement should go out on Monday after the SEC paperwork is complete. The question is: why?

Richard Goering did a very nice anniversary piece, “Ten Key Ideas Behind EDA360 – A Revisit”, which is here. Points 1-9 are a good description of what Synopsys and Mentor already do today, but they call it revenue instead of a “vision”. Point 10 is the real reason behind EDA360’s failure and JohnB’s departure:

10. No one company or type of company can provide all the capabilities needed for the next era of design. EDA360 requires a collaborative ecosystem including EDA vendors, embedded software providers, IP providers, foundries, and customers. Cadence is committed to building and participating in that ecosystem…

One of the school teacher comments that has followed me through life is that I “don’t play well with others”, which is absolutely true to this day. The same goes for Cadence, they do not play well with others. That wasn’t always the case of course, but it certainly is today. To borrow a phrase from another SemiWiki Blog, Cadence has a barbed wire fence strategy and EDA360 cannot survive inside barbed wire.

No one will grieve more than me, since EDA360 was great blogging fodder. My first blog, Cadence EDA360 Manifesto, caused quite a stir and got me beers with John Bruggeman. In turn I gave him an EDA360-monogrammed grey hoodie, which he actually wore. Calling it a “manifesto” was clearly a PR mistake, which they admitted and corrected.

My second blog, Cadence EDA360 Redux!, made fun of the tag line:

“Cadence Design Systems, Inc. (NASDAQ: CDNS), the global leader in EDA360………”

Of course, why wouldn’t Cadence be the global leader in something they just made up? Actually I typed: “Of course, why wouldn’t Cadence be the global leader in something they just pulled out of their corporate butts?” My wife/editor, however, did not like the mental image it created so I changed it. Butt now you know the truth! Cadence PR got rid of that tag line shortly thereafter.

I also blogged TSMC OIP vs CDNS OIP Analysis to point out the error of choosing the same name as TSMC for a similar program:

The TSMC Open Innovation Platform promotes timeliness-driven innovation amongst the semiconductor design community, its ecosystem partners and TSMC’s IP, design implementation and DFM capabilities, process technology and backend services…

Cadence Design Systems, Inc. (NASDAQ: CDNS), the global leader in EDA360, today announced the Cadence Open Integration Platform, a platform that significantly reduces SoC development costs, improves quality and accelerates production schedules…..

Cadence dropped that one as well. Lawyer letters may have been involved, so I cannot take full credit. My Semiconductor Realization! blog was much more EDA360-supportive:

Per JohnB: EDA360 is a top-down approach starting with System Realization, moving to SoC Realization, and ending with Silicon Realization. The WHY of EDA360 makes complete sense: great vision, I’m on board, I even have an EDA360 shirt. The question I had was: exactly HOW was this going to work? I still do not know the answer.

My last blog, Cadence EDA360 is Paper! (the one-year anniversary is paper, by the way, thus the title), was also a positive one:

I think EDA360 is an excellent road map for Cadence. The company seems to have focus and hopefully EDA360 products will continue to be developed and deployed.

Cadence centralized product marketing in support of EDA360, with JohnB as its leader. Cadence product marketing is now back to decentralized reporting into engineering. Marketing-driven versus engineering-driven: I miss JohnB already! R.I.P. EDA360!



Cache Coherency and Verification Seminar

by Paul McLellan on 07-27-2011 at 5:45 pm

At DAC, Jasper presented a seminar with ARM on cache coherency and the verification of cache coherency. The seminar is now available online for those of you who missed DAC or missed the seminar itself.

Cache architectures, especially for multi-core architectures, are getting more and more complex. Techniques originally pioneered on supercomputers are now finding their way into complex SoCs. The difference in performance between making an off-chip memory reference and finding the data in one of the caches already on the chip is so big that it is worth paying a price in additional complexity, adding hardware that keeps caches coherent when data is written to one of them. But this complexity needs a good specification of exactly what the guarantees of coherency are, and a mechanism for verifying that the guarantees hold.
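To make the kind of guarantee involved concrete, here is a minimal sketch (purely illustrative, not material from the seminar) of the single-writer invariant that MESI-style protocols maintain for each cache line:

```python
# Minimal sketch of a MESI-style coherency invariant (illustrative only):
# a line may be Modified or Exclusive in at most one cache, and never
# writable in one cache while another cache still holds a Shared copy.

from collections import Counter

def coherent(states):
    """states: per-cache MESI states for one line, e.g. ['M', 'I', 'I']."""
    counts = Counter(states)
    writers = counts['M'] + counts['E']   # caches holding a writable copy
    readers = counts['S']                 # caches holding a read-only copy
    # Single-writer rule: at most one writable copy, with no stale readers.
    return writers <= 1 and not (writers == 1 and readers > 0)

assert coherent(['M', 'I', 'I'])       # one writer, others invalid: OK
assert coherent(['S', 'S', 'S'])       # many readers: OK
assert not coherent(['M', 'S', 'I'])   # writer plus stale reader: violation
assert not coherent(['E', 'E', 'I'])   # two writable copies: violation
```

A real verification flow checks this kind of invariant formally against the protocol’s state machine rather than by enumerating states, but the property being verified is the same.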

To view the seminar, register here.

To request the white paper on the subject, register here.


Intel’s Mobile Deja Vu All Over Again Moment

by Ed McKernan on 07-26-2011 at 12:49 pm

We have been here before… and when I say “we” I do include myself. Back in 1997, I joined a secretive company called Transmeta. The company was two years old and working on a new x86 microprocessor to challenge Intel. The original focus of the company was not to build a lower-power processor, but a faster one. As with many start-ups, things change and Rev 2 is what ships. The challenge the ARM camp is providing today is broader and more serious; however, it is similar in many ways to 10 years ago, and from my perspective it really is déjà vu all over again.

When Transmeta was formed, the venture investors were buying into a storyline that the new architecture would replace many legacy x86 transistors with a VLIW engine and a software layer that offered not only translation but also acceleration. You could count on a subset of instruction groups that were used over and over again and that could, in theory, be made to run faster than the way Intel processes its instructions. In addition, the rarely used instructions in the x86 core were just eating up space and power.

The great discovery for the company during the late 1990s was not that VLIW and code morphing were a better way to build a processor; it was the fact that Intel made an architecture decision to pursue a very high-MHz solution with Pentium 4. It would once and for all outrun AMD, and as everyone knows from the 1990s, processor ASPs were based purely on MHz and not actual performance.

In the pursuit of high MHz, Intel was forced to come clean to mobile vendors that the next-generation mobile parts were going to run much hotter, and both Intel and their customers scrambled to find cooling solutions to dissipate the heat. These took up space and were costly. The result was that average notebooks increased in size, thickness, and weight, moving in the opposite direction of one of my observations about computing: computing always moves in the direction of smaller and lighter.

The Pentium 4 move, at its worst, disenfranchised the Japanese mobile vendors building for the home market. It was a market that was mostly mobile and accounted for about 20% of the overall WW mobile market. But one has to remember that in 2000 desktop was still 80% of the market, so 20% of 20% is just 4%. As a result, Intel’s revenue stream was driven by 80% of the market leveraged off of high MHz.

Mobile computing did not surpass desktop until around 2006. What held back mobile was the high cost of LCD screens and the fact that WiFi wasn’t prevalent until after 2001. Therefore, Intel extended its fence lines with Centrino to include WiFi and to block AMD.

With today’s mobile challenge, there is no question that Intel got started late in answering the call; however, there are advantages and disadvantages to both ARM’s and Intel’s current standing, and I speak based on my experience.

The advantages for ARM are that they have a long history in phones and wide adoption rates with big OEMs carrying the product into the new tablet space. I don’t assume that Win 8 is going to naturally knock down the barriers in the PC space. It takes horsepower to get into PCs and ARM is not there. I have doubts they will get there even with nVidia in 2012/2013. Intel is best positioned to respond to this threat with their current processors and maneuvers like rolling out Thunderbolt, a clear barbed wire strategy. Secondly, I am not sure how much support ARM will get from MSFT in winning the PC space. For years, MSFT has implemented a high bar list of requirements to be considered MSFT ready (this includes minimum CPU and Graphics specs down to minimum DRAM etc…). Why would MSFT spend resources shoring up a market where they already control 95% market share? Sorry IDC, ARM is not going to have 13% of PC market share in 2015.

ARM’s weakness is twofold: first, they are trying to go after too much of the market at once, which dilutes their resources (they should drop the PC and server push for now). Second, they are going to see the field of customers naturally winnow down to 3 or 4, and their destiny rests on these large customers, who are going to be asking for big discounts on royalties. I will cover this in a follow-up article.

Back to my original focus on Intel and its current standing. The strengths of the company are servers and mobile PCs. Intel is gaining strength as AMD melts away and integrated graphics reduces nVidia’s presence. There is between $5B and $10B in available TAM that they are going to feed off of; this is 3-5X more TAM than if they owned Apple’s tablet and iPhone business today (120MU * $15 = $1.8B), and it is more profitable.
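The back-of-envelope math in that parenthetical can be made explicit; the unit count and ASP are the article’s own estimates, not reported numbers:

```python
# Back-of-envelope TAM comparison using the article's figures (the author's
# estimates, not reported numbers): ~120M units of Apple tablet/iPhone
# silicon at ~$15 ASP, versus the $5-10B server/mobile-PC TAM.
apple_units = 120e6   # 120MU
apple_asp = 15.0      # assumed dollars per chip
apple_tam = apple_units * apple_asp

print(f"Apple TAM: ${apple_tam / 1e9:.1f}B")   # $1.8B
for server_pc_tam in (5e9, 10e9):
    ratio = server_pc_tam / apple_tam
    print(f"${server_pc_tam / 1e9:.0f}B TAM is {ratio:.1f}x the Apple business")
```

The computed range is 2.8x to 5.6x, which is where the article’s rounder “3-5X” figure comes from.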

The winning solution in the tablet and smartphone space is a new x86 architecture using circuit design techniques that shut off more regions of the core, married to a right sized graphics engine, and leveraging their leadership with SRAM caches. All of this in a leading edge, 22nm or 14nm tri-gate process. The pieces are there.

Now back to my experience at Transmeta and why I see a repeat of the past. Transmeta won the Japanese vendors when we offered them a processor with a Thermal Design Point (TDP) of 7 watts and a standby power of 200mW. TDP is the worst-case power, not the average power, that a mechanical engineer has to design a mobile system to. I believe the TDP that Apple designs the tablet to is 3-5W. And with tricks, Apple is able to accommodate Intel’s Sandy Bridge ULV, with its 17W TDP, in its MacBook Air; ideally this should be closer to 7W. So the CPU design teams for Haswell, the 22nm mobile part due in Q1 2013, know what to shoot for as they design their CPU. Hitting these design points means the gap with ARM will close, and more importantly, 100% die yield to this TDP will allow Intel to start selling $60 parts instead of $225 parts in this space.

At the Analyst Meeting in May, Paul Otellini articulated this TDP message – it was one of the key takeaways. It went right over the heads of the press and analysts because they did not understand what he meant by moving mobile designs from 35W TDP to 17W TDP. Couple this with the 22nm announcement, where Intel said they would reduce standby power by 10X, and you have a series of products coming that will go toe to toe with ARM competitors.

It’s déjà vu all over again. More to come….




Synopsys MIPI Webinar

by Eric Esteve on 07-26-2011 at 6:05 am

Synopsys MIPI Webinar: MIPI is really getting traction

Synopsys’ last two IP vendor acquisitions, the former ChipIdea (the mixed-signal product line of MIPS) in 2009 and Virage Logic in 2010, have allowed it to build a stronger, more diversified IP portfolio. Amazingly, Synopsys found a MIPI IP product line in the basket in both cases. Until recently, Synopsys had been pretty discreet about this interface IP product, essentially used in the high-end wireless phone segment (the smartphone), at least at the beginning.

To register for this MIPI webinar, just go here.

Now, MIPI protocols are increasingly being adopted in the market, primarily interfacing an SoC to a camera, display and RFICs, while newer MIPI protocols are being promoted for storage, chip-to-chip connectivity and next-generation cameras and displays. Synopsys holding a webinar on MIPI is a good sign that the MIPI protocol is getting traction in the market. If you have any doubt, just go to the SemiWiki Industry Wiki page and have a look at the number of views for the different interface IPs. The ranking is very clear:

  • MIPI IP: 1,192
  • PCIe IP: 675
  • USB 3.0 IP: 616
  • DDR IP: 595
  • SATA IP: 556

MIPI is generating more interest than the other protocols, almost two times more!

This webinar will be held by Hezi Saar, in charge of marketing for the MIPI PHY and controller IP product line. Coming from Virage Logic, he brings more than 15 years of experience in the semiconductor and electronics industries in embedded systems. He will explain the building blocks and integration challenges designers face while integrating MIPI protocols into SoCs. Hezi is a smart guy, no doubt about it! FYI, he is the Synopsys person who decided to publish the four-part blog series “Interview with Eric Esteve: Interface IP trends”.

It is a good idea to do such evangelization work, as MIPI protocol adoption has suffered from the number and complexity of connectivity protocols. But if you take some time to dig into MIPI, you realize that it offers a solution for every type of connection (display, camera, RFIC, mass storage…), each optimized for the type of chip/application you want to connect to. Don’t forget that MIPI was initially developed for the wireless handset market, where production volumes can reach tens of millions of ICs (so every fraction of a square millimeter counts!) and power consumption is the key issue at the system level; you must use a protocol tailored exactly to your needs, and interfacing with a display is necessarily different from interfacing with an RFIC. Hezi will probably explain that, even if the protocols are different, the physical interface stays the same: using the same type of PHY is a good way to minimize the learning curve for the SoC engineer and the risk at the production level.

To register for this MIPI webinar, just go here.

By Eric Esteve, IPnest


Global Technology Conference 2011

by Daniel Nenni on 07-24-2011 at 1:13 pm

Competition is what made the semiconductor industry and semiconductors themselves what they are today! Competition is what drives innovation and keeps costs down. Not destructive competition, where the success of one depends on the failure of another, but constructive competition that promotes mutual survival and growth where everybody can win. The semiconductor design ecosystem, on the other hand, is the poster child for destructive competition, which is why EDA valuations are a fraction of what they should be, but I digress…

GlobalFoundries is the first “truly global” foundry, which brings a different type of competition. Truly global is defined as having fabs in Dresden, New York, Singapore, and a new fab planned for Abu Dhabi and other parts of the world. India? Russia? If they put a fab in Russia maybe Sarah Palin can see it from Alaska! 😀

The first Global Technology Conference was one of the best I have attended. It was packed with semiconductor industry executives from around the world. Even as a lowly blogger, I was welcomed with executive interviews and V.I.P. treatment all the way. The Global guys are a class act, believe it.

This year:
GLOBALFOUNDRIES senior executives and technologists share their vision and perspectives on driving leading-edge technology innovation through True Collaboration as the industry moves to the 32/28nm technology node and beyond.

In addition to the technical highlights of the GFI roadmap to 20nm and 3DIC, here is the meat of the conference as planned today:

The GlobalFoundries ecosystem partners will also be there for discussions and demonstrations:

This conference is all about communication within the semiconductor design and manufacturing ecosystem, which is the biggest challenge we face as an industry today. It’s time to take action. It’s time to take personal responsibility for the industry that supports my extravagant lifestyle. Attend this conference and make a difference!


Intel Q2 Financial Secret: “Shhhh….We’re on Allocation”

by Ed McKernan on 07-22-2011 at 10:47 pm

Every semiconductor analyst has given Intel the once-over a hundred times about their slowing PC unit volume. They are looking in the wrong place, because the true secret of the Q2 earnings – in my humble opinion – is that Intel’s factories are full and parts are on allocation. What???

Check it out: high-end, 8- and 10-core XEON processors introduced this spring are selling for between $100 and $1,200 more on the gray market. Gray markets can act as a bleed-off valve: in times of production excess, parts will sell for under list, while in times of shortage a customer will make a quick buck selling out the back door to others who have an urgent need.

I didn’t have a clue about the current “Allocation Situation” until I listened to the earnings conference call. From my perspective it was all stellar until they got to the data center revenue growth. It was up only 15% year over year. I was expecting 25-30%, which is what Intel cranked out the last 3 quarters. Why 15% after Google ups its CapEx by 100% and IBM waxes about the cloud?

With a 30% year-over-year increase in the Data Center Group, Intel would have hit $13.4B in revenue – a true blowout. Then, to add to the intrigue, they go on to forecast revenue of $14B (+/- $500M) for Q3 and a large increase in R&D and even more CapEx for 2011. How can this be if PC sales are lagging and dividends are being shoveled out the door at a furious pace? Did Paul Otellini lose control of the checkbook?

I believe there once was a high tech exec that said, “cash flow is more important than your mother.” Obviously he wasn’t invited over to Sunday dinner with Mom after that. History shows that Otellini runs a tight ship but makes strong bets on forward trends. The trend in Data Center is strong and worthy of writing some big checks for more capacity and a few hundred more engineers to kill ARM by 14nm (more about this in a later column).

Here’s a second little nugget to chew on. Otellini knows that regardless of the shortfalls in netbooks – a minor $350M business per quarter – Intel is at a tipping point with data center revenue, and that profits are compounding at a staggering rate. 80%+ gross margins and 50%+ operating margins mean he needs to go ahead and build the fabs as fast as possible to capitalize on the customers waiting for these new ultra performance-efficient server chips that reduce sky-high power and cooling bills.

These new XEON processors, introduced in the spring at an ASP 25% higher than the old models, are mighty big die, some measuring over 500mm2 in area. The pricing data suggests that yields are not yet high enough to satisfy demand, but they are high enough to be extremely profitable. All it takes is 5 or 10 good die per wafer to hit the high profit margins. So while the XEON family makes up less than 6% of Intel’s unit volume, it probably occupies a complete fab.
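To see why a handful of good die per wafer is enough, here is a rough sketch using the standard die-per-wafer approximation; all figures are illustrative assumptions, not Intel’s actual numbers:

```python
import math

# Rough gross-die-per-wafer estimate for a large (~500 mm^2) XEON-class die
# on a 300 mm wafer, using the common approximation
#   DPW ~= pi*d^2 / (4*S)  -  pi*d / sqrt(2*S)
# All figures below are illustrative assumptions, not Intel's actual numbers.
wafer_d = 300.0   # wafer diameter, mm
die_area = 500.0  # die size, mm^2

gross_die = int(math.pi * wafer_d**2 / (4 * die_area)
                - math.pi * wafer_d / math.sqrt(2 * die_area))
print(f"gross die per 300 mm wafer: ~{gross_die}")

# Even a handful of good die at server ASPs beats a wafer full of $60 parts.
for good in (5, 10):
    print(f"{good} good die x $2,000 ASP = ${good * 2000:,} per wafer")
```

At roughly 110 gross die per wafer, even 5-10 good die means $10,000-$20,000 of revenue per wafer at a conservative $2,000 ASP, which is why low yields on a big server die can still be extremely profitable.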

One can ask: if this scenario is true, then what is the impact in the notebook and desktop space? I believe Intel built to a level that satisfies the typical seasonal demand and then cut over all other wafers to the server side, leaving some crumbs for AMD, which turned in a good quarter for Q2 and forecasted a strong Q3.

Back in the 1990s Intel experienced capacity crunches multiple times during the 386, 486 and Pentium ramps. Their solution was to hold prices flat for several quarters. So I reviewed Intel’s CPU pricing since January across what has to be 300 SKUs by now, and I cannot see one CPU that has seen a price reduction. The second derivative of the flat pricing is that AMD sees a surprise pickup and/or the industry sees lower unit volume.

So the mystery grows… is the PC slowdown to single-digit growth actually due somewhat to a capacity crunch at Intel, and not the iPad or the crummy economy in Europe or the US? I believe that Paul Otellini’s end game is at 14nm, and he would certainly sacrifice PC units with $20 Atom chips in netbooks and $60 Celerons destined for low-end notebooks in order to sell $2,000 to $4,600 XEON chips that will fund the accelerated deployment of larger 14nm fabs coming online in 2013, and the armies of engineers designing x86 for tablets and smartphones going into those same fabs.

Full Disclosure: As an investor I am long INTC. However this is not a recommendation to buy any of the stocks covered in this article. Every investor needs to do his homework with regards to investing.


Space is limited so register for GTC 2011 HERE today!

PowerArtist webinar

by Paul McLellan on 07-21-2011 at 3:21 pm

The next Apache webinar is on PowerArtist, RTL power analysis, on July 26th at 11am Pacific time. The webinar will be conducted by David “Woody” Norwood, Principal Applications Engineer at Apache Design Solutions. David has been supporting RTL power products for the past 8 years. He has broad EDA industry experience, with 25 years in a variety of applications engineering and management roles focusing on power and logic verification technologies.

PowerArtist is a complete RTL design-for-power platform providing fully integrated advanced analysis and automatic reduction technologies, including sequential logic, combinatorial clock gating, memory, and data path for complex IP and SoC designs. By enabling analysis, reduction, and optimization early in the design cycle, PowerArtist helps designers meet power budget requirements and increase the power efficiency of their ICs.

To register for the webinar go here.


Intel’s Barbed Wire Fence Strategy

by Ed McKernan on 07-21-2011 at 11:38 am

Analysts tend to make judgments about Intel based on existing conventional wisdom (CW), projecting it in a straight line into the future. As a former Intel, Cyrix, and Transmeta processor marketing guy, I would like to offer a different perspective, as I have been both inside the tent looking out and outside looking in.

The current CW is that Intel is doomed… it’s OK, we have been here before. Each time CW says Intel is doomed they implement what I will call their Barbed Wire Strategy to counter their threats and expand their market and influence. If I may, I will explain the Barbed Wire Strategy.

A month ago I moved with my family to Austin. I had the job of driving our car, with one of my boys, from Silicon Valley through Arizona, New Mexico and West Texas. In West Texas there are a lot of ranchers with big tracts of land (thousands of acres), all ringed with barbed wire. It is not particularly high, but it is there to keep cattle in and people out. These days it’s also keeping in a lot of windmills. Typically the ranch house is deep inside the property, off a long dirt road that most people couldn’t find the entrance to. Consider the ranch house like Intel’s processors: they are very valuable and have existed forever. If someone were to invade the property, they would not make it to the house. Now if the rancher wants to increase his property to handle more cattle (or windmills), he can buy the property next to him, move the barbed wire fence farther out, and increase his personal wealth.

Today’s CW is that the ARM Camp has Intel’s number and the game is over. My take is that Intel is already well down the path to implementing the Barbed Wire Strategy on a number of fronts. I will talk about servers today.

Warren Buffett talks about investing in companies with high castle walls and big moats; however, in the ever-changing tech business you need the Barbed Wire Strategy. At the first sign of a competitive threat, Intel looks to expand the property lines and move its barbed wire farther out from the center of the ranch. This past week they acquired Fulcrum, a switch chip startup that competes with Broadcom, except that no system guy would buy from Fulcrum for fear they would not be around in a crunch. Intel acquired Fulcrum in order to own the whole line card in the data center (sans DRAM). The switch business is around $1B for Broadcom. So as Broadcom commoditized Cisco’s switch business over the past 5 years, now Intel will commoditize Broadcom’s and Marvell’s switch business. Intel may not get the whole $1B of revenue, but it will be additive, and more importantly it will move the barbed wire fence farther out from the ranch house.

For someone new to understand this, you need to review history. The clearest example I can give is from the early 1990s, when Intel was facing a resurgent AMD and new processor vendors Cyrix, NexGen and C&T. The chipset market was a thriving third-party market. Intel wanted to increase the barriers to entry for all, so they took it upon themselves to develop their own chipset for Pentium. The chipset added a minor amount of revenue, but the protective barrier it set up allowed Pentium prices to rise dramatically. Chipset vendors melted away, along with Cyrix. AMD acquired NexGen.

Expect Data Center revenues to rise with the Fulcrum acquisition and more importantly start thinking about the impact this will have on the ARM vendors aiming for the server space.

More Barbed Wire Stories to Follow.



Want to learn Mixed-Signal Design and Verification?

by Daniel Payne on 07-20-2011 at 6:13 pm

Workshops are a great way to learn hands-on about IC design technology. Mentor has a free workshop to introduce you to creating, simulating and verifying mixed-signal (analog and digital) designs.

PLL waveforms showing both digital and analog signals.

Dates in Fremont, California:
July 26, 2011
September 15, 2011
November 8, 2011

Their tool is called Questa ADMS and spans both digital design with HDL and analog design using SPICE or Fast SPICE.

These tools work both in a Mentor environment and the Cadence environment.


Questa ADMS inside Design Architect IC

Questa ADMS inside of the Cadence Virtuoso Analog Design Environment

Overview

Mentor Graphics cordially invites you to attend a FREE “hands-on” Mixed-Signal Design and Verification Workshop. In this workshop we will explore the current trends of IC design and highlight the challenges these trends create. This workshop will expose you to comprehensive solutions necessary to improve your design and verification productivity.

During this lab-intensive technical workshop, you will gain first-hand experience evaluating Questa ADMS, Mentor’s mixed-signal simulation solution.

Lab 1 – Getting started with ADMS

• Explore the ADMS graphical interface and infrastructure
• Run digital and mixed-signal simulations with adder and ADC circuits

Lab 2 – Mixed-Signal Simulation: Digital-centric

• Learn about using analog SPICE circuits within a digital netlist hierarchy
• Explore the analog-digital interface and how to bridge the domains
• Understand the importance of validating analog and digital blocks together

Lab 3 – Mixed-Signal Simulation: Analog-centric

• Learn how to use HDL behavioral models within a schematic
• Observe the impact of AMS modeling on performance and functionality on a PLL circuit



Gary Smith on the Apache acquisition

by Paul McLellan on 07-20-2011 at 4:44 pm

Gary Smith has a note out about the Apache acquisition by ANSYS (unfortunately, if you get his email newsletter, the link there takes you to the wrong article, but it really is here, or here as a PDF). Most of the note actually describes the acquisition and the Apache product line, which won’t reveal much new to anyone here.

He regards the product lines as largely complementary: “Together Apache brings the low power IC design solutions where ANSYS provides the extraction software for electronic packages and boards. While Apache and ANSYS will overlap with common customers, together they will offer complementary software solutions that will enhance the technological solutions for chip, package and board system design, particularly in the areas of electromagnetic interference (EMI), thermal stress and reliability, signal integrity and power integrity.”