
Third Generation DFM Flow: GLOBALFOUNDRIES and Mentor Graphics

Third Generation DFM Flow: GLOBALFOUNDRIES and Mentor Graphics
by Daniel Payne on 08-26-2011 at 11:17 am

Figure: the Calibre Yield Analyzer DFM flow

Introduction
Mentor Graphics and GLOBALFOUNDRIES have been working together for several generations, since the 65nm node, on improving IC design yield. Michael Buehler-Garcia, director of Calibre Design Solutions Marketing at Mentor Graphics, spoke with me by phone today to explain how they are working with GLOBALFOUNDRIES on a 3rd generation DFM (Design For Manufacturing) flow.

3rd party IP providers like ARM and Virage have been using this evolving DFM flow to ensure that SoCs will have acceptable yields. If the IP on your SoC is litho-clean, then the effort to make the entire SoC clean is reduced. GLOBALFOUNDRIES mandates that its IP providers pass a DFM metric.

Manufacturing Analysis and Scoring (MAS)
Box A of the flow shown above is where GLOBALFOUNDRIES measures yield and gathers the yield modeling information needed to give Mentor the design-to-silicon interactions. This could be equations that describe the variation in fail rates of recommended rules, or defect density distributions for particle shorts and opens.

Random and Systematic Defects and Process Variations
At nodes of 100nm and below there are both random defects and process variations that limit yield. Critical Area Analysis (CAA) is used for random defects and Critical Failure Analysis (CFA) is used for systematic defects and process variations. These analyses help pinpoint problem areas in the IC layout prior to tape-out.
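To make the random-defect side concrete, here is a minimal Python sketch of the kind of Poisson yield model that critical area analysis feeds: each failure mechanism contributes exp(-D0 × critical area) and the contributions multiply. The defect densities and critical areas below are invented for illustration, not GLOBALFOUNDRIES data; a real CAA run extracts the critical areas from the actual layout.

```python
import math

def poisson_yield(defect_density_per_cm2, critical_area_cm2):
    """Limited yield for one failure mechanism: Y = exp(-D0 * Acrit)."""
    return math.exp(-defect_density_per_cm2 * critical_area_cm2)

# Illustrative mechanisms; D0 and critical areas are made-up numbers.
mechanisms = {
    "metal1 shorts": (0.10, 0.25),   # (defects/cm^2, critical area in cm^2)
    "metal2 opens":  (0.05, 0.18),
    "via failures":  (0.08, 0.12),
}

total_yield = 1.0
for name, (d0, a_crit) in mechanisms.items():
    y = poisson_yield(d0, a_crit)
    total_yield *= y
    print(f"{name:15s} limited yield = {y:.3f}")

print(f"combined random-defect yield = {total_yield:.3f}")
```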

DRC+ Pattern-based Design Rule Checking Technology
Patterns that identify low-yield areas of an IC can be defined visually then run in a DRC tool like Calibre.

Litho Friendly Design (LFD)
Calibre LFD accurately models the impact of lithographic processes on “as-drawn” layout data to determine the actual “as-built” dimensions of fabricated gates and metal interconnects. There are new LFD design kits for the 28nm and 20nm nodes at GLOBALFOUNDRIES.

Calibre LFD uses process variation (PV) bands that predict failure in common configurations including pinching, bridging, area overlap and CD variability.
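As a rough illustration of the idea (not the actual Calibre LFD check): the PV bands bound the printed geometry across process corners, so pinching and bridging can be flagged by comparing the worst-case contour against width and space limits. Every name and number in this sketch is hypothetical.

```python
# Hypothetical PV-band summaries: worst-case printed width or space (nm) across corners.
pv_band_results = [
    {"name": "gate_poly_17",   "kind": "width", "worst_case_nm": 27.5},
    {"name": "metal1_gap_42",  "kind": "space", "worst_case_nm": 17.0},
    {"name": "metal2_line_08", "kind": "width", "worst_case_nm": 34.0},
]

MIN_PRINTED_WIDTH_NM = 30.0   # below this, risk of pinching (opens)
MIN_PRINTED_SPACE_NM = 20.0   # below this, risk of bridging (shorts)

for r in pv_band_results:
    if r["kind"] == "width" and r["worst_case_nm"] < MIN_PRINTED_WIDTH_NM:
        print(f'{r["name"]}: pinching hot-spot ({r["worst_case_nm"]} nm)')
    elif r["kind"] == "space" and r["worst_case_nm"] < MIN_PRINTED_SPACE_NM:
        print(f'{r["name"]}: bridging hot-spot ({r["worst_case_nm"]} nm)')
    else:
        print(f'{r["name"]}: clean across PV corners')
```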

In early 20nm process development the foundry uses Calibre LFD to predict hot-spots and then create the design rules.

Place and Route
Integration with the Olympus-SOC™ design tool enables feed-forward of Calibre LFD results to give designers guidance on recommended layout improvements, and to enable revalidation of correct timing after modifications.

Summary
Foundries, EDA vendors and IC design companies are collaborating very closely to ensure that IC designs will have both acceptable yield and predictable performance. GLOBALFOUNDRIES and Mentor Graphics continue to partner on their 3rd generation DFM flow to enable IC designs at 28nm and smaller nodes. AMD is a leading-edge IC company using the Calibre DFM tools on the GLOBALFOUNDRIES process.

To learn more about how Mentor and GLOBALFOUNDRIES are working together, you can visit the Global Technology Conference at the Santa Clara Convention Center on August 30, 2011.


Mentor catapults Calypto

Mentor catapults Calypto
by Paul McLellan on 08-26-2011 at 10:36 am

Mentor has transferred its Catapult (high level synthesis) product line, including the people, to Calypto. Terms were not disclosed but apparently it is a non-cash deal. Calypto gets the product line. Mentor gets a big chunk of ownership of Calypto. So maybe the right way to look at this is as a partial acquisition of Calypto.

It has to be the most unusual M&A transaction that we’ve seen in EDA since, maybe, the similar deal when Cadence transferred SPW to CoWare. There are some weird details too: for example, current Catapult customers will continue to be supported by Mentor.

Who are Calypto? It was formed years ago to tackle the hard problem of sequential formal verification (sequential logical equivalence checking or SLEC). The market for this was people using high-level synthesis (HLS) since they didn’t have any way to check that the tool wasn’t screwing up other than simulation. There were many HLS tools: Mentor’s Catapult, Synfora (now acquired by Synopsys), Forte (still independent). More would come along later: AutoESL (now acquired by Xilinx), Cadence’s C to Silicon. But there really weren’t enough people at that time using HLS seriously to create a big enough market for SLEC.

So Calypto built a second product line on the same foundation, to do power reduction by sequential optimization. This is looking for things like “if this register could be clock-gated under certain conditions, so it doesn’t change on this clock cycle, then the downstream register can be clock gated on the following clock cycle because it won’t change.” For certain types of designs this turns out to save a lot of power. And a lot more people were interested in saving power than doing SLEC (although everyone who saved power this way needed to use SLEC to make sure that the design was functionally the same afterwards).
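Here is a toy Python model of that observation, not Calypto's algorithm: a two-register pipeline where the downstream register is gated with a one-cycle-delayed copy of the upstream enable. The outputs are unchanged (which is exactly what SLEC would have to prove) while a large fraction of downstream clock pulses disappear. The register names, enable probability and data are made up.

```python
import random

def run(inputs, enables, gate_downstream):
    """Two-stage pipeline with edge-triggered semantics.
    r1 captures the input when enabled; r2 captures r1 every cycle,
    unless gate_downstream is set, in which case r2 is clock-gated
    whenever r1 was not enabled on the previous cycle."""
    r1 = r2 = 0
    en_delayed = True                 # one-cycle-delayed enable used to gate r2
    outputs, r2_clock_pulses = [], 0
    for din, en in zip(inputs, enables):
        next_r1 = din if en else r1
        if gate_downstream and not en_delayed:
            next_r2 = r2              # r2's clock is gated: r1 did not change anyway
        else:
            next_r2 = r1
            r2_clock_pulses += 1
        r1, r2, en_delayed = next_r1, next_r2, en
        outputs.append(r2)
    return outputs, r2_clock_pulses

random.seed(0)
data    = [random.randint(0, 255) for _ in range(10000)]
enables = [random.random() < 0.3 for _ in range(10000)]   # assume r1 is active 30% of cycles

ref_out, ref_pulses     = run(data, enables, gate_downstream=False)
gated_out, gated_pulses = run(data, enables, gate_downstream=True)
assert ref_out == gated_out        # sequentially equivalent behavior
print(f"r2 clock pulses: {ref_pulses} -> {gated_pulses}")
```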

So now Calypto also has HLS and thus a tidy little portfolio: HLS, SLEC for checking that HLS didn't screw up, and power reduction. Presumably in time some of the power reduction technology can be built into HLS so that you can synthesize for power or performance or area or whatever.

Calypto was rumored to be in acquisition talks with Cadence last year but obviously nothing happened (my guess: they wanted more than Cadence was prepared to pay). They were also rumored to be trying to raise money without success.

Mentor says they remain deeply committed to ESL and view this transaction as a way to speed adoption. I don’t see it. I’m not sure how this works financially. Catapult was always regarded as the market leader in HLS (by revenue) but Mentor also had a large team working on it. If the product is cash-flow positive then I can’t see why they would transfer it; if it is cash-flow negative I don’t see how Calypto can afford it unless there is a cash injection as part of the transaction.

So what has Mentor got left to speed ESL adoption? The other parts of Simon Bloch’s group (apparently he is out too) were FPGA synthesis (normal RTL level) and the virtual platform technology called Vista.

Maybe Mentor decided that HLS required the kind of focused sales force that only a start-up has. Mentor seems to suffer (paging Carl Icahn) from relatively high sales costs (although Wally says that is largely because they account slightly differently from, say, Synopsys). Their fragmented product line means that their sales costs are almost bound to be higher than those of the other big guys, who are largely selling a complete capability (give us all your money, or most of it, and we’ll give you all the tools you need, or most of them).

Or perhaps it is entirely financially driven. Mentor gets some expenses off their books, and reduces their sales costs a little. But without knowing the deal or knowing the low-level ins and outs of Mentor’s financials it’s not really possible to tell.

Full press release here.


20nm SoC Design

20nm SoC Design
by Paul McLellan on 08-25-2011 at 12:48 am

There are a large number of challenges at 20nm that didn’t exist at 45nm or even 32nm.

The biggest issues are in the lithography area. Until now it has been possible to make a reticle using advanced reticle enhancement technology (RET) decoration and have it print. Amazing when you think that at 45nm we are making 45nm features using 193nm light; a mask is a sort of specialized diffraction grating. But at 20nm we need to go to double patterning, whereby only half the polygons on a given layer can be on one reticle and a second reticle is needed to carry the others. Of course the rules for which polygons go on which reticle are not comprehensible to designers directly. It is also likely that we are moving towards restricted design rules, where instead of having minimum spacing rules we have rules that restrict spacing to a handful of values. We’ve pretty much been doing that for contact and via layers for years, but now it will affect everything. This explosion of design rules means that the rules themselves are pretty much opaque to the designers who have to follow them, so design rule checking, RET decoration, reticle assignment and so on must be tightly integrated into both automated tools (such as place and route) and more manual tools (such as layout editors).
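A hedged sketch of why reticle assignment is opaque to designers: double patterning is essentially a graph two-coloring problem. Any two polygons closer than the same-mask spacing limit must land on different reticles, and an odd cycle in that conflict graph means no legal split exists, so the layout itself has to change. The polygons and spacing value below are invented, and real decomposers work on full 2-D geometry.

```python
from itertools import combinations

# Hypothetical polygons as (name, x position in nm) on one track; real layouts are 2-D.
polygons = [("A", 0), ("B", 40), ("C", 80), ("D", 140)]
SAME_MASK_MIN_SPACING_NM = 60    # invented value

# Build the conflict graph: an edge means "too close to share a reticle".
conflicts = {name: set() for name, _ in polygons}
for (n1, x1), (n2, x2) in combinations(polygons, 2):
    if abs(x1 - x2) < SAME_MASK_MIN_SPACING_NM:
        conflicts[n1].add(n2)
        conflicts[n2].add(n1)

# Two-color the graph; a coloring conflict means an odd cycle, i.e. an undecomposable layout.
color = {}
for start, _ in polygons:
    if start in color:
        continue
    color[start] = 0
    stack = [start]
    while stack:
        node = stack.pop()
        for nbr in conflicts[node]:
            if nbr not in color:
                color[nbr] = 1 - color[node]
                stack.append(nbr)
            elif color[nbr] == color[node]:
                raise SystemExit(f"odd cycle at {node}-{nbr}: layout must be redesigned")

print({name: f"reticle {c + 1}" for name, c in color.items()})
```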

Another theme that goes through many blog entries here is that variation is becoming more and more extreme. The variance gets so large that it is not possible to simply guard band it or else you’ll find you’ve built a very expensive fab for minimal improvement in performance. Variation needs to be analyzed more systematically than that.

These two aspects are major challenges. Of course we have all the old challenges that designs get larger and larger moving from node to node requiring more and more productive tools and better databases. Not to mention timing closure, meeting power budgets, analysis of noise in the chip-package and board, maybe 3D TSV designs. Manufacturing test. The list goes on.

To get a clear vision of what success will require, view Magma’s webinar on 20nm SoC design here.



Magic Media Tablet: illusion about a niche market?

Magic Media Tablet: illusion about a niche market?
by Eric Esteve on 08-24-2011 at 9:31 am

According to ABI Research, worldwide annual media tablet shipments are expected to top 120 million units in 2015, which is more than decent for a niche market. But if you compare it with the smartphone market (you can find the smartphone shipment forecast here), media tablets will represent only 15% of smartphone unit shipments in 2015. Maybe the media tablet silicon content will generate more dollars per unit? Alas, the answer is no, at least if we look at the heart of the system, the application processor, as the ICs used in media tablets are exactly the same as those used in smartphones: OMAP4 from TI, Tegra 2 from NVIDIA, and even Apple using the A4 for the iPhone 4 and iPad (or the A5 for the iPhone 5 and iPad 2). So the semiconductor (SC) Total Addressable Market (TAM) is directly linked to media tablet unit shipments, at least if we evaluate the application processor TAM, and the comparison with smartphone unit shipments makes sense.

Is this TAM really addressable by Nvidia, TI, Qualcomm and others?
The same ABI research says “Android media tablets have collectively taken 20% market share away from the iPad in the last 12 months. However, no single vendor using Android (or any other OS) has been able to mount a significant challenge against it.” This means that during the last 12 months the “real” TAM for the SC pure players (excluding Apple) was 20% of the 40 million units shipped (IPnest evaluation), or 8 million application processors. At an Average Selling Price (ASP) of $20, that makes a “huge” market of $160M. Let’s compare it with the same figures for the smartphone market over the same period: about 400 million units shipped in total, 330 million of them non-Apple, creating a TAM of over $6B. The application processor TAM for media tablets is today less than 3% of the equivalent for smartphones!

It will certainly change in the future, as shipments of both products are expected to grow strongly, and we can expect Android-based media tablets to gain market share against the iPad, increasing both the market size and the addressable part of the application processor IC market. If we assume Apple’s share of the media tablet market falls from 80% to 50% between 2011 and 2015, that leaves about 60 million units to the A5’s competitors: Nvidia, Qualcomm, TI and others. As the application processor ASP is expected to decline to about $15 in 2015, the TAM will grow to about $1B, which is pretty good for a niche market! But this is still a mere 10% or so of the application processor market for smartphones.
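For the record, here is the arithmetic behind these TAM figures written out as a short Python calculation; every input is the article's (or IPnest's) estimate, not new data.

```python
# All inputs are the article's / IPnest's estimates, not independent data.

# Last 12 months: 40M tablets shipped, 20% non-iPad, ~$20 application processor ASP
tablet_tam_2011 = 40e6 * 0.20 * 20
print(f"2011 tablet AP TAM:       ${tablet_tam_2011/1e6:.0f}M")                   # ~$160M

smartphone_tam_2011 = 6e9   # article's figure for the non-Apple smartphone AP TAM
print(f"share of smartphone TAM:  {tablet_tam_2011/smartphone_tam_2011:.1%}")     # < 3%

# 2015 scenario: 120M tablets, Apple share down to 50%, AP ASP down to ~$15
tablet_tam_2015 = 120e6 * 0.50 * 15
print(f"2015 tablet AP TAM:       ${tablet_tam_2015/1e9:.1f}B")                   # ~$0.9B, "about $1B"
```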

You may want to question these forecasts for the media tablet, thinking the market growth is underestimated… Let’s have a look at the regional split from the first figure: the largest part is North America, then comes Western Europe and then Asia-Pacific; Eastern Europe, MEA and Latin America stay very low. The first conclusion is that it’s a market for “rich” people (in countries where personal income is high).

If you still have a doubt, just imagine you are 20, part of the middle class in one of these “rich” countries, and you have $500 to invest. What will you buy first?
· A laptop (even if you probably need to spend a little more)?
· A netbook (in this case some money will remain)?
· A smartphone (which you will probably not pay for up front, thanks to the subscription)?
· Or a media tablet?
I would bet that the media tablet will come last, or just before the netbook!

One last point: a couple of years ago some analysts predicted that the netbook segment would explode. As a matter of fact, netbook shipments saw very strong growth in 2009, which seemed to confirm these predictions, but this was in fact due to the recession: it was more economical to buy a netbook than a laptop. Netbook shipments in 2011 are expected to decline by 18%, partly because the economy is better than in 2009 and partly because they are being cannibalized by the media tablet. So the media tablet is not essential the way a PC or a smartphone can be, and you can guess that tablets would be hurt by any economic recession more severely than smartphones would. We think the media tablet is a nice niche product for rich people, but it will never generate the revenues for application processor IC manufacturers that smartphones will. But you may have a different opinion… feel free to post it!

Eric Esteve from IPnest


Itanium Neutron Bombs Hit HP Campuses, Oracle Looking for Survivors

Itanium Neutron Bombs Hit HP Campuses, Oracle Looking for Survivors
by Ed McKernan on 08-23-2011 at 11:37 pm


It was a series of Itanium Neutron Bombs detonating during the reign of 4 management teams (Platt, Fiorina, Hurd and Apotheker) that left HP campuses in Cupertino and Palo Alto in the custody of crickets. The devastation to employees and stockholders is absolutely immense and the current strategy calls for a further retreat into the enemy territory of IBM and Oracle. If you want to point to a date that will live in Infamy – it is July 6, 1994. The day when Intel and HP tied the knot with a pair of Itanium rings. The stock then sat at $9.25, a point it may soon revisit.

It was a marriage intended to solidify the positions of both spouses as monopolistas in their respective markets. Intel, the king of PC processors was looking for a way to expand its reach into the rich, RISC based server space that was then dominated by IBM, Sun and HP. HP wanted to leverage Intel’s processor design and Fab economics to propel them into the lead. If HP were to sprinkle a new Intel 64 bit processor across all its various lines of workstations and minicomputers it could outperform and outsell Sun’s SPARC. There were numerous issues with this deal from the start. Would Intel sell Itanium at a reasonable price? Would Intel hit a performance target that met the market needs? Would software vendors port over to the new architecture?

The answer to these questions was the same back then as it is today. No hindsight is required. In fact by early 1994, the industry had just witnessed the knockout blow Intel inflicted upon the RISC camp with its complex x86 architecture. Intel prevailed with volume, good enough performance, and a truckload of software apps that made it impossible for the industry to move to RISC (see Intel’s Back to the Future Buy of Micron).

But a new twist appeared that created confusion and uncertainty inside Intel and set in motion the Itanium train. HP engineers were playing around with a new architecture called VLIW (Very Long Instruction Word). They claimed it was the architecture of the future that would outperform any implementation of x86.

Factions inside Intel argued both sides of the x86 vs. VLIW debate. A Titanic Battle ensued where the only possible outcome was to do not one or the other but both. However, to win the approval of analysts, roadmaps appeared in the mid 1990s showing that Itanium processors would occupy not only high-end servers but also desktops and laptops. Getting there, though, would require porting Microsoft and the whole PC software industry. Intel listened to the sweet-talking server Harlot, girded its loins and poured billions into the Itanium hardware and software ecosystem. Seventeen years later the needle has barely budged.

All was quiet on the western front for Intel and HP as they continued to print money through the late 1990s. However, HP thought it wiser to assemble the complete set of Over-the-Hill Gang minicomputer architectures and convert them to Itanium as well. So added to its PA-RISC and Apollo computers were Convex Computer, acquired in 1995, and Tandem and DEC VAX, acquired with Compaq in 2001. Each entity, though, had the issue of porting applications, and for this HP created software translators that effectively ran at only 10-20% of native Itanium speed. A good example of Less is More.

No worries, with Compaq out of the way, HP could win the PC market through economies of scale on the purchasing side and through domination of the retail channel by securing most of the shelf space. Fiorina and Hurd kept cutting the operating expenses like it was kudzu in North Georgia. But no matter how much they cut, they couldn’t eliminate the next lower cost competitor coming out of Taiwan or China.

Craig Barrett, the CEO of Intel, went on his own buying spree in the late 1990s for networking silicon to fill his fabs, all the while neglecting the threat that AMD had planned with the launch of its 64-bit x86 server processor in September 2003. Intel finally threw in the towel on converting the server world to Itanium when it launched the 64-bit Xeon processor in July 2004, exactly 10 years after the HP-Intel handshake. Imagine where Intel would be today if it could rewind the clock and, instead of pouring billions into Itanium, had built a 64-bit x86 that hit the market in 2000. There likely would have been no resurrection of AMD coming out of the tech downturn.

The HP that began in 1939 in a garage in Palo Alto does live on in the 1999 spinout called Agilent and its further spinout called Avago. Agilent was the largest IPO in Silicon Valley at the time; it and Avago were considered too slow-growing to fit under the HP corporate umbrella. But both are highly profitable, as they were when they spun out. On a price-to-sales basis Agilent and Avago are 4 and 10 times, respectively, more valuable than HP.

As Mark Hurd and Larry Ellison huddle in Oracle’s headquarters, what if anything is left of HP that would be of value (sans printers)? Ah, you say – the property that the HP campuses occupy in Cupertino and Palo Alto! Let’s jump in the car and take a look!

The drive along tree-shaded Pruneridge Avenue in Cupertino is very calm. I know it well because my wife and I lived in an apartment on the corner of Pruneridge and Wolfe when we moved to California in 1998. My wife worked for HP for a short period of time in the old Tandem building. I would point out the campus to my boys as we drove past on our way to the pool just beyond the end of the campus. For many years the buildings were unoccupied, until a real estate agent put in a bid from a secret buyer – Larry’s friend Steve. In 2015 it will all come alive again when a Pentagon-sized spaceship campus opens up.



Formal Verification for Post-silicon Debug

Formal Verification for Post-silicon Debug
by Paul McLellan on 08-23-2011 at 5:52 pm

OK, let’s face it, when you think of post-silicon debug then formal verification is not the first thing that springs to mind. But once a design has been manufactured, debugging can be very expensive. As then-CEO of MIPS John Bourgoin said at DesignCon 2006, “Finding bugs in model testing is the least expensive and most desired approach, but the cost of a bug goes up 10× if it’s detected in component test, 10× more if it’s discovered in system test, and 10× more if it’s discovered in the field, leading to a failure, a recall, or damage to a customer’s reputation.”

But formal verification at the chip level is a major challenge. There are capacity issues, of course, since chips are much larger than individual blocks (yeah, who knew?). The properties to be validated are high-level and complex. So basic assertion-based verification (ABV) is inadequate and deep formal is required.

Traditionally when a bug is found in silicon, additional simulation is done to try and identify the source of the undesired behavior. But this approach can be very time-consuming and is inadequate for finding those bugs that only occur in actual silicon after a long time (and so only appear in simulation after a geological era has gone by). Of course the original testbench doesn’t exhibit the behavior (or the chip wouldn’t have been released to manufacturing) meaning that more vectors need to be created without necessarily having much idea about what vectors would be good.

Formal verification avoids this problem since it ignores the testbench completely and uses its own input stimuli. As a result, formal analysis is capable of generating conclusive answers to verification problems.

Formal verification has the capability to cut through this. Declare a property that the bad behavior does not exist. But since it is known to exist, formal verification should fail this property and, at the same time, generate an exact sequence of waveforms to reproduce the problem.
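As a toy illustration of "declare the property, get a trace back", here is a breadth-first reachability search over a small invented state machine with a deliberately buggy transition. It returns the shortest input sequence that violates the property, which is the trace you would replay in simulation. Real formal engines work on the RTL with SAT/BDD techniques, so this is only a sketch of the idea.

```python
from collections import deque

# Invented example: a 2-bit counter whose wrap-around is buggy. From state 3 an
# increment jumps to 2 instead of 0. Property: "state 2 is never entered
# directly from state 3."
def next_state(state, inp):
    if state == 3 and inp == 1:
        return 2                      # the bug
    return (state + inp) % 4

def violates(prev, cur):
    return prev == 3 and cur == 2     # the negated property

def find_counterexample(initial=0, inputs=(0, 1)):
    """Breadth-first reachability: returns the shortest input sequence that
    drives the machine into the property violation, or None if none exists."""
    seen = {initial}
    queue = deque([(initial, [])])
    while queue:
        state, trace = queue.popleft()
        for inp in inputs:
            nxt = next_state(state, inp)
            if violates(state, nxt):
                return trace + [inp]
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, trace + [inp]))
    return None                       # property holds on all reachable states

print("counterexample input sequence:", find_counterexample())   # [1, 1, 1, 1]
```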

Only formal tools with a focus on full proofs of complex design properties will have the capacity to converge on the problem within an acceptable time. Additionally, once the problem has been identified and a fix proposed, formal verification can be used to prove that the problem really is fixed.

With formal’s unique ability to remove verification ambiguity, it becomes an invaluable tool in reducing the time and effort spent on post-silicon debug. Formal’s ability to quickly identify bugs, assure the cleanliness of sub-blocks, and verify the completeness of design fixes makes it the highest-value post-silicon debug tool in the team’s arsenal.

For the Jasper white paper on the subject, go here.


Silicon One

Silicon One
by Paul McLellan on 08-23-2011 at 5:23 pm

I have talked quite a bit over the last few years about the trend towards small consumer devices with very fast ramp times. For example, pretty much any time Apple introduces a new product line (iPod, iPhone, iPad…) it becomes the fastest-growing market in history. This has major implications for semiconductor design, since the heart of these devices is semiconductors and the software that runs inside them. The pace only keeps accelerating.

So what are the characteristics of this environment?

  • semiconductor manufacturers have been very successful at delivering more technology, more performance, faster time to market, resulting in products that do more and in demand for…well, more
  • lifestyles drive technology as customers want more connectivity, more performance, more integration
  • the future is a convergence of digital, analog, memory, 3D packaging and software. Pure digital SoCs are less and less common since the real world is analog, and 3D packaging (package-in-package, TSV etc.) is becoming important
  • competitive markets with shorter and shorter product lifecycles
  • design tools get more powerful, more integrated, take account of higher-level aspects of designs to drive productivity, and lower level aspects of design to guarantee functionality and yield.

Magma’s Silicon One initiative is focused on making silicon profitable for their customers. It acknowledges that one EDA vendor cannot supply every tool required. Magma has five key technologies to address the main business challenges of time to market, product differentiation, cost, power and performance: Talus, Tekton, Titan, FineSim and Excalibur. These all work off a unified database for designing complex chips that combine analog, digital and memory in a single chip.

Like the parable of the blind men and the elephant, each feeling a different part, people view Magma differently depending on their experience. Some view Magma as an industry leader in the classical RTL-to-GDSII flow. Others view Magma as having one of the most sophisticated analog/mixed-signal (AMS) solutions.

Both these groups may be surprised that Magma holds a strong position in the design of memory devices such as Flash, SRAM, DRAM and image sensors (which aren’t technically memory but are treated as such due to their attributes). Magma tools are used by the top 5 memory companies.

Perhaps more surprising is that Magma is a major player in yield management, with a large installed base in almost every fab on the planet, but this is known mainly to the customers who use this solution.

The Magma white paper on Silicon One is here.



Cadence Verification IP Technical Seminar!

Cadence Verification IP Technical Seminar!
by Daniel Nenni on 08-22-2011 at 11:43 am

According to trusted sources it costs upwards of $50M to design a 40nm SoC down to GDS. Semiconductor IP is a fast-growing part of that equation, and functional verification of that IP is critical. Hardware complexity growth continues to follow Moore’s Law, but verification complexity is even more challenging. In fact, IP verification is widely acknowledged as the major bottleneck in SoC design. Up to 70 percent of design development time and resources are spent on functional verification. Even with such a significant amount of effort and resources being applied to verification, functional bugs are still the number one cause of multi-million-dollar silicon re-spins.

Cadence Verification IP Catalog Technical Seminar
Join us for an in-depth look at Cadence Verification IP Catalog. In this seminar we will hear case studies from experts in the field addressing your most challenging issues when it comes to verifying today’s most important interfaces such as AMBA4 ACE, PCIe Gen 3, USB 3.0, DDR4 and more.

Case studies will focus on real world scenarios and be interactive in nature. In this seminar you will also hear how Cadence with Denali offers the most comprehensive, flexible and open solution on the market for verifying and integrating IP.

August 25, 2011 – Cadence Design Systems (San Jose – Bldg 10 Auditorium), San Jose, CA
1:00 PM – 4:15 PM Pacific

Agenda

1:00pm – 1:20pm  Overview – Top 10 Essential SoC interfaces
1:20pm – 1:40pm  Cadence VIP Catalog
1:40pm – 2:10pm  Case Study #1 – AMBA4 ACE
2:10pm – 2:30pm  Break
2:30pm – 3:00pm  Case Study #2 – PCI Express Gen 3
3:00pm – 3:30pm  Case Study #3 – USB 3.0
3:30pm – 4:00pm  Case Study #4 – DDR4
4:00pm – 4:15pm  Close

Register for this event HERE!


WikiLeaks: Methodics vs IC Manage

WikiLeaks: Methodics vs IC Manage
by Daniel Nenni on 08-21-2011 at 4:00 pm

Human nature never ceases to amaze me. I understand the recent economic turmoil and looming national debt have thrown us for a loop but please, let us all get some perspective here and, in the words of Rodney King, “Can we all get along?”

A clever little scumbag recently registered the domain danielnenni.com and is now hawking event tickets in my name. I let the domain expire after moving my blogging to SemiWiki. Shame on me for being too cheap to protect my legal name. Daniel Nenni is not a trademark so this is a case of identity theft. Life is short so I will probably just let it go but still, not a good sign of the human condition.

Even worse is the dispute between Methodics and IC Manage. A discussion on SemiWiki started on June 30th with the screen shot above. You can visit the thread HERE, but let me summarize. Apparently someone, who chose to hide their identity, registered the domain www.methodics.com and put up a message saying the web page is no longer active and the company is no longer in business, and listed IC Manage as an alternative source. IC Manage and Methodics DA compete in the design data management business. The official Methodics response is HERE.

As a grand finale, last week I got this email from Mike Sottak. Mike is a long time EDA PR guy who I have worked with in the past. Mike has always proved to be a solid guy so I have no problem posting his email. I must also mention that SemiWiki works with another competitor to IC Manage and Methodics which is ClioSoft.

You may be aware of the recent shenanigans perpetrated against the design data management company Methodics. It seems the domain name www.methodics.com (which Methodics does not currently own) was set up to point visitors to the web site of their competitor IC Manage. The blatant attempt to confuse customers went so far at one point as to suggest Methodics was no longer in business. While there was little doubt that IC Manage was behind this, it wasn’t until Methodics initiated something called a Uniform Domain Name Dispute Resolution complaint with the World Intellectual Property Organization that the smoking gun was revealed. WIPO confirmed the previously-hidden registrant/owner of the domain is indeed the Vice President of IC Manage.

Full disclosure: Methodics is a client of mine. But this incident strikes a personal chord with me as someone who has worked in the EDA industry for more than 20 years. They did not ask that I write to you on their behalf. I do so because I have not seen a more offensive act of malicious and unethical tactics in my career (and many of you will recall that I was at Cadence during the infamous Avanti trade secret theft case). We as an industry often get criticized for being immature and self-defeating. It’s not hard to understand why customers and other people on the periphery of EDA would think that given this type of behavior. The fact that Shiv Sikand has the arrogance to register the domain name that rightly belongs to his main competitor IN HIS OWN NAME sets a new standard for disregard of business ethics.

So why should you care? I believe that part of the role of any press or media outlet is to help police the areas they cover by holding people, companies and other institutions accountable for their actions. I therefore urge you as a key influencer of our industry to call IC Manage to the carpet on this one. I am certain that if a similar situation occurred between Synopsys, Mentor or Cadence, you would do the appropriate reporting and analysis.

I can assure you that Methodics is not interested in this purely as a PR stunt, and in fact would prefer that the whole issue go away and the domain name be rightfully transferred to them. But they have been violated in no uncertain terms and have been unfairly taken advantage of by an unscrupulous competitor. If you are in any way involved with EDA, I believe this type of behavior must be exposed for what it is. Or do we continue to look the other way and condone it through our silence, perpetuating the image of an industry that is immature and risky with which to do business?

Kind regards

Mike Sottak
Wired Island PR



Below is the report from WIPO:

<methodics.com>
Notice of Change in Registrant Information
Dear Complainant,

Further to our Acknowledgment of Receipt of Complaint, please be advised of
the following:

The registrant of the disputed domain name in the above referenced
proceeding has been identified by the concerned Registrar, Blue Razor
Domains, as being different to the entity named in the Complaint as
Respondent. The registrant information we have received from the Registrar
is as follows:

Registrant:
Shiv Sikand
15729 Los Gatos Blvd
Suite 100
Los Gatos, CA 95032
United States

Administrative Contact:
Sikand, Shiv shiv@icmanage.com
15729 Los Gatos Blvd
Suite 100
Los Gatos, CA 95032
United States
+1.4083588191

Technical Contact:
Sikand, Shiv shiv@icmanage.com
15729 Los Gatos Blvd
Suite 100
Los Gatos, CA 95032
United States
+1.4083588191

Setting the legalities of this situation aside, if this story is true, morally and ethically this is just WRONG! I have seen a lot of dirty deeds in my 25+ years in EDA but this absolutely takes the prize! Please voice your opinion in the comment section and I will make sure it gets to IC Manage. If you have disputing data, please send it to me and I will include it in this post.

Again, I’m biased. I had coffee with Shiv Sikand to try and smooth things over after he attacked SemiWiki on a LinkedIn group when we started working with his competitor ClioSoft. We ended up yelling at each other in a Peet’s Coffee. Shiv and his company IC Manage are still banned from SemiWiki.

*** Shiv had posted a response HERE but it has since been taken down. It was probably one of the worst apologies I have seen.


Apple Will Nudge Prices Down in 2012: PC Market Will Collapse

Apple Will Nudge Prices Down in 2012: PC Market Will Collapse
by Ed McKernan on 08-21-2011 at 7:10 am


Jack Welch, the former CEO of GE, had an edict that each business unit needed to be #1 or #2 in its market or else he sold it off. HP is #1 in PC market share, but it is exiting a business that it can no longer control and that soon will bleed a lot of cash. HP’s operating margin is under 6% and falling while Apple’s is at 40% and growing. So the question I have been thinking about is: if Apple were to cut prices on iPADs by $50 and MAC Airs by $100, would the PC industry collapse?

The question goes to the heart of demand elasticity and consumer preference. We know that a large portion of the PC market has been required to remain on Windows due to legacy applications – just as the IBM mainframe continues to satisfy a portion of Fortune 500 customer needs. But Apple, at less than 6% of the worldwide PC market, is in a position to cause it to flip in 2012.

The firm NPD, which tracks US retail PC sales, stated as of May 2011 that the iPAD is not cannibalizing the PC. But this may be because the iPAD had not ramped sufficiently. During the earnings call, HP’s CEO stated that the iPAD was having an effect on its consumer business. Perhaps there’s more.

The average notebook PC price in retail as of December 2010 was $453. NPD at the time remarked that the under $500 PC was growing while the $500 to $1000 PC was dropping. The over $1000 PC in retail was dominated by Apple with a 91% share.

So if Dell is at a 2.5% operating margin in its consumer business, then it is making around $12 per notebook unit sold. Now if Apple lowers its iPAD and MAC Air prices to $449 and $899 respectively, will it cause Dell and HP to drop prices by $12 or more, effectively ending their consumer businesses? All appearances are that the answer is yes. Another way to think about it is that since Apple’s move into retail with the iPAD and MAC Air, the PC market has lost elasticity.

But the problem must go beyond the consumer and into the corporate world for HP to exit the business. Dell reported that it has a 10-11% operating margin in the corporate world, which is still frightening. I presume HP’s is the same. So I think what the CEO of HP was communicating is that the corporate world is asking Dell and HP to sharpen their pencils for 2012, which will result in price cuts that put their whole PC business at risk of going into the red. Apple has entered the mix.

It is clear that Apple’s strategy has been to take over the consumer retail channel first, and we have seen that the growth rates in this segment alone have taxed their ability to meet high demand. However, once this demand is satisfied and the brand is shown to be bulletproof, Apple can turn its attention to the corporate world, which is several times larger than retail.

For Apple to win corporate, I think it will require volume deals that are within $50 of the current iPAD price and within $100 of their current notebook price. The $100 discount will effectively push Dell to a 0% operating margin if you assume a $1,000 corporate notebook price. And Apple has a way of giving corporations a $100 price break without cutting margins: turn around to Intel and ask for a slightly slower processor at a $50 discount (call it a 2.2GHz mobile i5 in place of the current 2.3GHz). A $50 CPU discount equals a $100 system price discount when the system sells at a 50% margin.
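Here are the back-of-the-envelope numbers that argument rests on, written out; every input is the article's estimate, not a reported company figure.

```python
# All inputs are the article's estimates, not reported company figures.

# Consumer: ~$453 average notebook price at a ~2.5% operating margin
print(f"Dell profit per consumer notebook: ${453 * 0.025:.0f}")             # roughly $11-12

# Corporate: ~$1,000 notebook at a ~10% operating margin
corporate_profit = 1000 * 0.10
print(f"corporate profit per notebook:     ${corporate_profit:.0f}")
print(f"after matching a $100 Apple cut:   ${corporate_profit - 100:.0f}")  # ~0% margin

# Apple's lever: a $50 cheaper CPU funds a $100 system price cut at a 50% gross margin
cpu_discount = 50
print(f"system cut funded by CPU discount: ${cpu_discount / (1 - 0.50):.0f}")
```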

Intel faces a dilemma, because corporate is where they rule, and giving in to Apple will mean the loss of the control they have over HP and Dell and their resulting pricing model. Long term, though, they have to play the game, because Apple is driving the bus and not acquiescing to Apple is to give AMD an opportunity to respond. Also, on the upside, one of the benefits of Apple taking corporate is the elimination of the Microsoft Windows corporate O/S tax of over $100 per unit. Microsoft will be forced to respond to corporations’ requests for price cuts in a significant way. So Apple’s corporate plan can have a significant, immediate impact on Microsoft.

The PC game, which used to be so staid, is getting very complex, and master chess players are required to win this battle. Apple has the upper hand. Intel has to figure out how to maintain revenue growth as processor ASPs drop in the mobile space while rising in the x86 servers building out the Cloud. Part of their plan is to feast off the decline of AMD and nVidia – unless AMD and nVidia can merge (see Captain Ahab Calls Out for the Merger of nVidia and AMD).

The PC market, as we know it, is collapsing, and it is in the hands of a company that until this year was not even #1 or #2. Sounds like it’s time for Jack Welch to write a new business book.
