
Intel Sandy Bridge Fiasco and EDA
by Daniel Nenni on 02-27-2011 at 6:49 am

I purchased two Toyotas last year and both have since been recalled. Why has Toyota spent $1B+ on recalls in recent years? For the same reason it will cost Intel $700M (not counting reputation damage) to recall Sandy Bridge chipsets: because someone did not do their job! The WHAT has been discussed; let's talk about HOW it happened.



Intel Identifies Chipset Design Error, Implementing Solution

SANTA CLARA, Calif., Jan. 31, 2011 – As part of ongoing quality assurance, Intel Corporation has discovered a design issue in a recently released support chip, the Intel® 6 Series, code-named Cougar Point, and has implemented a silicon fix. In some cases, the Serial-ATA (SATA) ports within the chipsets may degrade over time, potentially impacting the performance or functionality of SATA-linked devices such as hard disk drives and DVD drives. The chipset is utilized in PCs with Intel’s latest Second Generation Intel Core processors, code-named Sandy Bridge. Intel has stopped shipment of the affected support chip from its factories. Intel has corrected the design issue, and has begun manufacturing a new version of the support chip which will resolve the issue. The Sandy Bridge microprocessor is unaffected and no other products are affected by this issue.

Coincidentally, Mentor Graphics recently published an article:

New ERC Tools Catch Design Errors that Lead to Circuit Degradation Failures
Published on 02-11-2011 12:18 PM: A growing number of reports highlight a class of design errors that is difficult to check using more traditional methods, and can potentially affect a wide range of IC designs, especially where high reliability is a must. Today’s IC designs are complex. They contain vast arrays of features and functionality in addition to multiple power domains required to reduce power consumption and improve design efficiency. With so much going on, design verification plays an important role in assuring that your design does what you intended. Often, verification will include simulations (for functional compliance), and extensive physical verification (PV) checks to ensure that the IC has been implemented correctly, including DRC, LVS, DFM and others.

Note the comment by SemiWiki Blogger Daniel Payne, who used to work at both Intel and Mentor Graphics:


A transistor-level checking tool like PERC can certainly catch reliability failure issues like a PMOS transistor with bulk node tied to the wrong supply (let’s say 2.5V instead of 1.1V).

Since Intel announced that a single transistor reliability issue is to blame for their recent re-spin, we can only guess at the actual reliability issue that they found and fixed with a single metal layer.

I’m sure that Intel will now run these reliability checks on all new chips before tapeout instead of during fabrication.

Back in the 1980s Intel did have an internal tool called CLCD (Coarse Level Circuit Debugger) that could crawl a transistor netlist and look for any configuration. You would write rules for each reliability or circuit issue that you knew of, then run your netlist through CLCD to detect them.
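As a rough illustration of what such a rule-based netlist check does, here is a minimal sketch in the spirit of the PERC/CLCD example above. It assumes a simplified SPICE-style netlist where MOSFET cards follow the "Mname drain gate source bulk model ..." ordering; the net names, the PMOS model-name convention, and the single approved bulk supply are all hypothetical placeholders, and real tools apply far richer topology rules.

```python
# Toy reliability rule: flag PMOS devices whose bulk terminal is not tied
# to the approved supply net (e.g. bulk on a 2.5V net in a 1.1V domain).
APPROVED_BULK_NETS = {"VDD1V1"}   # hypothetical: only the 1.1V supply is allowed

def check_pmos_bulk(netlist_lines):
    violations = []
    for line in netlist_lines:
        tok = line.split()
        # SPICE MOSFET card: Mname drain gate source bulk model <params...>
        if len(tok) >= 6 and tok[0].upper().startswith("M") and tok[5].lower().startswith("p"):
            name, bulk = tok[0], tok[4]
            if bulk.upper() not in APPROVED_BULK_NETS:
                violations.append((name, bulk))
    return violations

netlist = [
    "M1 out in VDD1V1 VDD1V1 pch W=1u L=0.1u",
    "M2 out in VDD1V1 VDD2V5 pch W=1u L=0.1u",   # bulk tied to the 2.5V net
]
print(check_pmos_bulk(netlist))   # -> [('M2', 'VDD2V5')]
```

The second device is flagged because its bulk is tied to the higher supply, which is exactly the class of degradation-prone hookup Daniel Payne describes.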

Now for the HOW: Based on my personal experience, Intel has a classic case of NIH (not invented here) syndrome when it comes to EDA tools and methodologies. Even when Intel purchases commercial EDA tools, they are often not used with the prescribed methodology. Intel also does not collaborate well with vendors and is very secretive about how purchased tools are used. Bottom line: someone at Intel did not do their job. Just my opinion of course, but this $700M fiasco could have and should have been avoided.


ISSCC Semiconductors for Healthy Living
by Daniel Nenni on 02-26-2011 at 4:19 pm

Not only do I enjoy San Francisco, I really enjoy the International Solid-State Circuits Conference (ISSCC) that was held in San Francisco again last week. This was ISSCC #57 I believe. ISSCC attracts a different crowd than other semiconductor conferences, probably because there are no exhibits and no sales and marketing nonsense, just serious semiconductor people. Here are some interesting ISSCC stats:

  • 3k people attended
  • 30 countries represented
  • 669 submissions
  • 211 selected
  • 32% acceptance rate
  • 50/50 industry versus academia

The conference theme for 2011 was Electronics for Healthy Living. Electronics play a significant role in enabling a healthier lifestyle. Technology in the hospital enables doctors to diagnose and treat illnesses that might have gone undetected just a few years ago. External monitors provide us with a good assessment of our health risk and vital-sign status. Those with chronic diseases can live a more normal life with implanted devices that sense, process, actuate and communicate. Body Area Networks can be connected to a monitoring program running on our mobile phone. Those with disabilities also benefit from electronics that improve their lifestyle.

Wireless communications for healthy living has arrived. My running shoes talk to my heart monitor, my bathroom scale talks to my personal fitness program, my refrigerator talks but it lies so it is password protected. Your smartphone is the gateway and will process this “healthy” data affording us all a more “comfortable” lifestyle.

Of course it goes beyond that. The original heart pacemaker was both a medical and technological breakthrough that has saved millions of lives. Now we have deep brain stimulators, active embedded diagnostics and wearable sensors to prevent emergency health care situations. It’s a government conspiracy really, in an effort to not only reduce medicare costs but to make us healthier and more productive so we can pay off the U.S. National Debt!

Significantly increasing our lifespan and raising the retirement age to 80 is the only hope for the social security system! But I digress…..

At the semiconductor level the technological challenges are daunting: power efficiency, energy harvesting, 3D IC, analog / digital integration, RF, signal integrity, die size, and memory. I attended a few of the sessions and will post trip reports, hopefully other ISSCC attendees will post as well. I also spoke with Dr Jack Sun, TSMC Vice President of R&D and Chief Technology Officer, and Philippe Magarshack, TR&D Group Vice-President and Central CAD & Design Automation GM at STMicroelectronics, and I will write more about that in the trip reports.


Semiconductor Ecosystem Collaboration
by Daniel Nenni on 02-24-2011 at 8:30 am

After 27 years in semiconductor design and manufacturing I actually had to look up the word collaboration. Seriously, I did not know the meaning of the word.

Collaboration: a recursive process where two or more people or organizations work together to realize shared goals (this is more than the intersection of common goals seen in co-operative ventures; it is a deep, collective determination to reach an identical objective).

The type of collaboration I would like to see is between the PANEL and the AUDIENCE. There will be no PowerPoint slides or lengthy introductions. We will go right to the audience to better define the type of collaboration that will be required for semiconductor design and manufacturing to be successful in the coming nodes. Bring your toughest questions and they will get answered, believe it. Registration for EDA Tech Forum is FREE. This event is filling up fast so register today!

PANEL: Enabling True Collaboration Across the Ecosystem to Deliver Maximum Innovation

At advanced nodes manufacturing success is highly sensitive to specific physical design features, requiring more complex design rules and more attention to manufacturability on the part of designers. Experts will discuss how collaboration among EDA vendors, IP suppliers, foundries and design firms is the key to enabling efficient design without over-constraining and limiting designers’ creativity. The discussion will touch on what has been accomplished, including industry initiatives under way, and where we need to go in the future.

Moderator:
Daniel Nenni, SemiWiki.com

Panelists:
Walter Ng, Vice President, IP Ecosystem, GLOBALFOUNDRIES
Rich Goldman, Vice President Corporate Marketing & Strategic Alliances, Synopsys
John Murphy, Director of Strategic Alliances Marketing, Cadence
Prasad Subramaniam, VP of R&D and Design Technology, eSilicon
Michael Buehler-Garcia, Director of Marketing, Mentor Graphics

Walter’s position on collaboration:
– “Collaboration” has now become a buzz word
– Chartered/GF pioneered definition of “Collaboration” in semiconductor manufacturing space with origination of the JDA model for process development
o Benefits in bringing the best expertise worldwide from multiple companies to innovate at the leading edge of process development
o Best economic model for access to leading edge technology in the industry which is also scalable
– Efficiency in the value chain for design enablement support of a common process technology
– “Collaboration” is now a requirement in almost everything we do
– “Concurrent Process development and design enablement development” now the focus area of Leading Edge
– “Unencumbered Open, collaborative” model with Eco-system is a requirement; need to address concerns around leakage and differentiation

Rich’s position on collaboration:
Collaboration has been key in Semiconductors since the advent of ASIC and EDA in the early 80s. Our industry depends on it, and you can’t get anywhere without it. It has just been getting more critical every year, and every new node. Without it, forget about Moore’s Law, and forget about your own company. Doing it right IS hard, especially between competitors. It is an essential skill, and will just continue to get more critical. Get used to it. Love it. Embrace it.

John’s position on collaboration:
Enabling fast ramp by optimizing for rules, tools, and IP together. The success of a semiconductor manufacturer depends upon ramping process nodes to volume production as fast as possible to achieve attractive ROI for the staggering capital investment required to build fabrication facilities. SOC designers on the bleeding edge of advanced nodes need early access to process information, design and sign-off tools optimized for the node, and IP they trust will work in real silicon, to ramp their products to volume quickly to access ever-shrinking market windows. The age of enablement which focused primarily on process rules has ended; a new age has emerged that requires industry-level collaboration that optimizes rules, tools and IP together. This requires industry-level collaboration on a scale and depth never seen nor achieved before.

Prasad’s position on collaboration:
With the disaggregation of the semiconductor industry, it is no longer practical for a single company to have all the expertise required to design and manufacture integrated circuits. As a result, collaboration is a basic necessity for successful chip design. It is an ongoing process where every piece of the supply chain needs to collaborate with the others to ensure that the needs of the industry are met.

Michael’s position on Collaboration:

Market Scenario:
o Each IC node advance introduces more interactions between design and manufacturing processes, and consequently more complexity in the design flow and design verification process.
o These interactions are occurring at the same time as the business model has moved almost exclusively to fabless/COT. This means more companies doing their own design flow integration vs. just running a completely scripted flow (aka ASIC) provided by the silicon supplier.
o Each major EDA vendor has an interface program specifically designed to help facilitate integration between their products and other EDA tools
Discussion Topics:
o How many designers know about these programs and are taking advantage of them?
o As the complexity of the design and process grows with each node, do we need more and tighter interactions?
o What roles should the foundry play, without becoming an ASIC supplier and controlling too much of the ecosystem?
o What should be the role and responsibility of the fabless company?


Wally Rhines DVCon 2011 Keynote: From Volume to Velocity
by Daniel Nenni on 02-23-2011 at 1:49 pm

Abstract:
There has been a remarkable acceleration in the adoption of advanced verification methodologies, languages and new standards. This is true across all types of IC design and geographic regions. Designers and verification engineers are surprisingly open to new approaches to keep pace with the relentless rise in design density. They are looking beyond simply increasing the volume of verification to instead using advanced techniques to improve the velocity of verification.

The result is that design teams have not lost ground on meeting schedule goals or first-pass silicon success even as design complexity has grown. Now the focus is shifting to appreciably improving those metrics while shrinking verification costs.

Wally Rhines, Chairman and CEO of Mentor Graphics, will discuss the state of verification past, present and future. After examining the results from a leading verification survey, he’ll look at how advanced techniques are taking hold in mainstream design. Rhines will then offer his insights on where verification needs to evolve to ensure continued improvement.

Wally Rhines is not only my favorite EDA CEO, he is also one of the most inspirational speakers in EDA, so you don’t want to miss this one.

D.A.N.


Evolution of process models, part I
by Beth Martin on 02-23-2011 at 1:15 pm

[Figure: Dill model parameters]

Thirty-five years ago, in 1976, the Concorde cut transatlantic flying time to 3.5 hrs, Apple was formed, NASA unveiled the first space shuttle, the VHS vs Betamax wars started, and Barry Manilow’s I Write the Songs saturated the airwaves. Each of those advances, except perhaps Barry Manilow, was enabled by the first modern-era, high-production ICs.

During those years, researchers were anticipating the challenges of fabricating ICs that, according to Moore’s Law, would double in transistor count about every two years. Today, the solution to making features that are much smaller than the 193nm light used in photolithography is collectively referred to as computational lithography (CL). Moving forward into double patterning and even EUV Lithography, CL will continue to be a critical ingredient in the design to manufacturing flow. Before we get distracted by today’s lithography woes, let’s look back at the extraordinary path that brought us here.

ICs were fabricated by projection printing in the early ‘70s by imaging the full wafer at 1:1 magnification. But linewidths were getting smaller, wafers were getting larger, and something had to change. As chip makers dove into the 3um process node, they got some relief by using the newly introduced steppers, or step-and-repeat printing. Still, the threat of printing defects with optical lithography spurred research into modeling the interactions of light and photoresist.

In 1975, Rick Dill and his team at IBM introduced the first mathematical framework for describing the exposure and development of conventional positive tone photoresists: the first account of lithography simulation. What is now known as the Dill Model describes the absorption of light in the resist and relates it to the development rate. His work turned lithography from an art into a science, and laid the foundation for the evolution of lithography simulation software that is indispensable to semiconductor research and development today.
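To give a feel for what the Dill model actually computes, here is a deliberately small 1-D exposure sketch. The A, B, C values, dose, and resist thickness are made-up, order-of-magnitude placeholders rather than parameters of any real resist; the point is only the coupling between bleachable absorption, dI/dz = -(A*M + B)*I, and exposure kinetics, dM/dt = -C*I*M.

```python
import numpy as np

# Minimal 1-D Dill ABC exposure sketch (illustrative parameters only).
# M(z,t): normalized photoactive compound; I(z,t): intensity inside the resist.
A, B, C = 0.55, 0.05, 0.015    # 1/um, 1/um, cm^2/mJ (rough order of magnitude)
thickness = 1.0                # resist thickness in um
nz = 200                       # depth samples
dz = thickness / nz
dt, steps = 1.0, 150           # 1 s steps, 150 s exposure
I0 = 1.0                       # incident intensity in mW/cm^2

M = np.ones(nz)                # unexposed resist: M = 1 everywhere
for _ in range(steps):
    # propagate intensity down through the resist with the current absorption
    I = np.empty(nz)
    I[0] = I0
    for k in range(1, nz):
        I[k] = I[k - 1] * np.exp(-(A * M[k - 1] + B) * dz)
    # bleach the photoactive compound where light reaches it
    M *= np.exp(-C * I * dt)

print("remaining PAC: surface %.2f, bottom %.2f" % (M[0], M[-1]))
```

Feeding the final M(z) profile into a development-rate relation is what closes the loop between exposure and the developed resist profile.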

The University of California at Berkeley developed SAMPLE as the first lithography simulation package, and enhanced it throughout the ‘80s. Several commercial technology computer aided design (TCAD) software packages were introduced to the industry through the ‘90s, all enabling increasingly accurate simulation of relatively small layout windows. As industry pioneer, gentleman-scientist, and PROLITH founder Chris Mack has described, lithography simulation allows engineers to perform virtual experiments not easily realizable in the fab, enables cost reduction through narrowing of process options, and helps to troubleshoot problems encountered in manufacturing. A less tangible but nonetheless invaluable benefit has been the development of “lithographic intuition” and fundamental system understanding for photoresist chemists, lithography researchers, and process development engineers. Thus patterning simulation is used for everything from design rule determination, to antireflection layer optimization, to mask defect printing assessment.

At the heart of these successful uses of simulation were the models that mathematically represent the distinct steps in the patterning sequence: aerial image formation, exposure of photoresist, post-exposure bake (PEB), develop, and pattern transfer. Increasingly sophisticated “first principles” models have been developed to describe the physics and chemistry of these processes, with the result that critical dimensions and 3D profiles can be accurately predicted for a variety of processes through a broad range of process variations. The quasi-rigorous mechanistic nature of TCAD models, applied in three dimensions, implies an extremely high compute load. This is especially true for chemically amplified systems that involve complex coupled reaction–diffusion equations. Despite the steady improvements in computing power, this complexity has relegated these models to be used for small area simulations, on the order of tens of square microns or less of design layout area.
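To give a sense of why these mechanistic models are so compute-hungry, here is a deliberately tiny 1-D sketch of a post-exposure bake for a chemically amplified resist, with acid diffusion coupled to acid-catalyzed deprotection. Every number below (diffusivity, rate constant, bake time, grid) is an illustrative placeholder; real PEB models are 3-D, include quencher and acid-loss terms, and must resolve far larger volumes.

```python
import numpy as np

# Toy 1-D post-exposure bake: acid A diffuses while deprotecting polymer P.
#   dA/dt = D * d2A/dx2        (acid diffusion)
#   dP/dt = -k * A * P         (acid-catalyzed deprotection)
nx, dx = 200, 2.0              # grid points, 2 nm spacing (a 400 nm strip)
D, k = 25.0, 0.5               # nm^2/s and 1/s, illustrative values
dt, steps = 0.05, 1200         # explicit step (D*dt/dx^2 < 0.5), 60 s bake

x = np.arange(nx) * dx
A = np.where((x > 150) & (x < 250), 1.0, 0.0)   # acid generated in the exposed stripe
P = np.ones(nx)                                 # polymer fully protected initially

for _ in range(steps):
    lap = np.roll(A, 1) - 2 * A + np.roll(A, -1)   # periodic boundaries, for simplicity
    A = A + D * dt / dx**2 * lap                   # diffusion update
    P = P * np.exp(-k * A * dt)                    # deprotection update

print("deprotected fraction at stripe center: %.2f" % (1 - P[nx // 2]))
```

Even this toy explicit solver takes 1,200 time steps for a 60-second bake over a 400 nm strip; extending that rigor to three dimensions is what confines such models to layout windows of tens of square microns.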

TCAD tools evolved through the ‘90s, and accommodated the newly emerging chemically-amplified deep ultraviolet (DUV) photoresists. At the same time, new approaches to modeling patterning systems were being developed in labs at the University of California at Berkeley and a handful of start-up companies. By 1996, Nicolas Cobb and colleagues had created a mathematical framework for full-chip proximity correction. This work used a Sum of Coherent Systems (SOCS) approximation to the Hopkins optical model, and a simple physically-based, empirically parameterized resist model. Eventually these two development paths resulted in commercial full-chip optical proximity correction (OPC) offerings from Electronic Design Automation (EDA) leaders Synopsys and Mentor Graphics. It is interesting to note that while the term “optical” proximity correction was originally proposed, it was well known that proximity effects arise from a number of other process steps. The label “PPC” was offered to more appropriately describe the general problem, but OPC had by that time been established as the preferred moniker.
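The computational appeal of SOCS is that the partially coherent aerial image collapses into a weighted sum of squared convolutions, which is cheap enough to evaluate across a full chip. The sketch below shows only that structure: the Gaussian kernels and weights are stand-ins invented for illustration, whereas real kernels come from an eigen-decomposition of the optical system's transmission cross coefficients.

```python
import numpy as np

def socs_intensity(mask, kernels, weights):
    """Aerial image via the Sum of Coherent Systems approximation:
       I(x, y) ~ sum_k w_k * |(mask conv phi_k)(x, y)|^2"""
    I = np.zeros(mask.shape, dtype=float)
    M = np.fft.fft2(mask)
    for phi, w in zip(kernels, weights):
        field = np.fft.ifft2(M * np.fft.fft2(phi, s=mask.shape))  # coherent convolution
        I += w * np.abs(field) ** 2
    return I

# Illustrative stand-in kernels (real ones come from decomposing the TCC).
n = 128
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
k1 = np.fft.ifftshift(np.exp(-(x**2 + y**2) / (2 * 4.0**2)))
k2 = np.fft.ifftshift(np.exp(-(x**2 + y**2) / (2 * 8.0**2)))

mask = np.zeros((n, n))
mask[:, 60:68] = 1.0                         # a single line feature
image = socs_intensity(mask, [k1, k2], [0.7, 0.3])
print("peak relative intensity: %.3f" % (image.max() / image.mean()))
```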

Demand for full-chip model-based OPC grew along with the increase in computing capability. With some simplifying assumptions and a limited scope of prediction capability, these OPC tools could handle several orders of magnitude more layout area than TCAD tools. An important simplification was the reduction in problem dimensionality: a single Z-plane 2D contour is sufficient to represent the proximity effect that is relevant for full-chip OPC. The third dimension, however, is becoming increasingly important in post-OPC verification, in particular for different patterning failure modes, and is an area of active research.

Today, full-chip simulation using patterning process models is a vital step in multi-billion dollar fab operations. The models must be accurate, predictable, easy to calibrate in the fab, and must support extremely rapid full-chip simulation. Already, accuracy in the range of 1nm is achievable, but there are many challenges ahead in modeling pattern failure, model portability, adaptation to new mask and wafer manufacturing techniques, accommodating source-mask optimization, 3D awareness, double patterning and EUV correction, among others. These challenges, and the newest approaches to full-chip CL models, affect the ability to maintain performance and yield at advanced IC manufacturing nodes.

By John Sturtevant

Evolution of Lithography Process Models, Part II
OPC Model Accuracy and Predictability – Evolution of Lithography Process Models, Part III
Mask and Optical Models–Evolution of Lithography Process Models, Part IV


Custom and AMS Design
by Daniel Payne on 02-21-2011 at 10:06 pm

[Figure: Samsung 3D IC roadmap]

For IC designers creating full-custom or AMS designs there are plenty of challenges to getting designs done right on the first spin of silicon. Let me give you a sneak peek into what’s being discussed at the EDA Tech Forum in Santa Clara, CA on March 10th that will be of special interest to you:

3D ICs with TSVs (through-silicon vias) are gaining much attention, and for good reason; they help make our popular portable electronics quite slim and cost-effective:

Panelists from the following companies will discuss: Is 3D a Real Option Today?

Rob Aitken, ARM Fellow, ARM
Bernard Murphy, CTO, Atrenta
Simon Burke, Distinguished Engineer, Xilinx
Kuang-Kuo Lin, Ph.D., Director, Foundry Design Enablement, Samsung
Juan Rey, Senior Engineering Director, Design to Silicon Division, Mentor Graphics

The TSMC AMS Design Flow 1.0 was announced back in June 2010, so come and find out what’s changed in the past 9 months. In contrast, the digital flow is already at rev 11.0, which shows how much more standardized flow convergence has become in the digital realm.

TSMC and Mentor will present on their AMS design flow:

Tom Quan, Deputy Director of Design Methodology & Service Marketing, TSMC
Carey Robertson, Director LVS and Extraction Product Marketing, Mentor Graphics

Custom physical design is becoming more interoperable thanks to emerging standards like OpenAccess:

Learn how SpringSoft and Mentor are working together on signoff-driven custom physical design:

Rich Morse, Technical Marketing Manager, SpringSoft
Joseph C. Davis, Calibre Interface Marketing Manager, Mentor Graphics

The EDA Tech Forum is free to attend, however you’ll want to sign up to reserve your spot today. This is a full-day event with other sessions that may answer some nagging questions that you have about AMS design tools and flows.


Synopsys at Goldman Sachs Technology Conference
by Paul McLellan on 02-21-2011 at 7:00 pm

Aart de Geus was interviewed at the Goldman Sachs Technology Conference last week. Here is some of what he said. Strong Q1, good Q2 outlook, on-track for 2011 guidance. Strong rebound in Far East, Europe mixed, North America good. 80% revenue for year booked by start of year, 90% revenue for a quarter already booked at start of quarter. Predictability has allowed investment.

Five point economic strategy:

  • Organic growth in low to mid-single digits
  • Investment in rapid growth adjacencies: systems and IP
  • M&A: 7-8 acquisitions last year
  • Focus on efficiency
  • Maintain flat share count

20% of revenue this year is coming from IP products. Synopsys is the second largest IP vendor behind ARM (I’m not sure if he counts Rambus, which was at $323M last year, as an IP supplier). The IP business is changing: it is no longer make vs. buy but differentiated IP the customer wouldn’t be able to design themselves (e.g. USB 3). There used to be 200-300 IP companies 4 years ago; now it is down to a handful since it is so risky to incorporate IP from an unknown supplier. Synopsys is the safe choice since they provide a complete solution.

Systems: placing a major focus on hardware/software interaction. Acquired 3-4 companies around prototyping last year. Bringing technology of VaST and CoWare (and presumably Virtio although he didn’t mention it) together. A decade-long process.

Pricing: the industry is very competitive but Synopsys is enforcing pricing discipline. Layering IP on top of big tools deals. The implication is that the competition is doing anything to avoid being designed out at customers.


    Clock Domain Crossing (CDC) Verification
    by Paul McLellan on 02-21-2011 at 6:12 pm

    Multiple, independent clocks are ubiquitous in SoCs and other complex ASICs today. In some cases, such as in large communications processors, clock domains may number in the hundreds. Clock domain crossings pose a growing challenge to chip designers, and constitute a major source of design errors–errors that can easily slip past conventional verification tools and make their way into silicon.

    When these errors make it into silicon, the costs are high. A single silicon re-spin may cost $10 million and extend time-to-market by months, greatly reducing the chip’s market share and profit potential.

    Therefore, there is a substantial benefit to identifying and correcting CDC problems in the early stages of the design, at the RTL level, when corrections may be made quickly and at minimal cost. Unfortunately, cycle-based simulation, the mainstay of RTL-stage verification, is not well suited to finding and tracing timing-related errors resulting from CDC problems. Static timing analysis tools treat clock domain crossings as exceptions and ignore them. Furthermore, traditional structural CDC analysis tools can help identify clock domain crossings and perform some basic synchronization checking, but none offers the kind of comprehensiveness or precision users require. Such tools tend to simultaneously overlook a number of real design errors and over-report a large number of false violations.


    [Figure: Control signal synchronization]

    [Figure: Data signal synchronization]

    When signals cross from one clock domain to another, asynchronous domain, several problems can result:

  • Metastability. When a signal from the first clock domain transitions just as the second clock domain is clocked, an invalid logic level may be clocked into a register. The register may take some time to return to a stable state (a rough MTBF estimate for this is sketched after this list).
  • Convergence of separately synchronized signals. Since it is sometimes unclear whether a signal will be latched on one clock cycle or the subsequent one, two signals changing simultaneously in one clock domain can end up a cycle apart in the other domain.
  • Data loss, for example when a signal crosses from a domain with a faster clock to one with a slower clock. More positive synchronization, such as handshaking, may be necessary.
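For a back-of-the-envelope feel for the metastability item above, here is the standard synchronizer MTBF estimate as a short Python sketch. The clock rates, resolution constant tau, and capture window T_w are invented illustrative numbers; real values come from characterizing the flip-flops in the target library.

```python
import math

def two_flop_mtbf(f_clk_hz, f_data_hz, t_resolve_s, tau_s, t_w_s):
    """Classic metastability MTBF estimate for a synchronizer:
       MTBF = exp(t_resolve / tau) / (f_clk * f_data * T_w)
    where t_resolve is the slack the first flop has to settle."""
    return math.exp(t_resolve_s / tau_s) / (f_clk_hz * f_data_hz * t_w_s)

# Illustrative numbers: 500 MHz clock, data toggling at 50 MHz,
# 1.8 ns of resolution slack, tau = 20 ps, capture window = 30 ps.
mtbf_seconds = two_flop_mtbf(500e6, 50e6, 1.8e-9, 20e-12, 30e-12)
print("estimated MTBF: %.1e years" % (mtbf_seconds / 3.15e7))
```

With generous resolution slack the estimate is astronomically large, which is why a properly constrained two-flop synchronizer is usually considered safe; remove that slack and the exponent shrinks, so the MTBF collapses quickly.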

    To detect these problems (and avoid flagging false errors), a good CDC verification tool must do the following (a toy structural sketch follows the list):

  • Identify and display clock domains automatically
  • Identify unsynchronized crossings
  • Screen out false violations
  • Allow users to identify quasi-static signals (e.g. reset signals)
  • Ignore false paths
  • Recognize handshake mechanisms
  • Recognize standard synchronizers, such as 2 flop
  • Allow users to specify non-standard synchronizers
  • Detect problems at synchronized crossings: cross-domain fanouts, cross-domain fanins, reconvergent signals, gray code violations, hold-time violations
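As a toy illustration of the first two requirements (inferring clock domains and spotting unsynchronized crossings), here is a small structural sketch over a made-up flop-level abstraction. The netlist format, the flop names, and the crude "one fan-out flop in the same domain" synchronizer heuristic are all hypothetical simplifications; commercial CDC tools work on the full RTL and recognize many synchronizer styles.

```python
# Hypothetical flop-level abstraction: name -> (clock, net or flop driving its data pin).
flops = {
    "tx_req":   ("clk_a", "tx_logic"),
    "sync_ff1": ("clk_b", "tx_req"),    # first stage of a two-flop synchronizer
    "sync_ff2": ("clk_b", "sync_ff1"),
    "tx_data":  ("clk_a", "tx_logic"),
    "rx_data":  ("clk_b", "tx_data"),   # crosses clk_a -> clk_b with no synchronizer
}

def find_unsynchronized_crossings(flops):
    violations = []
    for dst, (dst_clk, src) in flops.items():
        if src not in flops:
            continue                     # driven by logic or a primary input: ignore here
        src_clk = flops[src][0]
        if src_clk == dst_clk:
            continue                     # same domain, not a crossing
        # crude synchronizer test: destination feeds exactly one flop in its own domain
        fanout = [f for f, (clk, s) in flops.items() if s == dst and clk == dst_clk]
        if len(fanout) != 1:
            violations.append((src, src_clk, dst, dst_clk))
    return violations

print(find_unsynchronized_crossings(flops))   # -> [('tx_data', 'clk_a', 'rx_data', 'clk_b')]
```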


    Analysis and display of clock domains

    Images: courtesy Atrenta

    For more details: the Atrenta CDC white-paper



    Mentor Graphics Should Be Acquired or Sold: Carl Icahn COUNTERPOINT
    by Daniel Nenni on 02-20-2011 at 7:04 pm


    Daniel,

    On Jan 20th, you argued that the EDA business models are all broken and need to change, ridiculing Synopsys, Cadence, Mentor and Magma for not agreeing to a ‘pay for success’ type of model (some form of royalties).

    On Feb 14th, you state that Icahn doesn’t understand EDA and should stay out. Maybe he is seeing the same issue you have stated: the business models are not correct and do not maximize shareholder returns. Unfortunately for Mentor, over the past x years, besides a Cadence hostile takeover attempt, several hedge funds have become active with Mentor stock. Once this trend started, Mentor became the low hanging fruit with higher visibility and a higher probability of return on their investment. This return might come from short-term ‘buyouts’ of these funds (pay them to leave) or longer term hostile activities.

    Changing business models is a very tricky issue for public companies. All information needs to be disclosed and this can have a dramatic effect on a company’s share price. Over the past 10 years, you can see at least 2 cases for Synopsys and 2+ cases for Cadence. Synopsys over this time period did two incremental jumps to a ratable model (about 50% each time, and they have consistently stated that 85-90% of sales in a given quarter are ratable). But each time they did this, their stock went down. The impact was controlled by limiting the change to just 50%. Cadence has done several ‘flips’ on ratable vs non-ratable recognition and also introduced their ‘token/credit card’ approach. Just as with Synopsys, these events were seen as negative by investors, who predicted a lower EPS. Their stock also dove due to this forecast.

    So public companies that try to change models face external pressure on stock price, and a lower stock price can enable/cheapen hostile takeovers. So actions that affect stock prices are carefully weighed. Companies that are not willing to go this route will use LBOs or equity groups to purchase the company (with a huge premium paid to senior management along with the equity group) to take the company private. Once private, changing business models and the results of these changes are not regularly reviewed by the markets (quarterly). Companies that have the cash to work through these changes can be very successful. They can change business models, strategies, etc. Freescale did this a few years ago and the LBO saddled them with a huge debt to finance. Rich Beyer came in after the LBO was finalized and has done a remarkable job re-aligning the company for targeted markets: selling off divisions/product lines that do not fit (reducing large expenses with not so large revenue from these groups) as well as trying to retire portions of this large debt. His last tactic/task is the recent announcement of an IPO for $1+B to resolve the final debt. If successful, he will have taken Freescale and remodeled it for a new market. Do a search on various Freescale press releases over the past 3-5 years and you will see the various actions taken.

    There are plenty of other examples of LBOs that have either succeeded or failed. Unsure what % actually are successful with this strategy. This strategy might be as dangerous as remodeling in the public/market eye.

    Concerning Carl or anyone else that is external to a given market: they have a new viewpoint (it might be naive or just a different thought process on creating value) that may or may not be valid. Look at all the changes in the recent 10 years…

    Apple has revolutionized how content is used (iPhone, iPad, iPod, iTunes) and how dollars are derived from this content. They alone are probably the largest disruptor in how music, books, TV and movies are acquired and viewed. The Kindle/Nook is another example, allowing books to be purchased via WiFi and automatically downloaded to a machine that can hold 1,500+ books.

    Twitter, Facebook, LinkedIn are allowing large groups of people to ‘gather’ in areas of common interest (all in parallel….not serial discussions). Facebook is credited (rightfully so) with the recent uprising across the MidEast.

    LED lights are being used in anything from traffic lights to TVs.

    RFID trackers are another area that some people pushed for specific applications to reduce the cost of manual tracking. Now these are typically found in toll tags used to electronically charge/collect tolls from credit card accounts, removing toll booths that were unsafe (yes, cars had to stop, pay and then re-merge with other cars), added time to a driver’s destination, worsened fuel mpg (slowing down, stopping, reaccelerating), and cost more to staff. Lots of people are thinking of new ways to apply technology to current issues…

    One negative ‘new’ idea: subprime loans, which made a few people (financial companies and senior management) very rich and undermined the entire economy. Many of these ideas came from established companies wanting to enable/attract a new source of dollars. They also saw the downside and quickly packaged these risky assets into bundles that were sold to other ‘unknowing’ investors. I played golf with one guy that worked at one of these firms. He said it was purchase quickly, re-package and sell fast. Get in, get out and get your commission… we can see where this led the US…

    What is common? Many of these people were not in the same field (including Apple, which was tied to computing power). Steve Jobs saw a huge opportunity in content management and changed not only the products but also the business models in ways that many others who were in the business could not see. Jobs/Apple products and business models aligned with each other.

    These investor events will cause all companies (even Synopsys, Cadence and Magma) to re-evaluate whether they should do something different. I do not think that getting kicked out of a comfort zone is bad. It can cause ‘out of the box’ thinking that might lead to something much better than the current situation, regardless of who leads/manages the company. Only time will tell if everyone is in a better place in a few years.

    Complex world out there that behaves more analog than digital.

    (Please keep my identity confidential)


    Mentor Graphics to Participate in SemiWiki.com Social Media Platform
    by admin on 02-17-2011 at 8:16 am


    San Jose, Calif., [DATE], 2011 – SemiWiki.com today announced that Mentor Graphics, a world leader in electronic hardware and software design solutions, will participate in the SemiWiki.com global social media platform aimed at facilitating mass communication for electronic design professionals through Web 2.0 technologies.

    The goal of SemiWiki is to bring members of the semiconductor ecosystem together and to foster better collaboration in meeting the challenges of advanced semiconductor design and manufacturing. Mentor, along with other members of the EDA, IP and foundry ecosystem, will contribute meaningful content including company and product wikis, blogs and discussion forums.

    “Mentor’s core value is to enable customer success through collaboration in product design and comprehensive application support,” said Joseph Sawicki, vice president and general manager of the Design-to-Silicon division at Mentor Graphics. “We’re excited about exploring this new way to share our expertise and new product capabilities, and to respond directly to our customers’ questions and needs.”

    The site is now live and may be reached at http://semiwiki.com/. Users can easily set up an account to access information, provide feedback and post content.

    “Our industry needs a site that facilitates real time, vendor neutral discussion among real users,” said Daniel Nenni, internationally recognized industry blogger, and founder of the SemiWiki Project. “SemiWiki.com will provide our registered users with a connected community that promotes the open exchange of ideas, experiences and feedback.”

    About the SemiWiki Project

    The SemiWiki Project provides in-demand content for semiconductor design and manufacturing, facilitating peer-to-peer communications using Web 2.0 technologies. Daniel Nenni will be joined by industry bloggers Paul McLellan, Daniel Payne, Steve Moran, and Eric Esteve at SemiWiki.com.