
TSMC 28nm Beats Q1 2012 Expectations!

by Daniel Nenni on 04-26-2012 at 9:00 am

TSMC just finished the Q1 conference call. I will let the experts haggle over the wording of the financial analysis, but the big news is that 28nm accounted for 5% of TSMC's Q1 revenue, beating my guess of 4%. So all of you who bet against TSMC 28nm, it's time to pay up! Coincidentally, I'm in Las Vegas, where the term deadbeat is taken literally!

Per my blog The Truth of TSMC 28nm Yield!:

28nm Ramp:

  • 2% 1/18/2012
  • 4% 4/26/2012 (my guess)
  • 8% 7/19/2012 (my guess)
  • 12% 10/25/2012 (my guess)


    “By technology, revenues from 28nm process technology more than doubled during the quarter and accounted for 5% of total wafer sales owing to robust demand and a fast ramp. Meanwhile, demand for 40/45nm remained solid and contributed 32% of total wafer sales, compared to 27% in 4Q11. Overall, advanced technologies (65nm and below) represented 63% of total wafer sales, up from 59% in 4Q11 and 54% in 1Q11.”
    TSMC Q1 2012 conference call 4/26/2012.

    “Production using the cutting-edge 28 nanometer process will account for 20 percent of TSMC’s wafer revenue by the end of this year, while the 20 nanometer process is being developed to further increase speed and power” Morris Chang, TSMC Q1 2012 conference call 4/26/2012.

    So tell me again that "foundry Taiwan Semiconductor Manufacturing Co. Ltd. is in trouble with its 28-nm manufacturing process technologies," Mr. Mike Bryant, CTO of Future Horizons. Tell me again that "TSMC halted 28nm for weeks" in Q1 2012, Mr. Charlie Demerjian of SemiAccurate. And special thanks to Dan Hutcheson, CEO of VLSI Research, John Cooley of DeepChip, and all of the other semiconductor industry pundits who propagated those untruths.

    Let's give credit where credit is due: I sincerely want to thank you guys for enabling the rapid success of SemiWiki.com. We could not have done it without you! But for the sake of the semiconductor ecosystem, please do a better job of checking your sources next time.

    During the TSMC Symposium this month, Dr. Morris Chang, Dr. Shang-Yi Chiang, and Dr. Cliff Hou all told the audience of 1,700+ TSMC customers, TSMC partners, and TSMC employees that TSMC 28nm is: yielding properly, as planned, faster than 40nm, meeting customer expectations, etc…

    Do you really think these elite semiconductor technologists would perjure their hard-earned reputations in front of a crowd of people who know the truth about 28nm but are sworn to secrecy? Of course not! Anyone who implies they would, just to get clicks for their website ads, is worse than a deadbeat and should be treated as such. Just my opinion, of course!

    TSMC also announced a 2012 CAPEX increase to between $8B and $8.5B, compared to the $7.3B spent in 2011. My understanding is that the additional money will be spent on 20nm capacity and development activities (FinFETs!?!?). In Las Vegas that may not qualify as "going all in," but it is certainly a very large bet on the future of the fabless semiconductor ecosystem!



    Non Volatile Memory IP: a 100% reliable, anti fuse solution from Novocell Semiconductor
    by Eric Esteve on 04-25-2012 at 9:36 am

    In this pretty shaky NVM IP market, where articles frequently mention legal battles rather than product features, it seems interesting to look at Novocell Semiconductor and their NVM IP product offering and try to figure out what makes these products specific, what the differentiators are. Before looking at the SmartBit cell in detail, let's have a quick look at the size of the NVM IP market. NVM as a technology is not so young; I remember colleagues involved in Flash memory design back in 1984 at MHS, who then moved to STM to create a group that has since generated billions of dollars in Flash memory revenues. But the concept of an NVM block that can be integrated into an ASIC is much more recent; Novocell, for example, was founded in 2001. I remember that in 2009, analyst predictions put the size of this IP market at around $50 million for 2015. The NVM IP market is not huge, and probably weighs a couple of dozen $M today, but it's a pretty smart technology: integrating from a few bytes up to Mbits into a SoC can help reduce the number of chips in a system, increase security, allow for Digital Rights Management (DRM) in video and TV applications, or provide encryption capability.

    To come back to embedded NVM technology, the main reason for its lack of success in ASICs in the past (in the 1990s) was that integrating a Flash memory block into a SoC required adding specific mask levels, leading to a cost overhead of about 40%. I remember trying to sell such an ASIC solution in 1999-2001: it looked very attractive to the customer, until we talked about pricing and the customer realized that the cost of the entire chip would be impacted. I made very, very few sales of ASICs with embedded Flash! The current NVM IP offering from Novocell Semiconductor does not carry such a cost penalty; the blocks can be embedded in standard logic CMOS without any additional process or post-process steps, and can be programmed at the wafer level, in package, or in the field, as end use requires.

    An interesting feature of the Novocell NVM family, which is based on antifuse One Time Programmable (OTP) technology, is the "breakdown detector." It precisely determines when the voltage applied to the gate (which programs the memory cell by breaking the oxide and consequently allowing current to flow through it) has created an irreversible oxide breakdown, the "hard breakdown," as opposed to a "soft breakdown," which is an apparent but reversible oxide breakdown. If the oxide has not been stressed for a long enough period, the hard breakdown is not effective and the user cannot program the memory cell. The two pictures (below) help in understanding the mechanisms:

    • In the first, the current (vs. time) rises sharply only after the thermal (hard) breakdown is effective.

    • The second picture shows the current behavior of a memory cell in different cases; we can see that when the hard breakdown is effective, the current is about three orders of magnitude higher than for a progressive (or soft) breakdown.
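    That detection principle lends itself to a simple threshold check. Here is a minimal sketch (my own illustration, not Novocell's actual SmartBit algorithm; the names, current levels, and pulse loop are all assumptions) of how a programming loop could use the roughly three-orders-of-magnitude current jump to confirm hard breakdown before declaring a bit programmed:

```python
# Illustrative sketch only: distinguish hard from soft breakdown by the
# measured current, which jumps by ~3 orders of magnitude at hard breakdown.

SOFT_BREAKDOWN_CURRENT_A = 1e-9   # assumed soft-breakdown current (~nA)
HARD_BREAKDOWN_FACTOR = 1000      # ~3 orders of magnitude higher

def is_hard_breakdown(measured_current_a, baseline_a=SOFT_BREAKDOWN_CURRENT_A):
    """Return True if the measured current indicates irreversible (hard) breakdown."""
    return measured_current_a >= baseline_a * HARD_BREAKDOWN_FACTOR

def program_cell(read_current, apply_pulse, max_pulses=50):
    """Keep applying programming pulses until hard breakdown is detected."""
    for _ in range(max_pulses):
        apply_pulse()                      # stress the oxide a little longer
        if is_hard_breakdown(read_current()):
            return True                    # bit is reliably programmed
    return False                           # flag the cell as unprogrammable
```

The key point, matching the pictures above, is that a soft breakdown never crosses the current threshold, so the loop keeps stressing the oxide until the breakdown is truly irreversible.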

    Thus, we can say that one of Novocell's differentiators is reliability. Novocell avoids the limitations of traditional embedded NVM technology by utilizing the patented dynamic programming and monitoring process and method of the Novocell SmartBit™, ensuring that 100% of customers' embedded bit cells are fully programmed. The result is Novocell's unmatched 100% yield and unparalleled reliability, guaranteeing customers that their data is fully programmed initially and will remain so for an industry-leading 30 years or more. Novocell NVM IP scales to meet the NVM size and complexity challenges that grow exponentially as SoCs continue to move to advanced nodes such as 45nm and beyond.

    Eric Esteve from IPNEST –


    Mentor’s New Emulator

    by Paul McLellan on 04-25-2012 at 8:00 am

    Mentor announced the latest version of their Veloce emulator at the Globalpress briefing in Santa Cruz. The announcement is in two parts. The first is that they have designed a new custom chip with twice the performance and twice the capacity; it supports designs of up to two billion gates and many concurrent software engineers. Surprisingly, the chip is only 65nm, but Mentor reckons it outperforms competing emulators based on 45nm technology. I'm not sure why they didn't design it at 45nm and go even faster, but this sort of chip design is a treadmill, so it is not really a surprising announcement. In fact, I can confidently predict that in 2014 Mentor will announce the 28nm version with more performance and more capacity!

    Like most EDA companies, Mentor doesn’t do a lot of chip design. After all they sell software. But emulation is the one area that actually uses the tools. Since one of the big challenges in EDA is getting hold of good test data for real chips, the group is very popular in other parts of Mentor since the proprietary nature of the data is less of an issue inside the same company.

    The other thing that they announced is VirtuaLAB. I assumed that this was already announced since Wally Rhines talked about it in his keynote at the Mentor Users’ Group U2U a week or two ago and I briefly covered it here. Historically, people have used an in-circuit-emulation (ICE) lab with real physical peripherals. These suffer from some big problems:

    • expensive to replicate for large numbers of users
    • time consuming to reconfigure (which must be done manually)
    • challenging to debug
    • doesn’t fit well with the security access procedures for datacenters (Jim Kenney, who gave the presentation, said he had to get special security clearance to go and get a picture inside the datacenter since even the IT guys are not allowed in)
    • is never where you want it (you are in India, the peripherals are in Texas)

    VirtuaLAB is a software implementation of peripherals. They run on Linux and are hardware-accurate. They can easily be shared, after all it’s just Linux. They can be reconfigured by software. You don’t need to go into the datacenter on a regular basis to reboot/reconfigure anything. Of course the purpose of all this is so that you can develop/debug and test device drivers and so on using the models. For example, here is a model of a USB 3.0 Mass Storage Peripheral (aka Thumb drive).

    Afterwards I talked to Jim. He confirmed something I've been hearing from a number of directions. Although people have been saying for years that simulation is running out of steam and you need to switch to emulation (especially people whose job is to sell emulation hardware), it does finally seem to be true. You can't verify a modern state-of-the-art SoC, including the low-level software that needs to run against the hardware, without emulation. For example, a relatively small camera chip (10M gates) requires two weeks to simulate or 20 minutes to emulate.
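    That camera-chip example implies a speedup of roughly three orders of magnitude. The arithmetic, using the figures as quoted:

```python
# Rough speedup arithmetic from the camera-chip example
# (two weeks of simulation vs. 20 minutes of emulation, as quoted).
sim_minutes = 14 * 24 * 60      # two weeks, in minutes
emu_minutes = 20                # emulation time
speedup = sim_minutes / emu_minutes
print(f"emulation speedup: ~{speedup:.0f}x")  # ~1008x
```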

    I asked him who his competition is. Cadence is still the most direct competitor. Customers would love to be able to use an emulator at EVE's price point, but it seems that for many designs, getting the design into the emulator is just too time-consuming. And EDA has always been a bit like heart surgery: it's really difficult to market yourself as the discount heart surgeon.


    Fast buses at DAC

    by Paul McLellan on 04-24-2012 at 10:05 pm

    UPDATE: there is free WiFi on all buses.

    OK, these are not the 128-bit 1GHz buses we hear about every day. They go roughly 40 miles in roughly an hour. But they take you from Silicon Valley to DAC and back, and they are cheaper than BART or Caltrain.

    For the first time this year, DAC has buses from Silicon Valley to Moscone for DAC. They depart from the Cadence parking lot at 2655 Seely Avenue (where you can leave your car all day even if you are not a Cadence employee). The buses run Monday through Wednesday.

    Into San Francisco there are buses at:

    • 7.30am
    • 8.00am
    • 8.30am.

    Return buses are at:

    • 6.15pm
    • 6.45pm
    • 7.15pm.

    I am trying to find out if there is WiFi on the buses. Or maybe Google and co already have all the WiFi enabled buses for their daily fleet of Gbuses that trawl from San Francisco to Mountain View and vice-versa.

    You can’t just show up to get on the bus. You need to attach a bus ticket to your DAC registration. If you are already registered then use the link in your confirmation email. If you are about to register, then don’t forget to add the bus before you check out.

    Full details and registration links are here.


    Audio, not your father’s MP3

    by Paul McLellan on 04-24-2012 at 9:26 pm

    Chris Rowen, Tensilica's CTO, presented in Santa Cruz at the Globalpress briefing. He was basically presenting Tensilica's audio strategy, which I've written about before. But he provided an interesting perspective: Globalpress (which flies journalists in from all over the world and then fills the few remaining empty seats with a few of us local guys) has been going for ten years.

    Ten years ago, when Globalpress started, the audio processing landscape looked like this:

    • 0.13um (or 130nm) was the process node of the day.
    • The first MP3 player, the Rio, was just 3 years old.
    • The iPod was out…just…not even for a year. The one with a mechanical touch-wheel.
    • VHS would outsell DVD for another couple of years.
    • The first cell phone with a built-in camera was released in the US (by Samsung), with VGA resolution (OK, that's not audio).

    And Tensilica had not introduced an audio core. But one year later, 9 years ago, they had. Now it is accepted wisdom that audio processing on a general-purpose processor (i.e. ARM) is silly. You should offload it onto a specialized core such as the Tensilica one (or the ARC-based Synopsys one that we also recently covered). I was at the Linley Tech Mobile Conference (nee Microprocessor Forum), and a Tensilica demo by Wolfson Microelectronics (yeah, Edinburgh, one of my alma maters) showed it dramatically. The ARM would sleep for over 9 seconds, wake up for less than a second to feed a Tensilica core with data, and then go back to sleep. I forget the precise power reduction (there were ammeters and oscilloscopes to keep everyone honest), but it was dramatic.

    The problem is that voice requirements are going up faster than Moore's Law (or "More than Moore," as we are learning to say). Basic voice runs at 200MHz today but will go up to 600MHz in 2-3 years. And those power budgets? Not so much.
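    A quick sanity check of that claim against a Moore's Law doubling every two years (my own arithmetic, using the figures quoted above):

```python
# Voice-DSP requirement growth vs. a Moore's-Law doubling every two years
# (200MHz -> 600MHz over a 2-3 year horizon; illustrative arithmetic only).
voice_growth = 600 / 200                           # 3x in ~2-3 years
moore_2yr, moore_3yr = 2 ** (2 / 2), 2 ** (3 / 2)  # 2x .. ~2.83x
print(voice_growth > moore_3yr)  # even over 3 years, demand outpaces Moore's Law
```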

    We are looking forward to much higher sound performance, especially on the voice receiving side:

    • active noise control (knocking out ambient noise with its inverse)
    • beam forming microphone arrays
    • always-on voice recognition

    The point of Chris's story wasn't so much that voice is different, but that it is a pioneer, going where all the other technologies (video, radio processing such as Bluetooth, wireless, and LTE, camera image processing, and so on) are also headed. And it is exploding…



    Smart mobile SoCs: Texas Instruments

    by Don Dingee on 04-24-2012 at 9:00 pm

    TI has parlayed its heritage in digital signal processing and long-term relationships with mobile device makers into a leadership position in mobile SoCs. They boast a relatively huge portfolio of design wins thanks to being the launch platform for Android 4.0. On the horizon, the next generation OMAP 5 could change the entire mobile industry. Continue reading “Smart mobile SoCs: Texas Instruments”


    Broadcom announces an HFC

    by Paul McLellan on 04-24-2012 at 8:00 pm

    For a long time Cisco had a very high end product whose official internal name during its years of development was HFR, which stood for Huge F***ing Router (the marketing department insisted it stood for ‘fast’). Eventually it got given a product number, CRS-1, but not before I’d read an article about it in the Economist under its old name. Wikipedia is on it. I was at the Globalpress briefing in Santa Cruz today and Broadcom announced their next generation network processor, definitely a chip deserving of the HFC appellation.

    Unless you are a carrier equipment manufacturer such as Alcatel-Lucent, Ericsson or Huawei then the precise details of the chip aren’t all that absorbing. If you are, it’s called the BCM88030.

    What I think is most interesting is the scale of the chip. It’s an amazing example of just what can be crammed onto a 28nm chip. Not just in size, but also in performance and power (or lack of it).

    Firstly, this chip is a 100Gbps full-duplex network processor. That means it handles 300M packets/second, or a packet roughly every 3ns. Since its clock rate is 1GHz, a packet must be handled in the time it takes to execute about 3 instructions, so the only way this is workable is through parallelism. Indeed, the chip contains 64 custom processors. Even that is not enough: each processor can handle up to 32 packets at a time through advanced hardware multi-threading. Even that is not enough: some specialized functions just aren't suited to general-purpose processors and are offloaded to one of 7 specialized engines that perform functions like lookup (MAC addresses, IP addresses, etc.), policing, and timing. All this while reducing power and area by 80% compared to previous-generation solutions.
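    The cycle budget behind that parallelism argument can be checked with some back-of-the-envelope arithmetic (using the figures quoted above; the aggregate budget is illustrative, not Broadcom's actual pipeline accounting):

```python
# Back-of-the-envelope cycle budget: 300M packets/s at a 1 GHz clock.
clock_hz = 1e9
packets_per_s = 300e6
cycles_per_packet = clock_hz / packets_per_s  # ~3.3 cycles per packet, serially

# With 64 processors each keeping up to 32 packets in flight,
# the aggregate per-packet budget becomes workable:
processors, threads = 64, 32
cycle_budget = cycles_per_packet * processors * threads
print(f"{cycles_per_packet:.1f} cycles/packet serially, "
      f"~{cycle_budget:.0f} cycles/packet with parallelism")
```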

    That’s just the digital dimension. The chip also contains the interfaces to the outside world with 24 10Gb/s Ethernet MACs, 6 50Gb/s Ethernet MACs and 2 100Gb/s Ethernet MACs.

    What is driving the need for this amount of bandwidth is that carriers are switching completely to using Ethernet as their internal backbone between the different parts of their networks, from the base-station to the access network, to the aggregation network and in the core. This extremely high performance chip is targeted at aggregation and the core.

    In turn this is driven by 3 main things:

    • millions of smartphones and tablet computers
    • upgrade of networks from 3G to 4G with increased bandwidth
    • increasing use of video

    These are causing an explosion in mobile backhaul, the (mostly) wired network that hooks all the base stations into the carrier's network and to the core backbone of the internet.

    The growth is quite significant. A smartphone generates 24X the data of a regular phone (I'm not sure if that includes the voice part, although in terms of bits per second that is quite low with a modern vocoder). Tablets generate 5X the data of a smartphone (and so 120X a regular phone). And the number of units is going up fast. By 2015 it is predicted that the number of connected devices will be 2X the world population. As for that video, by 2015 one million minutes of video will cross the network each second. That's a lot of cute kittens. In total, mobile data traffic is set to increase 18-fold between 2011 and 2015.
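    The multipliers above compound as follows (a quick check of the quoted figures; the implied annual growth rate is my own calculation):

```python
# Compounding the quoted multipliers (figures as given in the text).
regular_phone = 1
smartphone = 24 * regular_phone   # 24x a regular phone
tablet = 5 * smartphone           # 5x a smartphone = 120x a regular phone
assert tablet == 120

# An 18x increase in mobile data from 2011 to 2015 implies a CAGR of:
cagr = 18 ** (1 / 4) - 1          # four years of compounding
print(f"~{cagr:.0%} per year")    # ~106% per year
```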

    This is driving 100G Ethernet adoption, forecast to have 170% CAGR over the next 5 years. Hence Broadcom’s development of this chip. But, like any other system of this complexity, the chip development is accompanied by an equally challenging software development problem, to develop a tool chain and a complete reference implementation so that customers can actually use the chip.


    TSMC versus Intel at 20nm!

    by Daniel Nenni on 04-24-2012 at 7:00 pm

    The biggest news out of the TSMC Symposium last week was the 20nm update. Lots of debate and speculation, just why is TSMC releasing one version of 20nm (20nm SoC) versus multiple versions like in 40nm (LP, G, LPG) and 28nm (HP, HPM, HPL, LP)? Here are my thoughts, I would also be interested in your feedback in the comment section. This really is a big change for both TSMC and the foundry business so it is certainly worth discussing.

    Morris Chang gave a candid interview in early January discussing Intel as a competitor. Morris is a very clever man, a master at the card game bridge, so you can read a lot into what he said here:

    “TSMC’s technologies and performance have reached quite a high level, bringing us into contact with different rivals,” Chang said

    The high level is volumes of mobile chips, volumes that will certainly rival Intel’s microprocessor business in the not too distant future.

    “The competitors we face are Samsung Electronics Co. and GlobalFoundries Inc., with Intel standing ‘behind a veil’ because it is a rival against many of our customers,” Chang said, adding that these TSMC customers include integrated circuit designers and integrated device manufacturers.

    The strategic positioning begins! TSMC is a pure-play foundry that collaborates with its customers, versus IDMs (Intel/Samsung) that compete with their customers. The Apple/Samsung legal drama is a glaring example of this.

    At the Symposium, Morris mentioned R&D expenses of TSMC versus Intel and Samsung, the difference being, TSMC collaborates with customers/partners and leverages R&D expenses. So the equation looks like this:

    Top 10 TSMC customers R&D expenses + TSMC R&D expenses > Intel + Samsung R&D expenses

    Another interesting quote from the article:

    Samsung and GlobalFoundries are newcomers in the industry, Chang said, and suggested that TSMC’s customers should diversify their foundry sources rather than rely on TSMC only.

    Which is interesting advice coming from the Chairman of TSMC. It is certainly a message to TSMC employees that second-source competition is always a threat, so even with 50%+ market share there is no time to rest on previous accomplishments. Notice he does not mention Intel here. Of course, Morris followed that quote with something more pointed:

    “All of our customers rely on TSMC in foundry production, and Intel relies on its own foundry plants,” he said. “If our technologies are not improved enough and Intel keeps improving its technologies, our customers’ products will lose competitiveness to those of Intel. It’s horrible to imagine the outcome.”

    Another competitive shot at Intel! Well played Mr Chairman. I wish I could use a bridge analogy here but I don’t play bridge. Morris ended the interview with another shot at Intel:

    “TSMC will stand behind our customers and cooperate with them. The battlefield between our customers and Intel is where we compete against Intel,” he added.

    So it is the fabless companies, ARM, and TSMC against Intel. I like those odds!

    Back to 20nm. Intel has one version of 22nm, so to better compete with Intel, TSMC will focus all resources on a single SoC-optimized version of 20nm, simple as that. TSMC may also offer FinFETs at 20nm, so customers will have a choice between planar and FinFET transistor implementations, something that Intel does not offer. It is also about capacity. TSMC's CAPEX hike is all about 20nm, and with one SoC-optimized version there won't be the shortages we see at 28nm.

    Sound reasonable? Please use the comment section for further analysis.


    Mergers and Acquisitions in EDA should spark Innovation and Start ups

    by Rich Goldstein on 04-23-2012 at 8:13 pm

    With the recent closure of the Synopsys Magma deal and the economy showing a bit of uptick and some positive outlook compared to the last 3-4 years, I believe it’s time for some of the creative minds that find themselves looking for new opportunity to consider starting their own point tool as well as IP companies.

    Many of these people are among the brightest in technology worldwide, yet they will discover some new realities of the current job market. The first is that openings in EDA and related areas such as IP are few and far between; the other is that the industries in the Valley that are growing and hiring rapidly don't have the interest or intelligence to open their doors to veterans of areas besides their own. This is a grim reality of the new wave of mobile, cloud, SaaS, and enterprise companies that are garnering funds from VCs and hiring as if it were the boom all over again. I have experienced this myself more times than I can count in the last year or so as I've attempted to break new ground and transfer my skills in recruiting for software startups into these areas, only to be ignored or often told that they want the "7-10 year" candidates who are hungry, connected to people in their space, and grew up using social media and mobile applications in their daily lives, beginning in college if not sooner.

    Intellectual property (IP) is another avenue for the jobseeker to explore; perhaps a lower-cost entrée into starting a smaller company that can realize success and notoriety sooner than building a software tool from the ground up. There have been recent successes with design services companies that own their own IP and were recognized as businesses worthy of acquisition.

    I am not a technologist; I refer to myself as "buzzword compatible." But I do understand that as the geometries of circuits get smaller and smaller, there are new challenges that need to be addressed. Through the ups and downs of the economy and the consolidations that have taken place, the fact remains that innovation will come from smaller teams of people with an eye on invention as opposed to corporate security and politics. And these pioneers will in fact resurrect the cycle of growth via acquisition that we've all relied on over the years. There are private avenues of funding, friends and families, and people who have tasted success who can be instrumental in helping these new entrepreneurs.
    ——————————————————————————————————————————————
    Richard Goldstein has been a recruiter and advisor to startups in EDA and IP since 1984 and has placed many of the executives and leaders at startups within this industry. He has also held corporate contract recruiting positions inside companies such as Magma Design Automation, Kilopass Technology, and Xilinx.

    Rich@dacsearch.com



    DAC 2012 Must-See! Hogan's Heroes: Learning from Apple

    by Holly Stump on 04-23-2012 at 6:30 pm


    Who doesn't love the perennial Hogan's Heroes panel at DAC? Always provocative and illuminating for technologists, entrepreneurs, and strategists.

    At DAC 2012, Jim Hogan's panel is "Learning from Apple": Apple. We admire their devices, worship their creators, and praise their stock in our portfolios. Apple is synonymous with creative thinking, new opportunities, perseverance, and wild success. Along the road, Apple set new technical and business standards. But how much has the electronics industry, in particular EDA, "where electronics begins," learned from Apple? It depends.

    Let's ask Jim… What have we FAILED to learn from Apple?

    1. Technology vs Customers?

    It's not about having leading-edge technology. It's about the user experience: display, battery life between charges, applications, and content (graphics/video). A relentless push for a higher-quality user experience, at minimum system cost. And feature convergence: video, voice, data, and audio in every consumer device.

    Jobs' Law: "Never compromise the user experience." I know that for the iPad, Jobs had five guys working directly for him to seek out and understand how people used devices, to better spec the iPad and later projects.

    They don't do things right on the leading edge, but just behind it. They also beat the living hell out of their supply chain for cost savings, and spend it on things that matter, like displays (check out the brilliant iPad 3 display, and the new power supply spec!).

    2. Business and Pricing Models? Market Excitement / Charisma / Image?

    Apple's business model: give people a design that is useful and trendy, and you can demand a premium. And Apple is well branded. (In the old days, Apple even paid a guy in a garage to stamp an Apple logo on every DRAM… DRAM that no one ever even saw… that's Apple!) We buy Apple because the darn things always work and are very reliable, and the apps work on your device and all other Apple devices. The brand is king. They can slap an Apple on anything and sell it, e.g. iTunes.

    The iPhone 4 repositioning is another good example. The price was lowered to $99 (through carriers), bringing the Apple experience to many people who had not yet bought in. For the carriers, it attracted new subscribers consuming their data plans.

    And EDA needs to think more about adjacent markets. Apple is the number one distributor of music in the world (surpassing Wal-Mart last year). Think about Siri and its possible evolution…

    3. Managing Wall Street?

    The best way to manage Wall St. is to show 25% top-line growth and 25% bottom-line growth. Apple showed 100% top-line growth in 2011 and 150% bottom-line. Thus they are the most valuable company on earth, surpassing Exxon in March. We are indeed in the information age, and energy has moved down a notch.

    Jim, who are your panelists for "Learning from Apple"?

    We have Dr. Jan M. Rabaey, Professor at UC Berkeley, who is currently the scientific co-director of the Berkeley Wireless Research Center and serves on Technical Advisory Boards for a wide range of companies. Dr. Jack Guedj is President and CEO at Tensilica, Inc. He also ran Magnum Semiconductor, which he spun out of Cirrus Logic. Tom Collopy, CTO at Aggios, was VP of Engineering at Qualcomm responsible for the Smartphone/Smartbook market, including Snapdragon (Smartphones, Android.)

    OK, Jim, we love Apple! How much is in YOUR portfolio?

    (Laughter….)

    Click for more information on Hogan's Heroes at DAC 2012.

    Jim Hogan, Private Investor
    Jim is currently the managing partner of Vista Ventures, LLC. Jim has worked in the semiconductor design and manufacturing industry for more than 35 years, gaining experience as a senior executive in electronic design automation, semiconductor intellectual property, semiconductor equipment, and fabrication companies. Mr. Hogan holds a B.A. degree in mathematics, a B.S. degree in computer science, and an M.B.A., all from San Jose State University. He serves on the Board of Advisors at San Jose State's School of Engineering and on several private companies' boards of directors: Altos (acquired by Cadence May 2011), AutoESL (acquired by Xilinx February 2011), Scoperta, CLK, Tela Innovations, Shocking Technologies, Solido, and Sonics. Additionally, Jim serves as a strategic advisor to several private and public technology companies.