Metastability and Fatal System Errors
by Daniel Nenni on 06-09-2013 at 3:00 pm

Metastability is an inescapable phenomenon in digital electronic systems. This phenomenon has been known to cause fatal system errors for half a century. Over the years, designers have used convenient rules of thumb for designing synchronizers to mitigate it. However, as digital circuits have become more complex, smaller and faster with reduced voltage, these rules of thumb are beginning to fail. A new tool from Blendics, MetaACE, has been developed that accurately evaluates metastability failures in contemporary SoCs.

A synchronizer settles to a valid output voltage in a period of time that has no upper bound. This settling-time regime is largely exponential, with a time constant τ. Over the past several semiconductor process generations, τ has been proportional to FO4 delay and has decreased with every generation, providing better synchronizer performance each time. However, at process geometries of 90 nm and below, the relationship between τ and FO4 delay has changed. Operating conditions and process variations further aggravate the situation and can cause many orders of magnitude of variation in the MTBF of a synchronizer. As a result, traditional guidelines for synchronizer design are no longer adequate. To illustrate how these traditional rules of thumb fail, Figure 1 shows the effect of supply voltage on τ and, in turn, on MTBF.


Figure 1. Settling time-constant τ, FO4 delay and MTBF as a function of the supply voltage (V) for a 65 nm CMOS synchronizer operated with a 200 MHz clock.

Note that τ varies by almost an order of magnitude more than the delay through an FO4 inverter does. An equivalent increase in transistor threshold voltage Vth produces the same divergence between the FO4 delay and τ. Such an increase in Vth can occur when the synchronizer is operated at low temperature. The combination of low supply voltage and low temperature can lead to sub-second values of MTBF and an almost certain system failure.

It would be advantageous to be able to predict synchronizer performance before fabrication. This would aid the designer in building a reliable, but not over-designed synchronizer (over-design adds area and latency to an otherwise reliable design). Blendics has developed a software system, MetaACE, that accurately predicts synchronizer MTBF.

Simulating a synchronizer can provide the essential parameters intrinsic to a particular semiconductor process, but more information is needed to estimate the MTBF of the circuit in a particular application. Extrinsic parameters such as the clock period, clock duty cycle, rate of data transitions and number of stages in the synchronizer depend on the application and not on the semiconductor process. Given the intrinsic parameters, however, the MTBF for these various applications of a synchronizer design can be calculated. Figure 2 compares the calculated and simulated results for 2-, 3- and 4-stage synchronizers for various clock periods and a data transition rate of 200 MHz.


Figure 2. Comparison of calculated and simulated estimates of MTBF.

It is clear from Figure 2 that there are extrinsic conditions under which even a 2-flip-flop synchronizer at nominal supply voltage and temperature is unreliable. At a 1 ns clock period, a typical double-ranked 90 nm synchronizer's MTBF is less than a year, which is probably inadequate. Increasing the number of stages to four increases the MTBF to about 10^10 years, more than adequate for most cases.
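The arithmetic behind numbers like these is the standard exponential MTBF model used throughout the synchronizer literature. Below is a minimal sketch in Python; it is not MetaACE, and the parameter values (the time constant, the metastability window and the setup time) are hypothetical numbers chosen only to show how the extrinsic parameters (clock rate, data rate and number of stages) enter the calculation.

```python
import math

def synchronizer_mtbf(tau, t_w, f_clk, f_data, n_stages=2, t_setup=0.0):
    """Standard exponential MTBF model for a multi-flop synchronizer.

    MTBF = exp(S / tau) / (T_W * f_clk * f_data), where the available
    settling time S is roughly (n_stages - 1) clock periods minus the
    setup time lost in each stage.
    """
    t_clk = 1.0 / f_clk
    settling_time = (n_stages - 1) * (t_clk - t_setup)
    return math.exp(settling_time / tau) / (t_w * f_clk * f_data)

# Hypothetical intrinsic parameters, purely for illustration
# (these are NOT the 90 nm device of Figure 2):
tau = 50e-12      # metastability time constant, 50 ps
t_w = 20e-12      # metastability window, 20 ps

seconds_per_year = 3.15e7
for n in (2, 3, 4):
    mtbf = synchronizer_mtbf(tau, t_w, f_clk=1e9, f_data=200e6, n_stages=n)
    print(f"{n} stages: {mtbf / seconds_per_year:.2e} years")
```

Each added stage contributes roughly one more clock period of settling time inside the exponential, which is why Figure 2 shows such dramatic MTBF gains when going from two to three and four stages.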

Manufacturers of mission-critical products should carefully consider the risk of synchronizer failure and take the necessary steps so that their engineers and their semiconductor vendors ensure a satisfactory MTBF over the system lifetime, particularly when human lives are at risk.

About the Author
Dr. Jerome (Jerry) Cox, President, CEO and Founder, Blendics
Jerry received his BS, MS and ScD degrees in Electrical Engineering from MIT. From 1961 to 1973 he introduced small computers to biomedical research and participated in pioneering research in asynchronous computing. From 1974 to 1991 he was chair of the Washington University CSE Department, and in 1997 he became founder and Vice President for Strategic Planning of Growth Networks. This venture-funded chip-set company was sold to Cisco in 2000 for $350M and eventually led to the top-of-the-line Cisco router, the CRS-1. Over his professional career he has taught and done research in computer architecture, medical imaging, statistical estimation theory and medical signal analysis. He has designed and built many digital systems, using both the synchronous and asynchronous approaches. He has published over 100 scientific papers and book chapters and has ten patents.



DAC IP Workshop: Are You Ready For Quality Control?
by Paul McLellan on 06-07-2013 at 3:08 am

On Sunday I attended an IP workshop which was presented by TSMC, Atrenta, Sonics and IPextreme. It turns out that the leitmotiv of the afternoon was SpyGlass.

Dan Kochpatcharin of TSMC was first up and gave a little bit of history of the company. They built up their capacity over the years, as I’ve written about before, and last year shipped 15 million 8″ equivalent wafers. That’s a lot.

Ten years ago, TSMC could pretty much get away with handing out the SPICE rules and the DRC rules and letting design teams have at it. That no longer works, because the complexity of the process means that each generation needs the tool chain to be adapted (for example, double patterning at 20nm), and nobody, not even the biggest fabless guys, designs every block on their chip. IP for the process needs to be ready, especially memories, DDRx controllers, Ethernet and so on.

So TSMC started the IP alliance in 2000 for hard IP. Each block is tracked through a qualification process that runs from physical review (it must pass DRC or…fail) and DRM compliance (ditto), through pre-silicon assessment (design kit review), typical silicon assessment (tapeout review), split lot assessment and IP validation (characterization), to volume production (tracking customer use and yield). They have about 10,000 IP blocks in the system, of which 1500 had problems, 373 of them serious enough to have potentially been fatal. When a mask set costs $10M, that is $3.7B in saved mask charges alone.


In 2010 they extended the program to soft IP (RTL level), working with Atrenta SpyGlass as the signoff tool. In the first go-around, they focused on whether the RTL was correct and clean enough to pass Atrenta’s lint checks, whether the clocks were correct, and so on. By the second version, using SpyGlass Physical, they were on the lookout for potential congestion and timing problems.

Next up was Mike Gianfagna from Atrenta. The focus of Atrenta at DAC this year is that the tools are now ready for RTL signoff. This doesn’t replace post-layout signoff which will still be required and it certainly doesn’t imply that design closure will simply happen without any manual intervention and ECOs. But it can catch a lot of problems early and ensure that the physical design process goes smoothly. The big advantages of working at the RTL level are twofold. Firstly, when problems are found they are much easier to address. And secondly, the tools (SpyGlass and others) run orders of magnitude faster than at the netlist or physical levels.


Run time would not matter if there were not good correlation between what SpyGlass predicts at the RTL level and what reality turns out to be post-layout. The most mature part of Atrenta’s technology is in the test area, where they have been working for over 10 years. The prediction of stuck-at fault coverage at the RTL level is within 1% of the final numbers; for at-speed coverage it is within 2%. Power is more like 10%, area 5-10%, and so on.

Atrenta/TSMC’s IPkit is now used by all partners in TSMC’s IP program. There are twice as many partners involved at this year’s DAC as there were in 2012. IPkit 3.0 will add BugScope assertion synthesis to get better IP verification scripts.


After a brief break for coffee it was Sveta Avagyan of Sonics. She had been given a little design using IPextreme IP based around a ColdFire processor (68000 architecture). Sonics has various interconnect and network-on-chip (NoC) technologies. Sveta showed us how to use the GUI to configure the whole subsystem interconnect. She could then use SpyGlass to make sure it was clean. Things that SpyGlass calls out as errors may, in fact, be OK, so one way to resolve a problem is to issue a waiver saying it is actually OK. SpyGlass will record the waiver and track it all the way through the design process. Eventually, when the design is ready, it can be used in a chip or uploaded to IPextreme’s cloud-based infrastructure, Xena.


Warren Savage discussed how Xena makes it easy for IP creators to upload designs, either fixed or parameterizable, to the cloud and for users to download them. However, Xena can also run SpyGlass (in the cloud) to produce reports on the quality of the IP, record waivers and so on.

So SpyGlass is now the de facto standard for IP quality. TSMC uses it for their soft IP program; IP providers such as Sonics can use it during IP creation (whether that creation is manual or closer to compiler-generated, as at Sonics); and IPextreme can use it to qualify IP. Users can pull down some of the reports or run SpyGlass themselves on IP before deciding whether to use it. Everyone wants their IP to be SpyGlass clean (and for IPextreme and TSMC to be happy with the quality too, not to mention their actual users).


Hierarchical Design Management – A Must
by Pawan Fangaria on 06-06-2013 at 8:30 pm

Given the pace of technological progress, economic pressure, increased outsourcing and IP re-use, the semiconductor industry is one of the most challenged industries today. Products become outdated very frequently, triggering new development cycles, and it is difficult and costly to rebuild the whole foundation of design data each time. Systematic management of design data and its re-use is a must in order to manage such frequent changes in product designs, thereby maintaining and improving the economic health of the organization.

Last month I wrote about DesignSync, a robust design data management tool, and its multiple advantages. Digging further into that data management methodology, I found out how INSIDE Contactless (an innovative company designing chips for payment, access control, electronic identification, etc.) used the hierarchical design management methodology offered by Dassault Systemes in DesignSync to turn a difficult situation, with design data and IPs scattered across different databases, multiple teams and remote sites, into an opportunity: unified data management that led to business success.

INSIDE used design tools from Cadence and the Design Data Management (DDM) tool from Dassault, synchronized them to support modular data abstraction within the context of hierarchical configuration management, and obtained excellent team collaboration across multiple sites, resulting in improved productivity and time-to-market.

The concept of static and dynamic HREFs (hierarchical references) enables the creation of multiple design modules and hierarchies under a root design. This brings controlled flexibility and parallelism to the design development process, with a unique database for the overall design and strict control over overall integration before release. The project hierarchies can also contain software, documents, scripts and IPs from various sources with different time stamps, along with the design data. The data repositories worked on by particular teams can be placed at strategic locations to reduce network traffic. It is a client-server platform with servers nested hierarchically at multiple sites.

The design is built up hierarchically from the lowest unique abstraction of data, called a “module”, which is a consistent collection of files and folders with access commands such as check-in, check-out and modify. Revision history is maintained at each level. HREFs connect modules and are processed when design data is fetched into the workspace. This provides systematic, automated integration of the design under a unified Design Data Management (DDM) system, which can be either single or distributed.

Dassault provides static and dynamic work flows: “SITaR” (Submit, Integrate, Test, and Release) and “Golden Release”. INSIDE used “SITaR”, which is suited to static HREFs referring to a specific release, at the time of tapeout, when design and simulation are done on the baseline data. It used “Golden Release” during the development phase, where tags like “in development”, “Ready4Use” and “Golden” were applied to dynamic HREFs. This labelling of the hierarchical structure gives the integrator strict control over the data: he or she can validate and integrate the static data efficiently without any delay. A detailed methodology can be found in Dassault’s whitepaper here.
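To make the difference between static and dynamic HREFs concrete, here is a minimal conceptual sketch in Python. It is an illustration only, not DesignSync syntax or its API; the module name, tags and revisions are invented for the example.

```python
# Conceptual illustration only -- not DesignSync syntax or its API.

class Module:
    """Lowest unique abstraction of data: a versioned collection of files/folders."""
    def __init__(self, name):
        self.name = name
        self.revisions = []   # revision history, index = revision number
        self.tags = {}        # label -> revision, e.g. "Golden" -> 1

    def check_in(self, files, tag=None):
        """Add a new revision; optionally label it ("in development", "Golden", ...)."""
        self.revisions.append(files)
        rev = len(self.revisions) - 1
        if tag:
            self.tags[tag] = rev
        return rev

class Href:
    """Hierarchical reference from a parent design to a child module."""
    def __init__(self, module, revision=None, tag=None):
        self.module, self.revision, self.tag = module, revision, tag

    def resolve(self):
        """Fetch the referenced data into the workspace."""
        if self.revision is not None:     # static HREF: pinned to a specific release
            return self.module.revisions[self.revision]
        # dynamic HREF: follows whatever revision currently carries the label
        return self.module.revisions[self.module.tags[self.tag]]

# Usage: the integrator's root design mixes both kinds of reference.
analog_ip = Module("nfc_frontend")
analog_ip.check_in({"top.v": "r0"}, tag="in development")
golden_rev = analog_ip.check_in({"top.v": "r1"}, tag="Golden")

root_design = [
    Href(analog_ip, revision=golden_rev),   # static: frozen baseline for tapeout (SITaR style)
    Href(analog_ip, tag="Golden"),          # dynamic: tracks the currently blessed release
]
workspace = [h.resolve() for h in root_design]
print(workspace)
```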

This methodology fits well into the strategy for semiconductor PLM that I wrote about earlier. It helps with efficient data management; intelligence build-up for work estimation, scheduling and execution; cost estimation; and efficient, effective re-use of IPs to meet the challenges of SoC design and business.


Reshoring Semiconductor Manufacturing
by Paul McLellan on 06-06-2013 at 5:29 pm

So where in the world do you think semiconductor manufacturing is increasing the fastest? OK, Taiwan, that was pretty easy. But in second place, with over 20% of the world’s semiconductor equipment capital investment, is the US, growing faster than Europe, China and Japan, and equal with Korea.

This was not the case half a dozen years ago. Intel was building its first fab in China at Dalian. AMD was ramping Dresden (Germany). Most semiconductor companies were transitioning to fab-lite models with modern processes being manufactured in Asia, and old fabs being milked using non-leading-edge processes. It seemed inevitable that semiconductor manufacturing would mostly be outsourced just like most other manufacturing.

And then suddenly it wasn’t. Just as GE seems to be doing in white goods (interesting article in the Atlantic here), suddenly new and expanded fabs are sprouting all over the US. AMD spun out their manufacturing to form GlobalFoundries, and one of its first decisions was to build a brand new state-of-the-art fab in Saratoga County in upstate New York. Samsung decided to more than double their large fab in Austin, Texas, which I believe will be the biggest fab outside of Asia. Micron is expanding. Intel is expanding in Oregon and Arizona.

In 2013, it looks like over $8B will be spent on semiconductor equipment to outfit these new or expanding fabs. According to SEMI, the equipment company consortium:

  • Intel will spend up to $3.5 billion, primarily at their Fab 42 in Arizona and D1X fab in Oregon
  • GLOBALFOUNDRIES will invest $1.2-$1.8 billion on equipment at their new fab in New York
  • Samsung will spend $1.8-$2.5 billion to increase capacity at their Austin facility
  • Micron, CNSE (NanofabX for G450C), IBM, and Maxim may collectively spend up to $1.5 billion in equipment this year

The numbers are expected to be even bigger in 2014.

See the SEMI report on this topic here.

And in case you’ve never heard of SEMI:

The industries that comprise the microelectronics supply chain are increasingly complex, capital intensive, and interdependent. Delivering cutting-edge electronics to the marketplace requires:

  • Construction of new manufacturing facilities (fabs)
  • Development of new processes, tools, materials, and manufacturing standards
  • Advocacy and action on policies and regulations that encourage business growth
  • Investment in organizational and financial resources
  • Integration across all segments of the industry around the world

Addressing these needs and challenges requires organized and collective action on a global scale.
SEMI facilitates the development and growth of our industries and manufacturing regions by organizing regional trade events (expositions), trade missions, and conferences; by engaging local and national governments and policy makers; through fostering collaboration; by conducting industry research and reporting market data; and by supporting other initiatives that encourage investment, trade, and technology innovation.


ARM: AMBA 5, Cortex-A12, Mali, video, POP…
by Paul McLellan on 06-06-2013 at 4:39 pm

ARM announced several new products at DAC in a number of different spaces. In addition, I got invited to a briefing with Simon Segars, about 30 days before he takes over as CEO of ARM. I asked Simon if he expected to make any major changes and he basically said ‘no’. ARM’s basic strategy in both mobile and now enterprise is coming to fruition. Unlike Intel, with Brian Krzanich recently installed as CEO, ARM doesn’t need to make major changes to get into fast-growing markets. ARM is already in mobile, and while Intel has one or two high-profile wins (a Samsung tablet, for example), ARM is currently dominant.


The new product announcements started with the AMBA 5 CHI specification, intended for the enterprise market. It enables high-frequency, non-blocking coherent data transfer between tens or even hundreds of processors, with support for distributed level 3 caches and very high data rates. It was developed with input from over 20 ARM partners. Verification IP is available from Jasper, Cadence, Mentor and Synopsys.

Smartphones are, of course, growing fast. But the fastest growing part of the market is not the ultra-premium smartphones that we Silicon Valley types have, primarily the iPhone and Galaxy, but the mid-range market. ARM announced a new processor specifically targeted at this mid-range market, the Cortex-A12. In addition, they announced a mid-range Mali GPU, the T622, and a new Mali-V500 video engine. Finally, in the physical library space, they announced performance optimization packages (POPs) for these new processors.


The mobile space is growing fast. There will be over a billion smartphones shipped this year, and tablets are already outshipping PCs (hence Intel’s need to do something so as to participate in this market, either with successful products or a successful foundry strategy). ARM has a good entry level offering with Cortex-A53, and a good high end offering with A57. The new announcement drops in between.


The A12 has a 40% performance uplift over the A9 with the same energy efficiency. It will (although not yet) work in big.LITTLE configurations and has good security with virtualization and TrustZone. The new Mali-T622 is the smallest GPU on the market, with one or two cores, and has a 50% energy improvement over the T600 series. There is also the Mali-V500, a single-core 1080p60 HD (scalable to 4K120) video encoder/decoder. ARM Frame Buffer Compression (AFBC) cuts memory bandwidth needs by 50% and so lowers power even further. And for premium content providers there is TrustZone secure video built right into the video engine, so that encrypted content makes it all the way into the decoder.


Finally, there is POP IP for optimizing PPA and time to market. ARM has worked with GlobalFoundries, using their GF28-SLP process, for the Cortex-A12, and with TSMC, using their 28HPM process, for both the Cortex-A12 and the Mali-T622.

Bottom line: with massive smart mobile device growth, the market continues to segment. The mid-range, where these new cores are positioned, is expected to be over half a billion units by 2015 and will require full-featured SoCs to hit the appropriate price points.


DAC: Tempus Lunch
by Paul McLellan on 06-06-2013 at 4:03 pm

I had time for lunch on Monday. That is to say, there was a Cadence panel session titled “Has Timing Signoff Innovation Become an Oxymoron? What Happened and How Do We Fix It?”

The moderator was Brian Fuller, lately of EE Times but now Editor-in-Chief at Cadence (I’m not sure quite what that means either). On the panel were Dipesh Patel, EVP of Physical IP at ARM; Tom Spyrou, Design Technology Architect at Altera; Richard Trihy, Director of Design Methodology at GlobalFoundries; and Anirudh Devgan of Cadence.

Dipesh started off by saying that at ARM 60% of the design process is spent in the timing closure loop. That’s not too bad for ARM itself, since any effort there is heavily leveraged, but their partners cannot afford that much.

Tom pointed out that it is harder to get all three of capacity, runtime and accuracy than it used to be. At his previous job at AMD, they had one timing scenario that took 750 GB and several days to run.

Richard thought the main issue is variation, and he is worried that he is not seeing very effective solutions. They still hard-code OCV and margins for clock jitter and IR drop. But there is not much margin to go around, and it will only get less.

Anirudh cheated and said that issue #1 was speed and capacity (and a fanatical dedication to the Pope). #2 was accuracy, but #3 is that fixing the problems in the closure loop takes too long.

Everyone on the panel, except Anirudh, was presumably a PrimeTime user since, well, there isn’t anything else to be a user of until the very recently announced Cadence Tempus product, which was lurking in the background but wasn’t really talked about explicitly on the panel. Indeed, Tom Spyrou, when at Synopsys many years ago, was in charge of PrimeTime development.

Everyone agreed that signoff innovation wasn’t really an oxymoron, since there has been a lot of innovation: current source models, multi-corner multi-mode, parallel processing. But, of course, there needs to be more, because there are still big issues getting designs out. And there was statistical STA (SSTA), which turned out to be a blind alley after a lot of investment.

Anirudh pointed out that in the commercial space there has only been one product for the last 20 years; anyone else got sued or bought (or both), Motive way back when, ExtremeDA more recently. There has been lots going on in academia, but they were defocused by SSTA and other non-mainstream things. TSMC has now started to certify timers, so that might open up the competition in the same way as has happened with circuit simulation (FineSim, BDA etc).

A question was asked about standards. Tom echoed my thoughts, which are that you can only standardize things once the dust settles; in the meantime, non-standard solutions persist. It is hard to have a standards body declare what is standard while the competition over what should be standard is still being fought. Richard still wants to see something to help with the OCV methodology, since this is going to get so much worse at 14nm and 10nm.

An engineer from Qualcomm suggested that depending on bigger and faster machines and more memory isn’t really tenable: from an EDA industry perspective, can we look at how the infrastructure of computing is changing, and is there a push towards a more compute-aware paradigm? That was a slow pitch right across the plate, given that Tempus does just that. So Anirudh hit the ball out of the park, pointing out that a single machine with big memory (1 TB) is very expensive, but lots of memory spread across lots of machines is easier to arrange, with maybe 5,000 machines in a server farm. The cost of the machines is much lower than the cost of the EDA tools. But to work well this needs top-down parallelism, not bottom-up multithreading (like, er, Tempus).

The panel was asked whether there is a conflict of interest when EDA companies treat signoff as a competitive advantage, which can slow down the innovation process. Central planning hasn’t worked too well in economies, and it seems unlikely to do so in EDA markets. Yes, it is always tempting to see wasted effort: what if Tempus didn’t need to build some of its infrastructure because it could just borrow it from PrimeTime? That is not going to happen, and competition drives innovation hard, because EDA is a business where only the #1 and #2 products in any space make serious money.

Designs are getting bigger, processes are getting more complicated, variation getting worse. I don’t think that is going to change.


DAC: Wally’s Vision
by Paul McLellan on 06-06-2013 at 3:07 pm

One new feature at DAC this year is that several of the keynotes are preceded by a ten-minute vision of the future from one of the EDA CEOs. Today it was Wally Rhines’s turn. Wally is CEO of Mentor Graphics. He titled his talk Changing the World Through EDA. Since EDA as we know it started in the late 1970s, the number of transistors on a design has increased by over 5 orders of magnitude, while the number of designers has only grown a few percent per year over the period. We had manual design well into the 1970s, and since then we have created a $6B industry.

Moore’s law is not ending yet. If anything the slope of adoption of new technologies (28nm, 20nm…) has accelerated and not slowed. However, there are big problems to solve: FinFETs, reliability, thermal and stress, extreme low power.

But the unit volume growth of transistors is like nothing the world has ever seen. The CAGR in volume for coffee is 1.6%; for computers it is 9.5%. Even “explosive” cell-phone growth is 14.8%. But transistors have a CAGR of 72%. IC revenue per transistor follows a traditional learning curve (as does coffee or Japanese beer): the cost decreases linearly when plotted on a logarithmic graph, since the cost is decreasing exponentially (this is part of what Moore’s law means).

EDA revenue per transistor has a similar curve, since we aren’t (unfortunately) eating up an increasing fraction of semiconductor companies’ profits. As a result, EDA today is $6.5B, 2% of the $300B semiconductor market, and has been for nearly 15 years.

The next big opportunities in the semiconductor area, in Wally’s opinion, are: photonics, MEMS (mechanical), 3D IC (TSV, interposers etc) and new materials.

But systems will adopt automation too. They are still at the manual phase that semiconductor design was in during the 1970s. For example, the BoM for a car today is nearly 50% electronics; they can’t do it manually. And electronic systems are a $1.9T industry, 2% of which is $38B.

Welcome to the next 50 years. Huge growth ahead.


DAC: Gary Smith: Don’t Give Away Your Models
by Paul McLellan on 06-06-2013 at 4:10 am

As is now traditional, Gary Smith kicked off DAC proper (there were workshops earlier and some co-located conferences started days before). He started by dismissing the idea that it costs $170M to do an SoC design.

In fact he looked at three different cases. Firstly, the completely unconstrained design. Well, no design is completely unconstrained, but for the main part of the market (mobile of one form or another) the power budget is 5W. EDA has actually done a good job of solving power problems, and the mixture of tools and methodologies has cut power consumption dramatically. Nobody gets an unconstrained development schedule either; it is always 9-12 months max or you are out of business.

If you have $50M to spend, you get 5W (nobody gets more) which gives you 3M gates at 1.8GHz and the same 9-12 months to spend your $50M.

Lower still, at $25M, you still get a reasonable amount of real estate to play with. $25M is important because VC funding taps out at that point (actually I’m not sure how much VC funding is going on for fabless companies period, but for sure they are not going to fund a $100M development). But if a startup picks its design carefully then it can compete.

Gary then talked about if, how and when the EDA industry will take over the embedded software industry, which is struggling with lack of profitability due to the availability of good enough open source solutions, especially based on Linux.

Emulation boxes are the key to effective virtual prototypes. And now Mentor, Cadence and Synopsys all have one. Gary is the perfect straight man to my panel this year, which is on hardware assisted verification, of which emulation boxes are a big part.

The reality today is that silicon virtual prototypes don’t quite work as cleanly as they should. Architects don’t have accurate enough power models to do their work, so as implementation proceeds the architecture needs to be reworked, the hardware-software partitioning redone, accelerators added and so on.

Gary reckons that EDA’s secret sauce is that we have the models. Give away the tools but not the models is his message for 2013.

Gary’s forecast for EDA (or technically Laurie Balch’s) is:
Year    Market
2013    $6.1B
2014    $6.4B
2015    $6.7B
2016    $7.5B
2017    $8.3B

In this, EDA is defined as:

  • True EDA
  • No services
  • No ARM (it’s too big and not really EDA)
  • All other IP (Lip-Bu is going to buy them all anyway)
  • But not counting any non-commercial, internal IP development

The encore performance of Gary’s presentation will be at the DAC pavilion panel on the show floor at 9.15 this morning. If you can’t find the pavilion, it is technically booth 509.

I am moderating a panel on hardware assisted verification at 4pm on Tuesday on the pavilion panel.


Kaufman Award: Chenming Hu
by Paul McLellan on 06-06-2013 at 3:44 am

This year’s Kaufman award winner is Chenming Hu. In contrast to previous years, this was presented on the Sunday evening of DAC instead of at a separate event in San Jose. Chenming’s career was reviewed by Klaus Schuegraf, Group Vice President of EUV Product Development at Cymer, Inc (now part of ASML) and also one of the (many) students of Dr Hu.

One thing that I had no idea about was that Chenming had climbed Everest at the age of 50. That seems pretty much up there (see what I did) with inventing the FinFET. However, Dr Hu’s career goes back a long way before the FinFET. He was instrumental in solving hot-electron problems back in the 1um (almost typed nm there, I’m so used to it) era. And then getting through the 3V scaling barrier.

But in addition to being the father of the FinFET, he is the father of the BSIM model. This was the first industry standard non-proprietary model and is now used by hundreds of companies (basically almost everyone) and has revolutionized many aspects of circuit simulation, making it more competitive and higher performance.

Lip-Bu Tan, CEO of Cadence, briefly spoke. It turns out that Lip-Bu and Chenming are neighbors, so they know each other well quite apart from the obvious semiconductor connection. Lip-Bu talked about a couple of boards that he is on with Chenming.


Chenming then accepted the award from Kathryn Kranen (chairman of EDAC) and Donatella Sciuto (president of IEEE CEDA). He talked briefly about his career. His first professor was Andy Grove so his relationship with Intel certainly goes back a long way. He has had countless students and today over 50 leaders in the industry went through various programs with him.

To wrap up, Dr Hu looked forward rather than back. Summarizing the FinFET’s possibilities, he said:

  • Great for digital
  • Even better for analog
  • Better stability and reliability
  • Can be scaled to the end of lithography

Chenming Hu will be interviewed by Peggy Aycinena on the DAC pavilion panel at 11.15am later this morning. Technically the DAC pavilion is booth 509.


The Return of the “Moore Noyce” Company
by Ed McKernan on 06-04-2013 at 7:00 pm

It has been a little over a fortnight since Paul Otellini officially stepped down from the CEO post and yet it seems to be more than a long time gone. Unlike his predecessors, he was not asked to remain on the board and perhaps it is a sign that his complete disengagement from the company was necessary to complete a future strategic engagement. As stated in earlier columns, the pace of change that the new mobile market is enforcing on all major silicon suppliers is far greater than what we have witnessed since the beginning of the PC era. With mobile, all silicon is pursuing leading edge process technologies…