
EDPS: SoC FPGAs
by Paul McLellan on 04-09-2012 at 4:00 am

Mike Hutton of Altera spends most of his time thinking about a couple of process generations out. So a lot of what he worries about is not so much the fine-grained architecture of what they put on silicon, but rather how the user is going to get their system implemented. 2014 is predicted to be the year in which over half of all FPGAs will feature an embedded processor, and at the higher end of Altera and Xilinx’s product lines it is already well over that. Of course SoCs are everywhere, in both regular silicon (Apple, Nvidia, Qualcomm…) and FPGA SoCs from Xilinx, Altera and others.

The challenge is that more and more of a system is software, but the traditional FPGA programming model is hardware: RTL, state machines, datapaths, arbitration, buffering, all highly parallel. Software engineers are not going to learn Verilog, so there is a need for a programming model that represents an FPGA as a processor with hardware accelerators or as a configurable multi-core device: one that takes a software-centric view of the world but can still build the FPGA so that the entire system meets its performance targets.

There have been many attempts to make C/C++/SystemC compile into gates (Forte, Catapult, C-to-Silicon, Synphony, AutoESL…) but these really only work well for for-loop style algorithms that can be unrolled, such as FIR filters. They don’t work so well for complex algorithms. What is really needed is to analyze the software to find the bottlenecks and then “automatically” build hardware to accelerate them.
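To make the distinction concrete, here is a minimal sketch (my own illustration, not from the talk) of the kind of loop that high-level synthesis tools handle well: a fixed-length FIR filter whose inner loop has a constant trip count and no data-dependent control flow, so it can be fully unrolled into a parallel multiply-accumulate datapath. The names and sizes are made up for this example.

```c
/* Sketch of an HLS-friendly kernel: a 16-tap FIR filter.
 * The fixed trip count and lack of data-dependent branching let an
 * HLS tool unroll the inner loop into a parallel multiply-accumulate
 * datapath. Names and sizes here are illustrative only. */
#define TAPS 16

float fir(const float coeff[TAPS], const float sample[TAPS])
{
    float acc = 0.0f;
    for (int i = 0; i < TAPS; i++) {   /* candidate for full unrolling */
        acc += coeff[i] * sample[i];
    }
    return acc;
}
```

A complex, control-heavy algorithm has no such regular loop to unroll, which is why this style of tool struggles with it.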

The most promising approach at present seems to be OpenCL, a programming model developed by the Khronos Group to support multicore programming and silicon acceleration. From a single source it can map an algorithm onto a CPU and GPU, onto an SoC and an FPGA, or directly onto an FPGA. There is a natural separation between the code that runs on accelerators and the code that manages the accelerators (which can run on any conventional processor).
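As a rough illustration of that separation (my sketch, not Altera's flow), the OpenCL kernel below is the part that would be compiled for the accelerator, whether GPU cores or FPGA fabric, while the host code that builds the program and enqueues the kernel runs on a conventional processor. The kernel name and arguments are made up for this example.

```c
/* Illustrative OpenCL kernel: this is the portion targeted at the
 * accelerator (GPU cores or FPGA fabric). The host code that builds
 * the program and enqueues the kernel runs on a conventional CPU. */
__kernel void vec_scale(__global const float *in,
                        __global float *out,
                        const float gain)
{
    size_t i = get_global_id(0);   /* one work-item per element */
    out[i] = gain * in[i];
}
```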

Of course, software is going to be the critical path in the schedule if these approaches are not also merged with virtual platform technology so that software development can proceed before hardware is available. Sometimes the software load already exists, since software is longer-lived than hardware and may last for 4 or 5 hardware generations. But if it is being created from scratch then it may have a two-year schedule, meaning that it is being targeted at a process generation that doesn’t yet have any silicon at all.


EDPS: Parallel EDA
by Paul McLellan on 04-08-2012 at 10:00 pm

EDPS was last Thursday and Friday in Monterey. I think this is a conference that more people would benefit from attending. Unlike some other conferences, it is almost entirely focused on user problems rather than deep dives into things of limited interest. Most of the presentations are more like survey papers and fill in gaps in areas of EDA and design methodology that you probably feel you ought to know more about but don’t.

For example, Tom Spyrou gave an interesting perspective on parallel EDA. He is now at AMD, having spent most of his career in EDA companies, fox turned gamekeeper if you like. So he gets to AMD and the first thing he notices is that all the multi-threaded features he had spent the previous few years implementing are actually turned off almost all the time. The reality of how EDA tools are run in a modern semiconductor company makes them hard to take advantage of.

AMD, for example, has about 20,000 CPUs available in Sunnyvale. They are managed by LSF and people are encouraged to allocate machines fairly across the different groups. As a result, requiring multiple machines simultaneously doesn’t work well. Machines need to be used as they become available, and waiting for a whole cohort of machines is not effective. It is also hard to take advantage of the best machine available, rather than one that has precisely the resources requested.

So given these realities, what sort of parallel programming actually makes sense?

The simplest case is where there is non-shared memory and coarse-grained parallelism with separate processes. If you can do this, then do so. DRC and library characterization fit this model.

The next simplest case is when shared memory is required but access is almost all read-only. A good example is doing timing analysis for multiple corners, where most of the data is the netlist and timing arcs. The best way to handle this is to build up all the data and then fork off separate processes to do the corner analysis using copy-on-write. Since most of the pages are never written, most will be shared and the jobs will run without thrashing.
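A minimal sketch of that pattern (my illustration, not Spyrou's code): build the large read-mostly database once, then fork() one process per corner. On Linux the children share the parent's pages via copy-on-write, so the netlist and timing-arc data are physically shared until a child actually writes to a page. The data structure and analyze_corner() are placeholders.

```c
/* Sketch of the fork/copy-on-write pattern for multi-corner analysis. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define NUM_CORNERS 4

/* Placeholder for the large, read-mostly timing database. */
typedef struct { /* netlist, timing arcs, ... */ int dummy; } Database;

static void analyze_corner(const Database *db, int corner)
{
    /* Placeholder: read db, write per-corner results to its own file. */
    printf("corner %d analyzed in pid %d\n", corner, (int)getpid());
}

int main(void)
{
    Database *db = malloc(sizeof *db);   /* build all shared data first */

    for (int corner = 0; corner < NUM_CORNERS; corner++) {
        pid_t pid = fork();
        if (pid == 0) {                  /* child: pages shared until written */
            analyze_corner(db, corner);
            _exit(0);
        }
    }
    while (wait(NULL) > 0)               /* parent waits for all corners */
        ;
    free(db);
    return 0;
}
```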

The next most complex case is when shared memory is needed for both reading and writing. Most applications are actually I/O bound and don’t, in fact, benefit from this but some do: thermal analysis, for example, which is floating-point CPU-bound. But don’t expect too much: 3X speedup on 8 CPUs is pretty much the state of the art.
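For what it is worth, here is a tiny sketch (my own, not from the talk) of the kind of compute-bound, read-write shared-memory work that does benefit: one Jacobi relaxation sweep over a thermal grid, parallelized with OpenMP. Even for code like this, memory bandwidth and synchronization typically cap the speedup well below the core count.

```c
/* Sketch of CPU-bound shared-memory parallelism: one Jacobi relaxation
 * sweep over a temperature grid, split across threads with OpenMP.
 * Grid size and data are placeholders for illustration only. */
#include <omp.h>

#define N 1024

void jacobi_sweep(double t_old[N][N], double t_new[N][N])
{
    /* Each thread updates its own band of rows; t_old is read by all
     * threads, each element of t_new is written by exactly one thread. */
    #pragma omp parallel for
    for (int i = 1; i < N - 1; i++) {
        for (int j = 1; j < N - 1; j++) {
            t_new[i][j] = 0.25 * (t_old[i - 1][j] + t_old[i + 1][j] +
                                  t_old[i][j - 1] + t_old[i][j + 1]);
        }
    }
}
```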

Finally, there is the possibility of using the GPU hardware. Most tools can’t actually take advantage of this but in certain cases the algorithms can be mapped onto the GPU hardware and get a massive speedup. But it is obviously hard to code, and hard to manage (needing special hardware).

Another big issue is that tools are not independent. AMD has a scalable simulator that runs very fast on 4 CPUs provided the other 4 CPUs are not being used (presumably because the simulator needs all the shared cache). On multicore CPUs, how a tool behaves depends on what else is on the machine.

What about the cloud? This is not really in the mindset yet. Potentially there are some big advantages, not just in terms of scalability but in ease of sharing bugs with EDA vendors (which may be the biggest advantage).

Bottom line: in many cases it may not really be worth the investment to make the code parallel; the realities of how server farms are managed make all but the coarsest-grained parallelism a headache.


Google Glasses = Darknet!
by Daniel Nenni on 04-08-2012 at 9:00 pm

Google’s Project Glass and Augmented Reality will be the tragic end to the world as we once knew it. As we become more and more dependent on mobile internet devices, we become less and less independent in life. Consider how much of your critical personal and professional information (digital capital) is stored on the internet, and none of it is safe. With a quick series of keystrokes from anywhere in the world, your digital capital can be altered or wiped clean, leaving nothing but flesh and bones!

“People of the past! I have come to you from the future to warn you of this deception. The few have used artificial intelligence technology to enslave the many through the use of thought control. I lead a band of Anti Geeks who fight against oppressive technologies. But we alone are not strong enough. The revolution must begin now! Join us to fight for non-augmented reality!”

If you haven’t read the “Daemon” and “Freedom” books by Daniel Suarez you should, if you dare to take a peek into what augmented reality has in store for us all. Daniel Suarez is an avid gamer and technology consultant to Fortune 1000 companies. He has designed and developed enterprise software for the defense, finance, and entertainment industries. The book name “Daemon” is quite clever. In technology, a daemon is a computer program that runs in the background and is not under the control of the user. In literature a daemon is a god, or a demon, or in this case both.

The book is centered on the death of Matthew Sobol, PhD, cofounder of CyberStorm Entertainment, a pioneer in online gaming. Upon his death, Sobol’s online games create an artificial-intelligence-based new world order, the “Darknet”, which is architected to take over the internet and everything connected to it for the greater good. The interface to the Darknet is a pair of augmented reality glasses much like the ones Google is developing today. While the technology described in the books seems like fiction, most of it already exists and the rest certainly will. The technology speak is easy to follow for anyone with a minimal understanding of computers and the internet; very little imagination is required.

The book’s premise is “Knowledge is power”, or more specifically “He who controls digital capital wins”. So you have to ask yourself, how long before just a handful of companies rule the earth (Apple, Google, Facebook)? Look at the amount of digital capital Google has access to:

  • Google Search (Internet and corporate intranet data)
  • Google Chrome (Personal and professional internet browsing)
  • Android OS (Mobile communications)
  • Google Email-Voice-Talk
  • Google Earth-Maps-Travel
  • Google Wallet
  • Google Reader

There are dozens of Google products that can be used to collect and manipulate public and private data in order to control our thoughts and ultimately conquer the digital world.

The digital world is rampant with security flaws and back doors which could easily enable the destruction of a person, place, or thing. A company or brand name years in the making can be destroyed in a matter of keystrokes. In the book, a frustrated Darknet member erases the digital capital of a non Darknet member who cuts in line at Starbucks. Depending on my mood that day, I could easily do this.

It’s not like we have a choice in all this since the digital world is now a modern convenience. We no longer have to store our most private information in filing cabinets, safety deposit boxes, or even on our own computer hard drives. It’s a digital world and we are digital girls. The question is, who can be trusted to secure Darknet (Augmented Reality)?


Designing for Reliability
by Paul McLellan on 04-08-2012 at 8:06 pm

Analyzing the operation of a modern SoC, especially analyzing its power distribution network (PDN), is getting more and more complex. Today’s SoCs no longer operate on a continuous basis; instead, functional blocks on the IC are only powered up to execute the operation that is required and then go into a standby mode, perhaps not clocked and perhaps powered down completely. Of course this on-demand power has a major impact in reducing standby power.

The development of CPF and UPF over the last few years has had a major impact. Low-power techniques including power-gating, clock-gating, voltage and frequency scaling are represented in the CPF/UPF and are verified for proper implementation. But the electrical verification tools to simulate these complex behaviors in time are still evolving.

The problem is that outdated verification techniques are inadequate. Static voltage drop simulations or simple dynamic voltage drop simulations on the PDN will not adequately represent the complex switching or power transitions as blocks come on- and off-line. The state transitions need to be checked rigorously to guarantee that power supply noise does not affect functionality.

The variations in power as cores, peripherals, I/Os and IP go into different states can be huge, with large current inrushes. Identifying these critical transitions and using them for electrical simulation of the PDN is critical. The transitions can be identified at the RTL level where millions of cycles can be processed and the relatively few cycles of interest can be selected for full electrical simulation.

Another area requiring detailed simulation is voltage islands that either power down or operate at a lower voltage (and typically frequency). This can have a major impact on leakage power as well as dynamic power, but reliability verification is complex. Failures can happen if signal transitions occur before the voltage levels on these islands reach the full-rail voltage. There is a major tradeoff: if the islands are powered up too fast, the inrush current can cause errors in neighboring blocks or in the voltage regulator; if they are powered up too slowly, then signals can start to arrive before the block is ready. To further complicate things, the inrush currents interact with the package and board parasitics too.

As SoCs have become more and more complete, previously separate analog chips such as those for GPS or RF are now on the same silicon as high-speed digital cores. Sensitive analog circuits need to be analyzed in the context of noisy digital neighbors to check the impact of substrate coupling. Isolation techniques don’t always work well at all frequencies and, of course, this needs to be analyzed in detail.

3D techniques, such as wide-I/O memory on top of logic, add a further level of complexity. These approaches have the potential to deliver major increases in performance and reductions in power, but they also create power-thermal interactions which need to be analyzed. Not only can the heat from one die affect its own performance, but heat from a neighboring die can do so too.

IC designers can no longer just assume things are correct by construction in this climate. Multi-physics simulation of thermal, mechanical, and electrical behavior is a must for reliability verification.

See Arvind’s blog on the subject here.


Synopsys Users Group 2012 Keynote: Dr Chenming Hu and Transistors in the Third Dimension!
by Daniel Nenni on 04-08-2012 at 7:00 pm

It was an honor to see Dr. Chenming Hu speak and to learn more about FinFETs, a technology he has championed since 1999. Chenming is considered an expert on the subject and is currently the TSMC Distinguished Professor of Microelectronics at the University of California, Berkeley. Prior to that he was the Chief Technology Officer of TSMC. Hu coined the term FinFET 10+ years ago when he and his team built the first CMOS FinFETs and described them in a 1999 IEDM paper. The name FinFET was chosen because the transistors (technically known as Field Effect Transistors) look like fins; the fins are the 3D part of the name 3D transistors. Dr. Hu didn’t register patents on the design or manufacturing process, in order to make it as widely available as possible; he was confident the industry would adopt it, and he was right.

There is a six-part series on YouTube entitled “FinFET: What it is and does for IC products, history and future scaling,” presented by Chenming; unfortunately it is in Mandarin. I have asked Paul McLellan to blog it since he speaks Mandarin, but the slides look identical to what was presented at the keynote so it may be worth a look:

For those of you who have no idea what a FinFET is, watch this Tri-Gate for Dummies video by Intel. They call a FinFET a Tri-Gate transistor, which they claim to have invented. I expect they are referring to the name rather than the technology itself. 😉

Probably the most comprehensive article on the subject was published last November by IEEE Spectrum “Transistor Wars: Rival architectures face off in a bid to keep Moore’s Law alive”. This is a must read for all of us semiconductor ecosystemites. See, like Intel I can invent words too.

Unfortunately, according to Chenming, lithography will not get easier, double patterning will still be required, and the incremental SoC design and manufacturing cost is still unknown. Intel has stated that there is a +2-3% cost delta, but Chenming sidestepped the pricing issue by joking that he is a professor, not an economist. In talking to Aart de Geus, I learned that Synopsys will be ready for FinFETs at 20nm. Since Synopsys has the most complete design flow and semiconductor IP offering, they should be the ones to beat in the third dimension, absolutely. You can read about the current Synopsys 3D offerings HERE.

Why the push to FinFETs at 20nm, you ask? Because of scaling: from 40nm to 28nm we saw significant opportunities for a reduction in die size and power requirements plus an increase in performance. Unfortunately, standard planar transistors are not scaling well from 28nm to 20nm, reducing the power and die-size savings and the performance boost customers have come to expect from a process shrink (Nvidia Claims TSMC 20nm Will Not Scale?). As a result, TSMC will offer FinFETs at the 20nm node, probably as a mid-life node booster, just my opinion of course. Expect 28nm, 20nm, and 14nm roadmap updates at the TSMC 2012 Technology Symposium next week. This is a must-attend event! If you are a TSMC customer, register HERE.

Why am I excited about transistors in the third dimension? Because it is the single most disruptive technology I will see in my illustrious semiconductor ecosystem career and it makes for good blog fodder. It also challenges the mind and pushes the laws of physics to the limits of our current understanding, that’s why.


IP-SoC day in Santa Clara: prepare the future, what’s coming next after IP based design?
by Eric Esteve on 04-05-2012 at 10:16 am

D&R IP-SoC Days Santa Clara will be held on April 10, 2012 in Santa Clara, CA, and if you plan to attend, just register here. The IP market is a small world, and EDA a small market if you look at the revenue generated… but both are essential building blocks for the semiconductor industry. It was not clear back in 1995 that IP would become essential: at that time, the IP concept was devalued by products of poor quality and inefficient technical support, leading program managers to be very cautious about deciding to buy. Making was sometimes more efficient…

Since 1995, the market has been cleaned up, with the poor-quality suppliers disappearing (going bankrupt or being sold for their assets), and the remaining IP vendors have learned the lesson. No vendor marketing a protocol-based (digital) function would now take the chance of launching a product that has not passed an extensive verification program, and the vendors of mixed-signal IP functions know that the “Day of Judgment” comes when the silicon prototypes are validated. This leaves very little room for low-quality products, even if you may still find some newcomers deliberately launching a poor-quality RTL function, naively thinking that lowering the development cost will allow them to sell at a low price and buy market share, or some respected analog IP vendor failing to deliver a function that meets spec, just because… analog is analog, and sometimes closer to black magic than to science!

If you don’t trust me, just look at products like application processors for wireless handsets or set-top boxes: these chips are made up of 80% reused functions, whether internal or coming from an IP vendor. This literally means that several dozen functions, digital or mixed-signal, are IP. If only one of these were to fail, a $50+ million SoC development would miss its market window. That said, will the IP concept as it stands today in 2012 be enough to support the “More than Moore” trend? In other words, if IP in the 2000s is what the standard cell was in the 1980s and 1990s, what will be the IP of the 2020s? You will find people addressing this question at IP-SoC Days!

So, the interesting question is where the IP industry stands on the spectrum that runs from a single IP function to a complete system. Nobody would claim that we have reached the upper end of that spectrum, where you can source a complete system from an IP vendor. Maybe the semiconductor industry is not ready to source a complete IP system: what would be the added value of the fabless companies if and when that happens?

In the past, IP vendors were far from being able to provide subsystem IP, which requires a strong understanding of the specific application and market segment, the associated technical know-how, and, even harder to meet, adequate funding to support up-front development. But they are starting to offer IP subsystems; just look at the recent release of a “complete sound IP subsystem” by Synopsys, or the emphasis Cadence puts on EDA360… So IP-SoC Days will no longer be IP-centric only, but IP-subsystem-centric!

By Eric Esteve from IPnest


Will Subsystem IP finally find a market? ARC-based sound subsystem IP is on track…
by Eric Esteve on 04-05-2012 at 4:03 am

Will the launch of the ARC-based complete sound subsystem IP by Synopsys ring the bell for the opening of a new IP market segment, “subsystem IP”? Look at the evolution of the IP market: starting from standard cell libraries and memory compilers in the 1990s, moving to commodity functions like UARTs or I2C in the late 1990s, then to complex functions in the 2000s, with the long list of standards-based PCI, PCIe, USB 2.0, SATA and more. If you start brainstorming, you will likely conclude that subsystem IP should be the next step. Your customers, the SoC integrators, have to fill ICs with more and more functions, the technology makes it possible (think 40 or 28nm), and their customers ask for it. The time-to-market (TTM) pressure, especially in markets like wireless and consumer electronics, is so strong, and the design team has to integrate so many functions in a technology that is more difficult to manage, that the “lego” approach (building a complete system by integrating, not designing, various subsystems) looks very attractive. Subsystem IP then looks like the ideal solution!

Synopsys’ sound subsystem IP offering is a complete solution: hardware, software, FPGA prototyping and virtual prototyping. The hardware piece is built around a single- or dual-core ARC audio processor and is highly configurable, which is an important point, as SoC integrators always need to differentiate. If they want to provide an ultra-low-power solution, they will implement the version running at 10 MHz, whereas the 800 MHz version allows for performance differentiation. Likewise, an SoC integrator can choose a very small footprint version (0.2 sq. mm in 40nm) if the goal is a low-cost, low-area SoC. In both cases, the ARC-based subsystem IP is a complete, drop-in solution.

The market segments targeted are consumer electronics and wireless, with applications like set-top boxes (STB), HDTV, handsets, smartphones and multimedia. All of these require sound capabilities (which can be quite advanced, like surround sound or multi-channel), but sound is not a core competency of the chip maker, so a drop-in solution can make sense to complete the SoC integration in line with the TTM requirement. In other words, the subsystem IP provider is not in competition with the SoC design team: the sound system is one of the features the SoC needs to support, but not the most important feature of the SoC you are designing. Adopting such a drop-in solution accelerates TTM, does not compete with the design team’s core competencies (avoiding the NIH syndrome) and minimizes risk: you integrate a pre-verified function.

When designing an SoC for a smartphone or STB application, you can’t afford the risk of a re-spin of the chip; because of the TTM pressure, using a pre-verified solution is one of your key requirements (even if it’s not always possible). Offering a pre-integrated, pre-verified and still configurable sound subsystem IP is a good way to minimize the risk associated with this part of the design. Again because of the TTM pressure, you will need to develop the software in parallel with the SoC development and prepare for validation of the full system as soon as possible (it’s not unusual to spend a year or more validating complex application processors like OMAP5 or Snapdragon). FPGA-based prototypes (HAPS-based hardware prototyping) accelerate software development, and virtual prototyping speeds up the complete system validation phase, so you don’t need to wait for SoC prototypes to start working on system validation.

The semiconductor industry is a place where creativity is welcomed, as long as the creative thinking leads to making more money. It’s an industry where “crazy” ideas like the microprocessor have created huge markets from scratch, and also an industry where brilliant ideas like the Transputer (a pioneering microprocessor architecture of the 1980s, featuring integrated memory and serial communication links, intended for parallel computing, designed and produced by Inmos), or closer to us, the concept of an ASIC platform pre-integrating multiple functions as well as field-programmable blocks (FPGA areas), have ultimately failed. Subsystem IP is a brilliant idea. Will the concept meet market demand? I honestly don’t know the answer, but what I can say is that Synopsys has tried to bring a complete solution, based on hardware, software and prototyping, which increases the likelihood of success.

From Eric Esteve, IPnest


Intel’s Fait Accompli Foundry Strategy
by Ed McKernan on 04-05-2012 at 1:09 am

As many analysts have noted, it is difficult to imagine what Intel’s foundry business will look like one, two or even three years down the road, because this is all new: what leading fabless player would place its well-being in the hands of someone totally new at the game? I would like to suggest there is a strategy in place that will soon lead to tectonic shifts in the semiconductor world. The assembled pieces of “no-name” startup chip companies building in Intel’s advanced 22nm tri-gate process include Achronix, Tabula and now Netronome. They represent three possible solutions to high-performance data-path processing that may lead to Intel’s goal of dominance in the combined server, storage and networking platform. Or perhaps they will serve as a forcing function for luring Altera, Xilinx, Broadcom, Marvell or Cavium away from TSMC and into partnering with Intel. Either outcome is a win for Intel.

For much of the past three years the spotlight has shined brightly on everything that is mobile – as it should have. Questions about Intel’s ability either to counter Apple’s ARM-based mobile rise or to become its eventual supplier across the board will be on every analyst’s mind until there is resolution. However, there is another side to Intel’s business that is not well understood. Intel always fights a multi-front war to maximize its advantage and overwhelm competitors that lack similar magnitudes of resources. Only Intel, historically, has been able to do this.

Today, while it charges ahead with its Medfield processor in the smartphone and tablet space to blunt ARM’s early lead, Intel enters a mopping-up phase in the PC market with its Ivy Bridge based Ultrabooks that will neuter AMD and nVidia in what will be the highest volume segment by the end of 2013. And in the background Intel has opened up a third front against the foundries of TSMC, GlobalFoundries and Samsung, on which the ARM camp depends to win the Mobile Tsunami marketplace. Without a process within spitting distance of Intel’s, ARM would be relegated to trailing-edge embedded SoCs. Therefore Intel will leverage its fabs to peel away foundry customers, cutting off the oxygen that pays for future capital expenditures at leading nodes.

The announcements that FPGA startups Achronix and Tabula are utilizing Intel’s 22nm process technology had some wondering where Xilinx and Altera were. With Netronome, the question could be where Cavium and the NetLogic RMI group acquired by Broadcom are. All attack the data-processing path that Intel needs to fill out the networking platform. The acquisitions of Fulcrum last summer and of QLogic’s InfiniBand group provide critical functions that should be able to leverage 22nm at the expense of Broadcom and Marvell’s switch chips and Mellanox’s InfiniBand chips.

As Andy Bechtolsheim, the Sun co-founder and Google investor who now runs Arista (a startup building low-latency, high-performance switches), has said, the era of ASIC-based switch chips is over. The inevitable march towards merchant Ethernet silicon is on, and whoever can build the fastest chips on the latest technologies wins. The Fulcrum acquisition seems to preclude Broadcom and Marvell from a foundry slot unless they were to sign away product rights.

Traction by Achronix or Tabula could force Xilinx or Altera to seek entry into the 22nm tri-gate process. Until now, both Xilinx and Altera have walled off the FPGA market from startups with their software tools, leading-edge processes and robust IP. But what happens if a startup competitor gets a three-year process technology advantage? In 2009, Altera beat Xilinx out the door by 12 months with its high-end 40nm Stratix IV and ended up crushing them in the communications space, a segment that represents almost half the revenue and the majority of the profits. You have to wonder if there is any reason they aren’t both running test wafers at Intel.

Diminishing nVidia’s and AMD’s stature in the PC and tablet business, pulling away a Xilinx or Altera, and outrunning Broadcom and Marvell in the switch-chip market all seem to be part of an overriding strategy that has yet to be communicated by Intel, but it is a factor in their massive capital expenditure that looks to double capacity by the end of 2013 and put some distance between them and the foundries. If Intel out-executes on the process side, then many fabless vendors may be presented with a fait accompli.


FULL DISCLOSURE: I am long INTC, AAPL, ALTR, QCOM


DAC Pavilion Panels
by Paul McLellan on 04-05-2012 at 12:00 am

Once again DAC has a full program of panel sessions that take place on the exhibit floor at the DAC pavilion, aka booth 310.

Gary Smith kicks off the program with his annual “What’s Hot at DAC” presentation on Monday, June 4th, from 9:15-10:15am. The rest of Monday’s pavilion panels are:

  • “Low power to the people,” a panel discussing low-power design techniques, struggles and solutions.
  • “Is life-care the next killer app?” a panel looking at where electronics and EDA is going in health, energy efficiency, safety and productivity.
  • “The mechanics of creativity,” sponsored by Women in Electronic Design, a panel looking at how we can be creative on demand and sharing stories of innovation.
  • An interview with this year’s yet-to-be-announced Marie R. Pistilli award winner. The winner will be announced on April 10th.

On Tuesday, the panels are:

  • “Hogan’s heroes: learning from Apple,” Jim Hogan leads a panel to look at what we can all learn from Apple, now the world’s most valuable company.
  • “Foundry, EDA and IP: Solve Time-to-Market Already!” a panel discussion on what IP, EDA and foundry vendors are doing to further reduce the time to design a modern SoC.
  • “Chevy Volt teardown.” Experts discuss what is “under the hood” of the Chevy Volt especially its 310V lithium-ion battery and its control electronics.
  • “An interview with Jim Solomon,” who has been working for decades on advancing analog design, a Kauffman award winner and founder of one of the predecessors of Cadence.
  • “Conquering New Frontiers in Analog Design – Plunging Below 28nm.” Analog no longer has the luxury of trailing a couple of process generations behind digital. This panel discusses the challenges.

Wednesday’s pavilion panels are:

  • “Town Hall: The Dark Side of Moore’s Law.” This panel looks at how to get design costs back in line with Moore’s law so that EDA and semiconductor companies can also profit.
  • “Divide and Conquer – Intelligent Partitioning.” There are many reasons to partition a huge design. This panel looks at all the issues surrounding partitioning decisions. It’s moderated by me.
  • “Real World Heterogeneous Multicore.” Almost all cell-phones feature multiple processors of different types: CPUs, GPUs, DSPs. This panel discusses the issues.
  • “Teens talk tech” where, once again, high-school students tell us how they use the latest tech gadgets, and what they expect to be using in three to five years.
  • “Hardware-Assisted Prototyping & Verification: Make vs. Buy?” Emulators are expensive, but building a custom FPGA prototype has its own set of challenges. This panel discusses the tradeoffs.

Full details of all the pavilion panels are on the DAC website here.


GSA Silicon Summit at the Computer History Museum!
by Daniel Nenni on 04-04-2012 at 9:52 pm

The first GSA Silicon Summit will address the complexity, availability and time-to-market challenges that the industry must overcome to enable low-power, cost-effective solutions that keep pace with Moore’s Law. With never-ending customer demand for better, faster and cheaper, semiconductor manufacturers must continually push their process technologies to ensure that they are providing higher density, lower power and faster processing speeds. This event will evaluate the predominant process technologies that are leading the industry to meet this demand.

APRIL 26th, 2012

Computer History Museum
1401 Shoreline Blvd.
Mountain View, CA 94043

REGISTER NOW!

Program

  • 8:00 a.m. – Registration/Networking Breakfast
  • 9:00 a.m. – Opening Remarks
  • 9:00 a.m. – Keynote Address: Keeping Moore’s Law Alive – Dr. Subramanian S. Iyer, IBM Fellow and Chief Technologist, Microelectronics Division, IBM
  • 9:30 a.m. – Keynote Address: Advancements in CMOS Technologies – Subramani Kengeri, Head of Advanced Technology Architecture, Office of the CTO, GLOBALFOUNDRIES
  • 10:00 a.m. – Panel Discussion: Extending the Life of CMOS
    Moderator: Dr. Roawen Chen, Vice President, Manufacturing Operation, Marvell Semiconductor, Inc.
    Panelists: Jim Aralis, Chief Technology Officer and Vice President, R&D, Microsemi; Shung Chieh, Vice President, Technology Development, Aptina Imaging; Matt Crowley, Vice President, Hardware Development, Tabula
  • 11:00 a.m. – Networking Break (sponsored by True Circuits)
  • 11:15 a.m. – Keynote Address: The Case for SOI Technology – Jean-Marc Chery, Executive Vice President and Chief Manufacturing & Technology Officer, STMicroelectronics
  • 11:45 a.m. – Keynote Address: The Revolutionary Scope of Multi-Gate Transistors – Dr. Chenming Hu, TSMC Distinguished Chair Professor of Microelectronics, University of California, Berkeley
  • 12:15 p.m. – Lunch
  • 12:45 p.m. – Keynote Address: The Multidimensional Landscape – Nick Yu, Vice President, Engineering, Qualcomm
  • 1:15 p.m. – Panel Discussion: 3D IC Ecosystem Collaboration
    Moderator: Mark Brillhart, Vice President, Technology and Quality, Cisco Systems, Inc.
    Panelists: Raman Achutharaman, Corporate Vice President, Strategy & Marketing, Silicon Systems Group, Applied Materials; Liam Madden, Corporate Vice President, FPGA Development and Silicon Technology, Xilinx; David McCann, Senior Director, Packaging R&D, GLOBALFOUNDRIES; Stephen Pateras, Senior Director, Marketing, Silicon Test Division, Mentor Graphics; Rich Rice, Senior Vice President, Sales & Engineering, ASE
  • 2:15 p.m. – 3D IC Working Group Meeting

Keynote Address: Keeping Moore’s Law Alive
To sustain Moore’s Law, the industry’s brightest minds have explored the boundaries of technology and innovation to boost computing power at 22nm and beyond. The stand-out solutions include 2.5D/3D ICs, SOI technology and 3D transistors. This keynote will address how these multiple technologies will further business growth and competitive edge in the near future.

Keynote Address: Advancements in CMOS Technologies

Contrary to the belief of some industry pundits driven by today’s digital age, the advancement of CMOS technology continues to enable the creation of leading-edge electronics through sustained scalability, accelerated time-to-market and higher yield. This keynote will showcase the resilience that CMOS performance exhibits today through technical improvements, surpassing its physical and economic limits.

Panel Discussion: Extending the Life of CMOS

Employing CMOS for the next generation of high-performance applications can be challenging, but companies continue to incorporate new process technologies that allow them to continue to maximize the use of CMOS to solve power and scaling issues. This panel session will debate the strengths and weaknesses of the solutions that advance this technology through analyzing the cost trade-offs of their technical merits and other factors, ultimately providing a snapshot of the technologies’ market impact.

Keynote Address: The Multidimensional Landscape

The buzz surrounding 2.5D/ TSV-based 3D technology is gaining momentum in the mainstream market, as more and more companies explore its cost and performance benefits and commit to tackling its technical and non-technical barriers. This keynote will address the breakthrough advances in multidimensional technology and forecast its long-term roadmap.

Keynote Address: The Case for SOI Technology

With significant technical gains in power and performance, SOI is a time-tested technology poised to enable the next generation of processors. This keynote will discuss how SOI technology will enable technological innovation that will influence the ecosystem in new mobile, data and consumer applications.

Keynote Address: The Revolutionary Scope of Multi-Gate Transistors

Gordon Moore calls the FinFET/Trigate transistor the most radical shift in semiconductor technology in over forty years. What are the advantages of the multigate FinFET? What makes it scalable to 10nm and single digit nm? What new opportunities are presented by it and its cousin, the ultra-thin-body UTBSOI transistor? This keynote will address the game-changing impact the introduction of multi-gate transistors will have on today’s products.

Panel Discussion: 3D IC Ecosystem Collaboration

The anticipated arrival of 3D ICs is on track for 2013, and as with any unproven technology, the cost, yield and logistical uncertainties are high if a coherent supply chain is not implemented quickly. While the technology continues to progress, the industry is working toward aligning the business goals of chip companies, foundries, packaging/assembly houses and so on. This panel session will discuss what must be accomplished within the supply chain before the debut of 3D ICs, from a standards and technical standpoint.