
TSMC dilemma: Cadence, Mentor or Synopsys?

by Eric Esteve on 10-18-2012 at 4:49 am

Looking at the Press Release (PR) flow, it was interesting to see how TSMC has solved a communication dilemma. First, let’s note that the #1 silicon foundry has to work with each of the big three EDA companies. As a foundry, you don’t want to lose any customer, so you support every major design flow. Choosing another strategy would be foolish.

The first PR came on October 12, about Chip on Wafer on Substrate tape out, here is an extract: “TSMC today announced that it has taped out the foundry segment’s first CoWoS™ (Chip on Wafer on Substrate) test vehicle using JEDEC Solid State Technology Association’s Wide I/O mobile DRAM interface… A key to this success is TSMC’s close relationship with its ecosystem partners to provide the right features and speed time-to-market. Partners include: Wide I/O DRAM from SK Hynix; Wide I/O mobile DRAM IP from Cadence Design Systems; and EDA tools from Cadence and Mentor Graphics.”

As you can see, design tools from both Cadence and Mentor are mentioned, and Cadence can take an extra bow: the test vehicle is based on the company’s Wide I/O mobile DRAM IP. We will look at Wide I/O in more depth soon in this blog.

Cadence and Mentor? Looks like one is missing!

Then, today, the industry learned that Synopsys has “received TSMC’s 2012 Interface IP Partner of the Year Award for the third consecutive year. Synopsys was selected based on customer feedback, TSMC-9000 compliance, technical support excellence and number of customer tape-outs. Synopsys’ DesignWare Interface IP portfolio includes widely used protocols such as USB, PCI Express, DDR, MIPI, HDMI and SATA that are offered in a broad range of processes from 180 nanometer (nm) to 28nm.”

If you want to know more about the interface IP market, weighing in at over $300 million in 2011, you should take a look at this post.

The PR about Chip on Wafer on Substrate (CoWoS) from TSMC shows that Cadence is investing to develop the memory controller technology of the near future, to be used for 3D-IC in mobile applications. I suggest you read this excellent article from Paul McLellan, so you will understand how CoWoS works from a silicon technology standpoint.

I will focus instead on the Wide I/O memory controller. Here are the key features, as described by Cadence:
Key Features

  • Supports Wide I/O DRAM memories compliant with JESD229
  • Supports typical 512-bit data interface from SoC to DRAM (4 x 128 bit channels) over TSV at 200MHz offering more than 100Gbit/sec of peak DRAM bandwidth
  • Independent controllers for each channel allow optimization of traffic and power on a per-channel basis
  • Supports 3D-IC chip stacking using direct chip-to-chip contact
  • Supports 2.5D chip stacking using silicon interposer to connect SoC to DRAM
  • Priority and quality-of-service (QoS) features
  • Flexible paging policy including autoprecharge-per-command
  • Two-stage reordering queue to optimize bandwidth and latency
  • Coherent bufferable write completion
  • Power-down and self-refresh
  • Advanced low-power module can reduce standby power by 10x
  • Supports single- and multi-port host busses (up to 32 busses with a mix of bus types)
  • Priority-per-command (AXI4 QoS)
  • BIST algorithm in hardware enables high-speed memory testing and has specific tests for Wide I/O devices
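As a quick sanity check on the bandwidth figures in the list above, here is a back-of-the-envelope calculation. The single-transfer-per-cycle (SDR) interpretation and the ~266 MHz clock behind the 17 GB/s figure quoted later in this post are my assumptions, not Cadence specifications:

```python
# Back-of-the-envelope check of the Wide I/O bandwidth figures.
# Assumes one data transfer per clock (SDR); clock values are assumptions.

def peak_bandwidth_gbit_s(bus_width_bits: int, clock_hz: float) -> float:
    """Peak bandwidth in Gbit/s for a parallel interface, one transfer per cycle."""
    return bus_width_bits * clock_hz / 1e9

wide_io = peak_bandwidth_gbit_s(4 * 128, 200e6)     # 4 channels x 128 bits
print(f"512 bits @ 200 MHz: {wide_io:.1f} Gbit/s")  # 102.4 -> "more than 100"
print(f"                  = {wide_io / 8:.1f} GB/s")
print(f"512 bits @ 266 MHz: {peak_bandwidth_gbit_s(512, 266e6) / 8:.1f} GB/s")  # ~17 GB/s
```

The 512-bit bus at 200 MHz lands just over the 100 Gbit/s mark claimed in the feature list; reaching the 17 GB/s cited below requires the higher clock.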

It’s amazing! During the last ten years, we have seen a massive move from parallel to serial interfaces: think of PCI moving to PCI Express, or PATA being completely replaced by SATA in storage applications in less than 5 years, and the list goes on. With the Wide I/O concept, we see that a massively parallel (512-bit) interface, running at 200 MHz (compare with LPDDR3 at 800 MHz DDR), can offer both better bandwidth, up to 17 GB/s, and better power-per-transfer performance than LPDDRn solutions.

Is there any magic here? The higher bandwidth is easily explained: adding enough 64-bit-wide busses will surpass LPDDR3 performance. The reason the power per transfer is better is more subtle: because this is a 3D technology, the connection between the SoC and the DRAM is made in the 3rd (vertical) dimension, as shown in the picture from Qualcomm. Thus, the connection is shorter than any connection made on a board. Moreover, the capacitance (due to the bumping or bonding material and to the track on the PCB) is minimized with a 3D connection, so the power per bit transferred at a given frequency drops. I did not check how this was computed, but I am not shocked by the result…
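To make the power argument concrete, here is a minimal sketch of the switching-energy model behind it. The capacitance and supply-voltage values are purely illustrative assumptions, not measured Wide I/O data:

```python
# Dynamic switching energy per bit transition: E = C * V^2.
# A short vertical TSV connection has far less capacitance than a PCB trace,
# so each bit costs proportionally less energy at the same frequency.
# All numbers below are illustrative assumptions.

def energy_per_bit_pj(capacitance_farads: float, vdd_volts: float) -> float:
    """Switching energy for one bit transition, in picojoules."""
    return capacitance_farads * vdd_volts ** 2 * 1e12

pcb = energy_per_bit_pj(5e-12, 1.2)    # assumed ~5 pF: bump + bond + board trace
tsv = energy_per_bit_pj(0.5e-12, 1.2)  # assumed ~0.5 pF: short vertical TSV
print(f"PCB: {pcb:.2f} pJ/bit  TSV: {tsv:.2f} pJ/bit  ratio: {pcb / tsv:.0f}x")
```

Whatever the exact capacitance values, the energy per bit scales linearly with C, which is why the shorter 3D connection wins.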

So the Wide I/O memory controller looks like a superb new technology developed by Cadence, and the mobile market is healthy enough (an understatement!) to justify introducing it. But, as Qualcomm notes in the above picture, “Qualcomm want this but also competitive pricing”…

Eric Esteve from IPNEST


Xilinx Programmable Packet Processor

by Paul McLellan on 10-17-2012 at 5:19 pm

At the Linley conference last week I ran into Gordon Brebner of Xilinx. He and I go a long way back. We had adjacent offices in Edinburgh University Computer Science Department back when we were doing our PhDs and conspiring to network the department’s Vax into the university network over a two-week vacation. We managed to do it. I think Gordon and I (and another co-conspirator called Fred whose name wasn’t really Fred, but that’s another story) drank some beer together at times, but I’m a bit hazy about that.

I came to the US and Gordon remained in Edinburgh, eventually becoming head of the computer science department. Then he decided to join Xilinx, move over here, and do research on novel things to do with FPGAs. One area goes back to that networking stuff: how to use FPGAs to build a custom network processor that has exactly the required functionality and power/performance.

At the evening exhibit session, they were demonstrating this at 100G line rates showing that FPGAs do have the necessary performance. And devices are now large enough that you can build significant architectures and systems.

The normal alternative is to use a network processor unit (NPU) or else a generic multicore CPU and try and get the right mix of resources, interconnect, ports etc in a fixed architecture.

So how do you actually build one of these network processors? The user describes packet processing requirements in the PX++ language, which has an object-oriented programming style. The PX++ compiler generates a customized micro-coded architecture, described in RTL, which is then synthesized for the FPGA. Subsequent changes to the PX++ description, unless very large, can then be compiled to microcode updates only, with no re-synthesis required. The compiler also takes throughput and latency requirements separately and customizes the generated architecture to meet them, making the approach scalable. This results in trade-offs: higher throughput means more FPGA resource use, and lower latency means less run-time programmability.
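PX++ itself isn’t publicly documented, so as a loose analogy only, here is a toy Python sketch of what an object-oriented packet-header description looks like, the kind of declaration a compiler could map onto a parsing datapath. The field layout is standard Ethernet; the class itself is my construction, not Xilinx code:

```python
# Toy analogy (not PX++): an object-oriented packet-header description
# from which a compiler could derive a parsing pipeline.
import struct

class EthernetHeader:
    # dst MAC (6 bytes), src MAC (6 bytes), EtherType (2 bytes), big-endian
    FORMAT = "!6s6sH"

    def __init__(self, raw: bytes):
        size = struct.calcsize(self.FORMAT)  # 14 bytes for Ethernet
        self.dst, self.src, self.ethertype = struct.unpack(self.FORMAT, raw[:size])

    def is_ipv4(self) -> bool:
        return self.ethertype == 0x0800      # EtherType value for IPv4

# A broadcast frame carrying an IPv4 EtherType:
frame = bytes.fromhex("ffffffffffff" "001122334455" "0800") + b"payload"
print(EthernetHeader(frame).is_ipv4())  # True
```

In hardware the same declaration would become wires and match logic rather than a Python object, which is the point of compiling the description instead of interpreting it.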

In the future I think we will see more of this. Today, when you think of programming an algorithm you think of implementing it in software. But for really high performance, compiling it into gates can be more effective, either using general high-level synthesis (such as AutoESL, which Xilinx acquired last year) or something like PX++ that is completely focused on a specific but important problem.


8 Reasons Why I Hate My iPhone 5

by mbriggs on 10-17-2012 at 8:05 am


Not really, but I regret upgrading from my Verizon-based HTC Incredible to the iPhone 5. If you are on the fence between an iPhone 5 and a Samsung S3, consider reading this post.

I’ve been an anti-Apple fanboy of sorts for the duration, but have gradually been sucked into the Apple ecosystem. I really like my iPad (3) and my “recliner laptop” is an old MacBook Pro. I found the process of making calls on my Incredible (power button, swipe, call) very trying and thought an Apple device would handle many of these usability issues much better.

The 8 reasons are:


  • Voice recognition. I’m a terrible phone typist and like to utilize voice recognition for email and text messages. Android is far better at transcribing my random babblings. Siri does a fine job with “call dan cell”, or “text harry”, but that’s about it. Siri seems to understand “F*ck You Siri” and gives an interesting response. My money says it’s in the top 5 of all phrases uttered to Siri.
  • Camera. I’ve taken 20 pictures thus far, none of them good. I see much better photos from my wife’s HTC Thunderbolt and my friend’s iPhone 4s. I’ve seen the purple flare problem even when it didn’t seem like the sun should be a factor. I can accept that issue, given it appears on other phones, but my pictures are fuzzy.
  • Data Plan. Verizon forced me off my unlimited plan. I shouldn’t be using much data as I do zero video and minimal audio streaming, but I’ve gotten a warning message two months in a row that I’m nearing my Verizon threshold. (Yes, I fixed the wifi bug immediately, but still got a warning this month.)
  • Photo Sharing. I can’t run Photo Stream on my MacBook Pro as a later version of Mac OS X is required. I decided to bite the bullet and pay $20 to Apple to upgrade to Mountain Lion, which annoys me. Then I found out that my 5-year-old machine (2.16 GHz Intel Core 2 Duo) is not upgradable to Mountain Lion. Ironically, Photo Stream runs nicely via iCloud on my Windows 7 machine.
  • Maps. I bet you are tired of hearing about this one. I’m dependent on voice-based navigation. I really miss Google Maps. I find the best alternatives, MapQuest and Nokia Maps, unsatisfactory.
  • Cost. It’s not a $200 upgrade. Add in tax, $30 upgrade charge, cheap case and voila, it’s $300. I get that the S3 would be the same.
  • App installation. Oftentimes when I try to press the “INSTALL APP” button I need to press a half dozen times. The problem comes and goes. I don’t know if I have a flaky screen or something else is amiss. It seems to help if I kill all the apps that I am not using, but this, too, is annoying. I’m sure it will work fine if I take it to the genius bar or the Verizon store.
  • Password. Apple has forced me to use a secure password (upper and lower case, number, special character) which is hard to type. I don’t install apps in bunches, I install them periodically – and need to type in the d*mn password every time. Google still lets me use my old, lower case only, password.

    So, I lied. I don’t really HATE my iPhone 5, but if I had to do it over again I’d get a Samsung S3. Surf to Mike’s Blog.


IP-SoC 2012 Conference: don’t miss keynote talks from Cadence, Synopsys, STMicroelectronics…

    by Eric Esteve on 10-17-2012 at 4:47 am

    … Mentor Graphics, Design & Reuse or Gartner. The IP-SoC conference in Grenoble was the very first conference 100% dedicated to design IP, created by Gabriele Saucier 20 years ago, when “reuse” was more a concept than a reality within design teams, and when design IP was far from being a sustainable business.

    Prof. Gabriele Saucier had the intuition that this concept would turn into a real business, and created “Design & Reuse”, the well-known IP portal, some years after the conference started. Now, in 2012, design IP and Computer Aided Engineering (CAE), both market segments as defined by EDAC, weigh almost the same:

    EDAC report revenues of $2,292 million for CAE in 2011 and $1,580 million for Design IP.

    But if you look at Design IP results as reported by Gartner, the Design IP segment reached $1,910 million. And my guess is that some Design IP sales are not accounted for by EDAC or Gartner, like those sold directly by the silicon foundries or by certain ASIC design houses like Global Unichip or Open-Silicon, or IP traded directly between chip makers. Thus, estimating the IP market at two billion dollars plus ($2.2B?) looks realistic.
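The sizing argument above, spelled out as arithmetic. The reported figures are from the post; the size of the unreported slice is my rough assumption to bridge to the ~$2.2B guess:

```python
# Design IP market, 2011, in $M (figures quoted in the post above).
edac_design_ip = 1580      # EDAC-reported Design IP revenue
gartner_design_ip = 1910   # Gartner-reported Design IP revenue

# Some IP sales (foundries, ASIC houses, chip-to-chip deals) escape both
# trackers; the ~$300M size of that slice is an assumption, not a report.
unreported_guess = 300

estimate = gartner_design_ip + unreported_guess
print(f"Estimated total Design IP market: ${estimate / 1000:.1f}B")  # $2.2B
```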

    IP and EDA are both essential building blocks for the semiconductor industry. It was not clear back at the end of the ’90s that IP would become essential: at that time, the IP concept was devalued by products exhibiting poor quality and inefficient technical support, leading program managers to be very cautious about deciding to buy. Making was sometimes more efficient… In the meantime, the market has been cleaned up, with the poor-quality suppliers disappearing (going bankrupt or being sold for assets), and the remaining IP vendors have learned the lesson.

    Today, no vendor launching a protocol-based (digital) function would take the chance of shipping a product that has not passed an extensive verification program, and mixed-signal IP vendors know that only silicon-proven functions will really sell. This leaves very little room for low-quality products; the IP market is now mature and consolidating, even if some newcomers are doing pretty well, especially in PHY (Cosmic Circuits, V-Semiconductor…) and chip infrastructure (Arteris going from $5M in 2010 to $15M in 2011), and some promising companies, like Imagination Technologies or CEVA, are finally making it, thanks to the smartphone explosion!

    The IP-SoC conference lasts two days, December 4 and 5, so the program is broad; here is an extract:

    Keynote Talks
    “IP Business: Status and Perspectives” by Gabriele Saucier, CEO, Design And Reuse

    “Semiconductor IP market overview” by Ganesh Ramamoorthy, Research Director, Gartner Inc.

    “Managing the IP Sourcing Process: an IDM Perspective” by Philippe Quinio, Group Vice President of IP Sourcing & Strategy, STMicroelectronics

    “Cloud and Mobility: Disrupting the IP Ecosystem” by Martin Lund, Senior Vice President, Research and Development, SoC Realization Group, Cadence

    “Keynote Talk” by Joachim Kunkel, Senior Vice President and General Manager, Solutions Group, Synopsys

    “Power is now a Software Issue” by Colin Walls, Mentor Graphics

    Invited Talks

    “Duopoly, Monopoly → Opportunity” by Marc Miller, Sr. Director of Marketing, Tabula

    “µfluidic applications: an upcoming Eldorado for µelectronics?” by CEA

    If you want to register, just go to the Design & Reuse web site, here.

    If you come, we should meet, as I plan to attend the conference and present a paper. The topic? I will let you know in a future blog. The presentation will certainly be IP-centric, and you will most probably hear about Mobile Express, PHY IP, MIPI… just stay tuned.

    Eric Esteve


    12m FPGA prototyping sans partitioning

    by Don Dingee on 10-16-2012 at 9:30 pm

    FPGA-based prototyping brings SoC designers the possibility of a high-fidelity model running at near real-world speeds – at least until the RTL design gets too big, when partitioning creeps into the process and starts affecting the hoped-for results.

    The average ASIC or ASSP today is on the order of 8 to 10M gates, which comfortably includes designs up to an ARM Cortex-A9 processor core. However, that size has until recently swamped FPGA technology, forcing an RTL model to be artificially partitioned across several FPGAs before it can fit into an FPGA-based prototyping system. After spending a lot of time integrating verified RTL IP blocks into a single functional design, it seems a bit counterproductive to split it back up to see if it really works at the validation stage. Depending on the skills of the partitioner, the diamond that was a nice RTL design can quickly be reduced to rubble.

    That risk has kept many designers from using FPGA-based prototyping for large and fast designs, opting instead for virtual platform and simulation techniques, which can handle very large models today. Both are good approaches to verify functional integrity, but more and more designs are unearthing IP issues that only appear when running with faster I/O and real software (which could take WEEKS on a simulation platform). If a design team doesn’t crank things up and stress the RTL with an at-speed look, there’s a bigger chance of failure on the first silicon pass, and that can get brutally expensive in time, money and missed markets.

    We’ve seen one major development on the FPGA-based prototyping front recently – from Aldec – and we’ve been pre-briefed on another one coming from another vendor shortly. (Insert #ICTYBTIHTKY hashtag here. You’ll read it here as soon as we can talk about it.) Let’s dig a bit into why the Aldec approach gets my attention.

    We first learned about the Aldec HES-7 in an earlier post from Daniel Payne about a month ago. I’ve been digging through the white paper co-authored between Aldec and Xilinx, looking beyond the headline that the HES-7 system now goes to 96M gates. While that’s an impressively large size, utilizing that capability requires a design to be partitioned across 8 FPGAs in pairs separated by a PCI Express interconnect.

    As the Aldec-Xilinx white paper describes, when you partition RTL to fit FPGA-based prototyping environments, you suddenly need to worry a lot about the clock tree, balancing resources between the FPGA partitions, dealing with which part of the logic gets the memory interface and I/O pins, and more. Some of you out there may be very comfortable with your partitioning skills and might have developed a formula that splits your gigantic RTL design reliably into 8 FPGA-sized pieces without side effects – I’d be thrilled to hear of a real-world example we could share here, especially how much effort this takes.

    But let’s face it, the reason Chevy puts Corvettes into showrooms is to sell Silverados, so most people can get real work done while they dream of someday needing a lot more horsepower. Not many SoC designers need 96M gates. I’m betting that the vast majority of SoC designers would love to have a 12M gate platform running at around 50 MHz in a single FPGA without RTL partitioning. That is exactly the value proposition the Aldec HES-7 XV2000 offers. Insert one ARM Cortex-A9 based design, no partitioning, and a lot less waiting for results.
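The arithmetic behind that sweet spot, made explicit. The capacities are from the post; the division into per-FPGA shares is mine, not an Aldec specification:

```python
# HES-7 capacity numbers discussed above.
max_capacity_mgates = 96   # top HES-7 configuration, millions of ASIC gates
fpgas_required = 8         # FPGAs the design is partitioned across at max size

per_fpga_mgates = max_capacity_mgates // fpgas_required
print(f"{per_fpga_mgates}M gates per FPGA")  # 12M: a single-FPGA, no-partition fit
```

In other words, a design that fits in one FPGA’s 12M-gate share never has to be partitioned at all, which is the scenario most teams actually face.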

    There’s an interesting study John Blyler blogged about recently, covering a survey asking why designers turn to FPGA prototyping for SoC design. The HW/SW co-verification bar in the chart he shows is huge. He also hints at the issue we discussed: verified IP falling down during validation when integrated into a larger design.

    What are your thoughts on the state of FPGA-based prototyping? Does the ability to put an entire 12M gate design in a single large FPGA on a prototyping system open up the methodology for more SoC designers? Or does it just push the envelope so larger RTL designs can fit now and partitioning will still be required? Are the results of FPGA prototyping worth the effort of partitioning? Does the ability to validate with real software in a much shorter time offset the investment in the methodology?


    ReRAM Cell Modeling and Kinetics

    by Ed McKernan on 10-16-2012 at 8:55 pm

    Introducing the first ReRAM-Forum movie!! In part 2 of their recently published papers in the IEEE Transactions on Electron Devices, Professor Ielmini’s group describes the modeling of resistive switching in bipolar metal-oxide ReRAM. Like part 1, the paper is a collaboration with David Gilmer of Sematech, who provided the hafnium-oxide-based ReRAM samples. The numerical model solves the drift/diffusion equations for ion migration, allowing the evolution of the conductive filament to be viewed in ‘real time’. The model reproduces the abrupt set and gradual reset transitions along with the kinetics of cell behavior observed experimentally. The reset process is shown in the embedded movie. See more at ReRAM-Forum.com.


    Current Timing Closure Techniques Can’t Scale – Requires New Solution

    by Daniel Nenni on 10-16-2012 at 8:30 pm


    There’s a nice article on timing closure by Dr. Jason Xing, Vice President of Engineering at ICScape Inc., on the Chip Design website. Not familiar with ICScape? Paul McLellan called ICScape “The Biggest EDA Company You’ve Never Heard Of” and Daniel Payne covered “Schematic, IC Layout, Clock and Timing Closure from ICScape at DAC”, just to get you started.

    Current IC designs have advanced quickly from 65 and 45 nanometers, down to 28, 20, and below. This progression to ever-smaller geometries has brought significant challenges in achieving timing closure to meet production deadlines and market windows. Engineering teams often struggle to efficiently perform late-stage ECOs (engineering change orders) to meet their design and time-to-market objectives.

    In the current methodology, engineers are forced to implement ECOs using two or more different tools in the flow, iterating far too many times, often just to meet production deadlines and market windows. This way of handling ECOs will get worse with each new process node, creating a dire need for a solution that allows efficient and effective handling of ECOs, and hence design closure.

    What are the challenges with timing closure?

    Jason Xing is co-founder and Vice President of Engineering at ICScape, where he architected the clock and timing closure products. Jason has over 15 years of EDA research and development experience. In 1997, he joined Sun Labs after receiving his PhD in Computer Science from the University of Illinois at Urbana-Champaign. At Sun Labs, Xing did research on physical and logical concurrent design methodologies and shape-based routing technologies. In 2001, he joined the Sun Microsystems internal CAD development team, before starting ICScape in 2005. Jason also holds a PhD in Mathematics from the University of Louisiana.

    ICScape is a leader in developing and delivering fast and accurate design closure solutions for today’s complex SOC designs, and a complete suite of analog and mixed-signal design, implementation and verification solutions. ICScape’s tools have been successfully used to design and deliver integrated circuits for a variety of application areas including storage, wireless, base band, data communications, multimedia, graphics, chipset and power management design. The solutions are silicon-proven through well over 100 tape-outs. The SOC design closure solutions fit into existing flows complementing signoff static timing analysis and physical design tools. While the analog / mixed-signal tools (AMS) form a complete solution, individual tools fit into existing AMS tool flows, preserving your current investment in tools.

    ICScape now has a landing page on SemiWiki, so you will be reading more about them soon. With all the EDA consolidation, ICScape is one to watch for both digital and analog solutions.


    Altera’s Real Impact with ARM based SOC FPGAs

    by Ed McKernan on 10-16-2012 at 8:15 pm

    At the annual Linley Processor Conference this past week a number of chip vendors proposed a raft of new networking solutions directed at solving today’s bandwidth issues. Perhaps the overall highlight of the conference was the recognition by keynote speaker Linley Gwennap of the shift that is taking place towards ARM-based solutions. As part of the conference, Xilinx and Altera presented on the afternoon of the first day in a session entitled “Implementing Networking Functions by Programming Both Logic and Processors.” While Xilinx stuck to the theme and presented a variety of networking solutions utilizing the full array of their 28nm FPGAs, Altera played along with the ARM architecture momentum storyline as they detailed the benefits of their SOC FPGAs in the transition from 28nm down to 20nm. There were subtle hints at current capabilities and what can be expected over time. All around us we are witnessing platform transitions, and the SOC FPGA is another one to watch closely, as it could be as dramatic as the shift from the x86 PC platform to ARM-based mobiles.

    A couple of weeks ago Texas Instruments announced that it was pulling out of the mobile ARM smartphone and tablet race to focus OMAP on embedded and wireless base stations. Essentially, there are too many chip competitors attempting to capture a market that, outside of Samsung and Apple, is too small to support the likes of Qualcomm, nVidia, Mediatek, Broadcom, Intel and TI. The embedded market, on the other hand, is still controlled by a multitude of 8- and 16-bit proprietary MCU architectures, with Renesas capturing 30% of the market. ARM’s push into this market with its 32-bit cores has shown success with Atmel, TI and ST Microelectronics, but it is early in the game. Many of the above are just now shipping products at 90nm.

    Xilinx’s and Altera’s foray into the embedded markets with ARM-based FPGAs will likely be much more successful than past attempts, based on the fact that they have stretched their process lead to three nodes and that end customers are finally recognizing that a jump to 32 bits is the right long-term move. Both companies have implemented a dual-core ARM Cortex-A9 processor with the full complement of L1 and L2 caches, DDR2 and DDR3 controllers, networking functions, USB and flash controllers. All of the above are hardened for faster performance (800MHz) and smaller area. For performance-hungry markets like automotive, the new chips will be a welcome sight.

    Altera’s pitch was to communicate that the 28nm architecture will carry forward to 20nm with a performance boost of up to 50% and up to 30% lower power. Also, given that the ARM complex will shrink, the programmable logic gate count can increase by up to 6X. Customers can therefore carry forward their software and IP while taking advantage of better performance and economics. In addition, Altera mentioned that it will also offer custom package solutions whereby the FPGA SOC is stacked with other ASICs or memory.

    If one circles back to the opening comments by Linley Gwennap, who described the current state of the industry as one dominated by Freescale’s PowerPC at 50% share, with MIPS and x86 at a combined 45%, while ARM is just getting started, then we can see that the timing of the ARM-based SOC FPGA is on track with the transitions taking place among the other networking chip vendors. A likely outcome of this ARM transition is that SOC FPGAs, with their leading-edge SerDes and process technology, will end up serving as a prototyping and product vehicle for the other vendors, who are now the storehouses of many man-years of networking expertise. The ARM processors will operate in the control plane while the programmable logic allows the creation of high-performance, custom data paths and packet processing. There is thus much more to this story, and it will likely play out over time.

    Full Disclosure: I am Long AAPL, QCOM, ALTR, INTC


    Laker3 in TSMC 20nm Reference Flow

    by Paul McLellan on 10-16-2012 at 8:10 pm

    SpringSoft, soon to be part of Synopsys but officially still a separate company for now, just announced that Laker³, the third generation of their layout product family, is featured in TSMC’s 20nm Custom Reference Flow.

    Laker 20nm advancements include new double patterning-aware design and voltage-dependent design rule checking (DRC) flows, enhanced flows for layout-dependent effect (LDE) and parasitic-aware layout, and advanced gradient density analysis capabilities. SpringSoft also collaborated with Mentor Graphics Corporation to qualify Laker-Calibre RealTime integration for signoff-quality, real-time DRC during custom layout.

    Another feature of Laker is its symbolic design mode. Dave Reed told me that designers of 20nm standard cell libraries are finding the design rules so restrictive that there are only very limited options, such as reordering transistors, all of which can be done quickly and easily in symbolic mode, with the layout then generated automatically.

    There are so many design rules at 20nm that it is next to impossible for a designer to comprehend them all and so the Laker-Calibre integration (both tools run off OpenAccess) giving signoff-quality DRC while doing interactive layout is pretty much essential. Designing the cells and then running a batch DRC from time to time is nowhere near fast enough. Just making sure the lower layers are all colorable for double-patterning, for example, is something where the designer wants instant feedback.

    Laker’s first involvement with 20nm has been with a product called Laker Test Chip Designer. This is as early as you can get engaged since essentially you are working with the process development people. It is in use at many foundries. This is an interesting product since it doesn’t exactly have mass market appeal (when did you last design a process test chip with dozens of ring oscillators etc) but does get Laker involved very early so that they start to understand the issues and changes as soon as possible. And issues and changes there are at 20nm.

    I would love to show you some 20nm layout so you can see just how different it is from what now seems like a do-what-you-like layout style for older process nodes. But everyone is so paranoid about what competitive manufacturers might deduce from seeing some that I’m not allowed to.



    Kaufman Award Dinner at 50th DAC in Austin

    by Paul McLellan on 10-16-2012 at 8:05 pm

    In past years the Kaufman award, the most prestigious in EDA, has been announced around September and presented during a dinner in October or November in Silicon Valley. EDAC and CEDA, the sponsors of the award, have just announced that this time the award dinner will take place in Austin at the 50th DAC following the early Sunday evening networking event.

    Due to this change of date for the presentation, the deadline for nominations for the award has also been extended until January 31st. So oddly, “this” year’s Kaufman nominee doesn’t even need to be nominated until next year.

    The forms for nominations are here.