
FPGA Prototyping – What I learned at a Seminar

FPGA Prototyping – What I learned at a Seminar
by Daniel Payne on 10-14-2011 at 10:11 am

Intro
My first exposure to hardware prototyping was at Intel back in 1980 when the iAPX 432 chip-set group decided to build a TTL-based wire-wrap prototype of a 32 bit processor to execute the Ada language. The effort to create the prototype took much longer than expected and was only functional a few months before silicon came back. Fast forward to today where you can take your SOC concept and get it into hardware using an FPGA within a day or so when using a Design For Prototyping (DFP) methodology.

Seminar
Synopsys is hosting this worldwide tour for FPGA Prototyping and inviting engineers to learn more about the best practices of prototyping, in hopes of bringing this technology to more SOC projects that want an earlier look at how their hardware and software will really work together. I attended the September 27th seminar in Hillsboro, Oregon, not far from where I live.

At the seminar I used a laptop running Red Hat Linux and received a free book: FPGA-Based Prototyping Methodology Manual (FPMM), authored by: Doug Amos (Synopsys), Austin Lesea (Xilinx), Rene Richter (Synopsys)

It’s a hefty 470 pages, and was dated February 2011. You can download a free version here.

Our main presenter was Doug Amos, Business Development Manager (SNPS). He has been designing FPGAs for decades, prototyping for 10 years, and has a British accent.

The seminar sponsors included: National Instruments, Synopsys and Xilinx. The hardware providers all had setups in the back of the room available for us to poke around and look at. It was quite popular and most of us were mesmerized by the blinking lights, cables, buttons and instrumentation.

Synopsys is doing 18 seminar/workshops around the world (Boston, Somerset, Toronto, Austin, Dallas, Mountain View, Irvine, Bellevue, Hillsboro, Paris, Munich, Reading, Hertzliya, Tokyo, Osaka, Beijing, Shanghai, Hsinchu)

The vision is capturing best practices for FPGA-based prototyping, and creating a place where a worldwide group of prototyping professionals can share what they learn.

There’s an online component of the FPMM with a forum, blogs and download of the book. Start the conversation in the seminar, then continue it online.

The Synopsys IP group does FPGA prototyping for their own designs and we had a few of these engineers in the room during our seminar in Hillsboro to ask questions.

Xilinx is the most popular choice for FPGA prototyping because of their high capacity to support SOC designs. The prototyping methodology could be applied to any FPGA vendor, it isn’t specific to Xilinx.

National Instruments was a sponsor, and once the prototype design is in the lab, they can help you to control all of the equipment.

Other book contributors include tier-one design companies: ARM, Freescale Semi, LSI, STMicroelectronics, Synopsys, TI, Xilinx

The book even has a review Council – Nvidia, ST, TI, Broadcom. This makes the book not just a plug for Synopsys but rather an industry attempt at sharing best practices.

Design For Prototyping (DFP) – this was the new three letter acronym that we learned about. In order to get the best prototyping experience you have to alter your design flow.

Gary Goncher from Tektronix was introduced and later shared his experience using FPGA prototyping.

Torrie Lewis from the Synopsys IP group that designed PCI Express was introduced to us.

Design Flow for Prototyping

Slides

Keynote – Design-for-Prototyping

Why do we prototype? (the survey results from 1648 users)

  • HW/ chip verification
  • HW/SW co-design and verification
  • SW development
  • System validation
  • IP development and verification
  • System integration
  • Post silicon debug
  • Render video or audio

    Verification – did I get what I designed?
    Validation – did I get the right thing?

    Where can prototyping help most? (hardware bring-up, firmware drivers, application SW, code QA and test)

    • physical layer
    • at speed debug
    • regression test (much higher volume of data pushed through)
    • multi-core integration
    • in-field test

    “pre-silicon validation of the embedded software.” – Helena Krupnova, STMicroelectronics

    Advantages – find SW bugs

    • debug multicore designs
    • real time interfaces
    • cycle-accurate modeling
    • real world speeds
    • portable
    • low-cost replication
    • uses same RTL as SoC

    Challenges to FPGA prototyping (survey size 1649)

  • Mapping ASIC into FPGAs
  • Clocking issues
  • Debug visibility
  • Performance
  • Capacity issues
  • Turnaround time to find a bug and fix it
  • Memory mapping
  • other

    Best FPGA prototyping – must be an FPGA expert to get the best results

    • partition across multiple chips
    • hard to achieve silicon speeds
    • complicated to debug
    • RTL must be available
    • RTL not optimized for FPGA
    • Considerable setup time required

    DFP – a methodology to get into the FPGA prototype earlier and more reliably
    – don’t compromise SoC design goals

    How can things be improved? (If you change your SoC design methodology you can get into prototype quicker, saving weeks of time)

    • Procedural Guidelines
    • Technical Guidelines (FPMM, Chapter 9)

    Reuse Methodology Manual (RMM) – co-written by SNPS and MENT; one good practice is to avoid latches in an SoC (also good for prototyping).

    1985 – first FPGA design using Karnaugh map entry.

    Advanced FPGA design today – 5 to 10 million gates possible, RTL, SystemVerilog or C entry

    • verification techniques being used
    • does simulation do the same thing that my board does?

    ASIC design – even when 1st silicon comes back you can still use the prototype to help debug silicon

    • goal is to use SW the first day that silicon comes out
    • does your IP choice exist in FPGA version as well?

    System to Silicon tool flow – FPGA based prototyping is a central theme to enable this methodology

    Example: HDMI IP prototyped in FPGA

    • Xilinx IP, Synopsys IP, new IP under test
    • Used a HAPS board with daughter cards for HDMI IO

    Technology Update – Bigger, faster prototypes, more quickly

    • Stacked silicon interconnect technology (multiple-die stacked together)
    • Virtex-7 2000T
    • Silicon interposer (metal interconnect between FPGA slices)
    • 10K routing connections, ~1ns latency (faster than pins)
    • 2 million gates (4 slices of 500K gates)
    • More Block RAMS (BRAM)
    • Long lines of interconnect, ~1.2ns delays across regions
    • About one year before Synopsys has boards using Virtex-7 2000T

    FPGA-based Prototyping Flow

    • Front end tool (Certify), automated ASIC code conversion and partitioning (about 10 years old)
    • Certify outputs to synthesis tool (Synplify Premier)
    • Identify, instrument the design, RTL name space used, at speed debug using RTL source
    • High Performance ASIC Prototyping (HAPS, Swedish company acquired by Synplicity), boards or boxes
    • Compare Verilog (VCS) versus prototype (using UMRBus) results [FPMM, p336]

    Freescale – designing chips for phones, chips must be tested for certification

    • tried emulators for certification, protocol test ran in 21 days
    • used prototyping for certification using a HAPS 54 board [FPMM, p30], protocol test ran in 1 day

    Prototyping – Design and build your own boards, or use HAPS off the shelf?

    • cost comparison spreadsheet included

    National Instruments – Lab bench has Signal/Pattern generator for stimulus, then scopes for evaluation

    • LabVIEW software controls a PXI box for both stimulus and measurements
    • Virtual Instruments (VI)
    • Create a GUI in LabVIEW to control your designs

    Gary Goncher (Tektronix) – User experience with FPGA-based prototyping

    • Tektronix Component Solutions, subsidiary of Tektronix, custom chips, package, component test, RF& Microwave modules, data converters
    • Designed a high speed data converter board

      • TADC-1000 Digitizer Module, 8 bits, 9.5GHz bandwidth, 12.5 GS/s (10 Gbits/s)
      • Can we get high-speed data into an FPGA?
      • Used a HAPS board to connect their data converter board using a custom daughter board (digitizer interposer board)
    • Customers use this setup for multichannel RF receiver, multichannel waveform generator, waveform generator with feedback, RF-in to RF-out system
    • Customers can prototype their ideas using this Tektronix / HAPS system, then create their own custom boards

    Lab time

    I logged into a laptop and went through the lab exercise using the Certify and Synplify Premier tools, working on a public-domain processor and getting it through the prototype process in under one hour.

    Along the way I ran scripts that automated a lot of the time-consuming grunt work to get my RTL in shape for prototyping. I had used Synplify for FPGA synthesis before and so the whole GUI of Certify was familiar to me and the lab work proceeded as documented.

    We mapped the processor design across two FPGAs just to get a feel for how partitioning worked.

    Most designers should be up and running on their own designs within a day or so; it helps to have an experienced AE around to give you a tutorial on the tools.

    Photos

    Xilinx Zynq 7000


    Synopsys IP Group using FPGA Prototype


    National Instruments setup

    Post Lab

    Some of the issues for creating a prototype are: SoC design may not be FPGA ready – Pads, BIST, Gate-level netlists, cell instances, memories, clocks, IP

    • Automatic Clock Conversion scripts (gated clocks, generated clocks)
    • IP with DesignWare, directly use this IP in FPGA prototype
    • RTL input, partitioning (Certify), synthesis (Synplify Premier), debug (Identify)

    Xilinx showed off their ARM dual-core Cortex-A9 plus FPGA – Zynq™

    • prototype of an FPGA

    Doug Amos – wrap up from Synopsys.

    Design-for-prototyping – thinking about prototyping before the spec is complete

    Modern Verification Environment – stimulus, DUT, observe

    • assertions
    • UVM (Universal Verification Methodology)
    • Hundreds of functions used to stimulate and verify
    • Random stimuli (after functional stimuli), constrained random
    • More effort to verify than to design an SoC, even more SW effort than verification effort

    RTL for prototyping – maybe change only 1% of your files to make them ready for prototyping (pull out BIST, no pads, remove analog, etc.)

    Board-level design has been prototyped – re-verify my functional test bench with my prototype (hardware in the loop), confirm that board behaves as functional simulation (HDL Bridge tool)

    Prototypers can influence both SW and HW specifications.

    Adopting DFP

  • Growing need for prototyping
  • Known methods to prototype
  • High FPGA capacity
  • Design your own prototype, or use off-the-shelf
  • Debug and implementation tools ready
  • The book
  • Join the DFP movement
  • www.synopsys.com/fpmm – blogs, forum

    Summary
    I learned that FPGA Prototyping is a well-understood methodology that can benefit SOC designers who need early access to hardware and software running together. The ebook download is filled with practical examples of how to get the most out of your prototyping experience. I recommend the seminar to any designer who wants a solid introduction to the prototyping process.


  • From IBM Mainframes to Wintel PCs to Apple iPhones: 70% is the Magic Number

    From IBM Mainframes to Wintel PCs to Apple iPhones: 70% is the Magic Number
    by Ed McKernan on 10-12-2011 at 10:51 am

    Time to ring the bell. With the iPhone 4S, Apple has just surpassed the 70% gross margin metric that usually equates to a compute platform becoming an industry standard. IBM’s mainframe achieved it in the 1960s with the 360 series and is still able to crank it out with their Z-series. The combined Intel and Microsoft tandem (Wintel) achieved 70% in the late 1980s and continues to do so across even low-end PCs today. In addition, Intel generates >80% gross margins on its data center, Xeon-based platforms. Crossing 70%, as one can see, is a big deal and usually means that a standard has been created that cannot be overcome in head-to-head competition but only by a succeeding platform.

    Prior to the iPhone, no one thought it was possible to build a highly profitable ecosystem in the phone business – a common theme expressed by writers such as John Dvorak just prior to the release of the first iPhone (see Apple should pull the plug on the iPhone). Back in 2007, Nokia, the leading phone manufacturer, was registering just over 36% in gross margins – not enough of a barrier to keep others out. Apple was entering a highly fractured, but relatively low margin, market.

    Apple’s 70% margin model was recognized by Analyst Chris Whitmore of Deutsche Bank (see Apple expected to achieve manufacturing margins of 70% with iPhone 4S). With Apple at only 5% of the overall phone market, this can be viewed as similar to IBM in the 1960s before the big mainframe ramp or Wintel in the late 1980s after the 386 started ramping. IBM’s mainframe revenue continued to grow through the 1980s, outlasting the minicomputer rage (DEC peaked just before the crash in 1987) but finally falling with the rise of the PC. Today IBM’s mainframe revenues are around $3.5B per year.

    The question on everyone’s mind is how Intel and Microsoft’s revenue will fare in the coming years. IBM’s corporate revenue peak with its mainframe ecosystem occurred in 1990 at $69B. Roughly 3 years after DEC peaked with the VAX. From 1991-1993 IBM went into a deep three-year slide of unprofitability. Lou Gerstner was then hired in April of 1993 but with low hopes of stopping the bleeding.

    It is significant to note that IBM’s slide started 9 years after the PC was introduced and, perhaps more importantly, 16 months prior to the launch of Windows 3.1, which really marked the completion of the PC with the full GUI ecosystem (including MSFT Office apps). Intel was just starting to ramp its 2nd generation 32-bit processor, the 486, in high volume.

    If the iPhone is the new compute platform going forward, then from the above analysis, what we should expect is that the Microsoft-Intel PC volume and revenue should continue to grow for several years (3-4), then plateau and then decline. Through these phases, gross margins will be maintained at a combined > 70% (Intel Processors are 60% GM and Microsoft O/S is 90% GM). The combination of the two is >70% on a PC. There will be corporate legacy business for tens of years.
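    To make the blended gross margin arithmetic concrete, here is a minimal sketch; the processor and OS price points are illustrative assumptions of mine, and only the 60% and 90% gross margin figures come from the paragraph above.

```python
# Illustrative only: blended gross margin of the Wintel content in a PC.
# The $100 CPU and $70 OS prices are assumptions; only the 60%/90% GM
# figures come from the article.
def blended_gross_margin(items):
    """items: list of (revenue, gross_margin) pairs for one platform."""
    revenue = sum(r for r, _ in items)
    gross_profit = sum(r * gm for r, gm in items)
    return gross_profit / revenue

wintel = [
    (100.0, 0.60),  # assumed Intel processor ASP at 60% GM
    (70.0, 0.90),   # assumed Microsoft OS license at 90% GM
]
print(f"Blended Wintel gross margin: {blended_gross_margin(wintel):.0%}")
# -> 72% with these assumed prices, consistent with the >70% claim above
```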

    Intel has probably war-gamed multiple scenarios on how its business plays out the next 5-10 years. From what we can tell in their initiatives and their actions we know several things are true. One is that they will continue to invest heavily in the data center market because it is their fastest growing processor market with >80% gross margins. Another is that they will continue to invest in the mobile market because it cannibalizes desktops and is the nearest competitor to the iPad and tablets. It fills the fabs to pay the bills now and into the future.

    And finally its future is building processors for Apple, Communications chips for Qualcomm and FPGAs for Altera and Xilinx in the most advanced process technology in the world, which in turn will raise the ASPs and margins of its customers. If TSMC can generate 50% gross margins as a broad based foundry, then Intel can generate > 60% (maybe even 70%) gross margins being more than one generation ahead of TSMC and Samsung in process technologies.

    Intel’s announcement of a $5B debt offering in September (see Apple Plays Saudi Arabia’s Role in the Semiconductor Market) is preparing them for the conversion from a pure processor play to a near term combined processor vendor and foundry partner for large volume, leading edge customers who themselves also generate 70% Gross Margins (i.e. Apple, Qualcomm, Altera, Xilinx).


    A New Name: ‘Si2Con’ Arrives October 20th!

    A New Name: ‘Si2Con’ Arrives October 20th!
    by Daniel Nenni on 10-11-2011 at 7:58 pm

    In case you have not heard, the 16th Si2-hosted conference highlighting industry progress in design flow interoperability comes to Silicon Valley (Santa Clara, CA) on October 20th. Si2Con will showcase recent progress of members in the critical areas of:


  • Design tool flow integration (OpenAccess)
  • DRC / DFM / Parasitics interoperability (OpenDFM and OPEX)
  • Low power design (CPF, low power modeling, and CPF/UPF interoperability)
  • Interoperable Process Design Kits (OpenPDK)

    Si2 is also ramping up a brand new effort to define standards for 3D / 2.5D design of stacked die (Open3D), and this event will be an excellent opportunity to meet with Si2 and members who will be present to find out more about it.

    The detailed agenda is located here: http://www.si2.org/?page=1489
    To register on-line: https://www.si2.org/openeda.si2.org/si2_store/index.php#c1
    or a fax/mail form: http://www.si2.org/?page=1254

    Back in the early 2000s, Si2 began hosting workshops around the new OpenAccess vision and technology with the goal of helping advance OpenAccess adoption through sharing of requirements, experiences, and technical knowledge, and broadening the interest and participation in its guidance and development as a true community effort. These workshops grew into the “OpenAccess Conference”, which was successful – so much so that Si2 doubled them to twice per year during the helter-skelter years of rapid changes and initial adoptions around the industry.

    Once Si2 proved to be good stewards of OpenAccess, members brought them into an emerging area of need with the DFM Coalition, which was also covered in a parallel track at the OpenAccess Conference. A similar pattern repeated itself with the Open Modeling Coalition and Low Power Coalition as well. When Si2 began covering progress from the LPC in 2006, they expanded the name slightly to “OpenAccess+ Conference”. Last year, they hinted at the broader scope of coverage with the name “Si2/OpenAccess+ Conference” to get the industry used to associating the long-familiar “OA Conference” with this broader name. All of these were interim, incremental transitions in brand management toward the final “Si2 Conference” name to reflect the full scope of coverage.

    Why does Si2 pull together this annual conference? It’s very simple: the non-profit mission is to promote interoperability, improve efficiency, and reduce costs through enabling standards technologies – and these technologies are only valuable to the extent they are broadly adopted and used. That means bringing the right experts together on a topic to share technical challenges, educate industry peers on these new solutions, and share adoption experiences.

    Enthusiastic attendees say that the main benefits are networking with same-domain technical peers, listening to a wide variety of presenters who take a “flow” perspective as they do, and the ability to see live demos of interoperability progress in action.

    As always, a FREE LUNCH will be provided, sponsored by Cadence Design Systems and a Demo/Networking Reception sponsored by NanGate.


  • Global Semiconductor Alliance Ecosystem Summit Trip Report!

    Global Semiconductor Alliance Ecosystem Summit Trip Report!
    by Daniel Nenni on 10-10-2011 at 7:06 pm

    Being an internationally recognized industry blogger (IRIB) does have its benefits, one of which is free invites to all of the cool industry conferences! The presentations are canned for the most part but you can learn a lot at the breaks and exhibits if you know the right questions to ask, which I certainly do.

    The GSA Semiconductor Ecosystem Summit is an executive conference focused on three core components of the semiconductor business model – supply chain practices, technology evolution, and financial trends. Distinguished executives from leading semiconductor companies will address critical topics including collaboration in the mobile ecosystem, supply chain practices for sustainable partnerships, smart technology development, hardware/software integration and redefining the funding model, to name a few. Visit with supply chain partners on the show floor in between sessions and end the day with a VIP networking dinner.

    The GSA itself is an impressive organization, much more so than the EDA Consortium or the other EDA organizations that are supposed to be helping our industry prosper, but I digress……


    In fact, I give partial credit to the GSA for the overwhelming success of the fabless business model. Spend some time on the GSA website, specifically the Ecosystem Portals, and you will find out why.

    The conference started with a very nice breakfast! Not the usual “continental” garbage. Sometimes I wonder if the people who organize these conferences are trying to kill us slowly with bad food.

    I was a bit disappointed that John Bruggeman did not show up. He was scheduled to moderate a panel but was replaced by Richard Goering. I had lunch with John last month; expect him to reemerge in the cloud in Q1 2012. I also spoke to Richard and asked him to blog on SemiWiki when he has something to say other than Cadence. We all miss Richard’s industry insight.

    First up was Len Jelinek, Chief Analyst, IHS iSuppli:

    Clearly business is worsening around the world with the exception of China. Blame the global debt crisis, unemployment, Justin Bieber’s new haircut. Stagnation is coming from advanced countries versus emerging countries. So the cheap tablet and phone business will prosper?

    Although challenges exist, the semiconductor industry is a great place to work. Corporate spending used to drive semiconductors before 2001 but now people drive our industry (consumer electronics). Mobile phones started it. Tablets are still under way. What is the next semiconductor driver? Ultrabook? Kindle Fire?

    Forecast for 2011 = 2.9%, 2012 = 2.9%, 2010-2015 = 5.4% CAGR – Q3 boom, Q4 inventory control, Q1 debt hangover. Business management is key. 2011 growth is all about mobile. Image sensors, actuators, microprocessors, LEDs, PLD, etc….

    Foundry market will outperform the semiconductor industry: 4.3% in 2011, 7.4% in 2012.

    Manufacturers have the ability to outpace semiconductor demand. Even Apple suppliers? Short term outlook remains challenging. First 450mm equipment will ship in 2014? Will shape CAPEX in 2015 and 2016.

    Next was Sanjay Mehrotra, SanDisk CEO. Flash is a key enabler of the mobile market, mobility revolution! Relentless cost reduction will continue! SanDisk flash replaced film (Kodak). The rest of the presentation looked like a SanDisk pitch, so I went back for more food, cruised the exhibits, and asked those questions I mentioned earlier.

    First I talked to Kurt Wolf from Silicon-IP. Kurt is a long-time IP guy; we worked together when he was the Director of IP at TSMC. If you have any questions about IP outsourcing, vendor & product due diligence, contract negotiation, etc… talk to Kurt, absolutely.

    I didn’t really talk to TSMC because their booth was very busy. The TSMC OIP conference is next week so I will speak to them there.

    I didn’t talk to Samsung because I don’t consider them a credible foundry, sorry.

    I did talk to GlobalFoundries. I have high hopes for them, TSMC needs the challenge! I’m really pissed off at how AMD treated them recently, blaming 32nm yield and revenue shortfalls on GlobalFoundries. For the record: the 32nm process is SOI, it was developed by AMD, and the poorly yielding design (Llano) was designed by AMD, so who’s to blame here? Bad AMD, bad bad bad AMD.

    I also visited UMC, SMIC, TowerJazz, and LFoundry, and I have the same feeling about them as I do about GlobalFoundries: we really need them to be successful. That is how our industry continues to grow!

    Lots of IP companies showed up but these are the ones that followed up with me on email, in alphabetical order:

    I spoke with Mahesh Tirupattur, CEO of Analog Bits. Mahesh and I are friends, great guy and a pleasure to work with. Analog Bits is the leading supplier of low-power, customizable analog IP for easy and reliable integration into modern CMOS digital chips. When asked why he was here, Mahesh actually gave me a bloggable answer:

    “Participating in the GSA Semiconductor Ecosystem Summit provides us an opportunity to meet with other semiconductor industry leaders and understand each others’ perspectives on the technical and business issues impacting our industry. More importantly, the event provided us with an opportunity to explore more in-depth discussions with current customers on how our integrated clocking and interface IP product can resolve their current challenges and to revive some relationship opportunities that have fallen dormant.”

    I spoke with Alvand Technologies CEO Mansour Karamat. Alvand is a leading analog IC company that specializes in high-performance data converters (ADC/DAC) and Analog Front End (AFE) for a broad range of applications, such as wireless (Wi-Fi, WiMax, LTE) and wireline (10GBASE-T) communication systems, ultrasound, mobile TV and advanced semiconductor inspection equipment. Several of the top semiconductor companies that I work with use Alvand IP and highly recommend it, yes they do.

    I know Infotech from SemiWiki, quite a few of their engineers are registered members. Infotech is also a GSA Ecosystem Summit premier sponsor which I greatly appreciate. In case you don’t know Infotech, they are a single-source provider of end-to-end services for the semiconductor industry providing “concept to silicon and system prototype” solutions that support ASIC/FPGA engineering and embedded software development. I had an interesting SEO discussion with Jennifer Lund, Marketing Director. She is very sharp and totally gets social media. You can tell by their website, check it out.

    Mixel is at all of the conferences supporting the industry ecosystem for mobile design. Mixel is a leading provider of mixed-signal IP cores with a particular focus on mobile PHY such as MIPI D-PHY, M-PHY, and DigRF PHY. Google MIPI and you will see Mixel right under the MIPI Alliance. Google loves Mixel. Ashraf Takla, Mixel CEO, is the guy to talk to about MIPI.

    I met Brian Gardener, Vice President of Business Development at True Circuits. I have not worked with True Circuits before but certainly know who they are. They develop a broad range of Phase-Locked Loops (PLLs), Delay-Locked Loops (DLLs), and other mixed-signal designs. Brian was there because True Circuits believes the GSA does the best job of bringing together and representing their customers – semiconductor suppliers – and their broad needs. You get to see the leaders of these companies, and hear the issues that keep these companies up at night. GSA has also been at the forefront of highlighting semiconductor IP issues, absolutely.

    Lunch was great. The dessert table really hit the spot! I blog for dessert!

    Tudor Brown, President ARM, did a Keynote Address: Security & the Smart Technology Evolution. It was largely ARM centric but here are my notes:

    Mobile internet has redefined the WWW; mobile devices are out-shipping desktop PCs and redefining consumer electronics.

    Security is the big challenge. Smartphone attacks are increasing, Android botnets, etc… A parallel to what happened in the PC space, but of course a much larger market. Intel bought McAfee for this reason. Mobile is the security nexus. Phones do not crash. Who reads phone manuals? Phones are easy to attack.

    Losing them is the #1 security hole, physical security. Expanded utility equals expanded security risk. Very personal data, our phones are our identity. Hardware, software, and services security is required. Hack attack is software that exploits weaknesses in the OS or Apps. Open OS’s are much more vulnerable (Android?). Shack attack, as in Radio Shack, more sophisticated. Lab attacks, probe chips, electron level imaging, etc……

    There is a much better security discussion on SemiWiki, “Demystifying Cyber Security – Myths vs Realities’ Perspective/Event Summary”. I also did a security blog, “Semiconductor Security Threat”, that is worth reading.

    Hopefully someone else will comment about the rest of the program since I had to drive carpool and missed it:

    1:45 p.m. – 2:45 p.m.
    Improving Device Performance — Simplifying Software/Hardware Integration

    3:15 p.m. – 3:40 p.m.
    Macro-Economic Trends — The Tough Road Ahead

    3:45 p.m. – 4:45 p.m.
    Semiconductor Investment — Redefining the Funding Model

    5:30 p.m. – 6:30 p.m.
    A State of the Aart Conversation with Scott McGregor

    Anyone? I still have some iPad2s to give away.



    Mask and Optical Models–Evolution of Lithography Process Models, Part IV

    Mask and Optical Models–Evolution of Lithography Process Models, Part IV
    by Beth Martin on 10-10-2011 at 4:50 pm

    Will Rogers said that an economist’s guess is liable to be as good as anyone’s, but with advanced-node optical lithography, I might have to disagree. Unlike the fickle economy, the distorting effects of the mask and lithographic system are ruled by physics, and so can be modeled.

    In this installment, I’ll talk about two critical components of process models: mask models and optical models. Mask models come in two different flavors: those related to the 1D and 2D geometries on the mask and those related to 3D effects. Optical models are fundamental to representing the lithography imaging sequence, and have a well-established simulation methodology.

    Mask Models

    Historically, OPC models were calibrated based on an assumed exact match of the physical test mask and the test pattern layouts representing the test mask. However, the mask patterning process exhibits systematic proximity effects such as corner rounding and isolated-to-dense bias. In the past, it was acceptable to lump the systematic mask proximity effects into the resist process model because the mask manufacturing process is usually invariant for the life of the wafer technology. This means, however, that anytime the mask process changes substantially, the OPC model has to be recalibrated.

    More significantly, the OPC model incorrectly ascribes mask behavior to the photoresist model, which limits the predictability of the model. Recent work on mask process proximity modeling is changing this (Tejnil 2008; Lin 2009; Lin 2011). This work involves calibrating a mask process model (MPC) based on mask CD or contour measurements, then referencing the MPC model to describe the mask input to the wafer OPC calibration flow. A 50% reduction in mask CD variability can be realized with this approach.

    Mask models have also evolved to account for the introduction of alternating phase shift masking (PSM) into manufacturing, which raised awareness of the impact of mask topography on wafer lithography. Ultimately this aggressive PSM approach was replaced with more manufacturable solutions, including attenuated-PSM, which eliminated etched quartz in favor of a thin, partially absorbing layer.

    The extensively used Kirchhoff, or flat mask approximation, assumes that the mask is sufficiently thin that the diffracted light is computed by means of scalar or vector diffraction theory. This is in contrast to rigorous electromagnetic field (EMF) simulation, which accounts explicitly for the topography and refractive indices of the mask materials, and solves Maxwell’s equations in 3D (a highly compute intensive operation not suitable for full-chip).


    Figure showing near field intensity calculated by rigorous 3D electromagnetic field simulation.

    There are many approximation methods that enable a reduction of this 3D EMF system to simpler 1D or 2D representations. Comparison to full rigorous simulation shows an advantage in accuracy by accounting for 3D mask effects versus the Kirchhoff mask, but in practice, the process model can easily adapt to effectively account for the same CD behavior. This is analogous to the mask CD effect described above. Recently, even thinner absorbing layers have been reported, which further reduce the mask 3D contribution to wafer CD variation, thus rendering the continued use of the Kirchhoff approximation a reasonable trade-off.

    3D mask effects are sensitive to the angle of incidence of the light impinging upon the mask, and for high NA systems, it is necessary to account for this effect. Approximation methods are available that effectively sectorize the source then calculate the mask signal corresponding to each sector. These approaches deliver accuracy within a few nm of the rigorous simulation result.

    Optical Models

    Optical models represent the lithography imaging sequence. As introduced above, full chip simulations commonly employ the SOCS (sum of coherent systems) approximation to represent the intensity as a sum of convolutions of the mask m with k different optical kernels f_n, as described in Equation 1.

    I(x, y) ≈ Σ_{n=1..k} φ_n |(m ⊗ f_n)(x, y)|²     (Equation 1)

    For optical models, there is a strong linear dependence of simulation time on the number of SOCS decomposition kernels used in the simulation. In addition, there is a quadratic dependence on the optical diameter (OD) associated with the model. The magnitude of the eigenvalue coefficients φ_n in Equation 1 decays quickly as n increases, so in practice, as a compromise between accuracy and runtime, it is often the case that 100 or fewer optical kernels are used with an OD < 2.0 μm.
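    To make Equation 1 concrete, here is a minimal numpy sketch of the SOCS image computation. The Gaussian kernels and decaying weights below are placeholders of my own; in a real OPC model the kernels f_n and coefficients φ_n come from decomposing the optical system, not from anything in this code.

```python
# Minimal SOCS sketch: intensity as a weighted sum of |mask (*) kernel|^2.
# Kernels and weights are placeholders, not a real optical decomposition.
import numpy as np
from scipy.signal import fftconvolve

def socs_intensity(mask, kernels, weights):
    """I(x,y) ~= sum_n phi_n * |(m conv f_n)(x,y)|^2  (Equation 1)."""
    image = np.zeros_like(mask, dtype=float)
    for f_n, phi_n in zip(kernels, weights):
        field = fftconvolve(mask, f_n, mode="same")  # coherent field for kernel n
        image += phi_n * np.abs(field) ** 2          # sum of coherent systems
    return image

# Toy example: a 256x256 mask with a single line feature
mask = np.zeros((256, 256))
mask[:, 120:136] = 1.0

# Placeholder kernels: Gaussians of increasing width, quickly decaying weights
xx, yy = np.meshgrid(np.arange(-16, 17), np.arange(-16, 17))
kernels = [np.exp(-(xx**2 + yy**2) / (2.0 * s**2)) for s in (2.0, 4.0, 6.0)]
kernels = [k / k.sum() for k in kernels]
weights = [1.0, 0.3, 0.1]  # phi_n magnitudes decay quickly with n

aerial = socs_intensity(mask, kernels, weights)
print(aerial.shape, float(aerial.max()))
```

    Note how the cost grows linearly with the number of kernels (one convolution each) and quadratically with the optical diameter, since each kernel must span the OD in both x and y.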

    There are many exposure-related factors influencing wafer CDs which can be represented in the simulation. These include wavelength, numerical aperture, ambient refractive index, film stack optical properties, exposure dose and focus, focus blur (induced by stage tilt, stage synchronization errors, or laser bandwidth), illumination intensity and polarization, pellicle thickness, and projection optics aberrations. The model parameters associated with these factors can be input as known values or can be optimized over a user-input range during calibration. Care must be taken, however, in allowing these parameters to move too far from their design values, as this may result in a less physical model. Exposure dose and focus are adjusted in the simulator to empirically match the CD behavior in terms of the scanner dose and focus. A Jones Matrix representation of the entire optical system, or alternatively individual Zernike coefficients for system aberrations can be input into the simulator directly.

    It is well known that the optical proximity effect is highly dependent upon the illumination profile, and that the actual profile differs from the profile requested by the scanner recipe. Various models have been developed to describe the actual profile analytically, but it is common today to input a symmetrized version of the in-situ measured pupilgram instead of the as-designed version.

    The continuous development of mask and optical models that better represent real silicon is crucial to the success (both in technology and economics) of optical lithography in upcoming process nodes. In the next installment of this series, I will discuss the semi-empirical resist and etch models used for full-chip correction and verification.

    — John Sturtevant

    To read the full technical paper on this topic, download Challenges for Patterning Process Models Applied to Large Scale.
    Want to read past installments of this series? Part I, Part II, Part III


    Solido & TSMC Variation Webinar for Optimal Yield in Memory, Analog, Custom Digital Design

    Solido & TSMC Variation Webinar for Optimal Yield in Memory, Analog, Custom Digital Design
    by Daniel Nenni on 10-09-2011 at 4:01 pm

    Solido has announced webinars for North America, Europe and Asia on October 12-13. They will be describing the variation analysis and design solutions in the TSMC AMS Reference Flow 2.0 announced at the Design Automation Conference this year.

    “We are pleased to broaden our collaboration with Solido in developing advanced variation and design methodology in AMS Reference Flow 2.0. TSMC customers can use Solido Variation Designer with TSMC 28nm process technology to achieve better product quality in their AMS designs,” said Suk Lee, Director of Design Infrastructure Marketing, TSMC.


    Variation effects are critical to consider for designers working on nanometer designs, impacting the design’s electrical characteristics. In a recent survey, variation-aware custom IC design was ranked the #1 area requiring advancement over the next two years. The survey showed 53% of design groups missed deadlines or experienced re-spins due to variation issues, designers experienced an average 2 month delay due to variation issues, and designers spent an average 22% of design time on variation issues.


    Solido Variation Designer products were selected for TSMC’s Advanced PVT and Advanced Monte Carlo sub flows. Solido products work with TSMC process and device models, improving design performance, power and area, maximizing parametric yield and avoiding re-spins and project delays.


    The webinar will be presented by Nigel Bleasdale, Director of Product Management at Solido and Jason Chen, Design Methodology and Service Marketing at TSMC. Topics covered will be:


  • Variation challenges in custom IC design
  • Variation-aware solutions available in the TSMC AMS reference flow
  • Methods to develop and verify designs over PVT corners in less time
  • How to efficiently apply Monte Carlo techniques in design sign-off
  • How Monte Carlo analysis is really possible up to 6 sigma (see the sketch after this list)
  • Customer case studies of the above methods
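    On the 6-sigma point, a quick back-of-the-envelope calculation (mine, not Solido's method) shows why brute-force Monte Carlo is impractical at that level and why specialized high-sigma techniques are needed:

```python
# Why plain Monte Carlo struggles at 6 sigma: the failure probability is so
# small that billions of SPICE runs are needed to observe even a few fails.
from scipy.stats import norm

sigma = 6.0
p_fail = norm.sf(sigma)          # one-sided Gaussian tail, ~9.9e-10
fails_wanted = 10                # rough count for a usable yield estimate
samples_needed = fails_wanted / p_fail

print(f"P(fail) at {sigma:.0f} sigma ~ {p_fail:.2e}")
print(f"Monte Carlo samples for ~{fails_wanted} fails: {samples_needed:.1e}")
# ~1e10 simulations, hence importance sampling / high-sigma approaches
```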

    Register here (www.solidodesign.com/page/tsmc-solido-webinar/) for a 1 hour webinar:
    North America: Wed October 12, 2011 – 10am PDT
    Europe: Wed October 12, 2011 – 2pm BST/3pm CET
    Taiwan: Thurs October 13, 2011 – 9am CST
    Japan: Thurs October 13, 2011 – 10am JST
    Korea: Thurs October 13, 2011 – 10am KST


  • How ST-Ericsson Improved DFM Closure using SmartFill

    How ST-Ericsson Improved DFM Closure using SmartFill
    by Daniel Payne on 10-07-2011 at 2:38 pm

    DFM closure is a growing issue these days even at the 45nm node, and IC designers at ST-Ericsson have learned that transitioning from dummy fill to SmartFill has saved them time and improved their DFM score.

    The SOC
    ST-Ericsson designed an SOC for mobile platforms called the U8500 and their foundry choice was a 45nm node at STMicroelectronics.

    The chip had to balance battery life, graphics and a multitude of competitive features. This SOC includes:

    • Single-chip baseband and APE
    • HSPA+ Modem Release 7
    • SMP Dual ARM Cortex A9 1GHz multicore processor
    • Symbian foundation, Linux Android, MeeGo and Windows Mobile OS support
    • High-definition 1080p camcorder and video
    • About 100 hours audio playback
    • 10 hours HD video playback
    • TV out using HDMI
    • Video and imaging accelerators

    Low power goals were achieved by using:

    • Adaptive Voltage Scaling
    • Dynamic voltage and frequency Scaling
    • CPU Wait for Interrupts (WFI)
    • RAM data retention during WFI
    • Fast wake-up
    • Lower power IO: HDMI, MIPI, LP DDR2, USB

    DFM Closure

    The fab (STMicroelectronics) at first provided standard DummyFill rules in their Calibre deck to ST-Ericsson. The result was that DummyFill didn’t meet all of the density related constraints in their process. There were two density rules not being satisfied with the DummyFill approach (fill first, verify after fill is completed).

    Dummy fill example where extra shapes are added to an IC layer in order to meet layout density requirements.

    The second approach was to tweak the dummy fill rule deck in order to reach a DFM clean layout. Even this approach was not a complete success as shown in the following comparison table:

    The final results in the third row of the comparison table show that using the new SmartFill capability in the Calibre YieldEnhancer tool produced a layout with 0 DRC errors and the lowest DFM score.

    DummyFill adds new fill shapes across the entire chip first, then you have to run a DRC tool as a second step to see your DRC and DFM score results.

    In contrast, the SmartFill approach analyzes the new fill shapes as they are being placed to make sure they are DRC clean while meeting the fill constraints. To the EDA tool user it becomes more of a push-button approach instead of an iterative one; you just need to make sure that your foundry process supports Calibre YieldEnhancer.
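    To see the difference between the two flows, here is a toy sketch on a simplified grid model. It is only my illustration of the concept (fill first and verify afterward versus checking the rule as each shape is placed), not how Calibre YieldEnhancer is actually implemented.

```python
# Toy contrast of "fill first, verify after" vs. rule-aware filling.
# Grid model and single spacing rule are simplified illustrations only.
import numpy as np

rng = np.random.default_rng(0)
layout = (rng.random((64, 64)) < 0.2).astype(int)   # 1 = existing design metal

def spacing_ok(grid, r, c):
    """Simplified rule: a fill cell may not touch design metal (8 neighbors)."""
    r0, r1 = max(r - 1, 0), min(r + 2, grid.shape[0])
    c0, c1 = max(c - 1, 0), min(c + 2, grid.shape[1])
    return grid[r0:r1, c0:c1].sum() == 0

def dummy_fill(grid):
    """Fill every empty cell, then count violations in a separate 'DRC' pass."""
    fill = (grid == 0).astype(int)
    violations = sum(not spacing_ok(grid, r, c)
                     for r, c in zip(*np.nonzero(fill)))
    return fill, violations

def smart_fill(grid, target_density=0.5):
    """Place fill only where the spacing check passes; stop at target density."""
    fill = np.zeros_like(grid)
    for r, c in zip(*np.nonzero(grid == 0)):
        if (grid.sum() + fill.sum()) / grid.size >= target_density:
            break
        if spacing_ok(grid, r, c):
            fill[r, c] = 1
    return fill, 0   # violations are avoided by construction

for name, (fill, drc) in [("DummyFill", dummy_fill(layout)),
                          ("SmartFill", smart_fill(layout))]:
    density = (layout.sum() + fill.sum()) / layout.size
    print(f"{name}: density={density:.2f}, spacing violations={drc}")
```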

    The U8500 SOC has 1.2 million standard cells at the top level along with 32Mbit of SRAM, so saving time in DFM closure really helped out.

    Fill and Timing
    Because SmartFill (or DummyFill) adds new shapes to layers the timing of an IC will be impacted. A new fill shape adds extra parasitic capacitance which in turn can impact timing results.

    STMicroelectronics ran a digital test chip in their 45nm process to make some comparison measurements of the DummyFill versus SmartFill options:

    With SmartFill the table shows that the run time is close to that of DummyFill, while the GDS size got smaller, more fill shapes were created, total filled area was reduced, no DRC errors were reported, and the capacitance values were actually reduced.

    The “purpose tile_O” shapes are the largest fill shapes and they have a higher spacing requirement between the layout and these fill shapes. Having fewer of these shapes is an improvement because it creates less OPC work and helps reduce capacitance values, which in turn means less impact on timing.

    Design teams typically re-run timing only on their most critical paths after SmartFill, rather than re-running all static and dynamic analysis.

    With Calibre YieldEnhancer you can even provide a list of critical nets as an input so that SmartFill can avoid changing your timing on these nets during the fill process.

    Rapid Thermal Annealing (RTA)
    During semiconductor manufacturing there are thermal steps used to create shallow junctions, which then affect transistor performance: Vth (threshold voltage) shifts and Vth variation. The three things that contribute to RTA effects are the pattern density of the layout, the RTA temperature, and the amount of time spent in annealing. As designers we can only impact the pattern density of the layout.

    Fill shapes created by Calibre SmartFill take this into account and control the reflectivity of the wafer surface. This helps to control the electrical variability introduced by RTA.

    Summary
    To ensure DFM closure, maintain timing integrity and reduce variability effects at the 45nm node you should consider moving from the DummyFill approach to a SmartFill approach. ST-Ericsson and STMicroelectronics have achieved a better DFM score by using Calibre SmartFill on their U8500 SOC. More details about this topic can be found in this white paper.


    Jasper User Group Meeting

    Jasper User Group Meeting
    by Paul McLellan on 10-07-2011 at 11:59 am

    Jasper’s Annual User Group Meeting is on November 9th and 10th, in Cupertino California. It will feature users from all over the world sharing the best practices in verification. If you are a user of Jasper’s products then you should definitely plan to attend. This year there is so much good material that the meeting is two days long.

    Of course there will be many presentations by Jasper themselves. But much of the meeting will be taken up with users presenting their own experiences. If you are a Jasper customer and are interested in proposing a presentation, then contact Rob van Blommestein at robvb@jasper-da.com.

    The full agenda is still being developed, but already there are user presentations on:

    • Simulation task reduction
    • RTL verification
    • X-propagation
    • Using formal to verify real CPUs
    • RTL sequential equivalence checking
    • Micro-architecture validation
    • SoC integration
    • Macro verification
    • RTL Development
    • Post-silicon debug

    and presentations by Jasper on:

    • Product roadmap
    • Hints and tips for using Jasper solutions more effectively
    • Architecture validation
    • Creating verification IP
    • Introduction to intelligent proof kits
    • Property synthesis

    To find out more about the meeting, or to register for the event, go here.


    Testing, testing… 3D ICs

    Testing, testing… 3D ICs
    by Beth Martin on 10-06-2011 at 7:01 pm

    3D ICs complicate silicon testing, but solutions exist now to many of the key challenges. – by Stephen Pateras

    The next phase of semiconductor designs will see the adoption of 3D IC packages, vertical stacks of multiple bare die connected directly through the silicon. Through-silicon vias (TSV) result in shorter and thinner connections that can be distributed across the die. TSVs reduce package size and power consumption, while increasing performance due to the improved physical characteristics of the very small TSV connections compared to the much larger bond wires used in traditional packaging. But TSVs complicate the test process, and there is no time to waste in finding solutions. Applications involving the stacking of one or more memory die on top of a logic die, for example using the JEDEC Wide IO standard bus interface, are ramping quickly.

    One key challenge is how to test the TSV connections between the stacked memory and logic die. There is generally no external access to TSVs, making the use of automatic test equipment difficult if not impossible. Functional test (for example, where an embedded processor is used to apply functional patterns to the memory bus) is possible but is also slow, lacks test coverage, and offers little to no diagnostics. Therefore, ensuring that 3D ICs can be economically produced calls for new test approaches.

    A new embedded test method that works for test and diagnostics of memory on logic TSVs is built on the Built-In Self-Test (BIST) approach that is already commonly used to test embedded memories within SoCs. For 3D test, a BIST engine is integrated into the logic die and communicates to the TSV-based memory bus that connects the logic die to the memory as illustrated in Figure 1.

    For this solution to work, two critical advances over existing embedded memory BIST solutions were necessary.

    One is an architecture that allows the BIST engine to communicate to a memory bus rather than directly to individual memories. This is necessary partly because multiple memories may be stacked within the 3D IC, but mostly to allow the BIST engine to test the memory bus itself, and hence the TSV connections, rather than just the memories. Test algorithms tailored to cover bus-related failures are used to ensure maximum coverage and minimal test time. Because of this directed testing of the memory bus, the 3D BIST engine can also report the location of failures within the bus, which allows diagnosis of TSV defects.
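    To illustrate the idea of directing patterns at the bus itself and localizing the failing connection, here is a small conceptual sketch: a walking-ones data-bus test against a simulated memory with one stuck-at TSV. It is only a model of the general technique, not the actual BIST engine or its algorithms.

```python
# Conceptual sketch: walking-ones over a memory data bus, reporting which
# data line (i.e., which TSV) fails. Not the actual Tessent MemoryBIST logic.
BUS_WIDTH = 32
STUCK_AT_ZERO_BIT = 7            # injected defect: data TSV 7 stuck at 0

memory = {}

def bus_write(addr, data):
    """Model the TSV data bus: the defective line always carries 0."""
    memory[addr] = data & ~(1 << STUCK_AT_ZERO_BIT)

def bus_read(addr):
    return memory.get(addr, 0)

def walking_ones_bus_test():
    """Drive a single 1 across each data line; report the lines that fail."""
    failing_bits = []
    for bit in range(BUS_WIDTH):
        pattern = 1 << bit
        bus_write(addr=bit, data=pattern)    # one address per data line suffices
        if bus_read(addr=bit) != pattern:
            failing_bits.append(bit)
    return failing_bits

print("Faulty data-bus TSV(s):", walking_ones_bus_test())   # -> [7]
```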

    The second critical advance in this new 3D BIST solution is that it is run-time programmable. Using only the standard IEEE 1149.1 JTAG test interface, the BIST engine can be programmed in silicon for different memory counts, types, and sizes. Because the BIST engine is embedded into the logic die and can’t be physically modified without a design re-spin, this adaptability is essential. With full programmability, no re-design is needed over time even as the logic die is stacked with different memories and memory configurations for different applications.

    An automated flow is available for programming the BIST engine (for wafer or final package testing) to apply different memory test algorithms, to use different memory read/write protocols, and to test different memory bus widths and memory address ranges. The patterns needed to program the engine through the JTAG interface pins are generated in common formats, such as WGL or STIL, to be loaded and applied by standard automatic test equipment.

    Because this 3D test solution is embedded, it needed to have minimal impact on design flows and schedules and no impact on design performance. This is done through an automated RTL flow that integrates the BIST engine into the logic die and fully verifies its operation. The flow is compatible with all standard silicon design flows and methodologies. There is no impact to design performance because the BIST engine intercepts the memory bus with multiplexing logic placed at a point in the functional path with sufficient slack.

    This new embedded solution for testing TSVs between memory and logic die is cost effective, giving the best balance between test time and test quality. Engineers considering designing in 3D need to feel confident that they can test the TSVs without excessive delay or risk. This solution shows how that can be achieved and opens the way for a more rapid adoption of 3D design techniques.

    Stephen Pateras is product marketing director for Mentor Graphics Silicon Test products.

    The approach described above forms part of the functionality of the Mentor Graphics Tessent® MemoryBIST product. To learn more, download the whitepaper 3D-IC Testing with the Mentor Graphics Tessent Platform.


    Circuit Simulation and Ultra low-power IC Design at Toumaz

    Circuit Simulation and Ultra low-power IC Design at Toumaz
    by Daniel Payne on 10-06-2011 at 4:31 pm

    I read about how Toumaz used the Analog Fast SPICE (AFS) tool from BDA and it sounded interesting, so I set up a Skype call with Alan Wong in the UK last month to find out how they design their ultra low-power IC chips.


    Interview

    Q: Tell me about your IC design background.
    A: I’ve been at Toumaz almost 8 years now and before that at Sony Semi for 5.5 years. My IC design experience goes back to 1997, then starting in 2005 I’ve been in the IC design group for wireless.

    Q: Does Toumaz have a CAD group?
    A: Yes, we do have two CAD engineers.

    Q: What EDA tools are you using?
    A: For RTL simulation we have Mentor Questa, and on physical verification we’re using Calibre. Place & Route it’s Synopsys and for IC layout we’ve got Cadence Virtuoso and Assura (verification). Circuit simulation we have the Analog Fast SPICE tool from Berkeley Design Automation.

    Q: How about your product life cycle?
    A: Most of our IC designs go from definition to Tape out in about 18 months, some are quicker. Foundry choices have been: TSMC, IBM, Infineon, UMC.

    Q: What’s the first thing that you do when silicon comes back from the fab?
    A: With our engineering samples we first test to see if our spec is met, then we start to characterize with initial functional vectors plus the specialized analog RF testing.

    Q: For circuit simulation tools, what have you used before?
    A: We’ve used Cadence Spectre tools before, then we switched over to BDA. We found better results with the Berkeley tool in terms of speed. In our evaluation we used internal designs, multiple test benches and clock circuits, taking several weeks to complete our benchmarking. Overall we saw about a 5X speedup with AFS.

    Q: When the foundry provides SPICE models have there been any issues?
    A: Yes, we had some issues with model cards and BDA. There were some differences between Spectre and BDA, causing BDA to make some tweaks in their BSIM4 RF models.

    Q: How do you simulate the whole mixed-signal chip?
    A: For Mixed-signal chips we model at two levels of abstraction, behavioral and transistor level. During simulation we can swap out transistor level versus behavioral to get the speed and accuracy trade-off we need. For analog blocks we simulate at the transistor level.

    Q: Why not use a Fast SPICE simulator for full-chip circuit simulation?
    A: Our experience shows that Full-chip with Fast SPICE gives fast but wrong answers.

    Q: Which process nodes are you designing at?
    A: Quite a range: 130nm and 110nm, some 65nm nodes.

    Q: How large are your design teams?
    A: Our design team for an SOC uses a Personal Area Network and has some layout designers, firmware team, test and production engineers, maybe 20 people in total.

    Q: What is your version control system?
    A: We have used SVN for just the RTL coding side and also tried the Design Management Framework within Cadence IC 5. Our plan is to start using ClioSoft soon in Cadence IC 6.

    Q: Are there other circuit simulation tools that you’ve looked at?
    A: Quite a few: Synopsys, Agilent and Golden Gate.

    Q: What’s really important for your circuit simulations?
    A: Accuracy and the ability to do long simulation runs, some are up to one week in duration. We do some top level parametric simulation, try different scenarios, and run lots of configurations.

    Q: What needs improvement for your circuit simulation?
    A: Well, there’s some room for improvement with co-simulation, it can be a bit flakey. We co-simulate with Verilog.

    Q: How often does BDA update their software?
    A: With BDA there are updates every few months, so we just wait for the new features that we need then install it about once a quarter.

    Q: What about your layout tools from Cadence?
    A: We freeze the Cadence toolset at the start of each project and update tools only if really needed during a project.

    Q: What was the learning curve for the BDA circuit simulator?
    A: The learning curve was short for us; we now know how to set up the options to get the accuracy vs. speed trade-off we need.

    Q: What’s your wish list for BDA?
    A: I would prefer more flexible license terms throughout the year (like Cadence credits). We always want circuit simulation to be faster and more accurate, with improved DC convergence.

    Summary
    Toumaz uses a mixture of EDA tools to design their ultra low-power IC designs, working with vendors like: Berkeley Design Automation, Cadence, Mentor Graphics, Synopsys and ClioSoft.