
Analog and Full Chip Simulation at Micron
by Daniel Payne on 05-08-2014 at 12:50 pm

IDM companies like Micron use SPICE circuit simulators during the design phase to predict timing, currents, and power on their custom IC chip designs at the transistor level. A senior memory design engineer at Micron named Raed Sabbah talked today in a webinar about how the embedded solutions group uses the FineSim circuit simulator from Synopsys during the design and verification flow. My background includes DRAM design, so I felt at home listening to Raed.


Chip photo of 16nm NAND for 16GB storage


Get that Smartphone Chip out of my Wearable!
by Daniel Nenni on 05-08-2014 at 11:30 am

Last week, I had the pleasure to present at the Linley Group Mobile Conference. My presentation was part of the Wearable Device Session, which examined wearables from several different angles including software, sensor, processor, and IP.

As the smartphone market is maturing and the pace of innovation generally slowing, there is growing industry innovation and interest in wearables. As an IP company, Imagination Technologies sees a great opportunity in this market, and our CPUs, GPUs, and connectivity technologies are already finding their way into a range of wearable devices.

Wearable devices represent a potentially large market that is still being defined. There will be several different uses for wearables – healthcare, home monitoring, industrial controls and much more. One common theme that came out during this session is that in the future, many of these devices will be running 24/7 on our bodies!

We believe that several challenges must be overcome before the wearables market truly takes off. One of these is power consumption and battery life. While it is acceptable for most consumers to charge their smartphone every day (or even multiple times a day), a true wearable device will need to function for days, months, or even a year or longer.

Addressing this challenge and others will force more innovative designs to enter the market, designs that are quite different from today’s smartphone SoCs.

During my talk, I presented two early designs from Imagination’s licensees. Ineda Systems designed a multi-level processor called Dhanush that incorporates unique combinations of Imagination’s MIPS CPUs and highly efficient PowerVR GPUs. The new Ineda wearable processing units (WPUs) represent one of the first SoC architectures built specifically for a new generation of devices including fitness bands, smartwatches and IoT. Its innovative design will allow the Dhanush WPU to break the one day charge cycle – potentially lasting up to a month or more!

The second design I discussed is from Ingenic Semiconductor. Ingenic took a different approach to low power design. By starting with low power architecture and marrying it with low power semiconductor technology, Ingenic designed a highly integrated MIPS-based SoC. By effectively managing power usage, Ingenic is also able to offer SoCs that break the single day charge cycle.

During the panel discussion, it became clear how different wearables are from smartphones. While smartphone designs are primarily focused on function, wearables must focus on a different factor: fashion. Wearables by definition will become a visible fashion accessory. And since these devices will often be worn 24 hours a day, they will easily become fashion statements that define a user's style.

I foresee companies that have never before been in the business of technology offering fashionable wearables that combine everyday function with form. Will wearables convert techy nerds into fashion icons? I can’t wait to see how this story turns out!

By John Min, Director of Solutions Engineering, Imagination Technologies

About Imagination Technologies

Imagination is a global technology leader whose products touch the lives of billions of people throughout the world. The company's broad range of silicon IP (intellectual property) includes the key multimedia, communications and general purpose processors needed to create the SoCs (Systems on Chips) that power all mobile, consumer, automotive, enterprise, infrastructure, IoT and embedded electronics. These are complemented by its unique software and cloud IP and system solution focus, enabling its licensees and partners to get to market quickly by creating and leveraging highly differentiated SoC platforms. Imagination's licensees include many of the world's leading semiconductor manufacturers, network operators and OEMs/ODMs who are creating some of the world's most iconic and disruptive products. See: www.imgtec.com. Follow Imagination on Twitter, YouTube, LinkedIn, RSS, Facebook and Blog.




What Executives Say About IP Licensing
by Pawan Fangaria on 05-08-2014 at 7:00 am

In the fabless world of semiconductor design, IP components have become indispensable partners and have enabled the development of complex billion-gate SoCs. The IP business has been growing rapidly over the past couple of years, and that growth is set to continue, as reflected by the growing number of IP vendors across the world.

Well, that flourishing business is just the visible tip of the iceberg. There are challenges in actual operations when an IP is integrated into an SoC: how do you ensure that the IP selection was correct, its quality was assured, its licensing was proper, and that it met the desired standards and many other criteria, including seamless integration within the overall PPA (Power, Performance, Area) constraints of the SoC? And more importantly, from the business point of view: is it what the customer required? Does it fit into the price band that can provide the desired level of profitability? It reminds me of Dassault Systemes' strategy for increasing the profitability of the semiconductor business.

Effective management of IPs has become a critical success factor for any semiconductor design: providing adequate security and easy licensed access, the right level of packaging, good navigation for selecting from a broad spectrum of IPs, evaluation, design margin to accommodate variations within specified limits, and the like. To sustain the growth of the IP and design businesses and justify investments in both, a healthy ecosystem must be maintained that satisfies these parameters, minimises IP integration cycles, shortens overall design cycle time, and improves productivity and profitability.

It's an opportune time to get insight into these aspects of the design and IP business, as industry leaders are meeting to discuss these issues and what they are doing to solve them. A specific focus is being given to the IP licensing model and the technologies and methods to realize it for broader use.

There will be an exclusive panel discussion on “Strategies for Next Generation Semiconductor IP Management” between industry executives in the heart of Silicon Valley, where there will be ample opportunity to network with solution experts and learn about the best practices and technologies around IP management in the industry. The panel discussion will be moderated by Warren Savage, President and CEO, IPextreme.

Location: Computer History Museum
1401 N Shoreline Blvd, Mountain View, California

Date & Time: Thu, May 15, 2014, 6PM – 8PM

Here is the detailed agenda –

· Critical success factors to consider when building or advancing an IP licensing business model
· Key enablers for IP licensing business model maturation
· Technologies in use to enable successful IP licensing (in or out)

Dassault Systemes has facilitated this event. Attendance is free and by invitation only, so do not forget to register for free. Refreshments and appetizers will be served 🙂




180nm still a big deal
by Don Dingee on 05-07-2014 at 3:00 pm

When I was reading the recent Daniel Payne article “Designing Change Into Semiconductor Techonomics” with commentary on a recent presentation from Aart de Geus of Synopsys, one chart jumped out at me: the most popular process node for new design starts today is 180nm.

Upon mentioning that to a few of my IoT counterparts, they quickly dismissed it as an insignificant point. Any meaningful new design is on an advanced node, right? Dead wrong, as it turns out. Mixed signal designs and microcontrollers thrive on older nodes, and the 180nm observation points to one particular process.

Sometimes, press releases assume way too much. Balance is needed between over-defining common terms or boring a reader with history, and omitting an important point that establishes why a new release is significant. Such is the case with the recent Sidense news qualifying their 1T-OTP NVM macros in TSMC's 180nm BCD 1.8/5.0V Gen 2 process.

I’m an applications guy, not a process guy, and I’m not as close as some of my SemiWiki counterparts to TSMC, so frankly I had to go figure out what this means in the bigger picture. On the surface, we know Sidense is touring the world presenting at various stops on the TSMC Technology Symposium, so news of a TSMC process qualification is timely. But, why this one?

Aart’s chart was also fresh in my mind, so I was primed to take a closer look at what designs are still going on at 180nm. The answer is analog and power management electronics, in many cases with a microcontroller core in the mix to provide localized control and communication such as CAN or LIN for automotive applications.

BCD stands for “bipolar-CMOS-DMOS”, combining three transistor types and voltages into a single mixed signal substrate. ST claims to have invented BCD technology, and obviously others have run with it, including TSMC. The evolution of TSMC 180nm BCD continued in June 2012, when they teamed with Analog Devices on a Gen 2 enhancement with significant benefits:

Performance enhancements achieved with the 0.18 micron, 5-volt process include an order of magnitude noise improvement, a 70 percent lower standby leakage current, a 50 percent improvement in linearity and a 50 percent better capacitor and resistor matching.

Now, the motivation is a lot clearer. Sidense mentioned it took a while for them to achieve Gen 1 qualification in April 2013. By then, new design starts were probably already moving to Gen 2 to capture the above benefits, particularly the 10x noise improvement and a dramatic reduction in leakage current. With the essential requirement of analog trimming, and the usefulness of 1T-OTP NVM in implementing it, the Sidense availability on Gen 2 is welcome news for customers.

The TSMC 180nm BCD Gen 2 process targets high temperature, high reliability automotive applications, thus there is a relatively long qualification cycle. AEC-Q100 Grade 0 carries a temperature rating of -40C to 150C, suitable for “under the hood”. For automotive, data retention targets are over 10 years, so an accelerated requirement of 2000 hours at 150C has to be demonstrated.
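The 2000-hour stress figure can be related to the 10-year field target using the standard Arrhenius acceleration model. The sketch below assumes an activation energy of 1.0 eV and an 85C use temperature; both are illustrative choices, not values given in the article or by TSMC:

```python
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_af(t_use_c, t_stress_c, ea_ev):
    """Acceleration factor AF = exp[(Ea/k) * (1/T_use - 1/T_stress)]."""
    t_use_k = t_use_c + 273.15
    t_stress_k = t_stress_c + 273.15
    return math.exp((ea_ev / K_BOLTZMANN_EV) * (1.0 / t_use_k - 1.0 / t_stress_k))

# Assumed values: Ea = 1.0 eV, 85C use temperature (illustrative only).
af = arrhenius_af(85.0, 150.0, 1.0)
equivalent_years = 2000.0 * af / (24 * 365)
print(f"acceleration factor ~{af:.0f}x, so 2000h at 150C ~ {equivalent_years:.0f} years at 85C")
```

Under these assumptions the 150C stress is accelerated by roughly two orders of magnitude, which is how a 2000-hour bake can demonstrate a retention target of well over 10 years.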

As Analog Devices pointed out, they have worked with TSMC on BCD processes at 600nm, 350nm, and 180nm, and Dialog Semiconductor is entering the fray on 130nm – over time, the sweet spot for design starts in Aart’s presentation will likely move down to 130nm. Digital gets a lot of the attention, but it is an analog world, and mature nodes for mixed signal will play a big part for a very long time.


Intel is Still Missing Mobile!
by Daniel Nenni on 05-07-2014 at 9:00 am

Paul McLellan was on assignment in Hong Kong last week so I attended the Linley Mobile Conference and was not surprised Intel did not present. During the networking sessions I asked more than a dozen people why and the answers were pretty focused on “Intel still does not play well with others” and “Intel’s current mobile offerings would not hold up.” Given that mobile is all about the ecosystem I would go with the former but the latter certainly has merit.

The Linley Group is the leading supplier of independent technology analysis and strategic consulting in semiconductors for a broad range of applications including networking, communications, PCs, servers, mobile, and embedded.

The presentations are now available on the Linley website, there is a quick registration but they are well worth the effort. Here are the presenters:


  • Linley Gwennap, Principal Analyst, The Linley Group
  • Hezi Saar, Product Marketing Manager, Synopsys
  • Ajay Jain, Director Product Marketing, Mobile Products, Rambus
  • Peter Carson, Senior Director Marketing, Qualcomm
  • Peter McGuinness, Director of Multimedia Marketing, Imagination
  • Brian Jeff, Senior Product Manager, ARM
  • Neil Trevett, VP Mobile Ecosystem, Nvidia
  • Eran Briman, VP Marketing, CEVA
  • Chris Rowen, Cadence Fellow, Cadence
  • Jason Sams, Technical Lead, Google
  • Pankaj Kedia, Sr. Director, Qualcomm
  • John Min, Director, Solutions Engineering, Imagination
  • Kurt Shuler, VP of Marketing, Arteris
  • Markus Levy, President, EEMBC
  • Dino Brusco, General Manager, BDTI
  • Bing Yu, Sr. Architect, Manager, Mediatek
  • Drew E Wingard, Chief Technical Officer, Sonics

    The Q&A sessions were pretty good but the networking was VERY good, especially Wednesday evening with the open bar. I just wish I could post some of the discussions but people talk to me “off-the-record” now that I’m infamous.

    I finally met Eran Briman of CEVA after working with them for two years on SemiWiki. He is a very smart guy and quite funny after a couple of beers. His presentation "Always-On DSP for Mobile and Wearable Devices" was very good. I will have Eric Esteve, our IP expert, blog about it in more detail. In fact, that whole session, "Delivering Always-On Capability," was excellent. Tim Saxe gave a nice presentation on "Combining Hardware Coprocessing and Software to Reduce Always-On Power". Tim and I worked together at GateField many years ago so it was nice to reconnect. The other presentation in that session was "Architectural Requirements for Always-On Subsystems" by Chris Rowen of Tensilica fame.

    The big takeaways from the conference and discussions for me were:

  • Intel is not serious about mobile and will probably exit
  • IoT is real but the wearable segment will be the focus for now
  • Always-On is a much bigger challenge than I had imagined
  • Linley puts on a very professional conference focused on technical content
  • I drink too much when the beer is free

    I invited some of the presenters to post blogs about their Linley experience on SemiWiki so you should see those over the next week or so.

    To be clear: SemiWiki is NOT a Wizard of Oz website with a little man behind a curtain making scary noises so people will advertise. SemiWiki is an open forum (crowdsourcing) for semiconductor professionals. If you would like to post your experience from a conference or if you would like someone from SemiWiki to attend your event please let us know (an open bar is not required but it certainly helps).

    Also read: Intel Lost $1B in Mobile Last Quarter




Processors For Internet of Things
    by Paul McLellan on 05-06-2014 at 10:58 pm

    The Internet of Things (IoT) Developers Conference takes place tomorrow and Thursday this week at the Hyatt Regency in Santa Clara. There are 3 keynotes and 3 CTO viewpoints:

    • Driving Heterogeneous System Architectures Everywhere – Amit Rohatgi, Imagination Technologies
    • Solving the Networking Puzzle: From IOT to SDN and Everything in Between – Tareq Bustami, Freescale
    • Business & Design Implications of the Internet of Things – Gareth Noyes, Wind River
    • Designing Change for the Internet of Things – Chris Rommel, VDC Research
    • Ethernet Everywhere: Future-proofing IoT Networks – Martin Nuss, Vitesse Semiconductor
    • Technologies Needed For the IoT’s Always-On Healthcare Revolution – Kaivan Karimi, Atmel


    The types of chips used in IoT products fall into three broad categories. The first is autonomous chips with extensive on-chip processing power. The second is cooperative mode, where processing is shared between the chip and the server. The third is master/slave mode, where the IoT chip doesn't process data itself but just collects it (of course, it still needs some connectivity to reach the backend processing servers).

    Power is the big battleground. IoT products have to live on battery power for a long time. They are not like your cellphone that you charge every night. In some cases they scavenge power, in others they run on batteries that might have to last the entire life of the product. Think how annoying it already is when you have to change the battery in your smoke alarm, and that is only every couple of years.


    The Andes 32-bit MCUs have been designed from the beginning to be extremely low power. They deliver more DMIPS per megahertz than even ARM's lowest power Cortex-M0. One area Andes has focused on is optimizing the memory interface. They have FlashFetch, which minimizes the number of accesses to flash memory, which is power hungry. This works with a pre-fetch buffer that speeds access to non-loop code (and occupies just 2K gates for 64-bit wide memory). Then for loop code there is a TinyCache that holds an entire small loop and again minimizes access to flash memory (and occupies 7K gates for a 128B buffer). FlashFetch doesn't just reduce power; it also delivers a performance boost, since flash memory is also much slower.
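As a rough illustration of why this kind of buffering saves power, here is a toy model of how many instruction fetches still reach flash when loop code is served by a small loop cache and straight-line code by a prefetch buffer. The hit rates, fetch counts, and function names are all hypothetical, not published Andes figures:

```python
def flash_accesses(total_fetches, loop_fraction, loopcache_hit, prefetch_hit):
    """Toy model: number of instruction fetches that still reach flash.

    loop_fraction:  share of fetches inside small loops (served by the loop cache)
    loopcache_hit:  assumed hit rate of the loop cache (illustrative)
    prefetch_hit:   assumed hit rate of the prefetch buffer (illustrative)
    """
    loop_misses = total_fetches * loop_fraction * (1 - loopcache_hit)
    straight_misses = total_fetches * (1 - loop_fraction) * (1 - prefetch_hit)
    return loop_misses + straight_misses

base = 1_000_000
remaining = flash_accesses(base, loop_fraction=0.6, loopcache_hit=0.95, prefetch_hit=0.5)
print(f"flash accesses reduced to {100 * remaining / base:.0f}% of baseline")
```

Since each avoided flash access saves both energy and wait states, even modest hit rates translate into a meaningful power and performance gain.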


    The other major attribute of IoT devices is the need for good security. Nobody wants malicious bad guys to take control of their thermostat or their electricity meter, never mind their front door lock or their heart pacemaker.

    Andes has security on the JTAG port to prevent unauthorized access to the internals of the chip. They also have instruction local memory security between the FlashFetch buffers and the processor itself to protect the code being executed. In addition, they have protection against differential power analysis, a means of extracting encryption keys and the like by analyzing the amount of power used on each instruction cycle. This is especially important for cryptographic devices such as smart cards.

    So, bottom line, Andes has:

    • More performance
    • Lower power
    • Built-in security

    Andes will be presenting at the IoT developers conference on Thursday at 2pm, immediately after lunch. Pre-registration for the conference has closed but you can register at the door.



New Method for Metrology with sub-10 nm Lithography
    by Daniel Nenni on 05-06-2014 at 6:00 pm

    NewPath Research will describe their new method for nanoscale carrier profiling in semiconductors on May 19th at the Annual SEMI Advanced Semiconductor Manufacturing Conference (ASMC) in Saratoga Springs, NY. The new method is intended to fill a gap identified in the semiconductor industry roadmaps by providing metrology tools with sufficiently fine resolution for use with the new sub-10 nm lithography.

    In the new method, a mode-locked ultrafast laser is focused on the tunneling junction of a scanning tunneling microscope (STM) to generate a periodic sequence of femtosecond pulses of electrons that is superimposed on the DC tunneling current. In the frequency domain, the sequence of electron pulses is equivalent to hundreds of microwave harmonics at integer multiples of the pulse repetition frequency of the laser. These harmonics set the present state-of-the-art for a narrow-linewidth microwave source, thus enabling hyperspectral measurements to be made with exceptionally low noise. The electromagnetic field and photon processes within each laser pulse generate each electron pulse by optical rectification, but all subsequent photon and phonon processes decay during the time between consecutive laser pulses, so they have no direct effect on the measured microwave harmonics. Each electron pulse forms a nanoscale spot of electrical charge at the surface of the semiconductor sample in the STM, and hyperspectral measurements of the microwave power are made to determine the spreading impedance as the charge moves away from this spot, in order to calculate the concentration of carriers in the semiconductor.

    The new method may be understood by relating it to the present state-of-the-art technique of Scanning Spreading Resistance Microscopy (SSRM). In SSRM, nanometer-size probes made of electrically conductive diamond are pressed into the surface of a semiconductor and the spreading resistance is measured in order to calculate the carrier concentration. However, our new method is non-destructive and has the potential of making rapid, seamless measurements across large areas of the semiconductor. Furthermore, instead of using fixed probes of a specific size, the spot may be adjusted from 0.1 nm to 1 nm in radius by varying the tip-sample separation, in order to provide 3-D characterization of the sample.
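The arithmetic behind this kind of spreading-resistance profiling can be sketched with the classical relation R = ρ/(4a) for a circular contact of radius a, combined with n = 1/(qμρ) for a single carrier type. The numbers below are purely illustrative assumptions (the mobility value, spot radius, and measured resistance are not data from NewPath Research):

```python
Q_ELECTRON = 1.602e-19   # elementary charge, C
MU_N_SI = 0.135          # electron mobility in lightly doped Si, m^2/(V*s) (approximate)

def resistivity_from_spreading(r_spread_ohm, radius_m):
    """Classical spreading-resistance relation R = rho / (4a) for a
    circular contact of radius a, solved for resistivity rho."""
    return 4.0 * radius_m * r_spread_ohm

def carrier_concentration(rho_ohm_m, mobility=MU_N_SI):
    """n = 1 / (q * mu * rho), assuming conduction by a single carrier type."""
    return 1.0 / (Q_ELECTRON * mobility * rho_ohm_m)

# Illustrative numbers: a 1 nm spot and a 100 kOhm measured spreading resistance.
rho = resistivity_from_spreading(1.0e5, 1.0e-9)
n = carrier_concentration(rho)
print(f"rho ~ {rho:.1e} ohm*m, n ~ {n * 1e-6:.1e} cm^-3")
```

With these assumed inputs the model lands in the 1e17 cm^-3 range, a plausible doping level, which shows how a nanometer-scale spot plus an impedance measurement yields a local carrier concentration.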


    Prototype Scanning Tunneling Microscope head.

    The figure shows a prototype STM head developed to provide efficient and quantitative coupling of microwave harmonics from the tunneling junction. It is unique in that all DC and high-frequency sources and measurement apparatus are connected at one end of a section of miniature semi-rigid coaxial cable while all connections to the tip and sample are made in an electrically-small loop at the opposite end.

    The new method was developed at NewPath Research L.L.C. in collaboration with Los Alamos National Laboratory and the University of Utah, with one patent (U.S. 8,601,607) issued and several others pending.

    Celebrating 25 years of manufacturing excellence, the SEMI Advanced Semiconductor Manufacturing Conference continues to fill a critical need in our industry. Join conference co-chairs Israel Ne’eman, Applied Materials and Oliver Patterson, IBM Microelectronics, and industry professionals from around the globe during 3 days of networking, learning and knowledge-sharing on new and best-method semiconductor manufacturing practices and concepts. Beginning May 18, with a welcome reception, ASMC 2014 features over 90 technical presentations, keynotes and tutorials and networking opportunities over the following days.



The Matrix, your ultimate OPC
    by Beth Martin on 05-06-2014 at 12:47 pm

    One of the many consequences of shrinking process nodes is that traditional OPC can no longer achieve good pattern fidelity with reasonable turn-around-time. But there is a solution; we made it ourselves and call it matrix OPC.

    First, let's explore the problems with traditional optical proximity correction (OPC) when applied to advanced node layouts. During OPC, edges in a layout are broken into fragments, and each fragment is iteratively adjusted by multiplying its edge placement error (EPE) with a carefully selected or calculated feedback factor. The traditional OPC algorithms assume a purely one-to-one correspondence between each individual polygon fragment on a mask and its associated EPE on the wafer. That means that moving one fragment on the mask only impacts the associated EPE, and conversely, that the incremental displacement of a fragment on a mask can be estimated by its associated EPE during the OPC iterations. This approach of tuning the layout based on single-fragment EPE can no longer handle the stronger fragment-to-fragment interactions seen starting at the 28nm technology node.

    An example of such a problem is shown in Figure 1. Traditional OPC is clueless when it comes to converging on a solution when there is strong cross-coupling between neighboring fragments. In the figure, the grey boxes are the wafer target, the rectangles are the mask shapes, and the circles are simulations of the target shapes as they'll actually be manufactured. The red lines are the contour shapes and also the mask shapes computed from a traditional OPC recipe, and the green lines are also contour shapes, but with a perturbation applied to one edge on the mask as a test. The left picture shows that changing the upper edge of the lower rectangle on the mask (red line) only impacts the lower contact's contour, but the impact is on all four sides. The right picture shows that a small change in the bottom edge of the upper rectangle impacts the image of both contacts, but mostly the bottom one. A perfect solution would match the simulated contours to the grey target squares. In this case, conventional OPC is not capable of finding such a mask shape.

    So how do you get OPC to recognize the effect neighbor fragments have on each other? Glad you asked, because we think the answer is matrix OPC. Matrix OPC incorporates the influence of neighboring fragments into the feedback control of fragment movements for full-chip OPC. We've been working on this technology for roughly 14 years, with the first publication at SPIE in 2002: "Model-based OPC using the MEEF matrix," by Nick Cobb and Yuri Granik. Another paper to read on the topic is "A Feasible Model-Based OPC Algorithm Using Jacobian Matrix of Intensity Distribution Functions," by Ye Chen, Kechih Wu, Zheng Shi and Xiaolang Yan. We've been perfecting the technique ever since, and it is now available in the Calibre® tools. My colleague, Junjiang Lei, and I put this blog together as an introduction to the fruits of our long labor.

    Calibre Matrix OPC is targeted at the 28nm, 20nm, 14nm, and 10nm technology nodes. It is edge-based, full-chip, enhanced OPC, and it scales to large numbers of CPUs in the same way traditional OPC does, with comparable runtime. Its simulation and computation use the compact forms of the Calibre 3D mask models and resist models. It is compatible with and augments the existing Calibre OPC/RET techniques, including tagging, retargeting, process-window OPC, and multi-patterning OPC. We also made sure that Matrix OPC can be combined with traditional OPC in the same recipe.

    Matrix OPC works because it changes each EPE on the wafer by calculating the movements of all polygon fragments on the mask collectively. The correlations between fragments are captured in a mask error enhancement factor (MEEF) matrix. [Unfortunately, the MEEF matrix is almost always ill-posed, large in size, and sparsely populated. The condition of such a matrix demands extra care in applications, which we discussed in a SPIE paper this year, "Model-based OPC using the MEEF matrix II," Junjiang Lei, Le Hong, George Lippincott, James Word, Proc. of SPIE Vol. 9052 (2014).]
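The difference between single-fragment feedback and the coupled MEEF-matrix formulation can be sketched in a few lines of linear algebra. The toy example below (illustrative MEEF values and damping factor, not Calibre internals) contrasts a scalar per-fragment update with a damped least-squares solve of the coupled system:

```python
import numpy as np

def scalar_opc_step(epe, feedback=0.5):
    """Traditional OPC: each fragment moves by its own EPE times a
    scalar feedback factor, ignoring fragment-to-fragment coupling."""
    return -feedback * epe

def matrix_opc_step(meef, epe, damping=1e-3):
    """Matrix-style OPC: solve MEEF @ dx = -epe for all fragments jointly.
    Tikhonov damping guards against the ill-posed MEEF matrix."""
    n = meef.shape[0]
    lhs = meef.T @ meef + damping * np.eye(n)
    return np.linalg.solve(lhs, -meef.T @ epe)

# Two strongly cross-coupled fragments (illustrative MEEF values):
meef = np.array([[1.0, 0.8],
                 [0.8, 1.0]])
epe = np.array([2.0, -1.0])

dx_scalar = scalar_opc_step(epe)
dx_matrix = matrix_opc_step(meef, epe)
# Compare the residual EPE left after one update of each kind:
print("scalar residual:", epe + meef @ dx_scalar)
print("matrix residual:", epe + meef @ dx_matrix)
```

The joint solve drives the residual EPE nearly to zero in one step precisely because it accounts for the off-diagonal coupling, while the scalar update leaves a large residual; the damping term is one simple way to handle the ill-posedness the text describes.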

    We demonstrated the quality and performance of matrix OPC in benchmark tests of 14nm-28nm designs with known difficult hotspots. Figure 2 shows a few of the cases, which are either industrial full-chip designs or clips from full-chip layouts that were large enough to project the full chip quality confidently. On real customer designs, matrix OPC improves the OPC results in terms of the verification standards defined by customers’ OPC verification recipes. In each case, the matrix OPC recipe results in a much lower relative error number than the baseline OPC.

    In summary, the Calibre team is pleased to offer matrix OPC to reduce the severity of the errors in the OPC verification results. This new functionality is fully compatible with the existing Calibre OPC techniques and solutions, including, but not limited to, the 3D mask model, the CM1 resist model, multi-patterning OPC, tagging functions, retargeting techniques, and process window OPC functions. In a given recipe, the matrix OPC iteration and the traditional OPC iteration can be used in a mixed fashion. We’ve also recently reduced the runtime overhead required for Matrix OPC to nearly 0.

    Le Hong (product engineer) and Junjiang Lei (technical program manager), Mentor Graphics


IC Power Noise Reliability for FinFET Designs
    by Daniel Payne on 05-06-2014 at 9:07 am

    Reliability for ICs is a big deal because the last thing that you want to do is ship a new part only to find out later in the field that there are failures not being caught by testing. I’ve already had two consumer products fail this year because of probable reliability issues: My MacBook Pro with 16GB of RAM started rebooting caused by bad RAM chips after only 2 years of use, and my iPad 3 started rebooting multiple times per day with panics after 2 years of use. Ideally, you want to know during the design process that all sources of IC reliability are addressed, simulated and verified.

    I had the chance last week to speak with Aveek Sarkar from ANSYS by phone about the topic of IC power noise reliability for FinFET designs. Aveek joined Apache back in 2003 which was then bought by ANSYS in 2011, and he is a VP of product applications. We had previously talked about how Apache used Distributed Multi Processing (DMP) to speed up full chip IR drop analysis in July 2013.

    The RedHawk product has been used for at least a decade now to perform power, noise, and reliability analysis of SoCs. What's new is that the use of FinFET technology and higher-capacity chips has created challenges that required updates to RedHawk. The big three drivers for such analysis tools have always been speed, capacity, and accuracy.

    FinFET transistors have attracted much attention because they offer improved performance, reduced power, and higher device density. FinFET circuit delays even show lower variation with VDD changes than bulk CMOS transistors do. On the other hand, FinFET devices also have reduced noise margins, EM and ESD failures are more likely, and higher temperatures impact reliability.

    With FinFET transistors the power noise is increasing, so it's even more critical that you perform accurate analysis during the design phase. Idsat values are higher, and reducing Vdd to 700mV shrinks the noise margin even further, so in general FinFET devices tend to be noisier. The two design methods to deal with this are to reduce noise levels or to add more decoupling capacitors. A power noise analysis tool must be run on the entire chip, package, and board together to get accurate answers, instead of using a simplified approach that decouples these three design levels and may leave you with wrong answers.

    The latest release of the RedHawk tool was just announced and it’s called RedHawk 2014, and here’s what to expect:

    • About a 3X speed-up using the DMP approach
    • 2.5X reduction in peak RAM usage
    • Capacity to handle multi-billion transistor designs

    It’s recommended to run the package and chip-level voltage drop analysis and optimization concurrently, and the new tool option that enables this is called RedHawk-CPA. If you run voltage drop analysis at the chip level and assume that the package model can be simplified as a few lumped elements, then the analysis numbers returned are going to be overly-optimistic which will create a false sense of security when in fact you may have failures.

    Electromigration (EM) is another reliability concern for ICs at 20nm and smaller nodes, and EM rules affect wire widths, via sizes, and even layer choices. I saw my first EM failures at Intel while designing DRAMs in 5um NMOS technology, so EM concerns have been around for decades. RedHawk provides EM analysis and helps pinpoint EM violations by showing a colorized layout view where red areas are failing, letting the IC designer take corrective action and re-run the EM analysis until compliance is reached.

    Finally, Electro-Static Discharge (ESD) is another reliability issue that all ICs must be protected against, typically by adding diodes to prevent ESD damage. The PathFinder tool helps you find ESD violations, then it’s up to you to apply a fix and re-simulate. Here’s an actual photo of an ESD failure provided by Samsung at the 2013 International ESD Workshop; next to it on the left is how the PathFinder tool identified this exact area of the layout as being most susceptible to ESD failure.

    TSMC has already certified the RedHawk 2014 tool for its FinFET process (16N v1.0), covering extraction, rule handling, IR drop and EM analysis.

    Summary

    ANSYS has a lot of credibility and momentum in this area of IC power, noise and reliability analysis, now with FinFET technology support. Existing RedHawk users receive the RedHawk 2014 release as part of maintenance, while the RedHawk-CPA tool is a new, optional add-on. I plan on visiting ANSYS at DAC to learn a bit more, so expect a blog from my San Francisco trip next month.



    Flexible Integration System for IPs into SoC

    by Pawan Fangaria on 05-06-2014 at 7:30 am

    The number of IPs, with growing complexity and heterogeneity, to be integrated into a single SoC is ever increasing, now counting into the hundreds. It’s not possible to have them all available at once in a single repository for the integration engineers to assemble and integrate into the SoC. The reality is that the IPs are scattered across multiple locations with different third-party vendors, in different types of repositories and with different lifecycles. Also, the SoC integration team is generally very small, with one or two top-level design experts. In such a scenario, intelligent and robust automated systems are needed to integrate the IPs, from wherever they are, into the SoC without compromising IP rights and protection.

    Looking at the ENOVIA DesignSync Data Manager from Dassault Systemes, it has everything needed for IP integration into an SoC. However, in order to extend its reach to IPs residing in repositories external to the DesignSync environment, such as CVS, Git or Subversion, Dassault has created an innovative solution by extending DesignSync with Foreign Modules, called DSFM.

    In a typical scenario, the SoC top module can have hierarchical references to other modules within the DesignSync workspace as well as to external modules that live in the native CM environment (such as SVN, ClearCase, CVS, etc.) of the third-party vendors. How does the DSFM system make it possible to seamlessly collate these IPs from different CM systems? That’s the innovative part. To establish a connection with the external CM system, the External Module uses a Module Hierarchy Reference (href) URL, which is parsed and interpreted by the client populate command to call the appropriate foreign-module-type function (called a handler) that is responsible for further parsing and generating an appropriate external CM system command.

    A top-level design vaulted within a DesignSync server can have IPs in the same server, another DesignSync server, or a server external to DesignSync. The href URL used by the External Module functionality of DesignSync to connect to an IP on an external server differs from the usual href URL in that it has no host and port, which indicates that the URL should be processed on the client side. It takes the form sync:///ExternalModule/&lt;foreign_type&gt;/&lt;object_data&gt;, where object_data includes the information needed to identify the target IP to be populated. For example –

    sync:///ExternalModule/SVN/www.mycompany.com/81/svn/repositoryname/tags/release-1.0
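    A minimal sketch of how such an href might be split into its foreign type and object data on the client side (illustrative Python only, not Dassault's implementation; the real handlers are customer-written Tcl):

```python
# Hypothetical parser for an External Module href of the form
#   sync:///ExternalModule/<foreign_type>/<object_data>
def parse_external_href(href):
    """Split an External Module href into (foreign_type, object_data)."""
    prefix = "sync:///ExternalModule/"
    if not href.startswith(prefix):
        raise ValueError("not an External Module href")
    # The first path segment names the foreign CM type; the rest identifies
    # the target IP for that type's handler to interpret.
    foreign_type, _, object_data = href[len(prefix):].partition("/")
    return foreign_type, object_data

kind, data = parse_external_href(
    "sync:///ExternalModule/SVN/www.mycompany.com/81/svn/repositoryname/tags/release-1.0"
)
print(kind)  # SVN
print(data)  # www.mycompany.com/81/svn/repositoryname/tags/release-1.0
```

    Note that the empty host and port (the `///`) is exactly what signals client-side processing, as described above.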

    The handler on the client side is a Tcl (Tool Command Language) script developed by the customer, specific to the external repository type and the IP reuse methodology. This code is installed into the DesignSync client tools installation, or added to the Development Settings Templates when using Enterprise DesignSync Administration. The handler code must include the External Module package, establish the namespace for the functions, and define the procedure for the CM system and the DesignSync command that will trigger the handler.

    The handler produces and executes a shell-level command (specific to the OS) that brings the data from the external CM system into the user workspace. Other DesignSync commands understand the existence of the External Module href and act accordingly: e.g. ‘add’ will not add files within external module base directories to modules; ‘ci’ recognizes and skips external module base directories; ‘rmmod’ removes external module metadata but leaves the external module base directory and data intact. The whole external module data set is kept populated under the designated relative path for that module.
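    The dispatch described above can be sketched as a registry of handlers keyed by foreign type, each building the fetch command for its CM system (a hypothetical Python sketch; in DesignSync the handlers are Tcl procedures, and the command strings here are illustrative, not the product's):

```python
# Hypothetical handler registry keyed by foreign type.
HANDLERS = {}

def handler(foreign_type):
    """Decorator that registers a command builder for one CM system."""
    def register(fn):
        HANDLERS[foreign_type] = fn
        return fn
    return register

@handler("SVN")
def svn_fetch(object_data, workspace_dir):
    # object_data identifies the repository path, e.g. "host/svn/repo/tags/v1"
    return f"svn checkout https://{object_data} {workspace_dir}"

@handler("GIT")
def git_fetch(object_data, workspace_dir):
    return f"git clone https://{object_data} {workspace_dir}"

def populate(foreign_type, object_data, workspace_dir):
    """Dispatch to the registered handler and return its shell command."""
    return HANDLERS[foreign_type](object_data, workspace_dir)

print(populate("SVN", "host.example.com/svn/repo/tags/release-1.0",
               "ip/release-1.0"))
```

    The real populate flow would then execute the returned command to fill the module's designated relative path in the workspace.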

    The overall External Module architecture must adhere to corporate IP repository standards and methodologies, which can vary from customer to customer. Considerable attention must be given to the registration process, legal requirements, geographical restrictions, privacy of vital IP information and so on. The system should accommodate almost all corporate IP requirements.

    The system gives the design integrator very high consistency, performance and control over the use of IPs within a complex semiconductor design. At the same time it gives IP designers the flexibility to stay with their choice of CM systems and design cycles to deliver good-quality IPs.

    This is a step in the right direction by Dassault towards its Design Engineering strategy, which focuses on IP integration, design collaboration and verification. The methodology provides robust management of all IPs (including external IPs) in the SoC hierarchy, preventing any wrong or imperfect IP from entering the system, thus reducing verification time and re-spins.

    A detailed description of the DesignSync External Modules system can be found in a whitepaper authored by Dassault Systemes. The whitepaper includes good code-snippet examples of a few procedures, such as the interaction of an External Module handler with an SVN server.

