Improving Design Practices for an Image Sensor IDM
by klujan on 05-07-2013 at 8:30 pm

With nearly twenty-five years in business, Tanner EDA Application Engineers have seen a wide range of support requests. One consistent topic area is design data management and design reuse. In one recent instance, our customer, an IDM who produces imaging sensors for infrared vision systems, called on Tanner's AE team for onsite consulting to help drive design consistency and to create libraries of common cells that could be shared. Tanner's AEs had previously delivered tool training for several of their design groups, but business growth led the customer to add several new designers who were less experienced with these practices.

Working onsite with the designers, I learned about the common structures they needed to create as well as the inconsistencies they experienced in the manufacturing and snap grid settings; this led us to implement a server-based Technology Library for setup and common cell structures. Because image sensor design involves frequent redesign and resizing of common cells, we also implemented a set of T-Cell templates that designers could use to create structures quickly and efficiently. (In addition to the first-order impact on design time, this approach also greatly improved quality and reduced rework.) The template T-Cells were added to the server-based Technology Library, allowing shared access for all designers. With v16 of Tanner's L-Edit, the designers make use of the multi-user OpenAccess environment, allowing multiple designers to work on their respective cells within a common design. The productivity gains from this consulting engagement have already delivered tangible ROI to the customer, and they have scheduled follow-on consulting to address challenges in other parts of their workflow.
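To make the idea concrete, here is a conceptual Python sketch of what a parameterized template cell does (purely illustrative; this is not the actual L-Edit T-Cell or UPI API, and all names are hypothetical): the repetitive geometry is generated from a handful of parameters, so resizing a structure becomes a parameter change rather than a manual layout edit.

```python
# Conceptual sketch only: hypothetical names, not the L-Edit T-Cell/UPI API.
from dataclasses import dataclass

@dataclass
class Rect:
    layer: str
    x: float       # lower-left x (um)
    y: float       # lower-left y (um)
    width: float   # um
    height: float  # um

def pixel_array_template(rows: int, cols: int, pitch: float, diode_size: float):
    """Generate photodiode rectangles on a regular pitch from a few parameters."""
    rects = []
    for r in range(rows):
        for c in range(cols):
            rects.append(Rect("DIODE", c * pitch, r * pitch, diode_size, diode_size))
    return rects

# Resizing the array is now a parameter change, not a redraw:
rects = pixel_array_template(rows=4, cols=4, pitch=15.0, diode_size=12.0)
print(len(rects), "rectangles generated")   # 16 rectangles generated
```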

Tanner EDA will exhibit at DAC 2013, June 2-4, in booth 2442 and in the ARM Connected Community® (CC) Pavilion, #921. The entire analog and mixed-signal design suite will be demonstrated:

  • Front-end design tools for schematic capture, analog SPICE and FastSPICE simulation, digital simulation, transient noise analysis, waveform analysis,
  • Back-end tools, including analog layout, SDL, routing and layout accelerators as well as static timing and synthesis, and
  • Physical verification, including DRC and LVS.

Visit www.tannereda.com to learn more. DAC demo sign-ups are HERE.

Tanner EDA provides a complete line of software solutions that drive innovation for the design, layout and verification of analog and mixed-signal (A/MS) integrated circuits (ICs) and MEMS. Customers are creating breakthrough applications in areas such as power management, displays and imaging, automotive, consumer electronics, life sciences, and RF devices. A low learning curve, high interoperability, and a powerful user interface improve design team productivity and enable a low total cost of ownership (TCO). Capability and performance are matched by low support requirements and high support capability, as well as an ecosystem of partners that bring advanced capabilities to A/MS designs.

Founded in 1988, Tanner EDA delivers solutions with just the right mixture of features, functionality and usability. The company has shipped over 33,000 licenses of its software to more than 5,000 customers in 67 countries.



How To Design a TSMC 20nm Chip with Cadence Tools
by Paul McLellan on 05-07-2013 at 8:10 pm

Every process node these days has a new “gotcha” that designers need to be aware of. In some ways this has always been the case, but the changes used to be gradual; now each process node brings something discontinuously different. At 20nm the big change is double patterning. At 14/16nm it is FinFET.

Rahul Deokar and John Stabenow of Cadence and Jason Chen from TSMC will present, “20nm Design Methodology: A Completely Validated Solution for Designing to the TSMC 20nm Process Using Cadence Encounter, Virtuoso, and Signoff tools.” Well, I think my title gets to the point a bit quicker!


Double patterning has been forced on us by limitations in lithography. We still use 193nm light even though we are now drawing features that are 20nm (actually there isn’t really anything on a 20nm chip that measures 20nm). If we try to draw all the polygons on the lower layers of the process, the features are too close to print correctly. So instead we have to separate them onto two separate masks, so that the polygons in effect alternate. Not all layout can be split in this way (the splitting is usually called coloring, since it is basically a graph-coloring problem), so routers and designers need to be careful not to create uncolorable layout.
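Conceptually, coloring treats each polygon as a node in a graph and connects two nodes whenever the polygons are closer than the double-patterning spacing threshold; assigning masks is then 2-coloring that graph, and an odd cycle of too-close polygons is exactly what makes a layout uncolorable. A minimal sketch of that check (illustrative only, not how any particular DPT engine is implemented):

```python
from collections import deque

def two_color(num_polygons, conflict_edges):
    """Assign each polygon to mask 0 or 1; return None if an odd cycle makes it uncolorable."""
    adj = [[] for _ in range(num_polygons)]
    for a, b in conflict_edges:              # edge = two polygons closer than the DP spacing rule
        adj[a].append(b)
        adj[b].append(a)
    color = [None] * num_polygons
    for start in range(num_polygons):
        if color[start] is not None:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if color[v] is None:
                    color[v] = 1 - color[u]  # alternate masks across each conflict edge
                    queue.append(v)
                elif color[v] == color[u]:
                    return None              # odd-cycle conflict: uncolorable layout
    return color

# Three mutually too-close polygons form an odd (length-3) cycle, hence no legal coloring:
print(two_color(3, [(0, 1), (1, 2), (2, 0)]))   # -> None
print(two_color(3, [(0, 1), (1, 2)]))           # -> [0, 1, 0]
```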

Sometimes (for analog, say) the designer wants to color the polygons manually. Why would they do that? At this process node, the two masks are not self-aligned. They are aligned by the alignment marks on the wafer that the stepper detects, just like any other mask (actually reticle), but the two polygon layers have some slop in their alignment. This means that parasitics are controlled much more tightly between polygons on the same mask (which are self-aligned to each other) than between polygons on different masks (which are not).

There are self-aligned double patterning techniques. They use a sacrificial spacer (where both sides of the spacer eventually get whatever is being created on that layer), but they are more expensive. If you want to get a few chapters ahead, we will need to use these approaches to build transistors at the 10nm node (and maybe the lower levels of interconnect), but at 20nm we do not. I’m not sure about 16nm.

The layout rules for 20nm are very much more restrictive, even without worrying about double patterning. There is a lot less flexibility about what can go where, and there are weird features like the dummy gates that we started to see at 28nm (an extra, electrically insignificant poly stripe is required at the end of a gate to ensure that the gate prints and behaves correctly). We also have layout dependent effects (LDE), where transistor performance at the circuit level depends on how close the transistor is to other features on the die, especially well boundaries. There are even design rules that depend on electrical details. There is also local interconnect, which appears between the transistors and the lowest level of true metal, with its own set of rules.


A little more detail on what you will learn:

  • How in-design double patterning technology (DPT) and design rule checking (DRC) can improve your productivity
  • How both colored and colorless methodologies are supported, and data is efficiently managed in front-to-back design flows
  • How local interconnect layers, SAMEMASK rules, and automated odd-cycle loop prevention are supported
  • How mask-shift modeling with multi-value SPEF is supported for extraction, power, and timing signoff.

The webinar is being given twice on May 23rd: at 9am Pacific (early evening in Europe) and at 6:30pm Pacific (morning in Asia). Details here. Registration here.


Bangers: the Best Beer Bar in Austin; Live Oak Brewing, the Best Beer in Austin
by Paul McLellan on 05-07-2013 at 8:10 pm

OK, enough with all this semiconductor geeky stuff. The important thing about DAC is…where to go to eat to avoid standard issue convention center chicken Caesar salad.

And a 7 minute walk from the convention center is Bangers Sausage House and Beer Garden, where you can have the $8 “executive” lunch consisting of a beer and a sausage. That’s probably cheaper than that Caesar salad. But there is a catch: they only serve lunch Thursday to Sunday, as if to deliberately annoy many DAC attendees.


How many beers do they have? That would be 103 different taps. The full list is above, click on it if you want an almost readable version. As for those sausages, they are supplied with meat from all over the area and as a result have the biggest sausage selection in Austin.

So if you want to experience Bangers, go on Sunday for lunch or go one evening, but not Monday night because that is the big 50th Anniversary DAC Celebration Party at the home of Austin City Limits.

Bangers is at 79 & 81 Rainey Street. Its website is here.


OK. So Bangers is great, but if you are an exhibitor and you actually want a beer during the exhibit, where can you go?

Well, how about Live Oak Brewing Company? How about experiencing the Live Oak Hefeweizen, the #1 ranked beer on the Beer Advocate’s list of Top Beers in the Southwest? Several more of their beers make the top 50: Old Treehugger Barley Wine, Primus Weizenbock, Pilz, Liberation Ale, Roggenbier and Oaktoberfest. You didn’t really need to get back for afternoon booth duty anyway, did you?

Live Oak Brewing Company is at 3301 East 5th Street. Its website is here. Actually, I just realized that this is just the brewery. To taste the beer (other than on a brewery tour), it is available all over Austin, maybe even near the convention center, early in the week when Bangers is not open until the evening.

Anyone from Austin or with local knowledge, please add more suggestions in the comments.


Wireless Algorithm Validation from System to RTL to Test
by Daniel Nenni on 05-07-2013 at 8:05 pm

This year’s #50DAC will be chock-full of technical content, because that is what attracts the masses of semiconductor professionals, like moths to a flame, or like me to a Fry’s Electronics store. Interesting note: I went to high school with Randy Fry. His dad started the Fry’s supermarket chain, which he sold before going into electronics. That’s why Fry’s Electronics has a layout and business model similar to a grocery store.

Agilent Technologies and Aldec will co-host a technical session on how to validate a digital signal processing algorithm at both floating-point and fixed-point levels. Attendees will gain insight into a cross-domain approach to the traditional FPGA design flow and learn how to validate FPGA designs for leading-edge wireless and radar systems with a system-level simulation tool integrated into the traditional hardware design flow.

Attendees will gain valuable, practical skills with the following tools and equipment:

  • Agilent SystemVue as a programming environment to simulate and verify system performance prior to realizing a dedicated hardware implementation.
  • Co-simulation interface with Aldec Riviera-PRO for validation of functional blocks described in the SystemVue hardware design library.
  • HIL (Hardware in the Loop) to accelerate both design validation and test coverage, saving additional development time.

Date: Wednesday, June 5, 2013
Time: 2:00 PM – 4:00 PM
Location: 17AB
Topic Area: System Level Design and Communication
Speakers:
Dmitry Melnik – Aldec, Inc., Henderson, NV
Sangkyo Shin – Agilent Technologies, Inc., Santa Rosa, CA

The Design Automation Conference (DAC) is recognized as the premier event for the design of electronic circuits and systems, electronic design automation (EDA) and embedded systems and software (ESS).

Attendees come from a diverse worldwide community of more than 1,000 organizations each year, represented by system designers and architects, logic and circuit designers, validation engineers, CAD managers, senior managers and executives, and researchers and academics from leading universities.

Close to 300 technical presentations and sessions, selected by a committee of electronic design experts, offer information on recent developments and trends, management practices, and new products, methodologies and technologies.

A highlight of DAC is its exhibition and suite area with approximately 200 of the leading and emerging EDA, silicon, intellectual property (IP), embedded systems and design services providers.

The conference is sponsored by the Association for Computing Machinery (ACM), the Electronic Design Automation Consortium (EDA Consortium), and the Institute of Electrical and Electronics Engineers (IEEE), and is supported by ACM’s Special Interest Group on Design Automation (SIGDA).

Some of the highlights of this year’s DAC include:

  • Keynotes by industry leaders/visionaries
  • Technical Program (panels, special sessions, Designer Track)
  • Forums, tutorials, and workshops
  • Management Day
  • Exhibition Floor
  • Colocated Conferences
  • Awards for professionals and students

About Aldec
Aldec Inc., headquartered in Henderson, Nevada, is an industry leader in Electronic Design Verification and offers a patented technology suite including: RTL Design, RTL Simulators, Hardware-Assisted Verification, SoC and ASIC Prototyping, Design Rule Checking, IP Cores, DO-254 Functional Verification and Military/Aerospace solutions. www.aldec.com

See Aldec DAC demos HERE.



The Capital Lite Semiconductor Model
by Paul McLellan on 05-07-2013 at 8:05 pm

For a couple of years the GSA has had a working group looking at the funding of semiconductor investment. There is a general feeling, which I share, that it is hard to get a fabless semiconductor company off the ground (nobody would dream of trying to create one with a fab these days) due to the size of the investment and the relatively long time to return it. In an era when software companies can be created, ramped and sold in a year or so, the time and cost to build a chip and ramp it to volume makes the whole deal unattractive to conventional venture capitalists.

Some of this is driven by what is sometimes called Moore’s Second Law: as we go down through process generations, the capital required to develop new products increases by 27% from one generation to the next. So in 2000 a semiconductor company could bring a product to market for $10M, but now it takes $30M just to get to samples. To make things worse, as ASPs have come down (for the same functionality), there is pressure to ship in very high volume, which means designing complex SoCs that serve multiple markets and contain a lot of semiconductor IP (SIP).
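As a back-of-the-envelope check on how 27% per generation compounds (my own arithmetic, not figures from the GSA paper):

```python
# Illustrative only: 27% cost growth compounded per process generation.
cost = 10.0  # $M to bring a product to market, circa 2000
for generation in range(1, 6):
    cost *= 1.27
    print(f"after {generation} generation(s): ${cost:.1f}M")
# After four to five generations the original ~$10M has roughly tripled,
# consistent with the ~$30M figure quoted above.
```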

The Capital Lite Working Group has produced a discussion paper called The Capital Lite Semiconductor Model: Revitalizing Semiconductor Startup Investment which looks at this problem and proposes a solution.

One of the largest costs in a chip design is adding non-differentiated IP, the cost of which can exceed pretty much everything else put together. So one challenge is to get this cost down by partnering a large semiconductor company ($100Ms) with the smaller company to both allow undifferentiated IP to be shared and to create a possible new product line for the larger company.


Part 1 of the capital-lite deal structure is Equity for IP (EFI): LargeCo gets the opportunity to further monetize its IP, and StartupCo avoids the cost of all that design. Part 2 is the Triggered Royalty License (TRL), which gives LargeCo more opportunity for revenue and, for StartupCo, balances independence with capital efficiency. The royalty payments are deferred compared to paying up front for IP, so they only become due once there is revenue available to pay them. Finally, Equity Finance With Extra Preference (EFEP) creates a spin-in opportunity for LargeCo to absorb StartupCo once it is successful without, in some sense, paying to buy back all its own IP and money.

There are a lot of details to make this work correctly so that, for example, StartupCo’s R&D expenses do not get consolidated into LargeCo’s financials. An example is EzChip, a publicly traded capital-lite semiconductor company that has partnered with Marvell Technology to reduce its capital intensity by sourcing IP and supply chain operations.

An example of a company creating a whole capital-lite ecosystem is SK Telecom. They have created an Innovation Center with Innopartners. It is a cross between the capital-lite model described above (which is largely driven by StartupCo’s needs) and something more directed by LargeCo. Selected entrepreneurs are given access to, and routine collaboration with, the top researchers at the strategic partner, initial funding and guidance through the early stages, and office and lab space at the partner.

The strategic partner has the opportunity to acquire a tailor-made startup: the partner describes the product required, and Innopartners seeks out qualified entrepreneurs to create the startup and deliver the product. Alternatively, entrepreneurs with an idea can present it to Innopartners in a more traditional manner, and Innopartners will then seek a strategic partner to make the program viable.


There are various financial models, ranging from a spin-in path where the milestones and price are more or less negotiated up front, to a traditional VC model with no exit valuation predetermined, to a standalone path where the startup never really becomes completely independent nor is it absorbed.

For more information see the GSA Capital-Lite Working Group Discussion document here. See the SK Telecom White Paper here.


Cell-Aware Test Seminar
by Beth Martin on 05-07-2013 at 8:05 pm

You may have heard about cell-aware testing. It’s a transistor-level test (ATPG) methodology that is quickly becoming a hot topic. If you are involved in DFT and are looking for better quality and reliability, you should definitely know about cell-aware testing.


And lucky you, on May 16, 2013, you can attend a free seminar on cell-aware test at Mentor Graphics. It runs from 10:30am to 1:30pm, which means lunch is free too. Go here for details and registration.

Basically, cell-aware test lets you detect faults that occur within standard cells and are often missed by the current fault models. Mentor and AMD published cell-aware production test results from a 32nm processor last fall, which offer a compelling value proposition (spoiler alert: 885 DPM reduction. Yowza.)

Do you plan to use FinFETs any time soon? I hear that cell-aware testing will likely be a normal part of the DFT flow. This recent article talks about FinFET defect coverage with the cell-aware methodology.

So, read the technical paper and the article, Google it, then sign up for the seminar and bring your questions.


Global Foundries Does DAC
by Paul McLellan on 05-07-2013 at 8:05 pm

Global Foundries will be at DAC in booth 1314. There will be 6 pods there demonstrating:

  • Advanced Technology: 28nm ready and ramping, and next is 20LPM and 14XM.
  • PDKs: For 28nm, 20nm and 14nm. 14nm handles FinFET enablement complexity. Robust, easy to use and high quality, supports pretty much the full range of EDA tools.
  • Design Methodology: all that 20nm stuff like double patterning. And FinFET extraction.
  • Foundry Services: GlobalShuttle advanced multi-project wafer (MPW) for flexible prototyping. Advanced interconnect 2.5D and 3D solutions.
  • ARM/SoC solutions (2 pods): industry-leading performance and energy-efficient ARM processor-based SoCs, accompanied by energy-efficient accelerator co-processors for heterogeneous computing.

There will also be numerous presentations every day at the booth. Some may require pre-registration, but there will be an open theater where partners give short presentations on their collaboration with GF. Details will soon be available on the GF DAC microsite.

GF employees will also be presenting in various other venues throughout the week:

  • Tuesday, June 4, Pavilion Panel, Dave McCann: Is This the Right Time to Create Standards for 2.5D/3D-IC Designs?
  • Tuesday, June 4, Synopsys Breakfast, Subi Kengeri and Kelvin Low: Two-way collaboration on FinFETs and 14XM
  • Tuesday, June 4, Management Day, Bob Madge: Decision-making for complex ICs
  • Tuesday, June 4, Cadence IP Talks, Subi Kengeri: Topic TBD
  • Wednesday, June 5, Mentor Panel, Richard Trihy: No fear of FinFET
  • Wednesday, June 5, Pavilion Panel, Luigi Capodieci: Learn the secrets of Design for Yield

GLOBALFOUNDRIES is the world’s first full-service semiconductor foundry with a truly global footprint. Launched in March 2009, the company has quickly achieved scale as the second largest foundry in the world, providing a unique combination of advanced technology and manufacturing to more than 150 customers. With operations in Singapore, Germany and the United States, GLOBALFOUNDRIES is the only foundry that offers the flexibility and security of manufacturing centers spanning three continents. The company’s three 300mm fabs and five 200mm fabs provide the full range of process technologies from mainstream to the leading edge. This global manufacturing footprint is supported by major facilities for research, development and design enablement located near hubs of semiconductor activity in the United States, Europe and Asia. GLOBALFOUNDRIES is owned by the Advanced Technology Investment Company (ATIC). For more information, visit http://www.globalfoundries.com.


A Brief History of Dassault Systèmes
by Paul McLellan on 05-07-2013 at 8:05 pm

Dassault Systèmes (DS) was created in 1981 when a small team of engineers was spun out of Dassault Aviation. They were developing software that used 3D surface modeling to design wind-tunnel models and so reduce the cycle time for wind-tunnel testing. The company entered into a distribution agreement with IBM that same year and started to sell its software under the CATIA brand.

Working with large industrial customers, they learned how important it was to have design support for the whole range of parts in 3D. The growing adoption of computer approaches to design, especially in automobiles and aviation, triggered the vision of transforming 3D part design into integrated virtual product design. By 1994 V4 was released, enabling customers to reduce the number of physical prototypes and have a complete virtual understanding of the product. They also expanded into new verticals, adding fabrication and assembly, consumer goods, high-tech, shipbuilding and energy.

In 1996 the company had an IPO in Paris and was also listed on Nasdaq (in 2008 DS voluntarily delisted from Nasdaq).

By 1997 DS was organized into two parts to support the entire product lifecycle: Product Lifecycle Management (PLM), and a design-centric business for customers seeking to design products in 3D. They also acquired IBM’s Product Manager software and created the ENOVIA brand.

1999 saw the initial launch of Version 5, a new software platform architecture for PLM designed for both Windows NT and Unix environments. They also expanded the ENOVIA product line with the acquisition of Smarteam for small and medium-sized businesses.

In 2000 DS created the DELMIA brand addressing digital manufacturing (digital process planning, robotic simulation and human modeling) and in 2005 the SIMULIA brand addressing realistic simulation.

In 2006 DS acquired MatrixOne, a global provider of collaborative PLM software and services to medium and large organizations. Prior to its acquisition by DS, MatrixOne had acquired Synchronicity in 2004, which was focused on managing the value chain for electronics products, especially semiconductors.

In 2007 DS started to take control of its own distribution (at this point IBM was distributing around 50% of its product), a process that ended in 2010 with the acquisition of IBM PLM, the business unit exclusively dedicated to the sale and support of DS’s PLM software. DS also signed a global alliance agreement with IBM extending their cooperation in professional services, cloud computing, middleware, flexible financing and hardware.

Various other acquisitions took place, including Netvibes in early 2012 and Tuscany Design Automation in late 2012. Netvibes had intelligent dashboarding, and Tuscany’s PinPoint product added dashboarding and design lifecycle management for SoCs. Several large semiconductor companies are using PinPoint for leading edge designs.

The combination of MatrixOne (with Synchronicity) along with Netvibes and PinPoint should lead to powerful tools for making the design process more comprehensible and efficient.

Dassault Systèmes’ corporate mission is to provide businesses and people with 3DExperience universes to imagine sustainable innovations capable of harmonizing products, nature and life. A growing number of companies in all industry verticals are evolving their innovation processes to imagine the future both with, and for, their end-consumers.


Sage Design Automation iDRM Launch
by Daniel Nenni on 05-07-2013 at 7:00 pm

This is an example of what I do during the day. I work with emerging companies on disruptive technologies and help launch them into the fabless semiconductor ecosystem. This product, iDRM, is the result of three years of joint development work amongst three semiconductor foundries and some of their top customers:

Continue reading “Sage Design Automation iDRM Launch”


Sandisk and NetworkComputer
by Paul McLellan on 05-07-2013 at 12:33 pm

Robert Veltman and Vikash Tyagi of SanDisk Corporation presented at SNUG a few weeks ago on their selection and use of RTDA’s NetworkComputer to manage their workflows.

Like everyone else, SanDisk has a high-performance computing farm (and like everyone else they are coy about how big it is) and lots of licenses for EDA tools, simulation in particular. You probably know that EDA tools use FlexLM to keep track of license use. A load balancer has to direct the workload to suitable execution hosts based on both hardware resource availability and license availability.

There are a number of problems that can occur. First is license under-utilization. If there are per-user limits but the number of users goes down then licenses can go unused. Users will also bypass the load balancer if the submit-to-execute time is too long, such as longer than the job runtime. And licenses cannot be shared among remote sites.


SanDisk’s requirements for a load balancer were:

  • can manage the hardware and software resources
  • pre-emption capability (stop a current job and run another)
  • high-performance job scheduling (very short submit to execute delay when resources are available)
  • flexible in adapting to different organization and business models
  • global deployment with central software resource management

The two well-known load balancer tools are SGE (from Oracle) and LSF (these days owned by IBM), but they both failed to meet the high-performance job scheduling needs, were not integrated with the FlexLM license manager, and were hard to deploy globally. SanDisk evaluated and decided to use RTDA’s NetworkComputer (NC), which met all the requirements.

So what has been their experience with NC?

Pre-emption is the ability to suspend a workload in order to free up hardware and software resources for another workload. In particular, the licenses are taken back from the pre-empted workload and then, when eventually it is resumed, it needs to re-acquire them and carry on as if the whole pre-emption had never happened.

Fairshare is used for license sharing among users when there is contention. Fast-fairshare uses pre-emption to balance loads immediately, even allowing a single user to consume all available licenses, but rebalancing the load as soon as multiple users require them. With no user limit, this encourages submitting workloads sooner rather than later so they benefit from any slack periods.


Fast-fairshare is implemented to allocate any excess licenses to sites where it is day (so the users are presumably around) rather than night (when presumably the workload is all queued up and new jobs are unlikely to arrive before morning).

Global license sharing is supported by NC. Each site gets a minimum license allocation and unused licenses go to the site with the highest demand. Pre-emption forces minimum allocation when required. There are some minor gotchas: sites with insufficient hardware can’t benefit from surplus licenses, there is a limit on the number of sites, and so on.
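A minimal sketch of the kind of allocation heuristic described above (my own illustration of the idea, not RTDA's actual NetworkComputer implementation): every site keeps its guaranteed minimum, and surplus licenses flow to the sites with the highest unmet demand.

```python
def allocate_licenses(total, minimums, demands):
    """Give each site its guaranteed minimum, then hand surplus licenses to the
    sites with the highest unmet demand (illustrative heuristic only)."""
    alloc = dict(minimums)                         # every site starts at its minimum
    surplus = total - sum(minimums.values())
    while surplus > 0:
        # pick the site whose demand exceeds its current allocation by the most
        site = max(demands, key=lambda s: demands[s] - alloc[s])
        if demands[site] <= alloc[site]:
            break                                  # no unmet demand anywhere; leave surplus idle
        alloc[site] += 1
        surplus -= 1
    return alloc

# 100 licenses, three sites with a minimum of 10 each and uneven demand:
print(allocate_licenses(
    total=100,
    minimums={"us": 10, "in": 10, "il": 10},
    demands={"us": 60, "in": 25, "il": 15},
))
# -> {'us': 60, 'in': 25, 'il': 15} once all demand is met
```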

The results have been good. License utilization has increased since deploying NC. The scheduling is very fast and eliminates the need for out-of-queue jobs. There is an efficient way to share licenses among different sites. Workloads can be balanced instantly.

Bottom line: using NC’s built-in functions with some in-house developed software automation delivers corporate-wide advanced load balancing.

The SanDisk SNUG presentation and the accompanying white paper, both of which contain a lot more technical details, are here.