
Layout-based ESD Check Methodology with Fast, Full-chip Static and Macro-level Dynamic
by Daniel Payne on 05-22-2013 at 10:25 am

Nvidia designs some of the most powerful graphics chips and systems in the world, so I’m always eager to learn more about their IC design methodology. This week I had the chance to talk with Ting Ku, Director of Engineering at Nvidia, about his DAC talk in the Apache booth, exactly two weeks from today. Registration is required for this presentation.


Ting Ku, Nvidia

Continue reading “Layout-based ESD Check Methodology with Fast, Full-chip Static and Macro-level Dynamic”


The Only DM Platform Integrated with All Major Analog and Custom IC Design Flows
by Daniel Nenni on 05-22-2013 at 10:00 am

As I have mentioned before, ClioSoft is the biggest little company in EDA with the most talked-about products on SemiWiki. At DAC, ClioSoft will introduce integrated SOS design management (DM) solutions providing revision control, design management and multi-site team collaboration for Agilent Technologies’ Advanced Design System (ADS) software and Mentor Graphics’ Pyxis flow. SOS is now seamlessly integrated into all major analog, RF and custom IC design flows:

  • Agilent ADS
  • Cadence Virtuoso®
  • Mentor Pyxis
  • Synopsys Laker™
  • Synopsys Custom Designer

In booth #2125 ClioSoft is replacing its popular poker game with slot car racing. As much as I like playing poker, slot car racing brings me back to my childhood so I’m looking forward to it. Maybe next year I can get them to install a massive HO scale train set? My brothers and I spent hours and hours playing with model trains.

Here is what customers are saying about ClioSoft products on SemiWiki:

  1. Supporting the Customer Is Everyone’s Job
  2. Cliosoft CEO on Design Collaboration Challenges!
  3. Agilent ADS Integrated with ClioSoft


Transistor-Level Update from Cadence at DAC
by Daniel Payne on 05-20-2013 at 7:47 pm

My 8 years as an IC circuit designer were spent at the transistor level, so if that interests you as well then consider what there is to see from Cadence at DAC this year. IC design technology is changing quickly, so keeping up to date is important for your job security and continuing-education goals.

Here’s what I would recommend attending at Cadence in Booth #2214: Continue reading “Transistor-Level Update from Cadence at DAC”


Samsung’s Life of Pi @ Apache @ DAC
by Paul McLellan on 05-20-2013 at 4:51 pm

Last week I talked to Eileen You of Samsung-SSI to get a preview of what they will be talking about at Apache’s customer theater at DAC. Their presentation is titled The Life of PI: SoC Power Integrity from Early Estimation to Design Sign-off. The ‘PI’ stands for Power Integrity.

Samsung-SSI’s operations are 5 years old, have grown from 1 person to 100, and have gone through several generations of technology. Some designs are at 28nm and others are now below that.

Apache tools are used to generate scenarios for power analysis and integrity. Power analysis depends on vectors for realistic scenarios, and getting good vectors, they find, is a really hard challenge. They are trying to expand up to the RTL level, since there is too little to gain from doing analysis post-synthesis, when the design is hard to change.
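The vector dependence follows from the standard dynamic-power relation P = α·C·V²·f, where the activity factor α comes straight from the stimulus. A minimal sketch (net names, capacitances and waveforms are all invented; this is not how RedHawk is driven, just the underlying arithmetic):

```python
# Toy dynamic-power estimate driven by simulation vectors: P = alpha * C * V^2 * f.
# Net names, capacitances and waveforms are invented for illustration only;
# production tools consume far richer switching-activity data than this.

def toggle_rate(waveform):
    """Activity factor alpha: fraction of cycle boundaries where the net toggles."""
    toggles = sum(1 for a, b in zip(waveform, waveform[1:]) if a != b)
    return toggles / (len(waveform) - 1)

def dynamic_power(nets, vdd=0.9, freq=1e9):
    """Sum alpha * C * Vdd^2 * f over all nets; nets maps name -> (cap_farads, waveform)."""
    return sum(toggle_rate(w) * c * vdd ** 2 * freq for c, w in nets.values())

nets = {
    "clk_buf":  (2e-15, [0, 1] * 8),         # toggles every cycle: alpha = 1.0
    "data_bus": (5e-15, [0, 0, 1, 1] * 4),   # toggles roughly every other cycle
}
p = dynamic_power(nets)   # a few microwatts with these invented numbers
```

Feed the same netlist two different vector sets and the reported power moves with the activity factors, which is exactly why bad vectors make even an accurate analysis misleading.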

Primarily Samsung uses RedHawk, CPM and Sentinel: RedHawk for general power analysis, and Sentinel for package and board. Packages need to be analyzed in both the frequency domain and the time domain.

The future challenges they see are mostly big-picture stuff: power grid design, power regulators, keeping costs under control with the right metal stack, and, of course, the big one that everyone faces, that power density is increasing. Rocket nozzles anyone? As designs get bigger and processes have less margin, they obviously need higher accuracy and higher capacity, plus good power vectors so that the analysis is realistic. It is easy to waste a lot of time doing very accurate analysis with bad vectors.

The abstract of Samsung-SSI’s presentation: The life of Power Integrity (PI) analysis starts at the product infancy stage. Early analysis involves resource allocation at the system level, such as the VRM, board, and package, and at the chip level, in terms of power grid structure, power scenario analysis, and the amount and placement of intentional decoupling capacitance (DECAP). This is done through systematic PI modeling and simulation. As the design matures, the power integrity engineer gets more information on the system and on the die. There are many phases of progressive iterations to evaluate design tradeoffs. Power integrity engineers work closely with board, package, and chip design teams to achieve PI closure. At the design tape out stage, the power integrity team is responsible for signing off static and dynamic IR drop and EM to verify that multi-million gates SoC chips meet stringent power supply noise budget. We investigated the impact of board, package, package embedded with DECAP, power grid, circuit switching activity, as well as on-die DECAP and demonstrated good correlation between early estimation and the final analysis with detailed chip and package models.

To register for this or other customer presentations at the Apache booth at DAC go here.


Better, Faster, Cheaper: Evaluating EDA tools
by Randy Smith on 05-20-2013 at 3:30 pm

With DAC approaching, it is a good time for both EDA companies and their customers to take a deeper look at the evaluation process of EDA tools, and how EDA companies position their tools. I hope this is useful for customers and vendors alike.

When it comes to positioning EDA tools in the marketplace there are really only three meaningful measurement scales. Products will primarily be categorized as: (1) Better; (2) Faster; or (3) Cheaper. Of course, Faster could be seen as better performance and Cheaper as better price. However, I break it down this way because, on the equivalent of the Maslow scale for EDA, a Better product is usually more highly valued (e.g., optimization) than a Faster product (e.g., the simulation technology treadmill) or a Cheaper one. While this method is taught in one way or another at most business schools, it is relentlessly drummed into the heads of the executives and marketers at all companies mentored by EDA icon Jim Hogan (who says he actually borrowed it from Isadore Katz). So, for now, I will refer to this as the Hogan Scale.

What makes a product better? Generally we look at two broad categories – quality of results (QoR) and ease of use. The method to measure QoR will vary based on the product category. For example, you might measure QoR for an IC place-and-route product by the resulting chip performance, power, and area (PPA). For a circuit simulator, QoR may be primarily measured on accuracy or debugging support. Each customer needs to list and rank the criteria that matter to them. Do not include vendor-touted features unless they matter to you over the time period of the license you are considering buying.
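One way to make "list and rank the criteria" concrete is a simple weighted scorecard; everything below (criteria, weights, scores, tool names) is hypothetical, and each evaluation team would substitute its own:

```python
# Hypothetical weighted scorecard for ranking EDA tools in an evaluation.
# The criteria, weights (importance 1-5) and scores (0-10) are all invented.

def weighted_score(scores, weights):
    """Weighted average of per-criterion scores."""
    total = sum(weights.values())
    return sum(scores[c] * w for c, w in weights.items()) / total

weights = {"performance": 5, "power": 4, "area": 3, "ease_of_use": 2}

candidates = {
    "tool_a": {"performance": 8, "power": 6, "area": 7, "ease_of_use": 4},
    "tool_b": {"performance": 7, "power": 7, "area": 6, "ease_of_use": 8},
}

totals = {name: weighted_score(s, weights) for name, s in candidates.items()}
best = max(totals, key=totals.get)   # tool_b edges ahead on these numbers
```

The useful part is not the arithmetic but the discipline: writing the weights down before the benchmark keeps vendor-touted features from sneaking into the ranking afterwards.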

Most benchmarks focus on QoR, leaving ease of use as a part of the Better measurement that is often undervalued. Sometimes, however, ease of use is measured in total clock time (setup time and difficulty, plus run time). If a tool only gives good results when run by the vendor’s application engineers then you have a potential problem for several reasons: (1) you will always be dependent on support from the vendor just to get good results from the tool; (2) you may end up competing with other companies over who gets the better support from the vendor; (3) if you cannot get the support you need, or perhaps cannot afford it, the tool will go underutilized; and (4) your development team may not be able to express their expertise since they cannot adequately drive the tool. To summarize, if ease of use is low, either you won’t see the QoR you saw in the benchmark when you use the tool yourself, or you will need to rely on the vendor’s AEs, which may be quite expensive. Perhaps more disturbing, the availability of the vendor’s AEs is not typically under the customer’s control.

Measuring Faster is pretty straightforward and simply requires that you give the tools you are measuring a similar amount of resources. You may need to be a bit flexible with that at times: if one vendor supports parallel processing and another does not, you simply need to consider what speed you require and how much more you are willing to pay for it. Faster will often lead to Better. In the classic trade-off, if you can run simulation fast, then you can do more verification by running more simulations to get better-quality results from verification.

Cheaper is often a pejorative term. In this context we simply mean less expensive. If two tools are essentially similar in QoR, ease-of-use, and speed, then price will make the difference. Where there are clear differences between the tools customers should consider if saving on license cost is actually worth it. EDA tools should be a multiplier of value for customers, not a necessary evil or cost. Good engineers will produce noticeably better results with good tools than they will with bad ones.

Finally, I have been asked by small companies and other bloggers how small EDA companies can, or should, compete with the major EDA vendors. It starts by knowing where you are on the Hogan Scale. If your Better is enough, then promote that. If not, are you Faster? Cheaper simply never works for a start-up. Competing on price with the big vendors is a bad place to be.



Tempus: Cadence Takes On PrimeTime
by Paul McLellan on 05-20-2013 at 7:00 am

Today Cadence announced Tempus, their new timing signoff solution. This has been in development for at least a couple of years and has been built from the ground up to be massively parallelized. Not just that different corners can be run in parallel (which is basically straightforward) but that large designs can be partitioned across multiple servers too. So Tempus is scalable to 100s of cores/servers. This scalability also means that it can handle essentially any size of design, and has already been used to analyze 100s of millions of cells (placeable instances) flat.
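The "basically straightforward" half of this, independent corners fanned out to workers, can be sketched in a few lines of Python; the corner list and the stand-in slack formula are invented, and this says nothing about how Tempus itself is implemented:

```python
# Sketch of per-corner parallelism: each corner/view is an independent analysis
# job fanned out to a worker pool. The corner list and the closed-form "slack"
# below are invented stand-ins for a real timing engine.
from concurrent.futures import ThreadPoolExecutor

def analyze_corner(corner):
    """Stand-in per-corner analysis returning (corner_name, worst_slack_ps)."""
    vdd, temp = corner["vdd"], corner["temp"]
    # Invented model: paths slow down at low voltage and high temperature.
    worst_slack = 120.0 - 80.0 * (1.0 - vdd) - 0.3 * temp
    return corner["name"], worst_slack

corners = [
    {"name": "ss_0p81v_125c", "vdd": 0.81, "temp": 125},
    {"name": "tt_0p90v_25c",  "vdd": 0.90, "temp": 25},
    {"name": "ff_0p99v_m40c", "vdd": 0.99, "temp": -40},
]

with ThreadPoolExecutor() as pool:
    results = dict(pool.map(analyze_corner, corners))

violations = {name: s for name, s in results.items() if s < 0.0}
```

The harder claim, partitioning a single large design across many servers so that one flat analysis scales, has no such embarrassingly parallel structure, which is why it is the more interesting part of the announcement.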

Of course timing closure is getting more difficult. Designs are getting larger, and increased margins (e.g., non-self-aligned double patterning at 20nm) make timing closure harder. Plus the number of views that need to be analyzed is also increasing exponentially, leading to extremely long run times (days, not hours).

Timing closure at 20nm is growing to up to 40% of the design cycle. I’m assuming that design cycle here means the synthesis, place and route, signoff cycle and doesn’t include the RTL development and verification. As you know, modern SoC design teams are largely assembling blocks of IP that have either been purchased or developed internally in special IP development groups. In fact, one of the problems with timing closure is that problems with the IP with regard to routability (which often show up as timing violations) only come to light during SoC assembly.

The order of magnitude increase in performance means that it is possible to do large amounts of path-based analysis (PBA). PBA traces timing through the actual input/output pins and propagates the correct slew values. Traditionally this has been too expensive to do extensively, so a more pessimistic approach is used, taking the worst (slowest) input pin and using that as a proxy for all input pins. The difference is around 2-3% in reduced pessimism. This has two effects. Firstly, some paths that violate timing without PBA will be OK with PBA and so do not need to be addressed during timing closure. Secondly, paths that still fail timing will miss by a smaller negative slack and so will not require such aggressive changes to fix. The recovered margin can also be taken in the form of slower/smaller/lower-power cells.
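The mechanism behind that pessimism can be shown with a toy delay model (all coefficients invented, not from any real library): graph-based analysis substitutes a worst-case slew at every stage, while path-based analysis propagates the slew the path actually produces:

```python
# Toy comparison of graph-based (GBA) vs path-based (PBA) analysis.
# The delay/slew models and every coefficient are invented; the point is only
# that reusing a worst-case slew at each stage inflates the arrival time.

def gate_delay(input_slew_ps):
    """Invented delay model: base delay plus a slew-dependent term (ps)."""
    return 20.0 + 0.5 * input_slew_ps

def output_slew(input_slew_ps):
    """Invented slew degradation through a gate (ps)."""
    return 10.0 + 0.4 * input_slew_ps

def path_arrival(stages, start_slew, pessimistic_slews=None):
    """Arrival time of a chain of gates. PBA propagates the path's actual slew;
    GBA instead substitutes a pessimistic worst-of-all-fanin slew per stage."""
    arrival, slew = 0.0, start_slew
    for i in range(stages):
        used = pessimistic_slews[i] if pessimistic_slews else slew
        arrival += gate_delay(used)
        slew = output_slew(slew)
    return arrival

pba = path_arrival(3, start_slew=30.0)                     # actual slews
gba = path_arrival(3, 30.0, pessimistic_slews=[60.0] * 3)  # worst-pin proxy
pessimism = gba - pba   # GBA reports a later, more pessimistic arrival
```

In this toy chain the GBA arrival is noticeably later than the PBA arrival, and that gap is exactly the margin that can be given back as slower, smaller, or lower-power cells.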

Another interesting feature is the ability to do hierarchical/incremental analysis. This makes it possible to look at just a single block (if that is all that your team is focused on) but have the timing numbers match precisely as if the entire chip was being timed. So a hierarchical design can be handled within an accurate global context.

Tempus has a tight iteration cycle with place and route, producing physically aware optimization (legalized, DRC-clean placement directives), so that typically 2-3 iterations are enough to achieve timing closure. This avoids the all-too-common case where each set of fixes causes another set of paths that were previously OK to suddenly need addressing: the dreaded whack-a-mole situation where timing never really closes.

Cadence have been working with Texas Instruments as a lead customer. One design (which I assume is a TI design but may not be):

  • 28nm, 44M instances, 12 views
  • Existing flow: 10 days to fix hold violations, could only work on 7 views due to capacity limitations
  • Tempus: 99.5% hold violations fixed in one ECO iteration with no degradation in setup timing. Before using Tempus there were 11,085 timing violations and after there were just 54

Cadence are working with the foundries to get Tempus qualified as a signoff tool. For smaller fabless companies that cannot afford to do their own correlation this is essential, of course, to even be considered.

The big challenge for Cadence that I see is that it is very hard to replace “standard” trusted tools that are good enough. That’s great if you are Cadence with Virtuoso, or Mentor with Calibre. And of course Synopsys with PrimeTime. It is what I call predictable pain. PrimeTime may not be the fastest or highest capacity but it does work. Cadence is betting that if they really can deliver a 10X improvement in design closure productivity then that will be enough to get people to switch. To find out we will just have to…well…give it time.


Design Data Management – Key Winning Strategy!
by Pawan Fangaria on 05-19-2013 at 9:30 pm

The semiconductor market today is characterized by ever-increasing design size and complexity, long design cycles, rapid technological advancement, intense competition, pricing pressure, small windows of opportunity, development and cross-functional teams spread across the globe, and multiple design partners including several IP vendors for a single SoC. In such a market it is essential for a corporation to identify key strategies for sustainable competitive advantage. Among so much divergence in the ecosystem, there need to be some key elements of convergence to unify the system and provide direction to achieve goals at the right time of opportunity and in a cost-effective manner. Unified design data management is one such activity, immensely important for the success of any semiconductor organization, because such an organization handles large design data from various sources which gets revised or modified by multiple teams and is reused frequently.

The Dassault Systemes group offers ENOVIA Synchronicity DesignSync Data Manager, which brings compelling value to any semiconductor organization from technical, commercial and economic perspectives. It manages design data throughout the product life cycle: specification, development, test and release, and follow-on releases thereafter. Although I knew about DesignSync, I didn’t realise that it silently works behind the scenes and keeps companies ahead in time-to-market until I read a paper on its ROI impact, an analysis done by the Gantry Group by interviewing 18 of ENOVIA’s top customers. I will not go into the details of that study; however, a summary of what customers liked about ENOVIA’s DesignSync is in the following table –

These inputs indicate that DesignSync is a critical enabling technology infrastructure for semiconductor and electronics companies to sustain their competitive advantage. As I realised after going through the paper, some of the key advantages of using DesignSync are –

Design Flow Efficiency – By embedding DesignSync into design flows, companies are able to automate the whole process of data management and version control to bring consistency in the design without bothering designers for this mundane but essential work. The design engineering time saved as a result of DesignSync was estimated as –

Design Engineer Productivity – No wonder, design engineer productivity is improved in several ways such as designers’ time spent in producing higher number of gates, increased product throughput, reduced team size, seamless handling of larger number of design configurations and shorter integration cycle. Quantitative figures for these are mentioned in the paper.

Getting back to tape-out configuration – When silicon comes back from the fab, DesignSync enables engineers, spread across multiple sites, to quickly pick up from where they left off in the design, hence saving enormous time-to-market for the company.

Data Set Compares – It’s extremely helpful for an enterprise wide team, distributed across geography to quickly and automatically figure out the design changes between different versions of a design within seconds.

Saving in Disk Space – It is a great relief for designers not to have to keep multiple local copies of their design work. DesignSync instils confidence among designers for safe handling and retrieval of their design data in a unified manner without any duplication, thus consuming only the required disk space. This is a boon for today’s semiconductor business, where design data size itself can run into terabytes.

IP Re-use – As DesignSync supports modular design structure, it becomes easy to maintain and re-use IP modules in the designs. Use of IP is a significant strategy for semiconductor organizations.

There are other advantages; notably, ease of collaboration among teams spread across multiple sites, reduced product cost, predictable and optimized project schedules and reduced probability of failures. A reduction in re-spin of mask set (a 65 nm mask set could cost up to $2M) can save millions of dollars.

The paper also records some very impressive quotes from Dassault’s customers who rightly saw the advantages of using DesignSync: managing consistency in data, handling seamless collaboration in enterprise-wide teams, enabling teams to achieve project deadlines, hierarchical configuration management and so on. Interested readers can take a look at the complete paper at –
ROI Impact Analysis of ENOVIA Synchronicity DesignSync® Data Manager, and even try using DesignSync if they have not used it before.

To gain more info – visit Dassault’s booth #1625 at DAC 2013, Austin, TX – June 2 – 6

As a concluding remark, I would like to mention that the semiconductor industry is one that continued investing in R&D even at a time of economic slowdown to gain an edge in technology innovation. As a result, we are talking about 14nm and 10nm process nodes today. With such fierce competition and so many $$ at stake, it’s wise to have a companion like DesignSync, which helps eliminate wasted time, money and energy to keep design teams and companies ahead on their goals.


CEO Interview: Jens Andersen of Invarian
by Daniel Nenni on 05-19-2013 at 9:10 pm

Invarian is an interesting EDA company that sees a niche market opening in the physical verification space. There are a number of converging factors driving this opportunity. Electromigration and voltage-drop for full-chip analysis demands SPICE level accuracy with fast runtimes. Invarian solves that problem with macro modeling and a parallel architecture. Not only are runtimes greatly reduced, but accuracy is improved by incorporating thermal effects within the overall concurrent analysis flow. It seems like everything is going 3D these days, from FinFETs to heterogeneous stacked-die packaging. In a 3D world thermal effects have a major impact on power, performance and reliability. I got a chance to sit down with Invarian’s CEO, Jens Andersen, and delve into this interesting little company. We have been friends for years so I really enjoyed catching up.

What are some specific IC design issues your customers are facing?

  • Our industry is facing the proverbial horns of a dilemma: exponential growth in complexity on one hand and shrinking margins of error on the other. The growth in gate-count is not the problem (Moore’s Law has been with us for some time) but rather the constraints on power consumption, electromigration issues, instantaneous voltage-drop analysis and thermal effects. Sign-off has become burdened with Band-Aid patches over the years, and the cumulative effect has been a cumbersome and disjointed process. It has created what I call a time dilation effect: there is simply not enough time to get simulation runs through the pipe. This is the road to ruin, leading one down a path of unpleasant trade-offs between incomplete analysis and missing the market window.
  • Our customers are demanding new tools, and for a variety of reasons these tools must be built from the ground up, to handle the complexities of cutting edge processes and large design files, but they must also be easy to setup, use and maintain.
  • Current tool sets are inadequate, in the sense that they do not account for the interaction between thermal, power, timing and voltage. A scenario where each leg of the flow works properly, yet the chip fails catastrophically, is entirely within the realm of the possible. In fact, it is a very likely result. The Law of Large Numbers has a nasty habit of punishing gamblers, whether in a casino or at a workstation. It would be folly to expect customers to risk millions of dollars on processes that use blindly set constraints for sign-off.
  • Our customers are now able to model with very accurate results due to our concurrent methodology. Not only do we provide the accuracy necessary to ensure proper chip behavior, we do so with advanced techniques that vastly improve runtime. Without incurring risk or impacting the schedule, our overall goal is to tape-out optimum designs. We aim for just enough margins but not more than enough.
  • There is a Catch-22 when it comes to designs at advanced process nodes. It has always been the case that a process is in flux at an early stage of development. This variation keeps growing as a percentage. That is not the problem. The deviation of the actual versus the predicted has major consequences for other reasons. What has changed and why it is a major issue now, when in the past it could be handled, is that chip evolution has reached the stage where interrelated forces can no longer be neglected. Electromigration and leakage current rely on current density values and thermal effects. Power budgets are set early in the design flow with appropriate clock speeds for meeting performance specs. Transient effects and ground bounce are a constant source of potential error. It’s getting harder to tell the signals from the noise.
  • We developed easy-to-use software and leverage the proven strength of industry standard file formats. There is no need to pre-characterize data with predefined corners. That approach leads to average values that are not accurate for transient analysis. Our concurrent engines allow users the ability to know exactly the condition of their design.
  • Fragmented solutions on the market tend to confound the designer and do not model the physically correct design state. With a muddled mess of average estimations there is no choice but to fall back to worst-case scenarios. There is a penalty with this overkill approach to design. Knowledge is power.
  • We have developed algorithms that keep our concurrent analysis engines in sync to provide extremely accurate data. Our results have been correlated with true physical measurements with amazing accuracy.

What does Invarian do?
We have a dual focus:
  • Our primary focus is our InVar Pioneer Platform, consisting of multiple simultaneous engines for analyzing power, EM/IR-drop and thermal. Our holistic approach achieves the necessary accuracy, and our macro modeling hierarchy enables our tool to run the largest designs with lightning-fast runtimes.
  • Our secondary platform is our InVar Frontier 3D Platform for true 3D thermal analysis. We scale from sub-transistor level to complex stacked-die package environments. Using physical parameters we bring a whole new level of capability to 3D thermal analysis.

Invarian is the only provider of physical sign-off tools that offers concurrent analysis of the various power integrity parameters as a whole. First-time silicon success with the optimum design for performance and accuracy requires tools that accurately model real behavior.

Why did Invarian start?
Our founders have been working in the areas of power, voltage and thermal analysis for many years. They experienced first-hand many cases where designers got faulty silicon solely because of misleading or incomplete sign-off results. From this sprang the idea of delivering a perfectly accurate and physically correct analysis tool. The idea itself is very simple – analysis should reproduce/model the physical conditions of real ICs as closely as possible. Designers must have a tool that exactly models the behavior of real silicon before manufacturing. Our tool removes uncertainty, and the resulting expensive re-spin cycle, from the design process. The passion that began with our quest to build an analysis solution that reflects physical reality continues to resonate and drive us today.
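The interdependence Andersen keeps returning to, leakage rising with temperature while temperature rises with power, is why one-pass flows can mislead. A hedged sketch with invented coefficients shows the kind of self-consistent operating point a concurrent engine has to converge to:

```python
# Sketch of the power/thermal interdependence: leakage rises with temperature,
# temperature rises with power, so a self-consistent answer needs iteration.
# Every coefficient is invented; real leakage is exponential in temperature.

def total_power(temp_c, dynamic_w=2.0, leak0_w=0.5, leak_per_c=0.01):
    """Dynamic power plus leakage that grows (here, linearly) with temperature."""
    return dynamic_w + leak0_w * (1.0 + leak_per_c * (temp_c - 25.0))

def die_temp(power_w, ambient_c=25.0, theta_ja=15.0):
    """Junction temperature from power via a lumped thermal resistance (C/W)."""
    return ambient_c + theta_ja * power_w

def solve_electrothermal(tol=1e-6, max_iter=100):
    """Fixed-point iteration to a mutually consistent (power, temperature)."""
    temp = 25.0
    for _ in range(max_iter):
        p = total_power(temp)
        new_temp = die_temp(p)
        if abs(new_temp - temp) < tol:
            return p, new_temp
        temp = new_temp
    return p, temp

power, temp = solve_electrothermal()
p_decoupled = total_power(25.0)   # what a one-pass flow at ambient would report
```

With these made-up numbers the converged power sits noticeably above the one-pass estimate at ambient, which is the gap a decoupled sign-off flow would silently miss.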

What is Invarian’s Roadmap?
We are constantly updating our engines to stay ahead of the competition. One of the great things about being a new entrant in an established industry is being unbound from outmoded legacy. We have been able to build our solution from the ground up to take advantage of the latest methodologies, such as parallel processing. This has allowed us to analyze huge designs (over 150,000,000 cells) in one pass, which has historically been impossible, so our customers are very excited to finally have such powerful capability. We integrate seamlessly with the major implementation flows and SPICE engines. Enabling our customers to perform highly accurate analysis within their existing flow, with a powerful GUI and ‘what-if’ capability, gives them the peace of mind of knowing precisely their chip’s design behavior. Looking ahead, we shall continue to listen to our customers; that is what drives our technology development and roadmap. We have some exciting announcements to make in the near future and look forward to sharing those with you.

Will you be at the Design Automation Conference this year?
Yes, we will be in the center of the show in booth #1332 and will have suites available for private showings of our new releases; these are very exciting and will benefit designs at both new and older generation process nodes. Please visit www.invarian.com/events.html to sign up for a demo of our solution.

Also Read:

CEO Interview: Jason Xing of ICScape Inc.

Atrenta CEO on RTL Signoff

Sanjiv Kaul is New CEO of Calypto


Complete Schedule of Synopsys 2013 DAC Events, Panels & Paper Participation (Free Food!)
by Daniel Nenni on 05-19-2013 at 9:01 pm

Funny story: at #49DAC I saw Aart with a very relaxed look on his face, gazing out over the exhibit hall, and in my mind he was thinking, “Mine, all mine!” But I digress… Synopsys is the #1 EDA company for a reason, and here is the supporting data for that hypothesis:

Synopsys is committed to accelerating innovation for its customers – it’s been at the core of the company’s DNA for more than 25 years. The world’s leading semiconductor and electronics companies have relied on Synopsys’ comprehensive portfolio of integrated, system-level implementation, verification, IP, manufacturing and FPGA solutions to design their products. Meet with members of the Synopsys team at DAC to learn more about the newest solutions available to help accelerate innovation.

Visit booth #947 to see Synopsys’ technology exhibits, including the HAPS family of FPGA-based prototyping solutions and the company’s comprehensive functional verification solutions. If you are interested in a specific topic, Synopsys experts will be available for one-on-one meetings during the show.

[TABLE] align=”center” border=”1″ style=”width: 500px”
|-
| colspan=”4″ style=”width: 684px; height: 33px” | SUNDAY, JUNE 2
|-
| style=”width: 223px; height: 19px” | Event
| style=”width: 156px; height: 19px” | Time
| style=”width: 132px; height: 19px” | Location
| style=”width: 174px; height: 19px” | Additional Information
|-
| style=”width: 223px; height: 19px” | Workshop 4
Low-Power Design with the New IEEE 1801-2013 Standard
| style=”width: 156px; height: 19px” | 1:00 p.m. – 5:00 p.m.
| style=”width: 132px; height: 19px” | Austin Convention Center, Room 18C
| style=”width: 174px; height: 19px” | Speaker: Jeffrey Lee, Synopsys
|-
| style=”width: 223px; height: 33px” | MONDAY, JUNE 3
| style=”width: 156px; height: 33px” |
| style=”width: 132px; height: 33px” |
| style=”width: 174px; height: 33px” |
|-
| style=”width: 223px; height: 19px” | Event
| style=”width: 156px; height: 19px” | Time
| style=”width: 132px; height: 19px” | Location
| style=”width: 174px; height: 19px” | Additional Information
|-
| style=”width: 223px; height: 58px” | ARM-TSMC-Synopsys Breakfast
Optimizing Implementation of Performance- and Power-Balanced Processor Cores
| style=”width: 156px; height: 58px” | 7:15 a.m. – 8:45 a.m.
| style=”width: 132px; height: 58px” | Hilton Hotel, 6th Floor, Grand Ballroom H
| style=”width: 174px; height: 58px” | RSVP Required
|-
| style=”width: 223px; height: 58px” | AMS Verification Luncheon
Advance Your Mixed-signal Verification Techniques to the Next Level
| style=”width: 156px; height: 58px” | 11:30 a.m. – 1:30 p.m.
| style=”width: 132px; height: 58px” | Hilton Hotel, 6th Floor, Grand Ballroom G
| style=”width: 174px; height: 58px” | RSVP Required
|-
| style=”width: 223px; height: 58px” | IC Compiler Luncheon
The Many Faces of Advanced Technology
| style=”width: 156px; height: 58px” | 11:30 a.m. – 1:30 p.m.
| style=”width: 132px; height: 58px” | Hilton Hotel, 6th Floor, Grand Ballroom H
| style=”width: 174px; height: 58px” | RSVP Required
|-
| style=”width: 223px; height: 58px” | Pavilion Panel
Affiliation Avenue: The Road to Success
| style=”width: 156px; height: 58px” | 1:30 p.m. – 2:30 p.m.
| style=”width: 132px; height: 58px” | Austin Convention Center, Booth 509
| style=”width: 174px; height: 58px” | Moderator: Sashi Oblisetty, Synopsys
|-
| style=”width: 223px; height: 58px” | GlobalFoundries Theater
Foundry Reference Flow Ecosystem Empowers Designers to Achieve Aggressive Time-to-Market Challenges
| style=”width: 156px; height: 58px” | 1:45 p.m. – 2:00 p.m.
| style=”width: 132px; height: 58px” | Austin Convention Center, Booth 1314 Theater
| style=”width: 174px; height: 58px” |
|-
| style=”width: 223px; height: 58px” | Customer Insight Sessions
Success with Synopsys’ Galaxy Implementation Platform
| style=”width: 156px; height: 58px” | 2:00 p.m. & 3:00 p.m.
| style=”width: 132px; height: 58px” | Austin Convention Center, Level 3, Room 10B
| style=”width: 174px; height: 58px” | Speakers:
2:00 p.m. – Yongjoo Jeon, Samsung
3:00 p.m. – Michael V. Leuzze, LSI
RSVP Required
|-
| style=”width: 223px; height: 58px” | GlobalFoundries Theater
Xceptional IP for GlobalFoundries 14nm-XM Technology
| style=”width: 156px; height: 58px” | 2:00 p.m. – 3:00 p.m.
| style=”width: 132px; height: 58px” | Austin Convention Center, Booth 1314 Theater
| style=”width: 174px; height: 58px” |
|-
| style=”width: 223px; height: 58px” | Samsung Theater
Galaxy Innovations and Collaboration with Samsung for 14-nm FinFET Success
| style=”width: 156px; height: 58px” | 4:30 p.m. – 4:45 p.m.
| style=”width: 132px; height: 58px” | Austin Convention Center, Booth 915 Theater
| style=”width: 174px; height: 58px” |
|-
| style=”width: 223px; height: 55px” | PrimeTime SIG Dinner
Technology Panel – Advanced ECO Methodology
| style=”width: 156px; height: 55px” | 6:00 p.m. – 9:30 p.m.
| style=”width: 132px; height: 55px” | Brazos Hall 204 E 4th St
| style=”width: 174px; height: 55px” | RSVP Required
|-
| colspan=”4″ style=”width: 684px; height: 33px” | TUESDAY, JUNE 4
|-
| style=”width: 223px; height: 19px” | Event
| style=”width: 156px; height: 19px” | Time
| style=”width: 132px; height: 19px” | Location
| style=”width: 174px; height: 19px” | Additional Information
|-
| style=”width: 223px; height: 58px” | Partner Breakfast with GlobalFoundries and Synopsys
Deploying 14XM FinFETs in Your Next Mobile SoC Design
| style=”width: 156px; height: 58px” | 7:15 a.m. – 8:45 a.m.
| style=”width: 132px; height: 58px” | Hilton Hotel, 6th Floor, Grand Ballroom G
| style=”width: 174px; height: 58px” | RSVP Required
|-
| style=”width: 223px; height: 58px” | Keynote Visionary Talk
Massive Innovation and Collaboration in the
“GigaScale” Age!
| style=”width: 156px; height: 58px” | 9:15 a.m. – 9:30 a.m.
| style=”width: 132px; height: 58px” | Austin Convention Center, Ballroom ABC
| style=”width: 174px; height: 58px” | Speaker: Aart de Geus, Synopsys Chairman and co-CEO
|-
| style=”width: 223px; height: 58px” | Customer Insight Sessions
Success with Synopsys’ Galaxy Implementation Platform
| style=”width: 156px; height: 58px” | 10:00 a.m. & 2:00 p.m.
| style=”width: 132px; height: 58px” | Austin Convention Center, Level 3, Room 10B
| style=”width: 174px; height: 58px” | Speakers:
10:00 a.m. – Martin Foltin, HP
2:00 p.m. – Tim Whitfield, ARM
RSVP Required
|-
| style=”width: 223px; height: 55px” | Paper Session 4.1
Double-Patterning Lithography-Aware Analog Placement
| style=”width: 156px; height: 55px” | 10:30 a.m. – 12:00 p.m.
| style=”width: 132px; height: 55px” | Austin Convention Center, Room 13AB
| style=”width: 174px; height: 55px” | Speakers: Tung-Chieh Chen and
Ta-Yu Kuan, Synopsys
|-
| style=”width: 223px; height: 55px” | DAC Management Day 2013
| style=”width: 156px; height: 55px” | 10:30 a.m. – 6:00 p.m.
| style=”width: 132px; height: 55px” | Austin Convention Center, Room 17AB
| style=”width: 174px; height: 55px” | Organizer: Yervant Zorian, Synopsys
|-
| style=”width: 223px; height: 58px” | Custom Design Luncheon
Addressing Custom Design Challenges with Laker
| style=”width: 156px; height: 58px” | 11:30 a.m. – 1:30 p.m.
| style=”width: 132px; height: 58px” | Hilton Hotel, 6th Floor, Grand Ballroom G
| style=”width: 174px; height: 58px” | RSVP Required
|-
| style=”width: 223px; height: 58px” | Verification Luncheon
SoC Leaders Verify with Synopsys
| style=”width: 156px; height: 58px” | 11:45 a.m. – 1:45 p.m.
| style=”width: 132px; height: 58px” | Hilton Hotel, 6th Floor, Grand Ballroom H
| style=”width: 174px; height: 58px” | RSVP Required
|-
| style=”width: 223px; height: 58px” | Samsung Theater
Accelerating SoC Designs with Synopsys DesignWare® IP for Samsung Processes
| style=”width: 156px; height: 58px” | 1:30 p.m. – 1:45 p.m.
| style=”width: 132px; height: 58px” | Austin Convention Center, Booth 915 Theater
| style=”width: 174px; height: 58px” |
|-
| style=”width: 223px; height: 78px” | Paper Session 11.3
Automatic Design Rule Correction in the Presence of Multiple Grids and Track Patterns
| style=”width: 156px; height: 78px” | 1:30 p.m. – 3:00 p.m.
| style=”width: 132px; height: 78px” | Austin Convention Center, Room 14
| style=”width: 174px; height: 78px” | Speaker: Nitin D. Salodkar, Synopsys
Authors: Subramanian Rajagopalan, Sambuddha Bhattacharya, Shabbir H. Batterywala
|-
| style=”width: 223px; height: 78px” | Paper Session 12.4
An ATE-Assisted DFD Technique for Volume Diagnosis of Scan Chains (Best paper candidate)
| style=”width: 156px; height: 78px” | 1:30 p.m. – 3:00 p.m.
| style=”width: 132px; height: 78px” | Austin Convention Center, Room 15
| style=”width: 174px; height: 78px” | Speaker: Rohit Kapur, Synopsys
|-
| style=”width: 223px; height: 77px” | GlobalFoundries Theater
Synopsys/GlobalFoundries Collaboration on Interoperable PDK Enablement
| style=”width: 156px; height: 77px” | 4:45 p.m. – 5:00 p.m.
| style=”width: 132px; height: 77px” | Austin Convention Center, Booth 1314 Theater
| style=”width: 174px; height: 77px” |
|-
| style=”width: 223px; height: 58px” | IPL Alliance Dinner
iPDKs: A Thriving PDK Standard
| style=”width: 156px; height: 58px” | 6:00 p.m. – 7:30 p.m.
| style=”width: 132px; height: 58px” | Hilton Hotel, 6th Floor, Grand Ballroom G
| style=”width: 174px; height: 58px” | RSVP Required
|-
| style=”width: 223px; height: 65px” | Media/Analyst/Blogger Dinner
| style=”width: 156px; height: 65px” | 6:30 p.m. – 9:30 p.m.
| style=”width: 132px; height: 65px” | Malverde—located above La Condesa, 400 W 2nd Street, Austin, TX
| style=”width: 174px; height: 65px” | NOTE: As space is limited, RSVPs are required by Tuesday, May 28, and are taken on a first-come, first-served basis.
|-
| colspan=”4″ style=”width: 684px; height: 31px” | WEDNESDAY, JUNE 5
|-
| style=”width: 223px; height: 19px” | Event
| style=”width: 156px; height: 19px” | Time
| style=”width: 132px; height: 19px” | Location
| style=”width: 174px; height: 19px” | Additional Information
|-
| style=”width: 223px; height: 58px” | Paper Session 25.1
Machine-Learning-Based Hotspot Detection Using Topological Classification and Critical Feature Extraction
| style=”width: 156px; height: 58px” | 9:00 a.m. – 10:30 a.m.
| style=”width: 132px; height: 58px” | Austin Convention Center, Room 14
| style=”width: 174px; height: 58px” | Author: Charles Chiang, Synopsys
|-
| style=”width: 223px; height: 58px” | Samsung Theater
Galaxy Innovations and Collaboration with Samsung for 14-nm FinFET Success
| style=”width: 156px; height: 58px” | 10:30 a.m. – 10:45 a.m.
| style=”width: 132px; height: 58px” | Austin Convention Center, Booth 915 Theater
| style=”width: 174px; height: 58px” |
|-
| style=”width: 223px; height: 58px” | Pavilion Panel
IP Pitfalls: Avoid the Wild Ride
| style=”width: 156px; height: 58px” | 10:30 a.m. – 11:15 a.m.
| style=”width: 132px; height: 58px” | Austin Convention Center, Booth 509
| style=”width: 174px; height: 58px” | Moderator: Warren Savage, IPextreme
Panelists: John Swanson, Synopsys; Keith Odom, National Instruments; Hans Bouwmeester, Open-Silicon
|-
| style=”width: 223px; height: 58px” | GlobalFoundries Theater
Synopsys/GlobalFoundries Collaboration on Interoperable PDK Enablement
| style=”width: 156px; height: 58px” | 10:45 a.m. – 11:00 a.m.
| style=”width: 132px; height: 58px” | Austin Convention Center, Booth 1314 Theater
| style=”width: 174px; height: 58px” |
|-
| style=”width: 223px; height: 58px” | Paper Session 32.4
Spacer-Is-Dielectric-Compliant Detailed Routing for Self-Aligned Double Patterning Lithography (Best paper candidate)
| style=”width: 156px; height: 58px” | 1:30 p.m. – 3:00 p.m.
| style=”width: 132px; height: 58px” | Austin Convention Center, Room 15
| style=”width: 174px; height: 58px” | Moderator: Mehmet Yildiz
Authors: Qiang Ma, Hua Song, James Shiely, Gerard Luk-Pat, Alexander Miloslavsky, Synopsys
|-
| style=”width: 223px; height: 44px” | Technical Panel 34
EDA: Meet Analytics; Analytics: Meet EDA
| style=”width: 156px; height: 44px” | 4:00 p.m. – 5:30 p.m.
| style=”width: 132px; height: 44px” | Austin Convention Center, Room 16AB
| style=”width: 174px; height: 44px” | Moderator: Janick Bergeron, Synopsys
|-
| colspan=”4″ style=”width: 684px; height: 31px” | THURSDAY, JUNE 6
|-
| style=”width: 223px; height: 19px” | Event
| style=”width: 156px; height: 19px” | Time
| style=”width: 132px; height: 19px” | Location
| style=”width: 174px; height: 19px” | Additional Information
|-
| style=”width: 223px; height: 44px” | Technical Panel 47
Analog Design with FinFETs: “The Gods Must be Crazy!”
| style=”width: 156px; height: 44px” | 1:30 p.m. – 2:30 p.m.
| style=”width: 132px; height: 44px” | Austin Convention Center, Room 16AB
| style=”width: 174px; height: 44px” | Panelist: Navraj Nandra, Synopsys
|-
| style=”width: 223px; height: 44px” | Technical Panel 52.1
Routability-Driven Placement for Hierarchical Mixed-Size Circuit Designs
| style=”width: 156px; height: 44px” | 1:30 p.m. – 3:00 p.m.
| style=”width: 132px; height: 44px” | Austin Convention Center, Room 14
| style=”width: 174px; height: 44px” | Author: Tung-Chieh Chen, Synopsys
|-

Additional Information

Synopsys Main Booth #947
Conversation Central, Synopsys’ online radio show, is back again at DAC 2013 with an exciting lineup of guests. Synopsys will host two discussions a day from the main booth—come join us! Each show will also be recorded for later viewing and listening on Synopsys’ YouTube channel, iTunes, and the Conversation Central show notes page.
We invite you to sit and listen to a selection of our past shows while visiting the Synopsys booth at DAC. For more information, visit the show notes page at http://blogs.synopsys.com/conversationcentral/
or follow on Twitter: #snps and #50DAC

Other Booths that Include Synopsys

ARM Connected Village Booth #921
Visit Synopsys to see how our collaboration with ARM® helps address leading-edge challenges for system-on-chip design and software development.

GlobalFoundries Booth #1314
Visit Synopsys at the GlobalFoundries booth to find out more about our collaboration and support for advanced process technology. Synopsys will participate in presentations at the GlobalFoundries Theater on Monday, Tuesday and Wednesday.

Samsung Booth #915 Theater
See how Synopsys and Samsung have accelerated SoC designs and collaborated on 14-nm FinFET technology in the Samsung Theater on Monday and Tuesday.

Si2 Booth #1427
As Si2 celebrates its 25th anniversary, Synopsys and other members will showcase how their products and solutions apply the spectrum of standards developed at Si2.

TSMC Booth #1746
Synopsys will present on “Enabling Advanced SoC Designs for TSMC Processes with Synopsys DesignWare® IP” in TSMC’s Open Innovation Platform Theater.



BDA Introduces High-Productivity Analog Characterization Environment (ACE)
by Daniel Nenni on 05-19-2013 at 7:45 pm

Last week Berkeley Design Automation introduced its new Analog Characterization Environment (ACE), a high-productivity system for ensuring analog circuits meet all specifications under all expected operational, environmental, and process conditions prior to tapeout.

While standard cell characterization and memory characterization are well-defined application areas with standard flows, dedicated tooling, and rigorous metrics to ensure high-quality results, analog circuit characterization is still largely ad hoc and highly dependent on individual designers’ experience. As a result, the silicon behavior of analog circuits is predictably unpredictable. Berkeley Design Automation has set out to change that with ACE.

Two Analog Worlds
Understanding the need for ACE requires first understanding the current environment analog designers work within. For leading-edge analog circuit design and verification, there are two separate and distinct worlds today:

Analog World 1 – The High-Performance Analog/Mixed-Signal World: In this world ICs are dominated by precision analog/RF circuitry including: sensors, ADCs, DACs, PLLs, RF, filters, power management, etc., and integrated digital is important and growing. Cadence dominates World 1, and Synopsys is rarely seen. Cadence simulators (Spectre, Spectre Turbo, Spectre RF, APS, APS RF, UltraSim, etc.) have historically dominated World 1, but BDA Analog FastSPICE (AFS) has made significant inroads handling the toughest problems in the last few years. All World 1 simulators use Spectre syntax and models. World 1 lives within Cadence ADE 5.1 or ADE-L/ADE-XL 6.1. World 1 designers wouldn’t dream of manually editing a netlist or doing anything from the command line.

Analog World 2 – The Big-Digital SoC World: With all due respect to digital design teams, SoC performance bottlenecks are all analog: i) high-speed I/O (getting data on and off the chip), ii) PLL/clocking (any slop in the clock is a tax on every digital path), and iii) memory (half the silicon area). Synopsys dominates World 2, and Cadence is rarely seen. Synopsys simulators (HSPICE, HSIM, NanoSim, XA, FineSim) have historically dominated World 2, but again BDA AFS has made significant inroads handling the toughest problems in the last few years. All World 2 simulators use HSPICE syntax and models. World 2 lives at the command-line—manually editing netlists, scripting, and issuing command-line runs. World 2 designers wouldn’t dream of doing anything through Cadence ADE.

BDA competes effectively in both of these worlds, giving it a unique viewpoint on the market. Moreover, World 1 and World 2 increasingly meet on the same project, making analog characterization an even bigger challenge.

The Growing Need for Analog Characterization
Analog designers’ simulation tasks may be classified as performance verification (hereafter simply “verification”) followed by characterization. As used here, verification is what designers do to make sure their circuit meets specifications under nominal conditions and under a few conditions that the designer expects may break it – perhaps a few PVT corners, a few extreme operating conditions, and maybe a bit of Monte Carlo for mismatch. Everything looks good. Verification is done. The circuit is probably good to go, and the designer would like to move on to the next interesting problem.

Not so fast. There are many possible process, voltage, and temperature corners in nanometer silicon technologies. There are many possible operating conditions (e.g., signal levels, calibration setups, operating modes, and noise sources). There is also a lot of global and local process variation. The process corners in the PDK do not correspond to any analog circuit’s 3-sigma points; in fact, they are often not even close. Global process variation affects every measurement (e.g., frequency, gain, duty cycle, jitter, SNDR) in every circuit differently. Key design components (e.g., inductors and varactors) also have critical variation. Which combination of these different environmental, operational, and process conditions (a.k.a. “variants”) results in a blown spec in silicon is anyone’s guess—especially since today’s circuits are almost all highly nonlinear. This is a combinatorial problem with typically hundreds or thousands of known important variants. The objective of analog characterization is to ensure an analog/mixed-signal/RF circuit meets all specs under all these variants.
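To see how quickly the variant count explodes, consider a small, purely illustrative Python sketch. The corner names and counts below are hypothetical, not taken from any real PDK:

```python
import itertools

# Hypothetical characterization plan: names and counts are illustrative only.
process_corners = ["tt", "ff", "ss", "fs", "sf"]
voltages = [0.9, 1.0, 1.1]          # supply corners (V)
temperatures = [-40, 25, 125]       # junction temperatures (C)
operating_modes = ["calibrated", "uncalibrated", "low_power"]

# Every combination is a distinct variant that could blow a spec.
variants = list(itertools.product(process_corners, voltages,
                                  temperatures, operating_modes))
print(len(variants))  # 5 * 3 * 3 * 3 = 135 deterministic variants
# Add even a 50-sample Monte Carlo per variant, and the simulation
# count climbs to 6,750 runs -- for one circuit and one test.
```

And that is before multiplying by the number of measurements per test, which is why designers so often cut corners.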

Characterization is not fun. It’s tedious, it’s boring, it’s error-prone, it takes a long time…and there haven’t been any good tools to help. As a result, analog designers cut corners—a lot of corners (and sweeps and Monte Carlo runs). Although designers know they should do a lot more characterization, it is too painful and takes too long. So they don’t. Instead they make sure their circuits have plenty of margin (no one’s measuring over-design anyway) and hope that they don’t miss a scenario that causes a silicon problem.

Characterization in the Two Analog Worlds
In World 1 designers live within ADE. By all accounts ADE is pretty good for verification setup and analysis, but it is not so good for characterization. All Cadence customers are migrating to ADE 6.1, in which most characterization capabilities are in ADE-XL. Many designers find ADE-XL so cumbersome that they are actually performing less characterization than before. For example, setting up a corner is completely different from setting up a sweep, and both are completely different from setting up a Monte Carlo run. Non-trivial combinations of corners, sweeps, and Monte Carlo require OCEAN scripting for setup and post-processing. After a characterization run, designers find it impractical to effectively mine their own data, let alone look at results across a project.

In World 2 designers hand-edit netlists and write scripts. Many of the simulators, including HSPICE, have very limited support for combinations of corners, sweeps, and Monte Carlo, making it very difficult to set up some of the most obvious combinations (e.g., a temperature sweep under a voltage sweep). In this world every designer does things differently, so it is impractical to mine characterization data in any consistent way.
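As a rough illustration of the glue scripting World 2 designers end up writing themselves, here is a sketch that generates one netlist deck per point of a nested voltage/temperature sweep. The deck template and file-naming scheme are hypothetical; real flows depend on the simulator syntax and the team’s conventions:

```python
import itertools
import pathlib

# Illustrative deck template; 'circuit.sp' is a hypothetical include file.
TEMPLATE = """* auto-generated variant deck
.param vdd={vdd}
.temp {temp}
.include 'circuit.sp'
.end
"""

def write_decks(vdds, temps, outdir="decks"):
    """Emit one simulation deck per (vdd, temp) combination."""
    out = pathlib.Path(outdir)
    out.mkdir(exist_ok=True)
    decks = []
    for vdd, temp in itertools.product(vdds, temps):
        path = out / f"run_v{vdd}_t{temp}.sp"
        path.write_text(TEMPLATE.format(vdd=vdd, temp=temp))
        decks.append(path.name)
    return decks

decks = write_decks([0.9, 1.0, 1.1], [-40, 25, 125])
print(len(decks))  # 9 decks, one per (vdd, temp) combination
```

Every team writes some variant of this, each slightly differently, which is exactly why the resulting data is so hard to mine consistently.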

ACE and Beyond
Analog characterization is not rocket science. It requires straightforward tool design and engineering—analytics for analog. It didn’t exist, so BDA developed the Analog Characterization Environment. ACE is an environment where basic units like tests, measurements, and variants (i.e., corners, sweeps, and Monte Carlo runs) are characterization building blocks that are easy to create, easy to combine, and easy to reuse. ACE makes it easy to specify the experiments you want to run. ACE makes it easy to view the results and analyze the data. ACE makes it easy to create regressions. ACE makes it easy for designers, CAD engineers, and third parties to access, use, and add to the characterization data.
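The building-block idea can be sketched in a few lines of Python. To be clear, this is not ACE’s actual API, which BDA has not detailed here; it only illustrates how reusable corner and sweep blocks compose into an enumerable set of variants:

```python
import itertools
from dataclasses import dataclass, field

# Hypothetical sketch of composable characterization building blocks.
# None of these class names reflect ACE's real interface.

@dataclass(frozen=True)
class Corner:
    name: str
    params: tuple  # e.g. (("model", "tt"),)

@dataclass
class Sweep:
    param: str
    values: list

@dataclass
class Plan:
    corners: list = field(default_factory=list)
    sweeps: list = field(default_factory=list)

    def variants(self):
        """Yield one merged parameter dict per corner x sweep point."""
        sweep_points = itertools.product(*(s.values for s in self.sweeps))
        for corner, point in itertools.product(self.corners, sweep_points):
            v = dict(corner.params)
            v.update(zip((s.param for s in self.sweeps), point))
            yield v

plan = Plan(
    corners=[Corner("tt", (("model", "tt"),)),
             Corner("ss", (("model", "ss"),))],
    sweeps=[Sweep("vdd", [0.9, 1.1]), Sweep("temp", [-40, 125])],
)
print(len(list(plan.variants())))  # 2 corners x 4 sweep points = 8 variants
```

The point is that once corners, sweeps, and Monte Carlo runs are first-class objects, combining them is one line rather than a one-off script.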

ACE makes systematic analog characterization practical, but it is only the first step. BDA created ACE with true openness in mind. ACE stores all characterization data in an Open Verification Database (OVD) that is quite literally open to all. All OVD data is in standard formats wherever applicable; where standard formats don’t exist, the data is in XML if intended for tools or plain text if intended for designers. OVD provides a foundation for CAD engineers, designers, and third parties to integrate analog characterization into the rest of their IC design process, from top-down specification and digital verification to configuration management, variation analysis, and circuit optimization tools (e.g., Cadence, MunEDA, and Solido). BDA is already working on a number of extensions and encourages customers and third parties to do likewise.
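Because the data is open and in standard formats, consuming it needs nothing exotic. The sketch below reads a hypothetical XML fragment with Python’s standard library and flags blown specs; the element and attribute names are invented for illustration and are not the actual OVD schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical characterization results; not the real OVD schema.
sample = """<results circuit="pll_core">
  <variant corner="ss" vdd="0.9" temp="125">
    <measure name="jitter_ps" value="2.31" spec_max="3.0"/>
    <measure name="lock_time_us" value="11.8" spec_max="10.0"/>
  </variant>
</results>"""

root = ET.fromstring(sample)
# Collect every measurement that exceeds its spec limit.
failures = [
    (m.get("name"), float(m.get("value")))
    for m in root.iter("measure")
    if float(m.get("value")) > float(m.get("spec_max"))
]
print(failures)  # [('lock_time_us', 11.8)] -- one blown spec found
```

A CAD team could build exactly this kind of mining script, or hand the data to a third-party optimization tool, without any proprietary reader.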

Visit BDA at #50DAC and see the Analog Characterization Environment!
