I’m keenly interested in SPICE circuit simulators, so at DAC I met with John Pierce from Cadence to get an update on what’s new this year.
John Pierce, Cadence
Q: What’s new this year?
Wait until September, then expect some new product announcements. More new features are coming in January 2014 as well.
Q: How is library characterization going?
We’re bringing Spectre under Liberate, our characterization tool, following the Altos acquisition two years ago. Altos had an established customer base, and TSMC uses the Altos scripts in its reference-flow design kits. Liberate works with any SPICE tool.
We’re also improving memory characterization of SRAMs, where Spectre is available.
A typical library might have 1,000 cells, and you use a characterization tool with Spectre to generate the library models. Maybe 10% of the cells dominate the characterization time; the majority of runs are very short.
Liberate has a patented technology called InsideView that understands the cell, infers its logic function, and knows how to create the input stimulus. For more complex cells there hasn’t been enough automation; InsideView identifies sequential cells like flip-flops, RAMs, etc.
The result is full coverage with good throughput, using just the right amount of stimulus. The output is a .LIB file covering timing, power, noise, and statistical models.
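For readers unfamiliar with the .LIB output format, here is a minimal sketch of a Liberty fragment; the library, cell, and pin names and all numeric values are hypothetical, not from the interview:

```
/* Illustrative Liberty (.lib) fragment -- names and values are hypothetical */
library (my_stdcell_lib) {
  time_unit    : "1ns";
  voltage_unit : "1V";

  cell (INV_X1) {
    area : 1.0;
    pin (A) { direction : input;  capacitance : 0.002; }
    pin (Y) {
      direction : output;
      function  : "!A";
      timing () {
        related_pin  : "A";
        timing_sense : negative_unate;
        /* Delay indexed by input slew and output load */
        cell_rise (delay_template) {
          values ("0.05, 0.08", \
                  "0.07, 0.11");
        }
      }
    }
  }
}
```

Each characterized cell gets entries like this for timing, plus analogous tables for power and noise.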
Q: Anything new with hierarchical SPICE circuit simulation?
Look at what happened to HSIM: isomorphic simulation doesn’t work any more with parasitics, because there isn’t enough matching between sub-circuits. Power gating also doesn’t work so well with Fast SPICE.
Q: Any comments on Berkeley DA and their improved capacity?
Q: What should we expect to see in the next year for circuit simulation at Cadence?
Our direction is capacity, performance, infrastructure, and verification automation, from SPICE through mixed-signal.
Q: How is your support for FinFET devices?
Yes, FinFETs are well supported. We’ve had BSIM-CMG support for FinFETs since 2011, and we’re active in the CMG and other modeling groups.
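As a hedged illustration of what simulating with a BSIM-CMG FinFET model looks like (the model file, device names, and parameter values below are hypothetical), a small Spectre netlist might read:

```
// Illustrative Spectre netlist -- model cards and parameters are hypothetical
simulator lang=spectre
include "finfet_models.scs" section=tt    // hypothetical BSIM-CMG model file

// Simple inverter built from FinFET devices; nfin sets the fin count
Mp out in vdd vdd pfet_cmg l=20n nfin=4
Mn out in 0   0   nfet_cmg l=20n nfin=2

Vdd vdd 0 vsource dc=0.8
Vin in  0 vsource type=pulse val0=0 val1=0.8 period=2n rise=20p fall=20p

tran1 tran stop=4n
```

The key FinFET-specific difference from planar CMOS is that device strength is quantized by the integer fin count (`nfin`) rather than a continuous width.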
Q: Did you hear about G-Analog offering GPU-based library characterization?
Library characterization with GPUs is not that interesting an approach. With 1,000 cells, only about 20% are complex and take most of the CPU effort. We use a distributed approach: rather than running on a single core, we use multi-core and multi-CPU machines and manage all of those jobs, finishing in hours to a day.
The runtime is mostly model evaluation time and netlist load time, so it’s an easy IT decision.
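The distributed job-farm idea can be sketched in Python. This is a simplified model with made-up per-cell runtimes (the 10%-of-cells-dominate pattern from above); a real characterization manager like Liberate handles the scheduling internally, and a thread pool here stands in for a farm of machines:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical per-cell "simulation costs": ~10% complex cells dominate effort
cells = {f"cell_{i}": (10.0 if i % 10 == 0 else 0.1) for i in range(1000)}

def characterize(item):
    name, cost = item
    # Stand-in for launching a SPICE characterization run on one cell;
    # here we just return the cell name and its simulated cost
    return name, cost

def run_farm(cells, workers=8):
    # Longest-job-first ordering keeps the complex cells from straggling
    # at the end of the run and idling most of the workers
    jobs = sorted(cells.items(), key=lambda kv: kv[1], reverse=True)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(characterize, jobs))

if __name__ == "__main__":
    results = run_farm(cells)
    print(f"{len(results)} cells characterized, "
          f"total effort {sum(results.values()):.1f} units")
```

Dispatching the long-running complex cells first is the standard way to keep wall-clock time close to (total effort) / (worker count).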