DAC: Tempus Lunch
by Paul McLellan on 06-06-2013 at 4:03 pm

I had time for lunch on Monday. That is to say, there was a Cadence lunchtime panel session titled “Has Timing Signoff Innovation Become an Oxymoron? What Happened and How Do We Fix It?”

The moderator was Brian Fuller, lately of EE Times but now Editor-in-Chief at Cadence (I’m not sure quite what that means either). On the panel were Dipesh Patel, EVP of Physical IP at ARM; Tom Spyrou, Design Technology Architect at Altera; Richard Trihy, Director of Design Methodology at GlobalFoundries; and Anirudh Devgan of Cadence.

Dipesh started off by saying that at ARM 60% of the design process is spent in the timing closure loop. That’s not too bad for ARM themselves, since any effort there is heavily leveraged, but their partners cannot afford that much.

Tom pointed out that it is harder to get all three of capacity, runtime and accuracy than it used to be. At his previous job at AMD they had one timing scenario that took 750 Gbytes of memory and several days to run.

Richard thought the main issue is variation, and he is worried that he is not seeing very effective solutions. They still hard-code OCV derates and margins for clock jitter and IR drop. But there is not much margin to go around, and it will only get less.
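
To make that margin squeeze concrete, here is a purely illustrative back-of-the-envelope calculation. The derate, margin and delay numbers are invented for the example, not taken from the panel or any real process; it only shows how flat OCV derating and hard-coded jitter and IR-drop margins eat into setup slack.

```python
# Illustrative only: how a flat OCV derate and fixed margins consume setup slack.
# All numbers are invented for the example, not from the panel or any real PDK.

clock_period_ns   = 1.00   # 1 GHz target
data_path_ns      = 0.80   # nominal longest-path delay
setup_time_ns     = 0.05   # capture flop setup time
ocv_late_derate   = 1.08   # flat +8% pessimism on the late data path
clock_jitter_ns   = 0.03   # hard-coded clock-jitter margin
ir_drop_margin_ns = 0.02   # hard-coded IR-drop margin

required = clock_period_ns - setup_time_ns - clock_jitter_ns - ir_drop_margin_ns
arrival  = data_path_ns * ocv_late_derate
slack    = required - arrival

print(f"required: {required:.3f} ns  arrival: {arrival:.3f} ns  slack: {slack:+.3f} ns")
# Without the margins the path has 0.15 ns of slack; with them it has 0.036 ns,
# and any further derate increase pushes it negative -- not much margin to go around.
```
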

Anirudh cheated and said that issue #1 was speed and capacity (and a fanatical dedication to the Pope). #2 was accuracy, and #3 was that fixing the problems found in the closure loop takes too long.

Everyone on the panel, except Anirudh, was presumably a PrimeTime user since, well, there hasn’t been anything else to be a user of until the very recently announced Cadence Tempus product, which was lurking in the background but wasn’t really talked about explicitly on the panel. Indeed, Tom Spyrou, when at Synopsys many years ago, was in charge of PrimeTime development.

Everyone agreed that signoff innovation wasn’t really an oxymoron, since there has been a lot of innovation: current source models, multi-corner multi-mode, parallel processing. But, of course, there needs to be more, because there are still big issues getting designs out. And then there was statistical STA (SSTA), which turned out to be a blind alley after a lot of investment.

Anirudh pointed out that in the commercial space there has only been one product for the last 20 years; anyone else got sued or bought (or both): Motive way back when, ExtremeDA more recently. There has been a lot going on in academia, but they were defocused by SSTA and other non-mainstream things. TSMC has now started to certify timers, so that might open up the competition in the same way as has happened with circuit simulation (FineSim, BDA etc.).

A question was asked about standards. Tom echoed my thoughts, which are that you can only standardize things once the dust settles; in the meantime, non-standard solutions prevail. It is hard for a standards body to say what is standard while the competition over what should be standard is still playing out. Richard still wants to see something to help with OCV methodology, since this is going to get so much worse at 14nm and 10nm.

An engineer from Qualcomm suggested that depending on bigger and faster machines with more memory isn’t really tenable: from an EDA industry perspective, can we look at how computing infrastructure is changing, and is there a push to a more compute-aware paradigm? That was a slow pitch right across the plate, given that Tempus does just that. So Anirudh hit the ball out of the park, pointing out that a single machine with a lot of memory (1TB, say) is very expensive, but lots of memory spread across lots of machines is easy to arrange, with maybe 5000 machines in a server farm. The machines cost much less than the EDA tools. But to work well it needs top-down parallelism (like, er, Tempus), not bottom-up multithreading.
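
For readers who haven’t met the distinction, here is a rough sketch of top-down parallelism: split the design first, then ship whole partitions to separate worker processes or machines, rather than multithreading the inner loops of one big-memory process. It assumes nothing about how Tempus or any real timer is built; the toy design, the partitioning and the per-partition worker are hypothetical placeholders.

```python
# Hypothetical sketch of top-down parallelism for timing analysis.
# Nothing here reflects the internals of Tempus or PrimeTime; it only
# illustrates "partition the design first, then distribute whole partitions".
from concurrent.futures import ProcessPoolExecutor

def time_partition(partition):
    """Stand-in for timing one partition on one worker (here: sum stage delays per path)."""
    return {path: sum(stage_delays) for path, stage_delays in partition.items()}

def top_down_sta(design, n_workers=4):
    """Split the design into partitions up front, then farm them out to workers."""
    paths = list(design.items())
    chunk = max(1, len(paths) // n_workers)
    partitions = [dict(paths[i:i + chunk]) for i in range(0, len(paths), chunk)]
    results = {}
    # In a real farm these would be separate machines, each needing only a
    # fraction of the memory a single 1TB host would need.
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        for partial in pool.map(time_partition, partitions):
            results.update(partial)
    return results

if __name__ == "__main__":
    # Toy "design": path name -> list of stage delays (ns). A real netlist would be
    # partitioned along its timing graph, not by slicing a dictionary.
    design = {f"path_{i}": [0.10, 0.05 * i, 0.20] for i in range(16)}
    print(top_down_sta(design))
```
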

The panel was asked whether there is a conflict of interest when EDA companies treat signoff as a competitive advantage, since that can slow down the innovation process. Central planning hasn’t worked too well in economies, and it seems unlikely to do so in EDA markets. Yes, it is always tempting to see the duplicated effort as waste: what if Tempus didn’t need to build some of the infrastructure because it could just be borrowed from PrimeTime? Not going to happen, and competition drives innovation hard, because EDA is a business where only the #1 and #2 products in any space make serious money.

Designs are getting bigger, processes are getting more complicated, and variation is getting worse. I don’t think that is going to change.
