Cadence Selected to Support Major DARPA Program
by Bernard Murphy on 07-26-2018 at 7:00 am

When DARPA plans programs, they’re known for going big – really big – which is what they’re doing again with their Electronics Resurgence Initiative (ERI). Abstracting from their intro, this is a program “to ensure far-reaching improvements in electronics performance well beyond the limits of traditional scaling”. This isn’t just about semiconductor processes. They want to redefine the way we architect, design and implement, along with the foundations of design, pushing ideas beyond the timeframes that industry will normally consider (they’re looking at 2025-2030 horizons).


In architecture they have two programs: software-defined hardware (SDH – runtime-reconfigurable hardware) and domain-specific system on chip (DSSoC – mixing general-purpose and application-specific processors, accelerators, etc.). In design they have two programs: intelligent design of electronic assets (IDEA – a no-human-in-the-loop layout generator that runs within 24 hours) and Posh open source hardware (POSH – hardware assurance technology for signoff-quality validation of open-source mixed-signal SoCs). And finally, in materials and integration: three-dimensional SoC (3DSoC – enabling a >50X improvement in SoC digital performance at power) and foundations required for novel compute (FRANC – proofs of principle for beyond-von-Neumann compute architectures).

DARPA held a summit in San Francisco, 23-25 July, to launch the initiative and announce some of the winning proposals, including a joint proposal from Cadence, NVIDIA and CMU. I talked with Dr. David White (Senior Group Director of R&D at Cadence), who will be PI for the Cadence part of the program, which Cadence calls MAGESTIC. David has a strong background in both AI and design. He completed his doctorate in EE/CS at MIT on characterizing semiconductor wafer states using machine learning (ML) and other methods. He went on to co-found the DFM company Praesagus, which was later acquired by Cadence, and for the last ~10 years he has been running Virtuoso EAD; he is also lead for the Cadence ML task force.

Unsurprisingly, Virtuoso has been leveraging ML for quite a while, so they’re not coming into this cold. And since they’re partnered with NVIDIA and CMU, this is a heavy-hitting team. David says they’ll start with analog. That’s pretty clear – they already have product on which to experiment and build. But remember the goal is ambitious – no human in the loop to generate layout – so this will take a bit more than polishing.

Interestingly, they are including intelligent PCB place and route in their goals. In placement, they will use deep learning to evaluate the possible design space and placements, select an optimal set based on analytics and previous learning, run the placement and feed metrics back to the learning engine. They’ll do a similar thing in routing, again feeding the fitness of the result back to the DL engine.
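To make that loop concrete, here’s a minimal sketch of the kind of feedback cycle described above: propose candidate placements, rank them with a learned surrogate, run the best one, and feed the measured metrics back so the surrogate improves. Everything in it (the knob names, the surrogate, the run_placement stand-in) is hypothetical illustration, not the Cadence/NVIDIA implementation.

```python
# Hypothetical sketch of an ML-guided placement loop: sample candidates,
# rank them with a learned surrogate, run the best, feed metrics back.
import random
from dataclasses import dataclass, field

@dataclass
class Candidate:
    knobs: dict                      # placement parameters, e.g. effort level
    predicted_cost: float = 0.0

@dataclass
class SurrogateModel:
    """Stand-in for the deep-learning engine: averages observed costs for
    previously seen settings, falls back to a prior otherwise."""
    history: list = field(default_factory=list)   # (knobs, measured_cost)

    def predict(self, knobs):
        seen = [c for k, c in self.history if k["effort"] == knobs["effort"]]
        return sum(seen) / len(seen) if seen else 1.0   # neutral prior

    def update(self, knobs, measured_cost):
        self.history.append((knobs, measured_cost))

def run_placement(knobs):
    """Hypothetical placement run; returns a measured cost (wirelength, DRCs...)."""
    return random.uniform(0.5, 1.5) / knobs["effort"]

def placement_loop(model, iterations=5, candidates_per_iter=8):
    for _ in range(iterations):
        # 1. sample the design space
        pool = [Candidate({"effort": random.choice([1, 2, 4])})
                for _ in range(candidates_per_iter)]
        # 2. rank candidates with the learned model
        for c in pool:
            c.predicted_cost = model.predict(c.knobs)
        best = min(pool, key=lambda c: c.predicted_cost)
        # 3. run the chosen placement and 4. feed the result back
        measured = run_placement(best.knobs)
        model.update(best.knobs, measured)
        print(f"chose effort={best.knobs['effort']}, measured cost={measured:.3f}")

placement_loop(SurrogateModel())
```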


David expanded further on the expected flow for custom IC design, I would guess because the foundations of this are already clear through their Virtuoso EAD and advanced-node place-and-route capabilities. The key challenge here is to address uncertainty in design intent: how can you remove the human from the loop if what the human wants isn’t clear? We know what we want, but in a general and not fully-specified sense, and we expect it to adapt as implementation progresses. This is where ML combined with analytics has promise: to capture implicit intent and best practices based not only on what is explicitly known but also on what can be observed from legacy designs.

He illustrated their approach with a custom layout example. Locally a designer can run fast extraction and electrically-aware assistance, fast RC analysis, static EM analysis and so on. From this they can switch to a more intensive electrically-driven optimization where they can explore design alternatives aligned with intent (captured as design constraints), each of which is graded using cost functions. All of this is of course massively parallelized (server farms, clouds, etc.) to get quick turnaround. This whole subsystem interacts with an intelligent-tools subsystem for ML, analytics and optimization, with the intelligent tools both observing the outcome of analyses and optimizations at the designer level and feeding back recommendations and refinements. Obviously this flow still has a human in the loop, but you could imagine that through learning, refinement and new capabilities, the need for that human could be minimized or even eliminated in some cases.
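As a rough picture of that grading step (with made-up metrics and weights, not anything taken from Virtuoso), each alternative could be analyzed, scored against the captured constraints with a weighted cost function, and farmed out across workers in parallel:

```python
# Illustrative only: grade design alternatives against design-intent constraints
# with a weighted cost function, evaluated in parallel across processes.
from concurrent.futures import ProcessPoolExecutor

CONSTRAINTS = {"max_ir_drop_mv": 50, "max_em_violations": 0, "max_area_um2": 1.0e4}
WEIGHTS = {"ir_drop_mv": 1.0, "em_violations": 100.0, "area_um2": 0.01}

def analyze(alternative):
    """Hypothetical fast extraction / RC / EM analysis of one alternative."""
    return {
        "ir_drop_mv": 40 + 5 * alternative,        # placeholder metrics
        "em_violations": alternative % 2,
        "area_um2": 9.0e3 + 200 * alternative,
    }

def cost(metrics):
    """Weighted cost, with heavy penalties for constraint violations."""
    c = sum(WEIGHTS[k] * metrics[k] for k in WEIGHTS)
    if metrics["ir_drop_mv"] > CONSTRAINTS["max_ir_drop_mv"]:
        c += 1e6
    if metrics["em_violations"] > CONSTRAINTS["max_em_violations"]:
        c += 1e6
    if metrics["area_um2"] > CONSTRAINTS["max_area_um2"]:
        c += 1e6
    return c

def grade(alternative):
    return alternative, cost(analyze(alternative))

if __name__ == "__main__":
    alternatives = range(8)                        # candidate design alternatives
    with ProcessPoolExecutor() as pool:            # stand-in for server farm / cloud
        graded = list(pool.map(grade, alternatives))
    best, best_cost = min(graded, key=lambda t: t[1])
    print(f"best alternative: {best} (cost {best_cost:.1f})")
```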

We wrapped up with a couple of questions that occurred to me. How do you bootstrap this system? David said that this is a common challenge in such systems; the standard approach is to start with baseline models, while allowing those models to adapt as they learn. Does he expect that system behaviors will diverge when applied to different applications? Yes, certainly. Baseline models won’t change, but tools should tailor themselves to provide optimal results for a target application. Which raises an interesting point they may consider – might the tool be able to optimize across multiple target applications, to build a product to serve multiple markets?
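One toy way to picture his bootstrapping answer: a fixed baseline model supplies the starting behavior, and a lightweight per-application correction adapts as measured results arrive. The classes below are purely hypothetical, just to show the shape of the idea, not how MAGESTIC is actually built.

```python
# Toy illustration: a fixed baseline model plus a per-application correction
# that adapts from feedback, so behavior diverges per target application.
class BaselineModel:
    """Fixed prior; never retrained."""
    def predict(self, features):
        return 1.0                      # neutral prior estimate of normalized cost

class ApplicationTunedModel:
    """Wraps the baseline and learns a per-application bias from feedback."""
    def __init__(self, baseline, learning_rate=0.1):
        self.baseline = baseline
        self.bias = 0.0
        self.lr = learning_rate

    def predict(self, features):
        return self.baseline.predict(features) + self.bias

    def observe(self, features, measured):
        # nudge the correction toward the observed error
        self.bias += self.lr * (measured - self.predict(features))

model = ApplicationTunedModel(BaselineModel())
for measured in (1.4, 1.5, 1.45):       # feedback from one target application
    model.observe(None, measured)
print(round(model.predict(None), 2))    # has drifted upward from the 1.0 baseline
```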

Kudos to Cadence for landing a role on this ambitious initiative. I’m sure the rest of us will also benefit over time from the innovations they and other partners will drive. You can learn more about the DARPA initiative HERE and Cadence’s MAGESTIC program HERE.
