
John Lee: Market Trends, Raising the Bar on Signoff

by Bernard Murphy on 06-07-2018 at 7:00 am

I talked to John Lee (GM of the ANSYS Semiconductor BU) recently about his views on market trends and the ANSYS big-picture theme for DAC 2018. He set the stage by saying he really liked Wally’s view on trends (see my blog on Wally’s keynote at U2U). John said these confirm what he is seeing – a trend to specialization, some around new applications like autonomous vehicles, some around traditional platforms like mobile and HPC and some around new technologies like 5G.


He noted some other trends, one being what he calls reaggregation, especially moves by pure-play foundries to ramp up in-house ASIC services. Another is system vendors getting close (or close again) to silicon: Cisco acquiring Leaba Semi, Amazon acquiring Annapurna Labs, Bosch building its own foundry, and Facebook and Google clearly being very active, judging by semiconductor design job postings on Indeed.com. In all cases, chip design activity is being driven increasingly by system companies who see more advantage/differentiation in dedicated rather than general-purpose solutions.

In John’s view, these shifts are triggered by hot applications moving to more complex processes and packaging, and increasingly becoming sensitive to system-level design constraints. He cited for example automotive suppliers, traditionally very conservative on process but now moving to 7nm and already starting to engage at 5nm to integrate more complex compute (AI, 5G, imaging, …) onto silicon. For similar reasons, 3D packaging is picking up, in FPGAs, network, mobile and image sensors, all to scale beyond Moore’s law.

With these moves come more challenges, one being the sheer size of the design analysis task. To take one example in John’s domain, at 16nm a typical power grid might be modelled with a billion resistors; at 7nm you now have to handle 10-20 billion, more than an order of magnitude more. The criticality of problems is also rising: FinFETs are driving higher current density into narrower interconnects, and razor-thin voltage margins in these processes amplify risks in timing, yield and reliability.
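To make the resistor count concrete: a power grid is conventionally modelled as a resistor network and the IR drop at each node found by nodal analysis, solving G·v = i for the node voltages. The following is a toy illustration of that principle only (hypothetical two-node chain and values, not an ANSYS algorithm); at 7nm the same linear system has 10-20 billion resistors.

```python
import numpy as np

# Toy power-grid IR-drop sketch: VDD pad -(R1)- node1 -(R2)- node2 (load).
# Nodal analysis: G * v = rhs, where G is the conductance matrix.
R1, R2 = 0.5, 0.5                 # ohms (hypothetical values)
g1, g2 = 1 / R1, 1 / R2
G = np.array([[g1 + g2, -g2],
              [-g2,      g2]])
vdd = 0.7                         # supply voltage at the pad (V)
i_load = np.array([0.0, 0.1])     # 100 mA drawn at node 2
# The pad connection contributes g1*vdd to node 1's right-hand side;
# load currents are subtracted at the nodes where they are drawn.
rhs = np.array([g1 * vdd, 0.0]) - i_load
v = np.linalg.solve(G, rhs)       # node voltages
ir_drop = vdd - v[-1]             # drop seen at the load
print(f"node voltages: {v}, IR drop at load: {ir_drop * 1000:.0f} mV")
```

With 0.1 A through two 0.5 Ω segments, the load node sags 100 mV below the pad; the full-chip problem is this same sparse solve scaled up by ten orders of magnitude.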

John said that at 7nm they’re seeing a breakdown in the standard approach to managing timing and voltage margins, judging by the calls they’ve had from a near-full house of high-end customers. These companies have told them that they are losing important yield to performance problems they thought were covered. They’re finding they need tighter timing accuracy near the margins than traditional variability methods can offer, but Monte Carlo SPICE (MC-SPICE) just can’t handle the volume of paths that need to be checked.
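The volume problem is easy to see in miniature. Monte Carlo timing estimates a path’s delay distribution by sampling per-stage variation many times; the sketch below (hypothetical stage count, delay and sigma, and a crude Gaussian stage model rather than real SPICE) shows one path. Multiply the sample count by millions of near-critical paths and by full transistor-level simulation per sample, and the cost becomes prohibitive.

```python
import random

random.seed(0)

def sample_path_delay(n_stages=20, nominal=10e-12, sigma_frac=0.08):
    # One Monte Carlo trial: sum of per-stage delays, each stage drawn
    # with independent Gaussian variation (a stand-in for SPICE here).
    return sum(random.gauss(nominal, sigma_frac * nominal)
               for _ in range(n_stages))

samples = [sample_path_delay() for _ in range(10_000)]
mean = sum(samples) / len(samples)
worst = max(samples)
print(f"mean {mean * 1e12:.1f} ps, observed worst {worst * 1e12:.1f} ps "
      f"over {len(samples)} trials")
```

Even this toy needs 10,000 trials to start resolving the tail of one path; resolving high-sigma tails accurately requires far more, which is the scaling wall the customers above are hitting.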

Meanwhile, mobile and cryptocurrency applications are pushing the power envelope even harder. In 5G, SerDes frequencies run above 50GHz. At these frequencies noise coupling is no longer simply capacitive; you also have to consider inductance, and you can’t limit analysis to nearest neighbors only. Thermal effects become more important; between thermal stress and higher FinFET drive into thinner interconnect, electromigration (EM) risk goes up. And when you stack die, thermal problems increase, adding potential mechanical problems such as warping between die and on the interposer.

The industry has developed a lot of good tools to attack these different concerns pointwise, for example PTSI for signal integrity, RedHawk for power integrity, Calibre for manufacturability and FX and RedHawk for reliability. However, these tools are siloed, each excellent in its own domain as long as you can rely on margins to model inter-domain variability. That’s like modeling a problem in a box with margins as the sides of the box. You can analyze one problem really well inside that box, but the analysis can’t extend beyond the sides. This works well when the box completely surrounds the problem. But if you have to make the box really big to meet that goal, the design may not be viable: too expensive, too slow, too power-hungry or too unreliable. On the other hand, if you shrink the box to meet product specs, at least some behavior will spill beyond the edges of your analysis; you don’t even look at some realistic behaviors which could lead to failure.

At DAC, ANSYS is going to talk about how to analyze the problem space more completely in all dimensions by removing the box, something they call “beyond signoff”. A reasonable approach must continue to depend on the learning already built into best-in-class tools, which means the solution needs to be open and extensible. It must also manage vast amounts of data from all of these tools to enable the kind of true multi-physics analytics common today in other domains, such as design for aircraft engines. This is only possible if you can leverage best-in-class technologies and computational sciences. And the solution needs to enable designers to innovate beyond the democratized processes, IPs and software platforms that we all use, letting them build on their expertise to differentiate in power, performance, cost and reliability.

To learn more about what ANSYS plans for DAC 2018, click HERE.