ARM, NXP Share Usage, Challenges at Synopsys Lunch
by Bernard Murphy on 03-20-2019 at 7:00 am

Synopsys runs an “Industry verifies with Synopsys” lunch at each DVCon, which isn’t as cheesy as the title might suggest. The bulk of the lunch is given over to user presentations on their use of Synopsys tools, which I find informative and quite open, sharing problems as much as successes. This year, Eamonn Quigley, FPGA engineering manager from ARM, and Amol Bhinge, R&D emulation and verification HW director from NXP, shared their experiences.


Eamonn hails from Ireland, where they are great spellers but terrible pronouncers, as I think the saying goes (half of my relatives are from the Cork area); pronouncing his name challenged most of the other speakers (it’s “Aymon” by the way). He talked about providing enterprise-class FPGA-based verification at ARM’s Trondheim, Redhill and Austin facilities. Here FPGA means FPGA prototyping using HAPS.

I’m guessing this isn’t the only enterprise-scale use of FPGA prototyping, but it’s the first I have seen and it’s pretty impressive. We’re getting more familiar with datacenter-based emulation, but this is HAPS prototyping in long aisles of cabinets (I counted at least 12 per side in one image), each with multiple bays of prototyping systems. Looks just like a regular datacenter aisle but without the flashing lights on the cabinets (all the flashing lights are on the systems inside).

The goal of course is to provide global access and resource sharing with resilience (reliability, maintainability), to optimize use of resources, and to provide flexibility in how these systems can be used. The trick in meeting the flexibility goal is to offer configurability within a controlled, limited range of options. This they accomplish through a number of widely used (for them) configurations, from 1 to 16 FPGAs. The most heavily used configuration has 4 FPGAs, with each FPGA connected to the others. They add another S104 system to extend this to 8 FPGAs, which he said covers most needs and can be adapted where necessary. They use these configurations most commonly for CPU debug. For GPU debug they double this up again, allowing for up to 16 FPGAs. Cabling and configurations are designed to support multi-design mode (MDM) to maximize usage at all times.
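To make the idea of a controlled menu of configurations concrete, here is a minimal sketch of how a scheduler might match a job’s FPGA requirement to the smallest standard configuration. This is purely illustrative; the names, sizes-to-use mapping and selection policy are my assumptions, not ARM’s actual tooling.

```python
# Illustrative only: a toy model of matching verification jobs to a fixed
# menu of HAPS prototyping configurations (4, 8 or 16 FPGAs), loosely
# following the talk's description. Names and policy are assumptions,
# not ARM's actual tooling.

STANDARD_CONFIGS = [
    {"name": "quad",  "fpgas": 4,  "typical_use": "CPU debug"},
    {"name": "octal", "fpgas": 8,  "typical_use": "CPU debug, larger designs"},
    {"name": "hex",   "fpgas": 16, "typical_use": "GPU debug"},
]

def pick_config(fpgas_needed: int) -> dict:
    """Return the smallest standard configuration that fits the request."""
    for cfg in STANDARD_CONFIGS:
        if cfg["fpgas"] >= fpgas_needed:
            return cfg
    raise ValueError(f"No standard configuration supports {fpgas_needed} FPGAs")

if __name__ == "__main__":
    for need in (1, 4, 6, 12):
        cfg = pick_config(need)
        print(f"{need} FPGA(s) needed -> {cfg['name']} ({cfg['fpgas']} FPGAs)")
```

The point of restricting users to a short menu like this, rather than allowing arbitrary cabling, is that each configuration can be pre-validated and kept serviceable at datacenter scale.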

Debug on FPGA prototypes is always tricky; after all, they’re designed for speed rather than deep and broad visibility. Eamonn said they find that deep-trace debug works really well if you know what you want to look at, capturing up to 2k signals at 17MHz, whereas global state visibility, running at 100k cycles/hour, works well if you know roughly when you want to look but not where.
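As a back-of-the-envelope comparison (using only the figures quoted in the talk and assuming continuous capture in both modes), the two debug modes differ in cycle throughput by nearly six orders of magnitude, which is why picking the right one for the question at hand matters so much:

```python
# Back-of-the-envelope comparison of the two debug modes described in the
# talk, using only the quoted figures and assuming continuous capture.

deep_trace_hz = 17_000_000                # deep-trace debug runs at 17 MHz
deep_trace_cycles_per_hour = deep_trace_hz * 3600

global_state_cycles_per_hour = 100_000    # global state visibility: ~100k cycles/hour

ratio = deep_trace_cycles_per_hour / global_state_cycles_per_hour
print(f"Deep trace:   {deep_trace_cycles_per_hour:.2e} cycles/hour")
print(f"Global state: {global_state_cycles_per_hour:.2e} cycles/hour")
print(f"Deep trace covers roughly {ratio:,.0f}x more cycles per hour")
```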

Amol (NXP) opened with an interesting stat. Did you know that every new car contains at least 100 NXP products? I didn’t, but it doesn’t sound unreasonable given the level of automation we’re now seeing even in entry-level cars. Rather than talking about specific verification objectives, Amol provided an entertaining and enlightening tour through challenges he still sees in SoC verification.

He kicked off with an interesting statement. Verification tools provide many flavors of coverage, but in his view it is already difficult to address just one type at a reasonable level across multiple domains. He views coverage closure as a long pole for multiple reasons: exclude files for IPs are not as reusable as they should be; it is difficult to deal with tie-offs, constants and parameters (he suggested these need added focus in verification flows); and they’re still struggling to get coverage on IPs.

He made an interesting point: there should be more investment in coverage for IO muxing. I know this is an area already covered as an app by formal tools (in fact he mentioned this area when he discussed formal tools), but I also know that IO muxing architectures can be highly custom, even from design group to design group within a company. I wonder how much effort is required to configure these apps to custom structures. Perhaps so much that many verification groups still resort to simulation-based signoff, in which case coverage metrics would certainly be interesting?

Amol said they worry particularly about false passes: cases where checkers, assertions or VIPs may themselves contain errors or may overlook certain possibilities. He noted they had found parameter errors and tie-off errors which should have been caught but were not. He particularly likes Certitude, Z01X and the VC Formal FTA App for tracking down problems of this nature.

Gate-level verification continues to be important (thanks to automotive, I believe) and they have found defects at this level which escaped RTL verification. A problem here is turn-around time, both in tool run-time (he mentioned that running 44 gate-level test cases took many months) and in debug. He likes shaking out possible bugs earlier, at RTL, and he cited VC Formal FXP as a useful tool in this area. But he still sees a need for more work in tools and methodologies.

Amol wrapped up with a request for more support in performance verification, particularly along targeted paths such as PCIe to DDR or core to DDR. He mentioned a need for more standardization and innovation in this area.

Overall, this was an entertaining and insightful look at what is possible in enterprise-level FPGA prototyping and at where more development is still needed. And a free lunch – what more could you ask for? To watch the event, click HERE.
