With High-Level Synthesis (HLS), the first benefit that comes to my mind is reduced design time, because coding in C or SystemC is more efficient than writing low-level RTL. What I’ve just learned is that there’s another benefit: a reduction in the amount of functional simulation required. One HLS customer measured this reduction by comparing regression runs: with an RTL verification methodology the regressions took 1,000 servers running for 3 months, while with an HLS verification approach the same regressions completed on 12 servers in just 1 week. Now that’s a huge difference.
A verification engineer can write tests that give 100% functional and structural coverage on synthesizable SystemC or C++, but when you run those same tests on the generated RTL the functional coverage stays at 100% while the structural coverage drops to only about 80%. This coverage gap is understandable, because the HLS tool is automatically creating RTL that is:
- Adding states to a Finite State Machine (FSM)
- Adding stall states
- Adding pipeline ramp-up and ramp-down states
- Using one-hot encoded logic
- Structuring logic in ways that lower reported coverage
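To see why pipelining alone adds states, here is a minimal behavioral sketch (my own illustration, not actual Catapult output) of a 3-stage pipeline: before the pipe fills (ramp-up) and after input stops (ramp-down), the control FSM passes through extra cycles that C-level tests never see as distinct states.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Behavioral model of a 3-stage pipeline: each input takes STAGES
// cycles to reach the output, so processing N inputs costs
// N + STAGES - 1 cycles. Those extra STAGES - 1 cycles correspond to
// the ramp-up/ramp-down states the HLS tool adds to the RTL FSM.
constexpr std::size_t STAGES = 3;

std::size_t cycles_to_process(const std::vector<int>& inputs) {
    std::size_t cycles = 0;
    std::size_t produced = 0;
    // Count cycles until every input has flushed through the pipe.
    while (produced < inputs.size()) {
        ++cycles;
        if (cycles >= STAGES) {  // first output appears at cycle STAGES
            ++produced;
        }
    }
    return cycles;
}
```

For 8 inputs the model takes 10 cycles rather than 8; those 2 extra cycles exist only in the RTL schedule, which is why RTL structural coverage lags the C++ numbers.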
Thankfully for us, the engineers at Calypto have figured out a methodology for producing verification-optimized RTL code. At DVCon this month they presented a paper, and then turned it into a White Paper titled “Closing Functional and Structural Coverage on RTL Generated by High-Level Synthesis“.
The verification flow for testing RTL generated by HLS is shown below in three iterative loops:
In stage 1 you run directed and constrained-random tests until the C coverage tool shows 100% function, line, branch and condition coverage. After stage 1 you run HLS and generate the RTL code, so in stage 2 the RTL code is ready for functional verification. At the end of stage 2 you add reset and stall tests. Stage 3 is where RTL structural verification takes place.
Stall and Reset Coverage
During HLS additional states are added, so we need to identify these states and target tests at them to improve our stall coverage numbers. In the Catapult HLS tool a small state machine is added per IO cell, and this logic is collected into a “staller” block which disables the flip-flops and the FSM in its thread. You target tests at each IO separately to reach the 100% structural coverage goal.
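As a rough behavioral picture (my own sketch, not the actual Catapult staller), the staller gates every register update in its thread, so a stalled cycle must leave all state unchanged. That "hold" behavior is exactly what the per-IO stall tests have to exercise.

```cpp
#include <cassert>

// Hypothetical behavioral model of a stalled thread: a counter FSM
// that only advances when the staller releases it. A stalled cycle
// leaves the state unchanged, which is the behavior a stall test
// needs to hit for each IO separately.
struct StalledCounter {
    int state = 0;
    void clock(bool stall) {
        if (stall) return;        // staller disables the FSM this cycle
        state = (state + 1) % 4;  // normal FSM transition
    }
};
```

A stall test drives `clock(true)` at each point in the schedule and checks that no register moved, which covers the staller's added states.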
For reset coverage your tests will place the FSM into every state and then assert reset.
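A reset test in that spirit might look like the following (a hypothetical FSM model, just to show the shape of the test): drive the FSM into each state, assert reset, and check that it returns to idle.

```cpp
#include <cassert>

// Hypothetical 4-state FSM with a synchronous reset, used to show
// the shape of a reset-coverage test: reach each state, then reset.
struct SimpleFsm {
    int state = 0;  // 0 is the idle/reset state
    void clock(bool reset) {
        if (reset) {
            state = 0;  // synchronous reset back to idle
            return;
        }
        state = (state + 1) % 4;
    }
};

// Place the FSM into every state, assert reset, verify it lands in idle.
bool reset_covered_from_every_state() {
    for (int target = 0; target < 4; ++target) {
        SimpleFsm fsm;
        for (int i = 0; i < target; ++i) fsm.clock(false);  // reach state
        fsm.clock(true);                                    // assert reset
        if (fsm.state != 0) return false;                   // must be idle
    }
    return true;
}
```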
It’s likely that a low structural coverage number is being caused by unreachable code, so you can use a formal tool to run an unreachability check. Mentor has a formal tool called Questa CoverCheck and it typically proves that about 50% of uncovered lines are not reachable.
This formal approach still leaves the remaining lines to be manually analyzed, tracing the logic back to prove that their conditions are unreachable. These unreachable conditions are caused by redundancies in the source code, which can be traced back to:
- Writing loops
- Conditional statements
- A condition inside of a loop
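As a hypothetical illustration of the third case, a condition inside a fully unrolled loop can become constant in every copy, so the RTL generated for one of its branches is dead logic that no test can ever reach:

```cpp
#include <cassert>

// Hypothetical example: once HLS fully unrolls this loop, the
// condition (i < 8) is a compile-time constant in every iteration,
// so the RTL for the else-branch can never be exercised. This is
// the kind of unreachable bin a formal tool like CoverCheck proves
// dead so it can be excluded from the coverage goal.
int sum_first_eight(const int (&data)[8]) {
    int sum = 0;
    for (int i = 0; i < 8; ++i) {
        if (i < 8) {         // always true inside this loop
            sum += data[i];
        } else {
            sum -= data[i];  // redundant: unreachable in C++ and in RTL
        }
    }
    return sum;
}
```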
An example design was selected: a Discrete Cosine Transform (DCT) used in the decoder for a High Efficiency Video Codec (HEVC). The design was just 490 lines of C++ code, and 100% functional and structural coverage was achieved at the C++ level.
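The paper's 490-line design isn't reproduced here, but the core of a DCT is small. Here is a minimal 8-point 1-D DCT-II in plain C++ (my own sketch, not the HEVC decoder code, which would use fixed-point integer arithmetic) just to show the kind of loop structure being synthesized:

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Minimal unnormalized 8-point 1-D DCT-II:
//   X[k] = sum_n x[n] * cos(pi/N * (n + 0.5) * k)
// A real HEVC decoder uses fixed-point integer transforms; this
// floating-point version is only an illustrative sketch.
constexpr std::size_t N = 8;

std::array<double, N> dct8(const std::array<double, N>& x) {
    std::array<double, N> X{};
    const double pi = std::acos(-1.0);
    for (std::size_t k = 0; k < N; ++k) {
        double acc = 0.0;
        for (std::size_t n = 0; n < N; ++n) {
            acc += x[n] * std::cos(pi / N * (n + 0.5) * static_cast<double>(k));
        }
        X[k] = acc;
    }
    return X;
}
```

A quick sanity check: a constant input produces only a DC term, since the non-DC cosine basis vectors sum to zero.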
The Catapult tool synthesized the C++ code into 15,735 lines of Verilog. Applying the C++ stimulus to the RTL showed about 94% coverage (279 holes). The library components from Catapult are excluded from the coverage data by using a “gray box”, leaving only 153 holes. CoverCheck was run to find unreachable coverage bins, reducing our coverage holes to just 107, about 98% coverage.
Stall and reset testing was next, leaving us with just 25 coverage holes, at 99.5% coverage.
In the end, the last 25 coverage holes were manually waived.
HLS is an established methodology for modern SoC design, and achieving high functional and structural coverage at the RTL level is now possible by following a set of high-level modeling coding guidelines. Formal tools are used in this new methodology to identify unreachable lines. The approach presented has the potential to save weeks to months on your next SoC schedule, so it's worth checking out.