Once upon a time, the struggle was to complete verification of semiconductor designs at the gate level. Today, beyond anything imagined then, the struggle is to verify a design with billions of gates at the RTL level, a task that may never complete. These designs are large SoCs with complex architectures and multiple constraints on area, performance, power, testability, synthesis and so on, requiring huge vector sets at both the hardware and software levels that are often still not enough for complete verification. Since verification is a critical requirement, new techniques for verifying designs in different ways evolve year after year, even as SoC size and complexity keep increasing – it’s a vicious cycle!
Since the designs are conceived at the system level, the verification intent must start at the system level, with the hardware abstraction verified at that level. Typically, hardware can be abstracted at the algorithmic, TLM (Transaction Level Model) or RTL level. The algorithmic model is a software model (written in C++ or SystemC) which simulates fastest but carries no timing. TLM models (based on the standard TLM library written in SystemC) can be classified as untimed, loosely timed, approximately timed or cycle accurate, whereas RTL models are fully cycle accurate and synthesizable to actual gates, and are the slowest to simulate. Clearly, design and verification must start at the algorithmic level, which can then be mapped to TLM and on to RTL as appropriate. The state of the art here lies in how accurately a design is mapped across these levels and how equivalence is checked between them.
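To make the top of that abstraction stack concrete, here is a minimal sketch of an untimed algorithmic model in plain C++ (the 4-tap FIR filter and all names are mine for illustration, not taken from the webinar). The same function body could later be wrapped in a SystemC module and refined toward a synthesizable TLM and RTL description.

```cpp
// Untimed algorithmic ("golden") model of a 4-tap FIR filter.
// Illustrative only -- in a real flow the integer types would be replaced
// with bit-accurate datatypes before high-level synthesis.
#include <array>
#include <cstdint>
#include <iostream>
#include <vector>

std::vector<int32_t> fir4(const std::vector<int16_t>& samples)
{
    const std::array<int16_t, 4> coeff = {3, -1, 4, 2}; // arbitrary taps
    std::array<int16_t, 4> taps = {0, 0, 0, 0};         // delay line
    std::vector<int32_t> out;
    out.reserve(samples.size());

    for (int16_t s : samples) {
        // Shift the delay line and insert the new sample.
        for (std::size_t i = taps.size() - 1; i > 0; --i)
            taps[i] = taps[i - 1];
        taps[0] = s;

        // Multiply-accumulate across the taps.
        int32_t acc = 0;
        for (std::size_t i = 0; i < taps.size(); ++i)
            acc += static_cast<int32_t>(taps[i]) * coeff[i];
        out.push_back(acc);
    }
    return out;
}

int main()
{
    std::vector<int16_t> stimulus = {1, 2, 3, 4, 0, 0, 0};
    for (int32_t y : fir4(stimulus))
        std::cout << y << '\n';
    return 0;
}
```

Because there is no notion of a clock here, millions of such transactions can be run in seconds, which is exactly what makes this level attractive as the golden reference.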
Designing and verifying the algorithmic model in C++ or SystemC is very efficient; roughly a month of RTL simulation can be accomplished in less than 10 minutes with an algorithmic model. These models can serve as reference models for the hardware (wrapped in SystemC, or embedded in SystemVerilog within a UVM environment) and be exercised on an ESL (Electronic System Level) platform with a TLM fabric. A TLM model, in turn, simulates ~100x faster than RTL. The synthesizable portion of the TLM, written with bit-accurate datatypes in C++/SystemC, can be transformed into RTL. Keeping the algorithmic model as the golden reference, equivalence between the three models can then be checked. The synthesizable TLM can be verified very effectively and efficiently, including limited performance testing, without any clock and in real time. Coverage can be monitored by using assertions and cover points in the source code, just as in the algorithmic model, and carried forward into RTL. Analysis and profiling tools such as gcov can be used effectively.
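As a hedged illustration of this source-level checking (the saturating adder and all names are invented for the example), the fragment below shows how plain C assertions and simple cover counters can be embedded in the algorithmic model, and how the same source can be compiled with GCC's coverage instrumentation so gcov reports which branches the vectors actually exercised.

```cpp
// Source-level assertions and cover points in an algorithmic model.
// Compile with coverage instrumentation and inspect line/branch coverage:
//   g++ --coverage sat_add.cpp -o sat_add && ./sat_add && gcov sat_add.cpp
#include <cassert>
#include <cstdint>
#include <iostream>

static unsigned cover_overflow = 0;   // "cover point": saturation actually hit
static unsigned cover_negative = 0;   // "cover point": a negative operand seen

int16_t sat_add(int16_t a, int16_t b)
{
    int32_t sum = int32_t(a) + int32_t(b);
    if (a < 0 || b < 0) ++cover_negative;
    if (sum > INT16_MAX) { ++cover_overflow; sum = INT16_MAX; }
    if (sum < INT16_MIN) { ++cover_overflow; sum = INT16_MIN; }

    // Assertion: the result must always fit the 16-bit output range.
    assert(sum >= INT16_MIN && sum <= INT16_MAX);
    return static_cast<int16_t>(sum);
}

int main()
{
    sat_add(30000, 10000);   // exercises the saturation branch
    sat_add(-5, 7);          // exercises the negative-operand cover point
    std::cout << "overflow hits: " << cover_overflow
              << ", negative hits: " << cover_negative << '\n';
    return 0;
}
```

The same assertions and cover intent can then be carried forward as SystemVerilog assertions and covergroups once the design reaches RTL.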
So, what is the platform for doing all of this? I was impressed looking at Calypto’s Catapult, a High Level Design and Verification Platform. The whole process is described at length in an online webinar at the Calypto website. The platform handles a complete SoC with algorithms, control logic, interfaces and protocols, and can target any particular technology of choice. It provides ~10x improvement in verification productivity, with the system-level description synthesized to correct-by-construction RTL and provision to integrate any last-minute code changes efficiently. Practical results with customer designs have shown up to ~18% area saving and ~16x gain in time compared to hand-coded designs. The platform is complemented by Catapult LP, which provides closed-loop PPA exploration where different solutions can be explored and evaluated by changing the constraints, not the source code. Power-saving techniques such as clock gating are applied very efficiently.
Once an optimized RTL is synthesized, it’s essential to verify its correctness. The Catapult platform provides different ways of verifying the RTL. The SCVerify capability automatically generates the test infrastructure, with SystemC transactors communicating with the RTL, which can be simulated with industry-standard simulators such as VCS, Incisive or Questa. The results of the synthesizable TLM reference model can then be compared against those obtained from RTL co-simulation.
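The comparison itself is conceptually simple: the same stimulus drives the golden model and the RTL, and the responses are checked transaction by transaction. The sketch below is my own minimal stand-in for that idea (it is not SCVerify-generated code); it assumes the RTL co-simulation has dumped its results, one per line, to a file whose name and format are invented for the example.

```cpp
// Minimal self-checking comparison in the spirit of an SCVerify-style harness:
// identical stimulus goes to the golden C++ model and to the RTL, and the
// responses are compared transaction by transaction.  The RTL results are
// assumed to have been written to "rtl_out.txt" by the RTL co-simulation.
#include <cstdint>
#include <fstream>
#include <iostream>
#include <utility>
#include <vector>

// Golden reference: the algorithmic model (trivial stand-in here).
int32_t golden(int32_t a, int32_t b) { return a + b; }

int main()
{
    std::vector<std::pair<int32_t, int32_t>> stimulus = {
        {1, 2}, {100, -7}, {-50, 50}};

    std::ifstream rtl("rtl_out.txt");   // one RTL result per line
    int mismatches = 0;

    for (const auto& [a, b] : stimulus) {
        int32_t expected = golden(a, b);
        int32_t observed = 0;
        if (!(rtl >> observed)) { std::cerr << "RTL trace too short\n"; return 1; }
        if (observed != expected) {
            std::cerr << "MISMATCH: a=" << a << " b=" << b
                      << " expected=" << expected << " got=" << observed << '\n';
            ++mismatches;
        }
    }
    std::cout << (mismatches ? "FAIL\n" : "PASS\n");
    return mismatches ? 1 : 0;
}
```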
Another way is to test all the models in the popular UVM environment. As shown in the image above, the synthesizable TLM model can be placed under test, with the agents providing conversion between the different levels of abstraction. After the TLM model has been tested, it can be swapped with the RTL implementation (shown at the top right in the image), which can then be tested with the same test vectors.
A powerful and unique capability of the Catapult platform is SLEC (Sequential Logic Equivalence Checking), which can check equivalence between an ESL model (algorithmic or TLM) and an RTL model. This high-level formal verification unlocks the real potential of ESL, allowing the fastest simulations at the algorithmic and TLM levels without risking any design inconsistency.
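The property being checked is that the two representations produce identical output sequences for every input sequence, once latency is accounted for. SLEC proves this formally; the sketch below only illustrates the property by simulation on random vectors, using an invented untimed model and an invented one-cycle-latency stand-in for its RTL counterpart.

```cpp
// Simulation-only illustration of the property SLEC proves formally:
// both representations must produce identical output sequences for every
// input sequence (here, after aligning for a one-cycle latency).
// The models and names are hypothetical.
#include <cstdint>
#include <iostream>
#include <random>
#include <vector>

// Untimed algorithmic model: output appears "immediately".
int32_t algo_model(int32_t x) { return 3 * x + 1; }

// Cycle-based stand-in for the RTL: same function, but registered (1-cycle delay).
struct CycleModel {
    int32_t reg = 0;
    int32_t step(int32_t x) { int32_t out = reg; reg = 3 * x + 1; return out; }
};

int main()
{
    std::mt19937 rng(42);
    std::uniform_int_distribution<int32_t> dist(-1000, 1000);

    CycleModel rtl_like;
    std::vector<int32_t> algo_out, rtl_out;

    for (int i = 0; i < 1000; ++i) {
        int32_t x = dist(rng);
        algo_out.push_back(algo_model(x));
        rtl_out.push_back(rtl_like.step(x));
    }
    rtl_out.push_back(rtl_like.step(0));   // flush the final registered value

    // Compare with the one-cycle latency offset.
    bool equal = true;
    for (std::size_t i = 0; i < algo_out.size(); ++i)
        if (algo_out[i] != rtl_out[i + 1]) { equal = false; break; }

    std::cout << (equal ? "sequences match\n" : "sequences diverge\n");
    return equal ? 0 : 1;
}
```

The point of the formal check, of course, is that it covers all input sequences, not just the random sample a simulation can afford.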
With directed tests, constrained-random runs, FSM reset transitions and stall tests, close to 100% verification coverage can be targeted; some unreachable points may need to be waived.
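For completeness, here is a small sketch of what mixing directed corner cases with constrained-random stimulus might look like at the C++ level; the transaction fields and the constraint (aligned addresses below 0x1000, burst lengths of 1 to 16) are invented purely for illustration.

```cpp
// Hedged sketch: directed vectors (reset, stall corners) plus
// constrained-random transactions.  Fields and constraints are illustrative.
#include <cstdint>
#include <iostream>
#include <random>
#include <vector>

struct Txn { uint32_t addr; uint8_t burst_len; bool is_reset; bool is_stall; };

int main()
{
    std::vector<Txn> tests = {
        {0x0, 0, true,  false},   // directed: reset transition
        {0x0, 1, false, true},    // directed: stall during a transfer
    };

    std::mt19937 rng(7);
    std::uniform_int_distribution<uint32_t> addr_dist(0, 0xFFF);
    std::uniform_int_distribution<int> len_dist(1, 16);

    for (int i = 0; i < 100; ++i) {
        Txn t{};
        t.addr = addr_dist(rng) & ~0x3u;                    // 4-byte aligned
        t.burst_len = static_cast<uint8_t>(len_dist(rng));  // 1..16 beats
        tests.push_back(t);
    }
    std::cout << "generated " << tests.size() << " transactions\n";
    return 0;
}
```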
The design can be re-synthesized to different performance points and/or target technologies. Rich Toone of Calypto presents all of this in great detail; the webinar, How to Maximize the Verification Benefit of High Level Synthesis, is well worth going through.