
Debugging is not free!!

Dear Readers,
Debugging is not free!!

This rings especially true for ASIC engineers contributing to verification. Any test bench must be planned, and planning for debug support is no exception!
Debugging large test benches has changed recently. Test benches are becoming larger and more complex than they used to be. In addition, they now use object-oriented constructs, class-based libraries, and methodologies for verification components. Each of these features adds to the pain of debugging.
There are a couple of major things that should be taken care of while architecting the test bench to reduce the pain of debugging (nobody can eliminate this activity/phase completely):

1. Well-organized, layered test bench architecture

A test bench should be designed with debug in mind! While defining the test bench architecture, engineers should decide how each functionality, feature, and cover point will be covered and organized. A transaction-based architecture makes test benches much easier to maintain, debug, and organize, especially when they are really complex. A layered architecture is one of the major architectural strategies that helps engineers during their debugging phase, as the sketch below illustrates.
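
The sketch below is illustrative only (the class and field names assume a simple I2C-style bus, not any particular test bench): a transaction class plus a driver layer that consumes it.

// A transaction class gives every layer one shared, printable view of
// the stimulus, which is exactly what you want when debugging.
class i2c_transaction;
  rand bit [6:0] addr; // 7-bit slave address
  rand bit [7:0] data; // payload byte
  rand bit       rw;   // 1 = read, 0 = write

  function string convert2string();
    return $sformatf("i2c_transaction addr=0x%0h rw=%0b data=0x%0h",
                     addr, rw, data);
  endfunction
endclass

// The driver layer only consumes transactions; stimulus generation and
// pin wiggling stay separate, so each layer can be debugged in isolation.
class i2c_driver;
  mailbox #(i2c_transaction) req_mb;

  task run();
    i2c_transaction tr;
    forever begin
      req_mb.get(tr);
      $display("[DRV] %s", tr.convert2string());
      // ... drive SDA/SCL pins on the bus interface here ...
    end
  endtask
endclass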

2. Naming conventions, directory structure, class names, and class member variables

Engineers usually do not pay attention to these kinds of small things, but they are very important! Naming conventions, directory structure, and class and member names will not affect any functionality or feature during your verification activity, but they will surely reduce your debugging burden!!
Naming conventions help eliminate mistakes by being consistent and simple to understand, and they make finding things easy. For example, finding all the i2c_scoreboard factory creation calls in a test bench is easy with grep; we can simply run the command below:

grep -rn --include="*.sv" "i2c_scoreboard::type_id::create" .
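
(The search pattern above assumes the UVM factory idiom; adjust it to whatever creation style your test bench uses.) The grep only works because the naming is consistent. A hypothetical sketch of the convention it relies on, where the handle name matches the factory instance name:

import uvm_pkg::*;
`include "uvm_macros.svh"

class i2c_scoreboard extends uvm_scoreboard;
  `uvm_component_utils(i2c_scoreboard)
  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction
endclass

class i2c_env extends uvm_env;
  `uvm_component_utils(i2c_env)
  i2c_scoreboard i2c_scoreboard_h;

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  function void build_phase(uvm_phase phase);
    super.build_phase(phase);
    // Handle name matches the factory instance name: greppable, and
    // immediately recognizable in log messages.
    i2c_scoreboard_h = i2c_scoreboard::type_id::create("i2c_scoreboard_h", this);
  endfunction
endclass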


These things are really helpful when engineers start working on an already designed test bench. Many product companies keep using the same test bench for years!! When new engineers start working on such a complex test bench, they will feel more comfortable while debugging if these conventions are followed. Otherwise, understanding these minor things consumes most of an engineer's time, which is a pain for the organization!!

3. Selecting methodologies while architecting the environment

Methodologies play a major part during the debugging phase. All the methodologies have special features and abilities to help in designing and debugging environments and test benches. The reporting (messaging) systems of UVM/VMM/OVM and other methodologies have many abilities, including verbosity control, message ID filtering, and hierarchical property setting.
There are many messaging features in methodologies, such as dynamic message control. Sometimes debugging should start only after many clocks or after some condition is reached. Once the condition is reached and we want to change the verbosity level, we can do this using a methodology feature. For example, in UVM:

repeat (100000) @(posedge clk);

i2c_agent_h.set_report_verbosity_hier(UVM_HIGH); // e.g., switch this subtree to UVM_HIGH from here on

This way we can raise the verbosity level only when we want it, avoiding huge log files and making debug easy. There are hundreds of other features that help in debugging; please refer to the methodology reference manual for further details.
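
As one more illustration (assuming the standard UVM reporting API; the component path and message ID below are hypothetical), verbosity can also be raised for a single message ID, or controlled from the simulator command line without recompiling:

// Raise verbosity only for messages tagged "I2C_MON", keeping the
// rest of the log quiet.
i2c_agent_h.set_report_id_verbosity_hier("I2C_MON", UVM_DEBUG);

// The same kind of control is available as a simulator plusarg, applied
// here at simulation time 100000 without any recompile:
//   +uvm_set_verbosity=uvm_test_top.env.i2c_agent*,_ALL_,UVM_HIGH,time,100000
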
If we keep these things in mind while designing a test bench, we will surely save debugging time!

-ASIC With Ankit

 
Ankit,

I'm curious about the Verification effort for a 50M gate ASIC, would you say that Verification takes something like 5X the Design effort? Of course it depends on the specific design and the team, I'm just interested in general numbers.

Daniel
 
The same philosophy applies to any 'designed' component (Verilog/VHDL code, SW to run on a product, physical design file structures, design for test or manufacturing, or...).

The cost of debug is not truly understood until a design comes out of manufacturing and does not work. Even worse is when it seems to work until some corner case or reliability issue pops up months later. Debugging once the design is manufactured or sent to customers is much more expensive. Back in the 1970s(?), I believe Motorola did a study on the cost of fixing a problem at various stages of development: in design, after manufacturing, at direct customers', and at end-user customers'. They estimated a 10x multiplier for each stage of a product's life: $1, $10, $100, $1000. So $1 invested upfront could have significant P&L impact. But it is difficult to prove: either you did it right the first time and did not incur the additional costs, or you 'screwed up' and ate the costs to fix it, but by then it is too late....
 
Daniel,

The numbers vary a lot based on the % reuse of design that has seen silicon, % reuse from third parties, and % of new code. Similarly, % reuse of verification effort is also an important factor. In fact, there is a list of items that conspire to decide the figures (total run time of a test is one more). Being in consulting, I see this ratio vary from 1.5X up to 6X easily. I believe deriving a good equation would demand a PhD :)

Thanks & Regards,
Gaurav Jalan
 
Daniel,

Sorry for the late response. As Gaurav mentioned in his earlier response (thanks Gaurav), the numbers vary a lot based on the % reuse of design, % reuse of third-party IPs/VIPs, complexity, and % of new code. I agree with Gaurav that there is a list of items that decide the figures. In my career I have seen this ratio vary from 2X to 5-6X. In my experience there is no standard equation; it all depends on many factors for a particular project, which means the ratio varies project to project based on the items/factors that conspire to decide the figures.

Considering the projects I have worked on so far, it varies from 2X to 5-6X. There are many items to consider when deciding this ratio. A few of them are:
1. Design complexity
2. % of reuse of design
3. % of reuse of third-party IPs/VIPs
4. % of new code
5. Simulation/regression and debugging time (based on the functional and code complexity)
6. Dynamic changes in the requirement specification (how frequently your design specification changes plays a major role)

Thanks & Regards
ASIC With Ankit
 