
Benchmarks Anyone?


One of the problems with developing simulators and other EDA tools is the lack of good benchmarks and industrial-scale sample code. So I thought I'd start a thread for people to post links to any useful Verilog-AMS, SPICE, Verilog, VHDL, and related tool data (e.g. run times) that can be used by folks working on new tools.
Great idea, now if we can only get the EDA vendors to change their license agreements that forbid publishing benchmark data publicly. You might be able to skate around the issue by publishing anonymous results if users care to email you directly.

I'd love to see some SPICE benchmarks run on the same HW and OS.

One challenge with SPICE tools is that they have options that trade speed against accuracy, which can produce quite different run times.
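As an illustration of that speed/accuracy trade-off, here is a minimal sketch (assuming ngspice-style `.options reltol`/`abstol` names; other simulators spell these differently) that writes out two variants of the same toy RC deck with loose and tight tolerances, so each can be timed separately:

```python
# Generate variants of one benchmark deck with different tolerance settings.
# The netlist and option names follow ngspice conventions; adjust for your tool.
TEMPLATE = """* RC step response benchmark
V1 in 0 PULSE(0 1 0 1n 1n 1u 2u)
R1 in out 1k
C1 out 0 1p
.tran 0.1n 10u
.options reltol={reltol} abstol={abstol}
.end
"""

settings = [
    {"reltol": "1e-3", "abstol": "1e-12"},  # loose: faster, less accurate
    {"reltol": "1e-6", "abstol": "1e-15"},  # tight: slower, more accurate
]

for s in settings:
    path = f"bench_reltol_{s['reltol']}.sp"
    with open(path, "w") as f:
        f.write(TEMPLATE.format(**s))
    print("wrote", path)
```

Running each generated deck through the same simulator on the same machine makes the option-induced run-time spread visible before comparing across tools.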

Another idea is to find a friendly EDA user who would install and run your tool then provide you metrics privately.
To do this, we need to define the following:
a) A set of standard designs (e.g. PLLs, DS ADC, RF LNA+Mixer etc ...)
b) A set of defined test bench and measurement results (to compare accuracy)
c) A generic process model (BSIM4, PSP, etc ...)

Ideally, the simulation takes 1-5 hours to run (preferably a single simulation rather than sweeps, though sweeps will test other aspects of the simulator, e.g. load times and how it reuses previous sim data).

This set of "designs + test benches" can then be published here for users to download, and we can all publish our results. I guess we need to compare accuracy, simulation time, and the hardware we are using. We can further compare single-thread/core executions vs multi-thread/core runs as well.
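To make published numbers comparable, the timing and machine info could be captured with a small wrapper. A sketch assuming a hypothetical command-line simulator (the command and any flags are placeholders, not a real tool's CLI):

```python
import json
import os
import platform
import subprocess
import sys
import time

def run_benchmark(command, label, threads=1):
    """Time one simulator invocation and tag it with machine info.
    `command` is whatever launches your simulator on a given deck."""
    start = time.perf_counter()
    subprocess.run(command, check=True)
    return {
        "label": label,
        "threads": threads,
        "wall_seconds": round(time.perf_counter() - start, 3),
        "host_arch": platform.machine(),
        "cores_available": os.cpu_count(),
    }

# Placeholder command (a no-op Python call) standing in for something
# like ["mysim", "-mt", "4", "pll_tb.sp"] -- tool name and flags are
# hypothetical; substitute your own simulator and deck.
results = [
    run_benchmark([sys.executable, "-c", "pass"], "pll_tb single-thread", threads=1),
    run_benchmark([sys.executable, "-c", "pass"], "pll_tb multi-thread", threads=4),
]
print(json.dumps(results, indent=2))
```

Posting the JSON alongside the deck would let others rerun the same cases on their own hardware and compare like for like.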

@Daniel, does this come under benchmarking? Surely we are allowed to publish our own run times?

Also, I think the smaller simulator providers will be very keen on this idea (maybe I should charge a consultancy fee!) ... one that comes to mind is BDA, as they claim to be super fast and super accurate compared to Cadence's and Mentor's offerings.
Sounds like an acceptable plan for your own simulator, but not for a commercial simulator, because commercial simulator license agreements strictly prohibit the publication of comparative results. Over at DeepChip they sometimes publish comparative results under anonymous names to avoid scrutiny and legal action.

I remember in the early days of logic synthesis there was a set of benchmark designs that vendors would add to their regression suites to see how they fared against their previous release and against competitors.
I'll have to look up our licence agreements ... any EDA vendor who is "scared" of this challenge is just a chicken (reminds me of Carlos in the movie Hop).
It is not easy to get that data. Every EDA vendor has it, but of course they will not share it with you. Only when you engage with customers might you get such info, as they may tell you whether you are faster or slower, more or less accurate, more or less easy to use, etc.

I was at Mentor and Synopsys, and today I run my own company: I can tell you I saw that data many times, but of course I cannot share it, as I signed a non-disclosure agreement. Hard life for developers, but I'm sure you have some good friends using commercial tools who can help. Another place I would look is universities.
Thanks for sharing that site with Verilog simulator comparisons. More data is always better, and it can save someone the time and trouble of comparing so many different choices.
I was just wondering if anyone has some SystemC benchmarks, or just a large code example with poor performance?