Is there a comparison on the EDA tools and their effectiveness?

Hey skmurphy, I like your support of the idea of a benchmark. I've done more than a few in my day, and I do not yet see a reason why it can't be done, including working around the limitations and restrictions that seem, at least to me, arbitrary. Thanks for your inputs here, very useful.
 
Gabe Moretti

Because when a company purchases a license for a competitive tool from any vendor but a struggling start-up, that company signs a non-disclosure agreement that prohibits the kind of comparison you want. The "hype" is defeated by learned evaluation, not by some report whose value is lost after less than a quarter. Moreover, a tool's "effectiveness" depends in a significant way on the ease of integration into a specific flow, which varies from company to company. Your metric, "Degree of Effectiveness", is relative to the company using the tool, the skill of the engineers working with it, and the nature of the specific design. One can prove practically anything with a benchmark; you "just" have to use the right design. Let's not be lazy; let's do the work of evaluating within the environment and with the type of designs the tool will be used in. And, by the way, at 20 nm and below the results depend on the foundry used as well.


Good inputs Gabe, we'll look to incorporate what you've mentioned here. Thanks bud.
 
Jeremy Birch
The problem also varies by tool type. For simulators and other analysis tools there are standard inputs, commands, and a knowable golden result. For synthesis, place and route, and other heuristic tools, the result may differ widely due to a small difference in the input data, the tool invocation varies widely, and there is generally no knowable golden result. For these types of tool it is harder to know which is best for your type of design, and it is also not easy to identify metrics by which to measure better or worse tools. For instance, we might assume that all P&R results need to be DRC and LVS clean (although the benchmark data may not allow this to be achieved), but what do you measure after that? A result with more wire and more vias might be bad (longer delays) or might be good (higher yield, lower coupling, etc.). Determining the best result then depends on analysis tools, which might themselves vary in their analysis.

Tools which excel at large square designs with lots of metals may be truly awful at long thin designs with 2 or 3 metals, and vice versa. Producing meaningful benchmarks that help a wide variety of customers would be pretty tricky.


Jeremy, I think you are right about this; there is going to be a segmented and tiered approach to the benchmarking effort. Couldn't agree more - thanks for sharing this.
 
Brian Durwood

Hi Daniel, you may not like this, but for HLL "it ain't the tool". Many of the high-end tools use similar "guts", e.g. LLVM, SUIF, or the like, within their compilers. The biggest variable in QoR (quality of results) is how clever the SW developer or HW engineer is at refactoring code into coarse-grained logic that the compiler can parallelize into multiple streaming processes. From there the process is highly iterative as you try out different configurations and test.

Kind of like the Verilog vs. VHDL debate. The tools are mostly competent. Which you prefer is largely style. Though I admit there is overselling. I'll be interested to follow the dialog and see what others think.
 
Brian Durwood
...
Kind of like the Verilog vs. VHDL debate. The tools are mostly competent. Which you prefer is largely style. Though I admit there is overselling. I'll be interested to follow the dialog and see what others think.

I would say barely competent: they've been around since the '80s and incorporate no support for power management or abstractions above RTL. It raises the question: does it matter how fast your simulator is if the results are useless?
 
C. David Stahl
I've found it somewhat difficult for FPGA applications. For example, in one recent design I found that Synplify inferred more BRAM than XST did. Of course that doesn't mean anything -- the tools know the BRAM is there and will make use of available resources.
 
Anatolij Sergiyenko
The effectiveness of the EDA tool depends not only on the FPGA architecture but even on the FPGA generation, because the vendor provides a new tool for each new FPGA generation. In my experience Synplify gives better results for some Virtex generations, and XST does for others. And these results depend even more on the constraint set selected.
 
Howard Cowles
This should be an interesting discussion. I have been an Allegro user since it was Valid Allegro, I think version 4. I still have a "what's new in version 5" handout at home in a box somewhere. I used Sci-Cards before Allegro. The other designers at that company did not like the decision to transition to Allegro, and they did everything they could think of to keep using the old system for our designs until the manager gave all the workstations away to a college, which forced us to use Allegro.

I think every tool has its merits. Ease of use comes with time and with each designer's ability to develop their own way of using the software to complete their designs. Most designs are different in some way, and many require that we learn a new way to do something to get the job done quickly and correctly.

I like that Cadence has improved the Constraint Manager over the years. Diff pairs and matched groups used to be very painful; now not so much. Negative planes and now dynamic positive planes are more examples of how the software has become easier to use to accomplish the end goal.

In short, the software will only do what the designer has the knowledge to tell it to do. No software can do the entire project by itself; auto-place and auto-route can assist in some aspects, but there is never going to be a one-button solution to creating a design.

Anyone can say this or that software is superior to another. They are different; can any one EDA tool set do everything perfectly for every designer who uses it? Probably not. It is the carpenter that builds the house, not the hammer. So too in design: it is the designer who creates a successful working design; the software is just the tool, and the designer's experience determines how successful the design is.
 
Greg Phillips
You might as well wish for a tool that turns lead into gold. The biggest obstacle to this sort of thing is that the performance of each tool is very design dependent. Given enough data, every tool will fail. Many tools work well with one type of hardware/data and fail with others. If you were to create a matrix of all the design types against all the tools to test, you would have a huge task in terms of time and money. Even if it were funded and the time were taken to complete the task, the data would be out of date by the time it was finished. (Our new release fixes that.)
 
Greg, wishing to turn lead into gold, huh? Yes, it has been mentioned that the performance of each tool is design dependent. So could you please kindly explain what this fix does and how it does it, so that this is a constructive discussion? Otherwise what you've stated is simply a marketing ploy, at least that is how it comes across. Please, we would love to hear more.

Thanks
 
Greg Phillips
...
You might as well wish for a tool that turns lead into gold. The biggest obstacle to this sort of thing is that the performance of each tool is very design dependent. ...

This is why I would like to see EDA turn into more of an "apps" market - an open database framework that can support small tools performing specific tasks, rather than big tools that try to do everything. From a standards perspective that probably comes down to developing better APIs and skipping further language development - it's a lot easier to add some new methods to a (C++) class than to change syntax/semantics and add keywords in an HDL. Then you can mix and match to get the best set of apps for your particular design.

I'd start by getting the license on OpenAccess changed so that anyone can build their own version of the OA database, and Si2's job would be to say whether it was compliant or not. One could probably just build an open-source OA DB anyway and drop it on github, but in the current litigious environment buy-in would be better, and with a one-company/one-vote system at Si2 that shouldn't be hard.
 
Ward Vercruysse


I agree with Greg, and it is even worse. In my experience, most benchmarks I was involved in, whether performed by vendor teams or internal teams, ended up being more a benchmark of the teams than of the tools.

Secondly, there are (almost) no absolute measures of goodness for tools. It's much more about how well a tool fits the overall design methodology and the beliefs of the design team using it.
 
This is why I would like to see EDA turn into more of an "apps" market - an open database framework that can support small tools performing specific tasks, rather than big tools that try to do everything. ...

Adding features to a language takes too long to adopt and propagate through the industry. In 2005 the IC team that I was on finally decided that the Verilog-2001 changes were stable enough that we could start using them. That is far too long for an industry that moves as fast as ours does.

When you give your Verilog code to tools such as simulators, synthesis, or RTL checkers, each tool first has to expand all the tick (`) macros before it elaborates the design and executes any generate statements. You then wind up with each tool creating its own internal representation of the design. How do we know that they all came up with the same answer?

For tick (`) macros the answer is easy. I run all my Verilog code through a Verilog preprocessor before submitting it to my tools. I know they all got the same answer because I gave it to them and they didn't have to do a thing. I don't have to wonder if the preprocessed output is correct because I can easily look at it, and if it is wrong then it will be equally wrong for all the tools. As far as I am concerned they should strip these macros from the language and everyone should use an external tool. You can do the same for elaboration and generate statements: if they run as separate tools, that simplifies all your downstream tools and speeds up adding new features to your tool flow.
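
As a minimal sketch (the module, parameter, and macro names here are made up for illustration, and the preprocessor is whatever standalone tool your flow happens to use), this is the kind of source I mean:

    // Leaf-level RTL that mixes a tick macro with a generate loop.
    // Each downstream tool would otherwise expand this macro itself.
    `define RESET_VAL 1'b0

    module sync_regs #(parameter WIDTH = 8) (
        input                  clk,
        input                  rst,
        input      [WIDTH-1:0] d,
        output reg [WIDTH-1:0] q
    );
        genvar i;
        generate
            for (i = 0; i < WIDTH; i = i + 1) begin : bit_reg
                always @(posedge clk)
                    if (rst) q[i] <= `RESET_VAL;   // becomes 1'b0 after preprocessing
                    else     q[i] <= d[i];
            end
        endgenerate
    endmodule

Run the file through a standalone preprocessor so `RESET_VAL is already expanded (and, if your flow supports it, elaborate the generate loop for a fixed WIDTH as well) before handing the result to anything else; then the simulator, the synthesis tool, and the RTL checker all read exactly the same flat text instead of each producing its own expansion.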

I see the SoC design process splitting into Verilog/VHDL for leaf-level IP, SystemVerilog for verification, and IP-XACT for the SoC hierarchical levels. You can't design a 300-million-gate IC by hand-coding Verilog files.
 

Mr. Eaton, this is exactly the kind of constructive feedback on how to make the benchmark work that is vital to such an effort. Thanks for your suggestions; this kind of informed perspective is a value-add versus some (ahem) other takes on the issues faced here.

Yes, we all know about the imposition of EULA restrictions, but for all the reasons that have been mentioned on this thread, these have acted as a barrier to real innovation in the EDA/DFM tool domain; there definitely needs to be a change.

Since the EDA/DFM tool vendors are seemingly dropping the ball on serving all of the market's needs, perhaps what needs to take place is an innovation insurgency: disrupt the status quo that isn't working for all of us and, as a result, get the EDA/DFM tool vendors to be good buddies with the end-user community, which is what this forum is supposed to support, right?

Someone tell me if I've missed something critical for our community.
 
Robert Kezer
It would be nice to have a Consumer Reports for the EDA field: an unbiased group that compares and tests software. deepchip.com may have some information, as well as EDAcafe.
 
Thanks Robert, we'll add you to the list of concerned citizens in our community. Many of us on this thread want the same thing. Contact me via LinkedIn and we'll talk more.

Best,
Richard Platt
 
Having been in the EDA field for over 20 years now, I have seen plenty of benchmarks. Some tools are relatively easy to compare - for example DRC/LVS, where the results must be correct and the criteria are usually runtime/resource usage. Others, like P&R, are much harder, as they depend a lot on the actual design data, the skills of the AE/customer driving the tools, and the tools' ease of use.

Even if you could publish some sort of significant comparison data, there are other factors that usually influence the choice of tools - price of course, but also the relationship between the customer and the vendor.
 