There are 3 common misconceptions about debugging FPGAs on real hardware:
1) Good engineers should not need to debug;
2) Debugging wastes resources;
3) A single tool can solve all the debugging problems at once.
Can I kindly ask you to forget about misconception #1?
If the word ‘debugging’ hurts your eyes or your ears, call it ‘functional verification’, ‘functional coverage’, ‘corner case testing’ or perhaps ‘specification checking’. Engineers can improve their techniques, methodologies and competences. Engineers can seek ways to automate the verification process. Engineers can learn, improve and work better. The fact remains that verifying a design is at the heart of any engineering activity that presents some level of complexity. Electronic system design has become an incredibly complex task. Even the best engineer does some verification.
When considering misconception #2, think about it: debugging does not happen out of thin air. You need to reserve resources for debug.
These can be ‘hardware resources’ – e.g.:
– FPGA resources, like I/Os, logic and memory;
– PCB resources, like a connector or some area on the PCB used to collect the data and maintain its integrity.
They can also be ‘engineering resources’ – typically the time spent by the engineering team to find a bug with the chosen debugging strategy.
In all cases, the project budget is impacted by additional costs:
– the cost of extra area on the PCB;
– the extra cost of an FPGA with a larger package or a higher speed grade;
– the cost of a new tool like a logic analyzer or a scope;
– the cost of the engineering hours spent implementing a specific debugging strategy.
Thinking that debugging wastes resources is a rather entrenched idea that originates from design cost optimization. Once the system goes to production, the extra money put into system ‘real estate’ that does not contribute to the system’s functionality is considered wasted margin. Every such dollar is worth saving because it quickly multiplies once the system is produced in (large) volumes.
The problem with this way of thinking is that it does not take into account the gigantic opportunity loss if the product comes late to market – or even worse, the cost of a bug escaping to production.
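To make that trade-off concrete, here is a toy break-even sketch in Python. Every figure below is an invented assumption for illustration – plug in your own project’s numbers:

```python
# Toy break-even model: per-unit "wasted" debug hardware vs. the value of
# faster debugging. All figures below are invented assumptions.

extra_unit_cost = 1.50        # $ per unit of PCB area / larger FPGA kept for debug
production_volume = 50_000    # units produced in series

hardware_overhead = extra_unit_cost * production_volume  # the "wasted margin"

weeks_saved = 6                   # debug weeks saved thanks to hardware visibility
engineer_cost_per_week = 4_000    # $ fully loaded engineering cost per week
delay_cost_per_week = 30_000      # $ opportunity loss per week of market delay

debug_value = weeks_saved * (engineer_cost_per_week + delay_cost_per_week)

print(f"Extra hardware cost over the series: ${hardware_overhead:,.0f}")
print(f"Value of faster debug:               ${debug_value:,.0f}")
```

Even with a modest per-unit overhead multiplied over a large series, the time-to-market side of the equation often dominates – which is precisely what misconception #2 overlooks.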
Anyway, the consequence is that we ‘electronic engineers’ are naturally inclined to:
– save on hardware costs, and
– do more ‘engineering’ (after all, we are engineers!)
…
Actually, every stage of the design flow mobilizes resources. What gets mobilized to do the job is essentially a trade-off or, if you prefer, a question of economics.
I believe that the secret to efficient debugging consists in finding the right mix of techniques. What is the right mix for you? Well, the answer depends on the economics of your project.
At my company – Exostiv Labs – we try to provide new ways of debugging and analyzing FPGA designs (or FPGA prototypes, since our customers are often really ASIC/SoC engineers) from the hardware. Going early to hardware is absolutely reachable economically when you work on an FPGA-based system. Performing some of the verification steps with the target hardware has arguably at least 3 benefits:
1) Speed of execution is optimal – as the target system runs… at system speed;
2) The target hardware won’t suffer from incomplete modeling, as it can be placed in its real environment;
3) Software can be tested earlier.
But a typical complaint about debugging on hardware is that it does not offer the same level of visibility as simulation.
Theoretically, the list of requirements for FPGA hardware debugging tools would be:
– Reach the same level of visibility as RTL simulation;
– Run at speed of execution;
– Require no spare connector;
– Require no memory, logic (or other) resources.
Do you see what is wrong here? Yes, a kind of ‘no-compromise’ attitude…
The solution to the above problem would be “a simulation software that runs thousands of trillions of times faster and always works with models that *exactly match* the environment of the system”, right?
Actually, we have a solution that offers up to 200,000 times more visibility than the existing FPGA tools. Not quite the same theoretical level as simulation – but the tool runs on real hardware, at speed of execution.
The benefit? Used well alongside simulation – and other techniques – it reduces guesswork and cuts the overall debug time from months to weeks. Our customers have even found new ways of using it, leveraging the much bigger visibility to run profiling tests on their new FPGA (like evaluating the traffic density on buses, …).
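As a sketch of what such profiling can look like, here is a minimal Python example that computes bus utilization from a long capture. It assumes each sample records an AXI-style valid/ready handshake pair – the capture format and signal names are hypothetical, not the output of any specific tool:

```python
from typing import Iterable, Tuple

def bus_utilization(samples: Iterable[Tuple[int, int]]) -> float:
    """Fraction of clock cycles where a data beat was actually transferred
    (i.e. both 'valid' and 'ready' were asserted in the same cycle)."""
    total = transferred = 0
    for valid, ready in samples:
        total += 1
        transferred += valid & ready
    return transferred / total if total else 0.0

# Example: a short capture where the bus transfers 3 beats in 8 cycles.
capture = [(1, 1), (1, 0), (0, 0), (1, 1), (0, 1), (1, 1), (0, 0), (1, 0)]
print(f"Bus utilization: {bus_utilization(capture):.0%}")
```

Run over hours of captured traffic rather than 8 cycles, the same one-liner turns a debug capture into a traffic-density profile.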
What does it solve?
– Modeling problems, first. It gives you decent visibility into the system under real conditions – things that escape your analysis when simulation models do not exactly match reality. You know, when you turn on the switch and nothing works because the environment was not properly modeled in the simulation…
– It opens new capabilities, like being able to see the inner workings of a design for real, not during 1 µs, but during hours… It gets you closer to simulation visibility at real speed (see the rough arithmetic below).
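To put numbers on the microseconds-versus-hours contrast, here is a back-of-the-envelope comparison in Python. The figures are assumptions for illustration (roughly 40 KB of on-chip block RAM reserved for an embedded logic analyzer versus 8 GB of external capture memory), not vendor specifications:

```python
# Back-of-the-envelope capture-depth comparison (all figures assumed).

embedded_buffer_bits = 40 * 1024 * 8     # ~40 KB of on-chip block RAM
external_memory_bits = 8 * 1024**3 * 8   # 8 GB of external capture memory

signals = 64        # number of probed signals
f_clk_hz = 100e6    # 100 MHz sampling clock

def window_seconds(total_bits: int, n_signals: int, f_hz: float) -> float:
    """Continuous capture window: samples per signal divided by sample rate."""
    return (total_bits / n_signals) / f_hz

print(f"Embedded buffer window: "
      f"{window_seconds(embedded_buffer_bits, signals, f_clk_hz) * 1e6:.1f} us")
print(f"External memory window: "
      f"{window_seconds(external_memory_bits, signals, f_clk_hz):.1f} s")
print(f"Depth ratio: {external_memory_bits / embedded_buffer_bits:,.0f}x")
```

With these assumed numbers, continuous capture already stretches from about 50 µs to more than 10 seconds (a ratio in the 200,000× range); with qualified, non-continuous capture, the same memory can be spread over hours of operation.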
The cost of it? In addition to the purchase price: mobilizing FPGA transceivers, some memory and some logic, a connector, and a small learning curve (for details – and more commercial content – see: https://www.exostivlabs.com/why-exostiv/). This can be a lot or nothing for you. It is a matter of economics.
Misconception #3: does it solve all the problems at once? Absolutely not.
Again, each engineer needs to find the mix of techniques that works for him or her and that brings value – usually ‘going to market earlier’ and ‘saving countless hours on debugging’. The value takes many different forms – it is all in the mix.
It is a question of economics…