Everyone has heard the expression, “Half the job is having the right tool.” In the case of FPGA-based prototyping, however, the right tool for the job is only the beginning. What teams really need to think through is what, exactly, should be done with an FPGA-based prototyping tool.
The obvious answer is prototyping an SoC, pre-silicon. We gather some third-party IP, some legacy IP from the previous design, and a few new IP blocks, and toss all the RTL into an FPGA-based prototyping system. Every new release of FPGA-based prototyping systems brings bigger FPGAs, so in theory more SoC designs fit in a given system. But is it worth the hassle of partitioning a design and tweaking it for debugging?
I’d submit that the challenge is not getting your RTL to “work.” Competent design teams can create an IP block to a functional specification, run a simulator on it, figure out what needs to be fixed, and iterate to goodness. IP blocks can then be strung together into a design and simulated – the more IP, the slower the simulation – and eventually, a design is deemed working.
As far as you know, at least. Are there corner cases in timing between the integrated blocks? Are the I/O blocks compliant with interfacing standards? Were enough test suites run to completely validate the design? Were the IP blocks exercised simultaneously to find problems in interaction?
These are incredibly hard questions to answer comprehensively with a functional simulator. That’s why people have turned to emulator platforms – but they are budget busters: getting those answers in emulation is expensive and still relatively slow.
What about the “what-if” factor? Is there a more efficient way to fix a problem, or even implement a functional requirement? The process of system exploration is often skipped because it is just too time consuming – fix it, and move on as quickly as possible.
S2C explores these and many other thoughts in a new 8-minute presentation on their Videos page.
They take on many of the objections we hear to using FPGA-based prototyping systems. Some of these have been solved simply by using ultra-large FPGAs, but others are addressed through a solid engineering approach designed to increase the flexibility and usefulness of a platform in the prototyping process.
The bottom line here is these FPGA-based prototyping solutions are not just huge FPGAs glued to a board. S2C explores ideas like deep trace capture, real-world I/O via daughtercards, and the benefits of distributed development using remote system management capability. The combination of architecture, hardware, and software makes this more than just “a tool.”
I’d like to get some feedback and discussion, not so much about product features as about the state of the FPGA-based prototyping concept. Are the challenges and benefits S2C is describing in the presentation ones you are experiencing? What other concerns are there with using an FPGA-based prototyping system? Is there another strong benefit that isn’t being talked about much? We’ll ask an S2C representative to respond to your ideas.