FPGA programmable logic has served in many capacities since it was introduced in the early 1980s. Recently, with designers looking for innovative ways to boost system performance, FPGAs have moved front and center. This initiative has taken on new urgency as process-node-based performance gains have slowed, and the search has turned to algorithmic and architectural innovations that can push performance forward to meet the needs of big data, cloud computing, mobile, networking, and other domains.
The new applications for FPGAs are a far cry from the glue-logic roles they first filled. FPGAs have been moving up the semiconductor food chain for some time, though. Cisco and others applied them to networking applications back in the ’90s, as the technology entered its second decade. Most recently, a major shift occurred when FPGAs were paired with CPUs to accelerate compute-intensive operations. FPGAs cannot adapt to new tasks as quickly as a general-purpose CPU, but they excel at repetitive operations that demand high throughput.
Microsoft has embraced this approach for its cloud and search-engine operations after assessing its feasibility in the Catapult project. Another big mover in this space is Intel, with its $16B acquisition of Altera. Long gone are the days when FPGAs were a poor man’s alternative to ASICs. Commercial FPGAs are routinely built on leading-edge process nodes; to wit, Altera turned to Intel’s 14nm process for its first FinFET designs. FPGAs have become quite efficient, and they come with a bevy of ancillary IP and high-performance I/Os.
In a recent white paper, Achronix argues that the pairing of CPUs and FPGAs was inevitable and in many ways obvious. However, for FPGAs to be paired effectively with CPUs, several further optimizations are required. For one, the FPGA needs cache-coherent access to system memory. Another point Achronix makes is that data transfer between the FPGA and system memory should be as fast as possible. They also posit that board area ought to be reduced and that unused or unnecessary IP blocks and modules should be eliminated to save cost and silicon area.
The Achronix white paper touches on the CCIX consortium’s work to create a high-speed standard for cache-coherent memory sharing among heterogeneous processors, IO devices, and accelerators. Recent CCIX news shows 25Gbps demonstrated over a PCIe 4.0-based link. However, there is usually a price to pay when going off-chip for any data, and especially for cache-coherent data. The solution is to embed the FPGA fabric into the SoC so it gains the efficiencies of being on-chip.
Achronix has a successful line of standalone FPGA chips, the Speedster22i, but its latest move is shaking up the FPGA market. By embedding Achronix’s proven FPGA technology, system designers can reap significant benefits. General-purpose FPGA chips often have resources that are not optimally aligned with the target application; for instance, an off-the-shelf configuration might include IP, embedded memories, or LUTs that are not needed. By contrast, the Achronix eFPGA lets designers tailor the FPGA fabric tightly to the system requirements. Bypassing the need to go off-chip also eliminates the IO pad/ring overhead on both sides, saving power while improving speed and reliability.
The Achronix white paper covers the history of FPGAs up to the new era of embeddable FPGA fabric, while articulating the advantages of this new approach. Additionally, it provides an overview of how Achronix engages with customers to ensure design success. FPGAs have always been a game changer, and with advances in technology their importance in system design has grown. With the move to embedded FPGA technology, an even higher level of performance and efficiency is possible. In some ways, this represents a fundamental shift in SoC design, one that will certainly create new opportunities in many of today’s leading application areas.