Enterprise SSD SoCs Call for a Different Interconnect Approach

by Tom Simon on 03-01-2016 at 12:00 pm

The move to SSD storage in the enterprise brings with it the need for enterprise-capable SSD controller SoCs that are difficult to design. The benefits of SSDs in hyperscale data centers are clear: with no moving parts they are more reliable, they have a smaller footprint, they use less power, and they deliver much better performance. SSDs also scale better, a big plus where storage needs run into the petabyte range.

Nevertheless, SSDs call for more complex and sophisticated controllers. Unlike early SSD implementations that used SATA, SAS, or Fibre Channel to connect to their hosts, enterprise SSDs use the NVMe protocol to connect directly over PCIe. NVMe was developed specifically for SSD memory and takes advantage of its low latency, high speed, and parallelism. The table below from Wikipedia shows the comparison.

Enterprise SSD controllers connect to many banks of NAND memory and handle low-level operations such as wear leveling and error correction, both of which have special requirements in this application. The SSD controller must offer low latency, extremely high bandwidth, low power, and both internal and external error correction.
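Wear leveling, one of the low-level duties mentioned above, can be illustrated with a toy policy: always program the NAND block with the fewest erase cycles so wear spreads evenly. This is only a minimal sketch of the idea; the block count and the `WearLeveler` API are invented here, and real controllers use far more sophisticated schemes.

```python
# Minimal wear-leveling sketch (illustrative only): route each new
# program/erase cycle to the least-worn NAND block so that erase counts
# stay balanced across the array.

class WearLeveler:
    def __init__(self, num_blocks):
        self.erase_counts = [0] * num_blocks

    def pick_block(self):
        # Choose the block with the lowest erase count for the next write.
        block = min(range(len(self.erase_counts)),
                    key=self.erase_counts.__getitem__)
        self.erase_counts[block] += 1
        return block

wl = WearLeveler(4)
writes = [wl.pick_block() for _ in range(8)]
print(writes)                                        # [0, 1, 2, 3, 0, 1, 2, 3]
print(max(wl.erase_counts) - min(wl.erase_counts))   # 0 -> wear is balanced
```

Even this naive policy keeps the spread between the most- and least-worn blocks at zero for a uniform write stream; the hard part in a real controller is doing this while also migrating static data and tracking bad blocks.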

A large number of unique IP blocks must be integrated to deliver a competitive SSD controller SoC. A short list of commonly used IP includes ARM R5/R7 cores, PCIe, DDR3/4, NVMe, DMA, RAM, SRAM, RAID, NAND, GPIO, and ECC, among others. The parallel operation of these IP blocks presents a significant design problem for IP interconnection and internal data movement.

Designing the interconnections between all of these functional units has become one of the most critical aspects of these designs. With larger IP blocks, wider buses, and interconnect wires that are not scaling with transistor sizes, the design effort and chip resources consumed by on-chip interconnect have become a heavy burden for design teams.

Buses and crossbars are running out of steam in these newer designs. For example, AMBA 4 AXI requires 272 wires for 64 bits of data and 408 wires for 128 bits. The other problem is that many of these wires sit idle much of the time: in a four-cycle burst write transaction, the 56-wire write address bus is used in only 25% of the cycles.
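The 25% figure follows from how AXI bursts work: the write address channel carries one handshake per burst, while the write data channel carries one beat per cycle. A quick back-of-the-envelope check, assuming the idealized case of one beat per cycle with no stalls:

```python
# Back-of-the-envelope check of the bus-utilization claim: in an AXI burst
# write, the write-address wires are busy for one cycle per burst, while
# the write-data wires are busy for one cycle per beat.  This assumes one
# beat per cycle and no stalls (an idealization).

def aw_channel_utilization(burst_length):
    """Fraction of a burst's cycles in which the write-address wires are busy."""
    address_cycles = 1             # one AW handshake per burst
    data_cycles = burst_length     # one W beat per cycle
    return address_cycles / data_cycles

print(aw_channel_utilization(4))   # 0.25 -> the 25% quoted in the text
print(aw_channel_utilization(16))  # 0.0625 -> longer bursts waste even more
```

For longer bursts the address wires are idle an even larger fraction of the time, which is exactly the kind of waste that per-block dedicated bus wiring locks in.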

Networks-on-Chip (NoCs) dramatically reduce the difficulties that large bus structures would create. Arteris, a leading provider of NoC IP, has just published a white paper on the advantages of using its FlexNoC to implement enterprise SSD controllers. The biggest gains come from simultaneously narrowing the block interconnections and tailoring them to the predicted traffic. It is well understood that the earlier in the design process an issue is addressed, the easier its downstream effects are to manage. Instead of waiting until place and route to grapple with interconnect across the chip, FlexNoC planning and implementation start at RTL, making the process more efficient and predictable.

FlexNoC works by converting a wide variety of IP protocols at the source into protocol-agnostic serialized packets and routing them to their targets, where they are reassembled on delivery. The NoC does require its own RTL elements, but the overall area taken by the NoC logic and interconnect wires is significantly less than that of equivalent bus or crossbar structures. Because NoC traffic can be pipelined and buffered, it can actually be faster than high-drive-strength buses. The NoC RTL can be synthesized and placed so that it conforms to predefined routing channels.
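The packetize-route-reassemble idea can be sketched in a few lines. This is not FlexNoC's actual packet format, which is proprietary; the flit width and one-word header below are invented purely to show how a wide transaction rides over narrow links:

```python
# Illustrative NoC packetization: a wide write payload is split into narrow
# flits (flow-control units) behind a small header flit, so the on-chip
# links can be much narrower than the source bus.  Flit width and header
# layout are assumptions for this sketch, not any real NoC format.

FLIT_BITS = 32

def packetize(target_id, payload, payload_bits=128):
    """Split a payload into 32-bit flits preceded by a routing header."""
    flits = [target_id]  # header flit: here, just the routing target
    for shift in range(0, payload_bits, FLIT_BITS):
        flits.append((payload >> shift) & ((1 << FLIT_BITS) - 1))
    return flits

def reassemble(flits):
    """Invert packetize(): recover (target_id, payload) at the destination."""
    target_id, body = flits[0], flits[1:]
    payload = 0
    for i, flit in enumerate(body):
        payload |= flit << (i * FLIT_BITS)
    return target_id, payload

flits = packetize(target_id=7, payload=0x0123456789ABCDEF0011223344556677)
assert reassemble(flits) == (7, 0x0123456789ABCDEF0011223344556677)
print(len(flits))   # 5 flits (1 header + 4 data) over a 32-wire link
```

A 128-bit transfer crosses the network as five 32-bit flits, trading a few cycles of serialization latency for a 4x reduction in link width; pipelining back-to-back packets hides much of that latency in practice.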

The overall effect is less routing congestion, leading to a smoother back-end implementation flow. The resulting design benefits from lower latency and more robust data integrity thanks to the error correction built into FlexNoC.

To gain a deeper understanding of the benefits of FlexNoC, I suggest reviewing the Arteris white paper located here. A number of additional benefits and implementation details are covered in this and the other available Arteris downloads.
