A VIP to Accelerate Verification for Hyperscalar Caching
by Bernard Murphy on 12-18-2019 at 6:00 am

Non-volatile memory (NVM) is finding new roles in datacenters, not currently so much in “cold storage” as a replacement for hard disk drives, but definitely in “warm storage”. Warm storage applications target an increasing number of functions requiring access to databases with much lower latency than is possible through paths to traditional storage.

In common hyperscalar operations you can’t hold the whole database in memory, but you can do the next best thing: cache data close to compute. Caching is a familiar concept in the SoC/CPU world, though here the caches are off-chip rather than inside the processor. AWS, for example, provides a broad range of caching solutions (including 2-tier caching) and describes a wide range of use cases, from general database caching to content delivery networks, DNS caching and web caching.
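
As a minimal illustration of the cache-aside pattern this describes, here is a short Python sketch; the dictionary standing in for the warm-storage tier and the helper names are illustrative assumptions, not tied to any particular AWS service.

```python
# Minimal cache-aside sketch: a local store (standing in for a warm-storage
# cache such as an NVM-backed tier) is checked before the slower database.
# All names here are illustrative.

cache = {}  # stand-in for the low-latency cache close to compute

def query_database(key):
    """Placeholder for a high-latency lookup against traditional storage."""
    return f"value-for-{key}"

def get(key):
    if key in cache:             # cache hit: served at memory/NVM latency
        return cache[key]
    value = query_database(key)  # cache miss: pay the full storage latency once
    cache[key] = value           # populate the cache for subsequent reads
    return value

print(get("user:42"))  # miss, fetched from the database
print(get("user:42"))  # hit, served from the cache
```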

There are several technology options for this kind of storage. SSD is an obvious example, and ReRAM is also making inroads through Intel Optane, Micron 3D XPoint and Crossbar solutions. These solutions have even lower latency than SSD and much finer-grained update control, potentially increasing usable lifetime through reduced wear on rewrites. Google, Amazon, Microsoft and Facebook have all published papers on applications using this technology. In fact, Facebook was an early innovator in this area with their JBOF (just a bunch of flash) solution.

JBOF is a good example of how I/O interfaces have had to evolve around this kind of system. Traditional interfaces to NVM have been based on SATA or SAS, but these are too low in bandwidth and too high in latency to meet the needs of storage systems like JBOF. This has prompted development of an interface much better suited to this application, called NVMe. This standard provides far higher bandwidth and lower latency through massive parallelism. Where SATA, for example, supports only a single I/O queue with up to 254 entries, NVMe supports 64K queues, each allowing 64K entries. Since NVM intrinsically allows for very high parallelism in access to storage, NVMe can fully exploit that potential.
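
To show why that queue structure matters, here is a simplified Python model of NVMe-style per-core queue pairs; the number of pairs and the queue handling are assumptions for illustration (a real controller consumes 64-byte entries from host memory and signals completions through paired completion queues).

```python
from collections import deque

# Simplified, illustrative model of NVMe's per-core queue pairs.
NUM_QUEUE_PAIRS = 8        # e.g. one submission/completion pair per CPU core
MAX_QUEUE_DEPTH = 65536    # up to 64K entries per queue in the standard

class QueuePair:
    def __init__(self, qid):
        self.qid = qid
        self.submission = deque(maxlen=MAX_QUEUE_DEPTH)
        self.completion = deque(maxlen=MAX_QUEUE_DEPTH)

    def submit(self, command):
        self.submission.append(command)   # no lock shared with other queues

    def process_one(self):
        if self.submission:
            cmd = self.submission.popleft()
            self.completion.append((cmd, "SUCCESS"))

# Each core submits into its own queue pair, so I/O scales with core count
# instead of serializing through a single queue as SATA does.
pairs = [QueuePair(qid) for qid in range(NUM_QUEUE_PAIRS)]
for core, qp in enumerate(pairs):
    qp.submit({"opcode": "READ", "core": core, "lba": core * 8})
    qp.process_one()

print(sum(len(qp.completion) for qp in pairs), "commands completed across parallel queues")
```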

The NVMe standard is defined as an application layer on top of PCIe, so it builds on a proven high-performance standard for connectivity to peripherals. This is a great starting point for building chip solutions around NVMe, since IP and verification IP (VIP) for PCIe are already well matured. Still, a verification plan must be added around the NVMe component of the interface.
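
To make the layering concrete, here is a hedged sketch of how a host-side model might hand an NVMe command to a controller over the PCIe transport. The field names, register offset and helper names are assumptions for illustration, not the actual Questa VIP API or a faithful register map.

```python
from dataclasses import dataclass

# Illustrative layering sketch: NVMe commands live in host-memory queues and
# the host notifies the controller by writing a doorbell register exposed
# through PCIe.

@dataclass
class NvmeCommand:
    opcode: int        # e.g. 0x02 for Read in the NVM command set
    command_id: int    # tag echoed back in the completion entry
    nsid: int          # namespace the command targets
    lba: int
    num_blocks: int

class PcieFunction:
    """Tiny stand-in for the PCIe transport beneath the NVMe layer."""
    def __init__(self):
        self.registers = {}

    def mem_write32(self, offset, value):
        self.registers[offset] = value   # models a posted memory write to a BAR

SQ0_DOORBELL = 0x1000   # assumed offset for submission queue 0's tail doorbell

def submit(pcie, sq, cmd):
    sq.append(cmd)                           # 1. place the entry in the host-memory queue
    pcie.mem_write32(SQ0_DOORBELL, len(sq))  # 2. ring the doorbell over PCIe

pcie = PcieFunction()
sq = []
submit(pcie, sq, NvmeCommand(opcode=0x02, command_id=1, nsid=1, lba=0, num_blocks=8))
print(pcie.registers)
```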

That verification plan is understandably complex. An interface to an NVM cache can have multiple hosts and NVM controller targets, each communicating through deep 64K queues. Hosts can be multicore, and the standard supports parallel I/O across those cores. Multiple namespaces (allowing for block access) and multiple paths between hosts and controllers are supported, along with many other features. (Here’s a somewhat old but still very informative intro.)
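
One way to get a feel for the state space is to enumerate even a small slice of it. The following sketch is hypothetical; the specific hosts, controllers, namespaces and depths are invented for illustration, and a real test plan would be far larger and constrained by the design under test.

```python
from itertools import product

# Hypothetical enumeration of verification scenarios across the dimensions
# mentioned above (values are invented for illustration).
hosts        = ["host0", "host1"]      # multiple (possibly multicore) hosts
controllers  = ["ctrl0", "ctrl1"]      # NVM controller targets
namespaces   = [1, 2, 3]               # block-addressable namespaces
queue_depths = [1, 256, 65536]         # shallow through maximally deep queues

scenarios = list(product(hosts, controllers, namespaces, queue_depths))
print(len(scenarios), "combinations before adding commands, statuses or multipath failover")
for host, ctrl, nsid, depth in scenarios[:3]:
    print(f"{host} -> {ctrl}, namespace {nsid}, queue depth {depth}")
```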

Whatever NVMe-compliant component you might be building in this larger system, it must take account of this high level of complexity, correctly processing a rich set of commands in the queues, along with their status values. If you want a running start toward strong coverage in your verification, you can learn more about Coverage Driven Verification of NVMe Using Questa VIP HERE.
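
For a flavour of what coverage-driven verification tracks, here is a toy coverage-matrix sketch in Python. The real Questa VIP coverage model is written in SystemVerilog and is far richer; the opcode and status bins below are assumptions for illustration only.

```python
from itertools import product

# Toy functional-coverage model: record which (opcode, status) pairs have been
# observed and report the gap. Bin names are illustrative assumptions.
OPCODES  = ["FLUSH", "WRITE", "READ", "WRITE_ZEROES", "COMPARE"]
STATUSES = ["SUCCESS", "INVALID_FIELD", "LBA_OUT_OF_RANGE", "ABORT_REQUESTED"]

observed = set()

def sample(opcode, status):
    """Called by a monitor each time a completion entry is seen."""
    observed.add((opcode, status))

def report():
    total = len(OPCODES) * len(STATUSES)
    missing = [b for b in product(OPCODES, STATUSES) if b not in observed]
    print(f"coverage: {len(observed)}/{total} bins hit, {len(missing)} still open")

sample("READ", "SUCCESS")
sample("WRITE", "LBA_OUT_OF_RANGE")
report()
```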
