NVMe is a set of protocols and technologies that dramatically accelerates how data is transmitted, stored and retrieved. In this article, we discuss why NVMe is emerging now, summarize the architecture of NVMe and NVMe over Fabrics, and outline common use cases and benefits.

What Is Non-Volatile Memory Express (NVMe)?

Murat Karslioglu
Murat is an infrastructure architect with experience in storage, distributed systems and enterprise infrastructure development. He is vice president of product at MayaData, as well as one of the maintainers of the CNCF project OpenEBS and author of Kubernetes – A Complete DevOps Cookbook.

NVMe is arguably the most important advance in transmitting and storing data since the invention of SCSI in the 1970s. During the past half-century, the amount of data stored per year has grown exponentially, from hundreds or thousands of terabytes in the 1970s to more than 50 zettabytes in 2021, a more than 1 million-fold increase, according to recent analyst reports. Meanwhile, underlying disk drives haven’t improved much: constrained by the mechanics of spinning platters and by the protocols used to transport and retrieve data, a typical hard drive still delivers roughly 150 IOPS.

As an example, JBODs (server enclosures packed with Just a Bunch of Disks) connected via the SAS protocol are limited in the throughput available to each device and in how many devices can be connected. By comparison, NVMe over TCP has nearly unlimited connectivity; and while SAS is limited to a single queue, NVMe supports roughly 65,000 queues, each with a depth of up to 65,000 commands.

By using NVMe end to end, from workload access through any container-attached storage to high-performance SSDs, NVMe and NVMe-oF enable data center hardware and software to efficiently harness the flash-memory benefits of SSDs. This gives NVMe several advantages, including higher input/output speeds, lower latency and the ability to handle more commands in parallel.


NVMe Architecture

At its core, NVMe consists of a host-side interface and a target controller that persists data in non-volatile memory. The NVMe interface specification allows the optimization of solid-state drives (SSDs) over a PCIe interface. In a typical data center setup, NVMe hosts are connected to one or more NVMe SSDs through a PCIe bus. Each NVMe SSD consists of an SSD controller, a PCIe host interface and non-volatile memory.

The host’s NVMe driver uses memory-mapped I/O (MMIO) controller registers and the system’s DRAM for its input/output submission and completion queues. The number of parallel operations a device can support depends on the number of mapped registers. As mentioned above, NVMe allows for roughly 65,000 queues, each with a depth of up to 65,000 commands.
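
To make the queue-pair model concrete, here is a minimal Python sketch of one submission/completion queue pair. It is a toy simulation under stated assumptions: the class and field names are illustrative, not the actual NVMe driver’s API, and the doorbell register write is only indicated by a comment.

```python
from collections import deque

class ToyQueuePair:
    """Toy model of one NVMe submission/completion queue pair."""
    def __init__(self, depth=16):
        self.depth = depth
        self.sq = deque()  # submission queue: host -> controller
        self.cq = deque()  # completion queue: controller -> host

    def submit(self, cid, opcode, lba, nlb):
        # The host fills a submission queue entry; in real hardware it then
        # writes the SQ tail doorbell (an MMIO register) to notify the
        # controller that new entries are ready.
        if len(self.sq) >= self.depth:
            raise RuntimeError("submission queue full")
        self.sq.append({"cid": cid, "opcode": opcode, "lba": lba, "nlb": nlb})

    def controller_step(self):
        # The controller drains commands and posts completions to the CQ;
        # the host learns of them via interrupt or by polling CQ entries.
        while self.sq:
            cmd = self.sq.popleft()
            self.cq.append({"cid": cmd["cid"], "status": 0})

qp = ToyQueuePair(depth=4)
qp.submit(1, "read", lba=0, nlb=8)
qp.submit(2, "read", lba=8, nlb=8)
qp.controller_step()
print([c["cid"] for c in qp.cq])  # -> [1, 2]
```

A real NVMe controller maintains up to roughly 65,000 of these queue pairs concurrently, which is where the protocol’s parallelism comes from.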

Key Performance Characteristics

  • NVMe storage offers much higher speeds than SATA-based SSDs and is far faster than traditional spinning hard drives. Hard drives are limited by their rotational speed, which for most drives caps out at 7,200 RPM, yielding read speeds of roughly 200 MB/s; SSDs attached over SATA are capped by the 6 Gb/s SATA III interface at roughly 550 to 600 MB/s. By comparison, PCIe Gen 4.0 NVMe SSDs in the M.2 form factor show quoted performance figures of 7,000 MB/s sequential reads and 5,000 MB/s sequential writes, more than a 10x improvement over SATA-connected SSDs.
  • NVMe also offers lower latency. The logical device interface exploits the parallelism and low latency of flash to ensure quick response times: the protocol overhead of a SATA/SAS stack is around 6 microseconds, versus roughly 2.8 microseconds for NVMe. The PCIe architecture maps queues directly into host memory, allowing up to roughly 65,000 outstanding operations per queue. (The sketch after this list turns these round numbers into concrete throughput and IOPS estimates.)
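
The quoted figures are easier to compare side by side. This short Python sketch redoes the arithmetic using the article’s round numbers; the values are quoted ceilings, not measurements:

```python
# Back-of-the-envelope comparison using the round numbers quoted above.
hdd_iops = 150            # typical 7,200 RPM hard drive
sata_ssd_mb_s = 550       # practical ceiling of the 6 Gb/s SATA III link
nvme_gen4_mb_s = 7000     # quoted sequential read, PCIe Gen 4 M.2 NVMe SSD

print(f"NVMe vs. SATA SSD sequential read: ~{nvme_gen4_mb_s / sata_ssd_mb_s:.1f}x")

# Interface-limited ceiling for 4 KiB random reads, ignoring device internals:
nvme_4k_iops = nvme_gen4_mb_s * 1e6 / 4096
print(f"Theoretical 4K read IOPS at 7,000 MB/s: ~{nvme_4k_iops:,.0f}")
```

Running this prints roughly a 12.7x sequential-read advantage and an interface-limited ceiling of about 1.7 million 4K IOPS, which is why per-container IOPS figures in the millions (discussed later in this article) become plausible at all.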

The Benefits of Choosing NVMe for a Data Center

  • High-performance flash memory: NVMe allows enterprises to take full advantage of high-performance flash memory. While SATA worked well with hard disk drives (HDDs) and was stretched to support the I/O needs of early SSDs, it suffers various inefficiencies when used with modern, flash-based SSDs. NVMe, on the other hand, offers multiple parallel streams of data that allow far fuller utilization of CPU and GPU resources.
  • Division and streamlining of data: Built for speed, NVMe allows data to be divided across many queues and written in parallel. This gives NVMe multiple times the bandwidth of older interfaces, native multicore support and marked improvements in latency.
  • Supports different form factors and connections: The specification also supports different form factors and connections allowing a versatile mix of flash-based options for any storage platform. NVMe comes with several commands that allow direct communication between NVMe hosts and SSDs. This creates quicker interfaces and streamlined workflows for writing and reading data on solid-state storage. PCIe and NVMe eliminate bottlenecks arising from inefficient SATA buses, improving the storage and management of data while optimizing system performance.
  • Reduced power consumption: With quicker read/write times, data centers that embrace NVMe see a reduction in power consumption. Data persisted in flash memory is nonvolatile, which reduces the risk of data loss. NVMe also relies on solid-state components with no moving parts, producing little noise or heat. This minimizes wear and tear, allowing fuller utilization of investment in data center infrastructure.

Popular NVMe Use Cases

Some use cases for which NVMe may be suitable are:

Bottlenecked Databases: Any application that is performance-constrained by the responsiveness of its databases can benefit greatly from adopting NVMe. NVMe flash media connected directly to application nodes, or via low-overhead storage management systems, offers enhanced performance that reduces the number of servers and database systems needed for a given data set, because NVMe’s high IOPS drastically reduce database scan times. NVMe can also be used efficiently across many aspects of a database, including memory extension, temp space, logging, data access and in-memory deployment. The sketch below gives a feel for the scan-time difference.
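As a rough illustration (not a benchmark), here is the scan-time arithmetic for a naive full read of a 1 TB table at the sequential rates quoted earlier in this article; real databases add CPU, caching and indexing effects on top of this interface-limited floor:

```python
# Time for a naive full scan of a 1 TB table, purely interface-limited.
table_bytes = 1e12  # 1 TB

for name, mb_s in [("SATA SSD", 550), ("NVMe Gen4 SSD", 7000)]:
    seconds = table_bytes / (mb_s * 1e6)
    print(f"{name:>14}: ~{seconds / 60:.1f} min")
# SATA SSD comes out near 30 minutes; NVMe Gen4 near 2.4 minutes.
```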

High-Performance Computing Applications: NVMe SSDs are well-regarded for mission-critical, latency-sensitive applications such as foreign exchange trading, healthcare, video game serving and defense. By reducing data read times, calculations complete faster, enabling better and quicker decision-making.

Artificial Intelligence: AI and machine learning systems often rely on field-programmable gate arrays (FPGAs), custom integrated circuits and graphics processing units (GPUs) for data processing and decision-making. With its high bandwidth and low latency, NVMe storage connected end to end can keep such systems fed with data rather than stalled on I/O.

NVMe over Fabrics (NVMe-oF)

In 2019, with the release of Linux kernel 5.0, NVMe over TCP support emerged. This protocol promises to effectively make the storage backplane routable. The implication of this advancement is to accelerate the decomposition of storage into the operating environment, available from local disks and from high-performance NVMe-connected JBODs deployed in a resilient manner. NVMe over TCP makes the connection between servers and storage more efficient and can reduce CPU utilization for server workloads, especially when delivered in conjunction with data processing units (DPUs). NVIDIA, Fungible and other providers are investing heavily in DPUs, which promise a future in which data traffic can traverse the data center without overtaxing the CPU.
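
At the wire level, NVMe over TCP wraps command capsules and data in PDUs that each begin with an 8-byte common header. The Python sketch below packs that header; the field layout (type, flags, header length, data offset, total length) reflects my reading of the NVMe/TCP transport specification and should be treated as illustrative rather than a reference encoder:

```python
import struct

# PDU type 0x00 is the Initialize Connection Request that opens an
# NVMe/TCP association (per my reading of the transport spec).
PDU_TYPE_ICREQ = 0x00

def pack_common_header(pdu_type: int, flags: int, hlen: int,
                       pdo: int, plen: int) -> bytes:
    # One byte each for PDU type, flags, header length and PDU data
    # offset, then a 4-byte total PDU length; NVMe uses little-endian.
    return struct.pack("<BBBBI", pdu_type, flags, hlen, pdo, plen)

hdr = pack_common_header(PDU_TYPE_ICREQ, 0, 128, 0, 128)
print(hdr.hex())  # -> '0000800080000000'
```

Because these PDUs ride on ordinary TCP, any routable IP network can carry them, which is exactly what makes the storage backplane routable.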

Top NVMe Vendors

NVMe enables standardized all-flash systems by specifying a common PCIe-based interface. Various vendors, including Dell, HPE, NetApp, Pure Storage, IBM and Nimble, offer market-ready flash systems that replace traditional SATA and SAS SSDs with NVMe SSDs. These all-in-one shared storage systems are similar to prior storage systems in their fundamental architecture: rather than embracing the decomposition of storage into the operating environment via NVMe and NVMe over TCP, they provide traditional management benefits through centralization and control.

Conversely, a handful of container-attached storage solutions offer a middle ground between shared storage architectures and local storage, embracing the decomposition of storage capabilities into a mix of local disks, remotely accessed JBODs, cloud volumes and object stores. One of the most popular is OpenEBS by MayaData, an open source Cloud Native Computing Foundation (CNCF) project used by enterprises including Arista, Flipkart and Bloomberg. Mayastor, OpenEBS’s latest storage engine, is built around NVMe and NVMe over TCP and adds very little overhead relative to the performance of the underlying devices. While OpenEBS Mayastor requires neither NVMe devices nor workloads that access data via NVMe, an end-to-end deployment, from a containerized workload supporting NVMe over TCP through the low-overhead Mayastor layer to NVMe devices, will perform as close as possible to the theoretical maximum of the underlying hardware.

OpenEBS Mayastor is unique among open source storage projects in using NVMe over TCP internally. To learn more about how OpenEBS Mayastor was able to deliver less than 6% overhead versus the theoretical maximum performance of fast underlying NVMe devices from Intel, please visit this article. Mayastor is among the only solutions available that can handle a workload needing more than 1.5 million IOPS per container.

OpenEBS Mayastor provides a foundational layer that enables workloads to coalesce and control storage as needed in a declarative, Kubernetes-native way. This lets users focus on what’s important: the insights their stateful applications are meant to deliver.

If you’re interested in trying out Mayastor for yourself, instructions for how to set up your own cluster and run a benchmark like the commonly used fio can be found here. See also the benchmarking details of OpenEBS Mayastor and Intel Optane.
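
If you want to reproduce such numbers yourself, a typical approach is to drive the volume with fio. The Python wrapper below is a hedged sketch: it assumes fio is installed, uses standard fio options, and the device path /dev/nvme0n1 is a placeholder for a scratch volume you can safely test against:

```python
import json
import subprocess

def run_fio_randread(device: str, runtime_s: int = 30) -> float:
    """Run a 4K random-read fio job against `device` and return IOPS."""
    result = subprocess.run(
        ["fio", "--name=randread", f"--filename={device}",
         "--rw=randread", "--bs=4k", "--iodepth=64", "--numjobs=4",
         "--direct=1", "--time_based", f"--runtime={runtime_s}",
         "--group_reporting", "--output-format=json"],
        capture_output=True, text=True, check=True)
    # fio's JSON report lists jobs; with --group_reporting there is one
    # aggregated entry whose read section carries the measured IOPS.
    report = json.loads(result.stdout)
    return report["jobs"][0]["read"]["iops"]

# Example (destructive-read-safe device only):
# print(run_fio_randread("/dev/nvme0n1"))
```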

Featured image via Pixabay.