Computer memory, aka RAM (random access memory), comes with pluses and minuses. It’s fast, and because it’s snuggled up to the CPU, there’s not a lot of latency. But it’s volatile – when the power goes away, its contents are lost.
Flash storage like SSDs, on the other hand, is non-volatile – it retains its data when the power goes off – and it is certainly quicker than spinning hard drives, but it still suffers from latency: it can take a while for data to get from storage to memory, where the work happens.
Wouldn’t it be nice if we could have the best of both worlds: low latency, non-volatile storage that hangs out close to the CPU? That would make any processor-intensive workloads hum, since they wouldn’t be handicapped waiting for data to wend its way from storage.
Hewlett Packard Enterprise (HPE) has been busily working on this problem, and has just released a partial solution: HPE Persistent Memory. Designed to accelerate database and analytics workloads, HPE Persistent Memory combines the HPE 8GB NVDIMM with other emerging technologies to deliver what the company calls breakthrough performance and reliability for data-intensive workloads such as Online Transaction Processing (OLTP) on Microsoft SQL Server. It’s not cheap, though, at $1,232 CDN for an 8GB NVDIMM module – but then again, what new tech is cheap?
HPE isn’t the first to produce NVDIMMs. Micron Technology is also building them. However, HPE is integrating them into a couple of models of its ProLiant Gen9 servers, and is working with Microsoft to use the technology to boost server and database performance. Nothing earlier than ProLiant Gen9 will support NVDIMM, according to HPE, which says that drivers are forthcoming as a download for Windows Server 2012 R2, and will be integrated into Windows Server 2016.
HPE’s tests show impressive gains on HPE ProLiant DL360 and DL380 Gen9 servers: up to a 4X increase in transaction performance, more than 2X faster database logging for Microsoft SQL Server, more than 4X faster SQL Server cluster replication when the log is moved from NAND flash to HPE NVDIMMs, and more than 2X faster transaction rates in Linux applications using HPE NVDIMMs. The company has demonstrated performance improvements in other areas, such as Hadoop and search, as well.
But what the heck is NVDIMM? Micron explains it nicely on its product page:
“NVDIMM is a non-volatile persistent memory solution that combines NAND flash, DRAM, and an optional power source into a single memory subsystem. The solution delivers DRAM-like latencies and can back up the data it handles, providing the ability to restore quickly if power is interrupted.
NVDIMMs operate in the DRAM memory slots of servers to handle critical data at DRAM speeds. In the event of a power fail or system crash, an onboard controller safely transfers data stored in DRAM to the onboard non-volatile memory, thereby preserving data that would otherwise be lost. When the system stability is restored, the controller transfers the data from the NAND back to the DRAM, allowing the application to efficiently pick up where it left off.”
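The save/restore cycle Micron describes can be pictured as a toy simulation. This is purely an illustrative sketch – the class and method names below are invented for the example and don’t correspond to any real NVDIMM API; on actual hardware the controller does this work transparently, below the operating system:

```python
# Illustrative simulation of the NVDIMM behavior described above:
# on power loss, the controller drains DRAM into NAND flash using its
# backup power source; on restore, it copies the image back into DRAM.
class SimulatedNVDIMM:
    def __init__(self, size: int):
        self.dram = bytearray(size)  # fast, volatile working memory
        self.nand = bytes(size)      # slower, non-volatile backing store

    def power_fail(self):
        # Controller saves the DRAM image to flash before backup power runs out.
        self.nand = bytes(self.dram)
        self.dram = bytearray(len(self.dram))  # volatile contents are lost

    def power_restore(self):
        # Controller copies the saved image back, so the application
        # can pick up where it left off.
        self.dram[:] = self.nand


m = SimulatedNVDIMM(16)
m.dram[0:5] = b"hello"   # application writes at DRAM speed
m.power_fail()           # lights out: DRAM is wiped, flash holds the copy
m.power_restore()        # data is back in DRAM after the outage
```

The key point the simulation captures is that the application only ever talks to DRAM; the flash copy exists solely to survive the outage.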
In other words, it works kind of like the caching controllers of days gone by, where frequently used data was held in memory protected by a battery backup, to render it quickly accessible.
HPE tells me that the NVDIMM technology is similar to what it is using in its next iteration of The Machine, a new computing architecture that uses a huge pool of memory for everything – both storage and RAM. It will ultimately rely on photonics, and on a kind of non-volatile memory known as memristors, but will go through multiple intermediate steps as the various technologies come into being and are perfected.
Persistent memory is important for both The Machine and for systems existing today because it addresses the knotty problem of latency. Many systems are capable of much more processing than they actually do. They sit and twiddle their electronic thumbs much of the time because data can’t make it from storage, be it hard disk or flash, quickly enough to keep the CPU busy. Yet in areas such as online transaction processing that demand super-speedy data access, it isn’t safe to hold data only in main memory, where the CPU can get at it quickly, because of the risk that a power blip could corrupt or delete it.
NVDIMMs address the problem by offering a hybrid approach. They contain speedy volatile DRAM, slower non-volatile flash, and a battery-backed controller that shuffles data safely to the flash should the power stutter or fail, and that has the smarts to manage data movement between the two memory technologies. Currently, HPE allows up to 128 GB of NVDIMMs in a single server, plus a top-up of any supported amount of standard memory.
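From an application’s point of view, the payoff is writing something like a transaction log at memory speed while still getting durability. As a rough sketch – simulating the persistent region with an ordinary memory-mapped file, since real persistent-memory programming uses platform-specific DAX mappings and flush instructions – the filename, record format, and helper below are all invented for the example:

```python
import mmap
import struct

LOG_SIZE = 4096
PATH = "txlog.bin"  # stand-in for a file on a persistent-memory device

# Create and map the (simulated) persistent region.
with open(PATH, "wb") as f:
    f.write(b"\x00" * LOG_SIZE)
f = open(PATH, "r+b")
log = mmap.mmap(f.fileno(), LOG_SIZE)

def append_record(offset: int, payload: bytes) -> int:
    """Write a length-prefixed record at memory speed, then flush it.

    The flush() is the durability point - the rough analogue of what the
    NVDIMM controller guarantees in hardware on power failure.
    """
    log[offset:offset + 4] = struct.pack("<I", len(payload))
    log[offset + 4:offset + 4 + len(payload)] = payload
    log.flush()
    return offset + 4 + len(payload)

off = append_record(0, b"BEGIN tx42")
off = append_record(off, b"COMMIT tx42")
```

The design point this illustrates is why databases care: the commit path becomes a couple of memory writes and a flush, rather than a round trip through the block-storage stack.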
Persistent memory is a technology that is long overdue. Computer architecture has remained more or less the same for decades, with vendors working to speed up the individual components without tweaking the way they work together. They do so for good reasons – existing software and hardware rely on the old architecture, and would require significant (and expensive) work to alter. Yet we’re approaching a time when the industry has to figure out how to make better hardware with improved architecture work with existing and new software alike. The old architectures just won’t be able to keep up.