
Your Questions Answered – Applications Take Advantage of Persistent Memory Webcast

We hope you had time to check out our recent webcast on Applications Take Advantage of Persistent Memory. Raghu Kulkarni of Viking Technology, a member of the SNIA Solid State Storage Initiative, did a great job laying the foundation for an understanding of Persistent Memory today, just in time for the SNIA Persistent Memory Summit.

You can catch up on videos of Summit talks, along with the slides presented, here.

During the webcast, we had many interesting questions.  Now, as promised, Raghu provides the answers.  Happy reading, and we hope to see you at one of our upcoming webcasts or events.

Q.  Does NVDIMM-N encryption lower the performance levels that you presented?

A.  It typically depends on the implementation and differs from vendor to vendor. Generally speaking, Save and Restore operations will increase by a small factor – less than 10%.  Products from some vendors, like Viking, will not see a performance degradation, as it is offset by a faster transfer rate.

Q.  What are the read/write bandwidth capabilities of NVDIMM-N? How does that compare to Intel’s Persistent Memory?

A.  For Byte-addressable mode, NVDIMM-N in theory has the same high performance as DRAM, around 100ns. With the latest Linux drivers in DAX mode, NVDIMM-Ns are still expected to be better than Intel’s Persistent Memory.

Q.  On the use cases, what are the use cases when Persistent Memory is attached to an accelerator chip compared to a Processor attached setup?

A.  Mainly to accelerate performance by storing the metadata, or even the data itself, in Persistent Memory, so that a request can be acknowledged immediately without having to wait for commits to SSD/HDD. It also saves rebuild time, which is otherwise a common requirement with volatile memory.

Q.  How does BIOS/MRC work when Persistent Memory is attached to an accelerator (ASIC/FPGA/GPU) over PCIe, when trying to extend/increase the memory available to the processor?

A.  System BIOS will not detect the Persistent Memory sitting on PCIe; it only discovers Persistent Memory installed in DIMM slots. FPGA/ASIC, etc. have to build their own bottom up code to detect and present the Persistent Memory on PCIe depending on the use case.

Q.  Do we need application changes to take advantage of Persistent Memory-aware file storage? How does it compare against DAX mode?

A.  To take advantage of the low latency/high performance nature of Persistent Memory, it would be beneficial to modify the applications. However, one can still leverage the existing IO stack if modifying the application is not an option. Check out pmem.io for pre-built libraries that can be directly integrated into applications.
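As a hedged illustration of the load/store model those libraries expose, the sketch below memory-maps an ordinary file and persists a value with a plain store followed by a flush. A real persistent-memory application would map a file on a DAX-mounted filesystem and use the pmem.io (PMDK) libraries, which flush CPU caches directly to the media; the file path and page size here are illustrative assumptions.

```python
import mmap
import os
import struct
import tempfile

# Reserve one page in an ordinary file standing in for a pmem region.
path = os.path.join(tempfile.mkdtemp(), "pmem.bin")
with open(path, "wb") as f:
    f.truncate(4096)

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 4096) as m:
        # "Store": a plain in-memory write, no read()/write() syscalls.
        struct.pack_into("<Q", m, 0, 42)
        # Make the store durable (msync). With PMDK on real pmem this
        # would be pmem_persist(), flushing CPU caches to the media.
        m.flush()

# Reopen the file: the stored value survived the unmap.
with open(path, "rb") as f:
    value = struct.unpack_from("<Q", f.read(8))[0]
print(value)  # 42
```

The point is that the application manipulates persistent data with ordinary loads and stores plus an explicit persistence point, rather than routing every update through the block I/O stack.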

Q.  Should Persistent Memory usage be compared against Storage or Memory? Which is the more relevant use case for Persistent Memory?

A.  Typically, media that is Byte-addressable is called Persistent Memory (PM); however, you can also access it in Block mode. Again, depending on application needs, use case, and other system-level factors, it can be used in either mode.  However, you will find the best performance when accessing it in Byte-addressable/Load-Store mode.


Innovating File System Architectures with NVMe

It’s exciting to see the recent formation of the Solid State Drive Special Interest Group (SIG) here in the SNIA Solid State Storage Initiative.  After all, everyone appreciates the ability to totally geek out about the latest drive technology and software for file systems.  Right? Hey, where’s everyone going? We have vacation pictures with the dog we stored that we want to show…

Solid state storage has long found its place with those seeking greater performance in systems, especially where smaller or more random block/file transfers are prevalent.  The single-system opportunity with NVMe drives is broad, and pretty much unquestioned by those building systems for modern IT environments. Cloud, likewise, has found use of the technology where single-node performance makes a broader deployment relevant.

There have been many efforts to build the case for solid state in networked storage.  Where storage and computation combine — for instance in a large map/reduce application — there’s been significant advantage, especially in the area of sustained data reads.  This usually comes at a scalar cost, where additional systems are needed for capacity. Nonetheless, it is worth finding cases where non-volatile memory enhances infrastructure deployment for storage or analytics.  Yes, analytics is infrastructure these days, deal with it.

Seemingly independent of the hardware trends, the development of new file systems has provided significant innovation.  Notably, heavily parallel file systems have the ability to serve a variety of network users in specialized applications or appliances.  Much of the work has focused on development of the software or base technology rather than delivering a broader view of either performance or applicability.  Therefore, a paper such as this one on building a Lustre file system using NVMe drives is a welcome addition to the case for both solid state storage and revolutionary file systems that move from specific applications to more general availability.

The paper shows how to build a small (half-rack) cluster of storage to support the Lustre file system, and it also adds Dell’s VxFlex OS implemented as a software-defined storage solution.  This has the potential to take an HPC-focused product like Lustre and drive broader market availability for a high-performance solution. The combination of read/write performance, easy adoption across the broader enterprise, and a relatively small footprint shows new promise for innovation.

The opportunity for widespread delivery of solid state storage using NVMe and software innovation in the storage space is ready to move the datacenter to new and more ambitious levels.  The SNIA 2019 Storage Developer Conference  is currently open for submissions from storage professionals willing to share knowledge and experience.  Innovative solutions such as this one are always welcome for consideration.

Hacking with the U

New Capability in Familiar Places

When it comes to persistent memory, many application developers initially think of change as hard work that likely yields incremental result.  It’s perhaps a better idea to look at the capability that’s new, but that’s already easily accessible using the methods that are in place today.  It’s not that enabling persistent memory is effortless, it’s more that normal code improvement can take advantage of the new features in the standard course of development.

The concept of multiple memory tiers is ingrained in nearly every programming model.  While the matrix of possibility can get fairly complex, it’s worth looking at three variables of the memory model.  The first is the access type, either via load/store or block operation. The second is the latency or distance from the processing units; in this case the focus would be on the DIMM.  The last would be memory persistence.
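The three variables above can be sketched as a small classification. The tiers listed and their latency figures are rough, illustrative assumptions rather than measured values.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MemoryTier:
    name: str
    access: str        # access type: "load/store" or "block"
    latency_ns: int    # rough distance from the processing units
    persistent: bool   # does data survive power loss?

# Example tiers; latencies are order-of-magnitude illustrations only.
TIERS = [
    MemoryTier("DRAM DIMM", "load/store", 100,        False),
    MemoryTier("NVDIMM-N",  "load/store", 100,        True),
    MemoryTier("NVMe SSD",  "block",      100_000,    True),
    MemoryTier("HDD",       "block",      10_000_000, True),
]

# The interesting new cell in the matrix: persistent tiers reachable
# with plain loads and stores at DIMM-class latency.
pm = [t.name for t in TIERS if t.persistent and t.access == "load/store"]
print(pm)  # ['NVDIMM-N']
```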

Adding persistence to the DIMM tier of memory provides opportunity to programmers in a variety of ways.  Typically, this tier is used for most of the program flow, while data is eventually moved to a farther tier such as disk or network for persistence.  Allocating the majority of data to a low-latency tier like a DIMM has significant potential.

An example of this in the marketplace would be SAP’s HANA in-memory database.  However, it’s less well-known that more traditional database products in the same category have built-in methodologies for moving data that is repeatedly accessed into the DIMM tier, later committing changes to storage via background processes.  It’s likely that adding persistence to DIMMs in volume would be both valuable and also architecturally possible in a short period of development time.

One way that this process is simplified for developers is the fact that the SNIA NVM Programming Model for DIMM-based persistence incorporates both load/store and block access modes.   Developers already familiar with using SSD over rotating media — that would be a fourth memory vector, deal with the ambiguity — would be able to see some incremental performance and potentially some system design simplification.  Those already using memory for data storage could utilize better recovery options as well as explore changes that high-performance storage could bring.

Join other developers on Wednesday, January 23rd at the SNIA Persistent Memory Programming Hackathon to explore options for how your software can take advantage of this new opportunity. Complimentary registration is available at this link.

Opportunity for Persistent Memory is Now

It’s very rare that there is a significant change in computer architecture, especially one that is nearly immediately pervasive across the breadth of a market segment.  It’s even more rare when a fundamental change such as this is supported in a way that software developers can quickly adapt to existing software architecture. Most significant transitions require a ground-up rethink to achieve performance or reliability gains, and the cost-benefit analysis generally pushes a transition to the new thing to be measured in multiple revisions as opposed to one big jump.

In the last decade the growth of persistent memory has bucked this trend.  The introduction of the solid-state disk made an immediate impact on existing software, especially in the server market.  Any program that relied on multiple, small-data, read/write cycles to disk recognized significant performance increases. In cases such as multi-tiered databases, the software found a “new tier” of storage nearly automatically and started partitioning data to it.  In an industry where innovation takes years, improvement took a matter of months to proliferate across new deployments.
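A hypothetical sketch of that tier-partitioning behavior: hot records are served from a small fast tier (think DRAM or NVDIMM) while cold records are committed down to a slower backing tier (think SSD or HDD). The class and capacity below are illustrative, not any particular database's design.

```python
from collections import OrderedDict

class TieredStore:
    """Toy two-tier store: an LRU fast tier over a slow backing tier."""

    def __init__(self, fast_capacity=2):
        self.fast = OrderedDict()   # small, fast tier (LRU order)
        self.slow = {}              # large, slow backing tier
        self.capacity = fast_capacity

    def put(self, key, value):
        self.fast[key] = value
        self.fast.move_to_end(key)
        # Spill the coldest records down to the slow tier.
        while len(self.fast) > self.capacity:
            cold_key, cold_val = self.fast.popitem(last=False)
            self.slow[cold_key] = cold_val

    def get(self, key):
        if key in self.fast:              # fast-tier hit
            self.fast.move_to_end(key)
            return self.fast[key]
        value = self.slow[key]            # slow-tier hit: promote upward
        self.put(key, value)
        return value

s = TieredStore(fast_capacity=2)
s.put("a", 1); s.put("b", 2); s.put("c", 3)   # "a" spills to the slow tier
print("a" in s.slow, s.get("a"))  # True 1
```

Widening the fast tier with persistent DIMMs is exactly the kind of change such software absorbs with little redesign: the partitioning logic stays the same, only the tier gets larger and durable.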

While the SSD is now a standard consideration, there is unexplored opportunity in solid-state storage.  The NVDIMM form factor has been in existence for quite some time, providing data persistence significantly closer to the processing units in the modern server and workstation.  Many developers, however, are not aware that programming models already exist to easily incorporate some simple performance and reliability gains, both for byte and block access, in programs.  Moreover, new innovations in persistent memory are on the horizon that will increase the density and performance of DIMM form factors.

Perhaps it’s time that more software architects were working on adopting this exciting technology.  The barriers to innovation are very low, and the opportunity is significant. Over the year 2019, SNIA will be sponsoring the delivery of several workshops dedicated to opening up persistent memory programming to the developer community.  The first of these will be a Persistent Memory Programming Hackathon at the Hyatt Regency Santa Clara, CA on January 23, 2019, the day before the SNIA Persistent Memory Summit.  Developers will have the opportunity to work with experienced software architects to understand how to quickly adapt code to use new persistent memory modes in a hackathon format.  Learn more and register at this link.

Don’t miss the opportunity to move on a strategic software inflection point ahead of the competition.  Consider attending the 2019 SNIA Persistent Memory Summit and exploring the opportunity with persistent memory.

Exceptional Agenda – and a Hackathon – Highlight the 2019 SNIA Persistent Memory Summit

SNIA’s 7th annual Persistent Memory Summit – January 24, 2019 at the Hyatt Regency Santa Clara, CA – delivers a far-reaching agenda exploring exciting new topics with experienced speakers:

  • Paul Grun of OpenFabrics Alliance and Cray on the Characteristics of Persistent Memory
  • Stephen Bates of Eideticom, Neal Christiansen of Microsoft, and Eric Kaczmarek of Intel on Enabling Persistent Memory through OS and Interpreted Languages
  • Adam Roberts of Western Digital on the Mission Critical Fundamental Architecture for Numerous In-memory Databases
  • Idan Burstein of Mellanox Technologies on Making Remote Memory Persistent
  • Eden Kim of Calypso Systems on Persistent Memory Performance Benchmarking and Comparison

And much more!  Full agenda and speaker bios at http://www.snia.org/pm-summit.

Registration is complimentary and includes the opportunity to tour demonstrations of persistent memory applications available today from SNIA Persistent Memory and NVDIMM SIG, SMART Modular, AgigA Tech, and Viking Technology over lunch, at breaks, and during the evening Networking Reception.  Additional sponsorship opportunities are available to SNIA and non-SNIA member companies – learn more.

New Companion Event to the Summit –

Persistent Memory Programming Hackathon

Wednesday January 23, 2019 9:00 am – 2:00 pm

Join us for the inaugural PM Programming Hackathon on the day before the Summit – a half-day program designed to give software developers an understanding of the various tiers and modes of Persistent Memory and the existing methods available to access them.  Learn more and register at https://www.snia.org/pm-summit/hackathon

Emerging Memory Questions Answered

With a topic like Emerging Memory Poised to Explode, no wonder this SNIA Solid State Storage Initiative webcast generated so much interest!  Our audience had some great questions, and, as promised, our experts Tom Coughlin and Jim Handy provide the answers in this blog. Read on, and join SNIA at the Persistent Memory Summit January 24, 2019 in Santa Clara CA.  Details and complimentary registration are at www.snia.org/pm-summit.

Q. Can you mention one or two key applications leading the effort to leverage Persistent Memory?

A. Right now the main applications for Persistent Memory are in Storage Area Networks (SANs), where NVDIMM-Ns (Non-Volatile Dual In-line Memory Modules) are being used for journaling.  SAP HANA, SQL Server, Apache Ignite, Oracle RDBMS, eXtremeDB, Aerospike, and other in-memory databases are undergoing early deployment with NVDIMM-N and with Intel’s Optane DIMMs in hyperscale datacenters.  IBM is using Everspin Magnetoresistive Random-Access Memory (MRAM) chips for higher-speed functions (write cache, data buffer, streams, journaling, and logs) in certain Solid State Drives (SSDs), following a lead taken by Mangstor.  Everspin’s STT MRAM DIMM is also seeing some success, but the company’s not disclosing a lot of specifics.

Q. I believe that anyone who can ditch the batteries for NVDIMM support will happily pay a mark-up on 3DXP DIMMs should Micron offer them.

A: Perhaps that’s true.  I think that Micron, though, is looking for higher-volume applications.  Micron is well aware of the size of the NVDIMM-N market, since the company is an important NVDIMM supplier.  Everspin is probably also working on this opportunity, since its STT MRAM DIMM is similar, although at a significantly higher price than Dynamic Random Access Memory (DRAM).

Volume is the key to more applications for 3DXPoint DIMMs and any other memory technology.  It may be that the rise of Artificial Intelligence (AI) applications will help drive the greater use of many of these fast Non-Volatile Memories.

Q.  Any comments on HPE’s Memristor?

A: HPE went very silent on the Memristor at about the same time that the 3D XPoint Memory was introduced.  The company explained in 2016 that the first generation of “The Machine” would use DRAM instead of the Memristor.  This leads us to suspect that 3D XPoint turned some heads at HPE.  One likely explanation is that HPE by itself would have a very difficult time reaching the scale required to bring the Memristor’s cost to the necessary level to justify its use.

Q. Do you expect NVDIMM-N will co-exist into the future with other storage class memories because of its speed and essentially unlimited endurance of DRAM?

A: Yes.  The NVDIMM-N should continue to appeal to certain applications, especially those that value its technical attributes enough to offset its higher-than-DRAM price.

Q. What are Write/Erase endurance limitations of PCM and STT? (vis a vis DRAM’s infinite endurance)?

A: Intel and Micron have never publicly disclosed their endurance figures for 3D XPoint, although Jim Handy has backed out numbers in his Memory Guy blog (http://TheMemoryGuy.com/examining-3d-xpoints-1000-times-endurance-benefit/).  His calculations indicate an endurance of more than 30K erase/write cycles, but the number could be significantly lower than this since SSD controllers do a good job of reducing the number of writes that the memory chip actually sees.  The SSD Guy blog has a series on this (http://thessdguy.com/how-controllers-maximize-ssd-life/), also available as a SNIA SSSI TechNote.  Everspin’s EMD3D256M STT MRAM specification lists an endurance of 10^10 cycles.
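To put a cycle count like that in perspective, a back-of-the-envelope lifetime calculation follows. The module capacity, sustained write rate, and write amplification below are assumed workload numbers for illustration, not vendor specifications.

```python
# Rough device lifetime from an endurance figure:
#   lifetime = (capacity * cycles / write_amplification) / write_rate
capacity_bytes   = 512 * 10**9   # assumed 512 GB module
endurance_cycles = 30_000        # ~30K erase/write cycles (estimate above)
write_rate_bps   = 1 * 10**9     # assumed sustained 1 GB/s of writes
write_amp        = 1.0           # assume the controller adds no extra writes

total_writable_bytes = capacity_bytes * endurance_cycles / write_amp
lifetime_seconds = total_writable_bytes / write_rate_bps
lifetime_years = lifetime_seconds / (365 * 24 * 3600)
print(round(lifetime_years, 1))  # 0.5
```

Under this (deliberately punishing) constant 1 GB/s workload the module wears out in about half a year, which is why controller-level write reduction and the true cycle count matter so much for memory-class usage.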

Q. Your thoughts on Nanotube RAM (NRAM)?

A: Although the nanotube memory is very interesting it is only one member in a sea of contenders for the Persistent Memory crown.  It’s very difficult to project the outcome of a device that’s not already in volume production.

Q. Will Micron commercialize 3D XPoint? I do not see them in the market as much as Intel on Optane.

A: Micron needs a clear path to profitability to rationalize entering the 3D XPoint market whereas Intel can justify losing money on the technology.  Learn why in an upcoming post on The Memory Guy blog.

Thanks again to the bearded duo and their moderator, Alex McDonald, SNIA Solid State Storage Initiative Co-Chair!  Bookmark the SNIA Brighttalk webcast link for more great webcasts in 2019!

Remote Persistent Memory: It Takes a Village (or Perhaps a City)

By Paul Grun, Chair, OpenFabrics Alliance and Senior Technologist, Cray, Inc.

Remote Persistent Memory (RPM) is rapidly emerging as an important new technology. But understanding a new technology, and grasping its significance, requires engagement across a wide range of industry organizations, companies, and individuals. It takes a village, as they say.

Technologies that are capable of bending the arc of server architecture come along only rarely. It’s sometimes hard to see one coming because it can be tough to discern between a shiny new thing, an insignificant evolution in a minor technology, and a serious contender for the Technical Disrupter of the Year award. Remote Persistent Memory is one such technology, the ultimate impact of which is only now coming into view. Two relatively recent technologies serve to illustrate the point: The emergence of dedicated, high performance networks beginning in the early 2000s and more recently the arrival of non-volatile memory technologies, both of which are leaving a significant mark on the evolution of computer systems. But what happens when those two technologies are combined to deliver access to persistent memory over a fabric? It seems likely that such a development will positively impact the well-understood memory hierarchies that are the basis of all computer systems today. And that, in turn, could cause system architects and application programmers to re-think the way that information is accessed, shared, and stored. To help us bring the subject of RPM into sharp focus, there is currently a concerted effort underway to put some clear definition around what is shaping up to be a significant disrupter.

For those who aren’t familiar, Remote Persistent Memory refers to a persistent memory service that is accessed over a fabric or network. It may be a service shared among multiple users, or dedicated to one user or application. It’s distinguished from local Persistent Memory, which refers to a memory device attached locally to the processor via a memory or I/O bus, in that RPM is accessed via a high performance switched fabric. For our purposes, we’ll further refine our discussion to local fabrics, neglecting any discussion of accessing memory over the wide area.

Most important of all, Persistent Memory, including RPM, is definitely distinct from storage, whether that is file, object or block storage. That’s why we label this as a ‘memory’ service – to distinguish it from storage.  The key distinction is that the consumer of the service recognizes and uses it as it would any other level in the memory hierarchy. Even though the service could be implemented using block or file-oriented non-volatile memory devices, the key is in the way that an application accesses and uses the service. This isn’t faster or better storage, it’s a whole different kettle of fish.

So how do we go about discovering the ultimate value of a new technology like RPM? So far, a lively discussion has been taking place across multiple venues and industry events. These aren’t ad hoc discussions nor are they tightly scripted events; they are taking place in a loosely organized fashion designed to encourage lots of participation and keep the ball moving forward. Key discussions on the topic have hopscotched from the SNIA’s Storage Developers Conference, to SNIA/SSSI’s Persistent Memory Summit, to the OpenFabrics Alliance (OFA) Workshop and others. Each of these industry events has given us an opportunity for the community at large to discuss and develop the essential ideas surrounding RPM. The next installment will occur at the upcoming Flash Memory Summit in August where there will be four sessions all devoted to discussing Remote Persistent Memory.

Having frequent industry gatherings is a good thing, naturally, but that by itself doesn’t answer the question of how we go about progressing a discussion of Remote Persistent Memory in an orderly way.  A pretty clear consensus has emerged that RPM represents a new layer in the memory hierarchy and therefore the best way to approach it is to take a top-down perspective. That means starting with an examination of the various ways that an application could leverage this new player in the memory hierarchy. The idea is to identify and explore several key use cases. Of course, the technology is in its early infancy, so we’re relying on the best instincts of the industry at large to guide the discussion.

Once there is a clear idea of the ways that RPM could be applied to improve application performance, efficiency or resiliency, it’ll be time to describe how the features of an RPM service are exposed to an application. That means taking a hard look at network APIs to be sure they export the functions and features that applications will need to access the service. The API is key, because it defines the ways that an application actually accesses a new network service. Keep in mind that such a service may or may not be a natural fit to existing applications; in some cases, it will fit naturally meaning that an existing application can easily begin to utilize the service to improve performance or efficiency. For other applications, more work will be needed to fully exploit the new service.

Notice that the development of the API is being driven from the top down by application requirements. This is a clear break from traditional network design, where the underlying network and its associated API are defined roughly in tandem. Contrast that to the approach being taken with RPM, where the set of desired network characteristics is described in terms of how an application will actually use the network. Interesting!

Armed with a clear sense of how an application might use Remote Persistent Memory and the APIs needed to access it, now’s the time for network architects and protocol designers to deliver enhanced network protocols and semantics that are best able to deliver the features defined by the new network APIs. And it’s time for hardware and software designers to get to work implementing the service and integrating it into server systems.

With all that in mind, here’s the current state of affairs for those who may be interested in participating. SNIA, through its NVM Programming Technical Working Group, has published a public document describing one very important use case for RPM – High Availability. The document describes the requirements that the SNIA NVM Programming Model – first released in December 2013 — might place on a high-speed network.  That document is available online. In keeping with the ‘top-down’ theme, SNIA’s work begins with an examination of the programming models that might leverage a Remote Persistent Memory service, and then explores the resulting impacts on network design. It is being used today to describe enhancements to existing APIs including both the Verbs API and the libfabric API.

In addition, SNIA and the OFA have established a collaboration to explore other use cases, with the idea that those use cases will drive additional API enhancements. That collaboration is just now getting underway and is taking place during open, bi-weekly meetings of the OFA’s OpenFabrics Interfaces Working Group (OFIWG). There is also a mailing list dedicated to the topic to which you can subscribe by going to www.lists.openfabrics.org and subscribing to the Ofa_remotepm mailing list.

And finally, we’ll be discussing the topic at the upcoming Flash Memory Summit, August 7-9, 2018.  Just go to the program section and click on the Persistent Memory major topic, and you’ll find a link to PMEM-202-1: Remote Persistent Memory.

See you in Santa Clara!

Exceptional Agenda on Tap for 2018 Persistent Memory Summit

Persistent Memory (PM) has made tremendous strides since SNIA’s first Non-Volatile Memory Summit in 2013. With a name change to Persistent Memory Summit in 2017, that event continued the buzz with 350+ attendees and a focus turning to applications.

Now in 2018, the agenda for the SNIA Persistent Memory Summit, upcoming January 24 at the Westin San Jose, reflects the integration of PM in a number of organizations. Zvonimir Bandic of Western Digital Corporation will kick off the day exploring the “exabyte challenge” of persistent memory centric architectures and memory fabrics. The fairly new frontier of Persistent Memory over Fabrics (PMoF) returns as a topic with speakers from Open Fabrics Alliance, Cray, Eideticom, and Mellanox. Performance is always evolving, and Micron Technologies, Enmotus, and Calypso Systems will give their perspectives. And the day will dive into futures of media with speakers from Nantero and Spin Transfer Technologies, and a panel led by HPE will review new interfaces and how they relate to PM.

A highlight of the Summit will be a panel on applications and cloud with a PM twist, featuring Dreamworks, GridGain, Oracle, and Aerospike. Leading up to that will be a commentary on file systems and persistent memory from NetApp and Microsoft, and a discussion of virtualization of persistent memory presented by VMware.   SNIA found a number of users interested in persistent memory support in both Windows Server 2016 and Linux at recent events, so Microsoft and Linux will update us on the latest developments. Finally, you will want to know where the analysts weigh in on PM, so Coughlin Associates, Evaluator Group, Objective Analysis, and WebFeet Research will add their commentary.

During breaks and the complimentary lunch, you can tour Persistent Memory demos from the SNIA NVDIMM SIG, SMART Modular, AgigA Tech, Netlist, and Viking Technology.

Make your plans to attend this complimentary event by registering here: http://www.snia.org/pm-summit. See you in San Jose!

The OpenFabrics Alliance and the Pursuit of Efficient Access to Persistent Memory over Fabrics

 

Guest Columnist:  Paul Grun, Advanced Technology Development, Cray, Inc. and Vice-Chair, Open Fabrics Alliance (OFA)

Earlier this year, SNIA hosted its one-day Persistent Memory Summit in San Jose; it was my pleasure to be invited to participate by delivering a presentation on behalf of the OpenFabrics Alliance.  Check it out here.

The day-long Summit program was chock full of deeply technical, detailed information about the state of the art in persistent memory technology, coupled with previews of some possible future directions this exciting technology could conceivably take.  The Summit played to a completely packed house, including an auxiliary room equipped with a remote video feed.  Quite the event!