What is eBPF, and Why Does it Matter for Computational Storage?

Recently, a question came up in the SNIA Computational Storage Special Interest Group on new developments in a technology called eBPF and how they might relate to computational storage. To learn more, SNIA on Storage sat down with Eli Tiomkin, SNIA CS SIG Chair with NGD Systems; Matias Bjørling of Western Digital; Jim Harris of Intel; Dave Landsman of Western Digital; and Oscar Pinto of Samsung.

SNIA On Storage (SOS):  The eBPF.io website defines eBPF, extended Berkeley Packet Filter, as a revolutionary technology that can run sandboxed programs in the Linux kernel without changing kernel source code or loading kernel modules.  Why is it important?

Dave Landsman (DL): eBPF emerged in Linux as a way to do network filtering, and enables the Linux kernel to be programmed.  Intelligence and features can be added to existing layers, and there is no need to add additional layers of complexity.

SNIA On Storage (SOS):  What are the elements of eBPF that would be key to computational storage? 

Jim Harris (JH):  The key to eBPF is that it is architecturally agnostic; that is, applications can download programs into a kernel without having to modify the kernel.  Computational storage allows a user to do the same types of things – develop programs on a host and have the controller execute them without having to change the firmware on the controller.

Using a hardware-agnostic instruction set is preferable to having an application download x86 or ARM code based on the architecture it is running on.

DL:  It is much easier to establish a standard ecosystem with architecture independence.  Instead of an application needing to download x86 or ARM code based on the architecture, you can use a hardware-agnostic instruction set that the kernel can interpret and then translate based on the processor. With this “agnostic code,” computational storage would not need to know which processor is running on an NVMe device.

SOS: How has the use of eBPF evolved?

JH:  In the Linux kernel, eBPF began as a way to capture and filter network packets.  It is more efficient to run programs directly in the kernel I/O stack than to return packet data to user space, operate on it there, and then send the data back to the kernel.  Over time, eBPF use has evolved to additional use cases.

SOS:  What are some use case examples?

DL: One of the use cases is performance analysis. For example, eBPF can be used to measure things such as latency distributions for file system I/O, details of storage device I/O and TCP retransmits, and blocked stack traces and memory.

Matias Bjørling (MB): Other examples in the Linux kernel include tracing and gathering statistics.  However, while the eBPF programs in the kernel are fairly simple and can be checked by the kernel’s eBPF verifier, computational storage programs are more complex and longer running. Thus, there is a lot of ongoing work to explore how to efficiently apply eBPF to computational programs.

For example: what is the right set of run-time restrictions to be defined by the eBPF VM, whether any new instructions need to be defined, and how to make the program run as close as possible to the native instruction set of the target hardware.

JH: One of the big use cases involves data analytics and filtering. A common data flow for data analytics involves large database table files that are often compressed and encrypted.  Without computational storage, you read the compressed and encrypted data blocks to the host, decompress and decrypt the blocks, and maybe do some filtering operations like a SQL query.  All this, however, consumes a lot of extra host PCIe, host memory, and cache bandwidth, because you are reading the data blocks and doing all these operations on the host.  With computational storage, you can tell the SSD to read data and transfer it not to the host but to memory buffers within the SSD.  The host can then tell the controller to run a fixed-function program, such as decrypting the data and putting it in another local location on the SSD, and then run a user-supplied program, such as an eBPF program, to do some filtering operations on that local decrypted data.  In the end, you transfer only the filtered data to the host.  You are doing the compute closer to the storage, saving memory and bandwidth.
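To make that data flow concrete, below is a minimal sketch, in C, of the kind of filtering function a host might supply for execution against the decrypted, device-local data. The record layout, function name, and calling convention are illustrative assumptions only; defining the real interface between the host, NVMe, and the downloaded program is exactly what the standards work discussed later in this interview addresses.

    /* Hypothetical record layout and entry point, for illustration only. */
    #include <stdint.h>

    struct record {
        uint32_t key;
        uint32_t value;
    };

    /* Count records whose value exceeds a threshold, scanning a buffer of
     * decrypted, decompressed records held in device-local memory. */
    uint64_t filter_records(const struct record *recs, uint64_t nrecs,
                            uint32_t threshold)
    {
        uint64_t matches = 0;

        for (uint64_t i = 0; i < nrecs; i++) {
            if (recs[i].value > threshold)
                matches++;
        }
        return matches;
    }

Source like this can be compiled into an eBPF object with an existing toolchain (for example, clang -O2 -target bpf -c filter.c -o filter.o), which is the toolchain reuse Jim describes next.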

SOS:  How does using eBPF for computational storage look the same as its use in the Linux kernel?  How does it look different?

JH: There are two parts to this answer.  Part 1 is the eBPF instruction set, with registers and how eBPF programs are assembled.  What excites us about computational storage and eBPF is that the instruction set is common, and there are already existing toolchains that support eBPF.  You can take a C program and compile it into an eBPF object file, which is huge.  If you add computational storage aspects to standards like NVMe, where developing unique toolchain support can take a lot of work, you can now leverage what is already there in the eBPF ecosystem.

Part 2 of the answer centers around the Linux kernel’s restrictions on what an eBPF program is allowed to do when downloaded. For example, the eBPF instruction set allows for unbounded loops, and toolchains such as gcc will generate eBPF object code with unbounded loops, but the Linux kernel will not permit those to execute – it rejects the program. These restrictions are manageable when doing packet processing in the kernel.  The kernel knows a packet’s specific data structure and can verify that data is not being accessed outside the packet.  With computational storage, you may want to run an eBPF program that operates on a set of data with a very complex data structure – perhaps unbounded arrays or multiple levels of indirection.  Applying the Linux kernel’s verification rules to computational storage would limit or even prevent processing this type of data.
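As a schematic illustration of the loop restriction Jim describes (not code from any standard or product), consider the two C fragments below. Recent Linux kernels accept loops whose bounds the verifier can prove, but still reject loops it cannot prove will terminate; a real in-kernel program would also have to satisfy the verifier’s memory-access checks, which is exactly the difficulty with complex, indirection-heavy data.

    /* Likely rejected by the Linux kernel verifier: the trip count depends
     * on data values the verifier cannot bound. */
    int sum_until_zero(const int *data)
    {
        int sum = 0;
        int i = 0;

        while (data[i] != 0) {   /* unbounded as far as the verifier knows */
            sum += data[i];
            i++;
        }
        return sum;
    }

    /* Acceptable form: a constant bound the verifier can reason about. */
    int sum_first_64(const int *data)
    {
        int sum = 0;

        for (int i = 0; i < 64; i++)
            sum += data[i];
        return sum;
    }

A computational storage device could choose a different set of run-time restrictions, which is one of the open questions raised above.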

SOS: What are some of the other challenges you are working through with using eBPF for computational storage?

MB:  We know that x86 works fast with high memory bandwidth, while other cores are slower.  We have some general compute challenges in that eBPF needs to be able to hook into today’s hardware the way we do for SSDs.  What kinds of operations make sense to offload for these workloads?  How do we define a common implementation API for all of them and build an ecosystem on top of it?  Do we need an instruction-based compiler, or a library to compile up to – and if you have it on the NVMe drive side, could you use it?  eBPF in itself is great, but getting a whole ecosystem in place, and getting all of us to agree on what provides value, will be the challenge in the long term.

Oscar Pinto (OP): eBPF in the Linux kernel today is geared more toward networking and is light on storage functionality. That may be a challenge in building a computational storage framework. We need to think through how to enhance this, given that we download and execute eBPF programs in the device. As Matias indicated, x86 is great at what it does in the host today. But if we have to work with smaller CPUs in the device, they may need help from, say, dedicated hardware or similar additional logic implemented to aid the eBPF programs. One question is how these programs would talk to that hardware.  We don’t have a setup for storage like this today, and there are a variety of storage services that can benefit from eBPF.

SOS: Is SNIA addressing this challenge?

OP: On the SNIA side we are building on program functions that are downloaded to computational storage engines.  These functions run on the engines, which are CPUs or some other form of compute tied to an FPGA, DPU, or dedicated hardware. We are defining these abstracted functionalities in SNIA today, and the SNIA Computational Storage Technical Work Group is developing a Computational Storage Architecture and Programming Model and Computational Storage APIs to address it.  The latest versions, v0.8 and v0.5, have been approved by the SNIA Technical Council and are now available for public review and comment at the SNIA Feedback Portal.

SOS: Is there an eBPF standard? Is it aligned with storage?

JH:  We have a challenge around what an eBPF standard should look like.  Today it is defined in the Linux kernel.  But if you want to incorporate eBPF into a storage standard, you need to have something specified for that storage standard.  We know the Linux kernel will continue to evolve, adding and modifying instructions. But if you have an NVMe SSD or other storage device, you have to have something set in stone – the version of eBPF that the standard supports.  We need to know what the eBPF standard will look like and where it will live.  Will standards organizations need to define something separately?

SOS:  What would you like an eBPF standard to look like from a storage perspective?

JH: We’d like an eBPF standard that can be used by everyone.  We are looking at how computational storage can be implemented in a way that is safe and secure but can also solve a diverse set of use cases.

MB:  Security will be a key part of an eBPF standard.  Programs should not access data they should not have access to.  This will need to be solved within a storage device. There are some synergies with external key management. 

DL: The storage community has to figure out how to work with eBPF and make this standard something that a storage environment can take advantage of and rely on.

SOS: Where do you see the future of eBPF?

MB:  The vision is that you can build eBPF programs and they work everywhere.  When we build new database systems and integrate eBPF programs into them, we then have embedded kernels that can be sent to any NVMe device over the wire and executed.  The cool part is that they can run anywhere on the path, so there are a lot of interesting ways to build new architectures on top of this. And together with the open systems ecosystem, we can create a body of accelerators with which we can fast-track the build-out of these ecosystems.  eBPF can put this into overdrive with use cases outside the kernel.

DL:  There may be some other environments where computational storage is being evaluated, such as WebAssembly.

JH: An eBPF runtime is much easier to put into an SSD than a WebAssembly runtime.

MB: eBPF makes more sense – it is simpler to start and build upon as it is not set in stone for one particular use case.

Eli Tiomkin (ET):  Different SSDs have different levels of constraints.  Every computational storage SSD in production, and even those in development, has unique capabilities that depend on the workload and application.

SOS:  Any final thoughts?

MB: At this point, technologies are coming together that are going to change the industry, letting us redesign storage systems around both computational storage and how we manage security in NVMe devices for these programs.  We have the perfect storm pulling things together. Exciting platforms can be built using open standards and specifications not previously available.

SOS:  Looking forward to this exciting future. Thanks to you all.

Q&A on Data Movement and Computational Storage

Recently, the SNIA Compute, Memory, and Storage Initiative hosted a live webcast “Data Movement and Computational Storage”, moderated by Jim Fister of The Decision Place with Nidish Kamath of KIOXIA, David McIntyre of Samsung, and Eli Tiomkin of NGD Systems as panelists.  We had a great discussion on new ways to look at storage, flexible computer systems, and how to put on your security hat.

During our conversation, we answered audience questions, and raised a few of our own!  Check out some of the back-and-forth, and tune in to the entire video for customer use cases and thoughts for the future.

Q:  What is the value of computational storage?

A:  Computational storage addresses latency sensitivity – you can make decisions faster at the edge and can also distribute computing to process decisions anywhere.

Q:  Why is it important to consider “data movement” with regard to computational storage?

A:  Computational storage brings a reduction in data movement to the system, along with higher efficiency when that data does move and a reduction in power that users may not yet have considered.

Q: How does power use change when computational storage is brought in?

A:  You want to “move” compute to that point in the system where operations can be accomplished where the data is “at rest”. In traditional systems, if you need to move data from storage to the host, there are power costs that may not even be currently measured.  However, if you can now run applications and not move data, you will realize that power reduction, which is more and more important with the anticipation of massive quantities of data coming in the future.

Q: Are the traditional processing/storage transistor counts the same with computational storage?

A:  With computational storage, you can put the programming where it is needed – moving the compute to that point in the system where it can achieve the work with a limited amount of overhead and networking bandwidth. Compute moves to where the data sits at rest, which is especially important with the explosion of data sets.

Q:  Does computational storage play a role in data security and privacy?

A: Security threats don’t always happen at the same time, so you need to consider a top-down holistic perspective. It will be important both today and in the future to consider new security threats because of data movement.

There is always a security risk when data is moving; however, computational storage reduces data movement significantly and can serve as a more secure way to treat data because the data is not moving as much. Computational storage allows you to lock the data, for example medical data, and process it only when needed, and if needed, in an authenticated and secure fashion.  There’s no requirement to build a whole system around this.

Q:  What are the computational storage opportunities at the edge? 

A:  We need to understand the ecosystem the computational storage device is going into. Computational storage sits at the front line of edge applications and management of edge infrastructure pieces in the cloud.  It’s a great time to embrace existing cloud policies and collaborate with customers on how policies will migrate and change to the edge.

Q: In your discussions with customers, how dynamic do they expect the sets of code running on computational storage to be? With the extremes being code never changing (installed once/updated rarely) to being different for every query or operation. Please discuss how challenges differ for these approaches.

A:  The heavy lift comes into play with the application and the system integration.  To run flexible code, customers want a simple, straightforward, and seamless programming model that enables them to run as many applications as they need and change them in an easy way without disrupting the system.  Clients are using computational storage to speed up the processing of their data with dynamic reconfiguring in cutting edge applications.  We are putting a lot of effort toward this seamless and transparent model with our work in the SNIA Computational Storage Technical Work Group.

Q:  What does computational storage mean for data in the future?

A: The infrastructure of data and data movement will drastically change in the future as edge emerges and cloud continues to grow. Using computational storage will be extremely beneficial in the new infrastructure, and we will need to work together as an ecosystem and under SNIA to make sure we are all aligned to provide the right solutions to the customer.  

Computational Storage in the Real World

Computational storage has arrived, with real world applications meeting the goal of enabling parallel computation and/or alleviating constraints on existing compute, memory, storage, and I/O.  The SNIA Computational Storage Special Interest Group has gathered examples of computational storage use cases which demonstrate improvements in application performance and infrastructure efficiency through the integration of compute resources either directly with storage or between the host and the storage. First up in the SNIA Computational Storage Demo Series are our SIG member companies Eideticom Communications and NGD Systems. Their examples demonstrate proof of computational storage concepts.  They also illustrate how SNIA and the Compute Memory and Storage Initiative (CMSI) member companies are advancing the SNIA Computational Storage Architecture and Programming Model, which defines recommended behavior for hardware and software that supports computational storage.

The NGD Systems use case highlights a Microsoft Azure IoT System running on a computational storage device. The video walks through the steps to establish connections to agents and hubs, and shows how to establish event monitors and do image analysis to store images from the web into a computational storage device.

The Eideticom Communications use case highlights transparent compression via a stacked file system and an NVMe-based computational storage processor. The video walks through the steps to mount a NoLoad file system and run sysadmin commands that read/write files to disk with compression, illustrating speed and application transparency.

We invite you to visit our Computational Storage Use Cases page for these and more examples of real world computational storage applications. Questions? Send them to askcmsi@snia.org.

Cutting Edge Persistent Memory Education – Hear from the Experts!

Most of the US is currently experiencing an epic winter.  So much for 2021 being less interesting than 2020.  Meanwhile, large portions of the world are also still locked down waiting for vaccine production.  So much for 2020 ending in 2020.  What, oh what, can possibly take our minds off the boredom?

Here’s an idea – what about some education in persistent memory programming?  SNIA and UCSD recently hosted an online conference on Persistent Programming In Real Life (PIRL), and the videos of all the sessions are now available online.  There are nearly 20 hours of content including panel discussions, academic, and industry presentations.  Recordings and PDFs of the presentations have been posted on the PIRL site as well as in the SNIA Educational Library.

In addition, SNIA is now in planning for our April 21-22, 2021 virtual Persistent Memory and Computational Storage Summit, where we’ll be featuring the latest content from the data center to the edge. Complimentary registration is now open. If you’re interested in helping us plan, or proposing content, you can contact us to provide input.

Spring will be here soon, with some freedom from cold, lockdown, and boredom.  We hope to see you virtually at the summit, full of knowledge from your perusal of SNIA education content.

Experts Speak at Flash Memory Summit

2020 brought new developments in persistent memory and computational storage. SNIA Compute, Memory, and Storage Initiative was pleased to sponsor two tracks at the recent Flash Memory Summit where industry leaders captured the advances.  Videos and presentations are now available.

In the Persistent Memory Track, Dave Eggleston of Intuitive Cognition Consulting and Chris Petersen of Facebook combine to deliver a state of the union address for the industry effort underway to deliver persistent memory. They examine industry advances of persistent memory media, the new devices and form factors for persistent memory attachment, remote and direct-attached PM with low latency interfaces like CXL, and describe the best fit applications and use cases for persistent memory.

Jia Shi of Oracle and Yao Yue of Twitter then dive into a rapid-fire presentation on two examples of how persistent memory is changing the landscape – in appliances, in infrastructure, and in applications – from the perspective of a social networking company and a cloud and enterprise software provider.  They highlight the motivation for using persistent memory and the delivered results.

Finally, Ginger Gilsdorf of Intel and Tom Coughlin of Coughlin Associates look ahead to how Persistent Memory technology is evolving, including maximizing performance in next-generation applications, and provide their perspective on PM market growth projections.

The track concludes with the speakers reuniting in a panel to discuss the reasons that have kept persistent memory from gaining wider usage and to identify breakthroughs that are beginning to appear.

The Computational Storage Track opens with an update by Chuck Sobey of Channel Science who discusses the shifting of compute power to the storage; use cases including database, big data, AI/ML, and edge applications; and how the framework for computational storage is driven by SNIA and the NVM Express standards groups.

Stephen Bates of Eideticom follows with an outline of the state of the nation in computational storage standards. He then describes computational storage examples already in use that illustrate ways storage challenges are being met, and comments on promising directions to explore for the future.

Andy Walls of IBM then discusses using computational storage to handle big data, allowing data to reside close to processing power, thus allowing processing tasks to be in-line with data accesses. He covers computational storage examples already in use for application distribution and other promising directions to explore for the future.

Neil Werdmuller and Jason Molgaard of Arm discuss flexible computational storage solutions, and how data-driven applications that benefit from database searches, data manipulation, and machine learning can perform better and be more scalable if developers add computation directly to storage.

A lively panel with Arm, Eideticom, NGD Systems, and ScaleFlux rounds out the track, discussing keys to making computational storage work in your applications.  

Enjoy these presentations and contact us at askcmsi@snia.org with your questions and comments!

Compute Everywhere – Your Questions Answered!

Recently, the SNIA Compute, Memory, and Storage Initiative (CMSI) hosted a wide-ranging discussion on the “compute everywhere” continuum.  The panel featured Chipalo Street from Microsoft, Steve Adams from Intel, and Eli Tiomkin from NGD Systems representing both the start-up environment and the SNIA Computational Storage Special Interest Group. We appreciate the many questions asked during the webcast and are pleased to answer them in this Q&A blog. 


Take 10 – Watch a Computational Storage Trilogy

We’re all busy these days, and the thought of scheduling even more content to watch can be overwhelming.  Great technical content – especially from the SNIA Educational Library – delivers what you need to know, but often it needs to be consumed in long chunks. Perhaps it’s time to shorten the content so you have more freedom to watch.

With the tremendous interest in computational storage, SNIA is on the forefront of standards development – and education.  The SNIA Computational Storage Special Interest Group (CS SIG) has just produced a video trilogy – informative, packed with detail, and consumable in under 10 minutes!

What Is Computational Storage?, presented by Eli Tiomkin, SNIA CS SIG Chair, emphasizes the need for common language and definition of computational storage terms, and discusses four distinct examples of computational storage deployments.  It serves as a great introduction to the other videos.

Advantages of Reducing Data Movement frames computational storage advantages into two categories:  saving time and saving money. JB Baker, SNIA CS SIG member, dives into a data filtering computational storage service example and an analytics benchmark, explaining how tasks complete more quickly using less power and fewer CPU cycles.

Eli Tiomkin returns to complete the trilogy with Computational Storage:  Edge Compute Deployment. He discusses how an edge computing future might look, and how computational storage operates in a cloud, edge node, and edge device environment.

Each video in the Educational Library also has a downloadable PDF of the slides that links to additional resources you can view at your leisure.  The SNIA Compute, Memory, and Storage Initiative will be producing more of these short videos in the coming months on computational storage, persistent memory, and other topics.

Check out each video and download the PDF of the slides!  Happy watching!

Are We at the End of the 2.5-inch Disk Era?

The SNIA Solid State Storage Special Interest Group (SIG) recently updated the Solid State Drive Form Factor page to provide detailed information on dimensions; mechanical, electrical, and connector specifications; and protocols. On our August 4, 2020 SNIA webcast, we will take a detailed look at one of these form factors – Enterprise and Data Center SSD Form Factor (EDSFF) – challenging an expert panel to consider if we are at the end of the 2.5-in disk era.

Enterprise and Data Center SSD Form Factor (EDSFF) is designed natively for data center NVMe SSDs to improve thermal, power, performance, and capacity scaling. EDSFF has different variants for flexible and scalable performance, dense storage configurations, general purpose servers, and improved data center TCO.  At the 2020 Open Compute Virtual Summit, OEMs, cloud service providers, hyperscale data center operators, and SSD vendors showcased products and their vision for how this new family of SSD form factors solves real data challenges.


Your Questions Answered on Persistent Memory Programming

On April 14, the SNIA Compute, Memory, and Storage Initiative (CMSI) held a webcast asking the question – Do You Wanna Program Persistent Memory? We had some answers in the affirmative, answering the call of the NVDIMM Programming Challenge.

The Challenge utilizes a set of systems SNIA provides for the development of applications that take advantage of persistent memory. These systems support persistent memory types that can utilize the SNIA Persistent Memory Programming Model, and that are also supported by the Persistent Memory Development Kit (PMDK) Libraries. 

The NVDIMM Programming Challenge seeks innovative applications and tools that showcase the features persistent memory will enable. Submissions are judged by a panel of SNIA leaders and individual contest sponsors.  Judging is scheduled at the convenience of the submitter and judges, and done via conference call.  The program or results should be able to be visually demonstrated using remote access to a PM-enabled server.

NVDIMM Programming Challenge participant Steve Heller from Chrysalis Software joined the webcast to discuss the Three Misses Hash Table, which uses persistent memory to store large amounts of data that greatly increases the speed of data access for programs that use it.  During the webcast a small number of questions came up that this blog answers, and we’ve also provided answers to subjects stimulated by our conversation. 

Q: What are the rules/conditions to access SNIA PM hardware test system to get hands on experience? What kind of PM hardware is there? Windows/Linux?

A: Persistent memory, such as NVDIMM or Intel Optane memory, enables many new capabilities in server systems.  The speed of storage in the memory tier is one example, as is the ability to hold and recover data over system or application resets.  The programming challenge is seeking innovative applications and tools that showcase the features persistent memory will enable.

The specific systems for the different challenges will vary depending on the focus.  The current system is built using NVDIMM-N.  Users are given their own Linux container with simple examples in a web-based interface.  The users can also work directly in the Linux shell if they are comfortable with it.
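For readers who have not used the PMDK libraries mentioned above, here is a minimal sketch of the style of program such systems are set up for. It uses libpmem, one of the PMDK libraries; the path /mnt/pmem/example is an assumption about where a DAX-capable file system might be mounted, not a detail of the SNIA systems.

    /* Minimal libpmem example: map a file on persistent memory, store data
     * through ordinary pointers, and make it durable. */
    #include <libpmem.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        size_t mapped_len;
        int is_pmem;

        /* Memory-map a file backed by persistent memory; subsequent loads
         * and stores bypass the block I/O stack entirely. */
        char *addr = pmem_map_file("/mnt/pmem/example", 4096,
                                   PMEM_FILE_CREATE, 0666,
                                   &mapped_len, &is_pmem);
        if (addr == NULL) {
            perror("pmem_map_file");
            return 1;
        }

        strcpy(addr, "hello, persistent memory");

        /* Flush the stores so they survive a power failure. */
        if (is_pmem)
            pmem_persist(addr, mapped_len);
        else
            pmem_msync(addr, mapped_len);

        pmem_unmap(addr, mapped_len);
        return 0;
    }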

Q: During the presentation there was a discussion on why it was important to look for “corner cases” when developing programs using Persistent Memory instead of regular storage.  Would you elaborate on this?

A:  As you can see in the chart at the top of the blog post, persistent memory significantly reduces the amount of time needed to access a piece of stored data.  As a result, the time the program spends processing that data becomes much more important.  Programs accustomed to data retrieval taking a significant amount of time can occasionally absorb the “processing” performance hit that an extended data sort might imply; with persistent memory, that slack disappears.  Simply porting file system access patterns to persistent memory could therefore expose strange performance bottlenecks, and potentially introduce race conditions or strange bugs in the software.  The reward for fixing these issues is significant performance, as demonstrated in the webcast.
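One classic corner case is store ordering. CPU caches control when data actually reaches the persistent media, so data must be made durable before the flag that marks it valid; ordinary block I/O hides this from applications. A schematic sketch, again using libpmem, with hypothetical structure and function names:

    /* Hypothetical example of a persistence-ordering corner case. */
    #include <libpmem.h>
    #include <stddef.h>

    struct entry {
        char payload[56];
        int  valid;        /* readers treat the entry as live when nonzero */
    };

    /* Assumes len <= sizeof(e->payload). */
    void publish(struct entry *e, const char *data, size_t len)
    {
        /* Write the payload and make it durable first... */
        pmem_memcpy_persist(e->payload, data, len);

        /* ...then set and persist the valid flag. If these steps reached
         * the media in the opposite order and power failed in between, a
         * reader after restart could see a "valid" entry with a garbage
         * payload. */
        e->valid = 1;
        pmem_persist(&e->valid, sizeof(e->valid));
    }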

Q: Can you please comment on the scalability of your HashMap implementation, both on a single socket and across multiple sockets?

A: The implementation is single-threaded. Multi-threading adds a lot of overhead and opportunity for mistakes, and a single thread can easily saturate the performance that only persistent memory can provide, so there is likely no benefit to the hash table in going multi-threaded. It is not impossible – one could, for example, use a hash table per volume. In an earlier version I ran across multiple sockets, and it was slower, with an 8% to 10% variation in performance.  There are potential cache-pollution issues with going multi-threaded as well.

The existing implementation will scale from one to 15 billion records, and we would see the same scaling if we had enough storage. The implementation does not use much RAM if it does not cache the index; it uses only about 100 MB of RAM for test data.

Q: How would you compare your approach to having smarter compilers that are address aware of “preferred” addresses to exploit faster memories?

A: The Three Misses implementation invented three new storage management algorithms.  I don’t believe that compilers can invent new storage algorithms.  Compilers are much improved since their beginnings 50+ years ago when you could not mix integers and floating-point numbers, but they cannot figure out how to minimize accesses.  Smart compilers will probably not help solve this specific problem.

The SNIA CMSI is continuing its efforts on persistent memory programming.  If you’re interested in learning more about persistent memory programming, you can contact us at pmhackathon@snia.org to get updates on existing contests or programming workshops.  Additionally, SNIA would be willing to work with you to host physical or virtual programming workshops.

Please view the webcast and contact us with any questions.

Feedback Needed on New Persistent Memory Performance White Paper

A new SNIA Technical Work draft is now available for public review and comment – the SNIA Persistent Memory Performance Test Specification (PTS) White Paper.

A companion to the SNIA NVM Programming Model, the SNIA PM PTS White Paper (PM PTS WP) focuses on describing the relationship between traditional block IO NVMe SSD based storage and the migration to Persistent Memory block and byte addressable storage.  

The PM PTS WP reviews the history and need for storage performance benchmarking beginning with Hard Disk Drive corner case stress tests, the increasing gap between CPU/SW/HW Stack performance and storage performance, and the resulting need for faster storage tiers and storage products. 

The PM PTS WP discusses the introduction of NAND Flash SSD performance testing that incorporates pre-conditioning and steady state measurement (as described in the SNIA Solid State Storage PTS), the effects of – and need for testing using – Real World Workloads on Datacenter Storage (as described in the SNIA Real World Storage Workload PTS for Datacenter Storage), the development of the NVM Programming model, the introduction of PM storage and the need for a Persistent Memory PTS.

The PM PTS focuses on the characterization, optimization, and test of persistent memory storage architectures – including 3D XPoint, NVDIMM-N/P, DRAM, Phase Change Memory, MRAM, ReRAM, STRAM, and others – using both synthetic and real-world workloads. It includes test settings, metrics, methodologies, benchmarks, and reference options to provide reliable and repeatable test results. Future tests would use the framework established in the first tests.

The SNIA PM PTS White Paper targets storage professionals involved with: 

  1. Traditional NAND Flash based SSD storage over the PCIe bus;
  2. PM storage utilizing PM aware drivers that convert block IO access to loads and stores; and
  3. Direct In-memory storage and applications that take full advantage of the speed and persistence of PM storage and technologies. 

The PM PTS WP discussion on the differences between byte and block addressable storage is intended to help professionals optimize application and storage technologies and to help storage professionals understand the market and technical roadmap for PM storage.

Eden Kim, chair of the SNIA Solid State Storage TWG and a co-author, explained that SNIA is seeking comment from Cloud Infrastructure, IT, and Data Center professionals looking to balance server and application loads, integrate PM storage for in-memory applications, and understand how response time and latency spikes are being influenced by applications, storage and the SW/HW stack. 

The SNIA Solid State Storage Technical Work Group (TWG) has published several papers on performance testing and real-world workloads, and the SNIA PM PTS White Paper includes both synthetic and real world workload tests.  The authors are seeking comment on the PM PTS WP from industry professionals, researchers, academics, and other interested parties, and invite anyone interested to participate in the development of the PM PTS.

Use the SNIA Feedback Portal to submit your comments.