Computational Storage – Driving Success, Driving Standards Q&A

Our recent SNIA Compute, Memory, and Storage Initiative (CMSI) webcast, Computational Storage – Driving Success, Driving Standards, explained the key elements of the SNIA Computational Storage Architecture and Programming Model and the SNIA Computational Storage API. If you missed the live event, you can watch it on-demand and view the presentation slides. Our audience asked a number of questions, and Bill Martin, Editor of the Model, and Jason Molgaard, Co-Chair of the SNIA Computational Storage Technical Work Group, teamed up to answer them.

What’s being done in SNIA to implement data protection (e.g. RAID) and CSDs? Can data be written/striped to CSDs in such a way that it can be computed on within the drive?

Bill Martin:  The challenges of computation on a RAID system are outside the scope of the Computational Storage Architecture and Programming Model. The Model does not address data protection, in that it specifies neither how data is written nor how computation is done on the data.  Section 3 of the Model discusses the Computational Storage Array (CSA), a storage array that is able to execute one or more Computational Storage Functions (CSFs). As a storage array, a CSA contains control software, which provides virtualization to storage services, storage devices, and Computational Storage Resources for the purpose of aggregating, hiding complexity, or adding new capabilities to lower-level storage resources. The Computational Storage Resources in the CSA may be centrally located or distributed across CSDs/CSPs within the array.

When will Version 1.0 of the Computational Storage Architecture and Programming Model be available and when is operating system support expected?

Bill Martin:  We expect Version 1.0 of the model to be available Q2 2022.  The Model is agnostic with regard to operating systems, but we anticipate a publicly available API library for Computational Storage over NVMe.

Will the Computational Storage library support CXL accelerators as well? How is the collaboration between these two technology consortiums?

Jason Molgaard: The Computational Storage Architecture and Programming Model is agnostic to the device interface protocol.  Computational Storage can work with CXL. SNIA currently has an alliance agreement in place with the CXL Consortium and will interface with that group to help enable the CXL interface with Computational Storage.  We anticipate there will be technical work to develop a computational storage library utilizing the CS API that will support CXL in the future. 

System memory is required for PCIe/NVMe SSDs. How does computational storage bypass system memory?

Bill Martin: The computational storage architecture relies on computation using memory that is local to the Computational Storage Device (CSx). Section B.2.4 of the Model describes Function Data Memory (FDM) on the CSx and the movement of data from media to FDM and back. Note that a device does not need to access system memory for computation – only to read and write data. Figure B.2.8 from the Model illustrates CSx usage.
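
To make that flow concrete, here is a minimal host-side sketch. The cs_* names are hypothetical placeholders (stubbed here so the sketch compiles), not the actual SNIA CS API; the point is simply that data moves from media into FDM, is computed on in place, and only results land in host memory.

```c
/* Hypothetical sketch of the media -> FDM -> compute flow described above.
 * The cs_* names are illustrative stubs, not the real SNIA CS API. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

typedef int cs_fdm_t;  /* handle to device-local Function Data Memory */

static cs_fdm_t cs_fdm_alloc(size_t bytes) {
    printf("allocated %zu bytes of FDM on the CSx\n", bytes);
    return 1;
}
static void cs_media_to_fdm(uint64_t lba, size_t bytes, cs_fdm_t dst) {
    printf("moved %zu bytes from media (LBA %llu) into FDM %d\n",
           bytes, (unsigned long long)lba, dst);
}
static void cs_exec_csf(const char *csf, cs_fdm_t in, cs_fdm_t out) {
    printf("ran CSF \"%s\" on FDM %d -> FDM %d, device-local\n", csf, in, out);
}
static void cs_fdm_to_host(cs_fdm_t src, void *dst, size_t bytes) {
    (void)dst;
    printf("copied %zu result bytes from FDM %d to host memory\n", bytes, src);
}

int main(void) {
    cs_fdm_t in  = cs_fdm_alloc(1 << 20);
    cs_fdm_t out = cs_fdm_alloc(1 << 20);

    cs_media_to_fdm(0, 1 << 20, in);   /* media -> FDM: no host memory used */
    cs_exec_csf("filter", in, out);    /* computation happens on the CSx    */

    char result[4096];
    cs_fdm_to_host(out, result, sizeof(result)); /* only results cross the bus */
    return 0;
}
```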

Is this CS API Library vendor specific, or is this a generic library which could also be provided for example by an operating system vendor?

Bill Martin:  The Computational Storage API is not a library; it is a generic interface definition.  It describes the software application interface definitions for a Computational Storage Device (CSx). There will be a generic library for a given protocol layer, but there may also be vendor-specific additions to that generic library for vendor-specific CSx enhancements beyond the standard protocol definition.

Are there additional use cases out there? Where could I see them and get more information?

Jason Molgaard:  Section B.2.5 of the Computational Storage Architecture and Programming Model provides an example of application deployment.  The API specification will have a library that could be used and/or modified for a specific device. If the CSx does not support everything in NVMe, an individual could write a vendor specific library that supports some host activity.

There are a lot of acronyms and terms used in the discussion.  Is there a place where they are defined?

Jason Molgaard:  Besides the Model and the API, which provide the definitive definitions of the terms and acronyms, there are some great resources.  Recent presentations at the SNIA Storage Developer Conference on Computational Storage Moving Forward with an Architecture and API and on Computational Storage APIs provide a broad view of how the specifications affect the growing industry computational storage efforts. Additional videos and presentations are available in the SNIA Educational Library; search for “Computational Storage”.

Why Cryptocurrency and Computational Storage?

Our new SNIA Compute, Memory, and Storage webcast focuses on a hot topic – storage-based cryptocurrency.

Blockchains, cryptocurrency, and the internet of markets are working to transform finance, wealth, safety, digital security, and trust. Storage-based cryptocurrencies had a breakout year in 2021. Proof of Space and Time is a new blockchain consensus that uses storage capacity to secure the blockchain. Decentralized file storage will enable alternatives to hyperscale data centers for hosting files and objects. Understanding the TCO of a storage system and optimizing the utilization of the storage hardware is critical in scaling these systems.

Join our speakers, Jonmichael Hands of Chia Network and Eli Tiomkin of NGD Systems, for this discussion on how a new approach of auto-plotting SSDs combined with computational storage can lower TCO. Registration is free for this webcast on Tuesday, February 15 at 10:00 am Pacific time. Click on the link to register and see you there! https://www.brighttalk.com/webcast/663/526154

See You – Virtually – at SDC 2021

SNIA Storage Developer Conference goes virtual September 28-29, 2021, and compute, memory, and storage are important topics.  SNIA Compute, Memory, and Storage Initiative is a sponsor of SDC 2021 – so visit our booth for the latest information and a chance to chat with our experts.  With over 120 sessions available to watch live during the event and later on-demand, live Birds of a Feather chats, and a Persistent Memory Bootcamp and Hackathon accessing new systems in the cloud, we want to make sure you don’t miss anything!  Register here to see sessions live – or on demand on your schedule.

Agenda highlights include:

LIVE Birds of a Feather Sessions are OPEN to all – SDC registration not required. Here is your chance, via Zoom, to ask your questions of the SNIA experts.  Registration links will go live on September 28 and 29 at this page link.

Computational Storage Talks

A great video provides an overview of sessions. Watch it here.

  • Computational Storage APIs – how the SNIA Computational Storage TWG is leading the way with new interface definitions for Computational Storage APIs that work across different hardware architectures.
  • NVMe Computational Storage Update – Learn what is happening in NVMe to support Computational Storage devices, including a high-level architecture that is being defined in NVMe for Computational Storage. The architecture provides for programs based on standardized eBPF. (Check out our blog on eBPF.)

Persistent Memory Presentations

A great video provides an overview of sessions. Watch it here.

What’s New in Computational Storage? A Conversation with SNIA Leadership

The latest revisions of the SNIA Computational Storage Architecture and Programming Model Version 0.8 Revision 0 and the Computational Storage API v0.5 rev 0 are now live on the SNIA website. Interested to know what has been added to the specifications, SNIAOnStorage met “virtually” with Jason Molgaard, Co-Chair of the SNIA Computational Storage Technical Work Group, and Bill Martin, Co-Chair of the SNIA Technical Council and editor of the specifications, to get the details.

Both SNIA volunteer leaders stressed that they welcome ideas about the specifications and invite industry colleagues to join them in continuing to define computational storage standards.  The two documents are working documents – continually being refined and enhanced. If you are not a SNIA member, you can submit public comments via the SNIA Feedback Portal. To learn if your company is a SNIA member, check the SNIA membership list. If you are a SNIA member, go here to join the Computational Storage Technical Work Group member work area. The Computational Storage Technical Work Group chairs also welcome your emails.  Reach out to them at computationaltwg-chair@snia.org.

SNIAOnStorage (SOS):  What is the overall objective of the Computational Storage Architecture and Programming Model?

Jason Molgaard (JM): The overall objective of the document is to define recommended behavior for hardware and software that supports computational storage.  This is the second release of the Architecture and Programming Model, and it is very stable.  While the changes are dramatic, they stem primarily from feedback we received, both from the public and, to a larger extent, from new Technical Work Group members who have provided insight and perspective.

SOS: Could you summarize what has changed in the 0.8 version of the Model?

JM: Version 0.8 has four main takeaways:

  1. It renames the Computational Storage Processor.  The component within a Computational Storage Device (CSx) is now called a Computational Storage Engine (CSE).  The Computational Storage Processor (CSP) now only refers to a device that contains a Computational Storage Engine (CSE) and no storage.
  2. It defines a new architectural concept of a Computational Storage Engine Environment (CSEE).  This is something that is attached to a specific CSE and defines the environment that a Computational Storage Function (CSF) operates in.
  3. It defines a new architectural element of a Resource Repository that contains CSEEs that are available for activation on a CSE and also CSFs that are available for activation on a CSEE.
  4. Discovery and configuration flow are now documented in Version 0.8.

SOS: Why did the TWG decide to work on the release of a unique API document?

Bill Martin (BM): The overall objective of the Computational Storage API document is to define an interface between an application and a CSx. Version 0.5 is the first release to the public by the Technical Work Group. 

There are three key takeaways from version 0.5:

  1. The document defines an Application Programming Interface (API) to CSxs.
  2. The API allows a user application on a host to have a consistent interface to any vendor’s CSx.
  3. A vendor defines a library for their device that implements the API.  Mapping to the wire protocol for the device is done by this library.  Functions that are not available on a specific CSx may be implemented in software (see the sketch below).
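
As an illustration of takeaways 2 and 3, here is a small hedged sketch in C. The cs_* names and the discovery call are invented for this example – the real interface is defined in the API specification – but the call pattern is the one described above: application code stays the same for any vendor's CSx, and a function the device lacks falls back to host software.

```c
/* Illustrative sketch only: hypothetical names, not the SNIA CS API. */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    const char *name;   /* Computational Storage Function, e.g. "compress" */
    bool on_device;     /* does this CSx implement it in hardware?         */
} cs_function_info;

/* Stand-in for the vendor library's discovery call. */
static int cs_enumerate_csfs(cs_function_info *out, int max) {
    (void)max;
    out[0] = (cs_function_info){ "compress", true  };
    out[1] = (cs_function_info){ "sha256",   false };  /* absent on this CSx */
    return 2;
}

static void run_on_device(const char *fn) { printf("%s: offloaded to CSx\n", fn); }
static void run_on_host(const char *fn)   { printf("%s: host software fallback\n", fn); }

int main(void) {
    cs_function_info fns[8];
    int n = cs_enumerate_csfs(fns, 8);

    /* Application code is identical for every vendor; the library maps
     * each call to the device's wire protocol or to a software path. */
    for (int i = 0; i < n; i++) {
        if (fns[i].on_device) run_on_device(fns[i].name);
        else                  run_on_host(fns[i].name);
    }
    return 0;
}
```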

SOS: How can vendors use these documents?

BM: The Computational Storage Architecture and Programming Model is what I would categorize as a “descriptive” document.  There are no “shalls” or “shoulds” in the document. Rather, the Model is something implementers can use to view the elements they should be considering. It shows the components that are in the architecture, what they mean, and how they interact with each other. This allows users to understand the frameworks and options that can be implemented with a common language and understanding.

The API document, in contrast, is a “prescriptive” document.  It describes how to use the elements defined within the architectural document – how to do discovery and configuration, and how to utilize the architecture.   

These documents are meant to be used together. Some implementations may not use all of the elements of the architecture, but all of the elements are logically there.

JM: Individuals who are looking to implement computational storage – and who are developing their own computational storage devices – should absolutely review both documents and use them to provide feedback and questions. Many vendors are considering what their computational storage device should look like. This architecture framework provides good guidance and baseline nomenclature we can all use to speak the same language.

SOS: Are there any specific areas where you are looking for feedback?

BM:  In the API specification, we’d like feedback on whether or not the discovery covers everything implementers want to discover about a CSx. We’d like more depth and detail on what things people think they want to discover about a device.  We’d also like comments on items that need to be added on how you interact with the devices to execute a Computational Storage Function (CSF).

JM:  For the Model, we’d like feedback on whether people see the value of this descriptive document and are actually following it.  We’d like to know if there are additional ideas or areas of definition that users want to see when constructing architectures, or whether there are gaps in the defined activities.

SOS:  Where can folks find out more information about the Specifications?

BM: We invite everyone to attend the upcoming SNIA Storage Developer Conference (SDC). We will be virtual this year on September 28 and 29.  Registrants can view 12 presentations on computational storage, including a Computational Storage Update from the Working Group that the Co-Chairs of the Computational Storage TWG, Scott Shadley and Jason Molgaard, are presenting; one I will be giving on Computational Storage Moving Forward with an Architecture and API; and another by my Computational Storage TWG colleague Oscar Pinto on Computational Storage APIs.

And anyone with interest in computational storage can attend an open discussion during SDC on computational storage advances that will be featured in a Birds-of-a-Feather session via Zoom on September 29 at 4:00 pm Pacific. Go here to learn how to attend this SDC special event and all the Birds-of-a-Feather sessions.

SOS: Thanks to you both.  Our readers may want to know that SNIA’s work in computational storage is led by the 250+ volunteer vendor members of the Computational Storage Technical Work Group.  In addition to these two specifications, the TWG has also updated computational storage terms in the Online SNIA Dictionary.

The SNIA Computational Storage Special Interest Group accelerates the awareness of computational storage concepts and influences industry adoption and implementation of the technical specifications and programming models. Learn more at http://www.snia.org/cmsi

What is eBPF, and Why Does it Matter for Computational Storage?

Recently, a question came up in the SNIA Computational Storage Special Interest Group on new developments in a technology called eBPF and how they might relate to computational storage. To learn more, SNIA on Storage sat down with Eli Tiomkin, SNIA CS SIG Chair with NGD Systems; Matias Bjørling of Western Digital; Jim Harris of Intel; Dave Landsman of Western Digital; and Oscar Pinto of Samsung.

SNIA On Storage (SOS):  The eBPF.io website defines eBPF, extended Berkeley Packet Filter, as a revolutionary technology that can run sandboxed programs in the Linux kernel without changing kernel source code or loading kernel modules.  Why is it important?

Dave Landsman (DL): eBPF emerged in Linux as a way to do network filtering, and enables the Linux kernel to be programmed.  Intelligence and features can be added to existing layers, and there is no need to add additional layers of complexity.

SNIA On Storage (SOS):  What are the elements of eBPF that would be key to computational storage? 

Jim Harris (JH):  The key to eBPF is that it is architecturally agnostic; that is, applications can download programs into a kernel without having to modify the kernel.  Computational storage allows a user to do the same types of things – develop programs on a host and have the controller execute them without having to change the firmware on the controller.

Using a hardware-agnostic instruction set is preferable to having an application download x86 or ARM code based on what architecture is running.

DL:  It is much easier to establish a standard ecosystem with architecture independence.  Instead of an application needing to download x86 or ARM code based on the architecture, you can use a hardware agnostic instruction set where the kernel can interpret and then translate the instructions based on the processor. Computational storage would not need to know the processor running on an NVMe device with this “agnostic code”.

SOS: How has the use of eBPF evolved?

JH:  In the Linux kernel, eBPF began as a way to capture and filter network packets. It is more efficient to run programs directly in the kernel I/O stack rather than return packet data to the user, operate on it there, and then send the data back to the kernel.  Over time, eBPF use has evolved to additional use cases.
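
For readers who have not seen one, below is a minimal example of that original use case: an XDP packet filter written in restricted C and compiled to eBPF bytecode (for example with clang -O2 -target bpf). It drops UDP packets and passes everything else; the explicit bounds checks are what the kernel verifier demands.

```c
/* Minimal XDP packet filter: drop UDP, pass everything else. */
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <linux/in.h>
#include <bpf/bpf_endian.h>
#include <bpf/bpf_helpers.h>

SEC("xdp")
int drop_udp(struct xdp_md *ctx)
{
    void *data     = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;

    /* The verifier insists every access is proven in-bounds. */
    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end)
        return XDP_PASS;
    if (eth->h_proto != bpf_htons(ETH_P_IP))
        return XDP_PASS;

    struct iphdr *ip = (void *)(eth + 1);
    if ((void *)(ip + 1) > data_end)
        return XDP_PASS;

    return ip->protocol == IPPROTO_UDP ? XDP_DROP : XDP_PASS;
}

char LICENSE[] SEC("license") = "GPL";
```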

SOS:  What are some use case examples?

DL: One of the use cases is performance analysis. For example, eBPF can be used to measure things such as latency distributions for file system I/O, details of storage device I/O and TCP retransmits, and blocked stack traces and memory.

Matias Bjørling (MB): Other examples in the Linux kernel include tracing and gathering statistics.  However, while the eBPF programs in the kernel are fairly simple and can be verified by the Linux kernel VM, computational programs are more complex and longer running. Thus, there is a lot of ongoing work to explore how to efficiently apply eBPF to computational programs.

For example: what is the right set of run-time restrictions to be defined by the eBPF VM, what new instructions need to be defined, and how to make the program run as close as possible to the instruction set of the target hardware.

JH: One of the big use cases involves data analytics and filtering. A common data flow for data analytics involves large database table files that are often compressed and encrypted.  Without computational storage, you read the compressed and encrypted data blocks to the host, decompress and decrypt the blocks, and maybe do some filtering operations like a SQL query.  All this, however, consumes a lot of extra host PCIe, host memory, and cache bandwidth because you are reading the data blocks and doing all these operations on the host.  With computational storage, you can tell the SSD to read data and transfer it not to the host but to memory buffers within the SSD.  The host can then tell the controller to run a fixed-function program like decrypting the data and putting it in another local location on the SSD, and then run a user-supplied program like eBPF to do some filtering operations on that local decrypted data.  In the end you transfer only the filtered data to the host.  You are doing the compute closer to the storage, saving memory and bandwidth.
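
A hedged sketch of that flow is below, with invented ssd_* calls standing in for whatever command set a real device would expose (stubbed so the sketch compiles and prints the steps). The key property is that the buffers in steps 1-3 live in SSD-local memory; only step 4 consumes host PCIe and memory bandwidth.

```c
/* Sketch of the analytics flow above; ssd_* names are hypothetical stubs. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

typedef int ssd_buf_t;  /* handle to a buffer inside the SSD */

static ssd_buf_t ssd_read(uint64_t lba, size_t n) {
    printf("1. read %zu bytes at LBA %llu into an SSD buffer\n",
           n, (unsigned long long)lba);
    return 1;
}
static ssd_buf_t ssd_decrypt(ssd_buf_t in) {
    printf("2. fixed-function decrypt of buffer %d, SSD-local\n", in);
    return 2;
}
static ssd_buf_t ssd_run_ebpf(ssd_buf_t in, const char *prog) {
    printf("3. user-supplied eBPF filter \"%s\" on buffer %d, SSD-local\n",
           prog, in);
    return 3;
}
static void ssd_to_host(ssd_buf_t in, void *dst, size_t max) {
    (void)dst; (void)max;
    printf("4. DMA only the filtered rows of buffer %d to the host\n", in);
}

int main(void) {
    char rows[4096];
    ssd_buf_t raw      = ssd_read(0, 1 << 20);
    ssd_buf_t plain    = ssd_decrypt(raw);
    ssd_buf_t filtered = ssd_run_ebpf(plain, "sql_where.o");
    ssd_to_host(filtered, rows, sizeof(rows));
    return 0;
}
```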

SOS:  How does using eBPF for computational storage look the same?  How does it look different?

JH: There are two parts to this answer.  Part 1 is the eBPF instruction set with registers and how eBPF programs are assembled.  Where we are excited about computational storage and eBPF is that the instruction set is common. There are already existing tool chains that support eBPF.  You can take a C program and compile it into an eBPF object file, which is huge.  If you add computational storage aspects to standards like NVMe, where developing unique tool chain support can take a lot of work, you can now leverage what is already there for the eBPF ecosystem.

Part 2 of the answer centers around the Linux kernel’s restrictions on what an eBPF program is allowed to do when downloaded. For example, the eBPF instruction set allows for unbounded loops, and toolchains such as gcc will generate eBPF object code with unbounded loops, but the Linux kernel will not permit those to execute – it rejects the program. These restrictions are manageable when doing packet processing in the kernel.  The kernel knows a packet’s specific data structure and can verify that data is not being accessed outside the packet.  With computational storage, you may want to run an eBPF program that operates on a set of data that has a very complex data structure – perhaps unbounded arrays or multiple levels of indirection.  Applying Linux kernel verification rules to computational storage would limit or even prevent processing this type of data.
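
As a concrete illustration of that restriction, the eBPF-C fragment below shows a loop the verifier accepts; the unbounded variant appears only in a comment because the kernel would refuse to load it.

```c
/* The verifier accepts loops it can prove terminate, and rejects the rest. */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("xdp")
int bounded_loop(struct xdp_md *ctx)
{
    (void)ctx;
    __u32 sum = 0;

    /* Constant trip count: the verifier can prove termination. */
    for (int i = 0; i < 16; i++)
        sum += i;

    /* By contrast, a loop bounded only by runtime data, e.g.
     *     for (int i = 0; i < data_len; i++) { ... }
     * compiles fine with gcc/clang but is rejected at load time.
     * Fine for fixed packet layouts; limiting for the complex,
     * indirection-heavy data computational storage wants to process. */

    return sum == 120 ? XDP_PASS : XDP_DROP;
}

char LICENSE[] SEC("license") = "GPL";
```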

SOS: What are some of the other challenges you are working through with using eBPF for computational storage?

MB:  We know that x86 works fast with high memory bandwidth, while other cores are slower.  We have some general compute challenges in that eBPF needs to be able to hook into today’s hardware like we do for SSDs.  What kind of operations make sense to offload for these workloads?  How do we define a common implementation API for all of them and build an ecosystem on top of it?  Do we need an instruction-based compiler, or a library to compile up to – and if you have it on the NVMe drive side, could you use it?  eBPF in itself is great – but getting a whole ecosystem together and getting all of us to agree on what creates value will be the challenge in the long term.

Oscar Pinto (OP): The Linux kernel for eBPF today is geared more towards networking in its functionality but light on storage. That may be a challenge in building a computational storage framework. We need to think through how to enhance this given that we download and execute eBPF programs in the device. As Matias indicated, x86 is great at what it does in the host today. But if we have to work with smaller CPUs in the device, they may need help, say from dedicated hardware or similar logic, to aid the eBPF programs. One question is how these programs would talk to that hardware.  We don’t have a setup for storage like this today, and there are a variety of storage services that can benefit from eBPF.

SOS: Is SNIA addressing this challenge?

OP: On the SNIA side we are building on program functions that are downloaded to computational storage engines.  These functions run on the engines, which are CPUs or some other form of compute tied to an FPGA, DPU, or dedicated hardware. We are defining these abstracted functionalities in SNIA today, and the SNIA Computational Storage Technical Work Group is developing a Computational Storage Architecture and Programming Model and Computational Storage APIs to address it.  The latest versions, v0.8 and v0.5, have been approved by the SNIA Technical Council and are now available for public review and comment at the SNIA Feedback Portal.

SOS: Is there an eBPF standard? Is it aligned with storage?

JH:  We have a challenge around what an eBPF standard should look like.  Today it is defined in the Linux kernel.  But if you want to incorporate eBPF in a storage standard you need to have something specified for that storage standard.  We know the Linux kernel will continue to evolve, adding and modifying instructions. But if you have an NVMe SSD or other storage device, you have to have something set in stone – the version of eBPF that the standard supports.  We need to know what the eBPF standard will look like and where it will live.  Will standards organizations need to define something separately?

SOS:  What would you like an eBPF standard to look like from a storage perspective?

JH: We’d like an eBPF standard that can be used by everyone.  We are looking at how computational storage can be implemented in a way that is safe and secure but is also able to solve use cases that differ.

MB:  Security will be a key part of an eBPF standard.  Programs should not access data they should not have access to.  This will need to be solved within a storage device. There are some synergies with external key management. 

DL: The storage community has to figure out how to work with eBPF and make this standard something that a storage environment can take advantage of and rely on.

SOS: Where do you see the future of eBPF?

MB:  The vision is that you can build eBPF programs and they work everywhere.  When we build new database systems and integrate eBPF programs into them, we then have embedded kernels that can be sent to any NVMe device over the wire and be executed.  The cool part is that it can be anywhere on the path, so there are a lot of interesting ways to build new architectures on top of this. And together with the open system ecosystem we can create a body of accelerators with which we can then fast-track the build of these ecosystems.  eBPF can put this into overdrive with use cases outside the kernel.

DL:  There may be some other environments where computational storage is being evaluated, such as WebAssembly.

JH: An eBPF runtime is much easier to put into an SSD than a WebAssembly runtime.

MB: eBPF makes more sense – it is simpler to start and build upon as it is not set in stone for one particular use case.

Eli Tiomkin (ET):  Different SSDs have different levels of constraints.  Every computational storage SSD in production, and even those in development, has unique capabilities that depend on the workload and application.

SOS:  Any final thoughts?

MB: At this point, technologies are coming together which are going to change the industry in a way that we can redesign the storage systems both with computational storage and how we manage security in NVMe devices for these programs.  We have the perfect storm pulling things together. Exciting platforms can be built using open standards specifications not previously available.

SOS:  Looking forward to this exciting future. Thanks to you all.

Q&A on Data Movement and Computational Storage

Recently, the SNIA Compute, Memory, and Storage Initiative hosted a live webcast “Data Movement and Computational Storage”, moderated by Jim Fister of The Decision Place with Nidish Kamath of KIOXIA, David McIntyre of Samsung, and Eli Tiomkin of NGD Systems as panelists.  We had a great discussion on new ways to look at storage, flexible computer systems, and how to put on your security hat.

During our conversation, we answered audience questions, and raised a few of our own!  Check out some of the back-and-forth, and tune in to the entire video for customer use cases and thoughts for the future.

Q:  What is the value of computational storage?

A:  Computational storage addresses latency sensitivity – you can make decisions faster at the edge and can also distribute computing to process decisions anywhere.

Q:  Why is it important to consider “data movement” with regard to computational storage?

A:  Computational storage brings a reduction in data movement to the system, along with higher efficiencies while moving that data and a reduction in power that users may not yet have considered.

Q: How does power use change when computational storage is brought in?

A:  You want to “move” compute to that point in the system where operations can be accomplished on data “at rest”. In traditional systems, if you need to move data from storage to the host, there are power costs that may not even be currently measured.  However, if you can run applications without moving data, you will realize that power reduction, which is more and more important with the anticipation of massive quantities of data coming in the future.

Q: Are the traditional processing/storage transistor counts the same with computational storage?

A:  With computational storage, you can put the programming where it is needed – moving the compute to that point in the system where it can achieve the work with a limited amount of overhead and networking bandwidth. Compute moves to where the data sits at rest, which is especially important with the explosion of data sets.

Q:  Does computational storage play a role in data security and privacy?

A: Security threats don’t always happen at the same time, so you need to consider a top-down holistic perspective. It will be important both today and in the future to consider new security threats because of data movement.

There is always a risk to security when data is moving; however, computational storage reduces data movement significantly and can serve as a more secure way to treat data because the data is not moving as much. Computational storage allows you to lock the data – for example, medical data – and only process it when needed, and if needed, in an authenticated and secure fashion.  There’s no requirement to build a whole system around this.

Q:  What are the computational storage opportunities at the edge? 

A:  We need to understand the ecosystem the computational storage device is going into. Computational storage sits at the front line of edge applications and management of edge infrastructure pieces in the cloud.  It’s a great time to embrace existing cloud policies and collaborate with customers on how policies will migrate and change to the edge.

Q: In your discussions with customers, how dynamic do they expect the sets of code running on computational storage to be? With the extremes being code never changing (installed once/updated rarely) to being different for every query or operation. Please discuss how challenges differ for these approaches.

A:  The heavy lift comes into play with the application and the system integration.  To run flexible code, customers want a simple, straightforward, and seamless programming model that enables them to run as many applications as they need and change them in an easy way without disrupting the system.  Clients are using computational storage to speed up the processing of their data with dynamic reconfiguring in cutting edge applications.  We are putting a lot of effort toward this seamless and transparent model with our work in the SNIA Computational Storage Technical Work Group.

Q:  What does computational storage mean for data in the future?

A: The infrastructure of data and data movement will drastically change in the future as edge emerges and cloud continues to grow. Using computational storage will be extremely beneficial in the new infrastructure, and we will need to work together as an ecosystem and under SNIA to make sure we are all aligned to provide the right solutions to the customer.  

Dive – or Dip – into SNIA Persistent Memory + Computational Storage Summit Content

SNIA’s 9th annual Summit was a success with a new name and an expanded focus – Persistent Memory + Computational Storage – from the data center to the edge.  

The Summit moved to a two-day virtual platform and drew twice as many attendees as the previous year. We experimented with 20-minute sessions to great success.  Attendees saw leading technology experts discussing real world applications and use cases, providing insights on technology trends and futures, and networking in “live via the internet” panels and Birds-of-a-Feather sessions.

The recap of our 2021 event – agenda, abstracts, speaker bios, and links to videos and presentations – is summarized on the PM+CS Summit home page.

But we know your time is precious – so here are a few ways to sample a lot of great content presented over two full days.

  1. Read our colleague Tom Coughlin’s Forbes blog on the event.
  2. Not only did Tom and Jim Handy present on memory futures at the event, but they also provided the fastest sub-7-minute recaps of both Wednesday’s and Thursday’s sessions with their lively commentary.
  3. New to persistent memory and/or computational storage technologies?  Check out our tutorials featuring Persistent Memory and Computational Storage Special Interest Group leaders giving you what you need to know.
  4. Love the back and forth?  You’ll enjoy the recordings of our live panel sessions where colleagues debate (and sometimes agree) on the topics of today.
  5. Is Persistent Memory your focus?  We’ve sorted the Persistent Memory Summit content for you in our SNIA Educational Library.
  6. A Computational Storage man or woman?  Here is the list of all the Computational Storage content during the Summit to watch via our SNIA Educational Library.
  7. Want to get hands-on?  We have extended the opportunity to experience the Persistent Memory Workshop and Hackathon with access to new cloud-based PM systems for more learning opportunities.

We extend a thank you and shout-out to our SNIA Compute, Memory, and Storage Initiative members and colleagues who presented in sessions and participated in panels. They represent these leading companies in the industry.

AMD, Arm, Coughlin Associates, Dell, Eideticom, Facebook, Futurewei Technologies, G2M Communications, Hewlett Packard Enterprise, Intuitive Cognition Consulting, Intel, Lenovo, Los Alamos National Laboratory, MemVerge, Micron, Microsoft, MKW Ventures Consulting, NGD Systems, NVIDIA, Objective Analysis, Samsung, ScaleFlux, Silinnov Consulting, and SMART Modular Technologies.

We thank our Summit sponsors: Eideticom, MemVerge, Futurewei Technologies, SMART Modular Technologies, and NGD Systems; and the SNIA Compute Memory and Storage Initiative members who underwrote the event.

Finally, we thank you for your interest in SNIA Compute, Memory, and Storage Initiative outreach and education.  We look forward to seeing you at upcoming SNIA events, including our Storage Developer Conferences in EMEA, India, and the U.S.  Find out more details on SDC.

Continuing to Refine and Define Computational Storage

The SNIA Computational Storage Technical Work Group (TWG) has been hard at work on the SNIA Technical Document Computational Storage Architecture and Programming Model.  SNIAOnStorage recently sat down via Zoom with the document editor Bill Martin of Samsung and TWG Co-Chairs Jason Molgaard of Arm and Scott Shadley of NGD Systems to understand the work included in the Model and why definitions of computational storage are so important.

SNIAOnStorage (SOS): Shall we start with the fundamentals?  Just what is the Computational Storage Architecture and Programming Model?

Scott Shadley (SS):  The SNIA Computational Storage Architecture and Programming Model (Model) introduces the framework of how to use a new tool to architect your infrastructure by deploying compute resources in the storage layer.

Bill Martin (BM): The Model enables architecture and programming of computational storage devices. These kinds of devices include those with storage physically attached, and also those with storage not physically attached but considered computational because the devices are associated with storage.

SOS: How did the TWG approach creating the Model and what does it cover?

SS:  SNIA is known for bringing standardization to customized operations; and with the Model, users now have a common way to identify the different solutions offered in computational storage devices and a standard way to discover and interact with these devices. Like the way NVMe brought common interaction to the wild west of PCIe, the SNIA Model ensures the many computational storage products already on the market can align to interact in a common way, minimizing the need for unique programming to use solutions most effectively.  

Jason Molgaard (JM):  The Model covers both the hardware architecture and software application programming interface (API) for developing and interacting with computational storage.

BM:  The architecture sections of the Model cover the components that make up computational storage and the API provides a programming interface to those components.

SOS:  I know the TWG members have had many discussions to develop standard terms for computational storage.  Can you share some of these definitions and why it was important to come to consensus?

BM:  The model defines Computational Storage Devices (CSxs) which are composed of Computational Storage Processors (CSPs), Computational Storage Drives (CSDs), and Computational Storage Arrays (CSAs). 

Each Computational Storage Device contains a Computational Storage Engine (CSE) and some form of host accessible memory for that engine to utilize. 

The Computational Storage Processor is a device that has a Computational Storage Engine but does not contain storage. The Computational Storage Drive contains a Computational Storage Engine and storage.  And the Computational Storage Array contains an array with an array processor and a Computational Storage Engine.

Finally, the Computational Storage Engine executes Computational Storage Functions (CSFs) which are the entities that define the particular computation.  

All of the computational storage terms can be found online in the SNIA Dictionary. 
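
As a reading aid only – these declarations are illustrative, not part of the specifications – the hierarchy Bill describes can be summarized in a few lines of C:

```c
/* Reading aid only: an informal model of the SNIA terms, not the spec. */
typedef struct { const char *name; } CSF;            /* e.g. "compress"   */
typedef struct { CSF *functions; int nfuncs; } CSE;  /* executes the CSFs */

typedef enum { CSX_CSP, CSX_CSD, CSX_CSA } csx_kind;

typedef struct {
    csx_kind kind;       /* CSP: engine, no storage
                          * CSD: engine + storage
                          * CSA: engine + array processor + storage array */
    CSE   engine;        /* every CSx contains a Computational Storage Engine */
    void *device_memory; /* host-accessible memory the engine utilizes       */
} CSx;
```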

SS: An architecture and programming model is necessary to allow vendor-neutral, interoperable implementations of this industry architecture, and clear, accurate definitions help to define how the computational storage hierarchy works.  The TWG spent many hours defining these standard nomenclatures to be used by providers of computational storage products.

JM: It has been a work in progress over the last 18 months, and the perspectives of all the different TWG member companies have brought more clarity to the terms and refined them to better meet the needs of the ecosystem.

BM: One example has been the change from what was called computational storage services to the more accurate and descriptive Computational Storage Functions.  The Model defines a list of potential functions such as compression/decompression, encoding/decoding, and database search.  These and many more are described in the document.

SOS: Is SNIA working with the industry on computational storage standards?

BM: SNIA has an alliance with the NVM Express® organization, and they are working on computational storage. As other organizations (e.g., the CXL Consortium) develop computational storage for their interface, SNIA will pursue alliances with those organizations.  You can find details on SNIA alliances here.

SS:  SNIA is also monitoring other Technical Work Group activity inside SNIA such as the Smart Data Accelerator Interface (SDXI) TWG working on the memory layer and efforts around Security, which is a key topic being investigated now.

SOS:  Is a new release of the Computational Storage Architecture and Programming Model pending?

BM:  Stay tuned: the next release of the Model – v0.6 – is coming very soon.  It will contain updates and an expansion of the architecture.

JM: We have also been working on an API document, which will be released at the same time as the v0.6 release of the Model.

SOS:  Who will write the software based on the Computational Storage Architecture and Programming Model?

JM:  Computational Storage TWG members will develop open-source software aligned with the API, and application programmers will use those libraries.

SOS: How can the public find out about the next release of the Model?

SS: We will announce it via our SNIA Matters newsletter. Version 0.6 of the Model as well as the API will be up for public review and comment at this link.  And we encourage companies interested in the future of computational storage to join SNIA and contribute to the further development of the Model and the API.  You can reach us with your questions and comments at askcmsi@snia.org.

SOS:  Where can our readers learn more about computational storage?

SS:  Eli Tiomkin, Chair of the SNIA Computational Storage Special Interest Group (CS SIG), Jason, and I sat down to discuss the future of computational storage in January 2021.  The CS SIG also has a series of videos that provide a great way to get up to speed on computational storage.  You can find them and “Geek Out on Computational Storage” here.

SOS:  Thanks for the update, and we’ll look forward to a future SNIA webcast on your computational storage work.

Computational Storage in the Real World

Computational storage has arrived, with real world applications meeting the goal of enabling parallel computation and/or alleviating constraints on existing compute, memory, storage, and I/O.  The SNIA Computational Storage Special Interest Group has gathered examples of computational storage use cases which demonstrate improvements in application performance and infrastructure efficiency through the integration of compute resources either directly with storage or between the host and the storage. First up in the SNIA Computational Storage Demo Series are our SIG member companies Eideticom Communications and NGD Systems. Their examples demonstrate proof of computational storage concepts.  They also illustrate how SNIA and the Compute Memory and Storage Initiative (CMSI) member companies are advancing the SNIA Computational Storage Architecture and Programming Model, which defines recommended behavior for hardware and software that supports computational storage.

The NGD Systems use case highlights a Microsoft Azure IoT System running on a computational storage device. The video walks through the steps to establish connections to agents and hubs, and shows how to establish event monitors and do image analysis to store images from the web into a computational storage device.

The Eideticom Communications use case highlights transparent compression via a stacked file system and an NVMe-based computational storage processor. The video walks through the steps to mount a NoLoad file system and run sysadmin commands to read/write files to disk with compression, illustrating speed and application transparency.

We invite you to visit our Computational Storage Use Cases page for these and more examples of real world computational storage applications. Questions? Send them to askcmsi@snia.org.

Experts Speak at Flash Memory Summit

2020 brought new developments in persistent memory and computational storage. SNIA Compute, Memory, and Storage Initiative was pleased to sponsor two tracks at the recent Flash Memory Summit where industry leaders captured the advances.  Videos and presentations are now available.

In the Persistent Memory Track, Dave Eggleston of Intuitive Cognition Consulting and Chris Petersen of Facebook combine to deliver a state of the union address for the industry effort underway to deliver persistent memory. They examine industry advances of persistent memory media, the new devices and form factors for persistent memory attachment, remote and direct-attached PM with low latency interfaces like CXL, and describe the best fit applications and use cases for persistent memory.

Jia Shi of Oracle and Yao Yue of Twitter then dive into a rapid-fire presentation on two examples of how persistent memory is changing the landscape – in appliances, in infrastructure, and in applications – from the perspective of a social networking company and a cloud and enterprise software provider.  They highlight the motivation for using persistent memory and the delivered results.

Finally, Ginger Gilsdorf of Intel and Tom Coughlin of Coughlin Associates look ahead to how Persistent Memory technology is evolving, including maximizing performance in next-generation applications, and provide their perspective on PM market growth projections.

The track concludes with speakers reuniting in a panel to discuss the reasons that have stopped persistent memory from gaining wider usage and identifying breakthroughs that are beginning to appear.

The Computational Storage Track opens with an update by Chuck Sobey of Channel Science who discusses the shifting of compute power to the storage; use cases including database, big data, AI/ML, and edge applications; and how the framework for computational storage is driven by SNIA and the NVM Express standards groups.

Stephen Bates of Eideticom follows with an outline of the state of the nation in computational storage standards. He then describes computational storage examples already in use that illustrate ways storage challenges are being met, and comments on promising directions to explore for the future.

Andy Walls of IBM then discusses using computational storage to handle big data, allowing data to reside close to processing power, thus allowing processing tasks to be in-line with data accesses. He covers computational storage examples already in use for application distribution and other promising directions to explore for the future.

Neil Werdmuller and Jason Molgaard of Arm discuss flexible computational storage solutions, and how data-driven applications that benefit from database searches, data manipulation, and machine learning can perform better and be more scalable if developers add computation directly to storage.

A lively panel with Arm, Eideticom, NGD Systems, and ScaleFlux rounds out the track, discussing keys to making computational storage work in your applications.  

Enjoy these presentations and contact us at askcmsi@snia.org with your questions and comments!