Reaching a Computational Storage Milestone

Version 1.0 of the SNIA Computational Storage Architecture and Programming Model has just been released to the public at www.snia.org/csarch. The Model has already received industry accolades, winning the Best of Show Award for Most Innovative Memory Technology at the recent Flash Memory Summit 2022. Congratulations to all the companies and individuals who contributed large amounts of expertise and time to the creation of this Model.

SNIAOnStorage sat down with SNIA Computational Storage Technical Work Group (CS TWG) co-chairs Jason Molgaard and Scott Shadley; SNIA Computational Storage Architecture and Programming Model editor Bill Martin; and SNIA Computational Storage Special Interest Group chair David McIntyre to get their perspectives on this milestone release and next steps for SNIA.

SNIAOnStorage (SOS): What is the significance of a 1.0 release for this computational storage SNIA specification?

Bill Martin (BM): The 1.0 designation indicates that the SNIA membership has voted to approve the SNIA Computational Storage Architecture and Programming Model as an official SNIA specification. This means our membership believes the architecture is something you can develop computational storage-related products against, so that products from multiple vendors will have similar, complementary architectures and an industry-standardized programming model.

Jason Molgaard (JM): The 1.0 release also indicates a level of maturity where companies can implement computational storage that reflects the elements of the Model. The SNIA CS TWG took products into account when defining the Model’s reference architecture. The Model is for everyone – even those who were not among the 52 participating companies and 258 member representatives in the TWG. It is concrete, and they can begin development today.

SNIA Computational Storage Technical Work Group Company Members

SOS: What do you think is the most important feature of the 1.0 release?

Scott Shadley (SS): Because we have reached the 1.0 release, there is no one specific area that makes one feature more important than another. The primary difference between the last release and 1.0 was addressing the Security section. As we know, there are many new security discussions happening, and we want to ensure our architecture doesn’t break anything or create new security needs. Overall, all aspects are key and relevant.

JM: I agree. The entire Model is applicable to product development and is a comprehensive and inclusive specification. I cannot point to a single section that is subordinate to the other sections in the Model.

David McIntyre (DM): It’s an interesting time for these three domains – compute, storage, and networking – which are beginning to merge and support each other. The 1.0 Model provides a nice baseline of definitions – before this there were none, but now we have Computational Storage Devices (CSxes) – Computational Storage Processors (CSPs), Computational Storage Drives (CSDs), and Computational Storage Arrays (CSAs) – and more, and companies can better define what a CSP is and how it connects to associated storage. Definitions help to educate and ground the ecosystem and the engineering community, and show how to characterize vendor solutions into these categories.

BM:  I would say that the four most important parts of the 1.0 Model are:  1) it defines terminology that can be used across different protocols; 2) it defines a discovery process flow for those architectures; 3) it defines security considerations for those architectures; and 4) it gives users some examples that can be used for those architectures.

SOS:  Who do you see as the audience/user for the Model?  What should these constituencies do with the Model? 

JM: The Model is useful both for hardware developers who are building their own computational storage systems and for software architects, programmers, and other users who need to be educated on the definitions and the common framework the architecture describes for computational storage. This puts everyone on the same playing field. The intent is for everyone to have the same level of understanding and to carry on conversations with internal and external developers working on related projects – now they can speak on the same plane. Our wish is for folks to adhere to the Model and follow it in their product development.

DM: Having an industry-developed reference architecture that hardware and application developers can refer to is an important attribute of the 1.0 specification, especially as we get into cloud-to-edge deployments, where standardization has not come as early. Putting compute where data is generated at the edge gives us the opportunity to provide the normalization and standardization that application developers can refer to when contributing computational storage solutions to the edge ecosystem.

SS: Version 1.0 is designed to be used by customers as a full reference document. It is an opportunity to highlight that vendors and solution providers are approaching computational storage in a directed and unified way. Customers with a multi-sourcing strategy see this as something that resonates well and drives involvement with the technology.

SOS: Are there other activities within SNIA going along with the release of the Model?

BM:  The CS TWG is actively developing a Computational Storage API that will utilize the Model and provide an application programming interface for which vendors can provide a library that maps to their particular protocol, which would include the NVMe® protocol layer.

JM:  The TWG is also collaborating with the SNIA Smart Data Accelerator Interface (SDXI) Technical Work Group on how SDXI and computational storage can potentially be combined in the future.

There is a good opportunity for security to continue to be a focus of discussion in the TWG – examining the threat matrix as the Model evolves to ensure that we are not recreating or discarding what is already out there, and that we use existing solutions.

DM:  From a security standpoint the Model and the API go hand in hand as critical components far beyond the device level.  It is very important to evolve where we are today from device to solution level capabilities.  Having this group of specifications is very important to contribute to the overall ecosystem.

SOS:  Are there any industry activities going along with the release of version 1.0 of the Model?

BM: NVM Express® is continuing its development effort on computational storage programs and Subsystem Local Memory that will provide a mechanism to implement the SNIA Architecture and Programming Model.

JM: Compute Express Link™ (CXL™) is a logical progression for computational storage from an interface perspective.  As time moves forward, we look for much work to be done in that area.

SS: We know from Flash Memory Summit 2022 that CXL is a next-generation transport planned for both storage and memory devices. CXL focuses on memory today and the high-speed transport expected there; it is basically the transport beyond NVMe. One key feature of the SNIA Architecture and Programming Model is that it can apply to CXL, Ethernet, or other transports, as it does not dictate the transport layer used to talk to Computational Storage Devices (CSxes).

DM:  Standards bodies have been siloed in the past. New opportunities of interfaces and protocols that work together harmoniously will better enable alliances to form.  Grouping of standards that work together will better support application requirements from cloud to edge.

SOS:  Any final thoughts?

BM: You may ask “Will there be a next generation of the Model?” Yes, we are currently working on the next generation with security enhancements and any other comments we get from public utilization of the Model. Comments can be sent to the SNIA Feedback Portal.

DM: We also welcome input from other industry organizations and their implementations.

BM: For example, if there are implications to the Model from work done by CXL, they could give input and the TWG would work with CXL to integrate necessary enhancements.

JM: CXL could develop new formats specific to Computational Storage.  Any new commands could still align with the model since the model is transport agnostic. 

SOS: Thanks for your time in discussing the Model.  Congratulations on the 1.0 release! And for our readers, check out these links for more information on computational storage:

Computational Storage Playlist on the SNIA Video Channel

Computational Storage in the SNIA Educational Library

SNIA Technology Focus Area – Computational Storage

Summit Success – and A Preview of What’s To Come

Last month’s SNIA Persistent Memory and Computational Storage Summit (PM+CS Summit) put on a great show with 35 technology presentations from 41 speakers. Every presentation is now available online with a video and PDF found at www.snia.org/pm-summit.

Recently, SNIA On Storage sat down with David McIntyre, Summit Chair from Samsung, on his impressions of this 10th annual event.

SNIA On Storage (SOS): What were your thoughts on key topics coming into the Summit and did they change based on the presentations?

David McIntyre (DM): We were excited to attract technology leaders to speak on the state of computational storage and persistent memory.  Both mainstage and breakout speakers did a good job of encapsulating and summarizing what is happening today.  Through the different talks, we learned more about infrastructure deployments supporting underlying applications and use cases. A new area where attendees gained insight was computational memory. 

I find it encouraging that as an industry we are moving forward on focusing on applications and use cases, and supporting software and infrastructure that resides across persistent memory and computational storage.  And with computational memory, we are now getting more into the system infrastructure concerns and making these technologies more accessible to application developers.

SOS: Any sessions you want to recommend to viewers?

DM: We had great feedback on our speakers during the live event. Several sessions I might recommend are Gary Grider of Los Alamos National Labs (LANL), who explained how computational storage is being deployed across his lab; Chris Petersen of Meta, who took an infrastructure view on considerations for persistent memory and computational storage; and Andy Walls of IBM, who presented his vision of computational storage and the underlying benefits that make the overall infrastructure richer and more efficient, and how to bring compute to the drives. For a summary, watch Dave Eggleston of In-Cog Computing, who led the Tuesday and Wednesday panels with the mainstage speakers that provided a wide-ranging discussion of the Summit’s key topics.

SOS: What do you see as the top takeaways from the Summit presenters?

DM: I see three:

  1. Infrastructure, applications, and use cases were paramount themes across a number of presentations
  2. Tighter coupling of technologies.  Cheolmin Park of Samsung, in his CXL and UCIe presentation, discussed how we already have point technologies that now need to interact together.  There is also the Persistent Memory/SSD/DRAM combination – a tiered memory configuration talked about for years.  We are seeing deployment use cases where the glue is interfacing the I/O technology with CXL and UCIe.
  3. Another takeaway strongly related to the above is heterogeneous operations and compute.  Compute can’t reside in one central location for efficiency.  Rather, it must be distributed – addressing real-time analytics and decision making to support applications.

SOS: What upcoming activities should Summit viewers plan to attend and why?

DM: Put Flash Memory Summit, August 1-4, 2022 on your calendars.  Here SNIA will go deeper into areas we explored at the Summit.

First, join SNIA Compute, Memory, and Storage Initiative (CMSI), underwriter of the PM+CS Summit, as we meet in person for the first time in a long time at the SNIA Reception on Monday evening August 1 at the Santa Clara Convention Center from 5:30 pm – 7:00 pm. Along with our SNIA teammates from the SNIA Storage Management Initiative, network with colleagues and share an appetizer or two as we gear up for three full days of activities. 

At the Summit, the SNIA-sponsored System Architectures Track will feature a day on persistent memory, a day on CXL, and a day on computational storage.  SNIA will also lead sessions on form factors, Ethernet SSDs, sustainability, and DNA data storage.  I am Track Manager of the Artificial Intelligence Applications Track, where we will see how technologies like computational storage and AI work hand in hand.

SNIA will have a Demonstration Pavilion at booth 725 in the FMS Exhibit Hall with live demonstrations of computational storage applications, persistent memory implementations, and scalable storage management with SNIA Alliance Partners; hands-on form factor displays; a Persistent Memory Programming Workshop and Hackathon; and theater presentations on standards. Full details are at www.flashmemorysummit.com.

In September, CMSI will be at the SNIA Storage Developer Conference where we will celebrate SNIA’s 25th anniversary and gather in person for sessions, demonstrations, and those ever popular Birds-of-a-Feather sessions.  Find the latest details at www.storagedeveloper.org.

SOS: Any final thoughts?

DM: On behalf of SNIA CMSI and the PM+CS Summit Planning Team, I’d like to thank all those who planned and attended our great event.  We are progressing in the right direction, beginning to talk the same language that application developers and solution providers understand.  We’ll keep building our strategic collaboration across different worlds at FMS and SDC.  I appreciate the challenges and the opportunity to work together.

What’s New in Computational Storage? A Conversation with SNIA Leadership

The latest revisions of the SNIA Computational Storage Architecture and Programming Model Version 0.8 Revision 0 and the Computational Storage API v0.5 rev 0 are now live on the SNIA website. Interested to know what has been added to the specifications, SNIAOnStorage met “virtually” with Jason Molgaard, Co-Chair of the SNIA Computational Storage Technical Work Group, and Bill Martin, Co-Chair of the SNIA Technical Council and editor of the specifications, to get the details.

Both SNIA volunteer leaders stressed that they welcome ideas about the specifications and invite industry colleagues to join them in continuing to define computational storage standards.  The two documents are working documents – continually being refined and enhanced. If you are not a SNIA member, you can submit public comments via the SNIA Feedback Portal. To learn if your company is a SNIA member, check the SNIA membership list. If you are a SNIA member,  go here to join the Computational Storage Technical Work Group member work area. The Computational Storage Technical Work Group chairs also welcome your emails.  Reach out to them at computationaltwg-chair@snia.org.

SNIAOnStorage (SOS):  What is the overall objective of the Computational Storage Architecture and Programming Model?

Jason Molgaard (JM): The overall objective of the document is to define recommended behavior for hardware and software that supports computational storage.  This is the second release of the Architecture and Programming Model, and it is very stable.  While the changes are significant, they are primarily the result of feedback we received both from the public and, to a larger extent, from new Technical Work Group members who have provided insight and perspective.

SOS: Could you summarize what has changed in the 0.8 version of the Model?

JM: Version 0.8 has four main takeaways (a hypothetical sketch of how these elements fit together follows the list):

  1. It renames the Computational Storage Processor.  The component within a Computational Storage Device (CSx) is now called a Computational Storage Engine (CSE).  The Computational Storage Processor (CSP) now only refers to a device that contains a Computational Storage Engine (CSE) and no storage.
  2. It defines a new architectural concept of a Computational Storage Engine Environment (CSEE).  This is something that is attached to a specific CSE and defines the environment that a Computational Storage Function (CSF) operates in.
  3. It defines a new architectural element of a Resource Repository that contains CSEEs that are available for activation on a CSE and also CSFs that are available for activation on a CSEE.
  4. Discovery and configuration flows are now documented in Version 0.8.
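
To make the new relationships concrete, here is a minimal, hypothetical C sketch of how these elements nest. The type and field names are illustrative only and are not taken from the specification.

```c
/* Hypothetical sketch of the v0.8 architectural elements; names are
 * illustrative and not defined by the SNIA specification itself. */

#define MAX_FUNCTIONS    8
#define MAX_ENVIRONMENTS 4

/* A Computational Storage Function (CSF): the unit of computation. */
struct csf {
    const char *name;       /* e.g. "decompress", "db-filter" */
    int         activated;  /* activated on a CSEE? */
};

/* A Computational Storage Engine Environment (CSEE): the environment
 * (for example an eBPF runtime or a container) a CSF operates in. */
struct csee {
    const char *runtime;    /* e.g. "eBPF", "container" */
    struct csf *active_csfs[MAX_FUNCTIONS];
};

/* A Computational Storage Engine (CSE) with its attached CSEE(s). */
struct cse {
    struct csee *active_csees[MAX_ENVIRONMENTS];
};

/* The Resource Repository holds CSEEs available for activation on a
 * CSE, and CSFs available for activation on a CSEE. */
struct resource_repository {
    struct csee *available_csees[MAX_ENVIRONMENTS];
    struct csf  *available_csfs[MAX_FUNCTIONS];
};

/* A Computational Storage Device (CSx): a CSP, CSD, or CSA. */
struct csx {
    struct cse                 engine;
    struct resource_repository repo;
    unsigned long long         storage_bytes; /* 0 for a CSP (no storage) */
};
```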

SOS: Why did the TWG decide to work on the release of a separate API document?

Bill Martin (BM): The overall objective of the Computational Storage API document is to define an interface between an application and a CSx. Version 0.5 is the first release to the public by the Technical Work Group. 

There are three key takeaways from version 0.5 (a hypothetical usage sketch follows the list):

  1. The document defines an Application Programming Interface (API) to CSxs.
  2. The API allows a user application on a host to have a consistent interface to any vendor’s CSx.
  3. A vendor defines a library for their device that implements the API.  Mapping to the wire protocol for the device is done by this library.  Functions that are not available on a specific CSx may be implemented in software.
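
To give a feel for what this looks like from the application side, here is a minimal, hypothetical C sketch of the flow the API enables: discover a CSx, allocate device memory, and invoke a Computational Storage Function. Every identifier (cs_enumerate_devices, cs_exec_function, and so on) is invented for this illustration and is not taken from the SNIA API document; a real program would link against a vendor-supplied library.

```c
/* Hypothetical application flow against a vendor-supplied CS library.
 * All identifiers here are invented for illustration; the real SNIA
 * API defines its own names and semantics. */
#include <stddef.h>

typedef struct cs_device cs_device_t;   /* opaque handle to a CSx    */
typedef struct cs_memory cs_memory_t;   /* device-side memory region */

/* Provided by the vendor library, which maps these calls onto the
 * wire protocol (e.g. NVMe) for its device. */
extern int cs_enumerate_devices(cs_device_t **devs, size_t max, size_t *found);
extern int cs_alloc_device_mem(cs_device_t *dev, size_t len, cs_memory_t **mem);
extern int cs_exec_function(cs_device_t *dev, const char *csf_name,
                            cs_memory_t *in, cs_memory_t *out);

int run_filter_example(void)
{
    cs_device_t *devs[4];
    size_t found = 0;

    /* 1. Discover available Computational Storage Devices (CSxs). */
    if (cs_enumerate_devices(devs, 4, &found) != 0 || found == 0)
        return -1;

    /* 2. Allocate device-side memory for input and output. */
    cs_memory_t *in, *out;
    if (cs_alloc_device_mem(devs[0], 1 << 20, &in) != 0 ||
        cs_alloc_device_mem(devs[0], 1 << 20, &out) != 0)
        return -1;

    /* 3. Invoke a Computational Storage Function (CSF) by name.
     *    If the CSx lacks this CSF, the library may fall back to a
     *    host software implementation, as the API model allows. */
    return cs_exec_function(devs[0], "filter", in, out);
}
```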

SOS: How can vendors use these documents?

BM: The Computational Storage Architecture and Programming Model is what I would categorize as a “descriptive” document.  There are no “shalls” or “shoulds” in the document. Rather, the Model is something implementers can use to view the elements they should be considering. It shows the components that are in the architecture, what they mean, and how they interact with each other. This allows users to understand the frameworks and options that can be implemented with a common language and understanding.

The API document, in contrast, is a “prescriptive” document.  It describes how to use the elements defined within the architectural document – how to do discovery and configuration, and how to utilize the architecture.   

These documents are meant to be used together. Some implementations may not use all of the elements of the architecture, but all of the elements are logically there.

JM: Individuals who are looking to implement computational storage – and who are developing their own computational storage devices – should absolutely review both documents and use them to provide feedback and questions. Many vendors are considering what their computational storage device should look like. This architecture framework provides good guidance and baseline nomenclature we can all use to speak the same language.

SOS: Are there any specific areas where you are looking for feedback?

BM: In the API specification, we’d like feedback on whether or not the discovery process covers everything implementers want to discover about a CSx. We’d like more depth and detail on what things people think they want to discover about a device.  We’d also like comments on items that need to be added on how you interact with the devices to execute a Computational Storage Function (CSF).

JM: For the Model, we’d like feedback on whether people see the value of this descriptive document and are actually following it.  We’d like to know if there are additional ideas or definitions that users want to see when constructing architectures, or whether there are gaps in the defined activities.

SOS:  Where can folks find out more information about the Specifications?

BM: We invite everyone to attend the upcoming SNIA Storage Developer Conference (SDC). We will be virtual this year on September 28 and 29.  Registrants can view 12 presentations on computational storage, including a Computational Storage Update from the Working Group presented by the Co-Chairs of the Computational Storage TWG, Scott Shadley and Jason Molgaard; one I will be giving on Computational Storage Moving Forward with an Architecture and API; and another by my Computational Storage TWG colleague Oscar Pinto on Computational Storage APIs.

And anyone with interest in computational storage can attend an open discussion during SDC on computational storage advances that will be featured in a Birds-of-a-Feather session via Zoom on September 29 at 4:00 pm Pacific. Go here to learn how to attend this SDC special event and all the Birds-of-a-Feather sessions.

SOS: Thanks to you both.  Our readers may want to know that SNIA’s work in computational storage is led by the 250+ volunteer vendor members of the Computational Storage Technical Work Group.  In addition to these two specifications, the TWG has also updated computational storage terms in the Online SNIA Dictionary.

The SNIA Computational Storage Special Interest Group accelerates the awareness of computational storage concepts and influences industry adoption and implementation of the technical specifications and programming models. Learn more at http://www.snia.org/cmsi

What is eBPF, and Why Does it Matter for Computational Storage?

Recently, a question came up in the SNIA Computational Storage Special Interest Group on new developments in a technology called eBPF and how they might relate to computational storage. To learn more, SNIA on Storage sat down with Eli Tiomkin, SNIA CS SIG Chair with NGD Systems; Matias Bjørling of Western Digital; Jim Harris of Intel; Dave Landsman of Western Digital; and Oscar Pinto of Samsung.

SNIA On Storage (SOS):  The eBPF.io website defines eBPF, extended Berkeley Packet Filter, as a revolutionary technology that can run sandboxed programs in the Linux kernel without changing kernel source code or loading kernel modules.  Why is it important?

Dave Landsman (DL): eBPF emerged in Linux as a way to do network filtering, and enables the Linux kernel to be programmed.  Intelligence and features can be added to existing layers, and there is no need to add additional layers of complexity.

SNIA On Storage (SOS):  What are the elements of eBPF that would be key to computational storage? 

Jim Harris (JH):  The key to eBPF is that it is architecturally agnostic; that is, applications can download programs into a kernel without having to modify the kernel.  Computational storage allows a user to do the same types of things – develop programs on a host and have the controller execute them without having to change the firmware on the controller.

Using a hardware agnostic instruction set is preferred to having an application need to download x86 or ARM code based on what architecture is running.

DL:  It is much easier to establish a standard ecosystem with architecture independence.  Instead of an application needing to download x86 or ARM code based on the architecture, you can use a hardware agnostic instruction set where the kernel can interpret and then translate the instructions based on the processor. Computational storage would not need to know the processor running on an NVMe device with this “agnostic code”.

SOS: How has the use of eBPF evolved?

JH: In the Linux kernel, eBPF began as a way to capture and filter network packets.  It is more efficient to run programs directly in the kernel I/O stack than to return packet data to user space, operate on it there, and then send the data back to the kernel.  Over time, eBPF use has evolved to additional use cases.

SOS:  What are some use case examples?

DL: One of the use cases is performance analysis. For example, eBPF can be used to measure things such as latency distributions for file system I/O, details of storage device I/O and TCP retransmits, and blocked stack traces and memory.
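
For readers new to this kind of tracing, the sketch below shows the general shape of such a measurement: a libbpf-style kprobe/kretprobe pair that timestamps vfs_read on entry and computes per-call latency on return. It is a minimal illustration of the kernel-side technique Dave mentions, not something specific to computational storage devices.

```c
/* Minimal libbpf-style sketch: measure vfs_read latency per call. */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 10240);
    __type(key, __u32);
    __type(value, __u64);
} start SEC(".maps");

SEC("kprobe/vfs_read")
int vfs_read_entry(void *ctx)
{
    __u32 tid = (__u32)bpf_get_current_pid_tgid();
    __u64 ts  = bpf_ktime_get_ns();

    bpf_map_update_elem(&start, &tid, &ts, BPF_ANY);
    return 0;
}

SEC("kretprobe/vfs_read")
int vfs_read_exit(void *ctx)
{
    __u32 tid  = (__u32)bpf_get_current_pid_tgid();
    __u64 *tsp = bpf_map_lookup_elem(&start, &tid);

    if (!tsp)
        return 0;

    __u64 delta_ns = bpf_ktime_get_ns() - *tsp;   /* per-call latency */
    bpf_map_delete_elem(&start, &tid);

    /* In a real tool, delta_ns would be bucketed into a histogram map
     * and read from user space to plot the latency distribution. */
    (void)delta_ns;
    return 0;
}

char LICENSE[] SEC("license") = "GPL";
```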

Matias Bjørling (MB): Other examples in the Linux kernel include tracing and gathering statistics.  However, while the eBPF programs in the kernel are fairly simple and can be verified by the Linux kernel, computational programs are more complex and longer running. Thus, there is a lot of work ongoing to explore how to efficiently apply eBPF to computational programs.

For example: what is the right set of run-time restrictions to be defined by the eBPF VM, do any new instructions need to be defined, and how can the program run as close as possible to the instruction set of the target hardware?

JH: One of the big use cases involves data analytics and filtering. A common data flow for data analytics involves large database table files that are often compressed and encrypted.  Without computational storage, you read the compressed and encrypted data blocks to the host, decompress and decrypt the blocks, and maybe do some filtering operations like a SQL query.  All of this, however, consumes a lot of extra host PCIe, host memory, and cache bandwidth because you are reading the data blocks and doing all these operations on the host.  With computational storage, you can tell the SSD to read data and transfer it not to the host but to memory buffers within the SSD.  The host can then tell the controller to run a fixed-function program, like decrypting the data and placing it in another local location on the SSD, and then run a user-supplied program, like an eBPF program, to do some filtering operations on that local decrypted data.  In the end you transfer only the filtered data to the host.  You are doing the compute closer to the storage, saving memory and bandwidth.
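
To make that sequence easier to follow, here is a hedged C sketch of the flow. Every function name and buffer handle below is hypothetical (prototypes only, not implemented here) and stands in for whatever commands the eventual NVMe and SNIA interfaces define.

```c
/* Illustrative near-data filtering pipeline. Every identifier below is
 * hypothetical; it mirrors the sequence described above, not any
 * defined NVMe or SNIA command set. */

enum cs_status { CS_OK = 0, CS_ERR = -1 };

/* Hypothetical device operations (would be issued over the transport). */
enum cs_status dev_read_to_local(unsigned long lba, unsigned int blocks,
                                 unsigned int local_buf);
enum cs_status dev_run_fixed_function(const char *fn,      /* "decrypt" */
                                      unsigned int in_buf,
                                      unsigned int out_buf);
enum cs_status dev_run_ebpf(const void *prog, unsigned long prog_len,
                            unsigned int in_buf, unsigned int out_buf);
enum cs_status dev_copy_to_host(unsigned int local_buf, void *host_buf,
                                unsigned long len);

enum cs_status filter_table(const void *ebpf_filter, unsigned long filter_len,
                            void *results, unsigned long results_len)
{
    /* 1. Read compressed/encrypted table blocks into SSD-local buffers,
     *    not into host memory. */
    if (dev_read_to_local(/*lba=*/0, /*blocks=*/4096, /*local_buf=*/0) != CS_OK)
        return CS_ERR;

    /* 2. Fixed-function step on the device: decrypt (and/or decompress)
     *    into a second local buffer. */
    if (dev_run_fixed_function("decrypt", 0, 1) != CS_OK)
        return CS_ERR;

    /* 3. A user-supplied eBPF program filters the decrypted data on the
     *    device (e.g. a SQL-like predicate). */
    if (dev_run_ebpf(ebpf_filter, filter_len, 1, 2) != CS_OK)
        return CS_ERR;

    /* 4. Only the (much smaller) filtered result crosses PCIe to the host. */
    return dev_copy_to_host(2, results, results_len);
}
```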

SOS: How does using eBPF for computational storage look the same as using it in the kernel?  How does it look different?

JH: There are two parts to this answer.  Part 1 is the eBPF instruction set with registers and how eBPF programs are assembled.  Where we are excited about computational storage and eBPF is that the instruction set is common, and there are already existing toolchains that support eBPF.  You can take a C program and compile it into an eBPF object file, which is huge.  If you add computational storage aspects to standards like NVMe, where developing unique toolchain support can take a lot of work, you can now leverage what is already there for the eBPF ecosystem.

Part 2 of the answer centers around the Linux kernel’s restrictions on what an eBPF program is allowed to do when downloaded. For example, the eBPF instruction set allows for unbounded loops, and toolchains such as gcc will generate eBPF object code with unbounded loops, but the Linux kernel will not permit those to execute – and rejects the program. These restrictions are manageable when doing packet processing in the kernel.  The kernel knows a packet’s specific data structure and can verify that data is not being accessed outside the packet.  With computational storage, you may want to run an eBPF program that operates on a set of data that has a very complex data structure – perhaps arrays not bounded or multiple levels of indirection.  Applying Linux kernel verification rules to computational storage would limit or even prevent processing this type of data.
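
As a concrete illustration of both points: a plain C routine like the sketch below can be compiled to an eBPF object file with an existing toolchain (for example, clang -O2 -target bpf -c filter.c -o filter.o), and its while loop, whose bound is only known at run time, is exactly the kind of construct the Linux kernel verifier typically rejects even though the instruction set can express it.

```c
/* filter.c - a toy record filter meant to be compiled to eBPF, e.g.:
 *     clang -O2 -target bpf -c filter.c -o filter.o
 * The while loop below has no compile-time bound; the Linux kernel
 * verifier typically rejects such loops, but a computational storage
 * runtime could choose different restrictions for device-side use. */

struct record {
    unsigned int key;
    unsigned int value;
};

/* Count records whose value exceeds a threshold. */
int count_matches(const struct record *recs, unsigned long nrecs,
                  unsigned int threshold)
{
    int matches = 0;
    unsigned long i = 0;

    while (i < nrecs) {          /* unbounded from the verifier's view */
        if (recs[i].value > threshold)
            matches++;
        i++;
    }
    return matches;
}
```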

SOS: What are some of the other challenges you are working through with using eBPF for computational storage?

MB: We know that x86 works fast with high memory bandwidth, while other cores are slower.  We have some general compute challenges in that eBPF needs to be able to hook into today’s hardware like we do for SSDs.  What kinds of operations make sense to offload for these workloads?  How do we define a common implementation API for all of them and build an ecosystem on top of it?  Do we need an instruction-based compiler, or a library to compile up to – and if you have it on the NVMe drive side, could you use it?  eBPF in itself is great – but getting a whole ecosystem and getting all of us to agree on what creates value will be the challenge in the long term.

Oscar Pinto (OP): The Linux kernel support for eBPF today is geared more towards networking and is light on storage. That may be a challenge in building a computational storage framework. We need to think through how to enhance this, given that we download and execute eBPF programs in the device. As Matias indicated, x86 is great at what it does in the host today. But if we have to work with smaller CPUs in the device, they may need help from, say, dedicated hardware or additional logic to aid the eBPF programs. One question is how these programs would talk to that hardware.  We don’t have a setup for storage like this today, and there are a variety of storage services that could benefit from eBPF.

SOS: Is SNIA addressing this challenge?

OP: On the SNIA side we are building on program functions that are downloaded to computational storage engines.  These functions run on the engines, which are CPUs or some other form of compute tied to an FPGA, DPU, or dedicated hardware. We are defining these abstracted functionalities in SNIA today, and the SNIA Computational Storage Technical Work Group is developing a Computational Storage Architecture and Programming Model and Computational Storage APIs to address it.  The latest versions, v0.8 and v0.5, have been approved by the SNIA Technical Council and are now available for public review and comment at the SNIA Feedback Portal.

SOS: Is there an eBPF standard? Is it aligned with storage?

JH: We have a challenge around what an eBPF standard should look like.  Today it is defined in the Linux kernel.  But if you want to incorporate eBPF in a storage standard, you need to have something specified for that storage standard.  We know the Linux kernel will continue to evolve, adding and modifying instructions. But if you have an NVMe SSD or other storage device, you have to have something set in stone – the version of eBPF that the standard supports.  We need to know what the eBPF standard will look like and where it will live.  Will standards organizations need to define something separately?

SOS:  What would you like an eBPF standard to look like from a storage perspective?

JH: We’d like an eBPF standard that can be used by everyone.  We are looking at how computational storage can be implemented in a way that is safe and secure but is also able to solve use cases that differ from one another.

MB:  Security will be a key part of an eBPF standard.  Programs should not access data they should not have access to.  This will need to be solved within a storage device. There are some synergies with external key management. 

DL: The storage community has to figure out how to work with eBPF and make this standard something that a storage environment can take advantage of and rely on.

SOS: Where do you see the future of eBPF?

MB: The vision is that you can build eBPF programs and they work everywhere.  When we build new database systems and integrate eBPF programs into them, we then have embedded kernels that can be sent to any NVMe device over the wire and be executed.  The cool part is that this can happen anywhere on the path, so there become a lot of interesting ways to build new architectures on top of it. And together with the open ecosystem we can create a body of accelerators with which we can fast-track the build-out of these ecosystems.  eBPF can put this into overdrive with use cases outside the kernel.

DL: There may be some other environments where computational storage is being evaluated, such as WebAssembly.

JH: An eBPF runtime is much easier to put into an SSD than a WebAssembly runtime.

MB: eBPF makes more sense – it is simpler to start and build upon as it is not set in stone for one particular use case.

Eli Tiomkin (ET): Different SSDs have different levels of constraints.  Every computational storage SSD in production, and even those in development, has very unique capabilities that depend on the workload and application.

SOS:  Any final thoughts?

MB: At this point, technologies are coming together which are going to change the industry in a way that we can redesign the storage systems both with computational storage and how we manage security in NVMe devices for these programs.  We have the perfect storm pulling things together. Exciting platforms can be built using open standards specifications not previously available.

SOS:  Looking forward to this exciting future. Thanks to you all.

Continuing to Refine and Define Computational Storage

The SNIA Computational Storage Technical Work Group (TWG) has been hard at work on the SNIA Technical Document Computational Storage Architecture and Programming Model.  SNIAOnStorage recently sat down via Zoom with the document editor Bill Martin of Samsung and TWG Co-Chairs Jason Molgaard of Arm and Scott Shadley of NGD Systems to understand the work included in the Model and why definitions of computational storage are so important.

SNIAOnStorage (SOS): Shall we start with the fundamentals?  Just what is the Computational Storage Architecture and Programming Model?

Scott Shadley (SS):  The SNIA Computational Storage Architecture and Programming Model (Model) introduces the framework of how to use a new tool to architect your infrastructure by deploying compute resources in the storage layer.

Bill Martin (BM): The Model enables architecture and programming of computational storage devices. These kinds of devices include those with storage physically attached, and also those with storage not physically attached but considered computational because the devices are associated with storage.

SOS: How did the TWG approach creating the Model and what does it cover?

SS:  SNIA is known for bringing standardization to customized operations; and with the Model, users now have a common way to identify the different solutions offered in computational storage devices and a standard way to discover and interact with these devices. Like the way NVMe brought common interaction to the wild west of PCIe, the SNIA Model ensures the many computational storage products already on the market can align to interact in a common way, minimizing the need for unique programming to use solutions most effectively.  

Jason Molgaard (JM):  The Model covers both the hardware architecture and software application programming interface (API) for developing and interacting with computational storage.

BM:  The architecture sections of the Model cover the components that make up computational storage and the API provides a programming interface to those components.

SOS:  I know the TWG members have had many discussions to develop standard terms for computational storage.  Can you share some of these definitions and why it was important to come to consensus?

BM:  The model defines Computational Storage Devices (CSxs) which are composed of Computational Storage Processors (CSPs), Computational Storage Drives (CSDs), and Computational Storage Arrays (CSAs). 

Each Computational Storage Device contains a Computational Storage Engine (CSE) and some form of host accessible memory for that engine to utilize. 

The Computational Storage Processor is a device that has a Computational Storage Engine but does not contain storage. The Computational Storage Drive contains a Computational Storage Engine and storage.  And the Computational Storage Array contains an array with an array processor and a Computational Storage Engine.

Finally, the Computational Storage Engine executes Computational Storage Functions (CSFs) which are the entities that define the particular computation.  

All of the computational storage terms can be found online in the SNIA Dictionary. 
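
One compact way to picture that hierarchy is the illustrative C sketch below; the types and the classification rule paraphrase the definitions above rather than quoting the Model.

```c
/* Illustrative taxonomy of Computational Storage Devices (CSxs),
 * paraphrasing the definitions above; not text from the Model. */

enum csx_kind {
    CSX_PROCESSOR,  /* CSP: a CSE and host-accessible memory, no storage   */
    CSX_DRIVE,      /* CSD: a CSE and host-accessible memory, plus storage */
    CSX_ARRAY       /* CSA: an array with an array processor plus a CSE    */
};

struct csx_summary {
    int has_cse;                  /* every CSx contains a CSE             */
    int has_host_accessible_mem;  /* ...and memory for that CSE to use    */
    int has_storage;              /* persistent storage present?          */
    int is_array;                 /* fronted by an array processor?       */
};

/* Classify a device description into CSP / CSD / CSA. */
enum csx_kind classify(const struct csx_summary *d)
{
    if (d->is_array)
        return CSX_ARRAY;
    return d->has_storage ? CSX_DRIVE : CSX_PROCESSOR;
}
```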

SS: An architecture and programming model is necessary to allow vendor-neutral, interoperable implementations of this industry architecture, and clear, accurate definitions help to define how the computational storage hierarchy works.  The TWG spent many hours defining these standard nomenclatures to be used by providers of computational storage products.

JM: It has been a work in progress over the last 18 months, and the perspectives of all the different TWG member companies have brought more clarity to the terms and refined them to better meet the needs of the ecosystem.

BM: One example has been the change of what was called computational storage services to the more accurate and descriptive Computational Storage Functions.   The Model defines a list of potential functions such as compression/decompression, encoding/decoding, and database search.  These and many more are described in the document.

SOS: Is SNIA working with the industry on computational storage standards?

BM: SNIA has an alliance with the NVM Express® organization, and they are working on computational storage. As other organizations (e.g., the CXL Consortium) develop computational storage for their interfaces, SNIA will pursue alliances with those organizations.  You can find details on SNIA alliances here.

SS: SNIA is also monitoring other Technical Work Group activity inside SNIA, such as the Smart Data Accelerator Interface (SDXI) TWG working on the memory layer, as well as efforts around security, which is a key topic being investigated now.

SOS:  Is a new release of the Computational Storage Architecture and Programming Model pending?

BM: Stay tuned – the next release of the Model, v0.6, is coming very soon.  It will contain updates and an expansion of the architecture.

JM: We have also been working on an API document, which will be released at the same time as the v0.6 release of the Model.

SOS:  Who will write the software based on the Computational Storage Architecture and Programming Model?

JM:  Computational Storage TWG members will develop open-source software aligned with the API, and application programmers will use those libraries.

SOS: How can the public find out about the next release of the Model?

SS: We will announce it via our SNIA Matters newsletter. Version 0.6 of the Model as well as the API will be up for public review and comment at this link.  And we encourage companies interested in the future of computational storage to join SNIA and contribute to the further development of the Model and the API.  You can reach us with your questions and comments at askcmsi@snia.org.

SOS:  Where can our readers learn more about computational storage?

SS: Eli Tiomkin, Chair of the SNIA Computational Storage Special Interest Group (CS SIG), Jason, and I sat down to discuss the future of computational storage in January 2021.  The CS SIG also has a series of videos that provide a great way to get up to speed on computational storage.  You can find them and “Geek Out on Computational Storage” here.

SOS:  Thanks for the update, and we’ll look forward to a future SNIA webcast on your computational storage work.