2024 Year of the Summit Kicks Off – Meet us at MemCon

2023 was a great year for SNIA CMSI to meet with IT professionals and end users in “Summits” to discuss technologies, innovations, challenges, and solutions.  Our outreach at six industry events reached over 16,000 attendees, and we thank all who engaged with our CMSI members.

We are excited to continue a second “Year of the Summit” with a variety of opportunities to network and converse with you.  Our first networking event will take place March 26-27, 2024 at MemCon in Mountain View, CA.

MemCon 2024 focuses on systems design for the data-centric era: working with data-intensive workloads, integrating emerging technologies, and overcoming data movement and management challenges. The agenda includes presentations and panels featuring speakers from Meta, Microsoft, Netflix, Samsung, and Warner Brothers.  It’s the perfect event to discuss SNIA’s focus on developing global standards and delivering education on all technologies related to data.  SNIA and MemCon have prepared a video highlighting several of the key topics to be discussed.

MemCon 2024 Video Preview

At MemCon, SNIA CMSI member and SDXI Technical Work Group Chair Shyam Iyer of Dell will moderate a panel discussion on How are Memory Innovations Impacting the Total Cost of Ownership in Scaling-Up and Power Consumption, discussing impacts on hyperscalers, AI/ML compute, and cost/power.

SNIA Board member David McIntyre will participate in a panel on How are Increased Adoption of CXL, HBM, and Memory Protocol Expected to Change the Way Memory and Storage is Used and Assembled?, with insights on the markets and emerging memory innovations. The full MemCon agenda is here.

In the exhibit area, SNIA leaders will be on hand to demonstrate updates to the SNIA Persistent Memory Programming Workshop featuring new CXL® memory modules (get an early look at our Programming exercises here) and to provide a first look at a Smart Data Accelerator Interface (SDXI) specification implementation.  We’ll also provide updates on SNIA technical work on form factors like those used for CXL. We will feature a drawing for gift cards at the SNIA hosted coffee receptions and at the Tuesday evening networking reception.

SNIA colleagues and friends can register for MemCon with a 15% discount using code SNIA15.

And stay tuned for engaging with SNIA at upcoming events in 2024, including a return of the SNIA Compute, Memory, and Storage Summit in May 2024; FMS: the Future of Memory and Storage in August 2024; SNIA SDC in September; and SC24 in Atlanta in November 2024. We’ll discuss each of these in depth in our Year of the Summit blog series.

Emerging Memories Branch Out – a Q&A

Our recent SNIA Persistent Memory SIG webinar explored in depth the latest developments and futures of emerging memories – now found in multiple applications both as stand-alone chips and embedded into systems on chips. We got some great questions from our live audience, and our experts Arthur Sainio, Tom Coughlin, and Jim Handy have taken the time to answer them in depth in this blog. And if you missed the original live talk, watch the video and download the PDF here.

Q:  Do you expect Persistent Memory to eventually gain the speeds that exist today with DRAM?

A:  It appears that this has already happened with the hafnium ferroelectrics that SK Hynix and Micron have shown.  Ferroelectric memory is a very fast technology, and with very fast write cycles there should be every reason for it to go that way. With the hooks that are in CXL™, though, slower writes shouldn’t be that much of a problem, since it’s a transactional protocol. The reads, then, will probably rival DRAM speeds for MRAM and for resistive RAM (MRAM might get up to DRAM speeds with its writes too). In fact, there are technologies like spin-orbit torque and even voltage-controlled magnetic anisotropy that promise higher performance and also low write latency for MRAM technologies. Most applications are probably read-intensive, so reads are where the real focus is, but it does look like we are going to get there.

Q:  Are all the new memory technology protocols (electrically) compatible with DRAM interfaces like DDR4 or DDR5? If not, shouldn’t those technologies have lower chances of adoption, since they add a dependency on a custom memory controller?

A:  That’s just a logic problem.  There’s nothing innate about any memory technology that couples it tightly to any kind of a bus, and because NOR flash and SRAM have been the easy targets so far, most emerging technologies have used a NOR flash or SRAM type interface.  However, in the future they could use DDR.  There are some special twists because you don’t have to refresh emerging memory technologies, but in general they could use DDR.

But one of the beauties of CXL is that you can put anything you want, with any kind of interface, on the other side of CXL, and CXL erases the differences. It moderates them, so although the memories may have different performance, it’s hidden behind the CXL network.  The burden then goes onto the CXL controller designers to make sure that those emerging technologies, whether MRAM or others, can be adopted behind the CXL protocol. My expectation is for there to be a few companies early on who provide CXL controllers that do have some kind of a specialty interface on them, whether it’s for MRAM or for resistive RAM or something like that, and then eventually for them to move their way into the mainstream.  Another interesting thing about CXL is that we may even see a hierarchy of different memories within CXL itself, which, as part of CXL, also includes domain-specific processors or accelerators that operate close to memory, so there are very interesting opportunities there as well. If you can do processing close to memory, you lower the amount of data you’re moving around and you save a lot of power for the computing system.

Q: Emerging memory technologies have a byte-level direct access programming model, in contrast to block-based NAND flash. Do you think this new programming model will eventually replace NAND flash, since it reduces the overhead and the power of transferring data?

A: It’s a question of cost, and that’s something that was discussed very much in our webinar. If you haven’t got a cost that’s comparable to NAND flash, then you can’t really displace it.  But as far as the interface is concerned, the NAND interface is incredibly clumsy. All of these technologies have byte interfaces rather than a block interface, and they can also write in place – they don’t need a pre-erased block to write into. From a technical standpoint that is a huge advantage, and now it’s just a question of whether they can get the cost down – which means getting the volume up.
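To make that contrast concrete, here is a minimal sketch in C, assuming a persistent, byte-addressable memory exposed to the host as a DAX-mappable file (the path /mnt/pmem/example is purely hypothetical). The update is an in-place store followed by a flush; on block-based NAND the same change would require reading, modifying, and rewriting an entire block through the flash translation layer.

    /* Minimal sketch, assuming byte-addressable persistent memory mapped
     * from a hypothetical DAX file; not tied to any particular product. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int update_counter(uint64_t new_value)
    {
        int fd = open("/mnt/pmem/example", O_RDWR);
        if (fd < 0)
            return -1;

        uint64_t *p = mmap(NULL, sizeof(*p), PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) {
            close(fd);
            return -1;
        }

        *p = new_value;                /* write in place: one byte-addressable store */
        msync(p, sizeof(*p), MS_SYNC); /* portable flush toward the persistence domain */

        munmap(p, sizeof(*p));
        close(fd);
        return 0;
    }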

Q: Can you discuss the High Bandwidth Memory (HBM) trends? What about memories used with Graphic Processing Units (GPUs)?

A: That topic isn’t the subject of this webinar, as this webinar is about emerging memory technologies. But, to comment, we don’t expect to see emerging memory technologies adopt an HBM interface anytime in the really near future, because HBM springboards off DRAM and, as we discussed on one of the slides, DRAM has a transition to another emerging memory technology whose timing we don’t know.  We’ve put it in the early 2030s in our chart, but it could be much later than that, and HBM won’t convert over to an emerging memory technology until long after that.

However, HBM involves stacking of chips, and that ultimately could happen.  It’s a more expensive process right now – a way of getting a lot of memory very close to a processor – and if you look at some of the NVIDIA applications, for example, HBM plays a role in chiplet technologies for GPUs.  Chiplets are another area that’s going to be using emerging memories as well.  While we didn’t talk about that so much in this webinar, it is another place for emerging memories to play a role.

There’s one other advantage to using an emerging memory that we did not talk about: none of the emerging memory technologies need refresh. More power is consumed by DRAM refresh than by actual data accesses.  If you can cut that out, you might be able to stack more chips on top of each other and get even more performance, but we still wouldn’t see that as a reason for DRAM to be displaced early on in HBM and then later on in the mainstream DRAM market.  Although, if you’re doing all those refreshes there’s a fair amount of potential heat generation, which may have packaging implications as well. So there may be some niche areas which could be some of the first places these emerging memories are used for those kinds of applications, if the performance is good enough.

Q:  Why have some memory companies failed?  Apart from the cost/speed considerations you mention, what are the other minimum envelope features that a new emerging memory should have? Is capacity (I heard 32Gbit multiple times) one of those criteria?

A: Shipping a product is probably the single most important activity for success. Companies don’t have to make a discrete or standalone SRAM or emerging memory chip, but if they’re not going to ship it themselves they need to have their technology adopted by somebody who is shipping something.  That’s what we see in the embedded market as a good path for emerging memory IP: to get used and to build up volume. As the volume and comfort with manufacturing those memories increase, it opens up the possibility down the road of lower costs with higher-volume standalone memory as well.

Q:  What are the trends in DRAM interfaces?  Would you discuss CXL’s role in enabling composable systems with DRAM pooling?

A:  CXL, especially CXL 3.0, is particularly pointed at pooling. Pooling is going to be an extremely important development in memory with CXL, and it’s one of the reasons why CXL will probably proliferate. It allows you to allocate memory that is not attached to particular server CPUs and therefore to make more efficient and effective use of those memories. We mentioned this earlier when we said that right now DRAM is the memory behind CXL, with some NAND flash products out there too. But this could expand into other memory technologies behind CXL within the CXL pool, as well as accelerators (domain-specific processors) that do some operations closer to where the memory lives. So we think there’s a lot of possibility in that pooling for the development and growth of emerging memories as well as conventional memories.

Q: Do you think molecular-based technologies (DNA or others) can emerge in the coming years as an alternative to some of the semiconductor-based memories?

A: DNA and other molecular memory technologies are at a relatively early stage, but there are people who are making fairly aggressive plans for what they can do with those technologies. We think the initial market for those molecular memories is not in this high-performance memory application; but especially with DNA, the potential density of storage and the fact that you can make lots of copies of content using genomic processes make them potentially very attractive for archiving applications.  The things we’ve seen are mostly in those areas because of the performance characteristics. The potential density they’re looking at is aimed at that lower part of the market, so it has to be very, very cost effective, but the possibilities are there.  Again, as with the emerging high-performance memories, you still have economies of scale to deal with – if you can’t scale it fast enough, the cost won’t come down enough to actually compete in those areas. So it faces somewhat similar challenges, though in a different part of the market.

Earlier in the webcast, when showing the orb chart, we said that for something to fit into the computing storage hierarchy it has to be cheaper than the next faster technology and faster than the next cheaper technology. DNA is not a very fast technology, so that automatically says it has to be really cheap to catch on, and that puts it in a very different realm from the emerging memories we’re talking about here. On the other hand, you never know what someone’s going to discover, but right now the industry doesn’t know how to make fast molecular memories.

Q:  What is your intuition on how tomorrow’s highly dense memories might impact non-load/store processing elements such as AI accelerators? As model sizes continue to grow and energy density becomes more of an issue, it would seem like emerging memories could thrive in this type of environment. Your thoughts?

A:  Any memory would thrive in an environment where there was an unbridled thirst for memory, as there currently is with artificial intelligence (AI). But AI is undergoing some pretty rapid changes, not only in the number of parameters that are examined, but also in the models that are being used. We recently read a paper written by Apple* where they found ways of winnowing down the data used for a large language model into something that would fit into an Apple MacBook Pro M2, and they were able to get good performance by doing that.  They really accelerated things by ignoring data that didn’t make any difference. So, if you take that kind of approach and say, “Okay, if they keep working on that problem that way and take it to the extreme,” then you might not need all that much memory after all.  Still, if memory were free, I’m sure there’d be a ton of it out there, and it is just a question of whether these memories can get cheaper than DRAM so that they look close to free compared with what things cost today.

There are three interesting elements to this.  First, CXL, in addition to allowing mixing of memory types, lets you put domain-specific processors close to the memory. Perhaps those can do some of the processing that’s part of the model, in which case they would lower the energy consumption. Second, these memories support different computing models than what we traditionally use. There is of course quantum computing, but there are also neural-network approaches that use the memory itself as a matrix multiplier, and those use these emerging memories for that technology, which could be applied to AI.  Third, somewhat hidden behind this, spin tunneling could change processing itself: right now everything is current-based, but there’s work going on in spintronic devices that, instead of using current, would use the spin of electrons to move data around, in which case we can avoid resistive heating, and processing could run a lot cooler and use less energy.  So there are a lot of interesting things buried in the different technologies being used for these emerging memories that could have even greater implications for the development of computing beyond the memory applications themselves.  To elaborate on spintronics: we’re talking about logic, not spin memory – using spin rather than charge (current).

Q:  Flash has an endurance issue (maximum number of writes before it fails). In your opinion, what is the minimum acceptable endurance (number of writes) that an emerging memory should support?

A: It’s amazing how many techniques have fallen into place since wear was first an issue in flash SSDs.  Today’s software understands which loads have high write levels and which don’t, and different SSDs can be used to handle the two kinds of load.  On the SSD side, flash endurance has continually degraded with the adoption of MLC, TLC, and QLC, and is sometimes measured in the hundreds of cycles.  What this implies is that any emerging memory can get by with an equally low endurance as long as it’s put behind the right controller.

In high-speed environments this isn’t a solution, though, since controllers add latency, so “Near Memory” (the memory tied directly to the processor’s memory bus) will need to have higher endurance.  Still, an area that can help to accommodate that is the practice of putting code into memories that have low endurance and data into higher-endurance memory (which today would be DRAM).  Since emerging memories can provide more bits at a lower cost and power than DRAM, the write load to the code space should be lower, since pages will be swapped in and out less frequently.  The endurance requirements will depend on this swapping, and I would guess that the lowest acceptable level would be in the tens of thousands of cycles.
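As a purely illustrative back-of-the-envelope check on that guess: if a code page in such a low-endurance near memory were rewritten by a swap once per hour, an endurance of 50,000 cycles would cover

    50,000 cycles ÷ (24 swaps/day × 365 days/year) ≈ 5.7 years

which is comfortably beyond a typical system lifetime, while at one swap per minute the same endurance would be exhausted in about five weeks. The acceptable number really does hinge on the swapping rate.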

Q: It seems that persistent memory is more of an enterprise benefit rather than a consumer benefit. And consumer acceptance helps the advancement and cost scaling issues. Do you agree? I use SSDs as an example. Once consumers started using them, the advancement and prices came down greatly.

A: Anything that drives increased volume will help.  In most cases any change to large-scale computing works its way down to the PC, so this should happen in time here, too. But today there’s a growing amount of MRAM use in personal fitness monitors, and this will help drive costs down, so initial demand will not come exclusively from enterprise computing. At the same time, the IBM FlashDrive that we mentioned uses MRAM too, so both enterprise and consumer are already working simultaneously to grow consumption.

Q: The CXL diagram (slide 22 in the PDF) has 2 CXL switches between the CPUs and the memory. How much latency do you expect the switches to add, and how does that change where CXL fits on the array of memory choices from a performance standpoint?

A: The CXL delay goals are very aggressive, but I am not sure that an exact number has been specified.  It’s on the order of 70 ns per “hop,” which can be understood as the delay of going through a switch or a controller. Naturally, software will evolve to work with this, and will move data that has high bandwidth requirements but is less latency-sensitive to more remote areas, while keeping the more latency-sensitive data in near memory.
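As a rough, illustrative calculation only (the 70 ns figure is approximate, and a local DRAM access time of about 100 ns is our assumption rather than a specified value), a path through the two switches in that diagram would look something like

    ~100 ns (local DRAM access) + 2 × 70 ns (two hops) ≈ 240 ns

which places switched CXL memory a tier below direct-attached DRAM but still far above NAND flash in the performance hierarchy.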

Q: Where can I learn more about the topic of Emerging Memories?

A: Here are some resources to review:

* LLM in a Flash: Efficient Large Language Model Inference with Limited Memory, Keivan Alizadeh, et al., arXiv:2312.11514 [cs.CL]

It’s A Wrap – But Networking and Education Continue From Our C+M+S Summit!

Our 2023 SNIA Compute+Memory+Storage Summit was a success! The event featured 50 speakers in 40 sessions over two days. Over 25 SNIA member companies and alliance partners participated in creating content on computational storage, CXL™ memory, storage, security, and UCIe™. All presentations and videos are free to view at www.snia.org/cms-summit.

“For 2023, the Summit scope expanded to examine how the latest advances within and across compute, memory and storage technologies should be optimized and configured to meet the requirements of end customer applications and the developers that create them,” said David McIntyre, Co-Chair of the Summit.  “We invited our SNIA Alliance Partners Compute Express Link™ and Universal Chiplet Interconnect Express™ to contribute to a holistic view of application requirements and the infrastructure resources that are required to support them,” McIntyre continued.  “Their panel on the CXL device ecosystem and usage models and presentation on UCIe innovations at the package level along with three other sessions on CXL added great value to the event.”

Thirteen computational storage presentations covered what is happening in NVMe™ and SNIA to support computational storage devices and define new interfaces with computational storage APIs that work across different hardware architectures.  New applications for high performance data analytics, discussions of how to integrate computational storage into high performance computing designs, and new approaches to integrate compute, data and I/O acceleration closely with storage systems and data nodes were only a few of the topics covered.

“The rules by which the memory game is played are changing rapidly, and we received great feedback on our nine presentations in this area,” said Willie Nelson, Co-Chair of the Summit.  “SNIA colleagues Jim Handy and Tom Coughlin always bring surprising conclusions and opportunities for SNIA members to keep abreast of new memory technologies, and their outlook was complemented by updates on SNIA standards on memory-to-memory data movement and on JEDEC memory standards; presentations on thinking memory, fabric attached memory, and optimizing memory systems using simulations; a panel examining where the industry is going with persistent memory; and much more.”

Additional highlights included an EDSFF panel covering the latest SNIA specifications that support these form factors, sharing an overview of EDSFF-enabled platforms, and discussing the future for new product and application introductions; a discussion on NVMe as a cloud interface; and a session on using computational storage to detect ransomware.

New to the 2023 Summit – and continuing to get great views – was a “mini track” on Security, led by Eric Hibbard, chair of the SNIA Storage Security Technical Work Group with contributions from IEEE Security Work Group members, including presentations on cybersecurity, fine grain encryption, storage sanitization, and zero trust architecture.

Co-Chairs McIntyre and Nelson encourage everyone to check out the video playlist and send your feedback to askcmsi@snia.org. The “Year of the Summit” continues with networking opportunities at the upcoming SmartNIC Summit (June), Flash Memory Summit (August), and SNIA Storage Developer Conference (September).  Details on all these events and more are at the SNIA Event Calendar page.  See you soon!

50 Speakers Featured at the 2023 SNIA Compute+Memory+Storage Summit

SNIA’s Compute+Memory+Storage Summit is where architectures, solutions, and community come together. Our 2023 Summit – taking place virtually on April 11-12, 2023 – is the best example to date, featuring a stellar lineup of 50 speakers in 40 sessions covering topics including computational storage real-world applications, the future of memory, critical storage security issues, and the latest on SSD form factors, CXL™, and UCIe™.

“We’re excited to welcome executives, architects, developers, implementers, and users to our 11th annual Summit,” said David McIntyre, C+M+S Summit Co-Chair, and member of the SNIA Board of Directors.  “We’ve gathered the technology leaders to bring us the latest developments in compute, memory, storage, and security in our free online event.  We hope you will watch live to ask questions of our experts as they present, and check out those sessions you miss on-demand.”

Memory sessions begin with Watch Out – Memory’s Changing! where Jim Handy and Tom Coughlin will discuss the memory technologies vying for the designer’s attention, with CXL™ and UCIe™ poised to completely change the rules. Speakers will also cover thinking memory, optimizing memory using simulations, providing capacity and TCO to applications using software memory tiering, and fabric attached memory.

Compute sessions include Steven Yuan of StorageX discussing the Efficiency of Data Centric Computing, and presentations on the computational storage and compute market, big-disk computational storage arrays for data analytics, NVMe as a cloud interface, improving storage systems for simulation science with computational storage, and updates on SNIA and NVM Express work on computational storage standards.

CXL and UCIe will be featured with presentations on CXL 3.0 and on the Universal Chiplet Interconnect Express™ On-Package Innovation Slot for Compute, Memory, and Storage Applications.

The Summit will also dive into security with an introductory view of today’s storage security landscape and additional sessions on zero trust architecture, storage sanitization, encryption, and cyber recovery and resilience.

For 2023, the Summit is delighted to present three panels – one on Exploring the Compute Express Link™ (CXL™) Device Ecosystem and Usage Models moderated by Kurtis Bowman of the CXL Consortium, one on Persistent Memory Trends moderated by Dave Eggleston of Microchip, and one on Form Factor Updates, moderated by Cameron Brett of the SNIA SSD Special Interest Group.

We will also feature the popular SNIA Birds-of-a-Feather sessions. On Tuesday April 11 at 4:00 pm PDT/7:00 pm EDT, you can join to discuss the latest compute, memory, and storage developments, and on Wednesday April 12 at 3:00 pm PDT/6:00 pm EDT, we’ll be talking about memory advances.

Learn more in our Summit preview video, check out the agenda, and register for free to access our Summit platform!

Join Us as We Return Live to FMS!

SNIA is pleased to be part of the Flash Memory Summit 2022 agenda August 1-4, 2022 at the Santa Clara CA Convention Center, with our volunteer leadership demonstrating solutions, chairing and speaking in sessions, and networking with FMS attendees at a variety of venues during the conference.

The ever-popular SNIA Reception at FMS features the SNIA groups Storage Management Initiative, Compute Memory and Storage Initiative, and Green Storage Initiative, along with SNIA alliance partners CXL Consortium, NVM Express, and OpenFabrics Alliance.  Stop by B-203/204 at the Convention Center from 5:30 – 7:00 pm Monday August 1 for refreshments and networking with colleagues to kick off the week!

You won’t want to miss SNIA’s mainstage presentation on Wednesday August 3 at 2:40 pm in the Mission City Ballroom. SNIA Vice Chair Richelle Ahlvers of Intel will provide a perspective on how new storage technologies and trends are accelerating through standards and open communities.

In the Exhibit Hall, SNIA Storage Management Initiative and Compute Memory and Storage Initiative are FMS Platinum sponsors with a SNIA Demonstration Pavilion at booth #725.  During exhibit hours Tuesday evening through Thursday afternoon, 15 SNIA member companies will be featured in live technology demonstrations on storage management, computational storage, persistent memory, sustainability, and form factors; a Persistent Memory Programming Workshop and Hackathon; and theater presentations on SNIA’s standards and alliance work. 

Long standing SNIA technology focus areas in computational storage and memory will be represented in the SNIA sponsored System Architectures Track (SARC for short) – Tuesday for memory and Thursday for computational storage.  SNIA is also pleased to sponsor a day on CXL architectures, memory, and storage talks on Wednesday. These sessions will all be in Ballroom G.

A new Sustainability Track on Thursday morning in Ballroom A led by the SNIA Green Storage Technical Work Group includes presentations on SSD power management, real-world applications and storage workloads, and a carbon footprint comparison of SSDs vs. HDDs, followed by a panel discussion. SSDs will also be featured in two SNIA-led presentation/panel pairs – SSDS-102-1 and 102-2 Ethernet SSDs on Tuesday afternoon in Ballroom B and SSDS-201-1 and 201-2 EDSFF E1 and E3 form factors on Wednesday morning in Ballroom D. SNIA Swordfish will be discussed in the DCTR-102-2 Enterprise Storage Part 2 session in Ballroom D on Tuesday morning.

And the newest SNIA technical work group – DNA Data Storage – will lead a new-to-2022 FMS track on Thursday morning in Great America Meeting Room 2, discussing topics like preservation of DNA for information storage, the looming need for molecular storage, and DNA sequencing at scale. Attendees can engage for questions and discussion in Part 2 of the track.

Additional ways to network with SNIA colleagues include the always-popular Chat with the Experts (beer and pizza) on Tuesday evening; sessions on cloud storage, artificial intelligence, and blockchain; and an FMS theater presentation on real-world storage workloads.

Full details on session times, locations, chairs and speakers for all these exciting FMS activities can be found at www.snia.org/fms and on the Flash Memory Summit website.  SNIA colleagues and friends can register for $100.00 off the full conference or single day packages using the code SNIA22 at www.flashmemorysummit.com.

What is eBPF, and Why Does it Matter for Computational Storage?

Recently, a question came up in the SNIA Computational Storage Special Interest Group on new developments in a technology called eBPF and how they might relate to computational storage. To learn more, SNIA on Storage sat down with Eli Tiomkin, SNIA CS SIG Chair with NGD Systems; Matias Bjørling of Western Digital; Jim Harris of Intel; Dave Landsman of Western Digital; and Oscar Pinto of Samsung.

SNIA On Storage (SOS):  The eBPF.io website defines eBPF, extended Berkeley Packet Filter, as a revolutionary technology that can run sandboxed programs in the Linux kernel without changing kernel source code or loading kernel modules.  Why is it important?

Dave Landsman (DL): eBPF emerged in Linux as a way to do network filtering, and enables the Linux kernel to be programmed.  Intelligence and features can be added to existing layers, and there is no need to add additional layers of complexity.

SNIA On Storage (SOS):  What are the elements of eBPF that would be key to computational storage? 

Jim Harris (JH):  The key to eBPF is that it is architecturally agnostic; that is, applications can download programs into a kernel without having to modify the kernel.  Computational storage allows a user to do the same types of things – develop programs on a host and have the controller execute them without having to change the firmware on the controller.

Using a hardware agnostic instruction set is preferred to having an application need to download x86 or ARM code based on what architecture is running.

DL:  It is much easier to establish a standard ecosystem with architecture independence.  Instead of an application needing to download x86 or ARM code based on the architecture, you can use a hardware agnostic instruction set where the kernel can interpret and then translate the instructions based on the processor. Computational storage would not need to know the processor running on an NVMe device with this “agnostic code”.

SOS: How has the use of eBPF evolved?

JH:  It is more efficient to run programs directly in the kernel I/O stack rather than have to return packet data to the user, operate on it there, and then send the data back to the kernel. In the Linux kernel, eBPF began as a way to capture and filter network packets.  Over time, eBPF use has evolved to additional use cases.

SOS:  What are some use case examples?

DL: One of the use cases is performance analysis. For example, eBPF can be used to measure things such as latency distributions for file system I/O, details of storage device I/O and TCP retransmits, and blocked stack traces and memory.

Matias Bjørling (MB): Other examples in the Linux kernel include tracing and gathering statistics.  However, while the eBPF programs in the kernel are fairly simple, and can be verified by the Linux kernel VM, computational programs are more complex, and longer running. Thus, there is a lot of work ongoing to explore how to efficiently apply eBPF to computational programs.

For example: what is the right set of run-time restrictions to be defined by the eBPF VM, what new instructions need to be defined, and how to make the program run as close as possible to the instruction set of the target hardware.

JH: One of the big use cases involves data analytics and filtering. A common data flow for data analytics involves large database table files that are often compressed and encrypted.  Without computational storage, you read the compressed and encrypted data blocks to the host, decompress and decrypt the blocks, and maybe do some filtering operations like a SQL query.  All of this, however, consumes a lot of extra host PCIe, host memory, and cache bandwidth, because you are reading the data blocks and doing all these operations on the host.  With computational storage, you can tell the SSD to read data and transfer it not to the host but to some memory buffers within the SSD.  The host can then tell the controller to run a fixed function, like decrypting the data and putting it in another local location on the SSD, and then run a user-supplied program, like eBPF, to do some filtering operations on that local decrypted data.  In the end you transfer only the filtered data to the host.  You are doing the compute closer to the storage, saving memory and bandwidth.
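The flow Jim describes can be sketched in C-style pseudocode. The csd_* calls below are hypothetical placeholders for illustration only; they are not commands from the NVMe or SNIA computational storage specifications.

    /* Illustrative pseudocode only; the csd_* functions are hypothetical. */
    csd_read(dev, lba, nblocks, local_buf);                    /* NAND -> SSD-internal buffer     */
    csd_run_fixed_function(dev, FN_DECRYPT,
                           local_buf, plain_buf, key_id);      /* decryption stays on the device  */
    csd_run_program(dev, "filter.o",
                    plain_buf, result_buf, &result_len);       /* user-supplied eBPF filter       */
    csd_dma_to_host(dev, result_buf, host_buf, result_len);    /* only filtered data crosses PCIe */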

SOS:  How does using eBPF for computational storage look the same?  How does it look different?

JH: There are two parts to this answer.  Part 1 is the eBPF instruction set, with its registers and how eBPF programs are assembled.  Where we are excited about computational storage and eBPF is that the instruction set is common, and there are already existing toolchains that support eBPF.  You can take a C program and compile it into an eBPF object file, which is huge.  If you add computational storage aspects to standards like NVMe, where developing unique toolchain support can take a lot of work, you can now leverage what is already there for the eBPF ecosystem.
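As a small, hedged example of that toolchain flow: a filter written in restricted C can be compiled to an eBPF object file with a stock LLVM/Clang toolchain, and the resulting instructions are independent of the host or device CPU architecture. The function and file names here are illustrative.

    /* filter.c -- illustrative only; build with, for example:
     *   clang -O2 -target bpf -c filter.c -o filter.o
     * filter.o then contains architecture-neutral eBPF instructions. */
    #include <stdint.h>

    /* Return 1 if a record's leading 32-bit field matches a key, else 0. */
    int match_record(const uint8_t *rec, uint32_t len)
    {
        const uint32_t key = 0x2024;
        uint32_t field;

        if (len < sizeof(field))
            return 0;
        __builtin_memcpy(&field, rec, sizeof(field));   /* no libc available to eBPF code */
        return field == key;
    }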

Part 2 of the answer centers around the Linux kernel’s restrictions on what an eBPF program is allowed to do when downloaded. For example, the eBPF instruction set allows for unbounded loops, and toolchains such as gcc will generate eBPF object code with unbounded loops, but the Linux kernel will not permit those to execute – and rejects the program. These restrictions are manageable when doing packet processing in the kernel.  The kernel knows a packet’s specific data structure and can verify that data is not being accessed outside the packet.  With computational storage, you may want to run an eBPF program that operates on a set of data that has a very complex data structure – perhaps arrays not bounded or multiple levels of indirection.  Applying Linux kernel verification rules to computational storage would limit or even prevent processing this type of data.
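A small, hedged illustration of that restriction (exact behavior varies by kernel version, and newer verifiers do accept loops they can prove bounded):

    /* The first scan's bound depends on the data, so the kernel verifier
     * cannot prove it terminates and will reject the program; the second
     * has a compile-time bound the verifier can check. Illustrative only. */
    #include <stdint.h>

    #define MAX_SCAN 256

    int scan_unbounded(const uint8_t *buf)
    {
        int i = 0;
        while (buf[i] != 0)                 /* data-dependent bound: rejected */
            i++;
        return i;
    }

    int scan_bounded(const uint8_t *buf, uint32_t len)
    {
        for (uint32_t i = 0; i < MAX_SCAN && i < len; i++)   /* provable bound */
            if (buf[i] == 0)
                return (int)i;
        return -1;
    }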

SOS: What are some of the other challenges you are working through with using eBPF for computational storage?

MB:  We know that x86 works fast with high memory bandwidth, while other cores are slower.  We have some general compute challenges in that eBPF needs to be able to hook into today’s hardware like we do for SSDs.  What kinds of operations make sense to offload for these workloads?  How do we define a common implementation API for all of them and build an ecosystem on top of it?  Do we need an instruction-based compiler, or a library to compile up to – and if you have it on the NVMe drive side, could you use it?  eBPF in itself is great, but getting a whole ecosystem together and getting all of us to agree on what adds value will be the challenge in the long term.

Oscar Pinto (OP): The Linux kernel support for eBPF today is geared more towards networking and is light on storage. That may be a challenge in building a computational storage framework. We need to think through how to enhance this, given that we download and execute eBPF programs in the device. As Matias indicated, x86 is great at what it does in the host today. But if we have to work with smaller CPUs in the device, they may need help from, say, dedicated hardware or similar additional logic to aid the eBPF programs. One question is how these programs would talk to that hardware.  We don’t have a setup for storage like this today, and there are a variety of storage services that can benefit from eBPF.

SOS: Is SNIA addressing this challenge?

OP: On the SNIA side we are building on program functions that are downloaded to computational storage engines.  These functions run on the engines, which are CPUs or some other form of compute tied to an FPGA, DPU, or dedicated hardware. We are defining these abstracted functionalities in SNIA today, and the SNIA Computational Storage Technical Work Group is developing a Computational Storage Architecture and Programming Model and Computational Storage APIs to address it.  The latest versions, v0.8 and v0.5, have been approved by the SNIA Technical Council and are now available for public review and comment at the SNIA Feedback Portal.

SOS: Is there an eBPF standard? Is it aligned with storage?

JH:  We have a challenge around what an eBPF standard should look like.  Today it is defined in the Linux kernel.  But if you want to incorporate eBPF in a storage standard, you need to have something specified for that storage standard.  We know the Linux kernel will continue to evolve, adding and modifying instructions. But if you have an NVMe SSD or other storage device, you have to have something set in stone – the version of eBPF that the standard supports.  We need to know what the eBPF standard will look like and where it will live.  Will standards organizations need to define something separately?

SOS:  What would you like an eBPF standard to look like from a storage perspective?

JH: We’d like an eBPF standard that can be used by everyone.  We are looking at how computational storage can be implemented in a way that is safe and secure but is also able to solve use cases that are different.

MB:  Security will be a key part of an eBPF standard.  Programs should not access data they should not have access to.  This will need to be solved within a storage device. There are some synergies with external key management. 

DL: The storage community has to figure out how to work with eBPF and make this standard something that a storage environment can take advantage of and rely on.

SOS: Where do you see the future of eBPF?

MB:  The vision is that you can build eBPF programs and they work everywhere.  When we build new database systems and integrate eBPF programs into them, we then have embedded kernels that can be sent to any NVMe device over the wire and be executed.  The cool part is that it can be anywhere on the path, so there become a lot of interesting ways to build new architectures on top of this. And together with the open-systems ecosystem we can create a body of accelerators with which we can fast-track the building of these ecosystems.  eBPF can put this into overdrive with use cases outside the kernel.

DL:  There may be some other environments where computational storage is being evaluated, such as WebAssembly.

JH: An eBPF runtime is much easier to put into an SSD than a WebAssembly runtime.

MB: eBPF makes more sense – it is simpler to start and build upon as it is not set in stone for one particular use case.

Eli Tiomkin (ET):  Different SSDs have different levels of constraints.  Every computational storage SSD in production, and even those in development, has unique capabilities that depend on the workload and application.

SOS:  Any final thoughts?

MB: At this point, technologies are coming together which are going to change the industry in a way that we can redesign the storage systems both with computational storage and how we manage security in NVMe devices for these programs.  We have the perfect storm pulling things together. Exciting platforms can be built using open standards specifications not previously available.

SOS:  Looking forward to this exciting future. Thanks to you all.

Q&A on Data Movement and Computational Storage

Recently, the SNIA Compute, Memory, and Storage Initiative hosted a live webcast “Data Movement and Computational Storage”, moderated by Jim Fister of The Decision Place with Nidish Kamath of KIOXIA, David McIntyre of Samsung, and Eli Tiomkin of NGD Systems as panelists.  We had a great discussion on new ways to look at storage, flexible computer systems, and how to put on your security hat.

During our conversation, we answered audience questions, and raised a few of our own!  Check out some of the back-and-forth, and tune in to the entire video for customer use cases and thoughts for the future.

Q:  What is the value of computational storage?

A:  With computational storage, you have latency sensitivity – you can make decisions faster at the edge and can also distribute computing to process decisions anywhere.

Q:  Why is it important to consider “data movement” with regard to computational storage?

A:  There is a reduction in data movement that computational storage brings to the system, along with higher efficiencies while moving that data and a reduction in power which users may not have yet considered.   

Q: How does power use change when computational storage is brought in?

A:  You want to “move” compute to that point in the system where operations can be accomplished where the data is “at rest”. In traditional systems, if you need to move data from storage to the host, there are power costs that may not even be currently measured.  However, if you can now run applications and not move data, you will realize that power reduction, which is more and more important with the anticipation of massive quantities of data coming in the future.

Q: Are the traditional processing/storage transistor counts the same with computational storage?

A:  With computational storage, you can put the programming where it is needed – moving the compute to that point in the system where it can achieve the work with limited amount of overhead and networking bandwidth. Compute moves to where the data sits at rest, which is especially important with the explosion of data sets.

Q:  Does computational storage play a role in data security and privacy?

A: Security threats don’t always happen at the same time, so you need to consider a top-down holistic perspective. It will be important both today and in the future to consider new security threats because of data movement.

There is always a security risk when data is moving; however, computational storage reduces data movement significantly, and it can serve as a more secure way to treat data because the data is not moving as much. Computational storage allows you to lock the data, for example medical data, and only process it when needed, and if needed, in an authenticated and secure fashion.  There’s no requirement to build a whole system around this.

Q:  What are the computational storage opportunities at the edge? 

A:  We need to understand the ecosystem the computational storage device is going into. Computational storage sits at the front line of edge applications and management of edge infrastructure pieces in the cloud.  It’s a great time to embrace existing cloud policies and collaborate with customers on how policies will migrate and change to the edge.

Q: In your discussions with customers, how dynamic do they expect the sets of code running on computational storage to be? With the extremes being code never changing (installed once/updated rarely) to being different for every query or operation. Please discuss how challenges differ for these approaches.

A:  The heavy lift comes into play with the application and the system integration.  To run flexible code, customers want a simple, straightforward, and seamless programming model that enables them to run as many applications as they need and change them in an easy way without disrupting the system.  Clients are using computational storage to speed up the processing of their data with dynamic reconfiguring in cutting edge applications.  We are putting a lot of effort toward this seamless and transparent model with our work in the SNIA Computational Storage Technical Work Group.

Q:  What does computational storage mean for data in the future?

A: The infrastructure of data and data movement will drastically change in the future as edge emerges and cloud continues to grow. Using computational storage will be extremely beneficial in the new infrastructure, and we will need to work together as an ecosystem and under SNIA to make sure we are all aligned to provide the right solutions to the customer.  

Dive – or Dip – into SNIA Persistent Memory + Computational Storage Summit Content

SNIA’s 9th annual Summit was a success with a new name and an expanded focus – Persistent Memory + Computational Storage – from the data center to the edge.  

The Summit moved to a two-day virtual platform and drew twice as many attendees as the previous year. We experimented with 20-minute sessions to great success.  Attendees saw leading technology experts discussing real world applications and use cases, providing insights on technology trends and futures, and networking  in “live via the internet” panels and Birds-of-a-Feather sessions.

The recap of our 2021 event – agenda, abstracts, speaker bios, and links to videos and presentations – is summarized on the PM+CS Summit home page.

But we know your time is precious – so here are a few ways to sample a lot of great content presented over two full days.

  1. Read our colleague Tom Coughlin’s Forbes blog on the event.
  2. Not only did Tom and Jim Handy present on memory futures at the event, but they also provided the fastest sub-7-minute recaps of both Wednesday’s and Thursday’s sessions with their lively commentary.
  3. New to persistent memory and/or computational storage technologies?  Check out our tutorials featuring Persistent Memory and Computational Storage Special Interest Group leaders giving you what you need to know.
  4. Love the back and forth?  You’ll enjoy the recordings of our live panel sessions where colleagues debate (and sometimes agree) on the topics of today.
  5. Is Persistent Memory your focus?  We’ve sorted the Persistent Memory Summit content for you in our SNIA Educational Library.
  6. Is Computational Storage more your style?  Here is the list of all the Computational Storage content during the Summit to watch via our SNIA Educational Library.
  7. Want to get hands-on?  We have extended the opportunity to experience the Persistent Memory Workshop and Hackathon with access to new cloud-based PM systems for more learning opportunities.

We extend a thank you and shout-out to our SNIA Compute, Memory, and Storage Initiative members and colleagues who presented in sessions and participated in panels. They represent these leading companies in the industry.

AMD, Arm, Coughlin Associates, Dell, Eideticom, Facebook, Futurewei Technologies, G2M Communications, Hewlett Packard Enterprise, Intuitive Cognition Consulting, Intel, Lenovo,  Los Alamos National Laboratory, MemVerge, Micron, Microsoft, MKW Ventures Consulting, NGD Systems, NVIDIA, Objective Analysis, Samsung, ScaleFlux, Silinnov Consulting, and SMART Modular Technologies.

We thank our Summit sponsors: Eideticom, MemVerge, Futurewei Technologies, SMART Modular Technologies, and NGD Systems; and the SNIA Compute Memory and Storage Initiative members who underwrote the event.

Finally, we thank you for your interest in SNIA Compute, Memory, and Storage Initiative outreach and education.  We look forward to seeing you at upcoming SNIA events, including our Storage Developer Conferences in EMEA, India, and the U.S.  Find out more details on SDC.

Continuing to Refine and Define Computational Storage

The SNIA Computational Storage Technical Work Group (TWG) has been hard at work on the SNIA Technical Document Computational Storage Architecture and Programming Model.  SNIAOnStorage recently sat down via Zoom with the document editor Bill Martin of Samsung and TWG Co-Chairs Jason Molgaard of Arm and Scott Shadley of NGD Systems to understand the work included in the Model and why definitions of computational storage are so important.

SNIAOnStorage (SOS): Shall we start with the fundamentals?  Just what is the Computational Storage Architecture and Programming Model?

Scott Shadley (SS):  The SNIA Computational Storage Architecture and Programming Model (Model) introduces the framework of how to use a new tool to architect your infrastructure by deploying compute resources in the storage layer.

Bill Martin (BM): The Model enables architecture and programming of computational storage devices. These kinds of devices include those with storage physically attached, and also those with storage not physically attached but considered computational because the devices are associated with storage.

SOS: How did the TWG approach creating the Model and what does it cover?

SS:  SNIA is known for bringing standardization to customized operations; and with the Model, users now have a common way to identify the different solutions offered in computational storage devices and a standard way to discover and interact with these devices. Like the way NVMe brought common interaction to the wild west of PCIe, the SNIA Model ensures the many computational storage products already on the market can align to interact in a common way, minimizing the need for unique programming to use solutions most effectively.  

Jason Molgaard (JM):  The Model covers both the hardware architecture and software application programming interface (API) for developing and interacting with computational storage.

BM:  The architecture sections of the Model cover the components that make up computational storage and the API provides a programming interface to those components.

SOS:  I know the TWG members have had many discussions to develop standard terms for computational storage.  Can you share some of these definitions and why it was important to come to consensus?

BM:  The Model defines Computational Storage Devices (CSxs), which comprise Computational Storage Processors (CSPs), Computational Storage Drives (CSDs), and Computational Storage Arrays (CSAs).

Each Computational Storage Device contains a Computational Storage Engine (CSE) and some form of host accessible memory for that engine to utilize. 

The Computational Storage Processor is a device that has a Computational Storage Engine but does not contain storage. The Computational Storage Drive contains a Computational Storage Engine and storage.  And the Computational Storage Array contains an array with an array processor and a Computational Storage Engine.

Finally, the Computational Storage Engine executes Computational Storage Functions (CSFs) which are the entities that define the particular computation.  

All of the computational storage terms can be found online in the SNIA Dictionary. 

SS: An architecture and programing model is necessary to allow vendor-neutral, interoperable implementations of this industry architecture, and clear, accurate definitions help to define how the computational storage hierarchy works.  The TWG spent many hours to define these standard nomenclatures to be used by providers of computational storage products. 

JM: It has been a work in process over the last 18 months, and the perspectives of all the different TWG member companies have brought more clarity to the terms and refined them to better meet the needs of the ecosystem.

BM: One example has been the change of what was called computational storage services to the more accurate and descriptive Computational Storage Functions.   The Model defines a list of potential functions such as compression/decompression, encoding/decoding, and database search.  These and many more are described in the document.
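As an editorial illustration of how those pieces fit together (the call names below are hypothetical and are not taken from the SNIA API document), a host application might discover a CSE on a Computational Storage Drive, select a decompression CSF, and run it against data already resident on the drive:

    /* Hypothetical sketch only -- the cs_* names are illustrative placeholders. */
    cse = cs_find_engine(csd);                          /* locate the CSE on a CSD           */
    csf = cs_get_function(cse, CS_FUNC_DECOMPRESS);     /* pick a CSF defined by the Model   */
    cs_exec(csf, input_extent, device_local_buf);       /* compute near the data, on the CSE */
    cs_copy_to_host(device_local_buf, host_buf, len);   /* move only the results to the host */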

SOS: Is SNIA working with the industry on computational storage standards?

BM:    SNIA has an alliance with the NVM Express® organization and they are working on computational storage. As other organizations (e.g., CXL Consortium) develop computational storage for their interface, SNIA will pursue alliances with those organizations.  You can find details on SNIA alliances here.

SS:  SNIA is also monitoring other Technical Work Group activity inside SNIA such as the Smart Data Accelerator Interface (SDXI) TWG working on the memory layer and efforts around Security, which is a key topic being investigated now.

SOS:  Is a new release of the Computational Storage Architecture and Programming Model pending?

BM:  Stay tuned: the next release of the Model – v0.6 – is coming very soon.  It will contain updates and an expansion of the architecture.

JM: We have also been working on an API document, which will be released at the same time as the v0.6 release of the Model.

SOS:  Who will write the software based on the Computational Storage Architecture and Programming Model?

JM:  Computational Storage TWG members will develop open-source software aligned with the API, and application programmers will use those libraries.

SOS: How can the public find out about the next release of the Model?

SS: We will announce it via our SNIA Matters newsletter. Version 0.6 of the Model as well as the API will be up for public review and comment at this link.  And we encourage companies interested in the future of computational storage to join SNIA and contribute to the further development of the Model and the API.  You can reach us with your questions and comments at askcmsi@snia.org.

SOS:  Where can our readers learn more about computational storage?

SS:  Eli Tiomkin, Chair of the SNIA Computational Storage Special Interest Group (CS SIG), Jason, and I sat down to discuss the future of computational storage in January 2021.  The CS SIG also has a series of videos that provide a great way to get up to speed on computational storage.  You can find them and “Geek Out on Computational Storage” here.

SOS:  Thanks for the update, and we’ll look forward to a future SNIA webcast on your computational storage work.

Cutting Edge Persistent Memory Education – Hear from the Experts!

Most of the US is currently experiencing an epic winter.  So much for 2021 being less interesting than 2020.  Meanwhile, large portions of the world are also still locked down waiting for vaccine production.  So much for 2020 ending in 2020.  What, oh what, can possibly take our minds off the boredom?

Here’s an idea – what about some education in persistent memory programming?  SNIA and UCSD recently hosted an online conference on Persistent Programming In Real Life (PIRL), and the videos of all the sessions are now available online.  There are nearly 20 hours of content, including panel discussions and academic and industry presentations.  Recordings and PDFs of the presentations have been posted on the PIRL site as well as in the SNIA Educational Library.

In addition, SNIA is now in planning for our April 21-22, 2021 virtual Persistent Memory and Computational Storage Summit, where we’ll be featuring the latest content from the data center to the edge. Complimentary registration is now open. If you’re interested in helping us plan, or proposing content, you can contact us to provide input.

Spring will be here soon, with some freedom from cold, lockdown, and boredom.  We hope to see you virtually at the summit, full of knowledge from your perusal of SNIA education content.