Open Standards Featured at FMS 2023

SNIA welcomes colleagues to join us at the upcoming Flash Memory Summit, August 8-10, 2023, in Santa Clara, CA.

SNIA is pleased to join standards organizations CXL Consortium™ (CXL™), PCI-SIG®, and Universal Chiplet Interconnect Express™ (UCIe™) in an Open Standards Pavilion, Booth #725, in the Exhibit Hall. CMSI will feature SNIA member companies in a cross-industry computational storage demo by Intel, MINIO, and Solidigm; a data filtering demo by ScaleFlux; a software memory tiering demo by VMware; a persistent memory workshop and hackathon; and the latest E1 and E3 SSD form factor work by the SNIA SFF TA Technical Work Group. SMI will showcase SNIA Swordfish® management of NVMe SSDs on Linux with demos by Intel, Samsung, and Solidigm.

CXL will discuss its advances in coherent connectivity. PCI-SIG will feature the PCIe 5.0 (32GT/s) and PCIe 6.0 (64GT/s) architectures and their industry adoption, as well as development of the upcoming PCIe 7.0 specification (128GT/s). UCIe will discuss its new open industry standard establishing a universal interconnect at the package level.

SNIA STA Forum will also be in Booth #849 – learn more about the SCSI Trade Association joining SNIA.

These demonstrations and discussions will augment FMS program sessions in the SNIA-sponsored System Architecture Track on memory, computational storage, CXL, and UCIe standards.  A SNIA mainstage session on Wednesday August 9 at 2:10 pm will discuss Trends in Storage and Data: New Directions for Industry Standards.

SNIA colleagues and friends can receive a $100 discount off the 1-, 2-, or 3-day full conference registration by using code SNIA23.

Visit snia.org/fms to learn more about the exciting activities at FMS 2023 and join us there!

So just what is an SSD?

It seems like an easy enough question: “What is an SSD?” Surprisingly, though, most search results on the topic quickly become muddled about media, controllers, form factors, storage interfaces, performance, reliability, and the different market segments.

The SNIA SSD SIG has spent time demystifying various SSD topics like endurance, form factors, and the different classifications of SSDs – from consumer to enterprise and hyperscale SSDs.

“Solid state drive is a general term that covers many market segments, and the SNIA SSD SIG has developed a new overview of ‘What is an SSD?’,” said Jonmichael Hands, SNIA SSD Special Interest Group (SIG) Co-Chair. “We are committed to helping make storage technology topics, like endurance and form factors, much easier to understand, coming straight from the industry experts defining the specifications.”

The “What is an SSD?” page offers a concise description of what SSDs do, how they perform, and how they connect, and it provides a jumping-off point for more in-depth exploration of the many aspects of SSDs. It joins an ever-growing category of 20 one-page “What Is?” answers that provide clear, concise, vendor-neutral definitions of often-asked technology terms, a description of what they are, and how each of these technologies works. Check out all the “What Is?” entries at https://www.snia.org/education/what-is

And don’t miss other topics of interest from the SNIA SSD SIG, including the Total Cost of Ownership Model for Storage and SSD videos and presentations in the SNIA Educational Library.

Your comments and feedback on this page are welcome. Send them to askcmsi@snia.org.

Your Questions Answered on Persistent Memory, CXL, and Memory Tiering

With the persistent memory ecosystem continuing to evolve with new interconnects like CXL™ and applications like memory tiering, our recent webinar, Persistent Memory, CXL, and Memory Tiering – Past, Present, and Future, was a big success. If you missed it, watch it on demand HERE!

Many questions were answered live during the webinar, but we did not get to all of them.  Our moderator Jim Handy from Objective Analysis, and experts Andy Rudoff and Bhushan Chithur from Intel, David McIntyre from Samsung, and Sudhir Balasubramanian and Arvind Jagannath from VMware have taken the time to answer them in this blog. Happy reading!

Q: What features or support are required from a CXL-capable endpoint, e.g. an accelerator, to support memory pooling? Any references?

A: You will have two interfaces, one for the primary memory accesses and one for the management of the pooling device. The primary memory interface is CXL.mem, and the management interface will be via CXL.io or a sideband interface. In addition, you will need to implement a robust failure recovery mechanism, since the blast radius is much larger with memory pooling.

Q: How do you recognize weak information security (in CXL)?

A: CXL has multiple features around security and there is considerable activity around this in the Consortium.  For specifics, please see the CXL Specification or send us a more specific question.

Q: If the system (e.g. an x86 host) wants to deploy CXL memory (Type 3) now, is there any OS kernel or BIOS configuration needed to make the hardware run with VMware (ESXi)? How easy or difficult is this setup process?

A: A simple CXL Type 3 Memory Device providing volatile memory is typically configured by the pre-boot environment and reported to the OS along with any other main memory. In this way, a platform that supports CXL Type 3 Memory can use it without any additional setup and can run an OS that contains no CXL support; the memory will appear as memory belonging to another NUMA node. That said, using an OS that does support CXL enables more advanced management, error handling, and support for more complex CXL devices.
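As a quick illustration of how such memory surfaces to software, here is a minimal Python sketch – an assumption-laden example for a bare-metal Linux host with standard sysfs paths, not an ESXi-specific tool – that lists NUMA nodes and flags memory-only, CPU-less nodes, which is how CXL Type 3 volatile memory configured by the pre-boot environment commonly appears:

```python
# Minimal sketch (assumption: Linux host, standard sysfs layout) that lists
# NUMA nodes and flags memory-only (CPU-less) nodes. CXL Type 3 volatile
# memory configured by the pre-boot environment commonly shows up this way.
from pathlib import Path

def numa_nodes():
    """Yield (node_id, has_cpus, mem_total_kb) for each NUMA node."""
    for node in sorted(Path("/sys/devices/system/node").glob("node[0-9]*")):
        node_id = int(node.name[len("node"):])
        has_cpus = bool((node / "cpulist").read_text().strip())
        meminfo = (node / "meminfo").read_text().splitlines()
        mem_kb = int(next(line.split()[3] for line in meminfo if "MemTotal" in line))
        yield node_id, has_cpus, mem_kb

if __name__ == "__main__":
    for node_id, has_cpus, mem_kb in numa_nodes():
        kind = "CPU-attached" if has_cpus else "memory-only (possibly CXL)"
        print(f"node{node_id}: {kind}, {mem_kb // (1024 * 1024)} GiB")
```

On a host with CXL-attached memory, the output would typically show at least one memory-only node alongside the CPU-attached ones.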

Q: There was a question on “Hop” length. Would you clarify?

A: In the webinar around minute 48, it was stated that a Hop was 20ns, but this is not correct. A Hop is often spoken of as “Around 100ns.”  The Microsoft Azure Pond paper quantifies it four different ways, which range from 85ns to 280ns.

Q: Do we have any idea how much longer the latency will be?  

A: The language CXL folks use is “Hops.” An address going into CXL is one Hop, and data coming back is another. In a fabric it would be twice that, or four Hops. The latency for a Hop is somewhere around 100ns, although other figures are also quoted.
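For a rough sense of scale, the Hop arithmetic above can be written out as a tiny Python calculation (the 100ns figure is the nominal value mentioned here; measured values vary):

```python
# Back-of-the-envelope Hop arithmetic from the answer above.
# One Hop is nominally ~100 ns; the Pond paper's measurements span 85-280 ns.
HOP_NS = 100

direct_access_ns = 2 * HOP_NS   # address out (1 Hop) + data back (1 Hop)
fabric_access_ns = 4 * HOP_NS   # the same round trip crossing a CXL fabric

print(f"Direct CXL access adds roughly {direct_access_ns} ns")
print(f"Fabric CXL access adds roughly {fabric_access_ns} ns")
```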

Q: For memory semantic SSD:  There appears to be a trend among 2LM device vendors to presume the host system will be capable of providing telemetry data for a device-side tiering mechanism to decide what data should be promoted and demoted.  Meanwhile, software vendors seem to be focused on the devices providing telemetry for a host-side tiering mechanism to tell the device where to move the memory.  What is your opinion on how and where tiering should be enforced for 2LM devices like a memory semantic SSD?

A: Tiering can be managed both by the host and within computational storage drives that could have an integrated compute function to manage local tiering – think edge applications.

Q: Regarding VM performance in tiering: it appears you’re comparing the performance of 2 VMs against 1. It looked like the performance of each individual VM on the tiering system was slower than the DRAM-only VM. Can you explain why we should compare the performance of 2 VMs against the 1 VM? Is the proposal that we otherwise would have required those 2 VMs to run on separate NUMA nodes, and now they’re running on the same NUMA node?

A: The use case here was lower TCO and increased memory capacity, along with aggregate VM performance, versus running fewer VMs on DRAM alone. In this use case, the DRAM per NUMA node was 384GB and the Tier2 memory per NUMA node was 768GB. The VM RAM was 256GB.

In the DRAM-only case, if we have to run business-critical workloads (e.g., Oracle with VM RAM = 256GB), we could only run 1 VM (256GB) per NUMA node (DRAM = 384GB); we cannot over-provision memory in the DRAM-only case, as every NUMA node has only 384GB. So potentially we could run 4 such VMs (VM RAM = 256GB) with NUMA node affinity set, as we did in this use case, or, if we don’t set NUMA node affinity, maybe 5 such VMs without completely maxing out the server RAM. Remember, we used NUMA node affinity in this use case to eliminate any cross-NUMA latency.

Now with Tier2 memory in the mix, each NUMA node has 384GB DRAM and 768GB Tier2 memory, so theoretically one could run 16-17 such VMs (VM RAM = 256GB). Hence we are able to increase resource utilization, run more workloads, and increase transactions, resulting in lower TCO, increased capacity, and aggregate performance improvement.
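To make that capacity math concrete, here is a small Python sketch using the figures from this answer; the four-node count is an assumption inferred from the “4 such VMs” figure above:

```python
# Worked capacity arithmetic using the figures in this answer.
# Assumption: a host with 4 NUMA nodes, inferred from the "4 such VMs" figure.
DRAM_PER_NODE_GB = 384
TIER2_PER_NODE_GB = 768   # CXL-backed Tier2 memory
VM_RAM_GB = 256
NUM_NODES = 4

vms_per_node_dram = DRAM_PER_NODE_GB // VM_RAM_GB                          # 1 VM per node
vms_per_node_tiered = (DRAM_PER_NODE_GB + TIER2_PER_NODE_GB) // VM_RAM_GB  # 4 VMs per node

print(f"DRAM only, NUMA affinity set: {NUM_NODES * vms_per_node_dram} VMs")
print(f"DRAM + Tier2 memory:          {NUM_NODES * vms_per_node_tiered} VMs")
```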

Q: CXL is changing very fast – we have seen 3 protocol versions in 2 years. As a new consumer of CXL, what are the top 3 advantages of adopting CXL right away versus waiting a couple more years?

A: All versions of CXL are backward compatible.  Users should have no problem using today’s CXL devices with newer versions of CXL, although they won’t be able to take advantage of any new features that are introduced after the hardware is deployed.

Q: (What is the) ideal when using Agilex FPGAs as accelerators?

A: CXL 3.0 supports multiple accelerators via the CXL switching fabric. This is good for memory sharing across heterogeneous compute accelerators, including FPGAs.

Thanks again for your support of SNIA education, and we invite you to write askcmsi@snia.org for your ideas for future webinars and blogs!

It’s A Wrap – But Networking and Education Continue From Our C+M+S Summit!

Our 2023 SNIA Compute+Memory+Storage Summit was a success! The event featured 50 speakers in 40 sessions over two days. Over 25 SNIA member companies and alliance partners participated in creating content on computational storage, CXL™ memory, storage, security, and UCIe™. All presentations and videos are free to view at www.snia.org/cms-summit.

“For 2023, the Summit scope expanded to examine how the latest advances within and across compute, memory and storage technologies should be optimized and configured to meet the requirements of end customer applications and the developers that create them,” said David McIntyre, Co-Chair of the Summit.  “We invited our SNIA Alliance Partners Compute Express Link™ and Universal Chiplet Interconnect Express™ to contribute to a holistic view of application requirements and the infrastructure resources that are required to support them,” McIntyre continued.  “Their panel on the CXL device ecosystem and usage models and presentation on UCIe innovations at the package level along with three other sessions on CXL added great value to the event.”

Thirteen computational storage presentations covered what is happening in NVMe™ and SNIA to support computational storage devices and define new interfaces with computational storage APIs that work across different hardware architectures.  New applications for high performance data analytics, discussions of how to integrate computational storage into high performance computing designs, and new approaches to integrate compute, data and I/O acceleration closely with storage systems and data nodes were only a few of the topics covered.

“The rules by which the memory game is played are changing rapidly, and we received great feedback on our nine presentations in this area,” said Willie Nelson, Co-Chair of the Summit.  “SNIA colleagues Jim Handy and Tom Coughlin always bring surprising conclusions and opportunities for SNIA members to keep abreast of new memory technologies, and their outlook was complemented by updates on SNIA standards on memory-to-memory data movement and on JEDEC memory standards; presentations on thinking memory, fabric-attached memory, and optimizing memory systems using simulations; a panel examining where the industry is going with persistent memory; and much more.”

Additional highlights included an EDSFF panel covering the latest SNIA specifications that support these form factors, sharing an overview of platforms that are EDSFF-enabled, and discussing the future for new product and application introductions; a discussion on NVMe as a cloud interface; and a computational storage detecting ransomware session.

New to the 2023 Summit – and continuing to get great views – was a “mini track” on Security, led by Eric Hibbard, chair of the SNIA Storage Security Technical Work Group with contributions from IEEE Security Work Group members, including presentations on cybersecurity, fine grain encryption, storage sanitization, and zero trust architecture.

Co-Chairs McIntyre and Nelson encourage everyone to check out the video playlist and send your feedback to askcmsi@snia.org. The “Year of the Summit” continues with networking opportunities at the upcoming SmartNIC Summit (June), Flash Memory Summit (August), and SNIA Storage Developer Conference (September).  Details on all these events and more are at the SNIA Event Calendar page.  See you soon!

50 Speakers Featured at the 2023 SNIA Compute+Memory+Storage Summit

SNIA’s Compute+Memory+Storage Summit is where architectures, solutions, and community come together. Our 2023 Summit – taking place virtually on April 11-12, 2023 – is the best example to date, featuring a stellar lineup of 50 speakers in 40 sessions covering topics including computational storage real-world applications, the future of memory, critical storage security issues, and the latest on SSD form factors, CXL™, and UCIe™.

“We’re excited to welcome executives, architects, developers, implementers, and users to our 11th annual Summit,” said David McIntyre, C+M+S Summit Co-Chair, and member of the SNIA Board of Directors.  “We’ve gathered the technology leaders to bring us the latest developments in compute, memory, storage, and security in our free online event.  We hope you will watch live to ask questions of our experts as they present, and check out those sessions you miss on-demand.”

Memory sessions begin with Watch Out – Memory’s Changing! where Jim Handy and Tom Coughlin will discuss the memory technologies vying for the designer’s attention, with CXL™ and UCIe™ poised to completely change the rules. Speakers will also cover thinking memory, optimizing memory using simulations, providing capacity and TCO to applications using software memory tiering, and fabric attached memory.

Compute sessions include Steven Yuan of StorageX discussing the Efficiency of Data Centric Computing, and presentations on the computational storage and compute market, big-disk computational storage arrays for data analytics, NVMe as a cloud interface, improving storage systems for simulation science with computational storage, and updates on SNIA and NVM Express work on computational storage standards.

CXL and UCIe will be featured with presentations on CXL 3.0 and Universal Chiplet Interconnect Express™ On-Package Innovation Slot for Compute, Memory, and Storage Applications.

The Summit will also dive into security with an introductory view of today’s storage security landscape and additional sessions on zero trust architecture, storage sanitization, encryption, and cyber recovery and resilience.

For 2023, the Summit is delighted to present three panels – one on Exploring the Compute Express Link™ (CXL™) Device Ecosystem and Usage Models moderated by Kurtis Bowman of the CXL Consortium, one on Persistent Memory Trends moderated by Dave Eggleston of Microchip, and one on Form Factor Updates, moderated by Cameron Brett of the SNIA SSD Special Interest Group.

We will also feature the popular SNIA Birds-of-a-Feather sessions. On Tuesday, April 11 at 4:00 pm PDT/7:00 pm EDT, you can join us to discuss the latest compute, memory, and storage developments, and on Wednesday, April 12 at 3:00 pm PDT/6:00 pm EDT, we’ll be talking about memory advances.

Learn more in our Summit preview video, check out the agenda, and register for free to access our Summit platform!

“Year of the Summit” Kicks Off with Live and Virtual Events

For 11 years, SNIA Compute, Memory and Storage Initiative (CMSI) has presented a Summit featuring industry leaders speaking on the key topics of the day.  In the early years, it was persistent memory-focused, educating audiences on the benefits and uses of persistent memory.  In 2020 it expanded to a Persistent Memory+Computational Storage Summit, examining that new technology, its architecture, and use cases.

Now in 2023, the Summit is expanding again to focus on compute, memory, and storage.  In fact, we’re calling 2023 the Year of the Summit – a year to get back to meeting in person and offering a variety of ways to listen to leaders, learn about technology, and network to discuss innovations, challenges, solutions, and futures.

We’re delighted that our first event of the Year of the Summit is a networking event at MemCon, taking place March 28-29 at the Computer History Museum in Mountain View CA.

At MemCon, SNIA CMSI member and IEEE President-Elect Tom Coughlin of Coughlin Associates will moderate a panel discussion on Compute, Memory, and Storage Technology Trends for the Application Developer.  Panel members Debendra Das Sharma of Intel and the CXL™ Consortium, David McIntyre of Samsung and the SNIA Board of Directors, Arthur Sainio of SMART Modular and the SNIA Persistent Memory Special Interest Group, and Arvind Jagannath of VMware and SNIA CMSI will examine how applications and solutions available today offer ways to address enterprise and cloud provider challenges – and they’ll provide a look to the future.

SNIA leaders will be on hand to discuss work in computational storage, smart data acceleration interface (SDXI), SSD form factor advances, and persistent memory trends.  Share a libation or two at the SNIA hosted networking reception on Tuesday evening, March 28.

This inaugural MemCon event is perfect to start the conversation, as it focuses on the intersection between systems design, memory innovation (emerging memories, storage & CXL) and other enabling technologies. SNIA colleagues and friends can register for MemCon with a 15% discount using code SNIA15.

April 2023 Networking!

We will continue the Year with a newly expanded SNIA Compute+Memory+Storage Summit coming up April 11-12 as a virtual event.  Complimentary registration is now open for a stellar lineup of speakers, including Stephen Bates of Huawei, Debendra Das Sharma of  Universal Chiplet Interconnect Express™, Jim Handy of Objective Analysis, Shyam Iyer of Dell, Bill Martin of Samsung, Jake Oshins of Microsoft, Andy Rudoff of Intel, Andy Walls of IBM, and Steven Yuan of StorageX.

Summit topics include Memory’s Headed for Change, High Performance Data Analytics, CXL 3.0, Detecting Ransomware, Meeting Scaling Challenges, Open Standards for Innovation at the Package Level, and Standardizing Memory to Memory Data Movement. Great panel discussions are on tap as well.  Kurt Lender of the CXL Consortium will lead a discussion on Exploring the CXL Device Ecosystem and Usage Models, Dave Eggleston of Microchip will lead a panel with Samsung and SMART Modular on Persistent Memory Trends, and Cameron Brett of KIOXIA will lead an SSD Form Factors Update.  More details at www.snia.org/cms-summit.

Later in 2023…

Opportunities for networking will continue throughout 2023. We look forward to seeing you at the SmartNIC Summit (June 13-15), Flash Memory Summit (August 8-10), SNIA Storage Developer Conference (September 18-21), OCP Global Summit (October 17-19), and SC23 (November 12-17). Details on SNIA participation coming soon!

Is EDSFF Taking Center Stage? We Answer Your Questions!

Enterprise and Data Center Form Factor (EDSFF) technologies have come a long way since our 2020 SNIA CMSI webinar on the topic.  While that webinar still provides an outstanding framework for understanding – and SNIA’s popular SSD Form Factors page gives the latest on the E1 and E3 specifications – SNIA Solid State Drive Special Interest Group co-chairs Cameron Brett and Jonmichael Hands joined to provide the latest updates at our live webcast: EDSFF Taking Center Stage in the Data Center.  We had some great questions from our live audience, so our experts have taken the time to answer them in this blog.

Q: What does the EDSFF roadmap look like? When will we see PCIe® Gen5 NVMe™, 1.2, 2.0 CXL cx devices?

A: As the form factors come out into the market, we anticipate that there will be feature updates and smaller additions to existing specifications like SFF-TA-1008 and SFF-TA-1023.  There may also be changes around defining LEDs and stack updates.  The EDSFF specifications, however, are mature, and we have seen validation and support on the connector and how it works at higher interface speeds. You now have platforms, backplanes, and chassis in the marketplace to support these form factors.  Going forward, we may see integration with other device types like GPUs, support for new platforms, and alignment with PCIe Gen 5.  Regarding CXL, we see the buzz, and having this form factor serve as a vehicle for CXL will give it huge momentum.

Q:  I’m looking for thoughts on recent comments I read about PCIe 5.0 NVMe drives likely needing or benefitting from larger form factors (like 25mm wide vs. 22mm) for cooling considerations. With mass-market price optimizations, what is the likelihood that client compute will need to transition away from existing M.2 (especially 2280) form factors in the coming years, and will that be a form factor shared with server compute (as has been the case with 5.25″, 3.5″, and 2.5″ drives)?

A: We are big fans of EDSFF being placed on reference platforms for OEMs and motherboard makers. Enterprise storage support would be advantageous on the desktop.  At the recent OCP Global Summit, there was discussion of Gen 5 specifications and M.2 and U.2. With the increased demands for power and bandwidth, we think that if you want more performance you will need to move to a different form factor, and EDSFF makes sense.

Q:  On E1.S vs. E3.S market dominance, can you comment on their support for dual-port modules? Some traditional storage server designs favor E3.S because of the dual-port configuration. More modern storage designs do not rely on dual-port modules, and therefore prefer E1.S. Do you agree with this correlation? How will this affect the predictions on market share?

A:  There is some confusion about what the specifications support versus what vendors support and what customers are demanding.  The EDSFF specifications share a common pinout and connection specification.  If a manufacturer wishes to support the dual-port functionality, they can do so now.  Hyperscalers are now using E1.S in compute designs and may use E3 for their high-availability enterprise storage requirements.  Our webcast showed the forecast from Forward Insights of larger shipments of E3 further out in time, reflecting the transition away from 2.5-inch to E3 as server and storage OEMs transition their backplanes.

Q:  Have you investigated enabling conduction cooling of E1.S and E3.S to a water cooled cold plate? If not, is it of interest?

A: The OCP Global Summit featured a presentation from Intel about immersion cooling, with a focus on the sustainability aspect, as you can get your power usage effectiveness (PUE) down further by eliminating the fans in the system design while increasing cooling.  There doesn’t seem to be anything preventing the use of EDSFF drives for immersion cooling. New CPUs have heat pipes, and new OEM designs have up to 36 drives in a 2U chassis.  How do you cool that?  Many folks are talking about cooling in the data center, and we’ll just need to wait to see what happens!

[Illustration: Dell PowerEdge AMD Genoa servers with 32 E3.S SSD bays]

Thanks again for your interest in SNIA and Enterprise and Data Center SSD Form Factors.  We invite you to visit our SSD Form Factor page where we have videos, white papers, and charts explaining the many different SSD sizes and formats in a variety of form factors. You may also wish to check out a recent article from Storage Review which discusses an E3.S implementation.