Power Efficiency Measurement – Our Experts Make It Clear – Part 4

Measuring power efficiency in datacenter storage is a complex endeavor. A number of factors play a role in assessing individual storage devices or system-level logical storage for power efficiency. Luckily, our SNIA experts make the measuring easier!

In this SNIA Experts on Data blog series, our experts in the SNIA Solid State Storage Technical Work Group and the SNIA Green Storage Initiative explore factors to consider in power efficiency measurement, including the nature of application workloads, IO streams, and access patterns; the choice of storage products (SSDs, HDDs, cloud storage, and more); the impact of hardware and software components (host bus adapters, drivers, OS layers); and access to read and write caches, CPU and GPU usage, and DRAM utilization.

Join us for the final installment of our journey to better power efficiency – Part 4: Impact of Storage Architectures on Power Efficiency Measurement.

And if you missed our earlier segments, click on the titles to read them: Part 1: Key Issues in Power Efficiency Measurement, Part 2: Impact of Workloads on Power Efficiency Measurement, and Part 3: Traditional Differences in Power Consumption: Hard Disk Drives vs Solid State Drives. Bookmark this blog series and explore the topic further in the SNIA Green Storage Knowledge Center.

Impact of Storage Architectures on Power Efficiency Measurement

Ultimately, the interplay between hardware and software storage architectures can have a substantial impact on power consumption. Optimizing these architectures based on workload characteristics and performance requirements can lead to better power efficiency and overall system performance.

Different hardware and software storage architectures can lead to varying levels of power efficiency. Here’s how they impact power consumption:

Hardware Storage Architectures

  1. HDDs vs. SSDs:
    Solid State Drives (SSDs) are generally more power-efficient than Hard Disk Drives (HDDs) due to their lack of moving parts and faster access times. SSDs consume less power during both idle and active states.
  2. NVMe® vs. SATA SSDs:
    NVMe (Non-Volatile Memory Express) SSDs often have better power efficiency compared to SATA SSDs. NVMe’s direct connection to the PCIe bus allows for faster data transfers, reducing the time components need to be active and consuming power. NVMe SSDs are also performance optimized for different power states.
  3. Tiered Storage:
    Systems that incorporate tiered storage with a combination of SSDs and HDDs optimize power consumption by placing frequently accessed data on SSDs for quicker retrieval and minimizing the power-hungry spinning of HDDs.
  4. RAID Configurations:
    Redundant Array of Independent Disks (RAID) setups can affect power efficiency. RAID levels like 0 (striping) and 1 (mirroring) may have different power profiles due to how data is distributed and mirrored across drives.
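As a rough, hedged illustration of why RAID level matters for power: mirroring and parity multiply the physical writes behind each logical write, keeping more drives active for longer. The fan-out factors below are simplified assumptions for illustration, not measurements of any particular array.

```python
# Illustrative sketch (not a vendor model): how RAID level changes the number
# of physical writes -- and therefore drive active time and power draw.

def physical_writes(raid_level: int, logical_writes: int) -> int:
    """Rough physical-write fan-out for a few common RAID levels."""
    if raid_level == 0:      # striping: each logical write lands on one drive
        return logical_writes
    if raid_level == 1:      # mirroring: every write is duplicated
        return logical_writes * 2
    if raid_level == 5:      # parity: simplified as one data + one parity write
        return logical_writes * 2
    raise ValueError("RAID level not covered by this sketch")

for level in (0, 1, 5):
    print(f"RAID {level}: {physical_writes(level, 1000)} physical writes per 1000 logical")
```

Real arrays complicate this picture with stripe sizes, write-back caches, and full-stripe writes, but the basic point holds: redundancy trades extra physical I/O (and power) for resilience.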

Software Storage Architectures

  1. Compression and Deduplication:
    Storage systems using compression and deduplication techniques can affect power consumption. Compressing data before storage can reduce the amount of data that needs to be read and written, potentially saving power.
  2. Caching:
    Caching mechanisms store frequently accessed data in faster storage layers, such as SSDs. This reduces the need to access power-hungry HDDs or higher-latency storage devices, contributing to better power efficiency.
  3. Data Tiering:
    Similar to caching, data tiering involves moving data between different storage tiers based on access patterns. Hot data (frequently accessed) is placed on more power-efficient storage layers.
  4. Virtualization:
    Virtualized environments can lead to resource contention and inefficiencies that impact power consumption. Proper resource allocation and management are crucial to optimizing power efficiency.
  5. Load Balancing:
    In storage clusters, load balancing ensures even distribution of data and workloads. Efficient load balancing prevents overutilization of certain components, helping to distribute power consumption evenly.
  6. Thin Provisioning:
    Allocating storage on-demand rather than pre-allocating can lead to more efficient use of storage resources, which indirectly affects power efficiency.
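The compression point above can be made concrete with a small sketch: fewer bytes physically written means less time with the device active. The payload below is an assumed, highly compressible log-style workload, so the ratio shown is illustrative only.

```python
import zlib

# Sketch: compressing data before it hits the drive reduces the bytes
# physically written, which in turn reduces active-time power draw.
payload = b"timestamp=2024-01-01T00:00:00 level=INFO msg=heartbeat ok\n" * 1000

compressed = zlib.compress(payload, level=6)
print(f"raw: {len(payload):,} bytes, compressed: {len(compressed):,} bytes "
      f"({len(compressed) / len(payload):.1%} of original)")
```

Note that compression itself costs CPU cycles, so the net power effect depends on how compressible the data is and where the compression runs.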

Emerging Memories Branch Out – a Q&A

Our recent SNIA Persistent Memory SIG webinar explored in depth the latest developments and futures of emerging memories – now found in multiple applications both as stand-alone chips and embedded into systems on chips. We got some great questions from our live audience, and our experts Arthur Sainio, Tom Coughlin, and Jim Handy have taken the time to answer them in depth in this blog. And if you missed the original live talk, watch the video and download the PDF here.

Q:  Do you expect Persistent Memory to eventually gain the speeds that exist today with DRAM?

A: It appears that this has already happened with the hafnium ferroelectrics that SK Hynix and Micron have shown. Ferroelectric memory is a very fast technology, and with very fast write cycles there is every reason for it to go that way. With the hooks that are in CXL™, though, that shouldn’t be much of a problem, since it’s a transactional protocol. Reads will probably rival DRAM speeds for MRAM and for resistive RAM (MRAM might reach DRAM speeds with its writes too). In fact, there are technologies like spin-orbit torque and even voltage-controlled magnetic anisotropy that promise higher performance and low write latency for MRAM. Most applications are read-intensive, so reads are where the real focus is, but it does look like we are going to get there.

Q: Are all the new memory technology protocols (electrically) compatible with DRAM interfaces like DDR4 or DDR5? If not, shouldn’t those technologies have lower chances of adoption, as they add a dependency on a custom memory controller?

A: That’s just a logic problem. There’s nothing innate about any memory technology that couples it tightly with any kind of a bus, and because NOR Flash and SRAM are the easy targets so far, most emerging technologies have used a NOR flash or SRAM type interface. However, in the future they could use DDR. There are some special twists because you don’t have to refresh emerging memory technologies, but in general they could use DDR.

But one of the beauties of CXL is that you can put anything you want, with any kind of interface, on the other side of CXL, and CXL erases the differences. It moderates them, so although they may have different performance, it’s hidden behind the CXL network. The burden then falls on the CXL controller designers to make sure that those emerging technologies, whether MRAM or others, can be adopted behind the CXL protocol. My expectation is for a few companies early on to provide CXL controllers that have some kind of specialty interface on them, whether for MRAM or resistive RAM or something like that, and then eventually for them to move their way into the mainstream. Another interesting thing about CXL is that we may even see a hierarchy of different memories within CXL itself, including domain-specific processors or accelerators that operate close to memory, so there are very interesting opportunities there as well. If you can do processing close to memory, you lower the amount of data you’re moving around and you save a lot of power for the computing system.

Q: Emerging memory technologies have a byte-level direct access programming model, in contrast to block-based NAND Flash. Do you think this new programming model will eventually replace NAND Flash, since it reduces the overhead and the power of transferring data?

A: It’s a question of cost, and that’s something that was discussed at length in our webinar. If you haven’t got a cost that’s comparable to NAND Flash, then you can’t really displace it. But as far as the interface is concerned, the NAND interface is incredibly clumsy. All of these technologies have byte interfaces rather than a block interface, and they can also write in place – they don’t need a pre-erased block to write into. From a technical standpoint that is a huge advantage, and now it’s just a question of whether they can get the cost down – which means getting the volume up.
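The overhead difference described above can be sketched with a toy calculation: NAND programs whole pages (after block erases), while a byte-addressable memory moves only the bytes that changed. The sizes below are assumptions chosen for illustration, not parameters of any real device.

```python
# Hedged illustration: data moved for a small in-place update on NAND
# (page-granular) versus a byte-addressable emerging memory.

NAND_PAGE_BYTES = 16 * 1024   # an assumed, typical-ish NAND page size
UPDATE_BYTES = 64             # bytes the application actually modifies

nand_bytes_moved = NAND_PAGE_BYTES     # the whole page is rewritten elsewhere
in_place_bytes_moved = UPDATE_BYTES    # only the changed bytes move

amplification = nand_bytes_moved / in_place_bytes_moved
print(f"{UPDATE_BYTES}-byte update: NAND moves {nand_bytes_moved} bytes "
      f"({amplification:.0f}x amplification)")
```

This ignores garbage collection and erase costs, which only widen the gap, which is why the answer calls the interface advantage huge and cost the remaining hurdle.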

Q: Can you discuss the High Bandwidth Memory (HBM) trends? What about memories used with Graphic Processing Units (GPUs)?

A: That topic isn’t the subject of this webinar, which is about emerging memory technologies. But, to comment, we don’t expect emerging memory technologies to adopt an HBM interface in the really near future, because HBM springboards off DRAM and, as we discussed on one of the slides, DRAM faces a transition – we don’t know when it will happen – to another emerging memory technology. We’ve put it in the early 2030s in our chart, but it could be much later than that, and HBM won’t convert over to an emerging memory technology until long after that.

However, HBM involves stacking of chips, and that ultimately could happen. It’s a more expensive process right now – a way of getting a lot of memory very close to a processor – and if you look at some of the NVIDIA applications, for example, HBM can play a role in chiplet technologies for GPUs. That’s another area that’s going to be using emerging memories as well – in the chiplets. While we didn’t talk about that much in this webinar, it is another place for emerging memories to play a role.

There’s one other advantage to using an emerging memory that we did not talk about: emerging memories don’t need refresh. In fact, none of the emerging memory technologies need refresh. More power is consumed by DRAM refresh than by actual data accesses. If you can cut that out, you might be able to stack more chips on top of each other and get even more performance, but we still wouldn’t see that as a reason for DRAM to be displaced early on in HBM and then later in the mainstream DRAM market. Still, all those refreshes generate a fair amount of heat, which may have packaging implications as well. So there may be some niches where these emerging memories are first used for those kinds of applications, if the performance is good enough.

Q:  Why have some memory companies failed?  Apart from the cost/speed considerations you mention, what are the other minimum envelope features that a new emerging memory should have? Is capacity (I heard 32Gbit multiple times) one of those criteria?

A: Shipping a product is probably the single most important activity for success. Companies don’t have to make a discrete or standalone SRAM or emerging memory chip, but they do need their technology to be adopted by somebody who is shipping something, if they’re not going to ship it themselves. That’s what we see in the embedded market as a good path for emerging memory IP: to get used and to build up volume. As the volume and comfort with manufacturing those memories increase, it opens up the possibility of lower-cost, higher-volume standalone memory down the road as well.

Q:  What are the trends in DRAM interfaces?  Would you discuss CXL’s role in enabling composable systems with DRAM pooling?

A: CXL, especially CXL 3.0, is particularly pointed at pooling. Pooling is going to be an extremely important development in memory with CXL, and it’s one of the reasons why CXL will probably proliferate. It allows you to allocate memory that is not attached to particular server CPUs and therefore to make more efficient and effective use of those memories. We mentioned this earlier when we said that right now DRAM is that memory, with some NAND Flash products out there too. But this could expand into other memory technologies behind CXL within the CXL pool, as well as accelerators (domain-specific processors) that do some operations closer to where the memory lives. So we think there are a lot of possibilities in that pooling for the development and growth of emerging memories as well as conventional memories.

Q: Do you think molecular-based technologies (DNA or others) can emerge in the coming years as an alternative to some of the semiconductor-based memories?

A: DNA and other molecular memory technologies are at a relatively early stage, but there are people making fairly aggressive plans for what they can do with those technologies. We think the initial market for molecular memories is not in high-performance memory applications; but especially with DNA, the potential storage density, and the fact that you can make many copies of content using genomic processes, make them potentially very attractive for archiving applications. The things we’ve seen are mostly in those areas because of the performance characteristics. The potential density they’re aiming at is in that lower part of the market, so it has to be very, very cost-effective, but the possibilities are there. Again, as with the emerging high-performance memories, there are economies of scale to deal with – if you can’t scale fast enough, the cost won’t come down enough to compete in those areas. So it faces somewhat similar challenges, though in a different part of the market.

Earlier in the webcast, we said when showing the orb chart, that for something to fit into the computing storage hierarchy it has to be cheaper than the next faster technology and faster than the next cheaper technology. DNA is not a very fast technology and so that automatically says it has to be really cheap for it to catch on and that puts it in a very different realm than the emerging memories that we’re talking about here. On the other hand, you never know what someone’s going to discover, but right now the industry doesn’t know how to make fast molecular memories.

Q:  What is your intuition on how tomorrow’s highly dense memories might impact non-load/store processing elements such as AI accelerators? As model sizes continue to grow and energy density becomes more of an issue, it would seem like emerging memories could thrive in this type of environment. Your thoughts?

A: Any memory would thrive in an environment with an unbridled thirst for memory, as AI currently has. But AI is undergoing some pretty rapid changes, not only in the number of parameters examined, but also in the models being used. We recently read a paper written by Apple* in which they found ways of winnowing down the data used for a large language model into something that would fit into an Apple MacBook Pro M2, and they were able to get good performance by doing that. They really accelerated things by ignoring data that didn’t make any difference. If those researchers keep working on the problem that way and take it to the extreme, you might not need all that much memory after all. But still, if memory were free, I’m sure there’d be a ton of it out there, and it is just a question of whether these memories can get cheaper than DRAM so that they look free compared to what things cost today.

There are three interesting elements to this. First, CXL, in addition to allowing mixing of memory types, lets you put domain-specific processors close to the memory. Perhaps those can do some of the processing that’s part of the model, in which case it would lower the energy consumption. Second, CXL supports different computing models than we traditionally use. There is quantum computing, of course, but there are also neural networks that use the memory itself as a matrix multiplier, and those use these emerging memories – a technique that could be applied to AI applications. Third, somewhat hidden behind this, spin tunnelling is changing processing itself: right now everything is current-based, but there is work on spintronic devices that would use the spin of electrons rather than current to move data around, avoiding resistive heating so that processing could run a lot cooler and use less energy. So there are a lot of interesting things buried in the technologies being used for these emerging memories that could have even greater implications for the development of computing beyond the memory applications themselves. To elaborate on spintronics: we’re talking about logic, not spin memory – using spin rather than charge, which is current.

Q:  Flash has an endurance issue (maximum number of writes before it fails). In your opinion, what is the minimum acceptable endurance (number of writes) that an emerging memory should support?

A: It’s amazing how many techniques have fallen into place since wear was an issue in flash SSDs. Today’s software understands which loads have high write levels and which don’t, and different SSDs can be used to handle the two kinds of load. On the SSD side, flash endurance has continually degraded with the adoption of MLC, TLC, and QLC, and is sometimes measured in the hundreds of cycles. This implies that any emerging memory can get by with an equally low endurance, as long as it’s put behind the right controller.

In high-speed environments this isn’t a solution, though, since controllers add latency, so “Near Memory” (the memory tied directly to the processor’s memory bus) will need higher endurance. Still, one practice that can help is putting code into memories that have low endurance and data into higher-endurance memory (which today would be DRAM). Since emerging memories can provide more bits at a lower cost and power than DRAM, the write load to the code space should be lower, since pages will be swapped in and out less frequently. The endurance requirement will depend on this swapping, and I would guess that the lowest acceptable level would be in the tens of thousands of cycles.
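The "tens of thousands of cycles" guess above can be reproduced with back-of-the-envelope arithmetic. Every number below is an assumption chosen only to show the shape of the calculation, not a measured swap rate or a specified lifetime.

```python
# Back-of-the-envelope endurance estimate for a code region under paging.
# All inputs are illustrative assumptions.

swaps_per_day = 10        # assumed rewrites of a given code page per day
lifetime_years = 10       # assumed product lifetime

cycles_needed = swaps_per_day * 365 * lifetime_years
print(f"~{cycles_needed:,} write cycles over {lifetime_years} years")
```

Under these assumptions the requirement lands in the tens of thousands of cycles, orders of magnitude below NAND-class endurance concerns once wear is spread by a controller.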

Q: It seems that persistent memory is more of an enterprise benefit rather than a consumer benefit. And consumer acceptance helps the advancement and cost scaling issues. Do you agree? I use SSDs as an example. Once consumers started using them, the advancement and prices came down greatly.

A: Anything that drives increased volume will help. In most cases any change to large-scale computing works its way down to the PC, so this should happen in time here, too. But today there’s a growing amount of MRAM use in personal fitness monitors, and this will help drive costs down, so initial demand will not come exclusively from enterprise computing. At the same time, the IBM FlashDrive that we mentioned uses MRAM too, so both enterprise and consumer are already working simultaneously to grow consumption.

Q: The CXL diagram (slide 22 in the PDF) has 2 CXL switches between the CPUs and the memory. How much latency do you expect the switches to add, and how does that change where CXL fits on the array of memory choices from a performance standpoint?

A: The CXL delay goals are very aggressive, but I am not sure that an exact number has been specified. It’s on the order of 70ns per “hop,” which can be understood as the delay of going through a switch or a controller. Naturally, software will evolve to work with this, moving data that has high bandwidth requirements but is less latency-sensitive to more remote areas, while keeping the most latency-sensitive data in near memory.
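Applying the rough 70ns-per-hop figure to the slide's topology gives a quick feel for where switched CXL memory lands in the hierarchy. The hop count below (two switches plus a memory controller) is an assumption read off the diagram, not a number from the CXL specification.

```python
# Rough arithmetic using the ~70 ns-per-hop figure from the answer above.
# Hop count is an assumption based on the slide's two-switch topology.

NS_PER_HOP = 70
hops = 2 + 1              # two CXL switches + one memory controller

added_latency_ns = hops * NS_PER_HOP
print(f"~{added_latency_ns} ns added on the access path")
```

A couple hundred nanoseconds of added latency places such memory well behind directly attached DRAM (tens of nanoseconds) but far ahead of NVMe storage, which is exactly the "far memory" tier the answer describes.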

Q: Where can I learn more about the topic of Emerging Memories?

A: Here are some resources to review:


* LLM in a Flash: Efficient Large Language Model Inference with Limited Memory, Keivan Alizadeh, et al., arXiv:2312.11514 [cs.CL]

So just what is an SSD?

It seems like an easy enough question, “What is an SSD?” but surprisingly, most of the search results for it quickly become confused about media, controllers, form factors, storage interfaces, performance, reliability, and different market segments.

The SNIA SSD SIG has spent time demystifying various SSD topics like endurance, form factors, and the different classifications of SSDs – from consumer to enterprise and hyperscale SSDs.

“Solid state drive is a general term that covers many market segments, and the SNIA SSD SIG has developed a new overview of ‘What is an SSD?’,” said Jonmichael Hands, SNIA SSD Special Interest Group (SIG) Co-Chair. “We are committed to helping make storage technology topics, like endurance and form factors, much easier to understand, coming straight from the industry experts defining the specifications.”

The “What is an SSD?” page offers a concise description of what SSDs do, how they perform, and how they connect, and also provides a jumping-off point for more in-depth clarification of the many aspects of SSDs. It joins an ever-growing category of 20 one-page “What Is?” answers that provide a clear, concise, vendor-neutral definition of often-asked technology terms, a description of what they are, and how each of these technologies works. Check out all the “What Is?” entries at https://www.snia.org/education/what-is

And don’t miss other topics of interest from the SNIA SSD SIG, including the Total Cost of Ownership Model for Storage and SSD videos and presentations in the SNIA Educational Library.

Your comments and feedback on this page are welcomed. Send them to askcmsi@snia.org.

How Many IOPS? Users Share Their 2017 Storage Performance Needs

New on the Solid State Storage website is a whitepaper from analysts Tom Coughlin of Coughlin Associates and Jim Handy of Objective Analysis which details IT manager requirements for storage performance. The paper examines how requirements have changed over a four-year period for a range of applications, including databases, online transaction processing, cloud and storage services, and scientific and engineering computing. Users disclose how many IOPS are needed, how much storage capacity is required, and what system bottlenecks prevent them from getting the performance they need. Read More

Flash Memory Summit Highlights SNIA Innovations in Persistent Memory & Flash

SNIA and the Solid State Storage Initiative (SSSI) invite you to join them at Flash Memory Summit 2016, August 8-11 at the Santa Clara Convention Center. SNIA members and colleagues receive $100 off any conference package by using the code “SNIA16” by August 4 when registering for Flash Memory Summit at http://www.flashmemorysummit.com

On Monday, August 8, from 1:00pm – 5:00pm, a SNIA Education Afternoon will be open to the public in SCCC Room 203/204, where attendees can learn about multiple storage-related topics with five SNIA Tutorials on flash storage, combined service infrastructures, VDBench, stored-data encryption, and Non-Volatile DIMM (NVDIMM) integration from SNIA member speakers. Read More

SNIA’s Persistent Memory Education To Be Featured at Open Server Summit 2016

If you are in Silicon Valley or the Bay Area this week, SNIA welcomes you to join them and the Solid State Storage Initiative April 13-14 at the Santa Clara Convention Center for Open Server Summit 2016, the industry’s premier event focusing on the design of next-generation servers, with topics on data center efficiency, SSDs, core OS, cloud server design, the future of open server and open storage, and other efforts toward combining industry-standard hardware with open-source software.

The SNIA NVDIMM Special Interest Group is featured at OSS 2016, and will host a panel Thursday April 14 on NVDIMM technology, moderated by Bill Gervasi of JEDEC and featuring SIG members Diablo Technology, Netlist, and SMART Modular. The panel will highlight the latest activities in the three “flavors” of NVDIMM, and offer a perspective on the future of persistent memory in systems. Also, SNIA board member Rob Peglar of Micron Technology will deliver a keynote on April 14, discussing how new persistent memory directions create new approaches for system architects and enable entirely new applications involving enormous data sets and real-time analysis.

SSSI will also be in booth 403 featuring demonstrations by the NVDIMM SIG, discussions on SSD data recovery and erase, and updates on solid state storage performance testing.  SNIA members and colleagues can register for $100 off using the code SNIA at http://www.openserversummit.com.

SNIA NVM Summit Delivers the Persistent Memory Knowledge You Need

by Marty Foltyn

The discussion, use, and application of Non-Volatile Memory (NVM) has come a long way since the first SNIA NVM Summit in 2013. The significant improvements in persistent memory, with enormous capacity, memory-like speed, and non-volatility, will make the long-awaited promise of the convergence of storage and memory a reality. In this 4th annual NVM Summit, we will see how storage and memory have now converged, and learn that we are now faced with developing the needed ecosystem. Register and join colleagues on Wednesday, January 20, 2016 in San Jose, CA to learn more, or follow http://www.snia.org/nvmsummit to review presentations post-event.

The Summit day begins with Rick Coulson, Senior Fellow, Intel, discussing the most recent developments in persistent memory with a presentation on All the Ways 3D XPoint Impacts Systems Architecture.

Ethan Miller, Professor of Computer Science at UC Santa Cruz, will discuss Rethinking Benchmarks for Non-Volatile Memory Storage Systems. He will describe the challenges for benchmarks posed by the transition to NVM, and propose potential solutions to these challenges.

Ken Gibson, NVM SW Architecture, Intel, will present Memory is the New Storage: How Next Generation NVM DIMMs will Enable New Solutions That Use Memory as the High-Performance Storage Tier. This talk reviews some of the decades-old assumptions that change for suppliers of storage and data services as solutions move to memory as the new storage.

Jim Handy, General Director, Objective Analysis, and Tom Coughlin, President, Coughlin Associates will discuss Future Memories and Today’s Opportunities, exploring the role of NVM in today’s and future applications. They will give some market analysis and projections for the various NVM technologies in use today.

Matt Bryson, SVP-Research, ABR, will lead a panel on NVM Futures-Emerging Embedded Memory Technologies, exploring the current status and future opportunities for NVM technologies and in particular both embedded and standalone MRAM technologies and associated applications.

Edward Sharp, Chief, Strategy and Technology, PMC-Sierra, will present Changes Coming to Architecture with NVM. Although the IT industry has made tremendous progress innovating up and down the computing stack to enable, and take advantage of, non-volatile memory, is it sufficient, and where are the weakest links to fully unlock the potential of NVM?

Don Jeanette, VP, and John Chen, VP, of Trendfocus will review the solid state storage market, discussing what is happening in various segments, and why, as it relates to PCIe.

Dejan Vucinc, HGST San Jose Research Center, will discuss Latency in Context: Finding Room for NVMs in the Existing Software Ecosystem. HGST Research has been working diligently to find out whether there is room in the existing hardware/software ecosystem for emerging NVM technology when viewed as block storage rather than main memory. Vucinc will present an update on previously published results using prototype PCI Express-attached PCM SSDs and their custom device protocol, DC Express, as well as measurements of its latency and performance through a proper device driver using several different kinds of Linux kernel block layer architecture.

Arthur Sainio, Director Marketing, SMART Modular and Co-Chair, SNIA NVDIMM SIG, will lead a panel on NVDIMM, discussing how new media types are joining NAND Flash, and how enhanced controllers and networking are being developed to unlock the latency and throughput advantages of NVDIMM.

Neal Christiansen, Principal Development Lead, Microsoft, will discuss Storage Class Memory Support in the Windows OS. Storage Class Memories (SCM) have been the topic of R&D for the last few years, and with the promise of near-term product delivery, the question is how Windows will be enabled for such SCM products and how applications can take advantage of these capabilities.

Jeff Moyer, Principal Software Engineer, Red Hat will give an overview of the current state of Persistent Memory Support in the Linux Kernel.

Cristian Diaconu, Principal Software Engineer, Microsoft, will present Microsoft SQL Hekaton – Towards Large Scale Use of PM for In-memory Databases, using the example of Hekaton (the SQL Server in-memory database engine) to break down the opportunity areas for non-volatile memory in the database space.

Tom Talpey, Architect File Server Team, Microsoft, will discuss Microsoft Going Remote at Low Latency: A Future Networked NVM Ecosystem. As new ultra-low latency storage such as Persistent Memory and NVM is deployed, it becomes necessary to provide remote access – for replication, availability and resiliency to errors.

Kevin Deierling, VP Marketing, Mellanox, will discuss the role of the network in developing Persistent Memory over Fabrics, including the key goals and key fabric feature requirements.

Data Recovery and Selective Erasure of Solid State Storage a New Focus at SNIA

The rise of solid state storage has been incredibly beneficial to users in a variety of industries. Solid state technology presents a more reliable and efficient alternative to traditional storage devices. However, these benefits have not come without unforeseen drawbacks in other areas. For those in the data recovery and data erase industries, for example, solid state storage has presented challenges. The obstacles to data recovery and selective erasure capabilities are not only a problem for those in these industries, but they can also make end users more hesitant to adopt solid state storage technology.

Recently a new Data Recovery and Erase Special Interest Group (SIG) has been formed within the Solid State Storage Initiative (SSSI) within the Storage Networking Industry Association (SNIA). SNIA’s mission is to “lead the storage industry worldwide in developing and promoting standards, technologies and educational services to empower organizations in the management of information.” This fantastic organization has given the Data Recovery and Erase SIG a solid platform on which to build the initiative.

The new group has held a number of introductory open meetings for SNIA members and non-members to promote the group and develop the group’s charter. For its initial meetings, the group sought to recruit both SNIA members and non-members that were key stakeholders in fields related to the SIG. This includes data recovery providers, erase solution providers and solid state storage device manufacturers. Aside from these groups, members of leading standards bodies and major solid state storage device consumers were also included in the group’s initial formation.

The group’s main purpose is to be an open forum of discussion among all key stakeholders. In the past, there have been few opportunities for representatives from different industries to work together, and collaboration had often been on an individual basis rather than as a group. With the formation of this group, members intend to cooperate between industries on a collective basis in order to foster a more constructive dialogue incorporating the opinions and feedback of multiple parties.

During the initial meetings of the Data Recovery and Erase SIG, members agreed on a charter to outline the group’s purpose and goals. The main objective is to foster collaboration among all parties to ensure consumer demands for data recovery and erase services on solid state storage technology can be met in a cost-effective, timely and fully successful manner.

In order to achieve this goal, the group has laid out six steps, involving all relevant stakeholders:

  1. Build the business case to support the need for effective data recovery and erase capabilities on solid state technology by using use cases and real examples from end users with these needs.
  2. Create a feedback loop allowing data recovery providers to provide failure information to manufacturers in order to improve product design.
  3. Foster cooperation between solid state manufacturers and data recovery and erase providers to determine what information is necessary to improve capabilities.
  4. Protect sensitive intellectual property shared between data recovery and erase providers and solid state storage manufacturers.
  5. Work with standards bodies to ensure future revisions of their specifications account for capabilities necessary to enable data recovery and erase functionality on solid state storage.
  6. Collaborate with solid state storage manufacturers to incorporate capabilities needed to perform data recovery and erase in product design for future device models.

The success of this special interest group depends not only on the hard work of the current members, but also on a diverse membership base of representatives from different industries. We will be at Flash Memory Summit in booth 820 to meet you in person! Or you can visit our website at www.snia.org/forums/sssi for more information on this new initiative and all solid state storage happenings at SNIA. If you’re a SNIA member and you’d like to learn more about the Data Recovery/Erase SIG, or you think you’d be a good fit for membership, we’d love to speak with you. Not a SNIA member yet? Email marty.foltyn@snia.org for details on joining.

Solid State Storage Summit Webinar Presentations Now Available for Viewing

The April 21/22, 2015 Solid State Storage Summit, presented by SNIA and the Evaluator Group on the SNIA BrightTALK Channel, was a great success. Attendees raved about the high-quality content and knowledgeable speakers.

Did you miss it?

No worries! Now you can hear SNIA Solid State Storage Initiative experts and analysts from the Evaluator Group deliver the latest updates on solid state technology. Click on the title of each presentation to listen to this great technical information.

Day 1 – Solid State Systems – 5 different webcasts from Intel, Load Dynamix, Evaluator Group, EMC, and HP

Day 2 – Solid State Components – 5 different webcasts from the San Diego Supercomputer Center, NetApp, Micron, Toshiba, and SMART Modular

New SIG for SSD Data Recovery/Erase Formed – Calls Open to All Interested Participants

SSDs present particular challenges when trying to erase all data or attempting to recover data from a broken drive. To address these issues, a new Data Recovery/Erase Special Interest Group has been formed within the SNIA Solid State Storage Initiative.

The goal of the SIG is to provide a forum in which solution providers and solid state storage manufacturers can collaborate to enable data recovery and erase capabilities in solid state storage in such a way as to ensure that customer demands for these services can be met in a cost-effective and timely manner, with a high likelihood of success. A key to the success of the SIG is obtaining input and participation from all of the key stakeholders: solid state storage manufacturers, data recovery and erase solution providers, and solid state storage customers.

The SIG will be having a limited number of conference calls that will be open to non-members. Go to http://www.snia.org/forums/sssi/dresig for more details and to register for the first open meeting.