SSD Blind Survey at Flash Memory Summit

Calypso recently presented an Industry Blind Survey of SSD Performance at the Flash Memory Summit.

The survey compared nine MLC SSDs, eight SLC SSDs, and one 15K RPM SAS HDD.  The chart shows all sample drives as RND 4K IOPS by block size for a 65:35 R/W mix: small blocks in the back, large blocks in the front, IOPS on the Y axis.  The chart clearly shows the general steady state performance of SLC and MLC SSDs referenced against a 15K RPM SAS HDD.

Takeaways?  There is a lot of variance in performance between SSDs, but it is nice to see an apples-to-apples comparison at the device level.  RND 4K IOPS at a 65:35 R/W mix is a good corner-case benchmark.  All numbers are steady state and comply with the recently released SNIA SSS Performance Test Specification.  All measurements were taken on the SNIA-compliant Calypso Reference Test Platform.

SSS Performance Test Specification Coming Soon

A new Performance Test Specification (PTS) for solid state storage is about to be released for Public Technical Review by the SNIA SSSI and SSS TWG.  The SNIA PTS is a device-level performance specification for solid state storage testing that sets forth standard terminologies, metrics, methodologies, tests and reporting for NAND Flash based SSDs.  SNIA plans to release the final PTS v 1.0 later this year as a SNIA Architecture document on track for INCITS and ANSI standards treatment.

Why do we need a Solid State Storage Performance Test Specification?

Lack of Industry Standards / Difficulty in Comparing SSD Performance

There has been no industry standard test methodology for measuring solid state storage (SSS) device performance. As a result, each SSS manufacturer has utilized different measurement methodologies to derive performance specifications for their solid state storage (SSS) products. This made it difficult for purchasers of SSS to fairly compare the performance specifications of SSS products from different manufacturers.

The SNIA Solid State Storage Technical Working Group (SSS TWG), working closely with the SNIA Solid State Storage Initiative (SSSI), has developed the Solid State Storage Performance Test Specification (SSS PTS) to address these issues. The SSS PTS defines a suite of tests and test methodologies that effectively measure the performance characteristics of SSS products. When executed in a specific hardware/software environment, SSS PTS provides measurements of performance that may be fairly compared to those of other SSS products measured in the same way in the same environment.

Key Concepts

Some of the key concepts of the PTS include proper pre-test preparation, setting the appropriate test parameters, running the prescribed tests, and reporting results consistent with the PTS protocol.  For all testing, the Device Under Test (DUT) must first be purged (to ensure a repeatable test start point) and preconditioned (by writing a prescribed access pattern of data to ensure measurements are taken when the DUT is in steady state), with measurements then taken in a prescribed steady state window (defined as a range of five rounds of data that stay within a prescribed excursion range around the data averages).
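
The steady state window concept can be sketched in code.  This is only an illustrative check, not the normative PTS algorithm, and the 10% excursion band below is an assumed example threshold; the specification itself defines the exact qualifying criteria.

```python
# Illustrative sketch (not the normative PTS algorithm): scan per-round
# averages for a window of five consecutive rounds whose values all stay
# within a prescribed excursion band around the window average.
# The 10% band is an assumed example threshold, not a value from the PTS.

def find_steady_state_window(round_averages, window=5, max_excursion=0.10):
    """Return the start index of the first qualifying window, or None."""
    for start in range(len(round_averages) - window + 1):
        rounds = round_averages[start:start + window]
        avg = sum(rounds) / window
        # Every round must lie within +/- max_excursion of the window average.
        if all(abs(r - avg) <= max_excursion * avg for r in rounds):
            return start
    return None

# Example: IOPS settling toward steady state after preconditioning.
iops = [9000, 7500, 6200, 5400, 5100, 5000, 4950, 4980, 5020]
print(find_steady_state_window(iops))  # first qualifying window starts at round 3
```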

Standard Tests

The PTS sets forth three standard tests for client and enterprise SSDs: IOPS, Throughput and Latency, measured in IOs per second, MB per second and average milliseconds, respectively.  The test loop rounds consist of a random data pattern stimulus applied across a matrix of R/W mixes and block sizes at a prescribed demand intensity (outstanding IOs, i.e. queue depth and thread count).  The user can extract performance measurements from this matrix that relate to workloads of interest.  For example, 4K RND W can equate to the small-block IO workloads typical of OLTP applications, while 128K R can equate to the large-block sequential workloads typical of video-on-demand or media streaming applications.
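
The test-loop matrix described above might be sketched as follows.  The R/W mixes, block sizes and demand intensity shown are illustrative placeholders (check the PTS itself for the normative values), and run_workload is a hypothetical hook standing in for whatever I/O generator the test platform actually uses.

```python
# Sketch of a PTS-style IOPS test-loop round: random stimulus across a
# matrix of R/W mixes and block sizes at a fixed demand intensity
# (queue depth x thread count).  The specific values below are examples,
# not the normative PTS matrix.

RW_MIXES = [(100, 0), (95, 5), (65, 35), (50, 50), (35, 65), (5, 95), (0, 100)]
BLOCK_SIZES_KB = [0.5, 4, 8, 16, 32, 64, 128, 1024]
QUEUE_DEPTH, THREADS = 32, 4  # example demand intensity, not PTS-mandated

def run_round(run_workload):
    """One test-loop round: measure IOPS for every cell of the matrix."""
    results = {}
    for read_pct, write_pct in RW_MIXES:
        for bs_kb in BLOCK_SIZES_KB:
            # run_workload is a hypothetical I/O-generator hook.
            results[(read_pct, bs_kb)] = run_workload(
                read_pct=read_pct, block_kb=bs_kb,
                queue_depth=QUEUE_DEPTH, threads=THREADS)
    return results
```

A caller would then pull out the matrix cells of interest, e.g. `results[(0, 4)]` for 4K RND W (OLTP-like) or `results[(100, 128)]` for 128K R (streaming-like).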

Reference Test Environment

The SNIA PTS is hardware and software agnostic.  This means that the specification does not require any specific hardware, OS or test software to be used to run the PTS.  However, SSD performance is greatly affected by the system hardware, OS and test software (the test environment).  Because SSDs are 100 to 1,000 times faster than HDDs, care must be taken not to let the test environment introduce performance bottlenecks into the test measurements.

The PTS addresses this by setting forth basic test environment requirements and listing a suggested Reference Test Platform (RTP) in an informative annex.  This RTP was used by the TWG in developing the PTS.  Other hardware and software can be used, and the TWG is actively seeking industry feedback on results from both the RTP and other test environments.

Standard Reporting

The PTS also sets forth an informative annex with a recommended test reporting format.  This sample test format reports all of the PTS required test and result information to aid in comparing test data for solid state storage performance.

Facilitate Market Adoption of Solid State Storage

The SSS PTS will facilitate broader market adoption of Solid State Storage technology within both the client and enterprise computing environments.

SSS PTS version 0.9 will be posted very shortly at http://www.snia.org/publicreview for public review. The public review phase is a 60-day period during which the proposed specification is publicly available and feedback is gathered (via http://www.snia.org/tech_activities/feedback/) across the worldwide storage industry. Upon completion of the public review phase, the SSS TWG will remove the SSS PTS from the web site, consider all submitted feedback, make modifications, and ultimately publish version 1.0 of the ratified SSS PTS.

PTS Press Release

Watch for the press release on or about July 12, and keep an eye on http://www.snia.org/forums/sssi for updates.

Combining HDD and Flash in Computers Cures Many Issues

SSDs have tried to displace HDDs in computers for a few years, but the higher cost of flash memory has been a major barrier to widespread adoption.  Lower flash memory prices will help adoption, but HDDs decrease in $/GB at about the same rate as SSDs, so the relative price ratio doesn’t improve much in SSDs’ favor.  At the same time, many computers with HDDs have serious performance issues associated with the slower access time of HDDs.

There have been attempts to combine the advantages of HDDs and flash memory in the past, such as Intel’s Turbo Memory and the hybrid hard disk alliance, but these were mostly dependent upon the operating system to provide performance advantages.  The latest initiatives to combine flash memory and hard disk drives into tiered storage systems in computers are known as Blink Boot, hyperHDD and the solid state hybrid hard drive.

Most of these approaches (hyperHDD and the solid state hybrid hard drive) don’t depend upon the operating system to manage the use of the flash memory and the HDDs.  In the case of the recent solid state hybrid hard drive from Seagate, the 4 GB of flash memory on the drive’s PCB is used to store the most recently accessed data that the computer is using.  This is done internally by the hard drive, providing a boost in access speed for this content without any special requirements on the computer’s operating system.

Adding a little flash memory to a hard disk drive for frequently accessed data, or even for OS and application booting, while keeping the HDD for inexpensive mass storage makes a lot of sense.  Computer storage tiering with flash memory and HDDs could finally help flash memory become mainstream in computers.

Kaminario – A New Name for Solid State Storage

An Israeli start-up named Kaminario is attacking Texas Memory Systems’ home turf with a DRAM SSD that offers speeds as fast as 1.5 million IOPS.  While TMS has built itself a comfortable niche business using custom hardware, Kaminario’s K2 SSD, announced on June 16, is made using standard off-the-shelf low-profile blade servers from Dell.  Only the software is proprietary.

DRAM SSDs are an interesting product that serves niches which flash SSDs are unlikely to penetrate.  Objective Analysis’ new report on Enterprise SSDs explores the price and speed dynamics that separate these two technologies.  See the Objective Analysis Reports page for more information.

Some of the K2’s internal servers are dedicated to handling I/O, and are called “io Directors.”  The bandwidth of the storage system scales linearly with the number of io Directors used – a pair of io Directors provides 300K IOPS, and ten io Directors will support 1.5M IOPS.  Below the io Directors are other servers called “Data Nodes” which manage the storage.  Capacity scales linearly with the addition of Data Nodes.  Today’s limit is 3.5TB, but this number will increase over time.

Redundancy is a key feature of the Kaminario K2: there are at least two of every device (io Directors, Data Nodes, and HDDs per Data Node), since the DRAM-based data is saved to HDDs in the event of an unexpected power failure.  The system can communicate with the host through a range of interfaces, with FCoE offered at introduction.

Kaminario’s K2 boasts a significantly smaller footprint and price tag than HDD-based systems with competing IOPS levels.

To find out more about Kaminario visit Kaminario.com

Violin Memory wants to Replace your Storage Array

Violin Memory introduced their 3000 series memory appliance in mid-May.  This million-plus-IOPS device piles 10-20 terabytes of NAND flash storage into a single 3U cabinet at a price that Violin’s management claims is equivalent to that of high-end storage arrays.

The system, introduced at $20/GB, or $200,000, is intended to provide enough storage at a low enough price to eliminate any need to manage hot data into and out of a limited number of small solid state drives.  Instead, Violin argues, the appliance’s capacity is big enough and cheap enough that an entire database can be economically stored within it, giving lightning-fast access to the entire database at once.

Note that Violin acquired Gear6 a month later, in mid-June.  This seems to reveal that the company is hedging its bets, taking advantage of a distressed caching company’s expertise to assure a strong position in architectures based upon a smaller memory appliance managed by caching software.

There is a good bit of detail about how and why both of these approaches make sense in Objective Analysis’ newest Enterprise SSD report.  See the Objective Analysis Reports page for more information.

But in regard to the Series 3000, CIOs whose databases are even larger than 10TB will be comforted to hear that Violin will be introducing appliances with as much as 60TB of storage by year-end.

Violin’s 3000 series can be configured through a communications module to support nearly any interface: Fibre Channel, 10Gb Ethernet, FCoE, or PCIe, with Violin offering to support “even InfiniBand, if asked.”  Inside are 84 modules, each built of a combination of DRAM and both SLC and MLC NAND flash, configured to assure data and pathway redundancy.

This high level of redundancy and fault management is one of Violin’s hallmarks.

Violin’s website is Violin-Memory.com

Nimbus: No Fast HDDs

San Francisco’s Nimbus Data Systems launched a solid state storage system in late April that is intended to replace all the HDDs in a system except the slow disks used in near-line storage.  Nimbus holds the view that solid state drives eliminate the need for fast disk storage, and that in the future all data centers will be built using only SSDs for speed and capacity drives (slow HDDs) for mass storage.  This viewpoint is gaining a growing following.

Nimbus’ S-Class Enterprise Flash Storage System uses a proprietary 6Gb/s SAS flash module, rather than off-the-shelf SSDs, to keep system costs low.  Storage capacity is 2.5-5.0TB per 2U enclosure and can be scaled up to 100TB.  Throughput is claimed to be 500K IOPS through 10Gb Ethernet connections.  Prices are roughly $8/GB.

Although Nimbus previously sold systems based on a mix of SSDs and HDDs, they have moved away from using HDDs, and expect data center managers to adopt this new approach.

There’s merit to this argument, but it will probably take a few years before CIOs agree on the role of NAND flash vs. enterprise HDDs vs. capacity HDDs in the data center. There’s a lot more detail on the approaches being considered for flash in the enterprise data center in Objective Analysis’ new Enterprise SSD report.  See the Objective Analysis Reports page for more information.

You can find out more at  NimbusData.com

New Article: Solid State Drives for Energy Savings

A new article, co-authored by Tom Coughlin and me, can now be read on the SNIA Europe website.  “Solid State Drives for Energy Savings” explains the energy benefits that IT managers are discovering as they bring SSDs into their data centers.

The article is a quick two-pager, and it introduces SNIA’s new Total Cost of Ownership (TCO) Calculator, a clever tool that helps estimate the power, rack space, and other savings that come with converting fast storage from enterprise HDDs to SSDs.
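
To give a flavor of the kind of arithmetic such a calculator performs, here is a minimal sketch.  All of the drive IOPS and wattage figures below are made-up example inputs for illustration, not numbers from the SNIA tool.

```python
# Hypothetical illustration of one comparison a TCO calculator makes:
# how many drives are needed to hit a target IOPS level, and what that
# drive count costs in power.  All figures are invented example inputs.

TARGET_IOPS = 100_000

drives = {
    # name: (IOPS per drive, watts per drive) -- example values only
    "15K RPM enterprise HDD": (180, 15.0),
    "enterprise SSD": (20_000, 6.0),
}

for name, (iops, watts) in drives.items():
    count = -(-TARGET_IOPS // iops)  # ceiling division
    print(f"{name}: {count} drives, {count * watts:.0f} W")
```

With these example inputs the HDD tier needs hundreds of spindles while a handful of SSDs reaches the same IOPS target, which is where the power and rack space savings come from.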

[Update: After clicking on the above link, it will be necessary to download the April 2010 edition of  Storage Networking Times, in order to read the article.]

Understanding Flash Impact on Enterprise Storage Architectures

In London this week, I was invited to present on solid state storage at the SNIA Europe Data Center Academy event.  It was a well-attended event with a good mix of IT managers, consultants and press in the audience, including Chris Mellor from The Register, who subsequently published this blog.  Chris is one of the most perceptive journalists in the industry and really nailed the key issues in his blog.  Check it out!

BTW, the presentation is among the latest batch of SNIA tutorials, which you can find here.

Capacities and Storage Devices

Some SSD advocates project that SSD price per gigabyte will cross below that of HDDs, because HDD areal density will grow more slowly in the future than it has in the past.  Slower areal density growth would in turn slow HDD price-per-GB declines.  The argument is that this would allow flash gigabyte prices to blow past HDD prices, just as they slipped below DRAM gigabyte prices in 2004.

Some of these advocates have recently predicted that 3.5″ HDDs will “only” reach 6TB by 2015.  Although we find it likely that 6 TB HDDs will be in mass production by then, Coughlin Associates expects to see a maximum announced product capacity of 10TB for 3.5-inch hard disk drives by that time.

Slower areal density growth of hard disk drives may result from difficulties in the transition to new recording technologies such as patterned media and heat-assisted magnetic recording.  It appears likely that areal density growth will slow from 40-50% annually today to 20% or possibly even less over the next few years.  Today the HDD industry is shipping 2 TB 3.5-inch HDDs and 1 TB 2.5-inch HDDs, and will likely ship 3 TB or larger (3.5-inch) drives in the second half of 2010.  If the areal density of HDDs increased only 20% annually from 2010 through 2015, this would give us 7.5 TB 3.5-inch HDDs and over 3 TB 2.5-inch HDDs.

Although a slowdown is likely during a technology transition phase, it will probably be gradual, starting from today’s roughly 40% annual areal density growth rate.  So let’s say we have one more year of 40% growth (2010-2011), one year of 30% growth (2011-2012), and then 20% growth for the three remaining years to 2015.  Starting from a 3 TB capacity in 2010, that would give us a 9.4 TB capacity in 2015.  There is enough uncertainty in these numbers that the actual capacity could be between 8 and 11 TB, so let’s say the maximum 3.5-inch storage capacity in 2015 is 10 TB.  Likewise, because of the geometry differences, the maximum 2.5-inch capacity would be about 5 TB.
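
The year-by-year arithmetic is easy to check directly, using the growth rates assumed above:

```python
# Start from a 3 TB 3.5-inch drive in 2010 and apply one year of 40%
# areal density growth, one year of 30%, then three years of 20%,
# reaching 2015.

capacity_tb = 3.0
for growth in [0.40, 0.30, 0.20, 0.20, 0.20]:
    capacity_tb *= 1 + growth

print(round(capacity_tb, 1))  # prints 9.4 (TB, in 2015)
```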

If the HDD industry stays true to its history, these 10TB HDDs will cost $50, giving a price per terabyte of $5.  Meanwhile, NAND flash terabyte prices will have declined to $50-100, preventing SSDs from displacing HDDs at least through 2015!

Solid State Storage Initiative – Transition in Leadership: Looking Back and Going Forward

SSSI and SSS TWG:

The SNIA Solid State Storage Initiative (SSSI) recently noted the transition in leadership from founding Chair Phil Mills of IBM.  Phil Mills, Secretary of the SNIA Board of Directors, drove formation of the SSSI in late 2008 and was instrumental in developing the Initiative’s organizational mission and objectives as well as forming the associated SNIA Solid State Storage Technical Working Group.  Due in large part to Phil’s Herculean efforts, the SSSI was successful in attaining critical mass by recruiting 26 founding members – a number that continued to grow to the current membership of 34 SSSI member companies, along with 56 SSS TWG member companies.

SSSI Mission:

The founding mission statement of the SSSI articulated a dedication to “foster the growth and success of the solid state storage market for client and enterprise applications… that encompasses marketing outreach, education, collaboration with SSS industry standards bodies, development of SNIA SSS Specifications and Standards (such as the SSS TWG), and a close following of advancements in non-volatile memory for solid state mass storage.”

Progress to Date:

Taking stock of the SSSI at this transition point, it is clear that the SSSI has been very successful in achieving these goals.  The SSSI has active committees for Marketing, Business Development, Education, Technical Development and SSS Total Cost of Ownership.  In conjunction with the work of the SSS TWG, the SSSI has made great strides in all areas.  Deliverables to date include: establishment of the SSSI as THE authoritative voice on solid state storage performance; collaboration and cooperation with other standards groups and trade associations (JEDEC, SSDA, and others); several key white papers and tutorials on many aspects of solid state storage; industry-noted presentations and presence at trade shows and events; completion of a TCO calculator for SSS; ongoing investigations into emerging areas of SSS technology (such as “drive pairing” and “storage tiering”); and the imminent release of the SSS TWG Performance Test Specification for Public Technical Review, the industry’s first SSS performance specification, whose goal is to help standardize the nomenclature, metrics, methodologies, tests and reporting of SSS performance.

Looking Forward:

Phil leaves the Initiative at a time of great market activity, and leaves in place a capable team to carry the SSSI forward.  The SSSI Governing Board has appointed Paul Wassenberg of Marvell as the acting SSSI Chair.  Paul has been chair of the TechDev subcommittee, and his extensive interaction with Education, BusDev, and the SSS TWG allows for a seamless transition.  Marvell is also one of the founding companies of the SSSI; Paul has been instrumental in getting the Initiative to where it is and will be key in moving it forward.

Come Join Us:

The SSSI continues to evangelize Solid State Storage and will have numerous opportunities to contribute to standardization of the SSS industry and widespread adoption and deployment of SSS mass storage.

Thank You Phil:

The entire SSSI wants to thank Phil for his dedication, leadership and vision, and hopes to take the SSSI to the next level.  In that vein, the Initiative actively seeks to expand its membership and to address topics of interest to the SSS community and the Initiative’s membership.  People and companies with an interest in participating in this exciting industry are invited to visit the SSSI at http://www.snia.org/forums/sssi