Power Efficiency Measurement – Our Experts Make It Clear – Part 1

Measuring power efficiency in datacenter storage is a complex endeavor. A number of factors play a role in assessing individual storage devices or system-level logical storage for power efficiency. Luckily, our SNIA experts make the measuring easier!

In this SNIA Experts on Data blog series, our experts in the SNIA Solid State Storage Technical Work Group and the SNIA Green Storage Initiative explore factors to consider in power efficiency measurement, including the nature of application workloads, IO streams, and access patterns; the choice of storage products (SSDs, HDDs, cloud storage, and more); the impact of hardware and software components (host bus adapters, drivers, OS layers); and the effects of read and write caches, CPU and GPU usage, and DRAM utilization.

Join us on our journey to better power efficiency as we begin with Part 1: Key Issues in Power Efficiency Measurement. Bookmark this blog and check back in February, March, and April for the continuation of our four-part series. And explore the topic further in the SNIA Green Storage Knowledge Center.

Part 1: Key Issues in Power Efficiency Measurement

Ensuring accurate and precise power consumption measurements is challenging, especially at the individual device level, where even minor variations can have a significant impact. Achieving reliable data necessitates addressing factors like calibration, sensor quality, and noise reduction.

Furthermore, varying workloads in systems require careful consideration to accurately capture transient power spikes and average power consumption. Modern systems are composed of interconnected components that affect each other’s power consumption, making it difficult to isolate individual component power usage.

The act of measuring power itself consumes energy, creating a trade-off between measurement accuracy and the disturbance caused by measurement equipment. To address this, it’s important to minimize measurement overheads while still obtaining meaningful data.

Environmental factors such as temperature, humidity, and airflow can unpredictably influence power consumption, emphasizing the need for standardized test environments. Rapid workload changes can lead to transient power behavior that may require specialized equipment for accurate measurement.

Software running on a system significantly influences power consumption, emphasizing the importance of selecting representative workloads and ensuring consistent software setups across measurements. Dynamic voltage and frequency scaling is used in many systems to optimize power consumption, and understanding its effects under different conditions is crucial.

Correctly interpreting raw power consumption data is essential to draw meaningful conclusions about efficiency. This requires statistical analysis and context-specific considerations. Real-world variability, stemming from manufacturing differences, component aging, and user behavior, must also be taken into account in realistic assessments.
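As a minimal illustration of the kind of post-processing involved, the C sketch below averages raw power samples, notes the peak, and derives a simple IOPS-per-watt figure. The sample values, one-second sampling interval, and IO count are illustrative assumptions, not figures from any SNIA specification.

/* Minimal sketch (illustrative values and names, not from any SNIA spec):
 * turn raw power samples into average/peak power and a simple
 * IOPS-per-watt efficiency metric. Assumes one power sample per second. */
#include <stdio.h>

int main(void)
{
    double watts[] = { 6.8, 7.1, 9.4, 7.0, 6.9, 12.2, 7.3, 7.0 }; /* meter readings */
    int n = sizeof(watts) / sizeof(watts[0]);
    long long ios_completed = 920000;        /* IOs finished over the same 8 s window */

    double sum = 0.0, peak = 0.0;
    for (int i = 0; i < n; i++) {
        sum += watts[i];
        if (watts[i] > peak)
            peak = watts[i];                 /* transient spikes matter, not just the mean */
    }
    double avg_watts = sum / n;
    double iops = (double)ios_completed / n; /* samples are one second apart */

    printf("average power: %.2f W (peak %.2f W)\n", avg_watts, peak);
    printf("efficiency:    %.0f IOPS/W\n", iops / avg_watts);
    return 0;
}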

Addressing these challenges necessitates a combination of precise measurement equipment, thoughtful experimental design, and a deep understanding of the system and device being investigated.

In our next blog, Part 2, we will examine the impact of workloads on power efficiency measurement.

Dynamic Speakers on Tap for the 2022 SNIA Persistent Memory + Computational Storage Summit

Our 10th annual Persistent Memory + Computational Storage Summit is right around the corner on May 24 and 25, 2022.  We remain virtual this year, and hope this will offer you more flexibility to watch our live-streamed mainstage sessions, chat online, and catch our always popular Computational Storage birds-of-a-feather session on Tuesday afternoon without needing a plane or hotel reservation!

As David McIntyre of Samsung, the 2022 PM+CS Summit chair, says in his 2022 Summit Preview Video, “You won’t want to miss this event!”   

This year, the Summit agenda expands knowledge on computational storage and persistent memory, and also features new sessions on computational memory, Compute Express Link™ (CXL™), NVM Express, SNIA Smart Data Accelerator Interface (SDXI), and Universal Chiplet Interconnect Express (UCIe).

We thank our many dynamic speakers who are presenting an exciting lineup of talks over the two days, including:

  • Yang Seok Ki of Samsung on Innovation with SmartSSD for Green Computing
  • Charles Fan of MemVerge on Persistent Memory Breaks Through the Clouds
  • Gary Grider of Los Alamos National Labs on HPC for Science Based Motivations for Computation Near Storage
  • Alan Benjamin of the CXL Consortium on Compute Express Link (CXL): Advancing the Next Generation of Data Centers
  • Cheolmin Park of Samsung on CXL and The Universal Chiplet Interconnect Express (UCIe)
  • Stephen Bates and Kim Malone of NVM Express on NVMe Computational Storage – An Update on the Standard
  • Andy Walls of IBM on Computational Storage for Storage Applications

Our full agenda is at www.snia.org/pm-summit.

We’ll have great networking opportunities, a virtual reception, and the ability to connect with leading companies including Samsung, MemVerge, and SMART Modular who are sponsoring the Summit. 

Complimentary registration is now available at https://www.snia.org/events/persistent-memory-summit/pm-cs-summit-2022-registration.  We will see you there!

What is eBPF, and Why Does it Matter for Computational Storage?

Recently, a question came up in the SNIA Computational Storage Special Interest Group on new developments in a technology called eBPF and how they might relate to computational storage. To learn more, SNIA on Storage sat down with Eli Tiomkin, SNIA CS SIG Chair with NGD Systems; Matias Bjørling of Western Digital; Jim Harris of Intel; Dave Landsman of Western Digital; and Oscar Pinto of Samsung.

SNIA On Storage (SOS):  The eBPF.io website defines eBPF, extended Berkeley Packet Filter, as a revolutionary technology that can run sandboxed programs in the Linux kernel without changing kernel source code or loading kernel modules.  Why is it important?

Dave Landsman (DL): eBPF emerged in Linux as a way to do network filtering, and enables the Linux kernel to be programmed.  Intelligence and features can be added to existing layers, and there is no need to add additional layers of complexity.

SNIA On Storage (SOS):  What are the elements of eBPF that would be key to computational storage? 

Jim Harris (JH):  The key to eBPF is that it is architecturally agnostic; that is, applications can download programs into a kernel without having to modify the kernel.  Computational storage allows a user to do the same types of things – develop programs on a host and have the controller execute them without having to change the firmware on the controller.

Using a hardware-agnostic instruction set is preferable to having an application download x86 or ARM code based on the architecture it is running on.

DL:  It is much easier to establish a standard ecosystem with architecture independence. Instead of an application needing to download x86 or ARM code based on the architecture, you can use a hardware-agnostic instruction set that the kernel can interpret and then translate based on the processor. With this "agnostic code," computational storage would not need to know which processor is running on an NVMe device.

SOS: How has the use of eBPF evolved?

JH:  In the Linux kernel, eBPF began as a way to capture and filter network packets: it is more efficient to run programs directly in the kernel I/O stack than to return packet data to user space, operate on it there, and then send the data back to the kernel. Over time, eBPF use has expanded to additional use cases.

SOS:  What are some use case examples?

DL: One of the use cases is performance analysis. For example, eBPF can be used to measure things such as latency distributions for file system I/O, details of storage device I/O and TCP retransmits, and blocked stack traces and memory.

Matias Bjørling (MB): Other examples in the Linux kernel include tracing and gathering statistics. However, while the eBPF programs in the kernel are fairly simple and can be checked by the Linux kernel's verifier, computational programs are more complex and longer running. Thus, there is a lot of ongoing work to explore how to efficiently apply eBPF to computational programs.

For example: what is the right set of run-time restrictions to be defined by the eBPF VM, whether any new instructions need to be defined, and how to make the program run as close as possible to the instruction set of the target hardware.

JH: One of the big use cases involves data analytics and filtering. A common data flow for data analytics involves large database table files that are often compressed and encrypted. Without computational storage, you read the compressed and encrypted data blocks to the host, decompress and decrypt the blocks, and maybe do some filtering operations like a SQL query. All of this, however, consumes a lot of extra host PCIe, host memory, and cache bandwidth because you are reading the data blocks and doing all these operations on the host. With computational storage, you can tell the SSD to read data and transfer it not to the host but to memory buffers within the SSD. The host can then tell the controller to run a fixed-function program, such as decrypting the data and placing it in another local location on the SSD, and then run a user-supplied program, such as eBPF, to do some filtering operations on that local decrypted data. In the end you transfer only the filtered data to the host. You are doing the compute closer to the storage, saving memory and bandwidth.
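That flow can be sketched in pseudo-C. Every cs_* call below is a hypothetical placeholder invented for this illustration; neither NVMe nor the SNIA computational storage APIs define these names, but the sequence mirrors the steps Jim describes.

/* Pseudo-C outline of the flow described above. All cs_* functions are
 * hypothetical placeholders, declared only to show the sequence of steps. */
#include <stddef.h>
#include <stdint.h>

typedef struct cs_device cs_device;   /* opaque handle to a computational SSD */

int cs_read_to_local(cs_device *dev, uint64_t lba, size_t blocks, int dst_buf);
int cs_run_fixed_function(cs_device *dev, int func, int src_buf, int dst_buf);
int cs_run_ebpf(cs_device *dev, const void *prog, size_t len, int src, int dst);
int cs_copy_to_host(cs_device *dev, int src_buf, void *host_buf, size_t len);

#define FUNC_DECRYPT 1

void filter_on_device(cs_device *dev, const void *ebpf_prog, size_t prog_len,
                      void *host_buf, size_t host_len)
{
    /* 1. Read the compressed/encrypted blocks into a device-local buffer;
     *    nothing crosses PCIe into host memory yet. */
    cs_read_to_local(dev, 0, 1024, 0);

    /* 2. Fixed-function step: decrypt into a second local buffer. */
    cs_run_fixed_function(dev, FUNC_DECRYPT, 0, 1);

    /* 3. User-supplied eBPF program filters the decrypted data locally. */
    cs_run_ebpf(dev, ebpf_prog, prog_len, 1, 2);

    /* 4. Only the (much smaller) filtered result is copied back to the host. */
    cs_copy_to_host(dev, 2, host_buf, host_len);
}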

SOS:  How does using eBPF for computational storage look the same?  How does it look different?

JH: There are two parts to this answer. Part 1 is the eBPF instruction set, with its registers and how eBPF programs are assembled. Where we are excited about computational storage and eBPF is that the instruction set is common, and there are already existing toolchains that support eBPF. You can take a C program and compile it into an eBPF object file, which is huge. If you add computational storage aspects to standards like NVMe, where developing unique toolchain support can take a lot of work, you can now leverage what is already there for the eBPF ecosystem.
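As a purely illustrative sketch of that toolchain point, here is an ordinary C function and the clang invocation that turns it into a hardware-agnostic eBPF object file. The function and file names are made up for this example.

/* filter.c -- an ordinary C function compiled to a hardware-agnostic eBPF
 * object file. Function and file names are illustrative only. */
int count_over_threshold(const unsigned int *vals, int n, unsigned int limit)
{
    int hits = 0;
    for (int i = 0; i < n; i++)      /* simple bounded filter loop */
        if (vals[i] > limit)
            hits++;
    return hits;
}

/* Build eBPF bytecode instead of x86 or ARM machine code:
 *   clang -O2 -target bpf -c filter.c -o filter.o
 * The resulting filter.o can be inspected with `llvm-objdump -d filter.o`
 * or handed to any environment that provides an eBPF runtime. */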

Part 2 of the answer centers around the Linux kernel's restrictions on what a downloaded eBPF program is allowed to do. For example, the eBPF instruction set allows unbounded loops, and toolchains such as gcc will generate eBPF object code with unbounded loops, but the Linux kernel will not permit those to execute – it rejects the program. These restrictions are manageable when doing packet processing in the kernel: the kernel knows a packet's specific data structure and can verify that data is not being accessed outside the packet. With computational storage, you may want to run an eBPF program that operates on a set of data with a very complex data structure – perhaps unbounded arrays or multiple levels of indirection. Applying the Linux kernel's verification rules to computational storage would limit or even prevent processing this type of data.
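A hypothetical fragment makes the restriction concrete: a compiler targeting eBPF will happily emit bytecode for the function below, but the Linux kernel verifier rejects it because it cannot prove the loop terminates.

/* Hypothetical example, not from a real project: compiles to eBPF bytecode,
 * but the kernel verifier will refuse to load it. */
int sum_until_zero(const int *data)
{
    int sum = 0;
    while (*data != 0) {   /* unbounded walk -- the verifier cannot bound it */
        sum += *data;
        data++;
    }
    return sum;
}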

SOS: What are some of the other challenges you are working through with using eBPF for computational storage?

MB:  We know that x86 works fast with high memory bandwidth, while other cores are slower. We have some general compute challenges in that eBPF needs to be able to hook into today's hardware the way we do for SSDs. What kinds of operations make sense to offload for these workloads? How do we define a common implementation API for all of them and build an ecosystem on top of it? Do we need an instruction-based compiler, or a library to compile up to – and if you have it on the NVMe drive side, could you use it? eBPF in itself is great – but getting a whole ecosystem built, and getting all of us to agree on what delivers value, will be the challenge in the long term.

Oscar Pinto (OP): The Linux kernel support for eBPF today is geared more towards networking and is light on storage. That may be a challenge in building a computational storage framework. We need to think through how to enhance this, given that we download and execute eBPF programs in the device. As Matias indicated, x86 is great at what it does in the host today. But if we have to work with smaller CPUs in the device, they may need help from, say, dedicated hardware or similar logic implemented to aid the eBPF programs. One question is how these programs would talk to that hardware. We don't have a setup for storage like this today, and there are a variety of storage services that could benefit from eBPF.

SOS: Is SNIA addressing this challenge?

OP: On the SNIA side we are building on program functions that are downloaded to computational storage engines. These functions run on the engines, which are CPUs or some other form of compute tied to an FPGA, DPU, or dedicated hardware. We are defining these abstracted functionalities in SNIA today, and the SNIA Computational Storage Technical Work Group is developing a Computational Storage Architecture and Programming Model and Computational Storage APIs to address it. The latest versions, v0.8 and v0.5 respectively, have been approved by the SNIA Technical Council and are now available for public review and comment at the SNIA Feedback Portal.

SOS: Is there an eBPF standard? Is it aligned with storage?

JH:  We have a challenge around what an eBPF standard should look like. Today it is defined in the Linux kernel. But if you want to incorporate eBPF in a storage standard, you need to have something specified for that storage standard. We know the Linux kernel will continue to evolve, adding and modifying instructions. But if you have an NVMe SSD or other storage device, you have to have something set in stone – the version of eBPF that the standard supports. We need to know what the eBPF standard will look like and where it will live. Will standards organizations need to define something separately?

SOS:  What would you like an eBPF standard to look like from a storage perspective?

JH: We'd like an eBPF standard that can be used by everyone. We are looking at how computational storage can be implemented in a way that is safe and secure but can also solve a variety of different use cases.

MB:  Security will be a key part of an eBPF standard.  Programs should not access data they should not have access to.  This will need to be solved within a storage device. There are some synergies with external key management. 

DL: The storage community has to figure out how to work with eBPF and make this standard something that a storage environment can take advantage of and rely on.

SOS: Where do you see the future of eBPF?

MB:  The vision is that you can build eBPF programs and they work everywhere. When we build new database systems and integrate eBPF into them, we then have embedded kernels that can be sent to any NVMe device over the wire and executed. The cool part is that they can run anywhere on the path, so there become a lot of interesting ways to build new architectures on top of this. And together with the open system ecosystem we can create a body of accelerators with which we can fast-track the build-out of these ecosystems. eBPF can put this into overdrive with use cases outside the kernel.

DL:  There may be some other environments where computational storage is being evaluated, such as WebAssembly.

JH: An eBPF runtime is much easier to put into an SSD than a WebAssembly runtime.

MB: eBPF makes more sense – it is simpler to start and build upon as it is not set in stone for one particular use case.

Eli Tiomkin (ET):  Different SSDs have different levels of constraints. Every computational storage SSD in production, and even those in development, has unique capabilities that depend on the workload and application.

SOS:  Any final thoughts?

MB: At this point, technologies are coming together that are going to change the industry, in a way that lets us redesign storage systems both with computational storage and in how we manage security for these programs in NVMe devices. We have the perfect storm pulling things together. Exciting platforms can be built using open standards specifications not previously available.

SOS:  Looking forward to this exciting future. Thanks to you all.

Accelerating Disaggregated Storage to Optimize Data-Intensive Workloads

Thanks to big data, artificial intelligence (AI), the Internet of Things (IoT), and 5G, demand for data storage continues to grow significantly. The rapid growth is causing storage and database-specific processing challenges within current storage architectures. New architectures, designed for millisecond latency and high throughput, offer in-network and storage computational processing to offload and accelerate data-intensive workloads.

On June 29, 2021, the SNIA Compute, Memory and Storage Initiative will host a lively webcast discussion on today's storage challenges in an aggregated storage world and whether a disaggregated storage model could optimize data-intensive workloads. We'll talk about the concept of a Data Processing Unit (DPU) and whether a DPU should be combined with a storage data processor to accelerate compute-intensive functions. We'll also introduce the concept of key value storage and how it can be an enabler to solve storage problems.

Join moderator Tim Lustig, Co-Chair of the CMSI Marketing Committee, and speakers John Kim from NVIDIA and Kfir Wolfson from Pliops as we shift into overdrive to accelerate disaggregated storage. Register now for this free webcast.

Everyone Wants Their Java to Persist

In this time of lockdown, I’m sure we’re all getting a little off kilter. I mean, it’s one thing to get caught up listening to tunes in your office to avoid going out and alerting your family of the fact that you haven’t changed your shirt in two days. It’s another thing to not know where a clean coffee cup is in the house so you can fill it and face the day starting sometime between 5AM and Noon. Okay, maybe we’re just talking about me, sorry. But you get the point.

Wouldn’t it be great if we had some caffeinated source that was good forever? I mean… persistence of Java? At this point, it’s not just me.

Okay, that’s not what this webinar will be talking about, but it’s close. SNIA member Intel is offering an overview of the ways to utilize persistent memory in the Java environment. In my nearly two years here at SNIA, this has been one of the most-requested topics. Steve Dohrmann and Soji Denloye are two of the brightest minds in enabling persistence, and this is sure to be an insightful presentation.

Persistent memory application capabilities are growing significantly.  Since the publication of the SNIA NVM Programming Model developed by the SNIA Persistent Memory Programming Technical Work Group, new language support seems to be happening every day.  Don’t miss the opportunity to see the growth of PM programming in such a crucial space as Java.

The presentation is on BrightTALK and will be live on May 27th at 10 am Pacific. You can see the details at this link.

Now I just have to find a clean cup.

This post is also cross-posted at the PIRL Blog.  PIRL is a joint effort by SNIA and UCSD’s Non-Volatile Systems Lab to advance the conversation on persistent memory programming.  Check out other entries here.

SNIA Exhibits at OCP Virtual Summit May 12-15, 2020 – SFF Standards featured in Thursday Sessions

All SNIA members and colleagues are welcome to register to attend the free online Open Compute Project (OCP) Virtual Summit, May 12-15, 2020.

SNIA SFF standards will be represented in the following presentations (all times Pacific):

 Thursday, May 14:

The SNIA booth (link live at 9:00 am May 12) will be open from 9 am to 4 pm each day of the OCP Summit and will feature Chat with SNIA Experts: scheduled times when SNIA volunteer leadership will answer questions on SNIA technical work, education, standards and adoption, and vision for 2020 and beyond.

The current schedule is below – and is continually being updated with more speakers and topics – so  be sure to bookmark this page.

And let us know if you want to add a topic to discuss!

Tuesday, May 12:

  • 10:00 am – 11:00 am – SNIA Education, Standards, and Technology Adoption, Erin Weiner, SNIA Membership Services Manager
  • 11:00 am – 12:00 pm – Uniting Compute, Memory, and Storage, Jenni Dietz, Intel/SNIA CMSI Co-Chair
  • 11:00 am – 12:00 pm – Computational Storage standards and direction, Scott Shadley, NGD Systems & Jason Molgaard, ARM, SNIA CS TWG Co-Chairs
  • 11:00 am – 12:00 pm – SNIA Swordfish™ and Redfish Standards and Adoption, Don Deel, NetApp, SNIA SMI Chair
  • 12:00 pm – 1:00 pm – SSDs and Form Factors, Cameron Brett, KIOXIA, SNIA SSD SIG Co-Chair
  • 12:00 pm – 1:00 pm – Persistent Memory Standards and Adoption, Jim Fister, SNIA Persistent Memory Enablement Director
  • 1:00 pm – 2:00 pm – SNIA Technical Activities and Direction, Mark Carlson, KIOXIA, SNIA Technical Council Co-Chair
  • 1:00 pm – 2:00 pm – SNIA Education, Standards, Promotion, Technology Adoption, Michael Meleedy, SNIA Business Operations Director

Wednesday, May 13:

  • 11:00 am – 12:00 pm – SNIA Education, Standards, and Technology Adoption, Arnold Jones, SNIA Technical Council Managing Director
  • 11:00 am – 12:00 pm – Persistent Memory Standards and Adoption, Jim Fister, SNIA Persistent Memory Enablement Director
  • 12:00 pm – 1:00 pm – Computational Storage Standards and Adoption, Scott Shadley, NGD Systems & Jason Molgaard, ARM, SNIA CS TWG Co-Chairs
  • 12:00 pm – 1:00 pm – SNIA Technical Activities and Direction, Bill Martin, Samsung, SNIA Technical Council Co-Chair
  • 12:00 pm – 1:00 pm – SSDs and Form Factors, Jonmichael Hands, Intel, SNIA SSD SIG Co-Chair
  • 1:00 pm – 2:00 pm – SNIA NVMe and NVMe-oF standards and direction, Mark Carlson, KIOXIA, SNIA Technical Council Co-Chair
  • 1:00 pm – 2:00 pm – SNIA Swordfish™ and Redfish standards and direction, Richelle Ahlvers, Broadcom, SNIA SSM TWG Chair, and Barry Kittner, Intel, SNIA SMI Marketing Chair

Thursday, May 14

  • 11:00 am – 12:00 pm – Uniting Compute, Memory, and Storage, Jenni Dietz, Intel, SNIA CMSI Co-Chair
  • 11:00 am – 12:00 pm – SNIA Education, Standards, and Technology Adoption, Erin Weiner, SNIA Membership Services Manager
  • 12:00 pm – 1:00 pm – SNIA Swordfish™ and Redfish standards activities and direction, Richelle Ahlvers, Broadcom, SNIA SSM TWG Chair, and Barry Kittner, Intel, SNIA SMI Marketing Chair
  • 12:00 pm – 1:00 pm – SNIA Technical Activities and Direction, Bill Martin, Samsung, SNIA Technical Council Co-Chair
  • 1:00 pm – 2:00 pm – Computational Storage Standards and Adoption, Scott Shadley, NGD Systems & Jason Molgaard, ARM, SNIA CS TWG Co-Chairs
  • 1:00 pm – 2:00 pm – SNIA NVMe and NVMe-oF standards activities and direction , Bill Martin, Samsung, SNIA Technical Council Co-Chair

Friday, May 15: 

  • 11:00 am – 12:00 pm – Computational Storage Standards and Adoption, Scott Shadley, NGD Systems & Jason Molgaard, ARM, SNIA CS TWG Co-Chairs
  • 11:00 am – 12:00 pm – SNIA Swordfish™ and Redfish standards and direction, Richelle Ahlvers, Broadcom, SNIA SSM TWG Chair
  • 12:00 pm– 1:00 pm – SNIA NVMe and NVMe-oF standards activities and direction, Mark Carlson, KIOXIA, SNIA Technical Council Co-Chair
  • 12:00 pm – 1:00 pm – SNIA Technical Activities and Direction, Bill Martin, Samsung, SNIA Technical Council Co-Chair
  • 1:00 pm – 2:00 pm – SNIA Education, Standards and Technology Adoption, Michael Meleedy, SNIA Business Services Director


Register today to attend the OCP Virtual Summit. Registration is free for all attendees and is open for everyone, not just those who were registered for the in-person Summit. The SNIA exhibit will be found here once the Summit is live.
 
Please note that the virtual summit will be a 3D environment best experienced on a laptop or desktop computer; however, a simplified, mobile-responsive version will also be available for attendees. No additional hardware, software, or plugins are required.

Feedback Needed on New Persistent Memory Performance White Paper

A new SNIA Technical Work draft is now available for public review and comment – the SNIA Persistent Memory Performance Test Specification (PTS) White Paper.

A companion to the SNIA NVM Programming Model, the SNIA PM PTS White Paper (PM PTS WP) focuses on describing the relationship between traditional block IO NVMe SSD based storage and the migration to Persistent Memory block and byte addressable storage.  

The PM PTS WP reviews the history and need for storage performance benchmarking beginning with Hard Disk Drive corner case stress tests, the increasing gap between CPU/SW/HW Stack performance and storage performance, and the resulting need for faster storage tiers and storage products. 

The PM PTS WP discusses the introduction of NAND Flash SSD performance testing that incorporates pre-conditioning and steady state measurement (as described in the SNIA Solid State Storage PTS), the effects of – and need for testing using – real-world workloads on datacenter storage (as described in the SNIA Real World Storage Workload PTS for Datacenter Storage), the development of the NVM Programming Model, the introduction of PM storage, and the need for a Persistent Memory PTS.

The PM PTS focuses on the characterization, optimization, and test of persistent memory storage architectures – including 3D XPoint, NVDIMM-N/P, DRAM, Phase Change Memory, MRAM, ReRAM, STRAM, and others – using both synthetic and real-world workloads. It includes test settings, metrics, methodologies, benchmarks, and reference options to provide reliable and repeatable test results. Future tests would use the framework established in the first tests.
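As a rough, non-normative illustration of the steady-state idea, the C sketch below checks whether per-round IOPS results have settled within a fixed excursion of their recent average. The window length, threshold, and sample values are placeholders; the PTS documents define the actual criteria.

/* Rough illustration only: a generic steady-state check of the kind the PTS
 * methodology calls for. Window length and 20% excursion threshold are
 * placeholders -- see the SSS/PM PTS documents for the normative criteria. */
#include <stdio.h>
#include <stdbool.h>

static bool is_steady(const double *iops, int n, int window, double excursion)
{
    if (n < window)
        return false;

    double avg = 0.0;
    for (int i = n - window; i < n; i++)
        avg += iops[i];
    avg /= window;

    for (int i = n - window; i < n; i++)   /* every round must stay near the average */
        if (iops[i] > avg * (1.0 + excursion) || iops[i] < avg * (1.0 - excursion))
            return false;
    return true;
}

int main(void)
{
    /* Example per-round IOPS results from a pre-conditioning run (made up). */
    double iops[] = { 410e3, 320e3, 250e3, 205e3, 198e3, 202e3, 199e3, 201e3 };
    int rounds = sizeof(iops) / sizeof(iops[0]);

    printf("steady state reached: %s\n",
           is_steady(iops, rounds, 5, 0.20) ? "yes" : "no");
    return 0;
}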

The SNIA PM PTS White Paper targets storage professionals involved with: 

  1. Traditional NAND Flash based SSD storage over the PCIe bus;
  2. PM storage utilizing PM aware drivers that convert block IO access to loads and stores; and
  3. Direct In-memory storage and applications that take full advantage of the speed and persistence of PM storage and technologies. 

The PM PTS WP discussion on the differences between byte and block addressable storage is intended to help professionals optimize application and storage technologies and to help storage professionals understand the market and technical roadmap for PM storage.
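To make the block versus byte addressable distinction concrete, here is a minimal C sketch contrasting a conventional block write with a byte-addressable store through PMDK's libpmem. The file paths are illustrative, error handling is omitted, and the example assumes a DAX-capable persistent memory file system mounted at /mnt/pmem.

/* Minimal sketch (illustrative paths, error handling omitted) contrasting a
 * block IO write with a byte-addressable store through PMDK's libpmem.
 * Build with: cc pm_example.c -lpmem */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <libpmem.h>

int main(void)
{
    /* Block path: data moves through the kernel block stack a block at a time. */
    int fd = open("/tmp/blockfile", O_CREAT | O_WRONLY, 0666);
    char block[4096] = "hello, block storage";
    if (fd >= 0) {
        pwrite(fd, block, sizeof(block), 0);   /* whole 4 KiB block */
        fsync(fd);
        close(fd);
    }

    /* Byte-addressable path: map persistent memory, store to it directly with
     * ordinary CPU instructions, then flush to the persistence domain. */
    size_t mapped_len;
    int is_pmem;
    char *pmem = pmem_map_file("/mnt/pmem/buf", 4096, PMEM_FILE_CREATE, 0666,
                               &mapped_len, &is_pmem);
    if (pmem != NULL) {
        strcpy(pmem, "hello, persistent memory");     /* load/store access */
        if (is_pmem)
            pmem_persist(pmem, strlen(pmem) + 1);      /* CPU cache flush path */
        else
            pmem_msync(pmem, strlen(pmem) + 1);        /* msync fallback */
        pmem_unmap(pmem, mapped_len);
    }
    return 0;
}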

Eden Kim, chair of the SNIA Solid State Storage TWG and a co-author, explained that SNIA is seeking comment from Cloud Infrastructure, IT, and Data Center professionals looking to balance server and application loads, integrate PM storage for in-memory applications, and understand how response time and latency spikes are being influenced by applications, storage and the SW/HW stack. 

The SNIA Solid State Storage Technical Work Group (TWG) has published several papers on performance testing and real-world workloads, and the SNIA PM PTS White Paper includes both synthetic and real-world workload tests. The authors are seeking comment on the PM PTS WP from industry professionals, researchers, academics, and other interested parties, and invite anyone interested to participate in development of the PM PTS.

Use the SNIA Feedback Portal to submit your comments.

Share Your Experiences in Programming PM!

by Jim Fister, SNIA Director of Persistent Memory Enabling

Last year, the University of California San Diego (UCSD) Non-Volatile Systems Lab (NVSL) teamed with the Storage Networking Industry Association (SNIA) to launch a new conference, Persistent Programming In Real Life (PIRL). While not an effort to set the record for acronyms in a conference announcement, we did consider it a side-goal.  The PIRL conference was focused on gathering a group of developers and architects for persistent memory to discuss real-world results. We wanted to know what worked and what didn’t, what was hard and what was easy, and how we could help more developers move forward.

You don’t need another pep talk about how the world has changed and all the things you need to do (though staying home and washing your hands is a pretty good idea right now).  But if you’d like a pep talk on sharing your experiences with persistent memory programming, then consider this just what you need.

We believe that continuing the spirit of PIRL — discussing the results of persistent memory programming in real life — should continue.

If you're not aware, SNIA has been delivering some very popular webcasts on persistent programming, cloud storage, and a variety of other topics. SNIA has a great new webcast featuring PIRL alumnus Steve Heller, SNIA CMSI co-chair Alex McDonald, and me on the SNIA NVDIMM programming challenge and the winning entry. You can find more information and view it on demand at https://www.brighttalk.com/webcast/663/389451.

We would like to highlight more "In Real Life" topics via our SNIA webcast channel. Therefore, SNIA and UCSD NVSL have teamed up to create a submission portal for anyone interested in discussing their real-world persistent memory experiences. You can submit a topic at https://docs.google.com/forms/d/e/1FAIpQLSe_Ypo_sf1xxFcPD1F7se02jOWrdslosUnvwyS0RwcQpWAHiA/viewform, where we will evaluate your submission. Accepted submissions will be featured on the SNIA webcast channel over the coming months.

As a final note, this year's PIRL conference is currently scheduled for July. Even though most software developers are already used to social isolation and distancing from their peers, our organizing team has kept abreast of the latest information to decide whether an in-person conference on that date is feasible. In our last meeting, we agreed that it would not be prudent to hold the conference on the July date, and we have tentatively rescheduled the in-person conference to October 13-14, 2020. We will announce an exact date and our criteria for moving forward in the coming weeks, so stay tuned!

Show Your Persistent Stuff – and Win!

Persistent Memory software development has been a source of server development innovation for the last couple of years. The availability of the open source PMDK libraries (http://pmem.io/pmdk/) has provided a common interface for developing across PM types as well as server architectures. Innovation beyond PMDK also continues to grow, as more experimentation yields open and closed source products and tools.


New Conference Seeking PIRLs of Wisdom

UCSD Computer Science and Engineering, the Non-Volatile Systems Laboratory, and the Storage Networking Industry Association (SNIA) are inviting submissions of proposals for presentations at the first annual Persistent Programming in Real Life (PIRL) conference. PIRL brings together software development leaders interested in learning about programming methodologies for persistent memories and sharing their experiences with others. This is a meeting for developer project leads on the front lines of persistent programming, not for sales, marketing, or non-technical management.

PIRL is small, with attendance limited to under 100 people, including speakers. It will discuss what real developers have done, and want to do, with persistent memory. It will cover what worked, what didn't, what was easy and hard, what was surprising, and what others can learn from the experience. Presenters are encouraged, and even expected, to show and write code live in the presentation in a comfortable and dynamic peer environment.

Possibilities for presentations include, but are not limited to:

  • Experiences on a particular project
  • Live code development showing new concepts
  • Code challenges
  • New tools for programming

All attendees will be provided access to a development environment to respond to code challenges, or to show their own work in small forums.  This is intended to be a competition-free atmosphere for peers to network with each other to advance the use of persistent memory in the industry and academia.  By combining many of the industry leaders with the academic lights driving practical applications of new technology, peers at PIRL will encourage forward progress for adoption of persistent memory in the marketplace.

Keynote speakers include key personnel from DreamWorks, VMware, Oracle, Eideticom, and Intel.

PIRL will be hosted by the Non-Volatile Systems Laboratory at the University of California, San Diego.  It will be held at Scripps Forum on July 22nd to 23rd, 2019, with optional events starting July 21st. Pre-registration will be $400.

We’re excited to present this new conference, and we’re excited for you to participate.  Submit your presentation or code challenge idea today. Submissions are due by Monday, June 10th.