IBM Elastic Storage — “Data Democracy” in Action

By Jane Clabby, Clabby Analytics


In an effort to improve the economics of storage, IBM unveiled its latest software-defined storage (SDS) offering – code-named “Elastic Storage” – at the company’s Fast Data Forum in Boston on May 12. Elastic Storage has been designed to address the accelerating growth of “Big Data” (particularly unstructured data, which, according to IBM, represents 80% of data generated today), as well as “new era” workloads that encompass cloud, mobile, social and analytics technologies. Using Elastic Storage, enterprises can improve the economics of storage by: (1) increasing the speed and scope of data access; (2) enhancing scalability; (3) optimizing data placement; (4) virtualizing storage pools for better utilization; and (5) providing continuous operation and high availability.

By taking advantage of flash technologies, as well as storage tiering and other software-based capabilities, IBM says that Elastic Storage provides up to a 6x performance improvement and a reduction in storage costs of up to 90%. Drawing on technology from Watson (IBM’s analytics platform), Elastic Storage can scan 10 billion files, on a single cluster, in only 43 minutes. IBM boasts 3,000 Elastic Storage customers and 100K delivered systems to date (this is because Elastic Storage is based on existing IBM products; more on this later).

Customer Feedback

The value of Elastic Storage was nicely summarized by Russell Schneider, Principal Storage Consultant at Jeskell Inc., an IT solutions provider specializing in government and aerospace that has adopted the technology to manage the creation of, and access to, globally distributed big data used for meteorological study. In a panel discussion, Mr. Schneider described Elastic Storage’s ability to put data access and data storage on “auto-pilot,” with the system automatically determining what data to keep and what data to discard by scanning, ingesting and curating that data. Schneider described a “Data Democracy” in which built-in analytics examine usage patterns to decide where data should reside at any given point in time. Users benefit from the scale and performance of Elastic Storage without requiring administrators to learn and track usage patterns that will evolve and change over time.

Another customer, Alan Malek, Director of IT, HPC and Platform Strategy at Cypress Semiconductor, described how deploying Elastic Storage eliminated storage bottlenecks, delivered an 8.5x to 13x performance improvement, and significantly reduced total development cycle time – all without replacing any hardware.

Elastic Storage – A Closer Look

Last year, I wrote a Pund-IT Review article describing IBM’s SDS strategy, noting that “it’s not the technology that’s new, it’s the term being used to describe it.” The same can be said about this latest announcement. Elastic Storage draws functionality from existing IBM solutions that the company’s customers already know and love, including IBM General Parallel File System (GPFS) for big data file management and global file sharing as the core technology, as well as Watson (for analytics and high performance). Other offerings in the software-defined storage portfolio include IBM Virtual Storage Center (for storage virtualization and management) and server-side flash storage (for fast access to mission-critical data). By combining and optimizing these elements, customers get a “whole” that is greater than the sum of the parts.

Being software-defined means abstracting storage from the underlying hardware and eliminating storage silos, as well as the storage bottlenecks associated with them. This provides flexibility in hardware deployment and eliminates vendor lock-in. In fact, Elastic Storage supports over 250 IBM and non-IBM hardware devices. New features and functions are added in software, so they are available across this broad range of hardware.

Major features of Elastic Storage include:

  • Storage virtualization that enables multiple systems and applications to share common pools of storage
  • Support for block, file and object storage, as well as structured and unstructured data
  • Support for OpenStack Cinder and Swift, as well as POSIX and Hadoop
  • Single namespace across multiple data centers which allows users to share data globally
  • Policy-based data management
  • Automatic data placement and movement between tape, flash and disk based on usage patterns and policies
  • Availability as a SoftLayer cloud service later this year
  • Native encryption and data protection
  • Scalability up to 1 billion petabytes (a yottabyte)
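The policy-based data management and automated tiering noted above come from GPFS, whose information lifecycle management rules use a SQL-like syntax. The sketch below is illustrative only; the pool names, file pattern and thresholds are hypothetical:

```
/* Place new database files on the flash pool at creation time */
RULE 'place-db' SET POOL 'flash' WHERE UPPER(NAME) LIKE '%.DB'

/* When the flash pool passes 80% occupancy, migrate files not
   accessed in 30 days to the disk pool until it drops to 60% */
RULE 'cool-down' MIGRATE FROM POOL 'flash' THRESHOLD(80,60)
  TO POOL 'disk'
  WHERE (DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME)) > 30
```

Rules like these are what let the system run on “auto-pilot”: the administrator states the policy once, and the file system moves data between flash, disk and tape as usage patterns change.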

What Will the Future Bring?

One of the most exciting aspects of these IBM events is the opportunity to hear from IBM Research and IBM Distinguished Engineers on future innovations. A primary focus of this event was something called “storlets”: unstructured, fixed data (such as photos) stored along with descriptive metadata so that value can be derived directly from the data. Think of storlets as a software-defined mechanism for customizing the behavior of an object store or (more simply) as an app for your objects.

Storlets extend an object store by moving computation (filtering, transforming, analyzing) to the data rather than moving the data to the computation. With storlets, computation is dynamically loaded and performed inside the object store. As a result, storlets reduce cost, improve performance, simplify operations and enhance security, because the data isn’t moving. Worth noting is that storlets are also integrated with OpenStack Swift.

One of the largest efforts around storlets is a European Union-funded consortium that gathers use cases, needs and requirements from IBM partners. The Vision Cloud Project combines content-centric storage with storlets to take advantage of the metadata associated with objects. For example, a storlet for personal photographs could tag who is in a photo, where the photo was taken, the occasion of the photo and so on, so that photos can be organized and viewed in a variety of ways “on-the-fly.”
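The photo-tagging scenario can be sketched in Python. This is a simplified, hypothetical interface inspired by the storlet concept; the class name, call signature and metadata fields are assumptions for illustration, not the actual OpenStack Storlets API:

```python
import json


class PhotoTagStorlet:
    """Hypothetical storlet: logic deployed into the object store,
    invoked next to the stored object instead of shipping the object
    out to a client for processing."""

    def __call__(self, in_file, out_file, params):
        # Read the stored object's metadata record in place.
        record = json.load(in_file)

        # Derive value directly from the data: build tags from the
        # photo's GPS metadata and any caller-supplied occasion.
        tags = []
        if "gps" in record:
            tags.append("geo:" + record["gps"])
        if params.get("occasion"):
            tags.append("occasion:" + params["occasion"])
        record["tags"] = tags

        # Write the enriched record back to the store.
        json.dump(record, out_file)
```

Because the computation runs where the object lives, only the small enriched record crosses the network – the security and performance argument made above.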

Cognitive Computing

Another major theme of the IBM event focused on cognitive computing. With Watson technology playing a large role in Elastic Storage, we were reminded of the breadth and scope of practical uses for Watson. But beyond that, IBM identified cognitive computing as the same type of disruptive force that the internet was back in the 1990s.

With the growth and confluence of mobile, social and cloud, “keyword search” may not be the best way to find and manage data. Cognitive computing will enable new business models in which systems are adaptive and responsive by being aware of context and of what we are trying to do. These systems will enable users to exploit big data to “find a needle in the river.” Cognitive computing technology will be critical to the next era of elastic storage, enabling users to manage data where it lives.

Summary Observations

Elastic Storage takes technology from well-known and established IBM products, such as GPFS and Watson. By combining these and other best-of-breed products into an end-to-end storage solution family, IBM customers get a comprehensive solution that provides policy-based data management, automated data movement to optimize price/performance, massive scalability and built-in data protection.

IBM’s commitment to OpenStack also assures customers of investment protection and multi-vendor interoperability. In fact, by leveraging software-defined technologies, Elastic Storage supports over 250 IBM and non-IBM hardware platforms today. By creating a solution that uses many familiar IBM products, while still providing integration to OpenStack and other evolving standards, customers can easily adopt Elastic Storage both for traditional and new era workloads. In doing so, they should see improved storage economics – better performance, increased utilization, and lower costs.
