With the Market Shift Toward Containers, Will IBM’s Storage Portfolio Remain “Simple?”

Clabby Analytics recently attended a video-briefing presented by Eric Herzog, the chief marketing officer for IBM’s storage organization. The title of Mr. Herzog’s presentation was “Storage Made Simple,” a claim that only a few years ago IBM could not back up. In fact, in 2016 Jane Clabby wrote: “storage software products from around the IBM organization (GPFS, SVC, LTFS) needed more clearly articulated product positioning, improved integration, and less complexity around ordering and deployment.”
But then things changed. The company started listening more closely to its storage customers and business partners (who were complaining about the complexity of IBM storage packaging as well as pricing) – and to feedback from research analysts. In 2016, IBM focused strongly on streamlining its product packaging, its pricing, and its messaging. Accordingly, at the end of that year, Jane wrote in this report about how she was seeing “better integration between the products, as well as the ability to manage the suite of products from a single, consistent user interface. Not only does this make it simpler for IBM customers to deploy and use IBM Spectrum Storage, it makes it much easier for new customers to take advantage of IBM’s software defined storage offerings. In fact, IBM reports that since the rebranding they have added 2000 new Spectrum Storage customers.”
Now, in 2020, IBM faces a new challenge in storage. One of the company’s cornerstone strategic initiatives is to make it possible for its customers to build device- and vendor-transparent hybrid cloud environments. To do this, the company needs to move its customers from traditional virtualization schemes and public/private clouds to more open, vendor-agnostic, containerized hybrid cloud environments that use container-native storage to break down data silos and enable transparency across multiple cloud architectures – both public and private.
Will this migration be “simple?” Probably not. Customers will need to learn a new technology (containers) – and new standards (OpenShift and Kubernetes). They will need to rework parts of their infrastructure to better accommodate a hybrid cloud environment. They will need to learn how to use new development tools and understand new methodologies. But the benefits of making this shift far outweigh the inconvenience of modifying existing infrastructure (see the “Background” section below).
In 2017, IBM’s storage organization started to prepare its customers for the move to containers (“container-ready storage”). In 2019, with containerized offerings introduced to the portfolio, IBM started to deliver “container-native” storage solutions. This container-native approach to solution delivery is key: container-native solutions are turnkey containerized offerings that make storage easier than ever to deploy and manage.
While IBM’s October 27, 2020 storage announcement included storage for both hybrid cloud and containers, this review focuses primarily on storage for containers.

Background

A “container” is a software environment that packages a complete deployment unit, allowing an application to be automated, tracked, and rapidly deployed. It differs from a virtual machine in that multiple containerized workloads share a single operating system, rather than each workload running its own OS instance on an underlying virtual machine. By moving to a containerized approach to computing, information technology (IT) executives can expect to see:
• Lower cost and better return-on-investment (ROI) – fewer infrastructure resources (servers, software stacks, management software, etc.) are needed to run and manage the same application;
• Consistency across computing environments – standardization across development, build, test, and production environments will bring solutions to bear more quickly; for this reason, containers are fundamental for building a hybrid cloud;
• Compatibility/maintainability – containerized images run the same, regardless of where they are deployed – enabling IT administrators to write once, deploy anywhere;
• Isolation and security – containers own their own resources, and applications running in containers are isolated from one another and cannot be viewed by other containers, making containers easier to secure.
In short, containers are more efficient (less resource intensive); more flexible (from a development/deployment perspective); and more secure than the now “traditional” virtualized resource approach to computing.
It is also noteworthy that containers can be managed by Kubernetes, a standard for building portable, extensible environments that manage containerized cloud workloads and services. The way IBM is containerizing its storage solutions relies on Kubernetes (the cloud management environment) as the control plane that provides self-service capabilities while delivering additional scalability, agility, and portability.
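To make the control-plane idea concrete, the sketch below builds a minimal Kubernetes Pod manifest as a plain Python dictionary. All names (the pod name, image, registry) are illustrative placeholders, not IBM product defaults; Kubernetes accepts JSON as well as YAML, so serializing the dict yields a manifest the API server could consume.

```python
import json

# A minimal Kubernetes Pod manifest, expressed as a Python dict.
# The pod name, label, and image are illustrative placeholders.
pod_manifest = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "demo-app", "labels": {"app": "demo"}},
    "spec": {
        "containers": [
            {
                "name": "demo",
                "image": "registry.example.com/demo:1.0",
                # Resource requests let the Kubernetes scheduler place
                # the container efficiently (the cost/ROI point above).
                "resources": {"requests": {"cpu": "250m", "memory": "128Mi"}},
            }
        ]
    },
}

# Serialize to JSON (a valid Kubernetes manifest encoding).
print(json.dumps(pod_manifest, indent=2))
```

Because the manifest is declarative, the same document deploys unchanged on any conformant Kubernetes cluster – the portability property the text describes.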
Finally, it is also worth mentioning Red Hat’s Ansible environment (which we describe in more detail in this report). Ansible provides capabilities to build and maintain private cloud infrastructure at web scale, and IBM delivers a set of Ansible automation and orchestration modules for IBM FlashSystem. Red Hat Ansible also supports IBM Spectrum Virtualize, with an Ansible Collection available on Ansible Galaxy and integration with Automation Hub. Further, Ansible provides support for creating, deleting, and managing hosts, volumes, pools, and mdisks. Ansible can collect facts, manage snapshots and volume clones, and manage data replication. IBM’s Ansible support also extends to non-IBM storage virtualized by its IBM Spectrum Virtualize software.
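Ansible playbooks are YAML documents; the sketch below models one as Python data to show the shape of a volume-provisioning task like those described above. The module name and every parameter here are assumptions for illustration only – the real interface is defined by the ibm.spectrum_virtualize collection on Ansible Galaxy.

```python
import json

# Sketch of an Ansible playbook that provisions a volume on a
# Spectrum Virtualize-based system, modeled as Python data.
# The module name and parameters below are ASSUMPTIONS for
# illustration; consult the ibm.spectrum_virtualize collection
# documentation for the actual interface.
playbook = [
    {
        "name": "Provision a FlashSystem volume",
        "hosts": "localhost",
        "tasks": [
            {
                "name": "Create 100 GiB volume",
                # Hypothetical module reference (collection.module):
                "ibm.spectrum_virtualize.ibm_svc_manage_volume": {
                    "clustername": "flashsystem.example.com",
                    "name": "app_vol_01",
                    "pool": "Pool0",
                    "size": "100",
                    "unit": "gb",
                    "state": "present",
                },
            }
        ],
    }
]

print(json.dumps(playbook, indent=2))
```

The point of the sketch is the declarative shape: `state: present` describes the desired end state, and Ansible converges the storage system toward it rather than scripting imperative steps.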

IBM’s Container Offerings

IBM offers a myriad of storage system solutions that use mechanical disk, Flash, and tape storage. IBM also offers a myriad of storage software products that provide management facilities, security, and resilience functions. An extensive list of IBM’s comprehensive storage portfolio can be found here.
IBM’s container-native strategy positions the IBM storage portfolio in an industry-leading role in simplified, secure, resilient storage management. But the move to container-native is not an either-or situation. Enterprises will continue to run traditional storage environments. For the next several years, expect IBM to continue to focus on enabling direct-attach and external storage to address various performance and capacity needs, and on moving the maturing ecosystem of enterprise-level data management services into the container world to support containerized mission-critical applications.
As for the move to containerization, many but not all of the products in IBM’s storage software portfolio are candidates to become container-native offerings. But all of IBM’s “Spectrum” line should ultimately be containerized. Container-native storage provides more efficient data management for enterprise and mission-critical applications and exploits the full range of capabilities enabled by Kubernetes including portability, scalability and consistency.

IBM Spectrum Protect Plus

IBM recently announced that its Spectrum Protect Plus server can be deployed as a service inside an OpenShift/Kubernetes container. As such, deployment is greatly simplified; management operations execute more quickly; and the service level agreement (SLA) policies contained in the container offer backup, recovery, replication, and long-term data retention facilities. It should be noted that IBM’s Spectrum Protect Plus also includes the ability to recover applications, namespaces, and clusters across different locations (for disaster recovery purposes, as well as for data reuse for testing, analytics, etc.). Further, support for IBM Cloud Object Storage provides immutability features that enable cyber resiliency.
With container-ready storage (using IBM Spectrum Virtualize CSI snapshots), IBM has been able to support its own storage systems, as well as 500+ storage offerings from other vendors. With container-native storage using Red Hat OpenShift Container Storage and Ceph – and the write-once, run-anywhere approach offered by running native containers – IBM is attempting to transparently extend its storage management solutions across the entire storage ecosystem.
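CSI snapshots of the kind mentioned above are requested through the standard Kubernetes VolumeSnapshot API. The sketch below shows that request shape as a Python dict; the snapshot class and claim names are placeholders, not values from any IBM driver.

```python
import json

# Minimal Kubernetes CSI VolumeSnapshot manifest as a Python dict.
# The snapshot class and PVC names are placeholders, not IBM defaults.
snapshot = {
    "apiVersion": "snapshot.storage.k8s.io/v1",
    "kind": "VolumeSnapshot",
    "metadata": {"name": "app-data-snap"},
    "spec": {
        # Which CSI driver/policy takes the snapshot (placeholder name):
        "volumeSnapshotClassName": "example-snapclass",
        # Which existing claim to snapshot (placeholder name):
        "source": {"persistentVolumeClaimName": "app-data"},
    },
}

print(json.dumps(snapshot, indent=2))
```

Because the request goes through the CSI abstraction, the same manifest works whether the backing array is an IBM system or one of the 500+ virtualized third-party offerings the text mentions.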
To make it easier to manage data across multiple cloud environments, IBM has introduced its “Cloud Pak for Multicloud Management” (MCM) – integrating data protection with overall cluster management. This application-centric, AI-driven management platform has been designed to provide full visibility and control over workloads in disparate cloud environments.

Storage for Data and AI in Containers

To eliminate data silos by providing transparent, seamless access to a single pool of data across hybrid cloud environments, IBM has introduced Spectrum Scale container-native storage access, which leverages OpenShift to create storage for Kubernetes while optimizing performance and eliminating duplicate data. The Spectrum Scale CSI Operator for Red Hat OpenShift enables administrators to easily configure and dynamically provision container access nodes from an OpenShift console.
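From an application team’s point of view, the dynamic provisioning described above is consumed through an ordinary PersistentVolumeClaim. The sketch below shows that claim shape as a Python dict; the storage class name is a placeholder assumption, as the real class names are defined when the Spectrum Scale CSI driver is installed.

```python
import json

# Sketch of a PersistentVolumeClaim that a CSI driver would satisfy
# via dynamic provisioning.  The storage class name is a placeholder;
# actual names come from the cluster's CSI driver configuration.
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "scale-data"},
    "spec": {
        # ReadWriteMany suits a shared, single-pool file system:
        # many pods can mount the same data concurrently.
        "accessModes": ["ReadWriteMany"],
        "resources": {"requests": {"storage": "10Gi"}},
        "storageClassName": "example-spectrum-scale-sc",  # placeholder
    },
}

print(json.dumps(pvc, indent=2))
```

The `ReadWriteMany` access mode is the relevant design choice here: it is what lets multiple containers across the cluster see one pool of data rather than per-pod silos.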
IBM Spectrum Discover adds support for data ingest from Red Hat OCS, including real-time updates, in addition to all the other major storage platforms it already supports. This makes it easy for IBM Watson solutions to search file and object data, with automatic cataloging and indexing in real time.
Other enhancements include container object access using the s3fs plugin delivered with Red Hat OpenShift, which enables Linux file access for existing applications – increasing efficiency and reducing complexity. IBM also showed off solutions involving AI container storage for Red Hat OpenShift that provide threat detection, data encryption, and cyber resiliency.
Storage for Hybrid Cloud with Containers
With respect to container-native storage, IBM issued the following Statement of Direction: “IBM intends to deliver a software-defined, container-native storage solution for RedHat OpenShift and Kubernetes container environments.” Stay tuned.

Summary Observations

As customers shift to containerized architectures, it is vital that the move be as painless as possible – or, in other words, as “simple” as possible. IBM is doing a lot of the programming work needed to simplify the deployment of its storage software into containers, while also adding value to its hardware offerings. Is IBM’s “storage made simple” claim still justifiable? We think that with the simplicity that containerization brings to the computing world, the answer is a definite “yes!” In fact, we believe that the new name of IBM’s storage presentation should be “storage made simpler.”
