Defining the road ahead for storage

Evan Powell says the future of storage will be software defined

Imagine a datacentre that dynamically responds to the needs of the enterprise: one that "sees" that your customers need more storage and computing power for their desktops, say, and adjusts to meet this and other demanding computing loads on demand, without additional technical wizardry or bespoke engineering.

I call this the software-defined datacentre. It promises flexibility; end-to-end flow-through provisioning and management; application awareness, so that applications can be deployed on demand; and the ability to access the appropriate resources when they are needed.

The goal is to tie IT directly to the productivity delivered by new applications. If this happened, innovative solutions could be adopted more quickly, with benefits accruing to the enterprise.

Moore's Law tells us that chip performance doubles roughly every 18 months, yet the average enterprise sees its productivity increase by no more than 10-15 per cent a year. I might add that the productivity growth rate of developed economies has been estimated at about two or three per cent a year. Imagine if we could get economies and enterprises to double their performance every 18 months.

I believe that, if we could get the benefits of Moore's Law to flow through to the enterprise, as it were, we would have not just improved IT but opened up the potential for growth in wealth and standards of living.

Unfortunately, however, we're seeing legacy storage vendors using their enormous marketing budgets to exploit the concept of software-defined storage. This risks confusing the situation.

Software-defined storage is not storage hardware that is driven by APIs. Nor is it storage software that only works with one vendor's hardware. Just because a vendor can drive storage hardware via some software doesn't mean its management user interface or multi-system management capabilities qualify as software-defined storage.

Software-defined storage is not something sold by a legacy storage hardware vendor. A hardware company might start calling itself a software company because it has hired some software developers – but that doesn't make it one.

Software-defined storage needs to provide a consistent storage offering across virtual and physical infrastructures alike, so it is not something that only works as a virtual storage appliance. Further, it cannot sacrifice fundamental data storage capabilities because it has to be enterprise-class.

When it comes to software-defined storage there are two main requirements. First, it is an approach to data storage and management that abstracts the underlying hardware, enabling the flexibility and dynamism promised by virtualisation to be fully realised and extended into the management of storage.

Second, the storage must be defined by software, which means being able to respond on demand to the requirements of the datacentre.

Software-defined storage has to be an open system that extends from on-disk formats through APIs and business models.

It has to be widely available and able to work with all major protocols, so it probably has to be open source or freemium to achieve widespread distribution. It also has to be able to serve block, file and object protocols.

Abstraction – the separation of data from the data control layers – is important. Everything should be delivered as software that defines the attributes of the storage in the server or on the JBOD on the fly, as required. And when I say everything, I mean RAID, HA, replication, NFS, CIFS and a variety of other protocols.
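To make the idea concrete, here is a minimal sketch of what defining those attributes in software might look like. Every class, field and function name below is a hypothetical illustration, not any vendor's actual API; the point is simply that protection level, replication and export protocols become parameters chosen at provisioning time, with the same code path whether the disks sit in a server chassis or an external JBOD.

```python
from dataclasses import dataclass

# Hypothetical sketch only: storage attributes declared in software and
# applied to commodity disks (server-local or JBOD) on the fly.

@dataclass
class VolumeSpec:
    name: str
    raid: str = "raidz2"          # data-protection layout, chosen in software
    replication: int = 1          # number of synchronous copies
    protocols: tuple = ("nfs",)   # e.g. nfs, cifs, iscsi
    ha: bool = False              # high-availability failover

def provision(spec: VolumeSpec, disks: list) -> dict:
    """Turn a bag of raw disks into a volume matching the spec.

    In a software-defined model the hardware is abstracted: the same
    call works regardless of where the disks physically live.
    """
    return {
        "volume": spec.name,
        "layout": spec.raid,
        "copies": spec.replication,
        "exports": list(spec.protocols),
        "ha": spec.ha,
        "disks": len(disks),
    }

vol = provision(
    VolumeSpec("vdi-pool", raid="mirror", replication=2,
               protocols=("nfs", "cifs")),
    disks=["d0", "d1", "d2", "d3"],
)
print(vol["layout"], vol["exports"])  # → mirror ['nfs', 'cifs']
```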

To define a product as software-defined storage, it must be possible to inherit SLA requirements from the compute level or from the overall datacentre business logic. VMware, CloudStack and OpenStack are all heading in the same direction in terms of being able to pass along requirements from the application provisioning and management layer through the entire stack, including storage.
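One way to picture SLA inheritance is as a translation step: the provisioning layer (a VMware, CloudStack or OpenStack scheduler, for instance) passes its requirements down, and the storage layer maps them onto a placement decision. The tier table and field names below are illustrative assumptions, not a real interface.

```python
# Hypothetical sketch: translating an SLA inherited from the compute /
# provisioning layer into a storage placement decision.

SLA_TIERS = [
    # what each tier can deliver (illustrative numbers)
    {"name": "gold",   "iops": 20000, "latency_ms": 2,  "media": "ssd-mirror"},
    {"name": "silver", "iops": 5000,  "latency_ms": 10, "media": "hybrid"},
    {"name": "bronze", "iops": 500,   "latency_ms": 50, "media": "hdd-raidz"},
]

def place(required_iops: int, max_latency_ms: int) -> str:
    """Pick the cheapest tier that still satisfies the SLA the
    application layer attached to the provisioning request."""
    for tier in reversed(SLA_TIERS):  # cheapest tier first
        if tier["iops"] >= required_iops and tier["latency_ms"] <= max_latency_ms:
            return tier["media"]
    raise ValueError("no tier satisfies the requested SLA")

print(place(4000, 20))  # → hybrid
```

The design point is that the decision lives in software and is driven by the request, not baked into a particular array.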

It must also be possible to run storage protection, replication, cloning and other capabilities co-resident on compute boxes for certain use cases. Ironically, this may mean removing aspects of that logic from the storage box itself and letting the storage deliver services that are instantiated on the physical device or rack as needed.

If the storage industry does this right, we can bring openness and a fundamentally more flexible storage approach to the market in the months and years to come.

Evan Powell is chief executive officer of Nexenta