Prioritising storage ahead of disaster
Adrian Groeneveld claims application-aware storage platforms are a necessity
Groeneveld: How do you do more with less and keep data available?
Doing more with less means centralisation and consolidation, both of which can affect high-availability projects and disaster recovery. So the datacentre should be a focus for overhaul.
Storage utilisation rates of less than 40 per cent offer plenty of opportunity to resellers. The difficulty currently comes from the rapid rate of storage growth: corporate governance has encouraged the development of complex storage architectures with multiple tiers of storage.
The pressure is on to consolidate multiple applications on fewer infrastructure resources. Yet most offerings struggle to do this.
Storage architecture needs to be designed and built from the ground up with resilience and availability in mind. It is not simply about extracting every drop of performance from the capacity pool, but also about delivering system control, which is crucial for preserving availability.
Storage platforms must be application aware. This gives an administrator the ability to balance performance against cost by varying the storage configuration and prioritising various business needs. Automated tiering takes that control away from the administrator.
If storage is application aware, bands of operational storage within each storage class can be prioritised. You can control priority, CPU, cache and network resources, and thereby deliver, or even migrate, a given quality of service (QoS).
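As a rough illustration of the idea, an application-aware array might hold a per-application profile that reserves shares of priority, CPU, cache and network, and use the priority band to decide which queued I/O is served first. The profile names, fields and weightings below are illustrative assumptions, not any vendor's actual API:

```python
from dataclasses import dataclass

# Hypothetical QoS profile: each application class reserves a share of
# controller CPU, cache and network, plus a scheduling priority band.
@dataclass
class QosProfile:
    name: str
    priority: int        # higher band = served first
    cpu_share: float     # fraction of controller CPU reserved
    cache_share: float   # fraction of read/write cache reserved
    net_share: float     # fraction of network bandwidth reserved

PROFILES = {
    "oltp":    QosProfile("oltp", priority=3, cpu_share=0.5, cache_share=0.5, net_share=0.4),
    "email":   QosProfile("email", priority=2, cpu_share=0.3, cache_share=0.3, net_share=0.3),
    "archive": QosProfile("archive", priority=1, cpu_share=0.2, cache_share=0.2, net_share=0.3),
}

def dispatch_order(pending_io: dict[str, int]) -> list[str]:
    """Drain per-application I/O queues in descending priority band,
    so a busy archive workload cannot starve the OLTP application."""
    return sorted(pending_io, key=lambda app: PROFILES[app].priority, reverse=True)

# Even with five times the queued I/O, archive is served last.
order = dispatch_order({"oltp": 120, "archive": 500, "email": 80})
```

Migrating QoS would then amount to reassigning an application to a different profile, with the array rebalancing the reserved shares.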
This level of control, I believe, is a necessity if resellers are to sell systems with easy-to-use, policy-based profiles. Policy-based approaches mean more choice for data protection. Tight integration with applications and application-aware systems are certainly better for protecting valuable data resources.
The message must be one of prevention and minimal outage impact. A recovery profile based on identifying an application’s priority would give double redundancy to data written to the system.
And a level of artificial intelligence can prevent a disaster before it happens. By employing adaptive learning, a system may detect suspect drives in the storage pool and pre-emptively copy their data to a spare drive before calling for a replacement.
Data remains safe and available, and drive rebuild times are shortened because most of the work is done before the failure occurs.
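The pre-emptive sparing described above can be sketched as a simple loop: watch per-drive error counts, and once a drive looks suspect, copy its contents to a hot spare while it is still readable, then request a replacement. The threshold, drive names and log format are illustrative assumptions only:

```python
SUSPECT_THRESHOLD = 5  # media errors before acting (illustrative value)

def suspect_drives(error_counts: dict[str, int]) -> list[str]:
    """Return drives whose recent error count exceeds the threshold."""
    return [d for d, errors in error_counts.items() if errors > SUSPECT_THRESHOLD]

def preemptive_copy(drive: str, spare: str, log: list[str]) -> None:
    """Copy a suspect drive's data to the spare while it is still readable.
    A later rebuild then only has to replay writes made since the copy."""
    log.append(f"copy {drive} -> {spare}")
    log.append(f"request replacement for {drive}")

# Example: only disk3 has crossed the error threshold, so only it is spared.
log: list[str] = []
counts = {"disk0": 1, "disk3": 9, "disk7": 0}
for d in suspect_drives(counts):
    preemptive_copy(d, spare="spare0", log=log)
```

A real implementation would trend SMART-style error statistics over time rather than apply a fixed threshold, which is where the adaptive learning comes in.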
All of this, of course, requires planning rather than simply reacting to an unfolding disaster. Companies continue to throw money at data storage, yet few do so with any method behind it.
The partner able to give its customers what they want first time around will be the one that succeeds.
Adrian Groeneveld is EMEA marketing director at Pillar Data Systems