With business growth comes the inevitable challenge of handling ever more data while keeping applications and information systems responsive.
Storage caching, tiering and tuning techniques can help extend the capabilities of serial-attached SCSI (SAS), NearLine-SAS and SSD storage, helping to maximise system performance. Each technique has a distinct role to play.
Caching allows storage system controllers to use SSD or flash technology to extend the on-board cache with a Level 2 (L2) cache. I/O requests are usually served from the primary Level 1 (L1) cache, but under heavy load there may not be enough primary cache to go round. Caching's goal is to increase I/O throughput by raising the cache hit ratio.
The storage system identifies frequently accessed, or "hot", data using most-frequently-used or most-recently-used algorithms. As I/O patterns change, the storage system monitors and records them, automatically aligning storage resources to workload requirements.
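The combination of frequency and recency tracking described above can be sketched in a few lines. This is a toy illustration, not any vendor's actual algorithm: the block identifiers, the `promote_after` threshold and the class name are all assumptions made for the example.

```python
from collections import Counter, OrderedDict

class HotDataTracker:
    """Toy sketch of hot-data identification: a frequency counter
    (most-frequently-used) alongside an LRU-style ordering
    (most-recently-used). Thresholds are illustrative only."""

    def __init__(self, promote_after=3):
        self.freq = Counter()        # access count per block
        self.recent = OrderedDict()  # insertion order tracks recency
        self.promote_after = promote_after

    def record_access(self, block_id):
        self.freq[block_id] += 1
        # Refresh recency: move the block to the most-recent end.
        self.recent.pop(block_id, None)
        self.recent[block_id] = True

    def hot_blocks(self):
        # Blocks accessed often enough to be candidates for L2 cache.
        return {b for b, n in self.freq.items() if n >= self.promote_after}
```

A real storage controller would run this kind of bookkeeping continuously over a monitoring window, decaying old counts so that yesterday's hot data does not crowd out today's.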
The amount of hot data may be small compared with total system storage capacity, yet account for 50 to 60 per cent of all I/O activity. That I/O intensity is why L2 cache is so valuable: a large proportion of I/O can be served from secondary cache, cutting the time required to access data compared with traditional spinning hard drives.
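The effect of serving that hot fraction from flash can be seen in a simple weighted-average calculation. The latency figures below are illustrative assumptions, not vendor measurements:

```python
def effective_latency_ms(hit_ratio, l2_latency_ms=0.1, hdd_latency_ms=8.0):
    """Average service time when a fraction of I/O (hit_ratio) is
    served from L2 flash cache and the remainder from spinning disk.
    Default latencies are rough, assumed figures for illustration."""
    return hit_ratio * l2_latency_ms + (1 - hit_ratio) * hdd_latency_ms

# With 60 per cent of I/O hitting flash, the average drops from
# 8 ms (all-HDD) to roughly 3.3 ms, even though most of the data
# still lives on spinning disk.
```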
If serving data from the extended cache would improve I/O performance, the storage system migrates it to L2 cache without administrator intervention, continuously monitoring I/O patterns to keep its view of demand up to date.
Advanced caching functions can benefit dynamic environments, such as virtualised servers, and applications requiring low latency, because I/O patterns can change dramatically from day to day. Depending on the application, caching may increase I/O performance by up to 800 per cent.
However, multiple intersecting factors must be evaluated, including transaction profiles as well as input/output operations per second (IOPS). Automatic caching can also help maintain good performance during hardware failures. And every business knows that performance is crucial if customers are to be kept happy.
In an enterprise-class storage system with two controllers in active/active mode, if one controller fails, the other takes over. The surviving controller must cope with twice the traffic until the failed hardware is replaced, and in the absence of automatic caching, performance will probably suffer.
If there is a secondary L2 cache, performance degradation is minimised and up to 80 per cent of I/O performance can be maintained.

Data tiering means the storage system analyses traffic in real time and places data on the appropriate class of storage based on performance thresholds, without any downtime.
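A threshold-based tiering decision can be sketched as follows. The tier names match the drive classes mentioned earlier, but the IOPS thresholds are hypothetical; real systems use vendor-specific policies measured over a monitoring window:

```python
# Assumed minimum-IOPS thresholds per tier, hottest first.
TIERS = [
    ("ssd", 500),          # hottest, latency-sensitive data
    ("sas", 50),           # warm data on SAS drives
    ("nearline_sas", 0),   # cold, capacity-oriented storage
]

def choose_tier(observed_iops):
    """Return the first (fastest) tier whose threshold the observed
    I/O rate meets; cold data falls through to NearLine-SAS."""
    for tier, min_iops in TIERS:
        if observed_iops >= min_iops:
            return tier
    return TIERS[-1][0]
```

In practice the system would also rate-limit migrations so that moving data between tiers does not itself become an I/O burden.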
Adding an automatic tuning feature to the mix makes the picture complete. No administrator wants to manually tune the storage system if it can be avoided – let alone during a growth period.
But automation has improved the tuning of even basic storage systems, and sophisticated storage management software can use dynamic performance analysis to ensure devices get the IOPS they require, maintaining system performance even if hardware fails.
Customers with an adaptable, dynamic storage architecture will be able to redeploy IT resources.
Joe Disher is senior director of product marketing at Overland Storage