More about enterprise SSD variations
Michael Letschin talks about different options for SSD adoption
Many businesses believe flash or solid-state storage can speed up applications and virtual machines and make systems run faster overall. But what are enterprises to do when SSD fails? Because flash cells endure only a finite number of program/erase cycles, SSD wear-out is a real concern, and resellers will need to advise businesses on how to manage it.
Some vendors have added wear levelling, cell-care features, or over-provisioned capacity that is not advertised, providing fresh cells for new writes while old blocks are erased in the background. But these protections live in the drive firmware, so how can enterprises protect drives whose longevity depends entirely on the manufacturer?
When the enterprise bought its storage, did it get a choice of SSD? I think this is unlikely. Resellers need to give their customers this choice, and drives of various cell types and endurance levels are available to support it.
A ZFS-type combined file system and logical volume manager can show the enterprise exactly what drives it has and let it manage their life span and speed, and commodity drives can be swapped out as better technology becomes available.
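By way of illustration, both jobs are single commands in ZFS; the pool name (tank) and device names below are placeholders, not a prescription:

    # Show the drives in an existing pool and their health
    zpool status tank

    # List pool and per-device capacity and usage
    zpool list -v tank

    # Swap a worn commodity SSD for a newer drive
    # (c1t2d0 and c1t3d0 are placeholder device names)
    zpool replace tank c1t2d0 c1t3d0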
Combine best-in-class SSD protection with a file system built to optimise SSD usage by favouring DRAM wherever possible and isolating reads from writes. ZFS uses a hybrid storage pool for all data storage: it inherently separates the read cache from the write cache, and each can sit on its own SSD, so the drive type can be selected specifically for the job.
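As a concrete sketch, a hybrid pool can be created in one command, with the write path (log) and read path (cache) each placed on devices chosen for the role; again, all names here are placeholders:

    # Main pool on spinning disks, mirrored for redundancy;
    # 'log' devices hold the ZIL (write path),
    # 'cache' devices hold the L2ARC (read path)
    zpool create tank mirror c1t0d0 c1t1d0 \
        log mirror c2t0d0 c2t1d0 \
        cache c3t0d0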
SSD wear is most commonly associated with write operations. In ZFS, the ZFS intent log (ZIL) handles these, typically on single-level cell (SLC) drives or a RAM-based SSD such as ZeusRAM; SLC cells wear much more slowly than denser cell types. Only synchronous writes go to the ZIL, and only after they are first written to the adaptive replacement cache (ARC) in the server's DRAM.
Once the data blocks are written to the ZIL, an acknowledgement is sent to the client, and the data is later written asynchronously to the spinning disk.
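Which writes take this synchronous path can be controlled per dataset with the sync property; the dataset names below are hypothetical:

    # Check how a dataset handles synchronous writes
    zfs get sync tank/db

    # Force every write through the ZIL
    # (safest; the SLC log device absorbs the wear)
    zfs set sync=always tank/db

    # Or treat synchronous writes as asynchronous,
    # bypassing the ZIL for workloads that can tolerate it
    zfs set sync=disabled tank/scratch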
Writes from the client are not the only writes an SSD sees. In tiered storage, blocks of data must be written to the read cache before they can be read from it. This is true of ZFS and its hybrid storage pools too, but the differentiator is how often blocks are written to the level 2 adaptive replacement cache (L2ARC).
The L2ARC normally sits on multi-level cell (MLC) or enterprise MLC (eMLC) SSDs and is the second place the system looks for commonly used data blocks, after the ARC in DRAM.
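A rough sketch of putting the L2ARC on an MLC drive and then tuning, per dataset, what it is allowed to hold (device and dataset names are placeholders):

    # Add an MLC SSD to an existing pool as an L2ARC cache device
    zpool add tank cache c3t1d0

    # Per dataset, choose what the L2ARC may hold:
    # all (the default), metadata only, or none
    zfs set secondarycache=all tank/vmstore
    zfs set secondarycache=metadata tank/backups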
Other file systems may take a similar approach, but they typically use a least recently used (LRU) algorithm. LRU cannot cope with the situation where blocks in regular use are evicted by one large sequential read, from a backup for instance. The adaptive replacement algorithm behind the ARC and L2ARC accounts for such blocks, retaining data based on both how recently and how frequently it is used.
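This dual recency/frequency behaviour can be observed in the ARC's own statistics, which expose separate hit counters for the recency (MRU) and frequency (MFU) lists; the paths below assume a Linux OpenZFS or illumos-based system respectively:

    # Linux/OpenZFS: MRU vs MFU hit counters
    grep -E '^(mru_hits|mfu_hits|mru_ghost_hits|mfu_ghost_hits)' \
        /proc/spl/kstat/zfs/arcstats

    # illumos-based systems expose the same counters via kstat
    kstat -p zfs:0:arcstats | grep -E 'm(r|f)u'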
The way data moves to and from SSD via the ZIL and L2ARC has a significant effect not just on SSD wear but also on power consumption, which will be of paramount importance in the datacentre of the future.
This approach allows systems to be built with an all-SSD footprint and minimal power draw, or with slower, high-capacity drives for bulk storage, while maintaining high performance.
ZFS offers power and economy wrapped up in one neat package. Customers can tune the system up or down as required, simply by adding SSDs to, or removing them from, the ZIL or L2ARC, as the sketch below shows.
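That tuning is a one-line operation in each direction; device names are once more placeholders:

    # Scale up: more write cache (mirrored SLC)
    # and more read cache (MLC)
    zpool add tank log mirror c2t2d0 c2t3d0
    zpool add tank cache c3t2d0

    # Scale down: cache devices can be removed without downtime
    # (a mirrored log is removed by the vdev name zpool status reports)
    zpool remove tank c3t2d0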
Michael Letschin is director of sales engineering at Nexenta