The horrors of VDI storage

Alex Aizman puts forward his take on the issues surrounding the virtualised desktop

At first sight, desktop virtualisation looks like an easy recommendation for companies seeking to manage and maintain their PC infrastructure.

What's not to like about something that allows a business to install, update, manage, maintain and secure a common desktop operating system centrally across all PCs?

However, making things easier at one point can add complexity somewhere else. It's a bit like the scene in a horror film where the hero or heroine runs into a room and closes the door, only to be confronted by something even worse.

Remember that the next time someone extols the benefits of virtual desktop infrastructure (VDI).

It's not quite a horror movie, but there are some things to consider and address behind the scenes. One of the biggest of these issues is the need for more storage. Now that can be scary.

Consolidating dozens or even hundreds of desktops on a single server means loading that server with small, random I/O operations generated by all those desktops.

This is the type of traffic commonly associated with boot storms, login storms and "bursty" random write workloads. Storage requirements are therefore measured not just in raw capacity but also in IOPS (I/O operations per second).

A storage subsystem supporting hundreds or thousands of Windows VDI desktops is therefore under heavy strain. During a boot storm, for instance, virtual desktops may take minutes to boot.
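To get a feel for the scale, here is a back-of-envelope sketch in Python. The per-desktop figures are illustrative assumptions rather than measurements: a Windows desktop might average 10 to 20 IOPS in steady state, yet demand ten times that while booting.

    # Back-of-envelope boot-storm sizing. All per-desktop figures are
    # illustrative assumptions, not measured values.
    desktops = 500
    steady_iops_per_desktop = 15    # assumed steady-state load
    boot_iops_per_desktop = 150     # assumed load while a desktop boots

    steady_total = desktops * steady_iops_per_desktop
    boot_total = desktops * boot_iops_per_desktop

    print(f"Steady state: {steady_total:,} IOPS")   # 7,500 IOPS
    print(f"Boot storm:   {boot_total:,} IOPS")     # 75,000 IOPS

    # A 15k RPM disk delivers roughly 180 random IOPS, so serving the
    # storm from spinning disk alone would take hundreds of spindles,
    # far more than capacity requirements alone would suggest.
    print(f"Disks for the storm alone: {boot_total // 180}")  # 416

The point of the arithmetic is that sizing for the average load leaves the system an order of magnitude short at exactly the moments users notice.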

In a server environment, peak workloads tend to be spread randomly among applications and rarely coincide. In a VDI environment, however, peak workloads are often concurrent: hundreds of users power on and log in at roughly the same times of day.

Whereas in server virtualisation, performance fluctuations have arguably little impact on immediate end-user experience, they can have a profound negative effect on VDI users.

Storage in a traditional infrastructure often carries a fairly sequential load, with a read/write ratio of roughly 80 per cent reads to 20 per cent writes.

Combining hundreds of desktops in a VDI environment shifts this pattern from sequential to random, with read/write ratios of 40:60 or worse. Because each logical write typically costs the storage backend several physical I/Os, this inevitably creates new challenges for storage software and hardware.
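A minimal sketch shows why the write-heavy mix hurts. It assumes the standard textbook RAID write penalties (two physical I/Os per write for RAID 10, four for RAID 5); actual overheads vary by array.

    # Backend load = reads + writes * write_penalty.
    # Penalties are the standard textbook values: RAID 10 = 2, RAID 5 = 4.
    def backend_iops(frontend_iops, read_fraction, write_penalty):
        reads = frontend_iops * read_fraction
        writes = frontend_iops * (1 - read_fraction)
        return reads + writes * write_penalty

    for label, read_frac in [("traditional 80:20", 0.8), ("VDI 40:60", 0.4)]:
        for raid, penalty in [("RAID 10", 2), ("RAID 5", 4)]:
            print(f"{label} on {raid}: "
                  f"{backend_iops(10_000, read_frac, penalty):,.0f} backend IOPS")

For the same 10,000 front-end IOPS, the VDI mix on RAID 5 needs 28,000 backend IOPS against 16,000 for the traditional mix: the workload got 75 per cent heavier without a single desktop being added.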

One answer is to use SSDs to cache read and write operations as part of a tiered storage strategy, speeding up the whole process. SSDs can help customers get more from VDI.
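VDI read traffic is unusually cache-friendly, because desktops cloned from a common golden image keep re-reading the same blocks. The toy simulation below is a minimal sketch of that effect under assumed parameters, not any vendor's implementation:

    from collections import OrderedDict

    # Toy model of an SSD read cache sitting in front of spinning disk.
    # Real caching layers are far more sophisticated than plain LRU.
    class LRUCache:
        def __init__(self, capacity_blocks):
            self.capacity = capacity_blocks
            self.blocks = OrderedDict()
            self.hits = self.misses = 0

        def read(self, block):
            if block in self.blocks:
                self.blocks.move_to_end(block)   # cache hit: refresh LRU order
                self.hits += 1
            else:
                self.misses += 1                 # miss: served from disk, then cached
                self.blocks[block] = True
                if len(self.blocks) > self.capacity:
                    self.blocks.popitem(last=False)

    # Boot storm: 500 desktops cloned from one image read the same
    # 10,000 OS blocks, so nearly every read after the first desktop hits.
    cache = LRUCache(capacity_blocks=20_000)
    for desktop in range(500):
        for block in range(10_000):
            cache.read(block)

    print(f"hit rate: {cache.hits / (cache.hits + cache.misses):.1%}")  # 99.8%

Only the first desktop's reads go to disk; the other 499 are absorbed by the cache, which is exactly the behaviour that makes SSD tiering so effective against boot storms.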

Some companies may be able to meet VDI requirements with their existing storage. More often, though, something purpose-built is needed to overcome VDI's performance and cost challenges.

Implementations must be planned carefully and the right amount of storage allocated. Monitoring and capacity planning are important, while inline compression, data deduplication and thin provisioning can all help mitigate the cost.
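To illustrate how those techniques compound, here is a hypothetical sizing calculation. Every ratio in it is an assumption made for the example; real savings vary widely between environments and should be measured, not guessed.

    # Hypothetical capacity sizing for 500 desktops. All ratios are
    # illustrative assumptions; measure your own environment.
    desktops = 500
    allocated_gb = 40          # the disk size each user sees
    thin_used_fraction = 0.5   # thin provisioning: only written blocks use space
    dedup_ratio = 5.0          # cloned desktops share most OS and app blocks
    compression_ratio = 1.5

    logical = desktops * allocated_gb                      # 20,000 GB allocated
    written = logical * thin_used_fraction                 # 10,000 GB written
    physical = written / dedup_ratio / compression_ratio   # ~1,333 GB on disk

    print(f"allocated: {logical:,.0f} GB, physical needed: {physical:,.0f} GB")

Under these assumptions, the 20 TB users think they have shrinks to well under 2 TB of physical capacity, which is what can make flash-backed tiers affordable for VDI.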

Many of the issues surrounding VDI are now out in the open and a few vendors are launching technologies with a view to mitigating these problems.

This particular horror story could have a happy ending.

Alex Aizman is chief technology officer at Nexenta