Addressing cloud latency

Alex Watson-Jackson explains how to address distance, congestion and monitoring issues in the cloud

Many organisations are still failing to get to grips with latency issues in the cloud. This is somewhat surprising, as file access and application performance – how quickly information is delivered to the end user or how long it takes to download or open a file – is often a top priority for end users.

A common cause of latency is the distance between the office and the datacentre. Others include congestion on the network, as well as packet loss and windowing. It is often the physical issue of distance that needs addressing.

For example, many organisations locate their office in a different country from their datacentre. For some applications, the distance between the two can lead to slow file transfers and poor application performance.
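As a rough illustration of why distance matters, the speed of light in fibre puts a hard floor under round-trip time before any congestion is even counted. A minimal sketch – the 900 km route length and the refractive index are illustrative assumptions, not figures from this article:

```python
# Propagation delay alone sets a minimum round-trip time over fibre.
SPEED_OF_LIGHT_KM_S = 299_792      # km/s in a vacuum
FIBRE_REFRACTIVE_INDEX = 1.47      # typical value for single-mode fibre (assumed)

def round_trip_ms(route_km: float) -> float:
    """Minimum round trip over a fibre route of the given length, in milliseconds."""
    speed_in_fibre_km_s = SPEED_OF_LIGHT_KM_S / FIBRE_REFRACTIVE_INDEX
    return 2 * route_km / speed_in_fibre_km_s * 1000

# Hypothetical 900 km fibre route between an office and a datacentre
print(f"{round_trip_ms(900):.1f} ms")  # roughly 8.8 ms before any queuing
```

No amount of tuning removes this floor, which is why shortening the route – choosing a provider with a datacentre nearer the office – is often the only real fix.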

Clearly, organisations should look to service providers that can supply multiple datacentres across Europe.

When it comes to tackling latency, it is not just the physical passage of light that causes issues, but the way the signals are sent through networks. Congestion affects latency because when too much data is sent down any connection, the network node equipment holds some of it back in a queue, while it sends the data that it received first.
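The queuing effect described above can be put in numbers: data already waiting in a node's buffer must drain onto the link before a new arrival can be sent. A short sketch, with invented figures for the queue depth and link speed:

```python
def queueing_delay_ms(queued_bytes: int, link_mbps: float) -> float:
    """Time a new arrival waits while earlier-queued data drains onto the link."""
    queued_bits = queued_bytes * 8
    return queued_bits / (link_mbps * 1_000_000) * 1000

# 500 KB already queued ahead of a packet on a 100 Mbit/s link
print(f"{queueing_delay_ms(500_000, 100):.0f} ms")  # 40 ms of extra latency
```

Unlike propagation delay, this component grows with load, which is why a link that feels fast when quiet can feel sluggish at peak times.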

Windowing is caused by the way protocols check that data has been correctly received. The receiver has to acknowledge the data it has already received before the sender will transmit more. This means more delay and more congestion for every packet and, if packets are lost, it gets worse because replacements must be sent. This queuing, acknowledging and resending obviously slows the journey from A to B.
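The throughput cost of windowing follows from a simple rule: a sender can have at most one window of unacknowledged data in flight per round trip, so throughput is capped at window size divided by round-trip time. A sketch using an assumed 64 KB window and a 20 ms round trip:

```python
def max_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Upper bound on throughput when one window must be acknowledged per round trip."""
    bits_per_round_trip = window_bytes * 8
    return bits_per_round_trip / (rtt_ms / 1000) / 1_000_000

# Assumed 64 KB window over a 20 ms round trip
print(f"{max_throughput_mbps(64 * 1024, 20):.1f} Mbit/s")  # about 26.2 Mbit/s
```

Note that the cap is independent of the link's raw capacity: a gigabit connection with that window and round trip still delivers the same ceiling, which is why distance hurts file transfers even on fast links.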

This can often become the source of poor user experience after moving traditional desktop applications into a cloud environment because the network connection to the cloud service does not prioritise traffic appropriately.

Often applications such as SharePoint are critical to the day-to-day running of the business, and users will not tolerate 10–20-second delays when they open, save or close a document. Yet the congestion may be caused by applications such as Exchange, where users are perhaps prepared to tolerate slightly slower response times.

If you combine the competition for bandwidth between business applications with the increase in online recreational video streaming in the workplace, managing application performance to end users becomes much harder.

Traditional methods like WAN acceleration or network Quality of Service do not suit companies trying to prioritise cloud-based services from public providers. As a result, it is becoming more important for enterprises to consider cloud service providers offering application performance guarantees and stringent service level agreements.

Once distance and latency have been addressed, the service provider should monitor end-user activity in the cloud to help minimise bottlenecks or capacity shortages. Sophisticated monitoring tools can provide continuous analysis of service performance to ensure the promised benefits of the cloud reach end users.
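As a sketch of the kind of analysis such monitoring tools perform, the snippet below condenses a set of round-trip-time samples into a median and a 95th-percentile tail figure – the sample values are invented for illustration, and real tools gather them continuously rather than from a fixed list:

```python
import statistics

def latency_report(samples_ms: list[float]) -> dict:
    """Summarise RTT samples into a median and a 95th-percentile tail figure."""
    ordered = sorted(samples_ms)
    p95 = ordered[int(0.95 * (len(ordered) - 1))]  # nearest-rank percentile
    return {"median_ms": statistics.median(ordered), "p95_ms": p95}

# Invented samples: a steady ~12 ms service with one congestion spike
samples = [12.1, 11.8, 12.4, 55.0, 12.0, 12.2, 13.1, 12.5, 11.9, 12.3]
print(latency_report(samples))
```

Tracking the tail as well as the median matters because end users notice the occasional 55 ms spike long before an average figure moves.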

The good news is that many businesses are starting to overcome distance, congestion and the need for more granular monitoring. At the end of the day, the full benefits of the cloud come from having a flexible, elastic computing environment that guarantees security as well as performance.

This is now being achieved by companies using service providers to supply a full enterprise-quality cloud infrastructure service, with performance measures that include the network.

Alex Watson-Jackson is IT infrastructure and services solutions marketing manager at Colt