Many factors are putting pressure on networks. Big data, for example, reflects a strong trend for the centralisation of data, prompted by the desire for increased efficiency and cost reduction.
We are seeing more datacentres, datacentre consolidation and cloud computing; server and desktop virtualisation; and real-time collaboration. But big data can mean big problems.
When data is centralised, performance and scalability issues are not always taken sufficiently into account.
Users find that the volume of data keeps expanding, and the WAN infrastructure they have set up begins to suffer problems such as latency, packet loss, slow applications, VoIP degradation and backup issues.
Centralising large volumes of data is new to many users, as are the problems they encounter as a result. Having set out to cut costs, they find that a new set of problems has appeared, and that staff are still complaining that their computers are too slow.
On top of that, there is increasing use of peer-to-peer networking, streaming media, videoconferencing and so on.
One obvious answer is to increase bandwidth, but for most networks that is not a great idea. The cost of additional bandwidth and the infrastructure to support it, added to the disruption involved, tends to negate the initial cost savings, just as adding more motorway lanes has, in most cases, not proportionately improved traffic flow.
A lot of VARs make margin by advising corporates on network performance management, and some of this advice may well seem old hat.
However, big data has added issues of data management in big pipes, which is different from traditional application and WAN acceleration.
There are several ways to address these issues. For example, many companies have no idea how their pipes are performing, so the first step to maximising bandwidth use is to get some visibility of the network and applications being used.
Bandwidth-hungry, non-business applications such as peer-to-peer apps, YouTube or Skype can be extremely demanding. Traffic can be managed using deep packet inspection (DPI) offerings that monitor, manage and report on network, application and big data activity on the WAN, helping to clear the pipes.
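The visibility step described above, before any DPI appliance is bought, often amounts to aggregating traffic by application and seeing who the heavy consumers are. A minimal sketch in Python, using invented flow records (host names, application labels and byte counts are all hypothetical, standing in for what a NetFlow or DPI export might contain):

```python
from collections import defaultdict

# Hypothetical flow records as a flow-export or DPI tool might report them:
# (source host, application label, bytes transferred). All values invented.
flows = [
    ("10.0.0.5", "backup",  900_000_000),
    ("10.0.0.7", "youtube", 450_000_000),
    ("10.0.0.7", "voip",     12_000_000),
    ("10.0.0.9", "p2p",     700_000_000),
    ("10.0.0.5", "crm",      80_000_000),
]

def bytes_per_application(flows):
    """Aggregate traffic volume by application label."""
    totals = defaultdict(int)
    for _host, app, nbytes in flows:
        totals[app] += nbytes
    # Sort heaviest first so non-business traffic stands out immediately.
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

for app, nbytes in bytes_per_application(flows):
    print(f"{app:8s} {nbytes / 1e6:10.1f} MB")
```

Even a crude report like this makes the conversation with the customer concrete: the top two or three lines are usually where the pipe is going.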
Another option is WAN optimisation and acceleration. This can alleviate traffic contention problems stemming from packet loss, improving performance in a relatively cost-effective and less disruptive way.
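One of the techniques WAN optimisers rely on is compressing repetitive payloads before they cross the link. This toy sketch shows only that single step (real appliances also deduplicate, cache and tune TCP), using Python's standard zlib; the sample payload is invented:

```python
import zlib

# Enterprise WAN traffic is often highly repetitive (backups, replicated
# queries, file syncs), which is what makes compression on the link pay off.
payload = b"SELECT * FROM orders WHERE region='EMEA';" * 500  # invented sample

compressed = zlib.compress(payload)
ratio = len(compressed) / len(payload)

print(f"original: {len(payload)} bytes, "
      f"on the wire: {len(compressed)} bytes ({ratio:.1%} of original)")

# The receiving end restores the payload exactly, so applications see no change.
assert zlib.decompress(compressed) == payload
```

The point for the customer conversation is that fewer bytes on the wire means less contention and fewer retransmissions after packet loss, without touching the underlying circuits.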
Data centralisation can save organisations money but for many it has not delivered the performance benefits they expected, or were perhaps promised. At the same time, the complexity involved with the changes has reduced visibility of what – and sometimes where – the actual problems are.
Adding more bandwidth and the associated infrastructure costs money and is difficult to sell. Providing visibility, coupled with flow management, to cope with leaps in data volumes can be cheaper while remaining scalable.
Demands on networks are going to keep increasing over the next few years. That means more opportunities for VARs to sell offerings that optimise the available bandwidth and improve application performance without involving wholesale infrastructure upgrades.
Ian Kilpatrick is chairman of Wick Hill Group