Stopping what you can't see
A lack of detailed insight into network traffic means that cyber attacks often cannot be detected, let alone prevented, writes Scott Sullivan
In the past few months Sony Pictures suffered a cyber attack in which hackers made off with 100TB of data, and a breach at office supplies firm Staples saw nearly 1.16 million customer payment card details stolen.
The accepted wisdom now seems to be that attacks like these will take place however hard you try to prevent them – and they will probably unfold over a period of time.
That means a key priority is the ability to see, monitor, control and understand attacks earlier, in order to deploy appropriate countermeasures.
We’ve all heard the old adage ‘it’s not if, but when’ applied to the threat of a security breach, but in looking for precisely where a breach might take place on a network, the answer is ‘it’s not where, but everywhere’.
Thankfully, there is growing recognition that attacks are becoming more distributed, blended and complex in nature, and most organisations are realising the limitations of the ‘old fashioned’ approach to security: simply focusing on constructing as hard an outer shell as possible.
A good example of why this approach is failing is that modern cyber attacks often bypass the shell entirely, concentrating instead on exploiting and hijacking legitimate users who are already inside it.
That is not to say that network perimeter security is becoming any less relevant – plenty of less advanced, more automated attacks distribute huge amounts of malicious traffic and will successfully find and exploit weaknesses if perimeter solutions are not in place.
However, a hardened outer shell needs to form just one part of a framework of network solutions that crucially provide the ability to perform deeper statistical analysis on network traffic, and that can better identify and prevent not just conventional threats, but any kind of suspicious network activity.
The traditional approach to network security is to have a number of devices (firewalls, anti-spam, anti-malware, IDS and so on) sitting on the production network in the data path, or ‘in band’.
This older model looks like a castle, with moats and drawbridges presenting an obstacle to would-be attackers. But as network speeds and loads increase from 1GbE to 10GbE and beyond, the model becomes increasingly difficult to maintain – the security tools can become a serious bottleneck and, if they fail, will cause unscheduled downtime that leaves the network even more vulnerable.
Another issue is that, because a range of security tools may need to analyse the same traffic in order to detect different types of attack, there can be a daisy-chain of processes – a series of security devices that process traffic in sequence and through which each data packet must pass.
This is another reason why each security tool can present reliability, performance and scalability risks for organisations, given the potential for tool over-subscription or failure.
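To make this concrete, the sketch below (in Python, with hypothetical tool names and latencies) shows the daisy-chain in miniature: every packet is inspected by every tool in sequence, so delay accumulates per hop and a single fail-closed tool stops forwarding altogether.

```python
# Illustrative sketch of an in-band daisy-chain: every packet passes through
# each tool in turn, so inspection delay accumulates and a single failed
# (fail-closed) tool blocks forwarding. Names and latencies are made up.

class ToolFailure(Exception):
    pass

class InlineTool:
    def __init__(self, name, latency_ms, healthy=True):
        self.name = name
        self.latency_ms = latency_ms
        self.healthy = healthy

    def inspect(self, packet):
        if not self.healthy:
            raise ToolFailure(f"{self.name} is down: traffic blocked")
        packet["latency_ms"] += self.latency_ms  # each hop adds delay
        return packet

chain = [
    InlineTool("firewall", 0.05),
    InlineTool("anti-spam", 0.20),
    InlineTool("anti-malware", 0.30),
    InlineTool("ids", 0.25),
]

def forward(packet):
    for tool in chain:          # strict sequence: no tool can be skipped
        packet = tool.inspect(packet)
    return packet

pkt = forward({"src": "10.0.0.5", "dst": "192.0.2.10", "latency_ms": 0.0})
print(f"delivered after {pkt['latency_ms']:.2f} ms of added inspection delay")
```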
This effect is often mitigated by introducing traffic management policies that bypass devices under certain conditions in order to avoid the loss of critical services. From a security perspective this is clearly not ideal, but from an operational perspective it is at times unavoidable.
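A bypass policy of that kind might behave something like the following sketch (capacities and traffic figures are hypothetical): when a tool is over-subscribed, packets are routed around it so the service stays up, but those packets are never inspected by that tool.

```python
# Sketch of a fail-open / bypass policy for an over-subscribed in-band tool.
# Capacities and traffic figures are made up for illustration.

class InlineTool:
    def __init__(self, name, capacity_pps, offered_pps):
        self.name = name
        self.capacity_pps = capacity_pps
        self.offered_pps = offered_pps

    def over_subscribed(self):
        return self.offered_pps > self.capacity_pps

    def inspect(self, packet):
        packet.setdefault("inspected_by", []).append(self.name)
        return packet

chain = [
    InlineTool("firewall", capacity_pps=2_000_000, offered_pps=1_500_000),
    InlineTool("ids", capacity_pps=500_000, offered_pps=1_500_000),  # overloaded
]

def forward(packet):
    for tool in chain:
        if tool.over_subscribed():
            # Keeps the service up, but this packet is never inspected by
            # the bypassed tool -- a blind spot an attacker can exploit.
            continue
        packet = tool.inspect(packet)
    return packet

pkt = forward({"src": "10.0.0.5", "dst": "192.0.2.10"})
print("inspected by:", pkt.get("inspected_by"))   # the overloaded IDS is missing
```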
These drawbacks have driven the adoption of ‘out of band’ network designs, where tools are out of the flow of network traffic, in order to improve security performance and make the process of traffic monitoring more efficient and reliable.
This security environment looks less like a castle and more like a hotel: the focus is on controlling who checks in, limiting which rooms they have access to, and tracking who has accessed what and when.
The out of band model lends itself to analysing network performance, and it gives security teams the ability to identify changes in network behaviour over time, supporting forensic investigations by making it possible to understand and replay the history of security incidents.
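As a rough illustration of the difference, the sketch below (hypothetical names and a toy detection rule) copies each packet to tools that sit off the data path, in the way a tap or mirror port would: forwarding never waits for the tools, and a tool failure costs visibility rather than uptime.

```python
# Sketch of out-of-band monitoring: tools receive a copy of the traffic,
# as a network tap or mirror port would provide, and sit outside the
# forwarding path. Names and the detection rule are made up.
import copy

def ids_analyse(packet):
    # Placeholder for an out-of-band sensor with a toy rule.
    if packet["dst_port"] == 4444:
        print(f"alert: unusual destination port used by {packet['src']}")

monitoring_tools = [ids_analyse]

def deliver(packet):
    print(f"forwarded {packet['src']} -> {packet['dst']}:{packet['dst_port']}")

def forward(packet):
    deliver(packet)                      # forwarding is never blocked by tools
    for tool in monitoring_tools:
        try:
            tool(copy.deepcopy(packet))  # each tool analyses its own copy
        except Exception:
            pass                         # a failed tool loses visibility, not connectivity

forward({"src": "10.0.0.5", "dst": "192.0.2.10", "dst_port": 4444})
```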
One caveat concerns sampling, since many monitoring deployments analyse only a subset of packets or flows rather than every packet. Although sampling traffic may have a limited impact on monitoring general network performance, it has more significant implications for monitoring for security incidents – the whole picture is, by definition, not being made available.
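The sketch below, with made-up traffic figures, illustrates the point: 1-in-1,000 packet sampling still gives a reasonable estimate of overall volume, which is good enough for performance monitoring, but a short, low-volume suspicious flow may never appear in the sample at all.

```python
# Why sampling is riskier for security monitoring than for performance
# monitoring: a small attack flow can fall entirely between samples.
# All figures are made up for illustration.
import random

random.seed(1)
SAMPLE_RATE = 1000   # examine 1 packet in every 1,000

# One million packets of normal traffic plus a 40-packet suspicious flow.
traffic = ["normal"] * 1_000_000 + ["suspicious"] * 40
random.shuffle(traffic)

sampled = traffic[::SAMPLE_RATE]

# The volume estimate (the performance view) is still roughly right...
print("estimated total packets:", len(sampled) * SAMPLE_RATE)

# ...but the security-relevant flow may not show up in the sample at all.
print("suspicious packets seen in sample:", sampled.count("suspicious"))
```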
Overall, there are a variety of factors making it difficult for organisations to achieve a detailed enough view of their networks. A lack of traffic visibility, brought about by limitations in network design and specification, is unacceptable in this day and age – it leaves networks far too vulnerable, and it is driving demand for better monitoring tools and solutions that provide a complete and more reliable view of all network traffic.
Active visibility is needed to derive security intelligence from the vast amounts of operational data in the network and solve issues in real time – this will increasingly become a core prerequisite of contemporary network security architecture.
Scott Sullivan is vice president of Worldwide Channel Sales at Gigamon