City infrastructure in a high-speed world

Datacentre networking must raise its game to serve financial services, says Charles Ferland

Communication and data are the backbone of the global capital markets. Financial services transactions happen not between people, but between servers. These communications, enacted virtually at the network edge, account for most of financial services' exchanges and correspondence.

Ensuring that data arrives with minimal, deterministic and 'fair' latency is vital. Financial institutions, however, increasingly face the challenge of deploying a network that delivers near-zero latency and high throughput at the lowest possible total cost of ownership (TCO).

Having reached the end of an extraordinary decade of technology advances, financial companies commonly manage hundreds of trades per millisecond over direct 10 Gigabit Ethernet (10GbE) connections. With each trade preceded by up to 15,000 quotes, the sheer volume of information transacted every second is incredible.

Managing this fast-growing volume of data is difficult. Even more challenging for network providers, though, is managing financial data networks fast enough to support competitive advantage. Competitive success in financial networks today is defined in microseconds.

Studies have shown that tens of milliseconds of delay in data delivery can represent a ten per cent drop in revenue, and delays of even five microseconds per trade can cost hundreds of thousands of pounds.

Speed and throughput are therefore frequent priorities, alongside the extreme and complex scalability demanded of IT infrastructure that supports the capital markets.

Scaling out with components that are cost-effective, energy efficient and easy to manage generates greater returns than scaling up by adding more power and complexity to a smaller number of expensive components.

A new approach that flattens the datacentre architecture into a more dense configuration of racks and rows can shrink the datacentre footprint, enable faster communication across fewer hops, and minimise overall application latency.
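As a rough illustration of why fewer hops matter, end-to-end switching latency is additive across the path. The sketch below uses purely hypothetical per-switch figures (not vendor measurements) to compare a classic three-tier path with a flattened two-tier one:

```python
# Illustrative sketch: cumulative switch latency across network tiers.
# Per-hop latencies below are assumed example figures, not measured data.

def path_latency_us(hops_us):
    """Sum the per-switch latencies (in microseconds) along a path."""
    return sum(hops_us)

# Three-tier path: access -> aggregation -> core -> aggregation -> access
three_tier = [3.0, 5.0, 8.0, 5.0, 3.0]

# Flattened two-tier path: top-of-rack -> core -> top-of-rack
two_tier = [3.0, 8.0, 3.0]

print(path_latency_us(three_tier))  # 24.0
print(path_latency_us(two_tier))    # 14.0
```

Even with generous assumptions, removing two hops from every traversal compounds quickly at hundreds of trades per millisecond.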

Financial services datacentres are increasingly interested in the potential for server virtualisation to reduce TCO. Server virtualisation calls for innovative advances in virtualisation-aware networking.

However, server virtualisation has both potential benefits and drawbacks. It can maximise underutilised resources and minimise infrastructure spending, but it can also add complexity and administrative overhead for the network administrator.

The latest ‘best-of-breed’ network infrastructures are beginning to address this problem by automatically migrating network policies along with virtual machines as they move across different physical servers.

Conventional network switches are unaware of Virtual Machines (VMs), which leaves the network vulnerable to service outages and security breaches caused by incorrect network configuration. Sophisticated networks that are better equipped to handle transient VMs allow each Virtual Machine to be uniquely identified and configured on the network. They 'see' VMs as they migrate from server to server, reconfiguring the network in real time and automatically preserving essential security, access and performance policies, within or across datacentres. They can even unify and synchronise physical and virtual networks between geographically dispersed datacentres.
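The idea of policies following a migrating VM can be sketched as below. All class and method names here are hypothetical illustrations; real implementations live in switch firmware and hypervisor integration, not application code:

```python
# Hypothetical sketch of VM-aware policy migration: when a VM moves to a
# new physical server, its network profile (VLAN, QoS class, ACL rules)
# follows it to the new switch port automatically.

class VMProfile:
    def __init__(self, vm_id, vlan, qos_class, acl_rules):
        self.vm_id = vm_id
        self.vlan = vlan
        self.qos_class = qos_class
        self.acl_rules = acl_rules

class VirtualisationAwareSwitch:
    def __init__(self):
        self.port_config = {}  # switch port -> VMProfile

    def on_vm_migrated(self, profile, old_port, new_port):
        """Move a VM's network policy from its old port to its new one."""
        if self.port_config.get(old_port) is profile:
            del self.port_config[old_port]
        self.port_config[new_port] = profile

switch = VirtualisationAwareSwitch()
trading_vm = VMProfile("vm-42", vlan=110, qos_class="gold",
                       acl_rules=["permit tcp any eq 443"])
switch.port_config["eth1"] = trading_vm

# VM migrates to another server; policy follows without manual rework.
switch.on_vm_migrated(trading_vm, old_port="eth1", new_port="eth7")
print("eth7" in switch.port_config, "eth1" in switch.port_config)  # True False
```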

In the past, different discrete devices have been dedicated to very specific roles. For example, routers were used for Layer 3 networking, while switches were used for Layer 2 networking and server interconnects. Each of these devices added hops, latency, complexity, power consumption and potential risk of failure.

The way to reduce latency and meet financial services objectives is to use multi-service devices that provide Layer 2 switching and Layer 3 routing in a single device. In today's emerging flat network topologies, Layer 3 routing can be eliminated altogether, reducing the complexity and latency between devices.

Ethernet continues to evolve. Advances such as Datacentre Bridging (DCB) provide an even, reliable and regulated flow of traffic between high-speed Ethernet nodes, enabling data and storage traffic to converge on a unified networking fabric and high-speed 'lanes' to be dedicated to high-priority traffic. DCB brings lossless capabilities to what was formerly a 'best effort' technology, ensuring that mission-critical financial applications, such as algorithmic trading and market data feeds, are never delayed or paused due to congestion.
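DCB's 'lanes' correspond to the eight IEEE 802.1p priority values, and Priority-based Flow Control (PFC, IEEE 802.1Qbb) can pause a congested lane individually rather than dropping frames. A simplified sketch of that classification follows; the specific traffic-to-priority mapping is an illustrative assumption, as deployments choose their own:

```python
# Simplified sketch of DCB-style traffic classification: each traffic
# type maps to an 802.1p priority (0-7), and Priority-based Flow Control
# can pause a congested lossless lane without touching the other lanes.

PRIORITY_MAP = {
    "market_data": 6,   # high-priority lane
    "algo_trading": 5,  # high-priority lane
    "storage": 3,       # converged storage traffic (e.g. FCoE-style)
    "bulk": 0,          # best-effort background traffic
}

LOSSLESS_PRIORITIES = {3, 5, 6}  # lanes with PFC enabled

def classify(traffic_type):
    """Return (priority, lossless) for a traffic type; default best effort."""
    prio = PRIORITY_MAP.get(traffic_type, 0)
    return prio, prio in LOSSLESS_PRIORITIES

print(classify("market_data"))  # (6, True)
print(classify("email"))        # (0, False)
```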

With these emerging technologies, network equipment providers and innovative City firms are quickly realising how they can address the challenges of management, utilisation, scalability, low latency and high bandwidth at a low TCO.

Charles Ferland is vice president and general manager of EMEA at Blade Network Technologies