Networks are getting smarter and faster

Greg Huff says there are now multiple VAR options for smarter enterprise networking

Architects and managers of enterprise and storage networks are struggling under the formidable pressure of expanding data volumes.

There are two options: the traditional brute-force approach of deploying systems beefed up with more general-purpose processors, or "smart" systems powered by purpose-built hardware accelerators integrated with multi-core processors.

Resellers can help customers understand that adding faster general-purpose processors to routers, switches and other networking equipment can improve performance, but it also raises system costs and power consumption while doing little to address latency – a major cause of network performance problems.

Smart silicon can reduce performance bottlenecks and latency for certain processing tasks. I believe that system designers will increasingly choose smart silicon as a solution.

In the past, hardware and software generally advanced in lock-step. As processor performance improved, more sophisticated features could be added into the software.

Parallel improvements made it possible to create more abstracted software, with better functionality built more quickly and with less programming effort.

Today, however, those same layers of abstraction are making it harder to perform complex tasks quickly enough.

General-purpose processors, regardless of their core count and clock rate, are too slow for functions such as classification and traffic management that must operate deep inside each and every packet.

What's more, these specialised functions must often be performed sequentially, restricting the opportunity to process them in parallel, in multiple cores.
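
To make the constraint concrete, here is a minimal Python sketch of a per-packet pipeline. The function names and the one-byte "classification" rule are invented for illustration, not drawn from any real networking stack; the point is simply that the second stage cannot begin until the first has finished for the same packet.

def classify(packet: bytes) -> int:
    # Classification must look deep inside the packet; here a single
    # byte stands in for that inspection.
    return packet[0] % 4

def police(traffic_class: int, packet: bytes) -> bool:
    # Traffic management: decide whether to forward, based on the
    # class that classification just produced.
    return traffic_class != 3

def process(packet: bytes) -> bool:
    # The stages are inherently sequential: policing cannot start
    # until classification has finished for this same packet, so the
    # work for one packet cannot be spread across cores.
    return police(classify(packet), packet)

packets = [bytes([n]) + b"payload" for n in range(8)]
print([process(p) for p in packets])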

These and other specialised types of processing are ideal candidates for smart silicon, and it is increasingly common to find multiple intelligent acceleration engines integrated with multiple cores in specialised System-on-Chip (SoC) communications processors.

The number of function-specific acceleration engines available continues to grow, and shrinking geometries now make it possible to put more engines on a single SoC.

It is also possible to have faster, smaller, more power-efficient networking architectures on a single SoC.
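
As a purely hypothetical illustration of the offload model, the following Python sketch dispatches function-specific work to a dedicated engine when one exists and falls back to a general-purpose core otherwise. The dispatch table and engine names are assumptions, not a real SoC interface.

def classify_in_software(pkt: bytes) -> int:
    # General-purpose core path: inspect the packet in software.
    return pkt[0] % 4

# Stand-ins for function-specific acceleration engines on the SoC; in
# real silicon these would be dedicated hardware blocks, not functions.
ENGINES = {
    "classification": classify_in_software,
}

def dispatch(task: str, pkt: bytes) -> int:
    # Offload to a dedicated engine when the SoC has one; otherwise
    # run the task on a general-purpose core.
    engine = ENGINES.get(task)
    if engine is not None:
        return engine(pkt)
    return classify_in_software(pkt)

print(dispatch("classification", bytes([7]) + b"payload"))  # prints 3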

The biggest bottleneck in datacentres today is caused by the five-orders-of-magnitude difference in I/O latency between main memory in servers (about 100 nanoseconds) and traditional hard disk drives (about 10 milliseconds – 100,000 times slower).

The latency to external SANs and NAS is even worse, because of the intervening network and the performance restrictions of a single resource serving multiple simultaneous requests sequentially, in deep queues.
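
A quick back-of-the-envelope check makes both points. The memory and disk figures below come from the numbers above; the queue depth of 32 is an assumed example, not a measured value.

import math

dram_ns = 100               # main memory: ~100 nanoseconds
hdd_ns = 10 * 1_000_000     # hard disk: ~10 milliseconds, in nanoseconds

print(math.log10(hdd_ns / dram_ns))  # 5.0 -> five orders of magnitude

# A single shared disk serving requests one at a time makes matters
# worse: with 32 requests queued, the last one waits ~32 service times.
queue_depth = 32
print(queue_depth * hdd_ns / 1_000_000, "ms")  # 320.0 ms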

When NAND flash storage is combined with more intelligent caching algorithms, it is possible to break through the traditional scalability barrier and make caching an effective, powerful and cost-efficient way to accelerate application performance.

Solid state storage can offer far lower latency than hard disk drives of comparable capacity.

Besides delivering higher application performance, caching enables virtual servers to perform more work, cost-effectively, with the same number of software licences.

Solid state storage typically produces the highest performance gains when the flash cache is directly in the server, on the PCIe bus. Intelligent caching software is used to place hot, or most frequently accessed, data in low-latency flash storage.

The hot data can be accessed quickly under any workload because there is no intervening SAN or NAS connection, and therefore no possibility of associated traffic congestion and delay.
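
To illustrate what such caching software does at its simplest, here is a Python sketch of a frequency-based hot-data cache. Real caching products are far more sophisticated; the class, its promotion rule and the capacities here are illustrative assumptions only.

from collections import Counter

class HotDataCache:
    def __init__(self, flash_capacity: int):
        self.flash_capacity = flash_capacity  # blocks the flash tier holds
        self.access_counts = Counter()        # per-block access frequency
        self.flash = {}                       # block_id -> data (fast tier)

    def read(self, block_id: int, read_from_disk) -> bytes:
        self.access_counts[block_id] += 1
        if block_id in self.flash:
            return self.flash[block_id]       # hit: flash-class latency
        data = read_from_disk(block_id)       # miss: disk-class latency
        self._maybe_promote(block_id, data)
        return data

    def _maybe_promote(self, block_id: int, data: bytes) -> None:
        # Promote the block if flash has room, or if it is now hotter
        # than the coldest block currently held in flash.
        if len(self.flash) < self.flash_capacity:
            self.flash[block_id] = data
            return
        coldest = min(self.flash, key=lambda b: self.access_counts[b])
        if self.access_counts[block_id] > self.access_counts[coldest]:
            del self.flash[coldest]
            self.flash[block_id] = data

cache = HotDataCache(flash_capacity=2)
disk = lambda b: b"block-%d" % b
for b in [1, 1, 1, 2, 3, 1, 2, 2]:
    cache.read(b, disk)
print(sorted(cache.flash))  # the two hottest blocks: [1, 2]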

Some flash cache acceleration cards now support multiple terabytes of solid state storage, enabling the storage of entire databases or other datasets as hot data.

Enterprise networks and datacentre storage architectures are evolving rapidly. Resellers can benefit from the data deluge opportunity by offering smart silicon solutions.

Greg Huff is chief technology officer at LSI