Reducing the cost and complexity of storage
Jan Ursi says recent engineering developments should prove a boon to the datacentre market
One of the axioms of engineering is that the more moving parts a system has, the more likely it is to break down. Engineers work on the principle that the more you can simplify something, the less likely it is to go wrong.
At the same time, it is important to keep every element up to date, so an imbalance does not develop between components. Automation usually revolves around a chain of events, and the overall speed is dictated by the slowest link in that chain. So which vendor is the weakest link in the datacentre? To whom should we say goodbye?
Until now, the technology used in servers and storage has been just about good enough to meet the demands of a modern datacentre. Each carried out its own job in its own sweet time, and this was never a problem. Much of the hard disk technology used in storage had essentially not changed for decades.
But the advent of virtualisation technology changed all that.
As the speed of development in datacentres has accelerated, the weaknesses of some components of the machine have begun to be exposed. Virtualisation has enabled parts of the IT infrastructure to grow at warp speed. Servers can be created and managed with a few keystrokes, whereas this used to be a laborious job that took hours, or even days.
So virtualisation created the potential for rapid advancement.
But its strengths have been neutralised elsewhere. It's like having a fantastic engine but a set of dodgy differential joints. The wheels have not actually come off – yet – but there's a knocking sound under the bonnet and we're not going as fast as we could.
The problem is that two of the major components of the IT machine, compute and storage, are not particularly well integrated with each other. Spreading storage across a network, away from the servers that use it, has proven an expensive mistake.
The technology was not good enough, as some of the giants of computing were to discover. In companies with massive datacentre requirements, such as Facebook, Google and Yahoo, the engineers who built these vast, rapidly expanding computing powerhouses found that storage and compute were becoming a drag. Not only were there moving parts, but they did not move very quickly.
Engineers around the globe have tried to address this problem. A team of Google engineers, for example, pooled their expertise and created a way to integrate storage and computing power into one unit.
By re-engineering the interaction of these elements, and putting them on one piece of hardware, they removed the moving parts. Packing all the elements into one box means that miniature units of datacentre technology can either stand alone or be aggregated.
They are like stackable datacentre units that enable a computing infrastructure to grow rapidly. They cannot be deployed at the speed of virtualisation (nothing can) but they can certainly keep pace.
The reception from the target market should be enthusiastic.
Jan Ursi is EMEA channel sales and field marketing director at Nutanix