Innovating an intelligent infrastructure

Giridhar Lakkavalli looks at IaaS and how it needs to develop

Infrastructure-as-a-service (IaaS) has been prominent since Amazon launched EC2, its Elastic Compute Cloud offering. But was there really no IaaS before that?

Infrastructure in the computing world can mean everything from the datacentre where servers are hosted to the servers, storage, networking, cabling and cooling themselves. Indeed, infrastructure has also been defined as the layer that enables software to run: the hardware plus the software layers, including the OS, on which the applications are hosted.

Long before EC2, third-party datacentres were renting out space, power and cooling to organisations that wanted to host their own computing infrastructure. Instead of buying their own servers, some organisations rented servers, storage and networking along with the space, power and cooling the datacentres provided.

Typically, the third-party providers charged a subscription cost for their services, and this was all in a world where everything was physical.

Then EC2 came on the scene. Compute infrastructure that had been seen as firmly tied to physical hardware, and that was always expected to take at least a couple of weeks to become available after a request had been made, was suddenly available instantly. It was decoupled from the physical infrastructure and cost only a fraction of what the physical equivalent had.

There has been a lot of innovation in this model. New vendors have appeared, such as Rackspace, Terremark and Savvis, giving users the ability to choose from various configurable templates.

The datacentres that host the compute have also raised their game. They can now provide, for example, near-zero downtime and enhanced security, typically including perimeter security, biometrics and other forms of access control.

Innovations have also been made in pricing, based on the time of day and the overall load. Users may also avail themselves of a "spot" price. This keeps the provider happy, as more of the infrastructure is being used, and the user happy, because the spot price is typically lower than the list price.
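The spot mechanism described above can be sketched roughly as follows. This is a minimal illustration, not any provider's actual pricing: the function names, the discount curve and all figures are invented for the example.

```python
# Hypothetical sketch of spot pricing: the provider discounts spare
# capacity as utilisation falls, and the user takes the spot offer
# whenever it undercuts the list price. All figures are illustrative.

LIST_PRICE = 0.10  # on-demand price per instance-hour (invented)

def spot_price(utilisation: float) -> float:
    """Price spare capacity more cheaply as utilisation falls.

    At low utilisation the provider discounts heavily to fill idle
    hardware; near full utilisation the spot price approaches list.
    """
    discount = 0.6 * (1.0 - utilisation)   # up to 60% off when idle
    return round(LIST_PRICE * (1.0 - discount), 4)

def choose_offer(utilisation: float) -> str:
    """Pick spot when it is cheaper than the list price, else list."""
    return "spot" if spot_price(utilisation) < LIST_PRICE else "list"
```

With the datacentre half-loaded, `spot_price(0.5)` comes out at 0.07 against a list price of 0.10, so both sides win: the provider fills idle capacity and the user pays less.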

Datacentres must also comply with various stipulations, such as HIPAA. Indeed, vendors such as RightScale and FireHost support multiple certifications for the healthcare and financial industries. And all these vendors represent the public cloud.

Meanwhile, in the enterprise arena, vendors such as VMware, Microsoft and Eucalyptus are pushing to create an Amazon-like experience: the user gets access to a self-service portal, selects from templates and is charged in the same subscription-style manner, all with support for secure multi-tenancy.

So the time is right for the next generation of IaaS, which may involve infrastructure with application intelligence. If the infrastructure layer includes everything from the hardware to the OS and any run-time libraries the application requires, then the infrastructure should be able to tune that whole stack to deliver the best application performance.

The infrastructure will need to be intelligent. It must be able to learn from the application's behaviour, the load conditions and how the application is accessed at different times of day, and must be able to tweak the resources given to each component so that overall application performance never suffers.
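The kind of feedback loop such an intelligent layer might run can be sketched very simply: observe a component's recent load, then grow or shrink its allocation so utilisation stays inside a target band. The function name, thresholds and step sizes here are all hypothetical.

```python
# A minimal sketch of load-driven resource tweaking: keep each
# component's CPU utilisation between a low and a high watermark.
# Thresholds and step sizes are invented for illustration.

def tune_allocation(allocated_cpus: int, observed_load: float,
                    low: float = 0.3, high: float = 0.8) -> int:
    """Return a new CPU allocation keeping utilisation in [low, high]."""
    utilisation = observed_load / allocated_cpus
    if utilisation > high:                        # saturating: add capacity
        return allocated_cpus + 1
    if utilisation < low and allocated_cpus > 1:  # idle: reclaim capacity
        return allocated_cpus - 1
    return allocated_cpus                         # within band: leave alone
```

For example, a web tier generating 3.6 CPUs' worth of load on 4 allocated CPUs (90 per cent utilisation) would be grown to 5 CPUs on the next tick, before users notice any slowdown; a real system would also learn the time-of-day pattern and resize pre-emptively.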

The second factor would be infrastructure that "understands" the notion of cost. By this I mean the cost of running the servers, the cost of software licences, and the cost of power and cooling. With this understanding, the infrastructure could "suggest" and carry out actions that minimise cost and therefore increase the RoI.
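A cost-aware layer of this sort might maintain a model like the following: sum the hourly cost of compute, licences and power/cooling for each possible placement of a workload, and "suggest" the cheapest. Everything here, from the function names to the rates, is an invented illustration.

```python
# A hedged sketch of cost-aware infrastructure: model each placement's
# hourly cost (servers + licences + power/cooling) and suggest the
# cheapest one. All names and figures are hypothetical.

def hourly_cost(servers: int, server_rate: float,
                licence_rate: float, power_rate: float) -> float:
    """Total hourly cost: compute + software licences + power and cooling."""
    return servers * (server_rate + licence_rate + power_rate)

def suggest_cheapest(placements: dict) -> str:
    """Return the name of the placement with the lowest hourly cost."""
    return min(placements, key=lambda name: hourly_cost(**placements[name]))

options = {
    "on_premises": dict(servers=4, server_rate=0.50,
                        licence_rate=0.20, power_rate=0.15),
    "public_cloud": dict(servers=4, server_rate=0.60,
                         licence_rate=0.00, power_rate=0.00),
}
# on_premises works out at 4 x 0.85 = 3.40/h against 2.40/h in the
# public cloud, so the layer would suggest moving the workload there.
```

The interesting part is that the suggestion changes as any one input changes: drop the licence cost (say, by moving to open-source software) and on-premises can become the cheaper placement again.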

The third factor in evolving IaaS would be the concept of seamless infrastructure. While this is available in pockets, in the form of Amazon VPC, it will develop to the point where an organisation sees its networks seamlessly extending or shrinking as it adds or relocates computing resources.

Then there will be infrastructure consolidation. SMBs will stop thinking about owning their own infrastructure and instead rely entirely on public datacentres. Computing, like power, cooling and water, will become a metered resource that can be accessed at any time.

The final factor would be what I call "fault-tolerant" infrastructure. Fault tolerance, or high availability, is today typically associated with individual servers or applications. This would need to be extended to cover an organisation's entire infrastructure: if the primary infrastructure becomes unavailable, the secondary infrastructure kicks in straight away.
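At its core, whole-infrastructure failover of this kind reduces to a watchdog that probes the primary site and reroutes everything to the secondary the moment the primary stops answering. The sketch below uses invented names and boolean health probes in place of real monitoring.

```python
# A minimal sketch of primary/secondary failover for an entire
# infrastructure: route to the primary while it answers health
# probes, and switch to the secondary as soon as it does not.

def route(primary_healthy: bool) -> str:
    """Direct traffic to the primary site while it is up, else fail over."""
    return "primary" if primary_healthy else "secondary"

def failover_log(health_samples):
    """Replay a series of health probes and record where traffic went."""
    return [route(ok) for ok in health_samples]
```

Replaying a probe history where the primary drops after two ticks, `failover_log([True, True, False, False])` yields `["primary", "primary", "secondary", "secondary"]`: traffic shifts mid-stream with no manual intervention, which is precisely the "kicks in straight away" behaviour described above.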

Giridhar Lakkavalli is head of vmUnify at MindTree