Nobody flips the light switch or turns on the tap expecting to be greeted with a faint flicker or a dirty dribble. We assume that electricity, water, gas and phone calls will be instantly available, brought to us by some unfathomable apparatus and paid for at the point of use.
Now the same is happening to computing power. Grid computing promises to create vast virtual supercomputers that run applications in the most efficient way for the user. The grid decides on the best place or places to run an application, parcels it up into bite-sized chunks, farms these out to hardware with spare capacity, gathers the results and passes them back to the user.
A bank’s credit-risk analysis could be parcelled up and processed by an array of blade servers in Stevenage, some lightly used graphics workstations in Swansea and the odd corner of a mainframe in Stirling, with the data stored... Well, why should the bank care where it’s stored, if the results come back twice as fast as from its conventional system and the transaction costs half as much?
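The parcel-out-and-gather flow described above can be sketched in a few lines. This is purely illustrative: a real grid adds scheduling, data placement and billing, and `analyse_chunk` is a hypothetical stand-in for one bite-sized piece of, say, a credit-risk run. A local process pool stands in for the remote nodes.

```python
from concurrent.futures import ProcessPoolExecutor

def analyse_chunk(chunk):
    """Hypothetical stand-in for one bite-sized piece of work."""
    return sum(x * x for x in chunk)

def run_on_grid(data, chunk_size=3):
    # 1. Parcel the job into bite-sized chunks.
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    # 2. Farm the chunks out to workers with spare capacity
    #    (a local process pool stands in for remote grid nodes).
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(analyse_chunk, chunks))
    # 3. Gather the partial results and combine them for the user.
    return sum(results)

if __name__ == "__main__":
    print(run_on_grid(list(range(10))))
```

The point of the sketch is that the caller sees only `run_on_grid`; where each chunk actually executes is hidden, which is exactly the "complexity should be hidden" test quoted below.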
“The key test is that the complexity should be hidden,” said Alan Hartwell, vice-president of marketing at Oracle, a leading light in grid software development.
The hardware could be a dedicated system which, if new, is likely to be composed of multiple cheap microprocessors such as blade servers running Linux. Alternatively, it could be a mixture of existing mainframes, Unix servers and PC LANs, perhaps dedicated to particular systems but offering any spare capacity to the grid. For example, a web server that sits idle at night could help with overnight processing.
By tracking and utilising this spare capacity dynamically, the grid can achieve far more efficient use of processing and storage resources.
“Compared to the old world of mainframes and mid-range Unix servers, where system utilisation would run at 80 per cent, dedicating applications to single servers typically generates utilisation levels of only 15-20 per cent,” said Garry Owen, head of enterprise computing at Fujitsu Siemens, a member of the European Grid Alliance.
“This means users are wasting money on system capacity they never use. Grid computing can maintain utilisation at 80 per cent and more.”
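The figures quoted above translate directly into hardware counts. A back-of-the-envelope sketch (illustrative numbers only, not from either vendor) shows how many servers the same workload needs at each utilisation level:

```python
import math

def servers_needed(workload, utilisation):
    """Servers required if each runs at the given average utilisation.

    `workload` is measured in server-equivalents of sustained demand.
    """
    return math.ceil(workload / utilisation)

# A workload that would keep 8 servers busy flat out:
dedicated = servers_needed(8, 0.15)  # one app per server, ~15% utilised
pooled = servers_needed(8, 0.80)     # capacity pooled on a grid, ~80% utilised
print(dedicated, pooled)
```

At 15 per cent utilisation the workload needs 54 machines; pooled at 80 per cent it needs 10, which is the capacity saving the quote is driving at.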
In some ways, grids are a logical progression from virtualised storage, ICT outsourcing and the open systems movement. They are the final link in the chain that allows commodity and legacy hardware to be fused into a single, virtual supercomputer.