Sometimes, it's difficult for resellers to keep a straight face. I mean, we could all cope with Lans and Wans. OK, maybe the 'wide' in Wans should have been 'broad' or even 'large', but Bans doesn't sound too promising and another Lans was out of the question. Too confusing, even by networking standards.
Things started to go decidedly pear-shaped with the introduction of metropolitan area networks. Not that there's anything wrong with the concept of a city-wide network, but Mans? Come off it. Stretching the integrity of network acronyms still further comes Tans - total area networks - a clear case of an acronym dictating technology.
Take this approach further into the niche markets and there's no end of possibilities. What about an initiative led by Nottingham council to create a site for Balti fans to order takeaways on the internet, resulting in a General Access Request for Local Indian Curries in Nottingham Area Network? Work it out for yourself.
And now we have Sans - storage area networks. But what exactly is a San?
The answer, in the tradition of the network industry, depends on who you speak to. If we take a storage vendor such as StorageTek, the answer is as follows: 'A San is a high-speed network dedicated to moving information between users and a pool of flexible storage devices. By giving storage its own highway, users no longer compete with server I/O for network bandwidth. As a result, applications are no longer impeded by routine operations such as backup, thus creating a more efficient network.'
On the other hand, if we ask a company such as San Ltd, the answer is this: 'It is a high-speed network that establishes a direct connection between heterogeneous storage devices and servers. It is distinct from the corporate Lan/Wan data infrastructure as a network behind the network, located in the data centre, with inherent distance limitations.'
In the case of San Ltd, the distance limitations are significant, as the vendor is providing a San package for the Wan. But otherwise there are many common themes. Andrew Rowney, sales and marketing director at San Ltd, expands on the definition with an example:
'Imagine a network manager needs to allocate 30Gb of storage to another part of the network, say for an email server running Lotus Notes. With current storage, if users are connected to a server with Windows NT Server 4 installed, this task would probably require the email server to be switched off as another disk volume is mounted and prepared for use. With San technology, this 30Gb of data may already be connected to a network as part of a redundant disk array that can be allocated easily and quickly, without the need to power-down the email server.
'The ultimate goal of San technology is to reduce the complexity of managing storage devices over a network. It involves removing as much human intervention as possible, without damaging performance or availability.'
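Rowney's scenario can be sketched as a toy model. Everything here - the class, the method names and the figures other than the 30Gb from the example - is illustrative rather than any real vendor API; the point is simply that capacity is carved out of a pool that is already on the storage network, so no server needs powering down.

```python
# Toy model of the allocation scenario above: a shared San pool hands
# capacity to a server online, where direct-attached storage would
# require the server to be taken down to mount a new volume.
# All class and method names are hypothetical, for illustration only.

class SanPool:
    def __init__(self, capacity_gb):
        self.free_gb = capacity_gb
        self.volumes = {}          # server name -> allocated GB

    def allocate(self, server, size_gb):
        """Carve a volume out of the pool and present it to a server.
        No reboot needed: the disks already sit on the storage network."""
        if size_gb > self.free_gb:
            raise ValueError("pool exhausted")
        self.free_gb -= size_gb
        self.volumes[server] = self.volumes.get(server, 0) + size_gb
        return size_gb

pool = SanPool(capacity_gb=500)          # hypothetical pool size
pool.allocate("notes-mail-server", 30)   # the 30Gb from the example
print(pool.volumes["notes-mail-server"], pool.free_gb)  # 30 470
```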
As far as San Ltd is concerned, this is a long-term goal, and the closer the technology to the corporate IT world, the better. Sans will hopefully eliminate the issues that arise from point-to-point storage architectures.
For Sans to succeed, however, there needs to be a clear definition of what they are and how they can help users. For years, elements of the network have been getting faster, while others have moved forward very slowly or not at all. At the same time, some areas have received great attention to design and redesign, to cope with the changes new demands and technologies have brought.
A classic example is the Lan backbone. Not so long ago, it was a bus topology 10Mbps Ethernet design - a physical cable with drop cables feeding shared 10Mbps Lan segments. Now, the same backbone is likely to be collapsed into a switch chassis with 50Gbps or more of gigabit Ethernet or ATM-based bandwidth available, feeding switched 10/100Mbps Ethernet segments.
Routers are being upgraded, uprooted or given a change of identity, while Wan connections are finally being acknowledged as running at more than 64Kbps or even 2Mbps. This means we are moving towards an information superhighway model, the only problem being that much of that information is still locked away in storage systems - those towers of disks in the machine rooms. Once you come off the six-lane highway, you have to queue to get down a single-track lane to where the data resides.
Exaggeration? Maybe, but while advances in disk subsystems have undoubtedly taken place - SCSI bandwidth capabilities have doubled every two years or so and fibre channel is positioned to rewrite the performance rules once more - the way they are accessed across the network remains largely unchanged.
The classic 'user to application to stored data' relationship has obvious performance limitations. PCs are connected to the main network, and the network servers controlling access sit on that same network. In small Lans, the data and processor are in the same box: a simple system with clear limitations, though usage is often low enough that those limitations rarely bite.
On larger networks, the servers typically act as front-end processors to storage systems sitting behind the servers, traditionally on a one-to-one basis. And the connecting technology has been SCSI, in its various incarnations, which has done a sterling job in the classic server storage role but is certainly not networking technology.
So, now that server clustering - long associated with mid-range systems and upwards - is available at the Lan server level, there is a greater need than ever for an efficient method of connecting many different servers to one or more storage devices.
The move towards clustering is right at the heart of the San argument.
With Microsoft's Wolfpack bringing what was once an expensive high-end technology down to the NT server level, clustering is becoming an increasingly common way of ensuring highly available open systems. These match the uptime of the formerly proprietary and very expensive non-stop computers from companies such as Tandem, now part of Compaq. The intention is that a cluster of Windows NT servers will enable storage to match the availability of the attached servers, giving 24-hours-a-day, seven-days-a-week support.
So how does a San help here? Put simply, it liberates storage from the server bus and enables the distribution of server clusters. With a San, all storage devices can be accessible to all servers, and storage administration becomes easier and less expensive. A switch-based San can reduce bus congestion and increase aggregate throughput. Here lies the argument for Sans and, to a lesser degree, for fibre channel over SCSI (see box opposite).
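The any-to-any connectivity argument can be made concrete with a trivial sketch (server and array names are hypothetical): in a point-to-point arrangement each server sees only the array it is cabled to, whereas a switched San fabric gives every server a path to every device.

```python
# Illustrative contrast between point-to-point storage and a switched San.
servers = ["srv1", "srv2", "srv3"]
arrays = ["arrayA", "arrayB"]

# Point-to-point (SCSI-style): one array per server, fixed at cabling time.
scsi_paths = {"srv1": {"arrayA"}, "srv2": {"arrayB"}, "srv3": set()}

# Switched San: the fabric gives every server a path to every array,
# so storage can be reallocated without recabling.
san_paths = {s: set(arrays) for s in servers}

print(sorted(scsi_paths["srv3"]))  # [] - no storage without new cabling
print(sorted(san_paths["srv3"]))   # ['arrayA', 'arrayB']
```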
And the movement towards a San model is gaining momentum. While initial steps in this direction have naturally come from storage specialists such as Adaptec and StorageTek, most of the leading network server vendors have also started to offer some kind of San system.
For example, Compaq's enterprise network storage architecture (Ensa) is not a specific product, but a blueprint for combining Compaq hardware with third-party products to create a San. Ensa will include storage products, servers, network infrastructure, linking devices (bridges, switches and hubs), storage management software and components to administer and manage the entire environment.
The really clever bit is the provision of transparent connections between these elements, which is key to Sans. Compaq is relying heavily on the expertise of other storage and management-related hardware and software partners to enable it to create an open, standards-based San system.
With Ensa, Compaq is focusing on what it calls virtual storage: vast amounts of storage pooled across an enterprise for use by any application server that can reach it. From a management viewpoint, storage resources can be allocated online from a single, common pool, scaling up to petabytes if necessary. Data can also be replicated instantly for backup, testing or additional application access, with backups and restores initiated by the user.
Primary and backup storage resources can be set up easily, then administered centrally using policy-based management practices. These include dynamic allocation, automatic redeployment, intelligent data replication and protection and performance tuning. Compaq claims it will offer Ensa systems at all cost levels.
A key part of the Compaq San concept is open standards. For once, it seems that the computer and networking industries are in agreement on the need for standards to be put in place to develop San technology. However, it is one thing to agree on standards being important, and another to agree on what makes up those standards.
For example, within a San, where should the intelligence reside and which devices should be controlling or dominant from a management point of view?
Mike Harper, network line of business manager at StorageTek, believes true, standards-based data sharing is still two or three years away.
To this end, the Storage Networking Industry Association has been formed to ensure that standards-based Sans are developed over the next few years (see box opposite). In the meantime, the answer is to make a sensible choice based on a vendor's willingness to support future standards as and when they emerge.
This article was first published in Network Solutions, March 1999.
FUTURE SAN STANDARDS
Standards are clearly going to be a vital ingredient in the success or decline of Sans. The Storage Networking Industry Association (SNIA) has been created, with the aim of ensuring that storage networks become efficient, complete and trusted systems across the IT community.
The SNIA is a collaboration of developers of storage and networking products, system integrators, application vendors, and service providers, aimed at defining standards-based progress for storage networking. It aims to deliver architectures, education and services to broaden the San market.
The members form a cross-section of storage specialists, server vendors, and backup and storage software developers, including Quantum, StorageTek, IBM, Intel, Seagate, Legato, EMC, Compaq and Veritas.
As part of its mission, the SNIA has been holding quarterly technical and industry conferences aimed at educating users, but to date, this appears to have been restricted to the US.
BACKBONE BATTLE: fibre channel vs SCSI
For some time, fibre channel has been hyped as the successor to SCSI as the backbone technology for high-performance disk subsystems. Originating at the beginning of the decade, it was a classic example of a technology looking for an application. To some extent, it has found this within the gigabit Ethernet specification. However, companies such as IBM and Hewlett-Packard have long been promoting it as a SCSI-beater and most of the storage and server vendors offer fibre channel products.
The ace in the pack for fibre channel is its performance, which is greater than any existing SCSI system. A cross between ATM-style switch technology and SCSI-style channel-based technology, it is a 1Gbps data transfer interface that supports several common transport protocols, including IP and SCSI.
The multi-protocol support allows fibre channel to merge high-speed I/O and networking functionality in a single connectivity technology. It is an open standard, as defined by American National Standards Institute (ANSI) and Open Systems Interconnect (OSI) standards, and operates over copper and fibre-optic cabling at distances of up to 10km.
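Those figures can be sanity-checked with a little arithmetic. Full-speed fibre channel signals at 1.0625 Gbaud and uses 8b/10b encoding, carrying eight payload bits in every ten line bits - which is where the commonly quoted figure of roughly 100MB/s comes from (framing overhead trims it slightly further).

```python
# Back-of-envelope check on the fibre channel bandwidth figure.
line_rate_baud = 1.0625e9          # full-speed FC line rate
payload_bits_per_sec = line_rate_baud * 8 / 10   # 8b/10b encoding
payload_mb_per_sec = payload_bits_per_sec / 8 / 1e6  # bits -> megabytes
print(round(payload_mb_per_sec))   # 106 (before framing overhead)
```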
This distance capability is fibre channel's first advantage over SCSI beyond pure performance. SCSI extenders can stretch the otherwise severe limits on SCSI bus length, but they are expensive and provide none of fibre channel's other benefits.
Fibre channel is also unique in its support of multiple interoperable topologies, including point-to-point, arbitrated-loop and switching, on top of which it offers several classes of service for network traffic optimisation. It also supports large packet sizes, making it ideal for classic storage, video, graphics and bulk data transfer applications such as backup. In other words, it appears to be the ideal fit within a San.
SCSI is unlikely to disappear and still makes a strong case for itself. An Ultra-2 SCSI controller provides data transfers of 80MB/s, disk transfer rates of up to 320Mbps and total buffer bandwidth of 132MB/s. Even so, fibre channel comfortably exceeds these limits.
Another area in which fibre channel claims a significant advantage over SCSI is resilience.
A server on a SCSI bus cannot be repaired without disrupting service on the bus. Because a SCSI bus requires active termination, a server cannot be removed from the bus without employing expensive Y-cables, which allow it to be taken offline for repair. Even then, all I/O traffic must first be removed, which essentially means halting service on the bus.
In contrast, a fibre channel hub isolates its connections from one another, so one connection can fail with no impact on the others. This allows a server in a clustered arrangement to be powered down and serviced without affecting the other servers. When the work is complete and power is restored, the offline server can be brought back into the cluster without disturbing the rest.
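The contrast can be reduced to a toy availability model (node names hypothetical): quiescing a shared SCSI bus takes every attached node out of service, while pulling one port on a fibre channel hub leaves the rest running.

```python
# Toy model: which nodes stay in service while one node is serviced?
def service_node(topology, nodes, victim):
    if topology == "scsi-bus":
        # The whole bus must quiesce, so nobody stays in service.
        return set()
    if topology == "fc-hub":
        # Hub ports are isolated: only the serviced node drops out.
        return set(nodes) - {victim}
    raise ValueError(topology)

nodes = {"srv1", "srv2", "srv3"}
print(sorted(service_node("scsi-bus", nodes, "srv2")))  # []
print(sorted(service_node("fc-hub", nodes, "srv2")))    # ['srv1', 'srv3']
```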