Market Overview - Network Cards: Under the Ether

All parts of a system can cause bottlenecks, but the network card is currently the main culprit. Terence Green looks at life in the Fast Ethernet lane

Network interface cards, in common with nearly every essential component that has propelled the computer industry onwards, originated in the fertile imaginations of the inhabitants of Xerox Palo Alto Research Center (Parc) in Palo Alto, California, nearly a quarter of a century ago.

High-resolution, bit-mapped displays, personal computers, graphical interfaces with icons and overlapping windows, high-level programming languages, mice, Wysiwyg word processors, laser printers and Lans spilled out of Xerox Parc in various states of development long before Xerox thought to patent them.

Famous for fumbling the future, Xerox spent a fortune on Parc but left the best minds of a generation with no option but to take their ideas elsewhere to realise them in silicon. There is a curious symmetry at work, however: although Xerox failed to capitalise on the work at Parc, it could afford to fund the basic research only because it had become successful by taking on and exploiting the photocopier technology that IBM rejected in the 50s.

Bob Metcalfe came up with Ethernet while at Xerox Parc, but had to leave and start 3Com to put his ideas into production. Which is not to say that Xerox didn't popularise networking; it just failed to turn its attention to product issues. The Xerox Network System (XNS) became well-established and at one time rivalled TCP/IP. One well-known software company was still dependent on XNS as late as 1994. But all good things come to an end and Microsoft, in common with nearly 80 per cent of the networked world, has moved over to Ethernet and TCP/IP.

3Com gave Ethernet a good start in life. So good, in fact, that it has seen off nearly all of the opposition and now commands some 80 per cent of the market for network interface cards. But a transition to newer, faster Ethernet technology is under way.

Metcalfe developed Ethernet to speed up the transmission of bit-mapped laser printer files over the Parc network. At the time, Metcalfe thought 10Mbps Ethernet would suffice to the end of the century. He was close, but the way we're going most computers will have moved to 100Mbps Fast Ethernet by the millennium.

3Com had a good run at Ethernet, but of late Intel has picked up the ball with Fast Ethernet and its efforts at pushing the 10-times faster Ethernet technology came close to crippling 3Com in January. When Intel announced huge price cuts on its Ethernet hardware, 3Com shares went into freefall, but have since crept back up. Intel aims to establish Fast Ethernet as the standard, supplanting 10Mbps as soon as possible.

There's a slight problem here because most of the installed base is still running on 10Mbps network cards. And all those 10Mbps Ethernet cards are supported by a further huge investment in 10Mbps hubs, bridges and routers, not to mention the cabling. Nevertheless, anyone who isn't already making the transition from 10Mbps to 100Mbps Ethernet should at least have scribbled a reminder note on their blotter to think about it before too long.

You might be wondering why Intel would be so interested in promoting network interface cards. Surely Intel's purpose in life is to attempt to come up with processors that don't hoist a white flag and surrender when faced with the latest iteration of Microsoft Office? Well, that's a noble aim, but if you've recently bought a big Intel Pentium Pro-powered multiprocessor server to speed up the service to your desktop clients and seen little or no benefit, you've probably discovered the reason.

Yes, it's the very same reason Metcalfe had to come up with Ethernet at Parc back in 1973. Now that we have fast processors, fast operating systems, fast hard disks and fast printers, it's the turn of the wire between the computers to be the bottleneck again. What goes around comes around (and slowly) as they say.

There are two main requirements for a network interface card: one is to pass information; the other is to do it quickly. Every advance in computing, the network computer as well as the Windows and Intel vision of multimedia computing, stokes demand for network bandwidth.

The combination of Ethernet and TCP/IP was doing quite well before the internet came along, and its ascendancy has been pretty much fixed by the internet ideal of a common layer underpinning applications everywhere, so that the whole world can network in harmony.

Ethernet is the predominant networking technology and is likely to become stronger since all the standards committees are clustering around it and extending it to 100Mbps and 1Gbps. The combination of Ethernet and TCP/IP as the lingua franca of the internet and intranets has ordained this.

There are several hardware variations on the Ethernet theme, but these largely revolve around the cables that connect network interface cards. Since the roadmap for Ethernet leads inexorably to Fast Ethernet, the cables have to be twisted pair.

Unless you?re doing it with pocket money, the cabling will be high-class Category 5 unshielded twisted pair with RJ45 connectors.

Thin and thick Ethernet (using BNC and AUI connectors respectively) are heading for the graveyard, so network interface cards don't need to have multiple connectors unless they're backing up a transition phase.

That leaves only the bus type to consider. This connection between the card and the computer is the major factor, external to the card's own electronics, affecting its performance. The interface is important because network interface card performance, rather than the processor or disk access, is now the only constraint on network throughput that cannot be factored out by applying money.

Back in the good old days of Novell networking we thrived on 8bit NE1000 cards in the old 286 server box until we fell in love with the high-performing 16bit ISA NE2000. The NE2000 and its innumerable clones served to drive the price down, but they weren't designed for servers doing more than simple file serving.

Busy servers needed faster network interface cards, for which they needed a faster bus than the 16bit ISA AT bus. Faster buses duly appeared and 32bit cards duly arrived in the form of 32bit EISA and MCA-bus network cards.

Apart from the ill-starred NE3200 (which suffered compatibility problems) and its Intel clone, these 32bit cards were smart. They were bus-mastering adaptors using direct memory access and didn't need to interrupt the CPU for every network packet transferred. The EISA bus triumphed over IBM's MCA bus thanks to IBM's marketing skills and still powers most of the really mission-critical servers thanks to Compaq.
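The gain from bus-mastering is easy to see with a back-of-envelope calculation. A sketch in Python; the frame size and per-interrupt cost below are illustrative assumptions, not measurements:

```python
# Interrupt load from servicing one interrupt per packet, versus letting a
# bus-mastering DMA adaptor move frames to memory without troubling the CPU.
# Frame size and per-interrupt cost are illustrative assumptions.

def interrupts_per_second(link_mbps, frame_bytes=1500):
    """Frames arriving per second on a saturated link."""
    return (link_mbps * 1_000_000) / (frame_bytes * 8)

def cpu_fraction(link_mbps, usec_per_interrupt=20, frame_bytes=1500):
    """Fraction of one CPU spent just servicing per-packet interrupts."""
    return interrupts_per_second(link_mbps, frame_bytes) * usec_per_interrupt / 1_000_000

for mbps in (10, 100):
    print(f"{mbps:3d} Mbps: {interrupts_per_second(mbps):7.0f} frames/s, "
          f"{cpu_fraction(mbps):.0%} of CPU at 20 us/interrupt")
```

At 10Mbps the per-packet interrupt overhead is noise; at 100Mbps it becomes a meaningful slice of the processor, which is why the smart cards took the work onto themselves.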

Today, although the 32bit PCI bus is predominant in servers and on the desktop, many servers include an EISA bus for backwards compatibility with EISA network cards. The only caveat is that PCI cards should be used wherever possible, since the EISA bus is effectively hung off the PCI bus and thus shares its bandwidth. So an EISA network card going flat out will be using up a share of the PCI bus.

To network efficiently you need a PCI-bus card in the desktops and one or more PCI or EISA-bus cards in the server. You might settle for a network interface card on a chip in the desktop, but only at the bottom end because network cards are inexpensive these days and it's sometimes a lot easier to switch cards than to try to figure out why they won't plug and play.

So, as a rule of thumb for the busy Windows network the network card should be 10Mbps Ethernet or 100Mbps Ethernet and it should be sitting in a PCI bus. There are a few ISA Fast Ethernet cards around, but they're only really useful in transitional situations since there's an obvious mismatch between the available bandwidth of the ISA bus and Fast Ethernet.
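The mismatch is plain from the raw numbers. A rough Python sketch, using theoretical peak figures and ignoring protocol overheads:

```python
# Rough comparison of ISA bus and Fast Ethernet bandwidth.
# These are theoretical peaks; real-world throughput is lower still.
isa_peak = 2 * 8            # 16-bit (2-byte) transfers at 8 MHz -> 16 MB/s peak
isa_typical = isa_peak / 2  # a 16-bit ISA transfer takes at least two bus cycles -> ~8 MB/s
fast_ethernet = 100 / 8     # 100 Mbps -> 12.5 MB/s

print(f"ISA (typical): {isa_typical} MB/s, Fast Ethernet: {fast_ethernet} MB/s")
```

Even being generous to ISA, a single Fast Ethernet card at full tilt wants more bandwidth than the bus can deliver, and the card has to share that bus with everything else in the machine.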

That?s pretty much where the network card is today. There is a huge installed base of Ethernet hardware which needs to be moved on a generation for the promised benefits of Windows and network computers to be realised.

There's really no alternative to Fast Ethernet, although Hewlett Packard tried hard with its 100VG-AnyLAN 100Mbps technology. The advantage of HP's approach was that 100VG was far more refined than Ethernet, and by implication Fast Ethernet, which is exactly what its name says: simply faster. HP's 100VG is both faster and smarter, and it was better tuned to multimedia applications as well as being a suitable migration path for Token Ring and 10Mbps Ethernet.

Although 100VG continues for the time being, HP has thrown in its lot with the Gigabit Ethernet crew, as has almost everyone else. Since the best way for desktops and servers to prepare to exploit Gigabit Ethernet backbones is through Fast Ethernet there is really no alternative. ATM on the desktop was an exciting prospect, but it remains a niche while Ethernet is good enough.

So, with Ethernet established as the predominant network interface and faster Ethernet as the favourite to take us into the next millennium, is there any real opposition? IBM's Token Ring takes the next largest segment of the market, well behind Ethernet worldwide but with a sizeable following in IBM's specific big business markets. Beyond that there might still be a few Arcnet adaptors knocking about, but in truth the only realistic opposition to Ethernet is ATM. And competition from ATM to the desktop is receding in the face of commercial reality, simply stated as "how do we implement it?"

So far 10Mbps Ethernet has coped with the growth of networking by turning to switched hubs which are more sparing of bandwidth, by segmenting networks into smaller chunks as the network operating system software became smarter, and by taking advantage of the increased performance of PC systems. The next step is to migrate to Fast Ethernet.

Network interface cards are also going to get smarter. They'll have smarter electronics so that they can process faster, and they will be smarter in the management sense, becoming an integral part of systems management rather than just network management, which manages network interface cards without regard for the way they support applications.

3Com is talking up new technology which it says will enable the network interface card to react dynamically to changing system conditions, to configure itself on the fly, and to raise management control to a new level. 3Com Dynamic Access enables the network interface card to optimise itself and to prioritise traffic, for example to deliver streaming video to the desktop. Like HP's 100VG, which offers similar traffic prioritisation schemes, 3Com's scheme is proprietary and requires matching 3Com hardware for the full benefits of the investment to be realised.

Digital has just announced a Fast Ethernet chip with power management that requires a software signal.

It's a chip that can participate in a power-managed system, but will wake up when it receives a network request. It's a simple idea, but how much time do computers spend waiting for a sleeping desktop with a shared drive somewhere on the network to wake up?
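A mechanism in the same spirit is the "magic packet" pioneered by AMD (later standardised as Wake-on-LAN), where a dozing card watches for a frame containing its own MAC address repeated sixteen times. A minimal sketch in Python, assuming the usual UDP broadcast on port 9; the MAC address in the commented-out call is a placeholder:

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN 'magic packet': six 0xFF bytes followed by the
    target MAC address repeated 16 times (102 bytes in total)."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Send the magic packet as a UDP broadcast; the sleeping network card,
    not the operating system, recognises it and powers the machine up."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))

# wake("00:11:22:33:44:55")  # placeholder MAC address
```

The point is that the intelligence sits on the card: the host can be fast asleep, yet the network request still gets through.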

To be really effective, advances in systems management for network interface cards have to mesh with the operating system. In a way, smarter cards are possible now, but they are waiting for smarter operating systems. Smarter operating systems are also a prerequisite for the next major advance in server technology aimed at preventing CPU bottlenecks.

Having claimed that the network interface card is the only bottleneck in the system, I have to admit that a busy server with two or three PCI network cards going full steam can start to stress the CPU.

In part this is an operating system issue, sometimes revealed when adding extra processors to an SMP box fails to result in better performance. But it's also a hardware issue, because even with busmastering adaptors a dual Pentium Pro has to devote a lot of CPU cycles to push 200 to 300Mbps through a PCI I/O bus. Inevitably, when PCI goes to 64-bit and Intel pushes up the bus speed to 66MHz, the bottleneck will start to move back to the CPU.
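The arithmetic behind that claim, taking theoretical peak PCI bandwidth (bus width times clock) and ignoring arbitration and other overheads:

```python
# Theoretical peak PCI bandwidth: bus width (bits) x clock (MHz) gives Mbits/s.
def pci_peak_mbps(width_bits: int, clock_mhz: int) -> int:
    return width_bits * clock_mhz

pci_32_33 = pci_peak_mbps(32, 33)   # 1056 Mbps, about 132 MB/s
pci_64_66 = pci_peak_mbps(64, 66)   # 4224 Mbps, about 528 MB/s

# A server pushing 300 Mbps of network traffic against today's 32-bit bus:
nic_load = 300
print(f"{nic_load / pci_32_33:.0%} of a 32-bit/33MHz PCI bus")
```

On 32-bit/33MHz PCI that network traffic already claims over a quarter of the bus's theoretical peak; quadruple the bus with 64-bit/66MHz and, as the article says, the pressure moves back onto the processor.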

Fear not: Intel has an answer (it says here). The Intelligent I/O bus, or I2O, which Intel has been bringing to life with the assistance of server manufacturers such as Compaq and Hewlett Packard, is set to relieve the server CPU of much of the mundane work of shuffling data around the computer network.

I2O simply bypasses the CPU, which gets on with the business of processing while the I2O bus handles the off-CPU activities.

In a way that's simply a refinement of the idea that Metcalfe came up with when he invented Ethernet. Thus what comes around now goes around faster, which is probably a good thing.