Basic steps to efficient data migration

Mountains may have to be moved next year, says David Galton-Fenzi

Next year, we think more companies than ever will be transferring large volumes of data between storage types, formats or systems. As the global economy recovers, merger and acquisition activity may pick up, forcing businesses to integrate their systems.

Many organisations also delayed storage upgrades in 2009, adding disk space to existing storage infrastructure as a temporary measure.

Whatever the trigger, the goal should be to move data transparently from A to B without changing its profile. Taking a structured approach avoids costly delays and disruption to business continuity.

There is no standard protocol for data migration. That is unsurprising: it would be impossible to develop one that encompasses the full range of hardware and software involved. However, there should be some guidelines.

Three steps apply universally and provide a general framework for every data migration project. The first is a data audit: senior management must develop an understanding of what the company is dealing with in terms of data migration.

This means profiling the organisation's data with a file analysis tool and asking three questions: what needs to be moved, and why? What is the new storage device, and how will the data be read in the new environment? And what should the profile of the data be after migration?
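Commercial file analysis tools do this at scale, but the underlying audit is easy to picture. Below is a minimal Python sketch, assuming a hypothetical mount point /data, that walks a directory tree and summarises file counts and sizes by type; a real profiling exercise would also look at ownership, age and access patterns.

import os
from collections import Counter, defaultdict

def profile_tree(root):
    # Walk the tree and summarise counts and total sizes by extension.
    count_by_ext = Counter()
    bytes_by_ext = defaultdict(int)
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                size = os.path.getsize(path)
            except OSError:
                continue  # unreadable file: skip it rather than abort the audit
            ext = os.path.splitext(name)[1].lower() or "<none>"
            count_by_ext[ext] += 1
            bytes_by_ext[ext] += size
    for ext, count in count_by_ext.most_common(10):
        print(f"{ext:>8}  {count:>8} files  {bytes_by_ext[ext] / 2**20:10.1f} MB")

profile_tree("/data")  # "/data" is a hypothetical mount point to audit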

The second step is to plan the migration. IT managers should choose the method that suits their data and needs; it is easier to assemble a package of technologies to suit the data than to tailor the data to a specific method. They should ask what each method copies and how: does it copy only the file contents, or attributes such as metadata as well? And who needs to control access to the information?
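The distinction matters more than it sounds. As a small illustration, Python's standard library offers two copy calls that differ in exactly this way (the file paths here are hypothetical):

import shutil

# copyfile moves only the bytes: permission bits and timestamps are
# lost, so the data's profile changes on the new storage.
shutil.copyfile("report.doc", "/new-storage/report.doc")

# copy2 also carries permission bits and timestamps across, keeping
# the profile of the data as close as possible to the original.
shutil.copy2("report.doc", "/new-storage/report.doc")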

A decade ago, migration meant copying data to disk or tape and physically transporting it to the new environment, where it was replicated to the new set-up. This approach is cheap, but relatively unreliable and labour-intensive.

You can also use wide area links to transfer data down a line, but this chews up bandwidth and interrupts real-time work on both sides of the network. It can take weeks.

Some organisations now use data migration software, which profiles data, replicates it to a consolidated device and mirrors it at both locations. It then creates snapshots of the new volumes, which can be tested in parallel before the final cutover. Such software typically uses compression, de-duplication and encryption to cut transfer volumes and protect data in transit.
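Vendors implement these features in different ways, but two of them are easy to sketch. The Python fragment below illustrates the general principle of content-hash de-duplication combined with compression; it is an illustrative sketch under assumed names (content_hash, ingest), not any product's method, and encryption is omitted for brevity.

import gzip
import hashlib
import os
import shutil

def content_hash(path):
    # Hash the file in chunks so large files do not exhaust memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def ingest(path, store_dir):
    # Store each unique file once, keyed by its content hash, and
    # gzip-compress it; duplicates cost nothing beyond the hash check.
    digest = content_hash(path)
    target = os.path.join(store_dir, digest + ".gz")
    if not os.path.exists(target):
        with open(path, "rb") as src, gzip.open(target, "wb") as dst:
            shutil.copyfileobj(src, dst)
    return digest  # record this against the file's original name and path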

Most organisations use a combination of tools, with mission-critical data often physically transferred to a new site, while large volumes of less crucial data are moved over the network. Once the data has been transferred, most organisations will look at their internal data storage and perhaps opt for a cost-effective tiered storage system.

In tiering, different types of data are assigned to different storage media based on factors such as frequency of use or the security level required. This helps reduce total storage costs by reserving the most expensive storage for tier-one data.

It helps organisations make the most of their storage space, but does require a strategy for transferring new data to different storage environments.
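As a simple illustration of access-based classification, the Python sketch below maps a file to a tier by how long it has sat unread. The 30- and 180-day thresholds are assumptions for the example rather than industry standards, and some systems mount storage with access-time tracking disabled, in which case another signal would be needed.

import os
import time

DAY = 86400  # seconds

def assign_tier(path, now=None):
    now = time.time() if now is None else now
    idle_days = (now - os.path.getatime(path)) / DAY
    if idle_days < 30:
        return 1  # hot data: fast, expensive tier-one storage
    if idle_days < 180:
        return 2  # warm data: mid-range disk
    return 3      # cold data: cheap bulk or archive storage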

Customers need to classify data according to access criteria and decide how much of the migration across the internal network will be automated. Transfer could then be handled by host-based applications (including data backup tools) or by migrating between virtual machines on different physical servers.
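A minimal sketch of such automation, assuming hypothetical mount points /tier1 and /tier2: a scheduled job sweeps tier-one storage and demotes files that have not been touched within a policy window.

import os
import shutil
import time

def sweep(tier1_root, tier2_root, max_idle_days=30):
    # One pass of an automated policy: move files untouched for more
    # than max_idle_days off tier one, preserving the directory layout.
    cutoff = time.time() - max_idle_days * 86400
    for dirpath, _dirnames, filenames in os.walk(tier1_root):
        for name in filenames:
            src = os.path.join(dirpath, name)
            if os.path.getatime(src) >= cutoff:
                continue  # accessed recently: stays on tier one
            dst = os.path.join(tier2_root, os.path.relpath(src, tier1_root))
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            shutil.move(src, dst)  # copies data and metadata, then deletes source

sweep("/tier1", "/tier2")  # hypothetical mount points for the two tiers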

David Galton-Fenzi is group sales director at Zycko