The Universal Volume Manager (UVM) feature on the USP enables LUN virtualisation. To access external storage, storage ports on the USP are configured as “External” and connected either directly or through a fabric to the external storage. See the first diagram for an example of how this works.
As far as the external storage is concerned, the USP is a Windows host, and the settings on the external array should match this. Within Storage Navigator, each externally presented LUN appears as a RAID group. This RAID group can then be presented as a single LUN or, if required, carved up into multiple individual LUNs.
The ability to subdivide external storage isn’t often mentioned by HDS; it’s usually assumed that external storage will be passed through the USP on a 1:1 basis, and if the external storage is ever to be detached in the future then 1:1 mapping is essential. However, if a configuration is being built from scratch, external storage could be presented as larger LUNs and subdivided within the USP. This is highlighted in the second diagram.
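The carve-up arithmetic is simple enough to sketch. The snippet below is purely illustrative (the function name and block-based accounting are my own assumptions, not an HDS tool): it splits the capacity of an externally presented RAID group into a number of near-equal LDEVs.

```python
def carve_raid_group(total_blocks: int, ldev_count: int) -> list[int]:
    """Split an external RAID group's capacity (in 512-byte blocks)
    into ldev_count LDEVs; any remainder goes to the last LDEV.
    Illustrative only - not an HDS utility."""
    if ldev_count < 1 or total_blocks < ldev_count:
        raise ValueError("cannot carve that capacity into that many LDEVs")
    base = total_blocks // ldev_count
    sizes = [base] * ldev_count
    sizes[-1] += total_blocks - base * ldev_count  # absorb the remainder
    return sizes

# e.g. a 1 TiB external LUN (2147483648 blocks) carved into 3 LDEVs
print(carve_raid_group(2147483648, 3))
```

In practice you would also want each LDEV aligned to whatever internal boundary the array uses, which this sketch deliberately ignores.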
At this point, external storage is being passed through the USP but the data still resides on the external array. The next step is to move the data onto LUNs within the USP itself. Here’s the tricky part: the target LUNs in the USP need to be exactly the same size as the source LUNs on the external array. What’s more, they need to match the size as the USP sees them, which is *not* necessarily the size reported by the external storage itself. This size discrepancy arises because the USP represents storage in units of tracks. In my experience, the most reliable way to solve the problem was simply to present the LUN to the USP and check what size it appeared as. When I first used UVM, HDS were unable to provide a definitive method for calculating the size a LUN would appear as within Storage Navigator.
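To illustrate the rounding effect, here is a minimal sketch. The track size is a deliberate placeholder, not a documented USP constant; the real value depends on the emulation type, which is exactly why the only safe method remains presenting the LUN and reading the size back.

```python
def usp_visible_blocks(external_blocks: int, blocks_per_track: int = 96) -> int:
    """Round an external LUN's size (in 512-byte blocks) down to a whole
    number of tracks. The default of 96 blocks per track (48 KiB) is an
    ASSUMED placeholder value, not a published USP figure."""
    return (external_blocks // blocks_per_track) * blocks_per_track

# A LUN of 1000 blocks is not track-aligned, so some capacity disappears:
print(usp_visible_blocks(1000))  # -> 960
```

The point of the sketch is the shape of the problem: any source LUN whose size isn’t a whole number of tracks will appear slightly smaller through the USP than it does on the external array.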
The benefits of virtualisation for migration can fall down at this point. If the source array is particularly badly laid out, the target array will inherit the same jumble of LUN sizes. In addition, a good deal of planning is needed to ensure the migration of LUNs into the USP doesn’t suffer from performance issues.
Data is migrated into the USP using Volume Migration, ShadowImage or Tiered Storage Manager (TSM). This clones the source LUN within the USP to a LUN on an internal RAID group. Depending on the migration tool, it may then be necessary to stop the host in order to remap it to the new LUNs. This completes the migration process. See the additional diagrams, which conceptualise migration with TSM.
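Because the target sizes must match exactly, it’s worth sanity-checking the plan before any cloning starts. The helper below is a hypothetical sketch of my own (not part of any HDS toolkit): it pairs each source LUN, at the size the USP reports for it, against the planned internal target and flags anything that isn’t an exact match.

```python
def check_migration_plan(source_blocks: dict[str, int],
                         target_blocks: dict[str, int]) -> list[str]:
    """Return the names of source LUNs whose planned internal target is
    missing or not an exact size match. Sizes are in 512-byte blocks
    *as reported by the USP*, not as reported by the external array."""
    problems = []
    for lun, size in source_blocks.items():
        if target_blocks.get(lun) != size:
            problems.append(lun)
    return problems

# Example: one target is a single block short, so it gets flagged
sources = {"ext_00": 209715200, "ext_01": 419430400}
targets = {"ext_00": 209715200, "ext_01": 419430399}
print(check_migration_plan(sources, targets))  # -> ['ext_01']
```

Running a check like this across the whole estate before the outage window is far cheaper than discovering a one-block mismatch mid-migration.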
Now, this example is simple; imagine the complexities if the source array is replicated. Replication has to be broken, potentially requiring a host outage. It then needs to be re-established within the USP, and the data has to be fully replicated to the remote location before the host data can be confirmed as consistent for recovery. This process could take some time.
In summary, here are the points that must be considered when using USP virtualisation for migration:
- Connecting the external array to the USP requires a Universal Volume Manager licence.
- UVM is not free!
- Storage ports on the USP have to be reserved for connecting to the external storage.
- LUN sizes from the source array have to be retained.
- LUN sizes aren’t guaranteed to be exactly the same as the source array.
- Once “externalised”, LUNs are replicated into the USP using ShadowImage/TSM/Volume Migration.
- A host outage may be required to re-zone and present the new LUNs to the host.
- If the source array is replicated, this adds additional complication.
I’ll be writing this blog up as a white paper on my consulting company’s website at www.brookend.com. Once it’s up, I’ll post a link on the blog. If anyone needs help with this kind of migration, then please let me know!