This is a series of posts that cover the features of Hitachi’s new enterprise storage platform, the VSP (Virtual Storage Platform), also sold by HP as the P9500 array. Previous posts:
Hitachi have redesigned the VSP array to provide significant performance improvements over the previous USP and USP V models. These changes may not be immediately obvious, but they are worth discussing; as with any new technology, the devil is in the detail. As background to this post, I suggest you read my previous discussion on Monolithic v Modular architectures.
USP V Ports
The USP V array design (reproduced here in schematic format) consists of a central switched architecture with both shared memory and cache. Processing takes place on FEDs (Front-End Directors) and BEDs (Back-End Directors), which handle host and disk I/O respectively.
Front-end processors are shared between port pairs: on a 16-port front-end card, for example, there are 8 processors. It is common to see scenarios where either the port bandwidth or the port processing power is fully utilised; with very small block-size I/O, for instance, a FED processor can be maxed out. The storage administrator therefore has to understand host I/O profiles and distribute workload accordingly, or risk performance impact as ports are loaded up with hosts. This fixed design isn't desirable in any array, and when external storage virtualisation is used it can result in the over-purchase of storage ports purely to ensure sufficient processing capacity is available. Bear in mind that a port pair (i.e. its processor) can have only one identity: host port, external port, or source/target port for replication.
The VSP changes the FED/BED architecture by sharing the processors for use across all physical ports. This is shown schematically in the following diagram (reproduced from Hitachi presentation – I will be working on a better representation). Port processors are now on the Virtual Storage Directors (VSDs).
By decoupling the physical port and processor, the VSP provides the ability to maximise both port and processor utilisation, allowing more work to be driven through the array. This is a key benefit when virtualising external storage, as the static nature of the previous design has been overcome, so more storage can be externalised.
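The effect of decoupling can be illustrated with a toy model. This is not Hitachi code, and the capacity and workload figures are invented for illustration; it simply contrasts a fixed port-pair-to-processor binding (USP V style) with a shared processor pool (VSP style) when one port pair runs hot:

```python
# Hypothetical model: fixed port/processor pairing vs a shared pool.
# All numbers are arbitrary IOPS-like units, chosen for illustration.

PROCESSOR_CAPACITY = 100  # per-processor throughput ceiling

# Load arriving on four port pairs; one pair is hot, the rest are light.
port_pair_load = [180, 20, 10, 10]

# USP V style: each port pair is bound to its own processor, so the hot
# pair saturates its processor while the other processors sit mostly idle.
fixed_served = [min(load, PROCESSOR_CAPACITY) for load in port_pair_load]

# VSP style: the processors form one pool, so total capacity is available
# to any port regardless of which pair the load arrives on.
pool_capacity = PROCESSOR_CAPACITY * len(port_pair_load)
shared_served = min(sum(port_pair_load), pool_capacity)

print("fixed mapping serves:", sum(fixed_served))  # 140 units; 40 turned away
print("shared pool serves:  ", shared_served)      # all 220 units
```

Under the fixed design the hot pair loses 40 units of work even though the array as a whole has plenty of spare processing; the pooled design absorbs the same skewed workload without intervention from the administrator.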
There is an additional benefit to abstracting ports and processors: when firmware/code upgrades are performed on the VSP, there is no need to worry about path failover. Typically, code upgrades temporarily disrupt I/O to hosts. This isn't usually a problem, as production environments dual-path all connections; however, if connectivity problems exist, then code upgrades can cause host outages. This doesn't occur with the VSP.
Although processor sharing is a simple change, it has wider implications: array performance improves, efficiency increases, and variable workloads become easier to manage. The change also provides a basis for future enhancements that could be even more compelling. Virtualising processor workload introduces the ability to:
- Implement QOS (Quality of Service) on I/O requests. Although basic server prioritisation occurs today, full virtualisation enables real QOS to be implemented on I/O workload in a much more granular fashion.
- Implement Multi-tenancy. The USP V already offered workload segmentation through Storage Partitions (SLPRs) and Cache Partitions (CLPRs). The VSP has the ability to create virtual partitions that are also prioritised in terms of workload. This meets requirements of organisations to offer secure multi-tenancy without having to dedicate physical hardware.
Hitachi have moved forward by producing a platform that is more scalable and offers potential future enhancements for highly scalable environments. Although the VSP is a step up from the USP, it looks to me like only a single step on an evolving journey.