The launch of vSphere 5 and its new storage-related features will set the precedent for a complete rethink of how a new datacenter’s storage infrastructure should be designed and deployed. vSphere 5 is not only an unabashed attempt at cornering every aspect of the server market but also a response to the growing need for methodical scalability, one that merges the I.T. silos and consequently combines applications, servers, SANs and storage into a single comprehensive stack. In an almost ironic shift back towards the original principles of the mainframe, VMware’s influence has already led vendors such as EMC with their VMAX and HDS with their VSP to adopt a scale-out rather than scale-up approach to storage. With this direction at the forefront of most storage vendors’ roadmaps for the foreseeable future, the selection criteria now reach far beyond the raw capacity requirements I.T. departments are traditionally used to. With such considerations, the end of the traditional storage array could come sooner than we think as the makeup of future datacenters is transformed.
With the emergence of the Private Cloud, storage systems are no longer considered mere ‘data banks’ by the majority of end users, but rather ‘on-demand global pools’ of processors and volumes. End users have finally matured into accepting dynamic tiering, thin / dynamic provisioning, wide striping, SSDs etc. as standard for their storage arrays as they demand optimum performance. What will dictate the next phase is that the enterprise storage architecture will no longer be accepted as a model for unused capacity but rather one that facilitates further consolidation and on-demand scalability. Moves in this direction have already taken place, with some vendors adopting sub-LUN tiering (in which LUNs become merely containers) and 2.5-inch SAS disks. The next key shift in this datacenter transformation is the integration of the server and storage stacks brought about by vSphere 5’s initiatives, such as VAAI, VASA, SDRS and Storage vMotion.
Leading this revolution is Paul Maritz, who upon becoming head of VMware set his vision out clearly: rebrand VMware from a virtualization platform into the essential element of the Cloud. To do that, VMware needed to showcase its value beyond the Server Admin’s milieu. Introduced with vSphere 4.1, the vStorage APIs for Array Integration (VAAI) were the first major step towards a comprehensive Cloud vision that incorporated the storage stack. Now, with further enhancements in vSphere 5, VAAI compatibility will be an essential feature for any storage array.


The second primitive, Block Zeroing, is also related to the virtual machine cloning process, which in essence is a file copy operation. When a VM is copied from one datastore to another, all of the files that make up that VM are copied. For a 100 GB virtual disk file holding only 5 GB of data, there are blocks that contain data as well as empty ones, i.e. free space where data is yet to be written. Without offload, the cloning process entails not just the I/O for the actual data but also numerous repetitive SCSI commands to the array for each of the empty blocks that make up the virtual disk file.
Block Zeroing removes the need to send these redundant SCSI commands from the host to the array. By simply informing the storage array which blocks are zeros (via the SCSI WRITE SAME command), the host offloads the work to the array without having to send a command to zero out every block within the virtual disk.
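To make the saving concrete, here is a back-of-the-envelope sketch (not VMware code) of the command traffic with and without the offload. The 1 MB write granularity and the function names are illustrative assumptions; the point is that the host’s per-block zero writes collapse into a single offloaded command.

```python
# Toy model of the Block Zeroing offload. Without VAAI the host issues
# one zero-filled WRITE per empty block; with VAAI it issues a single
# WRITE SAME describing the extent and the array zeroes it internally.

BLOCK_MB = 1  # assumed write granularity, for illustration only


def scsi_commands_without_vaai(disk_gb, used_gb):
    """One zero-filled WRITE travels over the wire per empty block."""
    empty_mb = (disk_gb - used_gb) * 1024
    return empty_mb // BLOCK_MB


def scsi_commands_with_vaai(disk_gb, used_gb):
    """A single WRITE SAME covers the whole empty extent."""
    return 1


# The 100 GB virtual disk with 5 GB of data from the example above:
print(scsi_commands_without_vaai(100, 5))  # 97280 commands on the wire
print(scsi_commands_with_vaai(100, 5))     # 1
```

Roughly 97,000 SCSI commands reduced to one, which is the whole argument for pushing this work down to the array.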



To counter this, the VAAI Thin Provisioning primitive includes an Out-of-Space condition that monitors space usage on thin-provisioned LUNs. With advance warning, users can avoid catastrophically running out of physical space.
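The idea behind the Out-of-Space condition can be sketched as follows. This is a minimal illustrative model, not array firmware: the class name, the 75% soft threshold and the message text are all assumptions, but they show the principle of a thin LUN warning the admin before writes start to fail.

```python
# Minimal sketch of an out-of-space monitor on a thin-provisioned LUN:
# far more capacity is provisioned than physically exists, so the LUN
# raises a soft warning as physical usage climbs, and refuses writes
# only once physical capacity is genuinely exhausted.

class ThinLun:
    def __init__(self, provisioned_gb, physical_gb, warn_pct=75):
        self.provisioned_gb = provisioned_gb  # what hosts believe exists
        self.physical_gb = physical_gb        # what actually exists
        self.used_gb = 0
        self.warn_pct = warn_pct              # illustrative soft threshold

    def write(self, gb):
        if self.used_gb + gb > self.physical_gb:
            raise IOError("out of space: writes to this LUN would fail")
        self.used_gb += gb
        pct = 100 * self.used_gb / self.physical_gb
        if pct >= self.warn_pct:
            return f"WARNING: thin LUN at {pct:.0f}% of physical capacity"
        return "ok"


lun = ThinLun(provisioned_gb=2000, physical_gb=500)
print(lun.write(300))  # ok: 60% of physical capacity used
print(lun.write(100))  # 80% used -> soft warning, time to add disk
```

The advance warning is the whole point: the admin gets to react while writes still succeed, instead of discovering the problem when VMs pause.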
Another aspect of this primitive is what is termed Dead Space Reclamation (built on the SCSI UNMAP command). It provides the ability to reclaim blocks on thin-provisioned LUNs whenever virtual disks are deleted or migrated to different datastores.
Previously, when a snapshot deletion, a VM deletion or a Storage vMotion took place, VMFS would delete the file by updating its own metadata pointers, without informing the array that the underlying blocks had been freed. Blocks previously used by the virtual machine would still be reported as in use, so the array’s storage consumption figures were incorrect.
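The before-and-after behaviour can be sketched with a toy model. Everything here is illustrative (the class names and block sets are assumptions, not VMFS internals), but it captures the mismatch: the filesystem frees its pointers while the array still counts the blocks as allocated, until an UNMAP tells it otherwise.

```python
# Toy model of Dead Space Reclamation. A plain delete only touches the
# filesystem's own metadata; the array keeps the blocks allocated. A
# delete that also issues UNMAP lets the array return them to the pool.

class Array:
    def __init__(self):
        self.allocated = set()  # blocks the array believes are in use

    def write(self, blocks):
        self.allocated.update(blocks)

    def unmap(self, blocks):
        self.allocated.difference_update(blocks)  # reclaim into the pool


class Vmfs:
    def __init__(self, array):
        self.array = array
        self.files = {}

    def create(self, name, blocks):
        self.files[name] = set(blocks)
        self.array.write(blocks)

    def delete(self, name, send_unmap=False):
        blocks = self.files.pop(name)  # filesystem metadata updated
        if send_unmap:
            self.array.unmap(blocks)   # array now knows the space is dead


array = Array()
fs = Vmfs(array)
fs.create("vm1.vmdk", range(100))
fs.delete("vm1.vmdk")                   # no UNMAP sent
print(len(array.allocated))             # 100: array still reports in use
fs.create("vm2.vmdk", range(100, 200))
fs.delete("vm2.vmdk", send_unmap=True)  # UNMAP sent
print(len(array.allocated))             # 100: vm2's blocks were reclaimed
```

Without the UNMAP the first VM’s 100 blocks stay “in use” forever from the array’s point of view, which is exactly the incorrect consumption reporting described above.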

Back in 2009, Paul Maritz boldly proclaimed the dawning of ‘a software mainframe’. With vSphere 5 and VAAI, part of that strategy aims at transforming the role of the storage array from a monolithic box of capacity into an off-loadable virtual resource of processors that helps ESX servers perform better. Features such as SRM 5.0, SDRS and VASA (details on these to follow in upcoming blogs) further this push. At VMworld 2011, Maritz proudly stated that a VM is born every six seconds and that there are more than 20 million VMs in the world, with more than 5.5 vMotions taking place every second. With figures that serious, whether you’re deploying VMware or not, it’s a statement that would be foolish to ignore when budgeting and negotiating for your next storage array.