On January 26th, just prior to the official announcement of Oracle’s takeover of SUN Microsystems, I confidently predicted in my article “SUN’s Oracle Merger”, with regard to SUN’s storage portfolio, that “One certainty is that the OEM partnership with HDS’ enterprise arrays will continue.” Perhaps it’s time to eat some humble pie. If current indications are anything to go by, it’s more than likely that SUN’s agreement to resell HDS Enterprise Storage is coming to an abrupt end.
Oracle clearly have a different business approach to their customers than SUN Microsystems did, and that includes dealing with Hitachi Data Systems. Admittedly I’ve never been a great fan of SUN’s storage systems, often finding them to be the epitome of a server company that builds storage, i.e. a box with lots of disks and sparse in-built intelligence. But with the recent launch of the 7000 series (which may still come under scrutiny from NetApp, given its more-than-coincidental similarities) and the intelligent storage systems built by Larry Ellison’s other plaything, Pillar Data Systems, their modular market is now pretty well covered. How Oracle/SUN plan to cover the gap left by a potential removal of the USPV / USPVM (ST9990V / ST9985V) may lie in the approach shown with the recent Exadata V2. Oracle databases directly attached to boxes chock-full of flash drives may well be the answer Oracle/SUN will be offering to free themselves from the entanglement of Enterprise storage vendors.
While there seems to be a game plan of some sort in the Oracle/SUN camp, if this supposition were to come true it would have major implications for Hitachi Data Systems and their next steps forward. Personally I’d be happy if this happened, as it may at last be the kick up the ‘Back End Director’ that HDS need to finally start marketing to and addressing a customer base, certainly within the EMEA region, that is still oblivious to their existence. I’ve often shown my frustration at HDS and their lack of drive to push their brand and products to consumers who have settled for inferior products from other vendors that were merely marketed better. HDS rested on the laurels of having HP and SUN rebadge and resell their Enterprise Systems, doing all the hard work for them; the downside was that while HDS’ crossbar architecture storage systems and virtualization technology were firmly placed in thousands of datacenters, this was unbeknownst to the IT Directors that bought them. Another issue is that, unlike the SUN relationship in which only the colour of the doors and the SUN badge changed, HP buy HDS Enterprise Storage Systems and actually change the microcode, making them more ‘HP’ than ‘HDS’. So a true, untainted HDS USPV could now potentially only be purchased from Hitachi Data Systems themselves. This could be the beginning of an HDS revolution or a slow withering death of sales.
But I’m confident that if the leadership at HDS take the right steps and make the right investment, this could finally be the key to the market share they have been lacking. There is no doubting the quality of the HDS Enterprise range, from the still-reliable crossbar architecture to the virtualisation of external arrays behind the USPV systems. Hence maintaining those sales and support deals with existing SUN customers may not be such a great overhaul, especially with an updated USPV on the horizon. The real challenge lies in drawing customers to the equally good modular AMS and WMS ranges, which are rarely found in datacenters, let alone virtualized behind their own Enterprise Storage Systems. The HNAS range, made by BlueArc, is also one to be reckoned with, but it is hardly making NetApp sales guys break a sweat as potential customers are often unaware of its existence. And all the latest initiatives HDS have taken, such as High Availability Manager, IT Operations Analyzer, or the Hitachi Content Archive Platform (HCP), excellent as they are, are still not making the waves and marketing noise their credentials deserve.
So in a twist of fate, should the continuation of the SUN OEM relationship fall through, Hitachi Data Systems may be forced into becoming the marketing machine it has up to now shied away from being, in order to maintain and advance its presence in the industry. The positive thing is that the products are, and always have been, good enough – now it’s time for the marketing guys to promote them.
The Best Backup Solution for vSphere 4?
When VMware first introduced VCB as part of the ESX package, it never seemed more than a temporary/complementary solution for customers with a small environment of 100 VMs or fewer. With the launch of vSphere 4 and the subsequent introduction of APIs which allowed external applications and scripts to communicate directly with the ESX host, it was apparent that VMware was beginning the gradual move of offloading the backup solution to the backup experts. Now, having run with vSphere 4 for more than six months, it seems a good time to assess who has taken advantage of, and the lead in incorporating, all the latest features of ESX 4.
To recap: with VMs being encapsulated in a single disk file, the principle was that image-level backups instead of traditional file-level ones allowed backups to be much faster. With vSphere 4, VMware introduced improved support for thin provisioning, which not only had the potential to reduce the amount of storage actually allocated but also to shorten backup windows. The idea was straightforward: thin provisioning gives the system admin the ability to over-commit disk space, especially handy as the majority of VMs don’t use all of their allocated disk space. This eradicates the problem most disk-to-disk backup applications have with image-level backups, i.e. no more backing up of a complete virtual disk file when most of it wasn’t actually used.
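To make the thin provisioning point concrete, here is a minimal sketch (my own illustration, not part of vSphere itself) of how a script could report committed versus provisioned space and the disk format per VM. It assumes the pyVmomi Python bindings and uses placeholder vCenter credentials:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Hypothetical vCenter details: replace host and credentials with your own.
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        committed = vm.summary.storage.committed        # bytes actually written
        provisioned = committed + vm.summary.storage.uncommitted
        print("%s: %.1f GB committed of %.1f GB provisioned"
              % (vm.name, committed / 1e9, provisioned / 1e9))
        for dev in vm.config.hardware.device:
            if isinstance(dev, vim.vm.device.VirtualDisk):
                # Thin disks only consume space as blocks are written.
                thin = getattr(dev.backing, "thinProvisioned", False)
                print("  %s: %s" % (dev.deviceInfo.label,
                                    "thin" if thin else "thick"))
finally:
    Disconnect(si)
```

The gap between committed and provisioned space is exactly the part of the image a thin-provisioning-aware backup never needs to read.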
Another new addition to vSphere 4 and its vStorage APIs was CBT (Changed Block Tracking). With CBT the VMkernel can track the changed blocks of a VM’s virtual disk. By simply querying that information with an API call to the VMkernel, backup applications are relieved of the burden of scanning for or keeping track of changed blocks themselves. This results in much quicker incremental backups, as the overhead of scanning the whole VM image for changes since the last backup is eradicated.
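As a rough sketch of what a backup application does with CBT (again assuming the pyVmomi bindings; the helper name and parameters below are mine), the vSphere API call QueryChangedDiskAreas returns the extents that have changed since a recorded changeId, while passing '*' returns every allocated area for an initial full backup:

```python
# Hedged sketch: walk the changed extents of one virtual disk via CBT.
# Assumes CBT is already enabled on the VM (changeTrackingEnabled=True) and
# that 'snapshot' is a snapshot just taken to freeze the disk state.
def changed_areas(vm, snapshot, disk_key, capacity_bytes, change_id="*"):
    """Return (offset, length) extents changed since change_id.

    change_id "*" means 'all allocated blocks' (a full backup); passing the
    changeId recorded from the disk backing at the previous backup returns
    only the blocks written since then (an incremental).
    """
    extents, offset = [], 0
    while offset < capacity_bytes:
        info = vm.QueryChangedDiskAreas(snapshot=snapshot,
                                        deviceKey=disk_key,
                                        startOffset=offset,
                                        changeId=change_id)
        extents.extend((a.start, a.length) for a in (info.changedArea or []))
        offset = info.startOffset + info.length
    return extents
```

The backup application then reads only those extents from the snapshotted disk, which is precisely where the shorter incremental windows come from.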
So in looking around for solutions which best incorporated these new features I eventually came across Veeam. Veeam utilises the thin provisioning feature to avoid seeking out empty disk blocks and needlessly backing them up, and also applies compression algorithms on the target backup device. Hence Veeam have a solution that reduces not only the amount of space used on the source host datastores but also that used on the target backup storage device.
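Purely as a toy illustration of the principle (this is emphatically not Veeam’s code, just a self-contained Python sketch under my own assumptions): read a disk image in fixed-size blocks, skip any block that is entirely zero, and compress the rest on the way to the backup target:

```python
import zlib

BLOCK = 1024 * 1024          # read the image in 1 MB blocks
ZERO = bytes(BLOCK)          # a reference block of all zeroes

def backup_image(source_path, target_path):
    written = skipped = 0
    with open(source_path, "rb") as src, open(target_path, "wb") as dst:
        offset = 0
        while True:
            block = src.read(BLOCK)
            if not block:
                break
            if block == ZERO[:len(block)]:
                skipped += 1                     # empty block: nothing to back up
            else:
                payload = zlib.compress(block, 6)
                # record offset and compressed length so the block can be restored
                dst.write(offset.to_bytes(8, "big"))
                dst.write(len(payload).to_bytes(4, "big"))
                dst.write(payload)
                written += 1
            offset += len(block)
    return written, skipped
```

The savings show up twice: unwritten space never leaves the source datastore, and what does get read lands compressed on the target.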
Furthermore, Veeam are currently the only third party whose product offers support for CBT, although ESXpress are promising something similar with their upcoming version 4. Veeam have come up with several different modes to utilise the CBT feature, namely SAN mode, Virtual Appliance mode and Network mode. Depending on your setup, each mode brings less I/O to each device and thus less resource consumption when performing backups, consequently leading to reduced backup windows.
So while Veeam are currently leading the way, the time is certainly ripe for more third-party solutions built on these APIs to be developed, making VM backup nightmares a thing of the past.
InfiniBand – Boldly Going Where No Architecture Has Gone Before
Back in 2005 we all knew that Fibre Channel and Ethernet would eventually support transmission rates of 10 Gbit/s and above, and now in 2010 that day has pretty much dawned on us. In the excitement of those days, the concern was always that the host’s I/O bus would need to move data at the same rate. Yet even with all the advancements of PCI-E, the nature of parallel buses is that their transmission rate can only be increased to a limited degree, so how was this potential barrier ever going to be solved? The solution being bandied around at the time was InfiniBand. Not only did it carry a name that seemed straight out of a Star Trek episode, but it also promised a ‘futuristic’ I/O technology which replaced the PCI bus with a serial network. That was five years ago, and bar a few financial services companies running trading systems, I hadn’t really seen any significant implementations or developments of the technology that was marketed with the phrase ‘to InfiniBand and beyond’. But two weeks ago that suddenly changed.
Before I delve into the latest development of the architecture that’s bold enough to imply ‘infinity’ within its name, one should ascertain what exactly justifies the ‘infinite’ nature of InfiniBand. As with most architectures, the devices in InfiniBand communicate by means of messages. That communication is transmitted in full duplex via an InfiniBand switch which forwards the data packets to the receiver. Like Fibre Channel, InfiniBand uses 8b/10b encoding, and it can aggregate four or twelve links to produce a high transmission rate in both directions. Host Channel Adapters (HCAs) and Target Channel Adapters (TCAs) form the end points: the HCAs act as the bridge between the InfiniBand network and the system bus, while the TCAs make the connection between InfiniBand networks and the peripheral devices that are connected via SCSI, Fibre Channel or Ethernet. In other words, for SAN and NAS folk, HCAs are the equivalent of PCI bridge chips while TCAs are in the same vein as HBAs or NICs.
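A quick back-of-the-envelope sketch (my own arithmetic, written in Python purely for convenience) shows how the per-lane signalling rates, the 8b/10b overhead and the 4x/12x link aggregation combine into usable data rates per direction:

```python
# InfiniBand lane signalling rates in Gbit/s: SDR 2.5, DDR 5, QDR 10.
# 8b/10b encoding means only 8 of every 10 bits carry payload.
SIGNALLING_GBPS = {"SDR": 2.5, "DDR": 5.0, "QDR": 10.0}

def data_rate_gbps(speed, lanes):
    return SIGNALLING_GBPS[speed] * lanes * 8 / 10   # usable rate per direction

for speed in ("SDR", "DDR", "QDR"):
    for lanes in (1, 4, 12):
        print("%s %2dx: %5.1f Gbit/s usable per direction"
              % (speed, lanes, data_rate_gbps(speed, lanes)))
```

A 4x QDR link, for instance, signals at 40 Gbit/s but carries 32 Gbit/s of payload in each direction.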
Additionally, HCAs can be used not just for interprocessor networks and attaching I/O subsystems, but also for multi-protocol switches such as Gbit Ethernet switches. Herein lies the promise of a sound future for InfiniBand, due to its independence from any particular technology. Indeed the standard is not limited to the interprocessor network segment, covering error handling, routing, prioritisation and the ability to break messages up into packets and reassemble them. A message can be a read or write operation, a channel send or receive message, a multicast transmission or even a reversible transaction-based operation. With RDMA between the HCA and TCA, rapid transfer rates are also easily achieved, as the HCA and TCA each grant the other permission to read or write to its memory. Once that permission is granted, the write or read location is instantly provided, enabling the superior performance boost. With such processes, control of information and its route occurring at the bus level, it’s not surprising that the InfiniBand Trade Association view the bus itself as a switch. Add to the equation that InfiniBand uses 128-bit addressing modelled on Internet Protocol Version 6, and you’re faced with an almost ‘infinite’ amount of device expansion as well as potential throughput.
So fast forward to the end of January 2010 and I finally read headlines such as ‘Voltaire’s Grid Director 4036E delivering 2.72 terabits per second’. At last the promise of InfiniBand is beginning to be fulfilled, as a product featuring 34 40 Gb/s InfiniBand ports, i.e. 1.36 terabits per second per direction or a collective 2.72 terabits per second in full duplex, proved this was no longer Star Trek talk. With an integrated Ethernet gateway which bridges traffic between Ethernet-based networks via an InfiniBand switch, the Voltaire 4036E is one of many new developments we will soon witness utilising InfiniBand to provide unsurpassed performance. With the performance requirements of ERP applications, virtualization and ever-growing data warehouses always increasing, converging Fibre Channel and Ethernet with InfiniBand networks into a unified fabric now seems the obvious step forward in terms of scalability. Couple that with the cost savings on switches, network interface cards, power/cooling, cables and cabinet space, and you have a converged network which incorporates an already existing Ethernet infrastructure.
InfiniBand suppliers such as Mellanox and Voltaire may have their work cut out for them when it comes to marketing their technology in the midst of an emerging 10GigE evolution, but by embracing it they may just ensure that InfiniBand does indeed last the distance of ‘infinity and beyond’.