The Evolution & Emergence of the Hybrid Cloud

How have most organizations been building hybrid clouds to date?

Despite all the marketing and promotion surrounding the benefits of dynamically bursting into a hybrid cloud from the outset, this rarely seems to be the case. If anything, the current trend towards building hybrid clouds still stems from organic growth and demand emanating from an existing public or private cloud deployment. Private clouds are certainly the most common origin of hybrid clouds, as organisations look to add further agility to the many benefits they've already attained.

A lot of organizations have also recently been drawn towards the many new vendor hybrid cloud offerings that have hit the market. Here the attraction is the promise of a seamless management experience across their already deployed private cloud and a newly considered public cloud, and vice versa.


What have been the limitations of these methods?

Quite simply, it's people and processes. Agility is the key driver towards a hybrid cloud model, as applications can dynamically move across either the public or private cloud platform depending on their requirements and criticality. To achieve this successfully there need to be drastic changes to traditional IT operational processes, where a siloed IT culture focuses on in-house technical expertise. Where hybrid clouds have emerged from initial private cloud deployments, the existing, stagnant siloed processes aren't suitable for the fully automated, self-service, service-oriented approach of hybrid clouds.

Very few organizations think of a hybrid cloud from the outset, where the idea of also outsourcing their IT operations to the public cloud is a genuine consideration. Instead it's been a staged progression that has coincided with a staged change of mindset. Private clouds built on converged infrastructure platforms have made headway in this regard, as many of the traditional silos are broken down in favor of more dynamic and centralized teams, but there's still a long way to go for most organisations.

As for the new hybrid cloud offerings that are being touted by a lot of the mainstream vendors, they still lack the maturity to help organizations transform their operations. While a technology offering may provide a technical solution, the people and processes challenge still doesn't get easily solved.
Platforms need to enable applications to seamlessly move to and from a public cloud-based service.

Are organizations right to be scared of the public cloud when it comes to business critical data?

The reality is that most large enterprises, such as financial and pharmaceutical companies, will always be hesitant to move their critical data and IT operations into a public cloud. If such organizations are using the public cloud it's typically for test and development environments or backup and archiving. In such instances the hybrid cloud is adopted to maintain control over the large internal infrastructures that host their critical data while concurrently optimizing the performance of those applications.

Whether such organizations are right to be scared of the public cloud mainly comes down to the ambiguity around liability for mission-critical application SLAs, their security, their performance and their requirement to be available 24/7. For this to change, consumers need to place different demands on the public cloud provider that remove such ambiguity, where for example SLAs are not just focused on uptime and availability but also on performance metrics and response times. For organizations to feel comfortable relinquishing control of their mission-critical data, the public cloud needs to provide an improvement on what the consumer can deliver internally in terms of performance, security, availability and SLAs.

CIOs are blinded by conflicting information about cloud. How can they decide what data to put where?

Any such decision or initiative requires some form of classification and understanding of the value of the organization's data. Additionally, there needs to be an assessment of the criticality of that data in terms of data loss and, consequently, the risk it introduces to the business. For example, archived data may not need to sit on high-performance internal infrastructure, yet its criticality could be measured by the fact that any security breach of that archived data could mean the end of the business. Any consideration that involves data being migrated to an external provider (public cloud storage) requires a thorough understanding of the potential impact and revenue loss should that data be compromised.

Moreover, the service provider should not just be considered for its technical, security and service merits but also for its stability as a business. The last thing an organization needs is to have its data migrated offsite to a cloud provider that eventually goes bankrupt or is taken over by another company. As long as the groundwork is done in terms of researching the data as well as the stability of the potential provider, the public cloud is a more than viable option for a large number of workloads such as archives, backup copies and test environments.

The rise of hybrid cloud architectures has led to the creation of Cloud Service Brokers - is this a necessary role going forward?

While it's still an emerging role, for organizations that are considering moving to the cloud and finding that it's not a simple process dealing with multiple relationships, contracts, vendors and providers, the Cloud Service Broker is a necessity. Having a dedicated resource, whether internal or external to the organization, that can work closely with a multitude of cloud providers to negotiate and attain the best price, offerings and services on your behalf is essential to getting the most benefit from your cloud initiative.

Additionally, brokers are key to saving time by relieving organizations of the burden of researching services from different vendors, assessing how those services fit with the organization's work processes, budgets and data values, and running financial background checks on potential providers.

The role and benefit of a Cloud Service Broker is not just key to the pre-deployment process but also to the post-deployment phase. Having negotiated the best deals, services and offerings on your behalf based on their existing relationships, the Cloud Service Broker can also be the first port of call if and when any issues or problems occur. If there are problems with the service provider, such as breaches of contract or missed SLAs, the Cloud Service Broker plays an integral role in resolving any disputes while shielding your organization from having to deal with the issues directly. As the hybrid cloud market matures and grows, the role of the Cloud Service Broker will certainly become more prevalent.

How important is it to easily move data between private and public clouds?

The challenge of creating a hybrid cloud is far greater than that of a dedicated private or public deployment. The main challenge is that the processes required to scale and shift data across the hybrid cloud can't be successfully achieved with the traditional methods used to migrate data to and from public and private clouds. The ability to seamlessly move data across the hybrid cloud based on application requirements and demands, as well as data classification, is key to the hybrid model being adopted by the mainstream.

The hybrid model is being considered by many because it offers the opportunity to improve efficiencies, geographical coverage and economies of scale. To truly achieve this, workloads need to move seamlessly between private and public clouds based on their requirements, with a standardized, centralized portal and common management tools presenting the hybrid cloud as a single, ubiquitous pool of resources. In this instance the need for simplification, automation and ease of data movement is self-evident.
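To make the placement idea above a little more concrete, here is a minimal sketch of a policy-driven placement decision. The classification labels, the burst flag and the IOPS threshold are assumptions made for illustration, not part of any particular vendor's portal or product.

```python
# Hypothetical sketch: policy-driven placement of a workload across a hybrid cloud.
# The classification labels, thresholds and target names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    data_classification: str   # e.g. "public", "internal", "confidential"
    burst_capable: bool        # can the app tolerate being moved?
    required_iops: int

def place(workload: Workload, private_free_iops: int) -> str:
    """Return the target cloud for a workload based on data value and capacity."""
    # Confidential data stays on the private side regardless of capacity.
    if workload.data_classification == "confidential":
        return "private"
    # Burst to public only when the private pool cannot satisfy the demand.
    if workload.burst_capable and workload.required_iops > private_free_iops:
        return "public"
    return "private"

if __name__ == "__main__":
    app = Workload("test-env-42", "internal", burst_capable=True, required_iops=8000)
    print(place(app, private_free_iops=5000))   # -> "public"
```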

How is the movement towards SDDCs impacting the hybrid cloud trend?

While adoption of the SDDC is still in its infancy, its ability to provide a standardized framework for data to be moved to and from public clouds based on application requirements, regardless of geographical location, is essential. The SDDC gives organizations the opportunity to reap the agility of cloud computing while maintaining legacy applications that aren't suitable for the public cloud for whatever reason. For example, in the hybrid model new services could benefit from development and coding done directly in the public cloud, while the core services that run the business remain onsite, still benefiting from bug fixes and code releases in a seamless manner.

With the SDDC providing the intelligence and consequent automation of workloads, the hybrid cloud model can quickly be emancipated from the shackles of traditional operational processes and silos, making its benefits and subsequent adoption considerably greater. Furthermore it will lead the way for hybrid clouds to be considered as an immediate initiative as opposed to the evolutionary process we are currently seeing in the market.

What are the most common hybrid architectures that we will see over the next year and why?

We will certainly see the adoption of hybrid architectures and models grow, much like we saw the adoption of private clouds. Over the next year the model that we'll more than likely see is the continuation of legacy application infrastructures that preserve organizations' large investments and are then coupled with hybrid automation solutions that enable them to leverage on demand cloud resources. The security concerns of public clouds as well as the need to ensure application performance and optimization will be the key drivers for this.

Indeed we may see a further shift of some workloads away from the public to the hybrid model as organizations start to reassess the financial benefits of public cloud deployments against the economies of scale they could achieve with a Private Cloud deployment. As organizations become more familiar with their ongoing cloud costs, there'll be more of a demand for platforms that enable them to seamlessly move their applications to and from a public cloud-based service. Adopters of converged infrastructure are certainly moving down this path, as they've recognized the agility, speed and consequent capex and opex savings they've achieved with their Private Clouds.

Indeed, the anticipated and most likely approach for organizations next year will not be a decision of whether to utilize a large existing infrastructure investment or a scalable on-demand public cloud service, but rather the most effective strategy to leverage both.

Taken from the June 2014 Archie Hendryx interview with Information Age magazine.

Ten Tips for Maximising the Benefits of a Converged Infrastructure Implementation

Archie Hendryx looks at the benefits and challenges of managing multiple IT components through a single support solution.
Many public sector organisations are not changing their ICT legacy systems. As a result, they face an increasing number of inefficiencies and challenges. These issues can nevertheless be overcome, and performance increased, by managing these legacy systems within a converged infrastructure environment. This approach can also reduce the risks associated with legacy systems, such as security vulnerabilities, missing functionality, increased complexity and operational expenditure.

Converged infrastructure simplifies support as it packages multiple IT components from different vendors into one single, optimised computing solution. It therefore offers end-to-end support, unlike reference architectures, which require organisations to deal with a multitude of disparate vendors to gain support. Converged infrastructure removes the need to manage multiple upgrade cycles, and offers pre-tested interoperability and release management resources. It enables organisations to centralise the management of IT resources.

A converged infrastructure comes as a pre-configured and pre-tested configuration, while a reference architecture simply follows a set of guidelines. Converged infrastructures tend to have defined scalability; reference architectures are completely flexible but have no defined performance boundaries, making them less predictable than a converged infrastructure.
The components of a converged infrastructure include servers, data storage devices, networking equipment, software for IT infrastructure management, automation and orchestration. A converged infrastructure can be used as a platform for private, hybrid and public cloud computing services: e.g. infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS) and software-as-a-service (SaaS).
The journey towards operating a converged infrastructure should begin by establishing a business case: both technical and commercial goals and objectives, such as the ability to deploy new products and services to market much faster than a traditional IT model all too often permits. Reliability, data security and a reduction in infrastructure complexity will no doubt be at the core of your proposition. There may be a number of other factors in play, and many of these could point you towards developing a converged cloud infrastructure because it delivers efficiency and reduces complexity without compromising security.
10 Tips you should consider to maximise the benefits of a Converged Infrastructure investment
Tips to improve performance
1. Restructure your complex and disparate IT technology-focused silos into a centralised, streamlined and service-oriented converged infrastructure team that will be able to do a lot more a lot quicker. Traditional designs are based on in-house and siloed expertise that inevitably introduces risk, as success depends on this expertise being shared and correlated between teams prior to building the infrastructure.

Another key factor is the support model, which should be single and seamless. Instead of a dial-in number that ultimately sends you to various departments for the different infrastructure components, the support model should be set up to manage and maintain a single product, namely the converged infrastructure. This will considerably reduce the mean time to resolution for the public sector if and when any issues arise.

2. Prepare for simplified operations. Converged infrastructure offers the opportunity to simplify and centrally monitor your workloads and capacity management as well as application performance. Speed of deployment and agility are key factors with converged infrastructure, where new project rollouts can be reduced from months to days. To enhance this, a capacity-on-demand buffer model should be considered. This enables a public sector organisation to have additional inventory installed in its converged infrastructure for unplanned growth and projects, with payment for that inventory only made once it's consumed. This consequently reduces project delivery times drastically.

3. Look to fully virtualise your environment. Converged infrastructures enable you to immediately consolidate and virtualise your workloads. This provides the flexibility to seamlessly move resources from one virtual machine to another and also to avoid vendor lock-in should you decide to move to another platform at a later date. Legacy systems can often cause headaches, but with a converged cloud infrastructure an opportunity arises to gain a highly virtualised, optimised, secure, highly available and stable platform that will improve performance while enhancing existing services.

4. Plan for data protection. Accidental data loss is perceived as the biggest security threat to public sector organisations so a backup and recovery solution that comes pretested and pre-integrated with your converged infrastructure should be a serious consideration.

5. Prepare to streamline your security policies. Converged infrastructures offer the ability to enhance security controls and increase compliance while still offering the benefits associated with cloud. For example, instead of complex and costly deployments of physical security infrastructure, virtualised appliances and enterprise security software suites offer the same features with the added agility of being virtualised.

6. Implement a disaster recovery solution. Converged infrastructures offer the ability to quickly standardise the platform specifications between two sites as well as to shift away from complex, convoluted and manual failover methods. With pre-tested and pre-integrated disaster recovery and avoidance solutions, failover and failback processes can become automated tasks that mitigate risk and reduce the OPEX costs introduced when protecting against a disaster.

7. Initiate a private cloud strategy. A converged infrastructure removes a lot of the infrastructure challenges associated with deploying a private cloud and offers a standardised and optimised platform that can allow you to focus on designing service catalogues, blueprints, chargeback, billing, approval processes etc. By implementing an integrated self-service provisioning model, the converged infrastructure enables a Private Cloud to be quickly deployed allowing workloads and resources to be provisioned and allocated in minutes.

8. Include a strategy to accommodate a software defined data centre (SDDC) approach. Converged infrastructure offerings are already incorporating solutions that enable an SDDC, which consequently helps free up even more administration time and resources. The SDDC extends the benefits of virtualisation by incorporating orchestration, automation and management of the core components, enabling IT administrators to become proactive and subsequently innovative.

Additionally, the load sharing and balancing of resources within converged infrastructure and SDDC models allows for higher efficiency and utilisation as well as streamlined infrastructure scalability, enabling the business to meet future and unplanned demands in a seamless manner.

The higher availability and testable disaster recovery solutions also allow the business to safeguard against the unnecessary costs and fines associated with unplanned downtime. Ultimately the streamlining of manual and complex processes into functions that are automated and orchestrated allows the business to focus on new initiatives and application delivery and consequently forget about the infrastructure that supports them.
 
9. Establish new internal architectural and compliance standards that cater for converged infrastructure. Instead of having to design, document and adhere to standards for every single component within a traditional infrastructure, converged infrastructures are a comprehensive product. This simplifies internal processes considerably as the standardisation, design, integration, security, availability, support and maintenance of converged infrastructures are already catered for by the vendor.

A converged infrastructure should incorporate a release certification matrix that validates all of the patches, maintenance and upgrades of all of the product’s components throughout its lifecycle. To ensure this compliance an API that can scan, validate, present and monitor the infrastructure as a whole is a necessity. Unless a converged infrastructure includes this it ultimately remains a well-marketed collection of components that will offer little if any advantage over a traditional infrastructure.

Furthermore the converged infrastructure offering should include quality assurance that fully tests and validates all software updates for security and availability vulnerabilities consequently reducing ongoing risks. This will allow public sector organisations to extend their capabilities by integrating additional security and compliance controls to meet their objectives. The key here is that security and high availability is built in at the outset rather than added as an afterthought, ensuring the confidentiality, integrity and availability of the business infrastructure.

10. Simplify process management. Change management, incident management, asset management etc. are all processes that can be significantly simplified with a converged infrastructure offering and in most cases automated. For example asset management becomes simplified in that the configuration management database input now replaces thousands of complex components with a single entity that has all of its components supported by a single vendor as a single product. Additionally monitoring, alerting, logging, maintenance, patching and upgrading procedures can all be automated with converged infrastructure, mitigating the risk associated with manual tasks.

With traditional deployments you need to keep patches and firmware up to date across multiple vendors, components and devices, which requires internal IT to assess the criticality of each patch and its relevance to each platform, as well as validate firmware compatibility with other components. This requires costly mirrored production test labs, as well as rollback mechanisms in case there are any issues.

Coupled with these risks, the multiple vendors needed to support a traditional infrastructure lead to prolonged resolution times when issues occur. Logging a support call for a traditional infrastructure first requires identifying who is responsible: this might be the storage vendor, the networking company, the hypervisor vendor or the server manufacturer.
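To ground the release certification matrix described in tip 9, here is a minimal sketch of how installed component versions might be compared against a certified baseline. The component names, version strings and the matrix itself are invented for illustration; a real converged infrastructure product would expose this through its own validation API rather than a hand-built dictionary.

```python
# Illustrative only: comparing installed firmware/software versions against a
# release certification matrix. The component names and versions are made up.

CERTIFICATION_MATRIX = {
    "compute_firmware": "4.2(1)",
    "storage_microcode": "7.1.55",
    "switch_firmware": "6.0(2)",
    "hypervisor": "5.5 U1",
}

def validate(installed: dict) -> list:
    """Return the components that drift from the certified release."""
    drift = []
    for component, certified in CERTIFICATION_MATRIX.items():
        actual = installed.get(component, "missing")
        if actual != certified:
            drift.append((component, actual, certified))
    return drift

if __name__ == "__main__":
    installed_versions = {
        "compute_firmware": "4.2(1)",
        "storage_microcode": "7.1.50",   # out of compliance
        "switch_firmware": "6.0(2)",
        "hypervisor": "5.5 U1",
    }
    for component, actual, certified in validate(installed_versions):
        print(f"{component}: installed {actual}, certified {certified}")
```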

Archie Hendryx is Principal vArchitect at converged infrastructure supplier VCE
Original article taken from Public Technology Magazine: 

Answering a CIO's concerns around SDDC

Question 1. Vendors are racing to lead the movement towards a software-defined data centre. Where are we up to in this journey, and how far are we from seeing this trend widely adopted?

Considering most organisations have still not fully virtualised or moved towards a true private cloud model, the SDDC is still in its infancy in terms of mainstream adoption and certainly won't be an overnight process. While typical early adopters are advancing quickly down the software-defined route, these are mostly organisations with large-scale, multi-site data centres that are already mature in terms of their IT processes. Such large-scale organisations are not the norm, and while the SDDC is certainly on the minds of senior IT executives, establishing such a model requires overcoming several key challenges and tasks.

Typical environments are still characterised by numerous silos, complex and static configurations, and partially virtualised initiatives. Isolated component and operational silos need to be replaced with expertise that covers the whole infrastructure, so that organisations can focus on defining their business policies. In this instance the converged infrastructure model is ideal, as it enables the infrastructure to be managed, maintained and optimised as a single entity by a single team. Subsequently, such environments also need to dramatically rearrange their IT processes to accommodate features such as orchestration, automation, metering and billing, as these all have a knock-on effect on service delivery, activation and assurance, as well as change management and release management procedures. The SDDC necessitates a cultural shift in IT as much as a technical one, and cultural change historically takes longer. It could still be several years before we really see the SDDC adopted widely, but it's definitely being discussed and planned for the future.
You can't have a successful software-defined model with a hardware-defined mentality


Question 2. Looking at all the components of a data centre, which one poses the most challenges to being virtualized and software-defined?

The majority of data centre components have seen considerable technological advancement in the past few years. Yet in comparison to networking, compute and the hypervisor, storage arrays still haven't seen many drastic changes beyond features such as auto-tiering, thin provisioning, deduplication and the introduction of EFDs. Moreover, the focus of software-defined is applications and dynamically meeting the changing requirements of an application and service offering. Beyond quality-of-service monitoring based on IOPS and back-end/front-end processor utilisation, there are still considerable limitations with storage arrays in terms of application awareness.

Additionally, with automation being integral to a software-defined strategy that can dynamically shift resources based on application requirements, automation technologies within storage arrays are still very limited. While storage features such as dynamic tiering may be automated, they are not based on real-time metrics and are consequently not responsive to real-time requirements.
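As a rough illustration of the gap described above, the sketch below shows what a tiering decision driven by real-time metrics could look like in principle. The tier names, thresholds and the metric feed are assumptions for the sake of the example; today's arrays typically make this decision on scheduled, historical samples rather than live telemetry.

```python
# Hypothetical sketch of a real-time, metric-driven tiering decision.
# Tier names, thresholds and the metric feed are illustrative assumptions.

import random
import time

def sample_iops(volume: str) -> int:
    """Stand-in for a live telemetry feed; a real system would query the array."""
    return random.randint(0, 20000)

def target_tier(iops: int) -> str:
    # Tiers ordered fastest to slowest: EFD, SAS, NL-SAS.
    if iops > 10000:
        return "EFD"
    if iops > 2000:
        return "SAS"
    return "NL-SAS"

def rebalance(volume: str, current_tier: str) -> str:
    """Move the volume whenever the observed workload no longer fits its tier."""
    iops = sample_iops(volume)
    desired = target_tier(iops)
    if desired != current_tier:
        print(f"{volume}: {iops} IOPS observed, moving {current_tier} -> {desired}")
        return desired
    return current_tier

if __name__ == "__main__":
    tier = "SAS"
    for _ in range(3):
        tier = rebalance("vol01", tier)
        time.sleep(1)
```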

Added to this, storage itself has moved beyond the array and now comes in numerous forms such as HDD, flash, PCM and NVRAM, each with its own characteristics, benefits and challenges. As yet the challenge is still to have a software layer that can abstract all of these various formats into a single resource pool. The objective should be that regardless of where these formats reside, whether within the server, the array cache or the back end of the array, data can still be dynamically shifted across platforms to meet application needs as well as provide resiliency and high availability.


Question 3. Why has there been confusion about how software-defined should be interpreted, and how has this affected the market?

Much as when the cloud concept first emerged in the industry, the understanding of the software-defined model quickly became blurred as the marketing departments of traditional infrastructure vendors jumped on the bandwagon. While they were quick to attach the software-defined terminology to their offerings, there was little if anything different about their products or product strategy. This led to various misconceptions: that software-defined was just another term for cloud, that if it was virtualised it was software-defined, or, even more ludicrously, that software-defined meant the non-existence or removal of hardware.

To elaborate, all hardware components need software of some kind to function, but this does not make them software-defined. For example, storage arrays use various software technologies such as replication, snapshotting, auto-tiering and dynamic provisioning. Some storage vendors even have the capability of virtualising third-party arrays behind their own or via appliances, consequently abstracting the storage completely from the hardware so that an end user is merely looking at a resource pool. But this in itself does not make the array software-defined, and herein lies the confusion some end users face as they struggle to understand the latest trend being directed at them by their C-level execs.

Question 4. The idea of a software-defined data centre (virtualizing and automating the entire infrastructure) wildly disrupts the make-up of a traditional IT team. How can CIOs handle the inevitable resistance from some of their IT employees?

First and foremost, you can't have a successful software-defined model if your team still has a hardware-defined mentality. Change is inevitable, and whether it's embraced or not it will happen. For experienced CIOs this is not the first time they've seen technological, and consequently cultural, change in IT. There was resistance to change from the mainframe team when open systems took off, there was no such thing as a virtualisation team when VMware was first introduced, and only now are we seeing converged infrastructure teams being established despite the CI market having been around for more than three years. For traditional IT teams to accept this change they need to recognise how it will inevitably benefit them.

Market research is unanimous in its conclusion that IT administrators are currently far too busy with maintenance tasks that amount to firefighting and "keeping the lights on" exercises. Figures generally point to around 77% of IT administrators' overall time being spent on mundane maintenance and routine tasks, with very little time spent on innovation, optimisation and delivering value to the business. For these teams the software-defined model offers the opportunity to move away from such tasks and free up their time, enabling them to be proactive as opposed to reactive. With the benefits of orchestration and automation, IT admins can focus on the things they are trained and specialised in, such as delivering performance optimisation, understanding application requirements and aligning their services and work to business value.


Question 5. To what extent does a software-defined model negate the need to deploy the public cloud? What effect will this have on the market?

The software-defined model shouldn't, and most likely won't, negate the public cloud; if anything it will make its use case even clearer. The SDDC is a natural evolution of cloud, and particularly the private cloud. The private cloud is all about the consumption and delivery of IT services, whether layered upon converged infrastructure or self-assembled infrastructures. Those that have already deployed a private cloud and are also utilising the public cloud have done so with an understanding and assessment of their data: its security and, most typically, its criticality. The software-defined model introduces a greater level of intelligence via software, where application awareness and requirements linked to business service levels are met automatically and dynamically. Here the demand is dictated by the workload and the software is the enabler that provisions the adequate resources for that requirement.

Consequently, organisations will have a greater level of flexibility and agility than with previous private cloud and even public cloud deployments, providing more clarity in the differentiation between the private and public cloud. Instead of needing to request permission from a cloud provider, the software-defined model will provide organisations with on-demand access to their data as well as the ability to independently dictate the level of security. While this may not completely negate the requirement for a public cloud, it will certainly diminish the immediate benefits and advantages associated with it.


Question 6. For CIOs looking for pure bottom-line incentives they can take to senior management, what is the true value of a software-defined infrastructure?

The true value of a software-defined model is that it empowers IT to be a true business enabler. Most business executives still see IT as an expensive overhead as opposed to a business enabler. This is typically because of IT's inability to respond quickly enough to the ever-changing service requirements, market trends and new project roll-outs that the business demands. Much of this is caused by the deeply entrenched organisational silos that exist within IT, where typical infrastructure deployments can take months. While converged infrastructure solutions have gone some way to solving this challenge, the software-defined model builds on this by providing further speed and agility, to the extent that organisations can encapsulate their business requirements into business delivery processes. In this instance infrastructure management processes become inherently linked to business rules that incorporate compliance, performance metrics and business policies. In turn, via automation and orchestration, these business rules dynamically drive and provision the infrastructure resources of storage, networking and compute in real time to the necessary workloads as the business demands.
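A minimal sketch of that idea, business rules driving resource provisioning, might look like the following. The service tiers, resource figures and the provisioning stub are assumptions; in practice the mapping would be executed by the orchestration layer against real compute, storage and network APIs.

```python
# Illustrative mapping of business service tiers to infrastructure resources.
# The tier definitions and the provisioning call are hypothetical.

BUSINESS_POLICIES = {
    "mission_critical": {"vcpus": 16, "memory_gb": 128, "min_iops": 20000, "replicated": True},
    "business_standard": {"vcpus": 8, "memory_gb": 64, "min_iops": 5000, "replicated": True},
    "test_dev": {"vcpus": 4, "memory_gb": 16, "min_iops": 1000, "replicated": False},
}

def provision(workload: str, tier: str) -> dict:
    """Resolve a business tier into a concrete resource request."""
    policy = BUSINESS_POLICIES[tier]
    request = {"workload": workload, **policy}
    # A real orchestrator would now call the compute, storage and network APIs.
    print(f"Provisioning {workload} as '{tier}': {policy}")
    return request

if __name__ == "__main__":
    provision("payments-api", "mission_critical")
    provision("nightly-reporting", "test_dev")
```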


Question 7. To what extent will a software-defined infrastructure change the way end-users should approach security in the data centre?

A software-defined model will change the way data centre security is approached in several ways. Traditional physical data centre security architecture is renowned for being inflexible and complex due to its reliance on numerous segmented, dedicated appliances to provide requirements such as load balancing, gateways, firewalls and wire sniffers. Within a software-defined model, security can potentially be delivered not only as a flexible and agile service but also as a feature that's built into the architecture. Whether the approach is security embedded within the servers, the storage or the network, a software-defined approach has to take advantage of being able to dynamically distribute security policies and resources that are logically managed and scaled via a single pane.

From a security perspective an SDDC provides immediate benefits. Imagine how much simpler things become when automation can be utilised to restructure infrastructure components that have become vulnerable to security threats. Even the automated isolation of malware-infected network endpoints will drastically simplify typical security procedures, but will consequently need to be planned for differently.

Part of that planning is acknowledging not just the benefits but also the new types of risk these capabilities inevitably introduce. For example, abstracting the security control plane from the security processing and forwarding planes means that any potential configuration errors or security issues can have far more complex consequences than in the traditional data centre. Furthermore, centralising the architecture ultimately means a greater security threat should that central control be compromised. These are some of the security challenges that organisations will face, and there are already movements in the software-defined security space to cater for this.
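As a rough illustration of the endpoint-isolation scenario mentioned above, the sketch below automates the decision to quarantine an endpoint once an alert crosses a severity threshold. The alert format, the threshold and the apply_quarantine_policy stub are assumptions; in a real SDDC the policy change would be pushed through the SDN controller's own API.

```python
# Hypothetical sketch: automated quarantine of a suspect endpoint.
# The alert structure, threshold and policy call are illustrative assumptions.

QUARANTINE_THRESHOLD = 7   # alerts scored 0-10 by an assumed detection system

def apply_quarantine_policy(endpoint_ip: str) -> None:
    """Stand-in for an SDN controller call that isolates the endpoint."""
    print(f"Isolating {endpoint_ip}: moved to quarantine segment, east-west traffic blocked")

def handle_alert(alert: dict) -> bool:
    """Quarantine the endpoint automatically when severity crosses the threshold."""
    if alert["severity"] >= QUARANTINE_THRESHOLD:
        apply_quarantine_policy(alert["endpoint_ip"])
        return True
    print(f"Alert on {alert['endpoint_ip']} logged for manual review (severity {alert['severity']})")
    return False

if __name__ == "__main__":
    handle_alert({"endpoint_ip": "10.20.30.41", "severity": 9, "signature": "malware-beacon"})
    handle_alert({"endpoint_ip": "10.20.30.77", "severity": 3, "signature": "port-scan"})
```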

Question 8. Where do you see the software-defined market going over the next couple of years?

The concept of the SDDC is going to gain even more visibility and acceptance within the industry, and the technological advances that have already come about with Software-Defined Networking will certainly galvanise this. Vendors that have adopted the software-defined tagline will have to mature their product offerings and roadmaps to fit such a model, as growing industry awareness will empower organisations to distinguish between genuine features and marketing hyperbole.


For organisations that have already heavily virtualised and built private clouds, the SDDC is the natural next progression. For those that have adopted the converged infrastructure model this transition will be even easier, as they will have already put the necessary IT processes and models in place to simplify their infrastructure into a fully automated, centrally managed and optimised baseline from which the SDDC can emanate. It is fair to say it won't be a surprise to see many of the organisations that embraced the converged infrastructure model also become the pioneers of a successful SDDC.


The above interview with Archie Hendryx is taken from the May 2014 issue of Information Age:





Voting for the Top Virtualisation Blog of the Year - 2014

Incredibly it's that time of year again when voting commences for the Top Virtualization blog of the year and I've just been pinged a note that The SANMAN blog has been nominated again for voting! Last year's nomination was also a nice surprise as the blog ended up being a New Entry in the charts at 172 - sure it's not a One Direction hit single that went straight to number one but 172 is still a chart number Kajagoogoo would have been proud of (-;


Whether you decide to vote for The SANMAN or not it's still worth having a look at the other nominees and casting your vote for them as there are some great resources across the tech spectrum, including some faves of mine such as the Wikibon blog, TechHead and StorageIO.

So while I don't hold any hopes of breaking the Top 100, I did want to take the opportunity to thank all of the readers that visit this site and find it worthwhile. It's been another really busy year at VCE and trying to find time to write meaningful, insightful and useful articles can be a struggle. Despite this there's no greater motivation than knowing that people from across the world take time out to read my posts.

Thanks for your support and happy voting!

http://www.surveygizmo.com/s3/1553027/Top-VMware-virtualization-blogs-2014



Interview with CloudTech - Why virtualisation isn't enough in cloud computing

I was recently interviewed for an article with CloudTech around the topic of whether virtualisation in itself is enough for a successful cloud computing deployment. Below is an excerpt of the article. For the full article, which also includes viewpoints from other analysts, please follow the link:
While it is generally recognised that virtualisation is an important step in the move to cloud computing, as it enables efficient use of the underlying hardware and allows for true scalability, for virtualisation to be truly valuable it needs to understand the workloads that run on it and offer clear visibility of both the virtual and physical worlds.

On its own, virtualisation does not lend itself to creating sufficient visibility about the multiple applications and services running at any one time. For this reason a primitive automation system could cause a number of errors to occur, such as the spinning up of another virtual machine to offset the load on enterprise applications that are presumed to be overloaded.
Well that’s the argument that was presented by Karthikeyan Subramaniam in his Infoworld article last year, and his viewpoint is supported by experts at converged cloud vendor VCE.
“I agree absolutely because server virtualisation has created an unprecedented shift and transformation in the way datacentres are provisioned and managed”, affirms Archie Hendryx – VCE’s Principal vArchitect. He adds that, "server virtualisation has brought with it a new layer of abstraction and consequently a new challenge to monitor and optimise applications."
Hendryx has also experienced first hand how customers address this challenge "as a converged architecture enables customers to quickly embark on a virtualisation journey that mitigates risks and ensures that they increase their P to V ratio compared to standard deployments.”
In his view there's a need to develop new ways of monitoring that provide end users with more visibility into the complexities of their applications, their interdependencies and how they correlate with the virtualised infrastructure. “Our customers are now looking at how they can bring an end-to-end monitoring solution to their virtualised infrastructure and applications to their environments”, he says. In his experience this is because customers want their applications to have the same benefits of orchestration, automation, resource distribution and reclamation that they obtained with their hypervisor.
Virtual and physical correlations
Hendryx adds: “By having a hypervisor you would have several operating system (OS) instances and applications. So for visibility you would need to correlate what is occurring on the virtual machine and the underlying physical server, with what is happening with the numerous applications.” He therefore believes that the challenge is to try to understand the behaviour of an underlying hypervisor that has several applications running simultaneously on it. For example, if a memory issue were to arise relating to an operating system of a virtual machine, it would be possible to find that the application either has no memory left, or it might be constrained, yet the hypervisor might still present metrics that there is sufficient memory available.
Hendryx says these situations are quite common: “This is because the memory metrics – from a hypervisor perspective – are not reflective of the application as the hypervisor has no visibility into how its virtual machines are using their allocated memory.” The problem is that the hypervisor has no knowledge of whether the memory it allocated to a virtual machine is used for cache, paging or pooled memory. All it actually understands is that it has made provision for memory, and this is why errors can often occur.
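The sketch below illustrates the correlation problem Hendryx describes: the hypervisor's view of allocated memory can look healthy while the in-guest application is starved. The metric names and figures are invented for the example; real deployments would pull these values from the hypervisor's and the application agent's respective monitoring APIs.

```python
# Illustrative correlation of hypervisor-level and in-guest memory metrics.
# Metric names and numbers are invented for the example.

def diagnose(vm_name: str, hypervisor_metrics: dict, guest_metrics: dict) -> str:
    """Flag the case where the hypervisor sees headroom but the guest is starved."""
    host_free_mb = hypervisor_metrics["granted_mb"] - hypervisor_metrics["consumed_mb"]
    guest_free_pct = 100.0 * guest_metrics["available_mb"] / guest_metrics["total_mb"]

    if guest_free_pct < 5 and host_free_mb > 0:
        return (f"{vm_name}: hypervisor reports {host_free_mb} MB headroom, but the guest has "
                f"only {guest_free_pct:.1f}% memory free - investigate the application, "
                "don't trust the hypervisor view alone")
    return f"{vm_name}: hypervisor and guest memory views are consistent"

if __name__ == "__main__":
    print(diagnose(
        "app-db-01",
        hypervisor_metrics={"granted_mb": 16384, "consumed_mb": 12288},
        guest_metrics={"total_mb": 16384, "available_mb": 512},
    ))
```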
Complexities
This lack of inherent visibility and correlation between the hypervisor, the operating system and the applications that run them could cause another virtual machine to spin up. “This disparity occurs because setting up a complex group of applications is far more complicated than setting up a virtual machine”, says Hendryx. There is no point in cloning a virtual machine with an encapsulated virtual machine either; this approach just won’t work, and that’s because it will fail to address what he describes as “the complexity of multi-tiered applications and their dynamically changing workloads.”
It’s therefore a must to have some application monitoring in place that correlates with the metrics that are being constantly monitored by the hypervisor and the application interdependencies.
“The other error that commonly occurs is caused when the process associated with provisioning is flawed and not addressed”, he comments. When this occurs the automation of that process will remain unsound to the extent that further issues may arise. He adds that automation at the virtual machine level will fail to allocate resources adequately to the key applications, and this will have a negative impact on response times and throughput – leading to poor performance.
Possible solutions
According to Hendryx, VCE has ensured customers have visibility within a virtualised and converged cloud environment by deploying VMware’s vCenter Operations Manager to monitor the Vblock’s resource utilisation. He adds that “VMware’s Hyperic and Infrastructure Navigator has provided them with the visibility of virtual machine to application mapping as well as application performance monitoring, to give them the necessary correlation between applications, operating system, virtual machine and server…” It also offers them the visibility that has been so lacking.
Archie Hendryx then concluded with best practices for virtualisation within a converged infrastructure:
1. If it’s successful and repeatable, then it’s worth standardising and automating because automation will enable you to make successful processes repeatable.
2.  Orchestrate it, because even when a converged infrastructure is deployed there will still be changes that need rolling out, such as operating system updates, capacity changes, security events, load-balancing or application completions. These will all need to be placed in a certain order, and you can automate the orchestration process (see the sketch after this list).
3.  Simplify the front end by recognising that virtualisation has transformed your environment into a resource pool that end users should be able to request and provision for themselves and be consequently charged for. This may involve eliminating manual processes in favour of automated workflows, and simplification will enable a business to recognise the benefits of virtualisation.
4.  Manage and monitor: You can’t manage and monitor what you can’t see. For this reason VCE customers have an API that provides visibility and context to all of the individual components within a Vblock. They benefit from integration with VMware’s vCenter and vCenter Operations Manager and VCE’s API called Vision IOS. From these VCE’s customers gain visibility and the ability to immediately discover, identify and validate all of the components and firmware levels within the converged infrastructure as well as monitor its end-to-end health. This helps to eliminate any bottlenecks that might otherwise occur by allowing overly provisioned resources to be reclaimed.
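To illustrate point 2 above, here is a minimal sketch of an orchestration workflow that simply runs changes in a defined order. The step names and the runner are assumptions for illustration, not a vendor workflow engine.

```python
# Hypothetical sketch of running infrastructure changes in a defined order.
# The step names and the simple runner are illustrative assumptions.

from typing import Callable, List, Tuple

def apply_security_policy() -> None:
    print("Rolling out updated security policy")

def apply_os_updates() -> None:
    print("Applying operating system updates")

def expand_capacity() -> None:
    print("Expanding capacity for upcoming workloads")

def rebalance_load() -> None:
    print("Rebalancing workloads across hosts")

# The orchestration layer's job is to execute the right steps in the right order.
WORKFLOW: List[Tuple[str, Callable[[], None]]] = [
    ("security", apply_security_policy),
    ("os-updates", apply_os_updates),
    ("capacity", expand_capacity),
    ("load-balance", rebalance_load),
]

def run_workflow() -> None:
    for name, step in WORKFLOW:
        print(f"--- {name} ---")
        step()  # a real orchestrator would also verify results and roll back on failure

if __name__ == "__main__":
    run_workflow()
```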