
Virtualization: What are the Key Steps?

By Trevor Dearing, Portfolio Marketing Manager, Juniper Networks

For those of us who grew up praying in the temple of the mainframe, the concept of virtualization is nothing new. Maximizing resources through the use of virtual machines on a single platform has always made good sense. For years the economics of personal computing pushed us towards a distributed model: the PC was cheap, and if we could use all of that desktop resource we could avoid central processing. Unfortunately this ideal was ruined by reality: so much information distributed in an uncontrolled manner throughout an organization became a security nightmare. Equally, the cost of managing the applications and licenses across so many desktops was prohibitive. The development of web technology allowed us to return to a more controlled and centralized model.

Unfortunately, most server farms are built from traditional PC technology on a one-application-to-one-machine (or even one-to-many) basis, which is wasteful of space, resources and power. Blade technology provides a good first step to solving this problem, enabling the consolidation of a number of individual servers into a smaller rack space with lower power consumption. This delivers many cost benefits as well as controlling the speed at which we need to extend or renew datacentres. Longer term, however, new virtualization techniques will give us much better utilization and a further reduction in space and power. These can be implemented on individual servers, on blade technology or, more likely, on the new generation of super servers.

However there is much more to virtualization than just consolidation.

Virtualization delivers the capability to deploy, move, or clone an application from one platform to another over a network, even when it is running. Live migration of applications at this speed and scale demands new levels of performance, reliability, and standardization from networks. That’s why thoughtful planning of network architectures is the first step toward virtualization's full value.
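To make the idea concrete, the Python sketch below mimics the iterative "pre-copy" approach commonly used for live migration: memory pages are copied while the workload keeps running, pages it dirties are re-sent, and only a small final set is copied during a brief pause. It is purely illustrative - the page contents, dirty-tracking and names are invented for the example, not any hypervisor's actual code.

```python
# Minimal sketch of iterative "pre-copy" live migration (illustrative only).
# Page contents and the dirty-tracking mechanism are simulated in plain Python;
# contents are kept static so the final check is easy to verify.

import random

def live_migrate(pages, dirty_rate=0.05, threshold=16, max_rounds=10):
    """Copy all pages to the destination while the source keeps running.

    pages       -- dict of page_id -> content on the source host
    dirty_rate  -- fraction of pages the running workload rewrites per round
    threshold   -- stop-and-copy once the dirty set is this small
    """
    destination = {}
    dirty = set(pages)                      # first round: everything is "dirty"

    for round_no in range(max_rounds):
        # Copy the currently dirty pages over the network (simulated).
        for page_id in dirty:
            destination[page_id] = pages[page_id]

        # Meanwhile the guest keeps running and re-dirties some pages.
        dirty = {p for p in pages if random.random() < dirty_rate}
        print(f"round {round_no}: {len(dirty)} pages dirtied while copying")

        if len(dirty) <= threshold:
            break

    # Brief pause: freeze the guest, copy the last few pages, switch over.
    for page_id in dirty:
        destination[page_id] = pages[page_id]
    return destination

if __name__ == "__main__":
    source_memory = {i: f"page-{i}" for i in range(4096)}
    migrated = live_migrate(source_memory)
    assert migrated == source_memory
```

The point of the exercise is that every round of copying is network traffic, and the length of the final pause depends on how quickly the last dirty pages can be moved - which is exactly where bandwidth and latency of the underlying network decide whether migration is seamless or disruptive.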

Fortunately, virtualization's requirements are evolutionary - natural extensions of capabilities that networking solution providers have been improving for years. But teams planning large-scale virtualization initiatives should take a close look at their networks early in the planning process, to ensure that they offer capabilities like these:

Link aggregation and virtual chassis - link aggregation, or trunking, bundles multiple links to deliver more bandwidth and higher availability. Long used as a cost-effective way to build internal Ethernet backbones, link aggregation is an attractive alternative to hardware replacement when a network needs bandwidth to meet new requirements.

Unfortunately, standard IEEE 802.3ad link aggregation won’t work unless the ports reside on the same switch - a restriction that greatly complicates network topology and introduces delay, complexity, and risk. New network virtualization techniques such as the virtual chassis allow link aggregation across ports on two different switches, even at separate locations. The result is more bandwidth where it's needed, freed from the constraints of physical switch locations - an ideal complement to server virtualization.
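As a rough illustration of how a link aggregation group behaves, the toy Python sketch below hashes each flow onto one member link of the bundle, which is why packets within a flow stay in order while the bundle as a whole carries the combined bandwidth. The interface names are invented for the example; with a virtual chassis, the member links could sit on different physical switches.

```python
# Toy illustration of how a link aggregation group (LAG) spreads traffic:
# each flow is hashed onto one member link, so packets within a flow stay
# in order while the bundle as a whole carries the combined bandwidth.

import zlib

MEMBER_LINKS = ["ge-0/0/0", "ge-0/0/1", "ge-1/0/0", "ge-1/0/1"]  # illustrative names

def pick_member(src_mac: str, dst_mac: str, links=MEMBER_LINKS) -> str:
    """Hash the flow identifiers onto one of the bundle's member links."""
    key = f"{src_mac}->{dst_mac}".encode()
    return links[zlib.crc32(key) % len(links)]

if __name__ == "__main__":
    flows = [("00:11:22:33:44:01", "00:aa:bb:cc:dd:0%d" % i) for i in range(1, 6)]
    for src, dst in flows:
        print(src, "->", dst, "carried on", pick_member(src, dst))
```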

Wire-rate high-density core switching - at the data-center core, server virtualization can sharply raise demands on network bandwidth and tighten latency requirements. Wire-rate network performance allows sustained and bursty traffic to be processed without dropped packets, avoiding the TCP retransmissions that increase application latency.
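A back-of-the-envelope calculation shows why even tiny loss rates matter. The sketch below uses the widely cited Mathis approximation for steady-state TCP throughput; the MSS, RTT and loss figures are illustrative assumptions, not measurements.

```python
# Rough estimate of steady-state TCP throughput under packet loss, using the
# widely cited Mathis approximation:
#     throughput ~ (MSS / RTT) * (C / sqrt(p)),   C ~ 1.22 for standard TCP.
# All figures below are purely illustrative.

from math import sqrt

def tcp_throughput_bps(mss_bytes=1460, rtt_s=0.001, loss_rate=1e-5, c=1.22):
    return (mss_bytes * 8 / rtt_s) * (c / sqrt(loss_rate))

if __name__ == "__main__":
    for loss in (1e-6, 1e-5, 1e-4):
        gbps = tcp_throughput_bps(loss_rate=loss) / 1e9
        print(f"loss {loss:.0e}: ~{gbps:,.1f} Gbit/s achievable per flow")
```

Under these assumptions, moving from one dropped packet in a million to one in ten thousand cuts the achievable per-flow throughput by an order of magnitude - which is why lossless, wire-rate forwarding matters at the core.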

Architecture counts most at the core, and dense wire-rate 10GbE ports can help eliminate multiple layers of switching - in all but the largest enterprise networks, they can even remove the aggregation layer entirely. Simplifying the core cuts latency, complexity, and cost, and improves reliability: all key elements of a successful virtualization initiative.

Security without latency - virtualization providers have done an excellent job of addressing user concerns about security - most users now see virtual machines as no less secure than the physical machines on which they run. But live migration of virtual machines and the applications they carry creates new network security tradeoffs. Firewalls that protect sensitive network segments or sub-networks may introduce latencies that can cripple a running application on a virtual machine, even though the same delays would go unnoticed on a physical server. And the risk of such failures creates an incentive to remove protection altogether, with obvious consequences.

Here, there is simply no substitute for performance. Rather than play a dangerous game trying to balance availability against security to defer a hardware purchase, it's time to upgrade critical firewalls, focusing on latency and throughput metrics.

Network operating environment consistency - server administrators rarely think about the operating systems of their network infrastructure - but they should. Most data center networks today run six to ten different network operating systems, adding complexity, inconsistency, and delay to the qualification of new features.

Optimizing network performance for virtual environments is difficult enough without the challenge of a different operating system on every switch, router, VPN appliance, firewall, and more. When you standardize on a single operating system (not OS “family”) for network hardware, you’ll get faster project turnaround, better network performance, and more reliable operation of applications running in virtual environments.

Virtualization - and beyond

Virtualization is a great reason to upgrade the performance and reliability of corporate networks - but not the only one. Up-to-date, optimized networks deliver business benefits that not only support the latest technologies, but unlock your organization’s ability to:

* stay in the race - with networks that deliver basic IT services with utility-grade reliability, to support business users, satisfy regulators, and delight customers
* outpace the competition - with technologies that improve productivity, cut costs, and lock your competitors in a never-ending struggle just to keep up
* change the game - using innovative technologies to craft new services that redefine your competitive landscape

Your organization’s decision to adopt virtualization signals its intention to compete - and win - using the most advanced technology available. But even a powerful new approach like virtualization doesn’t perform in a vacuum. Careful consideration of the bandwidth, latency, security and consistency of your network environment will help you overcome hurdles and delays on the way to your virtualization goals - creating a network that supports your virtualization targets, maintains your quality-of-service and availability commitments, and meets the most demanding requirements of your future business.

Juniper Networks is exhibiting at Infosecurity Europe 2009, the No. 1 industry event in Europe, held on 28th – 30th April at its new venue, Earl’s Court, London. The event provides an unrivalled free education programme, with exhibitors showcasing new and emerging technologies and offering practical and professional expertise. For further information please visit www.infosec.co.uk

Courtesy: Infosecurity PR

Virtualisation is here to stay

By Kevin Bailey, Director Product Marketing EMEA, Symantec

Virtualisation has now been around for a number of years, yet many businesses are still not implementing virtualisation engines in a strategic way.

IT managers face increased pressure to meet the changing demands of the IT infrastructure: supporting a growing number of desktop applications from deployment through to retirement, while coping with the daily deluge of software conflicts, can be a daunting task.

The increased use of software virtualisation technology will ease this pressure by eliminating conflicts, allowing organisations to deploy and fix applications quickly, and ultimately reduce support costs and improve application reliability. However, virtualisation on its own is not the solution, but just one component of an end-to-end management strategy. Virtualisation engines should be planned alongside other IT strategies, such as business continuity, disaster recovery and general availability procedures, so that IT managers can integrate them holistically into their IT environment.

One of the challenges IT managers are facing is that most current systems management tools deployed to monitor the enterprise IT infrastructure are not always built with virtualisation in mind. A common complaint is that configuration database management tools do not work properly in dynamic virtual environments.

Users commonly experience problems with their applications slowing down and PCs failing to reboot as their system gets older and more littered with applications. Magnify this problem by a thousand users and it’s clear to see how productivity within an organisation could suffer and how quickly this could become an expensive problem. These problems occur when users install new software or application updates which share common resources and code, resulting in conflicts, application failure or the reintroduction of security holes that were previously patched.

With this in mind, IT managers should seriously consider taking a look at software virtualisation technology, which enables desktop applications to be run as virtual software packages, allowing users to switch applications on and off instantaneously to eliminate any conflicts. Applications can then be reinstalled remotely without adversely affecting the base Windows configuration. By simply switching an application on or off without needing to reboot, a user can keep their PC’s capacity under control as well as maximise its performance and resilience.

The technology works by deploying the software to a part of the file system that is normally hidden from Windows. As a result, the resources that are used by applications like Microsoft Word are isolated from the operating system or other applications that might have conflicting drivers.
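The toy Python sketch below illustrates the layering idea described above - an application's files live in their own layer that can be merged into, or removed from, the view the system sees. It is a conceptual illustration only, with invented names and paths, not Symantec's implementation.

```python
# Conceptual sketch of application layering: each application's files sit in
# their own layer, and the view the user (or OS) sees is the base system with
# whichever layers are currently switched on merged over the top.
# This illustrates the idea only; it is not any vendor's implementation.

class LayeredFileSystem:
    def __init__(self, base_files):
        self.base = dict(base_files)          # the untouched Windows base image
        self.layers = {}                      # app name -> {path: content}
        self.active = set()                   # which layers are switched on

    def add_layer(self, app, files):
        self.layers[app] = dict(files)

    def activate(self, app):
        self.active.add(app)                  # "switch the application on"

    def deactivate(self, app):
        self.active.discard(app)              # instantly gone, base untouched

    def view(self):
        """The merged filesystem the user actually sees."""
        merged = dict(self.base)
        for app in self.active:
            merged.update(self.layers[app])   # layer files mask base files
        return merged

if __name__ == "__main__":
    fs = LayeredFileSystem({r"C:\Windows\system32\shared.dll": "v7.0"})
    fs.add_layer("WordProcessor", {r"C:\Program Files\WP\wp.exe": "wp-v2",
                                   r"C:\Windows\system32\shared.dll": "v7.1"})
    fs.activate("WordProcessor")
    print(fs.view())                          # sees the app and its own DLL
    fs.deactivate("WordProcessor")
    print(fs.view())                          # base configuration is unchanged
```

Switching a layer off, or swapping one layer for another, is also how rollback and side-by-side versions of an application become cheap operations, as the following paragraphs describe.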

IT managers can also use software virtualisation technology when testing and rolling out new versions of an application. Performing a successful upgrade to a business-critical application is essential, but there is always a risk attached to changing or upgrading a package. If the application doesn’t work properly for some reason, the management team will not be interested in understanding why; they will just expect the application to be working again quickly.

Virtualisation technology can resolve upgrade issues by allowing users to simply roll back to the old version so they can continue working. This gives IT managers time to repair the damaged application before making the new package available again. In addition, virtualisation allows users to host multiple versions of an application on the same system, giving them sufficient time to become familiar and comfortable with the new features of the package before they feel confident enough to move away from the old version.

Even though virtualisation has been on the radar for some time, many IT managers still do not understand the technology or how it will change the way software is managed in the future. However, once more IT managers start looking at software virtualisation and begin to see the true value of the technology, it will only be a matter of time before IT infrastructures become completely virtualised. Organisations shouldn’t make the mistake of turning a blind eye to virtualisation: it is here to stay, and will be used by many IT departments in their quest to standardise IT infrastructures and achieve financial efficiencies.

Symantec (UK) Ltd is exhibiting at Storage Expo 2008, the UK’s definitive event for data storage, information and content management. Now in its 8th year, the show features a comprehensive FREE education programme and over 100 exhibitors at the National Hall, Olympia, London, from 15 - 16 October 2008. For further information please visit www.storage-expo.com

Source: StoragePR

Beating the data deluge with storage virtualisation

As data volumes explode, businesses face the daunting prospect of unmanageable storage growth. Steve Murphy, UK Managing Director for Hitachi Data Systems reveals how organisations can use storage virtualisation to consolidate their systems, increase utilisation and efficiency and reduce costs.

Because virtualisation is a technique rather than a specific technology, and is applied to areas as diverse as servers, storage, applications, desktops and networks, it is often poorly understood. This article attempts to bring clarity to how virtualisation is applied to storage systems and the benefits it can deliver.

Fundamentally, virtualisation aims to abstract software from hardware, making the former independent of the latter and shielding it from the complexity of underlying hardware resources.

Storage virtualisation typically performs two functions. It makes many storage systems look like one, simplifying the management of storage resources; and, in some cases, it provides partitioning, so that one storage system appears as many, isolating applications that need to be kept separate.
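A small Python sketch may help make these two functions - aggregation and partitioning - concrete. The class, array names and capacities are invented for the example.

```python
# Toy model of the two faces of storage virtualisation described above:
# aggregation (many physical arrays presented as one pool) and partitioning
# (one pool carved into isolated virtual volumes). Purely illustrative.

class StoragePool:
    def __init__(self):
        self.arrays = {}                      # array name -> capacity in TB
        self.volumes = {}                     # volume name -> allocated TB

    def add_array(self, name, capacity_tb):
        """Aggregation: another physical array joins the single logical pool."""
        self.arrays[name] = capacity_tb

    @property
    def total_tb(self):
        return sum(self.arrays.values())

    @property
    def allocated_tb(self):
        return sum(self.volumes.values())

    def create_volume(self, name, size_tb):
        """Partitioning: carve an isolated virtual volume out of the pool."""
        if self.allocated_tb + size_tb > self.total_tb:
            raise ValueError("pool exhausted")
        self.volumes[name] = size_tb

if __name__ == "__main__":
    pool = StoragePool()
    pool.add_array("legacy-array-A", 40)      # existing heterogeneous arrays
    pool.add_array("legacy-array-B", 60)
    pool.create_volume("finance", 30)         # applications see only their volume
    pool.create_volume("email-archive", 50)
    print(f"{pool.allocated_tb} of {pool.total_tb} TB allocated")
```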

Across most industries, data volumes are spiralling out of control as the amount of information and content we create grows exponentially. Data storage within most organisations is increasing by about 60% annually. This means that organisations have to increase their capital and operational expenditure in areas such as IT staff, power, cooling and even data centre space.

Traditionally, organisations have tried to deal with growing data volumes by buying more disks. However, many organisations are finding that their storage infrastructures are becoming unmanageable, while utilisation of these systems is unacceptably low, often running at just 25-30%.
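Taking the article's own figures - roughly 60% annual data growth and 25-30% utilisation - the quick calculation below shows how much raw capacity that implies compared with a better-utilised, consolidated pool. The starting volume and the 70% target utilisation are illustrative assumptions.

```python
# Quick arithmetic using the article's figures: data growing ~60% a year and
# arrays running at roughly 25-30% utilisation. The starting 100 TB and the
# 70% target utilisation used for comparison are illustrative assumptions.

def raw_capacity_needed(data_tb, utilisation):
    """Physical capacity required to hold data_tb at a given utilisation."""
    return data_tb / utilisation

data_tb = 100.0                               # today's data, in TB (assumed)
for year in range(4):                         # today plus three years of growth
    low = raw_capacity_needed(data_tb, 0.27)  # midpoint of 25-30% utilisation
    high = raw_capacity_needed(data_tb, 0.70) # a consolidated, virtualised pool
    print(f"year {year}: {data_tb:6.0f} TB of data -> "
          f"{low:6.0f} TB raw at 27% vs {high:6.0f} TB raw at 70%")
    data_tb *= 1.60                           # ~60% annual growth
```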

Another challenge is that, while data volumes grow, IT managers still need to meet the same demands: supporting new and existing applications and users so that the business remains competitive; managing risk and ensuring business continuity; and maintaining compliance with government regulations and specific industry standards.

There is a strong argument for organisations to stop buying more capacity and instead look for ways to consolidate their existing estate, increase utilisation and reduce costs. Storage virtualisation is an increasingly popular way for organisations to address these challenges.

Storage virtualisation aims to ‘melt’ groups of heterogeneous storage systems into a common pool of storage resources. Vendors have adopted a range of methods to achieve this. One technique is to let the server handle storage virtualisation, although as the server is removed from the storage system and has other functions to manage, performance can suffer.

One of the most widely used approaches is to use the intelligent storage controller as the virtualisation engine. By installing an intelligent storage controller in front of their infrastructure, companies can aggregate existing storage systems and virtualise the services provided to host applications such as data protection, replication, authorisation and monitoring. This offers advantages such as simplified management, increased utilisation of storage resources, seamless migration across tiers of storage, lowered interoperability barriers and better integration of common functionality.

Virtualisation brings cost reductions and efficiencies by reducing the need for additional software applications and licences, the need for additional hardware (which in turn means lower power, cooling and space costs), and the labour and resources required to manage spiralling data volumes. Typically, administrators can manage from three to 10 times more storage capacity once virtualisation is implemented.

Storage virtualisation also allows organisations to consolidate and utilise existing storage assets, extending their shelf life so they continue to deliver value. Organisations can also consolidate their management and storage services, using a single standard interface to manage storage, archive and backup functions.

Storage virtualisation allows organisations to consolidate systems and increase utilisation, significantly cutting the power required to operate and cool their data centres. This reduces energy costs, which makes good business sense from both an environmental and a cost-saving perspective.

Hitachi Data Systems is exhibiting at Storage Expo 2008, the UK’s definitive event for data storage, information and content management. Now in its 8th year, the show features a comprehensive FREE education programme and over 100 exhibitors at the National Hall, Olympia, London, from 15 - 16 October 2008. For further information please visit www.storage-expo.com

Source: StoragePR