By Bill Beverley - Security Technology Manager, F5 Networks
Introduction
The concepts behind application and operating system virtualisation are not new. The rate of virtualisation adoption, however, especially that of operating system virtualisation, has grown exponentially in the past few years. Virtual machines have finally come into their own, and are quickly moving into the enterprise data centre and becoming a universal tool for IT departments everywhere.
So what exactly is a virtual machine? VMware defines virtualisation as “an abstraction layer that decouples the physical hardware from the operating system...”. Today, we commonly think of virtual machines within the scope of one hardware platform running multiple software operating systems. Most often this is implemented as one operating system on one hardware box (the host platform) running multiple independent operating systems on virtual hardware platforms in tandem (the guests).
Platform virtualisation usually relies on full hardware segmentation: individual guest platforms are allocated specific portions of the physical host hardware, so that the host and guest(s) can run in tandem without conflicting with or impacting one another.
There are two primary types of platform virtualisation: transparent and host-aware. Transparent virtualisation is implemented so that the guest is not aware that it’s running in a virtualised state. The guest consumes resources as if it were running natively on the hardware platform, oblivious to the fact that it’s being managed by an additional component, called the VMM (Virtual Machine Monitor), or hypervisor. The more common forms of virtualisation today, such as those from VMware, implement transparent hypervisor systems. These systems can be thought of as proxies: the hypervisor transparently proxies all communication between the guest and the host hardware, hiding its existence so that the guest believes it’s the only system running on that hardware.
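In practice, this transparency is rarely absolute. On x86, for example, most mainstream hypervisors announce themselves through a CPUID flag that a guest can query. The following is a minimal sketch, assuming GCC or Clang on an x86 guest; it is illustrative only, since a hypervisor can choose not to set the flag:

```c
/* Minimal sketch: ask the CPU whether a hypervisor has announced itself.
 * CPUID leaf 1 sets ECX bit 31 (the "hypervisor present" flag) under most
 * mainstream VMMs; leaf 0x40000000 then returns a 12-byte vendor signature
 * (e.g. "VMwareVMware" or "XenVMMXenVMM"). A hypervisor that hides the
 * flag will not be detected this way. */
#include <stdio.h>
#include <string.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
        puts("CPUID not available");
        return 1;
    }

    if (ecx & (1u << 31)) {
        char vendor[13] = {0};

        /* Hypervisor-reserved leaf: vendor signature in EBX, ECX, EDX. */
        __cpuid(0x40000000, eax, ebx, ecx, edx);
        memcpy(vendor + 0, &ebx, 4);
        memcpy(vendor + 4, &ecx, 4);
        memcpy(vendor + 8, &edx, 4);
        printf("Hypervisor present, vendor signature: %s\n", vendor);
    } else {
        puts("No hypervisor flag set (bare metal, or the hypervisor is hiding it)");
    }
    return 0;
}
```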
Host-aware implementations differ in that the guest has some form of virtualisation knowledge built into its kernel. Some portion of the guest operating system kernel knows about the existence of the hypervisor and communicates with it directly. Xen (pronounced ‘zen’), a popular virtualisation implementation for Linux, uses a host-aware architecture, requiring special hypervisor-aware code to run in both the host and every virtualised guest.
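A host-aware guest, by contrast, does not need to probe the hardware at all; its kernel already knows about the hypervisor. As a rough sketch, assuming a Linux guest whose kernel carries Xen (or similar) support, that awareness is exposed through sysfs:

```c
/* Minimal sketch, assuming a Linux guest with hypervisor-aware kernel code:
 * such kernels publish the hypervisor type under sysfs (on a Xen domU this
 * typically reads "xen"). On bare metal, or under a fully transparent VMM
 * with an unaware kernel, the file is simply absent. */
#include <stdio.h>

int main(void)
{
    char type[32];
    FILE *f = fopen("/sys/hypervisor/type", "r");

    if (f == NULL) {
        puts("No /sys/hypervisor/type: this kernel reports no hypervisor awareness");
        return 0;
    }
    if (fgets(type, sizeof type, f) != NULL)
        printf("Guest kernel reports hypervisor type: %s", type);
    fclose(f);
    return 0;
}
```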
One of the driving factors in virtualisation adoption is the open nature of hardware support for VMMs: the hardware platforms that run the host operating system and the VMM are not specialised devices or appliances. This flexibility, the move of virtualisation software onto everyday hardware, has given everyone direct and inexpensive access to virtualised environments. Virtualisation allows a company to purchase one high-end hardware device to run 20 virtual operating systems instead of purchasing 20 commoditised lower-end devices, one for each operating platform.
Virtualised Threat Vectors
The benefits of virtualisation are obvious: more bang for your buck. But everything has a pro/con list, and virtualisation is no exception. The pro column is a long one, but the con list isn’t so obvious. What could be bad about running 20 servers for the price of one? Although it is by no means regarded as a major threat today, the security of virtual machines and environments is typically not considered, not because the security of these implementations is a technological mystery, but because it is generally an unknown vector to the groups implementing wide-spread virtualisation. In other words, virtualisation is usually implemented with no specific regard to the new security risks it brings.
Virtualisation brings an entirely new set of security issues, problems, and risks. Security administrators are familiar with phrases such as “hardened operating system,” “walled garden,” and “network segmentation” in the one-box-for-one-application world, but how do administrators apply these concepts to the uncharted waters of the virtual data centre? How can we protect ourselves in new environments we don’t understand? Today’s system and security administrators need to begin focusing on virtual security, preparing for a new threat arena for distributed and targeted attacks.
There are many security risks and considerations that virtual infrastructure administrators should be aware of and prepared for, more than can be covered in this discussion. And there are many questions that still need to be addressed before moving to a fully virtualised environment, such as:
- How will our current analysis, debugging, and forensics tools adapt themselves to virtualisation?
- What new tools will security administrators be required to master across all of the virtualisation platforms?
- How does patch management impact the virtual infrastructure for guests, hosts, and management subsystems?
- Will new security tools, such as hardware virtualisation built into CPUs, help protect the hypervisor by moving it out of software?
- How will known security best practices, such as no-exec stacks, make a difference when fully virtualised? Will hardware virtualisation pave the way to a truly secure VMM?
- Virtualisation and shared storage: What happens if we virtualise all the way down to the iSCSI transport layer? Are we opening up a floodgate that bypasses built-in SAN security?
F5 Networks is exhibiting at Infosecurity Europe 2009, the No. 1 industry event in Europe, held on 28th–30th April at its new venue, Earl’s Court, London. The event provides an unrivalled free education programme, with exhibitors showcasing new and emerging technologies and offering practical and professional expertise. For further information please visit www.infosec.co.uk
Source: Infosecurity PR