Monitoring for effective data protection

By Geoff Sweeney, CTO, Tier 3 www.tier-3.com

Recent security breaches in both the private and public sector have highlighted the need for organisations to ensure personal information is processed and stored securely. Ever-growing collections of personal data, more remote access and the prevalence of crimes such as identity theft all create vulnerabilities. It is essential that effective data protection policies and practices are in place, combined with vigilance and strong governance at all levels in all organisations, to ensure data protection is taken seriously.

Individuals expect the Data Protection Act to safeguard their information, yet information security is increasingly at risk. As part of its new data protection strategy launched in March 2008, the UK’s privacy watchdog, the Information Commissioner’s Office (ICO), disclosed plans to promote the importance of appropriate security, to use its regulatory powers against organisations that neglect their responsibilities in this area, and to help individuals protect their own information.

In May this was reinforced when the Criminal Justice and Immigration Act received Royal Assent, creating tough new sanctions for the ICO. The new legislation gives the ICO the power to impose substantial fines on organisations that deliberately or recklessly commit serious breaches of the Data Protection Act, a step up from the ICO's previous power simply to issue enforcement notices.

This isn’t necessarily the end of the changes, and there may be more regulation to come: towards the end of May the European Network and Information Security Agency (ENISA) called for laws tougher than those in the US to force companies to reveal when their computer systems have been breached. In its General Report 2007, the EU's top security body said governments, businesses and consumers are still underestimating the scope of the IT security problem, in part because of the lack of transparency when breaches occur, and that mandatory disclosure of security breaches would be a step toward raising recognition of the seriousness of security threats. In the US, two laws force organisations to publish details of security breaches. One is the California Breach Law (SB1386), which requires organisations doing business in California to tell customers about possible security breaches; similar laws are planned for other states. The second is Sarbanes-Oxley, which obliges executives to keep informed about material aspects of their business, including security breaches.

Whether mandatory disclosure of information security breaches will ultimately be adopted in the UK is not yet known, but clearly advances in IT have made the collection, storage and sharing of all sorts of information easier and available to a wider population. These advances have undoubtedly resulted in enhanced services across many sectors, but they have also increased the challenge of managing and protecting information. The vulnerability of data protection is evidenced almost daily, with costly data leakage incidents regularly impacting individuals and the organisations charged with the custody of their sensitive information.

The connectivity of WANs and the internet means that there are now few barriers to sharing information. The consequence, however, is that organisations can quickly lose control of who is sharing the information, where it is going and whether it is being used appropriately.

With this in mind, the best way for organisations to meet their data protection obligations is to understand the information flows and uses within their business environment. A systematic, risk-based approach which matches the data monitoring and protection capabilities of the organisation against the risks associated with the loss of information, based on its sensitivity or value and its likely impact on the individual and the organisation, is increasingly important. Security policies, processes and technology are all part of the operational risk management process of identifying, monitoring and controlling information security breaches which may cause highly public exposure for your organisation and its stakeholders.

Increasingly, with the massive data volumes involved, this risk management loop requires the integration of skilled operational staff and competent technology to provide appropriate monitoring and control to ensure the use and movement of confidential information is within policy and adequately protected.

The good news in all this is that the security management process shouldn’t be too onerous and indeed should be part of the overall IT security effort. Technology is available which readily monitors who is accessing information, when and for what purpose. Using data protection systems which employ behavioural analysis, an organisation can easily distinguish between legitimate use of its confidential information and inappropriate usage. One of the most damaging breaches occurs when an authorised user with “legitimate” access to sensitive information accidentally or maliciously misuses or leaks that information. A behavioural analysis based security system can detect unexpected or risky data movement even where other systems can’t.

By recording the movement and use of information, a behavioural analysis based security system establishes a profile that incorporates the characteristics of normal system use. By constantly monitoring and profiling user and system activity, the system recognises when information is accessed, changed or shared in an unusual or uncharacteristic manner and immediately alerts the accountable manager for remediation and evidentiary audit purposes. Specific business and policy rules can complement the system to enable early warning of specific forbidden or unacceptable practices, e.g. theft or fraud.
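
To make the profiling idea concrete, here is a minimal sketch in Python (purely illustrative, not Tier 3's implementation; the user names, thresholds and notion of "unusual" are all simplified assumptions): build a per-user baseline of normal working hours and transfer volumes, then flag activity that falls outside it.

    from collections import defaultdict
    from datetime import datetime

    class BehaviourProfile:
        """Hypothetical per-user baseline of access hours and data volumes."""
        def __init__(self):
            self.hour_counts = defaultdict(int)   # accesses seen per hour of day
            self.volumes = []                      # bytes moved per access

        def learn(self, when, bytes_moved):
            self.hour_counts[when.hour] += 1
            self.volumes.append(bytes_moved)

        def is_unusual(self, when, bytes_moved):
            if len(self.volumes) < 3:
                return False                       # too little history to judge yet
            rare_hour = self.hour_counts[when.hour] < 2
            big_move = bytes_moved > 3 * max(self.volumes)
            return rare_hour or big_move

    profiles = defaultdict(BehaviourProfile)

    def record_access(user, when, bytes_moved):
        profile = profiles[user]
        if profile.is_unusual(when, bytes_moved):
            print(f"ALERT: unusual data movement by {user} at {when:%H:%M} ({bytes_moved} bytes)")
        profile.learn(when, bytes_moved)

    # Example: daytime activity builds the baseline; a large 2 a.m. transfer is flagged.
    record_access("alice", datetime(2008, 7, 1, 10, 0), 20000)
    record_access("alice", datetime(2008, 7, 2, 11, 0), 35000)
    record_access("alice", datetime(2008, 7, 3, 10, 30), 28000)
    record_access("alice", datetime(2008, 7, 7, 2, 0), 500000)

Production systems profile far richer attributes than this, but the pattern is the same: learn what is normal, then alert on departures from it.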

The scale of the task of protecting stored and transmitted sensitive information is undoubtedly becoming greater. The responsibility of organisations for information assurance, however, remains unchanged, and given the intrinsic risk associated with storing and sharing it, information owners continue to need ongoing visibility of who is accessing data, for what purpose and where they are taking it. Behavioural analysis based security monitoring technology gives any organisation the ability to continuously manage and report the status of access to and usage of its confidential information.

Source: Eskenzi PR

“Self-Service” Storage: Has Its Moment Arrived?

Geoff Hough, Director Product Management, 3PAR


Today, most of us buy fuel for our cars at so-called "self-service" stations. We pull up to any platform, select whichever variety of fuel we want (leaded, unleaded, diesel), and securely process our own electronic payments (debit or credit). But as some will remember, it was not always like this. Younger readers, if your parents have never told you this story, prepare yourself for a surprise.

As late as the 1970s, most filling stations worked like this: upon arrival, you were directed to pull up to a specific platform according to the type of fuel you were purchasing, as pumping stations were designed to serve just a single type of fuel. Next, the service attendant would proceed to pump the fuel into your car. The attendant would then take your cash or credit card, note the fuel charge displayed on the pump, and process your payment using the cash box or a credit card imprinter. After receiving your change or signing your credit card slip, you were on your way.

Some may mourn those bygone days. After all, attendants would often wash your windshield for you! However, for the most part, this old filling station model was quite inefficient. Jams were likely to develop behind the pumps that were in demand at a given moment in the day. If the attendant and his card imprinter were otherwise occupied, you would wait until he could get to you. All of this resulted in greater fuel cost, more stations, and much more time spent at the pump. In addition, the security and accuracy of your transaction was a matter of human diligence and integrity.

What is the point? Consider that storage today is provisioned much like fuel was pumped 40 years ago. In today's datacentres, the consumers of storage are directed to an appropriate (and hopefully available) array where they must wait for their storage to be provisioned and then rely on the ongoing diligence and integrity of others to ensure the security of their data. In other words, system, application, and database administrators must rely heavily on storage administrators for routine storage provisioning and data security. Meanwhile, precious storage expertise is consumed with these routine activities, leaving less time for IT initiatives with higher value. Given our understanding of traditional datacentre processes and technologies, can a more efficient model even be imagined? Is the idea of "self-service” storage even conceivable?

Returning to our filling station analogy, one can see that the move to a more efficient "self-service" model was enabled principally by shifts in technology. Pumps were developed that could serve multiple types of fuel. Pumps were fitted with dual hoses, instead of one, enabling greater and more flexible use. Secure transaction processing capabilities — today's card readers — were embedded into the pumps themselves. The innovations and capabilities that we take for granted today have enabled a new and infinitely more efficient model that has changed the experience — and economics of fuelling — forever.

As for the idea of “self-service” storage, we should ask ourselves: What technical or procedural innovations are necessary to enable such a model in the datacentre? One requirement that immediately comes to mind is that storage provisioning would have to be made simple — so simple that storage consumers could properly provision storage for their applications without deep storage knowledge or configuration expertise. Conceivably, there may be a number of ways to achieve this, but the most obvious and efficient place to effect this simplicity is in the storage array itself (the “fuelling platform”). Some array vendors — mostly new guard players such as 3PAR®, Compellent, and EqualLogic — are far along in this respect.

Presuming a storage device easy enough for anyone to use, the next requirement would be to provide multiple and highly scalable levels of service for multiple users, much like the gas pump that evolved to support multiple fuel types and higher levels of use. In the context of data storage, this would mean abundantly and independently scalable levels of connectivity (Fibre Channel and iSCSI), performance (transactional and sequential), and capacity (Fibre Channel and ATA). Typically, these sorts of needs have been met only in high-end arrays. Of course, “high end” usually means high cost. Mid-range arrays offer lower cost, but scalability requires multiple devices. Also, mid-range arrays are optimised for single, not multiple, service levels — which returns us to the model whereby users must be directed to the most appropriate and available "fuelling platform." N-way clustered architectures provide an attractive alternative to the classic modular versus monolithic dilemma, but users may observe that quality of service can be limited. For example, EqualLogic only accommodates iSCSI drives and IBM® Nextra only accommodates ATA drives. The customer loses the ability to choose between premium and regular octane.

The last requirement for enabling “self-service” storage is one surrounding secure application and administrative segregation. One of the reasons that consumers and fuel station managers both became comfortable with the idea of "self-service" was that privacy and control concerns were adequately addressed for all parties. With the advent of specifically designed and integrated credit card readers, if desired, consumers could conduct their payment transactions safely and without interference while station managers kept control of the cash box and any non-routine (exception) processing. In storage, this becomes a very challenging requirement. Persuading different user groups to share the same devices and infrastructure has traditionally been an uphill battle for centralised IT departments and service providers. Politics and fears of compromised service levels become difficult barriers to overcome. Even more troublesome for a storage administrator is the idea of shared control. As the guardian of data availability and security for multiple applications and user groups, the administrator can only imagine the potential consequences of sharing device control among multiple parties. One inadvertently deleted or improperly exported volume, and it’s “game over”!

What is required in these circumstances is something similar to the functionality that a hypervisor like VMware® provides for server environments. So-called “virtual machines” are secure, so the administrator of one virtual machine cannot deliberately or mistakenly affect the integrity of another. Service levels are assured by the "master administrator," who sizes, groups, and sets rebalancing policies for virtual machines across various physical hosts.

If a virtual machine-like capability is the last piece of the puzzle for enabling “self-service” storage, what are the options today? Storage array partitioning, which provides administrative segregation, can be found in a handful of high-end arrays. However, while it may seem like a plausible solution, the drawbacks are many. First, as mentioned previously, there is the cost of high-end storage. There is also the complexity of administration, which makes it impractical for non-experts to use high-end arrays. A final impediment is that the partitioning schemes employed by high-end arrays are physically and coarsely defined. This means that a partition's service level can only be as good as the number, type, and configuration of physical devices that can be affordably assigned to it. Unfortunately, this provides little price/performance incentive for a given user group to share a device.

Another and more promising alternative for secure application and administrative segregation within a shared array is functionality that actually mirrors that of a true hypervisor. This technology is available today and allows a master administrator to create "virtual arrays" within a single physical device. Capacity limits, host permissions, service level parameters like RAID level and disk type, and authorised users are defined by a master administrator on a "virtual array”-by-“virtual array" basis. Security is provided by device control, which is tightly circumscribed for each user group. And just as a hypervisor virtualises physical components and allows them to be shared by multiple user groups, so does this "virtual array" technology. In fact, by aggregating "virtual array" workloads across all physical components (controllers, cache, drives, etc.), user groups are able to attain higher levels of service more affordably than they can in a dedicated environment — an attractive incentive to move to a shared infrastructure.
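
By way of illustration only (this is not 3PAR's or any other vendor's actual interface, and every name and field below is hypothetical), the Python sketch models how a master administrator might carve a shared device into "virtual arrays" with their own capacity caps, service-level parameters and authorised users, and how a self-service provisioning request is checked against those limits.

    from dataclasses import dataclass, field

    @dataclass
    class VirtualArray:
        """Hypothetical slice of a shared physical array with its own limits and policies."""
        name: str
        capacity_gb: int                 # hard capacity limit set by the master administrator
        raid_level: str                  # e.g. "RAID10"
        disk_type: str                   # e.g. "FC" or "ATA"
        admins: set = field(default_factory=set)
        allocated_gb: int = 0

        def provision(self, user, size_gb):
            """Self-service provisioning, constrained to this virtual array only."""
            if user not in self.admins:
                print(f"{user} is not authorised on {self.name}")
                return False
            if self.allocated_gb + size_gb > self.capacity_gb:
                print(f"request exceeds the {self.capacity_gb} GB limit on {self.name}")
                return False
            self.allocated_gb += size_gb
            print(f"{size_gb} GB provisioned on {self.name} ({self.raid_level}, {self.disk_type})")
            return True

    # The master administrator defines isolated virtual arrays on one shared device.
    oracle_va = VirtualArray("oracle-prod", capacity_gb=2000, raid_level="RAID10",
                             disk_type="FC", admins={"dba_team"})
    backup_va = VirtualArray("backup", capacity_gb=8000, raid_level="RAID5",
                             disk_type="ATA", admins={"backup_team"})

    oracle_va.provision("dba_team", 500)      # succeeds within its own limits
    oracle_va.provision("backup_team", 100)   # rejected: no rights on this virtual array

The point of the sketch is the separation of roles: the master administrator sets the boundaries once, and user groups provision freely inside them without being able to touch one another's storage.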

It is an odd but exciting prospect to contemplate the idea of "self-service” storage. In its day, “self-service” fuelling was met by some with trepidation and by others with anticipation. No doubt the same will be true for “self-service” storage. And yet the technology enablers have come into place for the first time. These include:
  • Storage arrays and interfaces that are simple to operate by non-experts
  • Multiple and highly scalable levels of service that can be delivered affordably in the fewest arrays possible
  • "Virtual array" technology that provides secure application and administrative segregation with attractive cost and QoS advantages
Has the time for “self-service” storage arrived, as inevitably as self-service fuelling? It would appear so. "Leading indicator" IT shops, such as those at major investment banks, have already taken steps in this direction by manually creating and scripting their own "storage provisioning windows" where users can go to request and receive provisioned capacity. And over the Internet, a variety of storage applications are increasingly being offered via application service provider (ASP) and Software as a Service (SaaS) approaches that offer ease of use, scalability, and security. So what are we waiting for? In ten years, when “self-service” storage is mainstream, we may be wondering how the storage provisioning model could have ever been any other way.

3PAR is exhibiting at Storage Expo 2008 the UK’s definitive event for data storage, information and content management. Now in its 8th year, the show features a comprehensive FREE education programme and over 100 exhibitors at the National Hall, Olympia, London from 15 - 16 October 2008 www.storage-expo.com

Source: StoragePR

Developing greener data centres

By Phil Andrews, operations director, Data Centre, Cisco for European Markets

The ever-increasing power consumption of data centres is rapidly putting energy efficiency at the top of the data centre manager's agenda. Even if energy costs and the threat of a shortage of power to support data centres weren't driving this efficiency agenda, the threat of carbon capping and legislation soon will. Reducing the demands being made on the data centre by businesses is not an option so what are the alternatives?

This article looks at a combination of subtle trends driving the impending power crisis in the data centre, strategies and technologies to reduce power consumption whilst meeting the evolving needs of the business, and approaches for the short-term and long-term future.

From barely being an issue a few years ago, the environmental impact of data centres has risen to the top of many IT managers’ agendas. The very real concerns over power consumption need a convergence of technological and non-technological solutions to address the issue, says Phil Andrews from Cisco Systems.

How green is your data centre? Until recently, such a question would have raised eyebrows among IT managers. But with rising storage requirements and the levels of data centre infrastructure, the increase in power consumption of such facilities has been getting harder to ignore in recent years, despite the fact that accurate measures of data centre power use are difficult to come by.

Historically, power consumption has not been an issue for data centre managers for a number of reasons. First and foremost, data centres have often sat at the heart of strategic moves to expand or improve the business, and as such have not usually had to contend with cost-containment measures.

A second reason is that IT divisions have not traditionally had responsibility for the environmental impact of their data centres. Facilities departments usually foot the power bill and are often in charge of implementing environmentally-friendly practices.

Thirdly, there has never been much of a green alternative to data centres. Unlike, say, corporate air travel, you cannot just stop using IT storage systems and expect the enterprise to carry on as before.

As a consequence, some data centres have been allowed to turn into the gas guzzlers of the IT world. It takes about 830 pounds of coal to run a computer for a year. And in the case of servers, research by Intel shows less than 20 percent of power actually goes to the CPU.

This carefree attitude to power use is changing now, though, as companies face spiralling bills to maintain their sprawling data centre operations.

Data storage requirements are currently expanding at a compound annual growth rate of between 40 percent and 70 percent. Server use grew by 12 percent in 2005 and is expected to increase.

As a result, energy costs are expected to mushroom from 10 percent to as much as 30 percent of average IT budgets, overtaking all other forms of data centre expenditure and meaning IT managers will effectively lose a fifth of their budget to power consumption.

Exacerbating the problem is the fact that cooling tends to become less efficient as power consumption rises. The simplest way to increase cooling to a given rack of equipment is to simply open up more floor tiles.

While this is a simple fix in the short run, it does not work much beyond two or three tiles, because cooling air provided to one rack will be ‘stolen’ from adjacent racks, reducing the cooling they receive.

Another reason is that as more floor tiles are opened up for a particular rack, the distance from tile to rack increases. The cooling system becomes less efficient because it ends up cooling the general atmosphere of the data centre in addition to the equipment.

Both of these effects result in higher cooling bills, a reduced ability to cool equipment in the data centre on a per-rack basis and a less efficient cooling system.

Since cooling and heat removal are typically growth constraints in the data centre, this wasted cooling capability will act as a drag or a cap on growth.

Over the next three years, says Gartner, 50 percent of large organisations will face an annual energy bill that is higher than their yearly server budget. Google has already notoriously reached this point. And it gets worse.

In 2005, the University of Buffalo paid US$2.3 million for a new supercomputer, only to find there was not enough power to switch it all on.

An increasing number of data centre managers are similarly finding that there simply is not enough power available to expand their operations any further.

Gartner says most data centres are now operating at 100 percent capacity in terms of power and cooling, versus 70 percent capacity for data storage, meaning that energy, not memory, is now the main limiting factor on growth. (Availability of suitable space is also an issue.)

This puts data centre managers in a difficult position, since demand for IT storage is not going to go away.

If anything, compliance requirements such as the banking sector’s Basel II or Sarbanes-Oxley regulations, combined with the need to roll out ever faster and more complex IT applications, are increasing the demand for data centre services.

As a result, the only way to go is to cut power consumption and thereby reduce the environmental impact of data centre operations. Doing this is not easy. The actual amount of power required by data centre devices is only part of the equation.

Each watt consumed by IT infrastructure carries an additional ‘burden factor’ of between 1.8 and 2.5 for power consumption associated with cooling, lighting, conversion and distribution, all essential energy-consuming services that have to be taken into account in efficiency plans.
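
As a quick worked example (the 100 kW IT load below is an assumption purely for illustration, and the burden factor is read here as the ratio of total facility power to IT equipment power):

    # Illustrative only: total facility draw for an assumed 100 kW of IT equipment.
    it_load_kw = 100.0                    # hypothetical IT equipment load
    for burden in (1.8, 2.5):             # burden factor range quoted above
        total_kw = it_load_kw * burden
        overhead_kw = total_kw - it_load_kw
        print(f"burden {burden}: {total_kw:.0f} kW total, of which {overhead_kw:.0f} kW "
              f"goes to cooling, lighting, conversion and distribution")

In other words, a saving of one watt at the server can be worth roughly two watts or more at the facility level once the supporting services are counted.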

In addition, simply checking the power rating on the back of a device will not necessarily give you an accurate picture of how wasteful it is; its processing power and utilisation are also critical factors in determining its overall efficiency.

Because of all this, it is not easy to accurately measure and track data centre power consumption and even now few IT managers are building operating efficiency considerations into their purchasing criteria, although it is likely many will need to soon.

The good news is that recent developments by equipment vendors have led to a number of innovations that can help data centres run more efficiently. Server manufacturers, for example, are looking at introducing variable power consumption based on CPU activity.

The beneficial effects of this will be tempered, however, by the fact that server virtualisation strives to increase CPU utilisation to upwards of 80 percent.

Another option is the creation of blade centres and multi-core CPUs. This will raise the percentage of power going to the CPUs on a per-server basis, improving the overall power efficiency.

It will not necessarily reduce the power per rack, though, without other measures such as IO consolidation.

Where there is perhaps more scope for improvement is in the data centre’s network components, which can be used to create efficiencies in three ways:

  • By switching to devices that offer more processing power per watt.
  • By incorporating more services into each device, so that redundant devices can be removed from the infrastructure.
  • By using virtualisation to ensure that the remaining devices are used as efficiently as possible.
Looking at perhaps the most obvious measure for reducing power consumption, which is the efficiency of the devices themselves, it is fair to say that virtually all equipment manufacturers are working hard to bring leaner machines to market.

As an example, the efficiency of power supplies for the Cisco Catalyst 6500, the most widely used switch on the market, has improved from 70 percent to 80 percent since it was introduced in 1999.

Forthcoming Cisco power supplies are expected to be 90 percent efficient. At the same time, Cisco is continuing to reduce the power per port required by its data centre platforms, with a 30 percent to 50 percent reduction goal.

What is also significant about many of these new, more efficient platforms is that they can support a greater range of services. This can have a major impact on power consumption.

A typical application server may have multiple appliances associated with it, such as firewalls, secure sockets layer termination devices and load balancers, each with its own power and cooling requirements.

A rough and ready calculation shows these could represent up to an additional 2700W of power and cooling load per server – a considerable drain across the entire data centre.
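
To see how a figure of that order can arise, here is a back-of-the-envelope sketch; the individual appliance wattages are assumptions for illustration only, with the burden factor discussed earlier applied on top.

    # Illustrative only: hypothetical appliance draws per application server.
    appliances_w = {"firewall": 450, "ssl_termination": 350, "load_balancer": 400}
    raw_w = sum(appliances_w.values())     # 1200 W of direct appliance load
    burden = 2.25                          # mid-range of the 1.8-2.5 burden factor above
    total_w = raw_w * burden
    print(f"{raw_w} W of appliances -> about {total_w:.0f} W of power and cooling load per server")
    # ~2700 W, in line with the rough figure quoted in the article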

Nowadays, however, functions such as security and load balancing can be incorporated into the network fabric, making it possible to eliminate the appliances and their associated power loads.

Doing this has several added bonuses. It lowers the complexity of the overall infrastructure, making it more manageable, reducing latency and eliminating single points of failure.

Finally, virtualisation can further increase disk utilisation by around 70 percent simply by incorporating all a data centre’s disparate storage devices into a single fabric that is then compartmentalised logically rather than physically.

In a virtual storage area network, each device can be ‘filled up’ to full capacity with data from various sources and applications, so fewer devices need to be used at any point in time.

In addition, the network can give priority to more efficient devices, so that those that represent the greatest drain on resources are only used when absolutely necessary.

The benefits of virtualisation can be significant. Taking a tape subsystem offline can save nearly €3,000 in power and cooling per year.

Taken together, these measures could reduce data centre power requirements by up to 85 percent, certainly enough to allow significant further expansion in storage area network use at current energy levels.

Storage area networking technologies can also help reduce server power requirements in a number of other ways.

Aside from power conversion losses, peripheral component interconnect cards and hard drives are the two biggest non-CPU power loads on a typical dual-core server, so moving to diskless servers will potentially remove a 72W load.

This translates into approximately 1.2kW per rack, in addition to reducing costs and improving the availability of servers. Another big area of opportunity is multifabric input/output and server I/O consolidation.

Consolidating storage and Ethernet connections on a single link reduces the number of network interface card ports required on the server (as well as switch ports), reducing the amount of cabling needed and thus improving airflow around the rack.

Furthermore, there are other areas of technical innovation that could help create further savings.

As an example, Cisco has an Automated Power Management System (AMPS) to control energy consumption in laboratories where it develops and tests new equipment.

These labs represent approximately 20 percent of Cisco's real estate, although the testing equipment is rarely used continuously. The system identifies equipment not in use and automatically switches it off.

Separately, Cisco is also partnering with the U.S. Department of Energy's Lawrence Berkeley National Laboratory to research technologies that could significantly reduce energy demands, as well as improve reliability and lengthen equipment life in data centres.

The technology eliminates power conversion losses by using DC (direct current) rather than AC (alternating current) power to provide electricity throughout the data centre.

According to Intel, AC to DC power conversion losses account for around 36 percent of the total server power budget in a typical data centre.

On a more general level, using IP networks to monitor and control energy use can help reduce power consumption across the business as a whole, a concept which Cisco has dubbed ‘Connected Real Estate’.

With all this, technology clearly remains only part of the answer to the issue of data centre power consumption. As indicated above, there can be challenges in identifying whose responsibility it is to deal with energy supply in the first place.

Organisations need to take a holistic view of the problem. However, it is a fact that technology can now have a significant impact on power consumption and it makes sense to start assessing developments in this field now.

Currently the power consumption of data centres is not regulated, but with climate change moving inexorably up the political agenda worldwide this is unlikely to remain the case for long.

And there are other pressing reasons to evolve to more environmentally-friendly operations as soon as possible, including the growing likelihood of outages as power and cooling systems come under increased stress.

Specifically regarding the network components of the data centre, there are a number of steps you can take now to reduce power consumption. They are:
  • Consolidate networks – fewer networks equals less cost and a reduced storage power draw.
  • Avoid gateways and consolidate functions – specialized appliances are not power efficient due to redundant internal cooling, switching and power conversion elements.
  • Bring in virtualisation – one network or network element per customer is inefficient in terms of power and space, so consider technologies such as Multiprotocol Label Switching to enable future virtualisation.
  • View power requirements holistically and prioritise efforts based upon reducing overall power consumption.
The need to save energy for the sake of the planet is now well established. Within data centres, the need to save energy is no less critical, not just for the sake of the environment but in order to ensure the enterprise’s viability, too. Now is the time to go green.

Cisco is exhibiting at Storage Expo 2008 the UK’s definitive event for data storage, information and content management. Now in its 8th year, the show features a comprehensive FREE education programme and over 100 exhibitors at the National Hall, Olympia, London from 15 - 16 October 2008 www.storage-expo.com

Source: StoragePR

How to multiply the uses of your Business Continuity Infrastructure

Ian Master, sales and marketing director UK, Double-Take Software

Introduction
Business continuity (BC) infrastructure is typically thought of as a means to get data offsite. IT managers don’t necessarily realise that the BC infrastructure they are building can provide much more. A well thought through deployment can provide the ability to move information technology systems anytime, anywhere, for whatever purpose, without interfering with ongoing operations. Whether recovering from a disaster, simplifying routine server maintenance or even migrating whole data centres, a good deployment can provide a dynamic infrastructure that ensures effective business continuity planning as well as making the data centre manager’s life a whole lot easier.

A day in the life of a data centre manager
To state the obvious, data centre managers don’t spend their working lives exclusively worrying about large-scale disasters. Their day-to-day experience is more likely to include managing smaller business continuity and infrastructure issues. How can they maintain full service when they know a shared disk is starting to malfunction and needs to be swapped out? How can they replace a physical server because it is no longer performing optimally? What happens when entire clusters of servers need to be moved because the nodes lack disk or processing ability? What happens if the entire data centre needs to be moved to a different location?

Building a dynamic infrastructure
Data replication solutions, which copy data in real time from one server to another to create a complete duplicate on a live backup system, provide very high levels of data protection and availability. However, data replication is just that; it only protects an application’s data, not the application itself. In the event of a disaster, system administrators will have to hope that all of the application backups are valid and can be restored, because if not, they’ll have no choice but to find the installation disks and sometimes even that isn’t an option. To overcome this, the more sophisticated data replication solutions provide byte-level replication for application system states so that administrators have the ability to provision an entire server at the touch of a button and keep business critical applications up and running.
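
As a rough illustration of the replication concept (a toy, file-level sketch in Python, not Double-Take's byte-level implementation, which also captures the application system state), the loop below polls a source directory and copies any new or changed files to a standby location:

    import hashlib
    import os
    import shutil
    import time

    def file_digest(path):
        with open(path, "rb") as f:
            return hashlib.md5(f.read()).hexdigest()

    def replicate(source_dir, standby_dir, interval_seconds=5):
        """Poll the source tree and mirror new or changed files to the standby copy."""
        seen = {}
        while True:
            for root, _, files in os.walk(source_dir):
                for name in files:
                    src = os.path.join(root, name)
                    digest = file_digest(src)
                    if seen.get(src) != digest:
                        rel = os.path.relpath(src, source_dir)
                        dst = os.path.join(standby_dir, rel)
                        os.makedirs(os.path.dirname(dst), exist_ok=True)
                        shutil.copy2(src, dst)     # push the change to the standby copy
                        seen[src] = digest
            time.sleep(interval_seconds)

Real products replicate changed bytes continuously rather than whole files, but the loop captures the essential pattern: detect a change, mirror it to the live backup, repeat.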

Another tool used to reduce hardware costs and manage infrastructure more flexibly is virtualisation. Virtualisation provides data centre managers with the ability to move servers “dynamically” to a different virtual machine where more processing power or disk space may be available. However, the process of moving virtual machines is limited to the virtual infrastructure and sometimes only to the same physical server where the technology is hosted. By combining data replication that moves data and the application system state, virtualisation, WAN accelerators, operational monitoring and security tools, you now have the ability to protect and dynamically manage your entire data centre, regardless of the situation.

Dynamic infrastructure in operation
Using host-based replication allows you to replicate data and operating systems, independent of hardware and in real time, while systems are still in production. Administrators are able to replicate from physical to a virtual environment or vice versa, physical-to-physical or virtual-to-virtual, all while the end users are accessing the data.
Data centre managers are using dynamic infrastructures to move entire data centres without end users even being aware, easing operational management as well as meeting the most stringent business continuity requirements. If a server is in need of maintenance, the data centre manager isn’t committed to a 2.00 am Sunday morning change control window just to tweak a configuration setting or perform a reboot. The operation of that server is dynamically moved to another without interruption, allowing the technician to take as long as needed to perform maintenance or repair that server. Maybe the part from the vendor won’t be available for 10 days? Operations continue uninterrupted and the maintenance window is open to whenever it is convenient.

Conclusion
If you have the ability to move systems anywhere, anytime, for whatever reason, without interruption to users, you have just exceeded a rather large piece of your company’s business continuity requirements and, more importantly, maximised data centre uptime. Dynamic infrastructures are providing the ability to restore business operations after a disaster not only to a functional level but also to the level of service that your end users expect, as well as providing the ability to seamlessly manage data centre operations.

Source: StoragePR

Angelina Jolie Guest Stars in Malware Scheme

Spammers use sensationalized headlines to lure unsuspecting computer users

BitDefender researchers have identified a new wave of spam messages that use fake events related to actor Angelina Jolie in order to trick users into downloading and installing Trojan malware onto their computers.

This new campaign of spreading malware is mostly carried out via spam messages built around alleged adult video footage of the movie star. In order to watch the video, users have to download a binary file, video-nude-anjelina.avi.exe, which is infected with Trojan.Agent.AGGZ.
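
The "video-nude-anjelina.avi.exe" name relies on the familiar double-extension trick: an executable masquerading behind a media extension. As a simple illustration (not BitDefender's detection logic; the extension lists are assumptions), a mail filter could flag such attachments like this:

    import os

    EXECUTABLE = {".exe", ".scr", ".com", ".bat", ".pif"}
    DISGUISE = {".avi", ".mpg", ".wmv", ".jpg", ".doc", ".pdf"}

    def looks_disguised(filename):
        """True if an executable hides behind a media or document extension."""
        stem, last = os.path.splitext(filename.lower())
        _, second = os.path.splitext(stem)
        return last in EXECUTABLE and second in DISGUISE

    print(looks_disguised("video-nude-anjelina.avi.exe"))   # True - flag this attachment
    print(looks_disguised("holiday-photos.jpg"))            # False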

The spam message comprises an explicit image of Angelina Jolie along with text claiming that the mail has been sent as part of the MSN Featured Offers program. The text plays a double role: it tries to trick the user into thinking this is a legitimate news message, and it helps prevent spam filters from labelling the entire mail as spam.

“The spam wave is part of a larger category of unsolicited mail messages that rely on social engineering techniques in order to lure unwary users into installing Trojans,” said Vlad Valceanu, Head of Antispam Research. “This type of attack seems to be extremely successful, as the number of messages has quickly escalated over the last couple of months. In order to achieve their goals, spammers usually rely on international celebrities and their pictures, along with catchy, yet fake news leads.”

This is not the only incident involving Angelina Jolie. Recently, the actor gave birth to two children, and spammers took advantage of the event to infect more computers. The spam campaign that followed wrongly announced that Jolie had given birth to no fewer than five children, and even offered users a link to a website allegedly hosting a short video of the event. The announcement, combined with Angelina Jolie’s fame, was meant to exploit users’ hunger for sensational events.

Once on the page, users were shown an image impersonating a flash video player. When the user landed on the compromised webpage, the download started immediately, without any user intervention (a procedure also referred to as a drive-by download). The binary file was infected with Trojan.Downloader.Exchanger.Gen.1, a piece of malware that has been widely used in another spam campaign promoting an alleged antivirus utility called Antivirus XP 2008.

Although the approach is relatively new, the underlying technique has been widely used in the past. This campaign mostly targets computer users who are not educated in computer security and are unaware of the free online scanners offered by major security providers.


The spam message directs the user to a legitimate webpage whose index page has been duplicated to facilitate the attack. For instance, while the normal home page is index.php, the compromised URL would always end in index1.php. This secondary index page is neatly crafted using the Windows Vista look-and-feel (the Aero wallpaper and icon buttons). The professional look contributes greatly to gaining users’ confidence, but there are a few details that should tip off the visitor about the scam.

For instance, the virus chart on the upper right of the screen displays the most aggressive viruses that were active during May, meaning the page has not been updated. In addition, the other text elements are written in plain English with ambiguous explanations (such as “Trojan attacks damage more than $3 million/hour”). The spam message itself is written in poor grammar, with multiple obfuscations to trick spam filters.

“This spam wave built on an older recipe, making heavy use of text obfuscation in order to prevent spam filters from identifying and marking the message as junk,” said Vlad Valceanu. “The message itself should be enough of a warning for the user that the advertised piece of software is not legitimate and might come from ‘unorthodox’ sources. More than that, users should pay extra attention to webpages that automatically try to download a file on the computer.”

Once installed on the computer, the rogue antivirus utility would stealthily start installing other high-risk threats such as adware, spyware or other malware from multiple servers or sources on the internet. Also, when run, the antivirus would report that it had found multiple non-existent security threats on the host computer. This is a common tactic for rogue security applications, as they try to mislead unaware computer users into paying for the “full” version of a bogus utility.

Source: BitDefender News Center

P.S. Such headlines are not limited to Angelina Jolie or other celebrities; similar malware also arrives in emails with other subject lines, such as purported news and weather reports, news of military operations by US and allied forces in Iraq or Afghanistan, or claims that an attack on Iran has started.

The advice can only be as always... DO NOT OPEN such emails.

BitDefender Protects Against Zero-Day Microsoft Word Bug

The BitDefender Labs released a signature update to protect clients against the latest unpatched Word exploit.

The vulnerability, which affects Word 2002 SP3, could be exploited by an attacker to "gain the same user rights as the local user", according to Microsoft. The exploit is already being used in the wild.

"The samples we retrieved were already being detected as malicious by BitDefender software, as the exploit was being used to drop a malicious executable file that we had already signed. As of this morning, we've also added detection for the exploit itself, blocking this avenue of attack against our clients once and for all" explained Senior BitDefender AV Researcher Attila Balazs.

The dropped component is a backdoor detected by BitDefender as Backdoor.PoisonIvy.CV. Once installed, PoisonIvy grants complete control over the affected computer to an attacker.

Malicious files containing the exploit are detected by BitDefender as Exploit.Word.MS-953635.A. The vulnerability itself is detailed in Microsoft Security Advisory 953635. An analysis of the PoisonIvy backdoor variant used in the attacks is ongoing and will be published on the BitDefender website as soon as possible.

Source: BitDefender News Releases

Comment: Now, that's why I am glad to have BitDefender as my protection software. And while it takes a little longer - a lot longer, in fact - to do a full system scan compared with AVG, which was previously run here, the thoroughness of the scan seems superior, and now we can see that the roughly two-hourly update frequency (automatic, even in the FREE version) has a good reason behind it.

GROUP Technologies: How to implement legally compliant e-mail management

15 and 16 October will see the eagerly awaited Storage Expo 2008 – the UK’s leading storage and information management event – held at London’s Olympia Exhibition Centre. For the first time, GROUP Technologies UK will be represented with a booth of its own. All aspects of legally secure e-mail management will once again be under the spotlight.

“We are delighted to have the opportunity of presenting our entire legally secure e-mail management portfolio to a wide audience,” says Andreas Richter, International Product Marketing Manager at GROUP Technologies. “Our expert visitors are set to gain a comprehensive insight into the world of efficient e-mail management via the leading mail platforms.”

Visitors to the fair will have the chance to profit from the expertise of GROUP Technologies’ e-mail specialists at GROUP booth 515, where they can find out about current requirements and opportunities in the field of corporate e-mail systems. The focus of attention will be on the central control of e-mail messaging as well as the intelligent linking of e-mail security and e-mail archiving.

Other topics of interest will include server offloading and the tamper-proof storage of business-critical e-mails. This is an area in which GROUP’s iQ.Suite provides the perfect solution to increasingly stringent corporate confidentiality and storage record requirements.

Source: Storage Expo Press Center

Know Your Cybercrime Enemy – Finjan Unveils the Latest Cybercrime Organizational Structures and Modus Operandi

In its Q2 2008 Web Security Trends Report, Finjan outlines the latest developments in the cybercrime commercialization economy

Farnborough, United Kingdom, July 2008 - Finjan Inc., a leader in secure web gateway products, announced on July 15, 2008 the latest findings by its Malicious Code Research Center (MCRC). In its latest trends report, covering Q2 2008, the MCRC identifies and analyzes the latest Crimeware business operations and provides a first-of-its-kind insider’s look at the organizational structure of cybercrime organizations. All of this makes cybercrime more successful and profitable than ever.

The report includes real documented discussions conducted by Finjan’s researchers with resellers of stolen data and their “bosses”, confirming Finjan’s analysis of the current state of the cybercrime economy.

“Over the course of the last 18 months we have been watching the profit-driven Cybercrime market maturing rapidly. It has evolved into a booming business, operating in a major shadow economy with an organizational structure that closely mimics the real business world. This makes businesses today even more vulnerable to cybercrime attacks, especially considering the maturity of the cybercrime market and its well-structured cybercrime organizations,” said Yuval Ben-Itzhak, Finjan’s CTO. “Recent industry reports containing record numbers of malware infections during the first half of 2008 alone underline again the huge impact of cybercrime on today’s businesses.”

The report explores the trend of loosely organized clusters of hackers trading stolen data online being replaced by hierarchical cybercrime organizations. These organizations deploy sophisticated pricing models, Crimeware business models refined for optimal operation, Crimeware drop zones, and campaigns for optimal distribution of the Crimeware.

These cybercrime organizations consist of strict hierarchies, in which each cybercriminal is rewarded according to his position and task.
The “boss” in the cybercrime organization operates as a business entrepreneur and does not commit the cybercrimes himself. Directly under him is the “underboss”, acting as the second in command and managing the operation. This individual provides the Trojans for attacks and manages the Command and Control (C&C) of those Trojans. “Campaign managers” reporting to the underboss lead their own attack campaigns. They use their own “affiliation networks” as distribution channels to perform the attacks and steal the data. The stolen data is sold by “resellers”, who are not involved in the Crimeware attacks themselves.

“In our report we provide a closer look at today’s cybercrime enemy, indicating how it organizes, operates and benefits from stolen data. We unveil the business cycle of data collecting and trading by today’s cybercriminals,” said Yuval Ben-Itzhak, CTO of Finjan. “We also show examples of the highly effective tools and methods that are being used to steal data from enterprises around the world.”

As a preventative measure, businesses should look closely at their security practices to make sure they are protected. A layered security approach is a highly effective way of handling these latest threats, and applying innovative security solutions designed to detect and handle them, such as real-time content inspection, is a key factor in being adequately protected.

Malicious Code Research Center (MCRC) is the leading research department at Finjan, dedicated to the research and detection of security vulnerabilities in Internet applications, as well as other popular programs. MCRC’s goal is to stay steps ahead of hackers attempting to exploit open platforms and technologies to develop malicious code such as Spyware, Trojans, Phishing attacks, worms and viruses. MCRC shares its research efforts with many of the world’s leading software vendors to help patch their security holes. MCRC is a driving force behind the development of next generation security technologies used in Finjan’s proactive web security solutions. For more information, visit our MCRC subsite.

Finjan is a global provider of web security solutions for the enterprise market. Our real-time, appliance-based web security solutions deliver the most effective shield against web-borne threats, freeing enterprises to harness the web for maximum commercial results. Finjan’s real-time web security solutions utilize patented behavior-based technology to repel all types of threats arriving via the web, such as spyware, phishing, Trojans and obfuscated malicious code, securing businesses against unknown and emerging threats, as well as known malware. Finjan's security solutions have received industry awards and recognition from leading analyst houses and publications, including Gartner, IDC, Butler Group, SC Magazine, CRN, ITPro, PCPro, ITWeek, Network Computing, and Information Security. With Finjan’s award-winning and widely used solutions, businesses can focus on implementing web strategies to realize their full organizational and commercial potential. For more information about Finjan, please visit: www.finjan.com.

Source: Eskenzi PR

ISAF welcomes strengthened UK government IT security awareness

London, 17th July 2008 - Dr David King, ISSA-UK and Chair of the Information Security Awareness Forum (ISAF), said he is delighted that the requirement to provide information security awareness has been raised to the top of the agenda amongst UK government departments and agencies.

"The move is welcomed by the ISAF and will, I have no doubt, also be greeted in a positive manner by other IT security bodies across the UK," said Dr King, who added that the sea change in the government's approach to information security is the result of conclusions of a number of relevant reports in recent weeks.

"The new security ethos permeating through the various strata of the UK's government and its agencies will, we hope, encourage all managers in the public sector to take a responsible attitude towards looking after their computer data," he said.

According to Dr King, there is now a greater need for education and guidance on information security matters for existing and new employees of the government and its agencies.

"This need is about to become pressing as the government and its agencies gear up to take on the several tens of thousands of newly-qualified graduates that have decided to enter the public sector this coming September," he said.

Most of these new employees, he added, will have grown up with computers, both at home and at school, but many will lack a basic understanding of data security issues.

"It's down to their new public sector employers to educate them on this front and they can only do this if the relevant managers get behind the security policies that already exist in many government departments and agencies, and pro-actively apply them," he said.

"Here at the ISAF, we believe that government departments and their agencies should develop positive strategies to raise awareness and understanding of information security principles, taking into account the DPA, HRA, RIPA, Computer Misuse Act, Police & Criminal Justice Act, Defamation Act, Fraud Act, Obscene Publications Act. They should also prepare for the governance provisions of the Companies Act 2006, which is due to become law later this year," he added.

Dr King went on to say that, as individuals as well as employees, ISAF members and associates, as well as anyone involved in business management, need to be more aware of the issues that affect us all in our day-to-day handling of personal data.

"This is especially true when it comes to developing the resources required to provide information security guidance to all members of staff, covering issues such as incident reporting, data handling and taking a holistic approach to the topic," he said.

The ISAF had already seen the need to do this at director level with the production of its Directors’ Guides on Information Assurance, launched in April 2008 and sponsored jointly by IAAC, ISAF and BT. The ICO has warmly received and reviewed these and believes that they should be on the desk of every single director of every single company or organisation in the land. When asked by the ISAF, as it seeks to use the Directors’ Guide to spread the message that information risks must be understood and effectively managed, Richard Thomas, the Information Commissioner, replied, “Every Director should have one!” and continued, “We will be saying more about board-level accountability in the Thomas/Walport Report on Data Sharing due out shortly.”

Founding members of the forum included the ISSA, (ISC)2, BCS, Infosecurity Europe, IISP, ISACA, EURIM, Get Safe Online, NeCPC and Security Awareness SIG.

“The Security Awareness SIG is looking forward to assisting the public sector by sharing the knowledge and skills learned by corporations in the private sector. Our members have been tackling the challenging issues surrounding data protection for many years, and there is a wealth of good practice and experience that will save the painful reinvention of many wheels,” said Martin Smith MBE BSc FSyI, Chairman and Founder of the Security Awareness SIG.

“The CMA is proud to be a founding ISAF member and, though our organisation is not an obvious one for information security, we have long recognised that security issues arise from our increasingly interconnected and converging world and that top-down business involvement is a key element in improving the security posture of any organisation (or country),” said Peter Wenham CISSP MICAF CLAS, Director, CMA.

Nigel Jones, Director of the Cyber Security Knowledge Transfer Network, commented: “It is essential that the education and awareness of information security becomes a top priority for UK government IT users. Meeting today’s information security challenges relies on addressing three key issues – how to make our technology more secure; how to help business understand the positive economic impact of reducing e-crime; and how to change the way society thinks about the value and vulnerability of its sensitive information. This announcement offers a positive outlook on all three. Information security may be a global issue but it must be tackled locally first. The decision to increase government focus on Information security awareness demonstrates that the UK will lead from the centre on cyber crime and security.”


Source: Eskenzi PR

RingCube and MXI Security™ Partner to Deliver a Virtual Desktop on the hardware-encrypted Stealth MXP™ Biometric USB Drive

RingCube Managed Virtual Workspaces to be Delivered on MXI Security Biometric USB Drives

NEWS RELEASE

Mountain View and Santa Ana, Calif., July 15, 2008 - RingCube Technologies, a leading provider of the managed virtual workspace, and MXI Security, the leader in superior managed portable security solutions, have partnered to deliver MojoPac Enterprise on MXI Security’s Stealth MXP Biometric USB drives. The integrated mobile virtual desktop solution enables users to securely carry their entire desktop environment, including applications, files and settings, on a portable USB drive and access it from any PC – at work, at home, at a customer site or on a public computer – online or offline. MXI Security USB portable security devices ensure that RingCube’s high performance virtual workspaces are protected by the highest 2-factor biometric user authentication and the strongest AES 256-bit hardware encryption to prevent data leakage and unauthorized access.

“Workers who travel from office to office or between the office and home like the convenience of having their desktop available without having to carry a notebook from point to point. Virtual desktops that run off a USB flash or hard drive give users a portable personality with all of their applications, data and preferences delivered in a familiar work environment that is available offline on any PC,” said Leslie Fiering, Research Vice President, Gartner Group. “It is critical that organizations secure the virtual desktop environment with encryption, strong authentication and host security checks to verify that the host is secure and trusted before the user can log in.”

RingCube’s award-winning MojoPac virtualization technology separates a user’s desktop environment, including applications, data, settings and system resources, from the operating system and encapsulates it into a secure container. Users can run their managed virtual workspace on unmanaged, non-corporate PCs – at home, at a client site or in a hotel business center. By plugging their MojoPac-enabled USB portable security device into a host PC, mobile professionals can transform any Windows PC into their own familiar and personalized workspace to access their files, applications, settings and entire desktop, as if they were on their own PC. Before allowing the user to launch their workspace, MojoPac verifies the security of the system based on administrator-defined policies. For administrators, RingCube provides an Administration Server that makes creating, deploying and managing virtual desktop environments easy and less costly.

“We are honored to have RingCube delivering their innovative virtualization technology on MXI Security portable security devices,” said Lawrence Reusing, CEO at MXI Security. “The combination of RingCube’s high-performance virtual workspaces and MXI Security’s high-capacity biometric USB drives with the ACCESS Enterprise device lifecycle management solution make it possible to take corporate desktops anywhere with the highest level of security.”

Enterprise companies can provide drives to employees, contractors or customers that are pre-provisioned with the user’s workspace, or let them self-provision their workspace from RingCube’s web-based Client Portal. To launch the MojoPac workspace, users simply plug their MXI Security USB portable security device into any PC and swipe their finger across the biometric reader, and they are then automatically logged in to their MojoPac workspace. Once logged in, users have access to their personalized applications, files, settings and desktop environment.

“Today’s workers are demanding ever-increasing levels of mobility, putting a serious strain on enterprises’ ability to manage these users and keep their data and systems secure,” said Ron DiBiase, RingCube’s VP of Business Development. “The partnership between RingCube and MXI Security enables enterprise customers to give their users portable workspaces that can be carried in the palm of their hand, while providing the security of biometrics and hardware encryption.”

MXI Security USB portable security devices provide secure portable storage that includes biometric authentication and AES 256-bit hardware encryption for MojoPac Enterprise workspaces. MXI Security Biometric USB portable security devices are available immediately in either a USB Flash Drive or USB Hard Drive. Stealth MXP™ is a fully manageable FIPS 140-2 Level 2 validated USB device that protects up to 8 gigabytes of data with AES 256-bit hardware encryption and strong authentication (biometric, password or both). Stealth MXP delivers strong authentication for network logins/SSO, remote access and full disk encryption with total portability.

Outbacker MXP™ protects up to 120 gigabytes of data with AES 256-bit hardware encryption with strong user authentication and supports digital identity functions with complete portability and simplicity. Outbacker MXP is a fully manageable hard disk drive device ideally suited for organizations that need high capacities for portable data or portable desktop and OS environments.

RingCube is the leading provider of the managed virtual workspace. The company’s award-winning software platform, MojoPac, enables enterprise and consumer users to securely access their complete personal computing experience from any Windows PC around the world. The company is venture-backed by New Enterprise Associates (NEA) and Mohr Davidow Ventures (MDV) and is based in Santa Clara, Calif.

MXI Security leads the way in providing superior managed portable security solutions designed to meet the highest security and privacy standards of even the most demanding customers.

MXI Security solutions combine the power of strong user authentication, digital identity and data encryption to protect access to sensitive information and systems.

Easy to manage and transparent to the end user, MXI Security solutions enable organizations to satisfy multiple security needs with a single device, facilitating greater mobility without compromising security.

Source: OnPR for MXI Security

Who exactly does own the documents you store online?

Storing documents, etc. in the “cloud”... My first and immediate advice... don't

by Michael Smith (Veshengro)

With online office applications improving in quality all the time, they are quickly becoming the tools of choice for web workers.

Between the ability to access your documents from anywhere via a web-enabled PC or laptop, the easy sharing and the automatic backups, it is no surprise that more and more people are using these services.

But in this rush to go online, we sometimes fail to understand exactly what we are getting for free there in the cloud. If you use these services for business purposes, it is worth taking a look at their Terms of Service.

Let us therefore look at the terms for three of the major alternatives in the online document space – Google Docs, Zoho, and Adobe’s new Acrobat.com service. What I found might give you some pause for thought – especially if you tend towards the cautious and/or paranoid end of the business user spectrum.

In order to find the terms for Google Docs, you need to first go to the “Help Center”, and then follow three separate links to the privacy policy, terms of service and additional terms. Here are a few excerpts – and maybe here is the right place to insert the disclaimer that says, “I am not a lawyer”. For full details, you would do best to read the originals themselves and – ideally – discuss them with your own attorney or paralegal.

As far as Google is concerned while you retain copyright, “you give Google a worldwide, royalty-free, and non-exclusive license to reproduce, adapt, modify, translate, publish, publicly perform, publicly display and distribute any Content which you submit, post or display on or through the Service for the sole purpose of enabling Google to provide you with the Service in accordance with its Privacy Policy.”

Oh lovely... so they can use anything you and I store online in any way that they like. I think so NOT!

Also, Google can discontinue the service at any time with no notice, and you may lose your files with no notice.

Furthermore, Google retains the right to filter or remove content, can put ads wherever they want, with no notice to you.

Also, you may like to note that deleted documents may remain on Google’s servers for up to three weeks.

Zoho’s Terms of Service and privacy policy are linked directly from their home page. If you read them, you’ll find:

“Unless specifically permitted by you, your use of the Services does not grant AdventNet the license to use, reproduce, adapt, modify, publish or distribute the content created by you or stored in your Account for AdventNet’s commercial, marketing or any similar purpose.”

While this already sounds better than Google’s terms, there are a couple more points to note, namely that Zoho can block or remove content that infringes copyright or violates laws.

Zoho can also terminate your account at any time for any reason, and files may remain on their servers after deletion for an unspecified length of time.

So, are you still considering storing your documents in the cloud?

Acrobat.com, like Zoho, has its services agreement and privacy policy linked from its home page. On the minor annoyances side, the terms are only available as a PDF, not online as with the other services, so you have to download them first in order to read them, or have Adobe Reader open them in the browser.

Here are some ideas as to what the TOS and other policies contain:

Adobe can discontinue providing the service at any time, with no notice.

According to the terms you retain ownership of your files, but “By maintaining your Content on the Services, you grant to Adobe a non-exclusive, worldwide, perpetual, royalty-free and fully paid license under all intellectual property rights to copy, distribute, transmit, publicly display, publicly perform, transmit, and reformat your Content solely to deliver the Services to you.”

Sorry, do I understand this correctly: two out of the three services reserve for themselves a licence over the documents I store with them, so that while I “retain the copyright” they can still copy, distribute and display the material under the terms they have written? Duh?

And, Adobe may read your content for legal or technical reasons.

So what’s it all mean? Reading over all three agreements, it’s very clear that Google and Adobe have more lawyers hanging around than does AdventNet (Zoho’s corporate name) – and, like lawyers everywhere, they’ve gotten their fingers into the pie. Of the three services, Google has perhaps the most intrusive agreement, thanks to their explicitly reserving the right to serve ads anywhere. As far as ownership goes, you should be OK with any of these services; although Google and Adobe claim licenses, the full terms make it clear that these licenses are limited to actually providing you the service you’re using.

One thing that’s clearly missing is any sort of backup guarantee. While you may feel more secure storing your documents on Google’s or Zoho’s or Adobe’s servers than your own, that security is not something that you’re promised. Any of the three can lose your documents or terminate your ability to get to them at any time for pretty much any reason, and you’re out of luck. So if you do put important things online - back them up somewhere else.

Therefore, don't rely on this kind of storage. Do your own backup and store your data offline on internal and external hard drives, CDs, etc.

As I have previously said in my article "Cloud Computing – Methinks not!", you may, if the services fail, find yourself up the creek without a paddle, and I certainly would not ever rely on this.

Also, none of these services guarantee your privacy or the integrity of your documents. While a great number of Web 2.0 services are great for all of us to use – and I love the iScrybe service and Google Calendar, and I also have a Google Mail account – I will not rely on Web 2.0 for storage of my data of any kind.

While what I am saying here may upset some people, and also some of the providers, the fact remains that such services are – probably – great when it comes to document sharing and online collaboration, but I would most certainly advise against more or less permanent storage in the cloud.

I know that I am still old fashioned and, given the option, I would probably still make tape backups. When it comes to documents and suchlike, they are best kept close to you, especially if you value the information and do not necessarily want the entire world to know it; at least not before you choose to bring the information into the public domain.

So, in summing up, yet again my advice: by all means use online, “in the cloud” services for documents that you want to be able to access remotely or that you want to share with others for purposes of collaboration and such, but do not keep your data there as a more or less permanent storage facility. Those services are not ideal for that.

© M Smith (Veshengro), July 2008

Zero day flaw in WORD allows exploits by Trojan

by Michael Smith (Veshengro)

Microsoft warned on July 10, 2008, that an unpatched security vulnerability in WORD has become the subject of targeted attacks.

Yet another security flaw in Microsoft products? You don't say... The more I see of Microsoft the more I wonder what kind of incompetence reigns there at Redmond.

The flaw – which, supposedly, is restricted to Microsoft Office WORD 2002 Service Pack 3 (one may wonder whether it really affects only that version) – creates a mechanism for hackers to inject hostile code onto vulnerable systems. Redmond has published workarounds as a stop-gap measure while its researchers investigate the flaw in greater depth.

In the meantime, Microsoft is keen to downplay alarm. "At this time, we are aware of limited, targeted attacks attempting to use the reported vulnerability, but we will continue to track this issue," a post on its security response blog explains.

The vulnerability has appeared in a number of malware samples. A widening number of anti-virus firms have issued signature updates to defend against the threat.

Symantec, acting on samples sent to it by handlers at the SANS Institute's Internet Storm Centre, was the first to publish an advisory.

Maybe a firewall that can prevent the injection of hostile code would be advisable here as well, such as PC Tools' free Firewall, which I recently tested after my favorite Zonealarm was disabled by the nice guys from Redmond with Microsoft Security Update for Windows KB951748. It has an advanced facility that can prevent the injection of code. It can be annoying when this is enabled, though, as a little window will pop up every time you launch a program, until it has learned which programs are allowed to do what.

The timing of the arrival of the exploit meant Microsoft did not have enough time to respond before its regular “Patch Tuesday” update; this factor is probably no coincidence. The full details of the flaw are still under investigation, and it can be safely assumed that they will probably be withheld from the public, and even from the industry, until a fix is available. It is also not at all clear who the attack is targeting. Historically, however, unpatched WORD exploits have been a particular favorite of Chinese hackers.

Seeing how clever Redmond was recently with Microsoft Security Update for Windows KB951748, which disabled most if not all Zonealarm installations – and so far we have had no response from them about that foul-up – why should we trust them when they are so silent?

Many people seem to believe that the disabling of Zonealarm in the above mentioned patch was no coincidence but was in fact one of the aims.

Yet again, I cannot and will not comment further on such claims, as they cannot, so far, be substantiated and proven. Let the reader, however, beware.

The best advice here, I am sure, can only be, yet again, to go Open Source and to use an alternative to Microsoft Office. There are a number of them available and most are at least as good as MS Office.

As I, personally, am moving – work-wise – between Linux and Windows all the time, I nowadays use only Open Office 2.0 for all the work that would generally have been the domain of MS Office. The exceptions are when WORD needs to be used to work with certain templates, such as Avery Dennison's, as they still have not created Open Office interoperability.

I am not saying that there may not be vulnerabilities in Open Office or the other Open Source products. The fact remains, though, that most hackers do not seem even to attempt to target such open source software and operating systems. Or, more precisely, in the case of operating systems such as Ubuntu Linux, they try but do not get anywhere.

© M Smith (Veshengro), July 2008

To patch or not to patch – that is the question

by Michael Smith (Veshengro)

After the recent episode of problems with a “security” update by Microsoft for Windows XP that disabled not only the Zone Alarm firewall on my system but, basically, all Zone Alarm firewalls left, right and center – that is to say, the firewalls of thousands of other users worldwide – I am beginning to ponder, yet again, the question of whether to patch or not to patch.

The question is whether downloading patches, and other such highly touted important updates, from Microsoft for Windows is such a great idea. I have had problems in that department before, but we shall touch on the advice given to me by computer personnel later in this article.

Obviously, initially – and I assume I was not alone there – I thought that the Zone Alarm program had gone bonkers for some reason or other, so I uninstalled the older version, that is to say 6.5, downloaded the latest version and installed it, but, guess what? Well, you guessed it... zilch. The problem persisted. No access to anything on the Web, neither my emails via the email client nor web pages. Nothing was loading.

As I wanted and needed Internet access, which I just could not get, I disabled Zone Alarm, accessed the Web and downloaded PC Tools' Firewall, which, though nice and rather powerful, is just not Zone Alarm (sorry PC Tools... nothing wrong with the program... it is just me having used Zone Alarm for so many years). I am sure you all know what it is like when you have become used to, and used to trusting, something for many years.

Had it not been for the fact that I got PC Tools' firewall, I would not have been able to get online and finally find out that I was not the only one affected and that Zone Alarm was advising that there was a problem for all Zone Alarm users with the Microsoft Security Update for Windows KB951748. Cheers Microsoft! I followed the instructions provided on Zone Alarm's website, uninstalled that patch and put Zone Alarm back on, and all is well.

But back to the question of “to patch or not to patch”...

Years ago some geeks told me never to install any patches and so-called “security updates” from Microsoft, as some of them were doing more harm than good, and I must say I had a couple of occasions in those days when that did happen and when, according to those who then sorted out the PC for me (before I learned how to deal with these things myself), the reason for the malfunctions was those “security updates” from Microsoft.

After the above events I may just about go back to those days when I no longer installed any “security updates” and “patches” from Microsoft, and do as I did then, while ensuring, obviously, that all possible security software is in place and continually updated.

I must say that, after some bad experiences with patches, such as disabled Open Source software on my PC and disabled add-ons for Firefox, I am beginning to think that there is something in what some of the geeks used to say, and still say, about Microsoft's patches and such; namely that some of those pieces of software are there to check on one's system and to disable software in use that Microsoft does not agree with or approve of. I have no proof of that and therefore make no claims as to whether or not what others have said and claimed is the truth.

All I am beginning to wonder is whether “to patch or not to patch?”

Who would have believed that an update patch, in this case Microsoft's Security Update for Windows KB951748, direct from a supposedly reputable source, would disable a firewall on PCs – and only, so it would appear, the firewalls of one particular company?

Let me hear your thoughts, theories and even conspiracies on this.

And, the question remains, “to patch or not to patch?”

As I have indicated already, I am of a mind right now to turn off automatic updates, as I had done with previous Windows operating systems, leave the operating system as it is, and just run the best third-party protection software and keep those programs updated, obviously, on a more-or-less daily basis.

The “game” with Microsoft updates “killing off” Zone Alarm cost me hours of productive work, and while I managed to get back online, protected, with PC Tools' firewall, which, as said, is quite neat and has lots of features, I was missing my Zone Alarm, with which I am familiar and have had good experiences. This was time that I could have used much better in researching and writing articles or doing other productive work. Messing around with a computer when you are not really certain what has happened in the first place does not rate as very good entertainment in my book.

I would love to hear Microsoft's response and excuse with regard to this: how, why and wherefore it happened, what guarantees they propose to give, and what safeguards they are putting in place so that this is not going to happen again with other patches. Mind you, I doubt that we will get a real response from Microsoft at all. Corporations such as that one, and especially that one, think that they are different and do not have to do things like that.

So, “to patch or not to patch?”

I think my answer – for the time being at least – you all can guess. What say the rest of you?

© M Smith (Veshengro), July 2008

Cloud Computing – Methinks not!

by Michael Smith (Veshengro)

I know that so-called “in the cloud” computing is becoming increasingly popular, especially with the kind of PCs that have little if any proper hard disk drive.

Other people who like the idea of “in the cloud” computing are those that are constantly on the move and those that have to do lots of collaboration work on documents and such with others many miles away.

They like the idea of “in the cloud” computing as they can, generally, access their data, their documents, their bookmarks, etc. from any Web-enabled computer from anywhere in the world.

While access to one's documents and other data from any Internet-enabled computer anywhere in the world might be a lovely idea, what happens – and this is why I said “generally” a moment ago – if the online service goes down for some reason or throws an extended wobbly? Or take the problem that I am currently having with Yahoo's “My Web 2.0”, where I can only get access at times for a short while and then it will not acknowledge my sign-in for days on end.

If that happens the user is then “up the creek without a paddle”, as the saying goes, especially and even more so if there is no other virtual or, better still, physical location where the data is held. If it exists only in that particular cloud that has gone down, then “oops!”

My advice would rather be to keep only the data that you need to use when on the move on removable media, such as USB sticks – and those should, ideally, be encrypted ones – or external portable hard drives. Do not rely in any way whatsoever on “in the cloud”.

This also applies for stuff when working in fixed locations.

If you want to use one of those new micro PCs or laptops, then have external portable hard drives as storage devices. Attached peripherals, including hard disks and others such as floppy drives and even, when they arrived, CD drives, used to be the norm with early desktop computers, such as those used by the military. Everything was attached on the outside, basically. With today's technology of USB 1.1 and USB 2.0, as well as Firewire, such devices are damned fast.

I know that I am rather contrary to most people here, and I know that a lot of “web workers” love “in the cloud” computing and storage, but... I certainly advise against the “in the cloud” approach.

While I know that this does set me at odds with a lot of people, if not indeed all of them, in the Web 2.0 field, it is my belief that online data storage and document storage is not a good idea; at least not without holding duplicates, and maybe even triplicates, of the information that is put up into the cloud stored back at base.

In addition to this – that is to say, data being inaccessible if the host's server has problems – my other concerns with regard to “in the cloud” computing are: (1) what if the provider changes their rules and a free service suddenly becomes one that needs paying for; (2) what if the service gets withdrawn, as most EULAs state that changes can be made without prior notification; or (3) what if the provider simply folds?

I know that the above are worst-case scenarios but, if you do not hold the data that you have in the cloud elsewhere, you may have lost it all.

The other question that goes with “in the cloud” computing is the security and privacy of your information and documents. Many companies that provide online storage facilities, especially those that do so for free, have small print in the EULA stating that the data, the documents, the photos, the what-have-you, that you upload to store in their cloud becomes their property and that they can share it, display it, etc. Duh? Sorry, not the way I am playing. I value my privacy and that of my data.

Therefore, as far as I am concerned, there may be, in the future, some “in the cloud” computing for me, but certainly not much, and if I am going for some of those E-PCs then they will have HDDs and other stuff attached on the outside. My data stays securely where I can control it, thanks. And where I can get to it when I want to and need to, and where I am not reliant on a server that may, or may not, be working at that particular moment.

© M Smith (Veshengro), July 2008

The Boss' guide to Geek Speak

Peter Mitteregger, European Vice President CREDANT Technologies

Do you speak Geek?

Every company today relies heavily on technology to complete even the most basic of day-to-day activities. Yet this reliance comes at a price. The news is full of organisations having to put their hands up to a breach of sensitive data from one source or another – be it a deliberate attack or a victim of circumstance with a mislaid laptop. Combine this with the ICO's determination to name and shame any who do not adhere to the Data Protection Act and enforce its eight principles, and it's simple to see the financial implications of taking an ostrich’s approach.

The problem is fully comprehending the weaknesses you face and how best to address them. You've got your top man on the job, but when he presents you with his report it's full of acronyms, end points, phishing, pod slurping and other such terms that are better suited to the dialogue of an episode of Red Dwarf. Geek speak often sounds like normal English that doesn't quite make sense, because familiar words have been given a new meaning. For example, a port is no longer where a ship docks, a spool isn’t for thread and, for that matter, a thread is no longer a thin strand of cotton. Executing a program is not at all the same thing as killing it.

This article aims to decipher the jargon, converting it into real business contexts, enabling you not only to understand what is being asked for, and how much it will all cost, but to fully comprehend why it is needed. Simply put, it will give you the power to communicate with the Geeks.

So let’s start at the beginning
Let’s look at some of the everyday terms used to describe the technology we use and how it works :

Architecture : a term applied to both the process and the outcome of thinking out and specifying the overall structure, logical components, and the logical interrelationships of a computer, its operating system, a network, or other conception. Computer architecture can be divided into five fundamental components: input/output, storage, communication, control, and processing.

Client/Server Architecture : network where some computers are dedicated workstations (often referred to as clients) and some are dedicated servers; information is centralised on the server and an administrator sets policies and manages it.

LAN (Local Area Network) : network that operates within a small geographic area, usually within a building, office or department.

WAN (Wide Area Network) : geographically dispersed network of computers.

WWAN (Wireless Wide Area Network) : wireless connectivity to the Internet that allows a user with a laptop or PDA and a WWAN card to surf the Internet, check email, or connect to a Virtual Private Network (VPN) from anywhere within the regional boundaries of mobile services.

Operating System : sometimes abbreviated to OS it is the program that, after being initially loaded into the computer by a boot program, manages all the other programs in a computer. The other programs are called applications. For example, Microsoft Windows Vista is the operating system, while Microsoft Word and Adobe Acrobat are applications.

Data : information that has been translated into a binary digital form that is more convenient to move or process. It is measured in bits (the smallest unit of data in a computer) and bytes (the standard size - 8-bits).
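
As a concrete illustration – a minimal sketch in Python, purely by way of example and not taken from any product mentioned in this issue – this is how a piece of text is translated into bytes and bits once it is stored or transmitted:

    # Illustrative only: measuring a piece of text in bytes and bits
    text = "Data protection"
    raw = text.encode("utf-8")    # translate the text into binary (byte) form
    print(len(raw), "bytes")      # number of 8-bit bytes
    print(len(raw) * 8, "bits")   # total number of bits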

Mobile Device / End Points : This includes mobile phones, laptops, PDAs, memory sticks, CDs, iPods, even digital cameras. It encompasses anything portable that data can be transferred to.

Wi-Fi (wireless fidelity) : a term for certain types of wireless local area network (WLAN) that use specifications in the 802.11 family.

What we’re trying to avoid
Now that we understand what we’re talking about protecting, let’s look at some of the things that we’re trying to protect them from:

War Driving : locating and exploiting security-exposed wireless LANs. Unless adequately protected, a Wi-Fi network can be susceptible to access by unauthorised users who use the access as a free Internet connection.

Spyware : any technology that aids in gathering information about a person or organisation without their knowledge. On the Internet (where it’s sometimes called a spybot or tracking software), spyware is programming that’s put in someone's computer to secretly gather information about the user and relay it to advertisers or other interested parties.

BotNet : a number of Internet computers that, although their owners are unaware of it, have been set up to forward transmissions (including spam or viruses) to other computers on the Internet. Any such computer (often home-based) is referred to as a zombie - in effect, a computer "robot" or "bot" that serves the wishes of some master spam or virus originator.

Keylogging : records every key pressed on the computer keyboard to get at sensitive data, such as passwords.

PodSlurping : the unauthorised download of data from a computer to a small device with storage capacity, such as a Flash drive or an iPod or other MP3 player. The small size of the devices and the ease of connectivity - for example through the USB port or a wireless Bluetooth connection - makes it possible for anyone with computer access to surreptitiously download files from it.

The Best Defence is a Solid Defence
This final section looks to decode what can be used to protect against some of these threats :

Firewall : a set of related programs, located at a network gateway server, that protects the resources of a private network from users from other networks. (The term also implies the security policy that is used with the programs.) An enterprise with an intranet that allows its workers access to the wider Internet installs a firewall to prevent outsiders from accessing its own private data resources and for controlling what outside resources its own users have access to.

Authentication : the process of determining whether someone or something is, in fact, who or what it is declared to be. In private and public computer networks, authentication is done through the use of logon passwords. Knowledge of the password is assumed to guarantee that the user is authentic.
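
By way of illustration only – a minimal sketch in Python, not a description of how any particular product handles logons – a system does not need to store the password itself; it stores a random salt and a salted hash, and at logon it hashes whatever the user types and compares the result:

    import hashlib, hmac, secrets

    def enroll(password: str):
        # Store a random salt and a salted hash of the password, never the password itself
        salt = secrets.token_bytes(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return salt, digest

    def authenticate(password: str, salt: bytes, digest: bytes) -> bool:
        # Recompute the hash of the supplied password and compare in constant time
        attempt = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return hmac.compare_digest(attempt, digest)

    salt, digest = enroll("correct horse battery staple")
    print(authenticate("correct horse battery staple", salt, digest))  # True
    print(authenticate("guessed password", salt, digest))              # False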

Encryption : the conversion of data into a form, called a ciphertext, that cannot be easily understood by unauthorised people. Decryption is the process of converting encrypted data back into its original form, so it can be understood. Encryption/decryption is especially important in wireless communications. This is because wireless circuits are easier to "tap" than their hard-wired counterparts.
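
The concept can be illustrated with a short Python sketch, assuming the third-party cryptography package is installed (this shows the idea of a symmetric cipher only, not the mechanism used by any product described in this issue): the same secret key turns readable data into ciphertext and back again.

    from cryptography.fernet import Fernet  # third-party 'cryptography' package

    key = Fernet.generate_key()              # the secret key; whoever holds it can decrypt
    cipher = Fernet(key)

    plaintext = b"Payroll figures for Q3"
    ciphertext = cipher.encrypt(plaintext)   # unreadable to anyone without the key
    print(ciphertext)

    recovered = cipher.decrypt(ciphertext)   # decryption restores the original form
    print(recovered == plaintext)            # True

Without the key the ciphertext is just noise; with it, decryption is routine – which is why key management, rather than the mathematics, is usually the hard part in practice.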

Full-disk encryption (FDE) : a process that encrypts everything on the hard disk, i.e. the media - this means that when data is saved to an encrypted disk it is encoded, all without user action. This includes the operating system, swap file, any temporary files and all the free space on the drive. The swap and temporary files can often leak important confidential data to a hacker. FDE also provides support for pre-boot authentication. It's an effective technique, but encryption can double data access times, particularly when virtual memory is being heavily accessed; also, it is only effective if the machine is switched off. With FDE, only one key is used to encrypt the entire disk. Usually keys are stored on the local system, and their sole protection is typically the user's password or passphrase. And we all know how weak those can be! FDE does not protect against the most damaging breaches: those posed by an authorised user who has “legitimate” access to sensitive information and who either accidentally or maliciously chooses to misuse or leak that information.

Full Data Encryption : full disk without the risk – only encrypting the data, not the media it is saved to. Encryption can take place whether the data is on a desktop, laptop, PDA or USB stick, and it's granular, so administrators can set policies to determine which data is protected and against whom. As full data encryption uniquely protects individual users’ data without interfering with other operational processes (upgrades, patches, etc.) that need to be done, it protects against the internal threat and provides lower TCO.

IDSes (Intrusion Detection Systems) : pretty much what it says on the tin – detecting potential intrusions.

IPS (Intrusion Prevention Systems) : a pre-emptive approach to network security used to identify potential threats and respond to them swiftly. Like an intrusion detection system (IDS), an intrusion prevention system (IPS) monitors network traffic. Intrusion prevention systems also have the ability to take immediate action, based on a set of rules established by the network administrator.

VPN (Virtual Private Network) : a network that uses a public telecommunication infrastructure, such as the Internet, to provide remote offices or individual users with secure access to their organisation's network. A VPN works by using the shared public infrastructure while maintaining privacy through security procedures and tunnelling protocols such as the Layer Two Tunnelling Protocol (L2TP). In effect, the protocols, by encrypting data at the sending end and decrypting it at the receiving end, send the data through a "tunnel" that cannot be "entered" by data that is not properly encrypted. An additional level of security involves encrypting not only the data, but also the originating and receiving network addresses.

NAC (Network Access Control) : a method of bolstering the security of a proprietary network by restricting the availability of network resources to endpoint devices that comply with a defined security policy. NAC restricts the data that each particular user can access, as well as implementing anti-threat applications such as firewalls, antivirus software and spyware-detection programs.

DLP (Data Loss Prevention) : security products that focus on keeping sensitive enterprise data in.

PKI (Public Key Infrastructure) : enables users of a basically unsecure public network, such as the Internet, to securely and privately exchange data and money through the use of a public and a private cryptographic key pair that is obtained and shared through a trusted authority.
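
The public/private key idea at the heart of PKI can be sketched in a few lines of Python using the third-party cryptography package (an illustration of the key-pair principle only; the certificates and trusted-authority machinery that make up a full PKI are omitted):

    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # Each party generates a key pair: the private key stays secret,
    # the public key can be shared openly (for example via a trusted authority).
    private_key = rsa.generate_private_key(
        public_exponent=65537, key_size=2048, backend=default_backend()
    )
    public_key = private_key.public_key()

    message = b"Purchase order no. 1234"

    # Only the holder of the private key can produce this signature...
    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)
    signature = private_key.sign(message, pss, hashes.SHA256())

    # ...but anyone with the public key can check it; verify() raises an
    # exception if the message or signature has been tampered with.
    public_key.verify(signature, message, pss, hashes.SHA256())
    print("signature verified")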

The threat against laptops and mobile endpoints is real and you need to arm yourself against data loss! Don’t let a language barrier come between you and the team trying to provide you with peace of mind.

www.credant.com

Strategic Security Seminar – Tower of London

Good food, good company and great seminar

by Michael Smith (Veshengro)

The Strategic Security Seminar was held on Wednesday July 2, 2008, at the Tower of London and was organized by CM Logic in conjunction with IBM Partners; the venue was chosen with reference to securing your assets.

What the presentations showed, and what we have recently come to realize with regard to lost data on CDs and such, is that too many companies, government departments, organizations and others take far too lackadaisical an attitude to database and general computer information security and the security of (critical) data.

We do not even want to talk, in this instance, about the ordinary home or even small business users of computers, including those that have sensitive data on their PCs and small networks.

There are other important and sensitive computers that are so often also unsecured, as we have noticed recently with the loss of a number of laptops belonging to members of the military and security forces.

During the seminar it was mentioned that a survey had found that:

10% of all websites that accept payment details do not encrypt them.

35% of all companies and institutions have no control over staff use of instant messaging.

67% of all companies and institutions do nothing to prevent confidential data leaving on USB sticks and similar devices.

78% of all companies and institutions that had computers stolen did admit that those computers did not have encrypted hard drives.

84% of all companies and institutions do not scan outgoing emails for confidential data.

I am sure now everyone is really feeling secure and that their data held by others is safe – hardly.

The Strategic Security Seminar was held in the “New Armouries” of the Tower of London, and the venue and the food were brilliant.

The presentations by the speakers from the various companies were most informative, and it would have been good if more CIOs and CEOs from more companies had attended this seminar and would attend other such seminars.

There does, however, seem to be an attitude about that, while it may happen to others, it could never happen to them. False security and a false sense of security is no security at all.

We do not only find this attitude with regard to computer and data security. In many cases people and organizations who should know better also treat perimeter and site security, as well as personal security, with this “it won't ever happen to us because we have this or that in place”. Right! And? Has it actually been tested as to whether it works? I mean tested as in “properly tested”, as in “penetration tested”, and this applies equally to computers, computer systems and networks as it does to perimeter and site security.

Military sentries can get into deep and hot water for waving an officer through without checking his or her credentials. “But I know you, Sir!”, I was once told by a young PFC on guard whom I challenged, when I entered the base in civilian dress, as to why he had not asked to see my ID. “I have seen you many times in uniform.” That was the wrong answer, and the sentry was lucky that I was in a good mood.

This attitude, however, prevails everywhere, and especially in regard to access to sensitive data, with people having far more privileges than necessary to do their job. This even includes temporary staff in many cases. Why should a temp have permission to access data, of whatever kind, and transfer it to, say, a USB stick or similar device?

How do you know where your data goes from there?

© M Smith (Veshengro), July 2008