Providing a Service is all about the client... yeah right!
By Guus Leeuw, President & CEO, ITPassion Ltd
There are two types of storage services that can be provided to organisations: in-sourcing and out-sourcing.
With in-sourcing, the client receives, say, a storage administration team that works on the client's premises and with the client's equipment. The business model behind this type of service is quite easy to set up and to sell. Setting an average cost, to be paid per team member per month, is easily done: one only has to look at the salary ranges of these people to figure out how much the client should be paying for each of them. As Junior Administrators are likely to be less expensive than Senior Administrators, an average price covering all grades between Junior and Principal is easily arrived at.
From the client's side, one would expect a good balance of skills, ranging from junior people to Principals. However, it is fairly easy for the provider to supply far more junior people than a good balance would suggest, and thus under-deliver on quality of service.
The result of such a scheme is that the client is over-charged for the service it receives. After a while the client becomes unhappy with the service and starts looking for a different organisation to provide more of the same. Meanwhile, the storage provider collects a nice bonus for under-delivering on quality and does well financially.
The difficulty in this scenario is to find and maintain the right balance, for the sake of the client. The interests of the two parties essentially conflict: the service provider wants to reduce cost, whereas the client wants to improve quality of service. Often, this conflict of interest is not understood on the client side, which assumes that the service provider will do its utmost to provide a good service, while the service provider eagerly makes sure that this assumption remains in place.
It would be a good thing if service providers cared more about profit in the long term, which means keeping clients happy. For the only good client is a happy client.
Out-sourcing is a lot trickier to set up from a business model perspective. Several factors play a role: the cost of the data centre, electricity, cooling, equipment and staff all matter in making sure that the price of 1 TB of storage actually matches the cost the service provider incurs in providing and managing that terabyte.
There are several ways to over-charge a client; the most obvious is to hide the business model and the calculations that produced the price of that terabyte of storage. Unless faced with a procurement department that has already done the cost calculations for its own organisation, few clients understand the business model behind storage service providers.
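To make that concrete, here is a purely illustrative sketch, in Python, of the kind of back-of-the-envelope calculation a client could run to sanity-check a quoted per-terabyte price. Every figure in it is an invented placeholder, not a number taken from this article:

```python
# Hypothetical back-of-the-envelope model of what one terabyte per month costs a provider.
# All figures are invented placeholders; substitute real quotes before relying on the result.
monthly_costs = {
    "data_centre_space": 4000.0,        # rack space and facilities
    "electricity_and_cooling": 2500.0,
    "equipment_amortisation": 6000.0,   # arrays and switches written off over, say, 36 months
    "staff": 12000.0,                   # storage administration team
}

usable_capacity_tb = 200.0   # capacity actually sold to clients, after RAID and spares
margin = 0.25                # the provider's assumed profit margin

cost_per_tb = sum(monthly_costs.values()) / usable_capacity_tb
price_per_tb = cost_per_tb * (1 + margin)

print(f"Provider cost per TB/month: {cost_per_tb:.2f}")
print(f"Quoted price per TB/month at {margin:.0%} margin: {price_per_tb:.2f}")
```

Comparing such an estimate with the quoted price gives the client at least a rough idea of the margin hidden in the offer.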
Another easy way for a provider to reduce its costs is to use low-cost labour. Low-cost labour is often also less experienced. Again there is a conflict of interest: the client wants good-quality storage services, whereas the provider wants to reduce the cost behind its business model.
In reality, providing a service to a client is about making a profit off that client. The question the client should ask and answer for itself is: how much of a profit do I want the service provider to make? Only when that answer is understood should one go about selecting a service provider.
Guus Leeuw jr. studied Software Development at the Polytechnic High School of Information & Communication Technology in Enschede, the Netherlands. Soon after gaining his degree he was hired by EMC Germany to aid internal software development. Guus subsequently travelled and worked across Europe before setting up his own software and storage company, ITPassion, in 2007.
ITPassion Ltd is exhibiting at Storage Expo 2008, the UK's definitive event for data storage, information and content management. Now in its 8th year, the show features a comprehensive FREE education programme and over 100 exhibitors at the National Hall, Olympia, London, from 15-16 October 2008. www.storage-expo.com
Source: StoragePR
<>
Low cost automatic backup and remote data duplication solution for SMBs
By Ernesto Soria-Garcia, VP Sales IDS-Enterprise
When looking into products that make sure our everyday, all-important documents and databases are safely secured, we find a host of different options and specialised products that do the job proficiently.
However, if we look closely at the issues concerning data safety, and at solutions for restoring data after accidental deletion or the unfortunate loss of a laptop under whatever circumstance or 'force majeure', as they say, it becomes clear that not all issues are covered by one solution, and that we are often obliged to build a Meccano-like assembly of products and manage how they fit together.
In most small and medium businesses, not to mention micro enterprises and independent consultants, addressing these issues in-house becomes almost impossible: staff are not necessarily knowledgeable enough to feel confident having a crack at it, especially with the heavy burden of responsibility that goes with managing such solutions.
So we often give up and decide we will survive without, or accept a partial solution, or of course look for a third party to take care of this aspect of the business on our behalf. Often, the more secure or complex the solution, the more costly it is, and the more dependent we become on the third party we have called to our aid.
Linux-based mini servers can offer SMBs and micro enterprises a solution that backs up data from the PCs in the office to a locally based unit and then duplicates it over existing ADSL lines to a second, identical unit at a disaster-recovery location of their choice, all automatically, extremely fast and at a surprisingly low cost. Such a server can integrate the most advanced enterprise-class technology yet be set up and run by SMBs and micro enterprises with no more IT skills than the general PC user, providing for the first time truly private and fully confidential outsourcing of the duplication of one's data. The intelligent server monitors the whole process, with security checks and counter-checks reported to the user as well as to the manager/owner of the SMB.
Outlined below is what every enterprise, whether a conglomerate or an SMB, needs in order to secure its data backup, along with the procedures and techniques needed to restore data and files. The IT industry broadly defines the building blocks of a fully fledged solution as follows:
For backing up data on the company site and then either making copies for physical transport to the remote site, or duplicating and transferring the data over a network to a recipient unit elsewhere, including the media for remote storage:
- Backup software to manage the daily 'gathering' of data from all 'producers' of data, or clients
- The hardware (PC, server) that orchestrates this collection
- Local office storage media, such as tape drives or disk drives, to store the data collected
- Software or procedures to create a copy or replica of the backed-up data on local tapes or disks that are physically transported daily to the remote (outsourced) site
- Hardware (PC/server) to orchestrate this process and send the data to additional remote hardware (PC/server), storage disk or tape drives, and their media
- Network infrastructure, whether SAN, LAN or Internet ADSL
- A high-level IT engineer to put it all together, administer it and monitor it
By covering all of this in one fully integrated, purpose-made software and hardware product, Linux servers offer SMBs and micro enterprises a huge opportunity to equip themselves, once and for all, with a solution that brings extremely affordable, enterprise-level security to their data.
The items on a typical end-user wish list for a backup and remote duplication solution are commonly identified as:
- All-inclusive, with high performance. IDSbox integrates everything under its Linux OS, and only modifications and updates to local office data are transmitted to the remote IDSbox, avoiding any clogging of the ADSL lines and providing extremely fast synchronisation of the local and remote (outsourced) backups (a minimal sketch of this kind of incremental transfer appears after this list).
- Completeness and integrity of data. Various monitoring checks are carried out at directory and file level, amongst others
- Robustness. A robust system built on a metal chassis with a reinforced PVC casing
- Low operating cost. Electronic temperature and activity control lowers energy consumption to a meagre 9 watts; it could not be greener!
- Easy to install and administrate. Linux OS has been adapted to not require human intervention during its operation. Any programming or planning is done via a web browser.
- Provider of the highest security levels. The local and remote servers, as well as the disks containing the backed-up data, are mutually interchangeable. Data is transferred via encrypted tunnels; validation and authentication certificates, password control and rsync/SSH protocol controls are used.
- Independence. No additional services or third party products are needed so there are no hidden surprises. Only existing ADSL lines are used.
- Easy backup definitions and restoration procedures. The software should provide a very easy way (simple drag and drop!) to define the files and folders one wants permanently secured by backup and remote duplication. Backup times and frequencies should also be automated.
- Flexibility in data outsourcing possibilities. Should the company be a micro enterprise with a couple of PCs only, then only a single server will be needed at the remote location to back up data from the PCs.
- Confidentiality. Access to backed-up data should be provided through personal passwords only. This is different from NAS servers, which share data.
- Scalability. The backup and duplication capacity can be upgraded simply by changing the server disk sizes, e.g. from 320 GB to 1 TB, or by adding an external disk of up to 1 TB. Tape drives and disks for snapshot functions can be attached.
- Accompanying services such as warranty extensions, hot lines. Various warranties and extensions should be provided by the technology supplier, to provide maximum peace of mind.
- Cost-effective. Linux servers can provide the most economical solution on the market for backing up data locally and duplicating it remotely. This type of solution can be purchased for as little as £1,200 for an entry-level twin-box solution consisting of two mini Linux servers with 320 GB of disk capacity each, plus all the software necessary to back up as many as 20 PCs or servers locally, duplicate the data, create the secure encrypted transmission tunnel over the ADSL line, and monitor and report to the SMB manager and individual users. Other solutions on the market that can deliver this service, whether hardware-based, software-based, or both, can cost 20 to 50 times more.
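As a rough illustration of the incremental, encrypted duplication described in the list above, the sketch below pushes only changed files to a remote twin unit over SSH using rsync. It is a generic example, not the product's actual implementation; the host names, paths and options are assumptions:

```python
# Generic sketch: push only changed files from a local backup directory to a remote twin
# unit over SSH, the same general idea as the delta-only duplication described above.
import subprocess

LOCAL_BACKUP_DIR = "/srv/backup/"                  # hypothetical local staging area
REMOTE_TARGET = "backup@remote-box:/srv/backup/"   # hypothetical remote twin unit

def duplicate_offsite():
    """Run rsync through an SSH channel; only deltas are sent, keeping ADSL usage low."""
    subprocess.run(
        [
            "rsync",
            "--archive",    # preserve permissions, timestamps and symlinks
            "--compress",   # reduce traffic on the slow ADSL uplink
            "--delete",     # keep the remote copy an exact mirror of the local one
            "-e", "ssh",    # transfer over an encrypted SSH connection
            LOCAL_BACKUP_DIR,
            REMOTE_TARGET,
        ],
        check=True,
    )

if __name__ == "__main__":
    duplicate_offsite()
```

Scheduling such a job every few hours, for example from cron, gives the automatic, delta-only-over-ADSL behaviour the wish list asks for.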
In conclusion, for the first time an extremely affordable solution is available that integrates all the components required to back up data on a company site as well as to create and maintain a permanently updated copy physically elsewhere via ADSL, and it can be set up and programmed by a typical PC user. An SMB can now physically possess and control not only its local but also its outsourced data, without requiring a third party. Many SMB owners have expressed strong interest in this, as their data now remains fully confidential, and in case of a major disaster or data loss they can recover the data themselves using the extremely simple and intuitive software provided.
IDS-Enterprise are exhibiting at Storage Expo 2008, the UK's definitive event for data storage, information and content management. Now in its 8th year, the show features a comprehensive FREE education programme and over 100 exhibitors at the National Hall, Olympia, London, from 15-16 October 2008. www.storage-expo.com
Source: StoragePR
<>
Whittington NHS Trust loses 18,000 sets of data
by Michael Smith
The personal details of nearly 18,000 NHS staff have gone missing in the post, it has emerged.
Four computer discs containing the details of 17,990 current and former staff were lost in July when they were sent between Whittington Hospital NHS Trust in north London and McKesson, a firm providing IT payroll services.
Those CDs contained the names, dates of birth, national insurance numbers, start dates and pay details of all staff of Whittington Hospital NHS Trust, Islington Primary Care Trust, Camden Primary Care Trust and Camden and Islington NHS Foundation Trust.
They also contained the addresses of some staff, although the Whittington trust insisted they did not contain anyone's personal bank account details. Well, now, that is a relief. But this has all been announced in a rather slap-happy way.
The more we hear about this, the more we can but wonder whether somewhere in British government institutions, including NHS Trusts, the MOD and others, there is a competition going on as to how many sets of data can be lost. This kind of criminal negligence just cannot be explained away in any other way, unless gross stupidity also has something to do with it.
The trust said the discs went missing when an envelope they were in was placed in a post tray marked "recorded delivery" on Tuesday 22 July. But there was no record of the discs being sent.
The chief executive of the trust said that each disc had a separate alphanumeric password on it which, unless it falls into the hands of expert hackers, is very difficult to break. Let us just hope that this is indeed the case. But they merely have passwords. They are NOT encrypted. Who the **** is running this asylum called the British government?
He apologised to all those affected by the blunder, saying it was the first time information had been sent through the post and that the member of staff thought to be responsible had been suspended.
"It is trust policy to send any such information by courier," he said, adding: "To our knowledge, this is the one and only time that such information was directed through the post.
"An investigation is underway, with an enquiry panel taking place shortly. In the meantime, a member of staff has been suspended."
It is NOT the member of staff whose head should roll – at least not alone. The buck does not stop with the little guy or girl, who may not even have been told how to send the CDs and may never have been told that they are to be sent by courier.
This revelation has led both the Conservatives and the Liberal Democrats to call on the Government to scrap its planned electronic database of 50 million patient records in England. One can but add to that a call to scrap the National ID Card scheme and other such harebrained things. This country and its government are incapable of looking after the data of its people.
Not that it would be impossible to make the systems safe. While it may not be possible to guarantee 110% that no one will ever be able to get hold of someone's details, it is possible to encrypt the data to such an extent that it would take even a sophisticated hacker – even a hacker team – months if not longer to gain access to it. If the protection were then set up so that only a limited number of attempts is permitted, and the data is wiped clean once that limit is exceeded, things would be safer still. This is NOT rocket science, as I keep saying. The technology is available and out there.
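For the sake of argument, here is a minimal sketch of how little code strong encryption requires, assuming the freely available Python "cryptography" package; the file name is hypothetical, and this is an illustration of the principle, not a description of what the trust actually did:

```python
# Minimal sketch: encrypt an export before it ever leaves the building.
# Assumes the open-source "cryptography" package; the file name is hypothetical.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # the key travels separately from the discs
cipher = Fernet(key)

with open("payroll_export.csv", "rb") as f:
    plaintext = f.read()

with open("payroll_export.enc", "wb") as f:
    f.write(cipher.encrypt(plaintext))   # authenticated symmetric encryption

# Without the key, the .enc file is useless to whoever finds the disc;
# with the key, recovery is a single call:
with open("payroll_export.enc", "rb") as f:
    assert Fernet(key).decrypt(f.read()) == plaintext
```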
© M Smith (Veshengro), September 2008
<>
Nearline and Archiving in the Data Warehouse: What's the Difference?
By: Arthur Ritchie - Chairman and CEO at SAND
In recent years, data warehouses have begun to increase radically in size. To maintain acceptable performance in the face of this "data explosion", several techniques have been introduced. These include pre-building aggregates and Key Performance Indicators (KPIs) from large amounts of detailed transaction data, and indexing as many columns as possible in order to speed up query processing.
As data warehouses continue to grow, however, the time required to do all the necessary preprocessing increases to the point where these tasks can no longer be performed in the available "batch windows" when the warehouse is not being accessed by users. So trade-offs need to be made. Doing less preprocessing work reduces the required time, but it also means that queries that depend on aggregates, KPIs or additional indexes may take an inordinately long time to run, and may severely degrade performance for other users as the system attempts to do the processing "on the fly". This impasse leads to two possible choices: either stop providing the analytic functionality, making the system less valuable and users more frustrated, or "put the database on a diet" by moving some of the data it contains to another location.
Putting the Database "on a Diet"
Both nearline and archiving solutions can help trim down an over-expanded database: the database can be made much smaller by implementing an Information Lifecycle Management (ILM) approach, removing unused or infrequently used detailed transactional data from the online database and storing it elsewhere. When the database is smaller, it will perform better and be capable of supporting a wider variety of user needs. Aggregates and KPIs will be built from a much smaller amount of detailed transaction data. Additionally, column indexing will be more practicable, as there will be fewer rows per column to index.
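As a toy illustration of that ILM movement of cold data out of the online database (the table and column names are invented, SQLite is used purely for brevity, and this is not how SAND/DNA or any particular warehouse actually implements nearlining):

```python
# Toy ILM sketch: relocate detail rows older than one year from the "online" fact table
# to a nearline table, so aggregates and indexes only have to cover the hot data.
import sqlite3
from datetime import date, timedelta

cutoff = (date.today() - timedelta(days=365)).isoformat()   # keep one "hot" year online

conn = sqlite3.connect("warehouse.db")   # hypothetical warehouse database
with conn:
    # Create an empty nearline table with the same shape as the online fact table.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS sales_nearline AS SELECT * FROM sales_online WHERE 0"
    )
    # Copy the cold detail rows across, then remove them from the online table.
    conn.execute(
        "INSERT INTO sales_nearline SELECT * FROM sales_online WHERE txn_date < ?", (cutoff,)
    )
    conn.execute("DELETE FROM sales_online WHERE txn_date < ?", (cutoff,))
conn.close()
```

Real nearline products add transparent query access to the relocated rows; the sketch only shows the data movement itself.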
The Key Differences between Archiving and Nearlining in a Data Warehouse
It is important to stress the differences between archiving warehouse data (using products from Open Text, Princeton Softech and so on) and storing it nearline (using SAND/DNA). Since both types of product are used to hold data that has been moved out of the main "online" system, it is unclear to some why one would need to be implemented if the other is in place. To clarify this question and make it easier to discuss why one or the other type of system (or both) might be required in a given situation, the major differences between nearline data and archived data are outlined below.
Archive
Normally, the concept of electronic archiving focuses on the preservation of documents or data in a form that has some sort of certifiable integrity (for example, conformity to legal requirements), is immune to unauthorized access and tampering, and is easily subject to certain record management operations within a defined process – for example, automatic deletion after a certain period, or retrieval when requested by an auditor. The archive is in fact a kind of operational system for processing documents/data that are no longer in active use.
The notion of archiving has traditionally focused on unstructured data in the form of documents, but similar concepts can be applied to structured data in the warehouse. An archive for SAP BI, for example, would preserve warehouse data that is no longer needed for analytical use but which needs to be kept around because it may be required by auditors, as would be the case if SAP BI data were used as the basis for financial statements. The archive data does not need to be directly accessible to the user community, just locatable and retrievable in case it is required for inspection or verification – not for analysis in the usual sense. In fact, because much of the data that needs to be preserved in the archive is fairly sensitive (for example, detailed financial data), the ability to access it may need to be strictly regulated.
While many vendors of archiving solutions stress the performance benefits of reducing the amount of data in the online database, accessing the archived data is a complicated and relatively slow process, since it will need to be located and then restored into the online database. For this reason, it is unrealistic to expect archived data to be usable for analysis/reporting purposes.
Nearline
In the Information Lifecycle Management approach, the nearline repository holds data that is used less frequently than the "hottest" most current data but is still potentially useful for analysis or for constructing new or revised analytic objects for the warehouse.
While the exact proportion of nearline to online data will vary, the amount of "less frequently used" data that needs to be kept available is normally quite large. Moving this out of the main database greatly reduces the pressure on the online database and enables continued performance of standard database operations within available time windows, even in the face of the explosive data growth that many organizations are currently facing.
Thus, the archiving requirements described above do not apply to a nearline product such as SAND/DNA, which is designed to reduce the size of the online warehouse database, while at the same time keeping the data more or less transparently accessible to end users who may need to use it for analysis, for rebuilding KPI's and so on.
In Brief
Why a Nearline Product is not an Archive
Nearline products do:
- Make older data easily accessible to end users for enhanced analysis/reporting
- Offer very good performance in delivering data to end users - typically not more than 1.x times slower than online, with little or no impact on online users
- Allow greater amounts of relatively recent data to be moved out of the online system
Nearline products do not:
- Offer methods for ensuring the compliance of data with regulations
- Feature any special built-in security regime beyond the read-only status of the data
- Take care of operational processes on data, such as enforcement of retention periods, automatic deletion and so on
Archiving products do:
- Provide controlled storage of older data that will probably not be accessed except in special circumstances
- Enforce organizational policies with regard to data retention
- Ensure compliance
- Limit access to sensitive data.
Archiving products do not:
- Make data easily accessible to users for analysis or reporting
- Offer fast performance in restoring data
- Store relatively recent data that may be required for analytics/reporting
Source: StoragePR
<>
RSA® Conference Europe - New keynotes and Sessions of Interest
There are just a few weeks to go until the 9th annual RSA® Conference Europe (27-29 October 2008) at ExCeL London, and here are a few updates on what is happening:
New keynotes and Sessions of Interest:
In addition to the keynote by the Information Commissioner, Richard Thomas, on Wednesday 29 October, RSA Conference Europe has also confirmed that Baroness Neville-Jones, the UK's Shadow Security Minister, will present the closing keynote.
Baroness Neville-Jones will focus on the issues, both practical and political, that government and society face when developing a national security agenda. During her early career, Neville-Jones was a British diplomat, serving in what was then Rhodesia, Singapore and the USA, amongst other postings.
Since then, she has held a number of security posts, including Deputy Secretary to the Cabinet and Head of the Defence and Overseas Secretariat in the Cabinet Office (’91 to ’94). In January 2006, she was appointed by David Cameron to head the Conservative Party's National and International Security Policy Group, and on 2 July 2007 she was appointed Shadow Security Minister and National Security Adviser to the Leader of the Opposition.
Olympic Games Information Security: The Ultimate Challenge is scheduled for Wednesday 29th October at 9am. Marc Llanes, Information Security Manager, Atos Origin, and Vladan Todorovic, Information Security Manager, Beijing 2008 Olympic Games, are confirmed to lead the session.
They will cover how to address the challenges that come with securing the world's most high-profile event, and how to recognise real threats and ensure a consistent and secure data flow in such an information-overloaded, widespread and heterogeneous high-risk environment.
Other already confirmed keynotes
Richard Thomas, Information Commissioner at the UK's Information Commissioner's Office (ICO), is keynoting on Wednesday 29th October. He will discuss the latest developments and topical issues in the ever-evolving landscape of information security, how the role of the ICO is being strengthened, and what the ICO's approach will be following the recent high-profile data losses across the UK's public and private sectors.
Online Privacy and the World of Behavioral Targeting: Challenges and Options is the first keynote panel to be confirmed. It will be moderated by Chris Kuner, Partner and Head of International Privacy and Information Management at Hunton & Williams, one of the world's largest law firms.
Art W. Coviello, Jr., Executive Vice President, EMC Corporation and President, RSA, The Security Division of EMC, will give his annual keynote on the first day of the Conference.
Enhanced content with new Tracks and Sessions
Last year, RSA Conference Europe attendees gave us our highest-ever rating for Conference content. 70+ sessions across the following 9 tracks will take place over the 3 days:
Developers & Applications (formerly Developing with Security)
Security Services (formerly Authentication)
Business of Security (formerly Business Trends & Impact)
Hosts (formerly covered by Enterprise Defence)
Governance (formerly Policy & Government)
Networks (formerly covered by Enterprise Defence)
Professional Development
Research & Threats (formerly Hackers & Threats)
Sponsor Case Studies
2008 Conference Theme: Alan Turing
This year's Conference theme is built around Alan Turing, British cryptographer, mathematician, logician, philosopher and biologist, and will celebrate his legacy and contribution towards digital computers today. Experts and historians agree that Turing had a deeper understanding of the vast potential of computer science than anyone in his era, and he is often considered the father of modern computer science.
Bloggers Are Welcome!
We are pleased to let you know that for the first time bloggers will be able to obtain a free press pass. Registration will be judged individually and based upon the credibility of the blog itself. Bloggers must have covered information security topics for a minimum of three months with a consistent posting rate (at least 2 posts a week). Other information, like Technorati ratings and number of hits/page views, will also be taken into consideration.
ExCeL: Still Closer Than You Think!
ExCeL was previously misconstrued as somewhat inaccessible, but we hope that last year's overwhelmingly well-attended Conference (we had 100+ press and analysts attending from all over Europe) went a long way towards dispelling that myth and justified the choice of ExCeL London again for RSA Conference Europe 2008.
ExCeL is exceptionally well served by air, rail, underground/DLR and road, directions to which can be found here.
Useful Info for Press/Analysts/Bloggers on Website
As RSA Conference Europe 2008 wants to make access to information as easy as possible, the dedicated press area of the website has been re-designed and improved, and is accessible by going here.
Source: AxiCom
<>
10 Criteria to Selecting the Right Enterprise Business Continuity Software
By Jerome M. Wendt, DCIG, LLC
The pressure to implement business continuity software that can span the enterprise and recover application servers grows with each passing day. Disasters come in every shape and form, from regional disasters (earthquakes, floods, lightning strikes) to terrorist attacks to brown-outs to someone accidentally unplugging the wrong server.
Adding to the complexity, the number of application servers and virtual machines is on the rise while IT headcounts are flat or shrinking. Despite these real-world pressures, companies often still buy business continuity software based on the centralized or stand-alone computing models that everyone started abandoning over a decade ago.
Distributed computing is now almost universally used for hosting mission-critical applications. However, business continuity software that can easily recover and restore data in distributed environments is still based on 10-year-old models. This puts businesses in a situation where they end up purchasing business continuity software that can only recover a subset of their application data.
Organizations now need a new set of criteria that accounts for the complexities of distributed systems environments. Today's business continuity software must be truly enterprise-class and distributed in its design. Here are 10 features that companies should look for when selecting business continuity software, so that it meets the needs of their enterprise distributed environment:
- Heterogeneous server and storage support. In distributed environments, companies generally have multiple operating systems and storage systems from multiple hardware vendors. Companies want the flexibility to recover applications running on any of these operating systems while using whatever storage they have available at the DR site to do the recovery. Many business continuity solutions require the same configurations (host software, network appliance, storage system) at the production and DR sites. New enterprise business continuity software intended for distributed environments should not.
- Accounts for differences in performance. A major reason that companies implement specific business continuity solutions for specific applications is how those applications manage high numbers of write I/Os. High-performance (i.e. high write I/O) applications place very different demands on business continuity software than application servers with infrequent write I/Os. To scale in enterprise distributed environments, the business continuity software needs to provide options to scale under either type of application load.
- Manages replication over WAN links. Replicating all production data to the target site works well until the network connection becomes congested or breaks. Enterprise business continuity software needs to monitor these WAN connections, provide logical starting and stopping points if the connection is interrupted, and resume replication without losing data or negatively impacting the application it is protecting.
- Multiple ways to replicate data. Not every application server needs all of its data replicated. Some application servers need only select files or directories replicated while other application servers need all data on one or more volumes replicated to ensure the recoverability of the system. Enterprise business continuity software should give companies the flexibility to replicate data at whatever layer – block or file – that the application server requires.
- Application integration. Replicating data without any knowledge of what application is using the data or how it is using the data represents a substantial risk when it comes time to recover the application. Recovering applications such as Microsoft Exchange, SharePoint or SQL Server that keep multiple files open at the same time can result in inconsistent and unrecoverable copies of data at the DR site. Business continuity software must integrate with these applications such that it provides consistently recoverable images at the DR site.
- Provides multiple recovery points. A problem with a number of existing business continuity solutions is that they only provide one recovery point: the one right before the disaster occurred. However, disasters are rarely that neat and tidy. Sometimes companies are not even aware a disaster has occurred until hours afterwards (think database corruption or the wrong file loaded). Business continuity software needs to provide multiple recovery points so companies can roll back to a point in time right before the disaster occurred, and give them multiple options to recover the data (a minimal sketch of this idea follows the list).
- Introduces little or no overhead on the host server. Putting agents on host servers provides a number of benefits: application awareness, capture of all write I/Os, and even a choice as to where the replication (block or file) of the data will occur. However, if an agent consumes so many resources on the server that the application cannot run properly, it negates the point of using the business continuity software in the first place.
- Replicates data at different points in the network (host, network or storage system). Getting agents on every corporate server is rarely an option. Whether it is because of corporate service level agreements (SLAs), ignorance of the presence of new virtual machines, or just good old-fashioned corporate politics, agents are useful but not an option in every situation. In this case, the business continuity software should also provide options to capture data at either the network or storage system level.
- Centrally managed. Enterprise business continuity software needs to monitor and manage where it is installed in the enterprise, what applications it is protecting, how much data it is replicating and the flow of replication from the production to DR sites. It also should provide a console from which administrators can manage recoveries anywhere in the enterprise.
- Scales to manage replication for tens, hundreds or even thousands of servers. Enterprise companies sometimes fail to realize just how many application servers they actually have. Tens of servers is a given in even most small organizations, and hundreds or even thousands of servers are more common than not in any large company. The business continuity software should include an architecture that scales to this number of servers without breaking the replication processes or the bank.
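To make the "multiple recovery points" criterion above concrete, here is a generic sketch (not any vendor's implementation) of retaining several point-in-time copies and rolling back to the newest one taken before a chosen moment:

```python
# Toy sketch of retaining several point-in-time copies so a recovery can roll back to a
# moment before the disaster, not merely to the latest replica.
import copy
from datetime import datetime

class RecoveryPoints:
    def __init__(self, max_points=24):
        self.max_points = max_points
        self.points = []                 # list of (timestamp, snapshot) pairs

    def capture(self, state):
        """Record a consistent snapshot of the replicated state."""
        self.points.append((datetime.utcnow(), copy.deepcopy(state)))
        self.points = self.points[-self.max_points:]   # bounded retention

    def recover(self, before):
        """Return the newest snapshot taken strictly before `before`."""
        candidates = [p for p in self.points if p[0] < before]
        if not candidates:
            raise LookupError("no recovery point earlier than the requested time")
        return max(candidates, key=lambda p: p[0])[1]
```

Capturing a snapshot every few minutes and recovering to the newest point before the corruption was noticed is the kind of flexibility the criterion asks for.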
InMage is exhibiting at Storage Expo 2008, the UK's definitive event for data storage, information and content management. Now in its 8th year, the show features a comprehensive FREE education programme and over 100 exhibitors at the National Hall, Olympia, London, from 15-16 October 2008. www.storage-expo.com
Source: StoragePR
<>
UK Information Commissioner to Keynote at RSA® Conference Europe in October
Details of the Event's Comprehensive Educational Track Session Programme Also Unveiled
RSA® Conference, the world's leading information security conference group, announced in August 2008 that Richard Thomas, Information Commissioner in the UK's Information Commissioner's Office (ICO), will be keynoting at the ninth annual RSA® Conference Europe, which is taking place from 27th-29th October 2008 at ExCeL London.
Mr. Thomas will be discussing the ever-evolving landscape of information security, how the role of the ICO is being strengthened and what the ICO's approach will be following the recent high-profile data losses across the UK's public and private sectors.
Educational Track Sessions at RSA Conference Europe 2008
Central to RSA Conference Europe are its 70+ high-quality educational track sessions, a unique feature that distinguishes RSA Conferences from other professional events. Technology companies and their end-user customers are invited to submit papers for sessions in which to share experiences around real-life security demands and deployments - and to discuss today's most burning security issues.
After an extensive review process in the Spring by an independent selection panel, the RSA Conference Europe 2008 session agenda will include speakers drawn from across the whole value chain, including major brands such as Nokia, eBay, BT Global Services and Verizon Business. Representatives from research houses such as Cryptography Research, Freeform Dynamics and Forrester Research will also be presenting.
This year's session tracks are:
-- Business of Security
-- Developers & Applications
-- Governance
-- Hosts
-- Networks
-- Professional Development
-- Research & Threats
-- Security Services
-- Sponsor Case Studies
"Moving RSA Conference Europe to ExCeL London last year started a new phase in the Conference's development. Not only have we grown our attendee base significantly, but the attendees gave us our highest-ever ratings for Conference content in 2007," said Linda Lynch, RSA Conference Europe Manager. "I'm delighted that Richard Thomas will use the Conference as the platform to discuss one of the most critical issues in information security - that of safeguarding personal data."
This year's RSA Conference theme is built around Alan Turing - the British cryptographer, mathematician, logician, philosopher and biologist - and will celebrate his legacy and contribution towards digital computers today. Experts and historians agree that Turing had a deeper understanding of the vast potential of computer science than anyone in his era, and is often considered the father of modern computer science.
Full details about registration and deadlines for special discounts are available at
http://www.rsaconference.com/2008/Europe/Registration.aspx
For more information about press registration please visit the Conference website at
http://www.rsaconference.com/2008/Europe/For_Press.aspx
RSA Conference is helping drive the security agenda worldwide with annual events in the U.S., Europe and Japan. Throughout its history, RSA Conference has consistently attracted the world's best and brightest in the field, creating opportunities for Conference attendees to learn about IT security's most important issues through first-hand interactions with peers, luminaries and both emerging and established companies. As the IT security field continues to grow in importance and influence, RSA Conference plays an integral role in keeping security professionals across the globe connected and educated. For more information and Conference dates, visit http://www.rsaconference.com
Source: AxiCom
<>