The Time of Proactive Security Is Beginning!

By Ari Takanen, CTO, Codenomicon

The easiest way to compromise a system is to find a vulnerability in widely used software and exploit it. The problem today is that vulnerabilities rarely become public: there is very little motivation left to disclose security findings. Unfortunately, this also renders reactive tools such as intrusion detection systems, security scanners and vulnerability scanners ineffective. They are all built on public vulnerability knowledge, and today they simply do not have the data to work with. More and more zero-day attacks emerge with no protection available. It is time to be proactive!

Fortunately, a set of proactive security assessment tools has emerged to fill the gap. These tools can be divided into three categories: static code analysis tools, reverse-engineering tools, and fuzzers. But which of these are useful to the everyday security engineer trying to defend an enterprise network? To many, they all look like quality assurance tools rather than enterprise tools. Code auditing tools require access to source code to be useful. Reverse-engineering is a powerful, but often illegal, means of finding vulnerabilities. That leaves fuzzing as the only practical proactive means of protecting your systems.

Recently, fuzzing tools have been adopted into standard penetration testing practices and certification processes. In SCADA (industrial automation) environments, for example, fuzzing has become a critical part of security testing. Such tests have also been adopted as procurement criteria in the telecommunications industry. And if you look at the recent marketing materials for Google Chrome (http://www.google.com/googlebooks/chrome/), you can see that major software companies have made fuzzing part of their quality assurance processes.

Without knowing it, you might already be using a product that has been fuzzed during its lifecycle. I certainly hope it has been. The only way to be sure, however, is to fuzz it yourself. This realization was the beginning of the enterprise fuzzing market, and more and more end-user organizations are adopting fuzzing and integrating it into their standard auditing, acceptance and procurement processes.

What is Fuzzing?

Fuzzing is nothing new. For years, software testers, developers and auditors have used fuzzing in their proactive security assessments. It is used to find defects that can be triggered by malformed inputs arriving through external interfaces. This means that fuzzing can cover the most exposed and critical attack surfaces in a system relatively well, and identify many common errors and potential vulnerabilities quickly and cost-effectively. There are no false positives with fuzz testing: a crash is a crash, and you cannot argue with that.
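To make this concrete, here is a minimal sketch of mutation-based fuzzing in Python. It is illustrative only: the sample input and the delivery step are assumptions, not taken from any particular tool.

    import random

    def mutate(data: bytes, n_corruptions: int = 8) -> bytes:
        """Return a copy of `data` with a few randomly chosen bytes corrupted."""
        buf = bytearray(data)
        for _ in range(n_corruptions):
            buf[random.randrange(len(buf))] = random.randrange(256)
        return bytes(buf)

    # Start from a valid, protocol-conformant sample and generate broken variants.
    seed = b'HELO mail.example.com\r\n'  # hypothetical sample input
    for _ in range(1000):
        test_case = mutate(seed)
        # Deliver test_case to the system under test (over a socket, or as a
        # file the application opens) and watch for crashes and hangs.

Modern fuzzers replace this blind corruption with model-based, protocol-aware test generation, but the principle is the same.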

Although most of today's widely used fuzzers are commercial, much of the notoriety of fuzzing has arisen from the success of open source projects. The best-known early work is the 1989 study by Miller et al., who tested Unix command-line tools with fuzzed inputs (see http://www.cs.wisc.edu/~bart/fuzz/). Their research indicated that 20-40% of the tested software failed (crashed) when random inputs were provided. Back then, fuzzing was dumb, but it was still powerful. Over the last 10-15 years, fuzzing has gradually matured into a full testing discipline with support from both the security research and traditional QA testing communities, although some people still hold misconceptions about its capabilities, effectiveness and practical implementation. Fuzzing today is extremely intelligent!
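As a hedged sketch of the 1989-style "dumb" approach, assuming a Unix-like system (the target binary below is a hypothetical stand-in for any utility that reads standard input):

    import os
    import random
    import subprocess

    def fuzz_once(target: str) -> bool:
        """Run `target` on purely random stdin; True if it crashed or hung."""
        rand = os.urandom(random.randint(1, 4096))  # purely random bytes
        try:
            proc = subprocess.run([target], input=rand, timeout=5,
                                  stdout=subprocess.DEVNULL,
                                  stderr=subprocess.DEVNULL)
        except subprocess.TimeoutExpired:
            return True              # a hang counts as a failure too
        return proc.returncode < 0   # negative code: killed by a signal

    # Hypothetical target; substitute any stdin-reading utility.
    crashes = sum(fuzz_once('/usr/bin/strings') for _ in range(100))
    print(f'{crashes}/100 runs failed')

Even this trivial harness reproduces the spirit of the original study: no protocol knowledge, just random bytes and a crash monitor.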

Fuzzing Value

Fuzzing is especially useful for analyzing proprietary and commercial systems, as it does not require any access to source code. The system under test can be viewed as a black box, with one or more external interfaces available for injecting tests, but with no other information available about its internals. A practical example of fuzzing would be to send malformed HTTP requests to a web server, or to create malformed Word documents for opening in a word processing application.
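For the web server case, a black-box harness might look like the sketch below. The target address and the handful of malformed requests are illustrative assumptions; a real fuzzer would generate thousands of systematic variations.

    import socket

    # Hand-crafted malformed HTTP requests: an oversized URI, a bogus
    # version string with binary header bytes, and broken framing.
    MALFORMED_REQUESTS = [
        b'GET /' + b'A' * 65536 + b' HTTP/1.1\r\n\r\n',
        b'GET / HTTP/9.9\r\nHost: \x00\xff\r\n\r\n',
        b'\r\n\r\nGET\n/ HTTP/1.1\n',
    ]

    def send_raw(host: str, port: int, payload: bytes) -> bytes:
        """Deliver one raw request; return whatever the server sends back."""
        with socket.create_connection((host, port), timeout=5) as s:
            s.sendall(payload)
            try:
                return s.recv(4096)
            except socket.timeout:
                return b''

    for req in MALFORMED_REQUESTS:
        reply = send_raw('192.0.2.1', 80, req)  # placeholder test target
        print(f'{req[:30]!r} -> {reply[:40]!r}')

A missing reply, a garbled reply, or a server that stops answering altogether marks a test case worth investigating.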

The purpose of fuzzing is to find flaws in software, and it does that extremely efficiently. In tests conducted by Codenomicon Labs (www.gohackyourself.net), the researchers found that none of the consumer WLAN access points tested could withstand fuzzing. Eliminating such flaws with automated black-box tools reduces software costs both in R&D and in maintenance borne by the end-users of communication products. In the world of tomorrow, you may not need any separate security devices, because the network elements themselves will have been thoroughly fuzz-tested to tolerate any surprises coming from the network.

Codenomicon is exhibiting at Infosecurity Europe 2009, Europe's number one dedicated information security event. Now in its 14th year, the show continues to provide an unrivalled education programme and the most diverse range of new products and services from over 300 exhibitors, attracting 12,000 visitors from every segment of the industry. Held on 28th-30th April 2009 at Earls Court, London, this is a must-attend event for all professionals involved in information security. www.infosec.co.uk

Courtesy: Infosecurity PR