Noms and IKANHAZFIZMA

Posted August 26th, 2011 by

Kickin’ it old-school with some kitteh overflows

iz noms stack overflow



Posted in IKANHAZFIZMA | No Comments »

The Rise of the Slow Denial of Service

Posted August 23rd, 2011 by

Usually when people think about Denial of Service attacks nowadays, they conjure up images of the Anonymous kids running their copies of LOIC in a hivemind or Russian Gangsters building a botnet to run an online protection racket.  But there is a new-ish attack technique floating around which I believe will become more important over the next year or two: the slow HTTP attack.

How a Slow DoS Works

Webservers run an interesting version of process management.  When you start an Apache server, it starts a master process that spawns a number of listener processes (or threads), as defined by StartServers (5-10 is a good starting number).  Each listener serves a number of requests, defined by MaxRequestsPerChild (1000 is a good number here), and then dies and is replaced by the master with a fresh process/thread.  This is done so that any application that leaks memory gets cleaned up when its process dies instead of slowly dragging down the server.  As more requests are received, more processes/threads are spawned, up to the MaxClients setting.  MaxClients is designed to throttle the number of processes so that Apache doesn’t forkbomb and the OS doesn’t become unmanageable because it’s thrashing to swap.  There are also some rules for winding down idle processes, but those are immaterial to what we’re trying to do today.
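
If you want to see all of those knobs in one place, here’s a minimal sketch of an Apache 2.2 prefork configuration with the directives I just mentioned.  The numbers are only the illustrative starting points from this post, not tuning recommendations for your particular box (MinSpareServers/MaxSpareServers are the idle-process rules I said we could ignore).

    <IfModule mpm_prefork_module>
        StartServers              5
        MinSpareServers           5
        MaxSpareServers          10
        MaxClients              150
        MaxRequestsPerChild    1000
    </IfModule>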

Go read my previous post on Apache tuning and stress testing for the background on server pool management.

What happens in a slow DoS is that the attack tool sends an HTTP request that never finishes.  Each listener that picks one up sits there waiting for the request to complete instead of working through its MaxRequestsPerChild quota and dying.  By sending a small number of never-completing requests, the attacker gets Apache to gladly spawn new processes/threads up to MaxClients, at which point it stops answering requests and the site is DoS’ed.  The higher the rate of listener process turnover, the faster the server stops answering requests.  For a poorly tuned webserver configuration with MaxClients set too high, the server starts thrashing to swap before it even hits MaxClients; to top it off, the box becomes unresponsive even to ssh connections and needs a hard boot.

The beauty of this is that the theoretical minimum number of requests to make a server hang for a well-tuned Apache is equal to MaxClients.  This attack can also take out web boundary devices: reverse proxies, Web Application Firewalls, Load Balancers, Content Switches, and anything else that receives HTTP(S).
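
To put rough numbers on that theoretical minimum, here’s a back-of-the-envelope sketch in Python.  The MaxClients, timeout, and byte figures are made-up illustrative values, but the arithmetic shows why the attack is so cheap for the attacker and why you will never spot it on a bandwidth graph.

    # Back-of-the-envelope worker-pool exhaustion math.
    # All values are illustrative assumptions, not measurements of any real server.

    max_clients = 256        # the target's Apache MaxClients setting
    timeout = 300            # seconds Apache will wait on an incomplete request
    bytes_per_refresh = 10   # a few header bytes dribbled out to keep a request open

    connections_needed = max_clients             # one stuck request per worker
    refreshes_per_hour = 3600.0 / timeout        # dribbles per connection per hour
    bandwidth_per_hour = connections_needed * refreshes_per_hour * bytes_per_refresh

    print("Connections needed to tie up every worker: %d" % connections_needed)
    print("Attacker bandwidth: about %.1f KB per hour" % (bandwidth_per_hour / 1024.0))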

Post photo by Salim Virji.

Advantages of Slow DoS Attacks

There are a couple of reasons why slow DoS tools are getting research and development attention this year, and I see them growing in popularity.

  • Speed and Simplicity:  Slow DoS attacks are quick to take down a server.  One attacker can take down a website without trying to build a botnet or coordinate attack times and targets with 3000 college students and young professionals.
  • TOR:  With volume-based attacks like the Low Orbit Ion Cannon, it doesn’t make sense to route attack traffic through TOR.  TOR adds latency, throttles the number of requests that the attacker can send, and might fail before the target’s network does.  A slow attack is low-volume enough that TOR becomes practical, and using it keeps the defender from tracking you back to your real location.
  • Server Logging:  Because the request never completes, most webservers never write a log entry for it.  That makes the attack very hard to detect or troubleshoot, which means it takes longer to mitigate.  I’m interested in exceptions if you know specifics on which webserver/tool combinations do log this traffic.
  • IDS Evasion:  Most DoS tools are volume-based attacks, and there are IDS rules to detect them, usually by counting the TCP SYNs coming from each IP address in a particular span of time and flagging the traffic when a threshold is exceeded.  By using a slow DoS tool that sends its requests over SSL, you give the IDS no way to see that you’re sending slow DoS traffic.
  • Stay out of the “Crowbar Hotel”:  Use the Ion Cannon, make logs on the target system, go to jail.  Use slow DoS with TOR and SSL, leave fewer traces, and avoid having friends that will trade you for a pack of cigarettes.

Defenses

This part is fun, and by that I mean “it sucks”.  There are some things that help, but there isn’t a single solution that makes the problem go away.

  • Know how to detect it.  This is the hard one.  What you’re looking for is Apache spawned out to MaxClients but not logging a comparable volume of traffic; i.e., the workers are all hung up waiting for that one last request to finish and are turning away everything else.  (A rough monitoring sketch follows this list.)
    • “ps aux | grep apache2 | grep start | wc -l” returns a count equal to MaxClients + 2.
    • Your webserver isn’t logging the normal volume of requests.  Use some grep-foo and “wc -l” to compare traffic from a month ago, a day ago, an hour ago, and the last 5 minutes.
  • Disable the POST method if you don’t need it.  Some of the more advanced techniques rely on the fact that a POST can carry more headers and more body data.
  • Use an astronomically high number of servers.  If your server processes can time out and respawn faster than the slow DoS can hang them, you win.  If you had maybe 3000 servers, you wouldn’t have to worry about this.  Don’t have 3000 servers?  I might have some you could use.
  • Set a lower connection timeout.  Something like 15-30 seconds will keep Apache humming along.
  • Limit the request size.  1500 bytes is pretty small; 3K is a pretty good value to set.  Note that this needs testing, because it will break some things.
  • Block TOR exit nodes before the traffic reaches your webservers (i.e., at layer 3/4).  TOR publishes a list of these.
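
And for the detection bullet above, here’s a rough monitoring sketch in Python.  It assumes a Debian-style Apache 2.2 box (worker processes named apache2, access log at /var/log/apache2/access.log) and a MAX_CLIENTS value mirrored from your config; treat it as a starting point to adapt, not a finished monitor.

    #!/usr/bin/env python
    # Rough slow-DoS check: lots of Apache workers but few completed (logged) requests.
    # Process name, log path, and thresholds are assumptions for a Debian-style box.
    import subprocess
    import time

    MAX_CLIENTS = 256                          # mirror your Apache MaxClients setting
    ACCESS_LOG = "/var/log/apache2/access.log"
    WINDOW = 300                               # seconds to watch the access log

    def apache_worker_count():
        """Count running apache2 processes, like the ps/grep/wc one-liner above."""
        output = subprocess.check_output(["ps", "-e", "-o", "comm="])
        return sum(1 for name in output.decode().splitlines() if "apache2" in name)

    def requests_logged(window):
        """Count access-log lines written during the watch window."""
        with open(ACCESS_LOG) as log:
            log.seek(0, 2)                     # jump to the current end of the log
            time.sleep(window)
            return len(log.readlines())        # only lines appended while we slept

    workers = apache_worker_count()
    completed = requests_logged(WINDOW)

    if workers >= MAX_CLIENTS and completed < workers:
        print("Possible slow DoS: %d workers but only %d logged requests in %d seconds"
              % (workers, completed, WINDOW))
    else:
        print("Looks normal: %d workers, %d logged requests in %d seconds"
              % (workers, completed, WINDOW))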



Posted in Cyberwar, DDoS, Hack the Planet, Technical | 7 Comments »

DDoS Planning: Business Continuity with a Twist

Posted August 17th, 2011 by

So since I’ve semi-officially been granted the title of “The DDoS Kid” after some of the incident response, analysis, and talks that I’ve done, I’m starting to get asked a lot about how much the average DDoS costs the targeted organization.  I have some ideas on this, but the simplest way is to recycle Business Continuity/Disaster Recovery figures with some small twists.

Scoping:

  • Plan on a 4-day attack.  A typical attack duration is 2-7 days.
  • Consider an attack on the “main” (www) site and anything else that makes money (shopping cart, product pages).

Direct:

  • Downtime: (the cost of one day’s worth of downtime, figured for both peak times (for most eCommerce sites, that’s Thanksgiving to January 5th) and low-traffic times) x (attack duration).
  • Bandwidth: For services that charge by the bit or CPU cycle, such as cloud computing or some ISP services, the direct cost of the usage bursting.  The cost per bit/cpu/$foo is available from the service provider; multiply your average peak-time rate by 1000 (small attack) or 10000 (large attack), then by (attack duration), to estimate the usage.  This is the only big difference in cost from the BCP/DR data.
  • Mitigation Services:  Figure ($5K to $10K per day for a DDoS mitigation service) x (attack duration).

Indirect:

  • Increased callcenter load: (expected users per day) x (percentage who now call the callcenter, 10% as a starting guess) x (average dollar cost per call) x (attack duration).
  • Increased physical “storefront” visits: (expected users per day) x (percentage who now have to go to a physical location, 10%) x (average cost of serving an in-person visit) x (attack duration).
  • Customer churn: customer loss due to frustration.  Figure (2-4% customer loss) x (attack duration) x (value of a customer).

Brand damage (these vary from industry to industry and from attack to attack):

  • Increased marketing budget: Percentage increase in marketing budget.  Possible starting value is 5%.
  • Increased customer retention costs: Percentage increase in customer retention costs.  Possible starting value is 10%.

Note that it’s reasonably easy to create example costs for small, medium, and large attacks and do planning around a medium-sized attack.
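
If you want a starting skeleton for those example costs, here’s a minimal sketch in Python that strings the factors above together for a hypothetical medium-sized attack.  Every input value is a placeholder assumption to be swapped for your own BCP/DR numbers.

    # Minimal DDoS cost-model sketch built from the rough factors above.
    # Every input is a placeholder assumption; swap in your own BCP/DR figures.

    attack_days = 4               # plan on a 4-day attack (typical duration is 2-7 days)

    # Direct costs
    downtime_per_day = 50000      # one day's worth of downtime for the attacked sites
    mitigation_per_day = 7500     # $5K-$10K per day for a DDoS mitigation service

    # Indirect costs
    users_per_day = 20000         # expected users per day
    call_rate = 0.10              # ~10% of users call the callcenter instead
    cost_per_call = 8.0           # average dollar cost per call

    customers = 100000
    value_per_customer = 200.0    # value of a customer
    churn_rate_per_day = 0.03     # "2-4% customer loss x (attack duration)" per the bullet above

    direct = (downtime_per_day + mitigation_per_day) * attack_days
    call_center = users_per_day * call_rate * cost_per_call * attack_days
    churn = customers * churn_rate_per_day * attack_days * value_per_customer

    total = direct + call_center + churn
    print("Direct: $%d, callcenter: $%d, churn: $%d" % (direct, call_center, churn))
    print("Rough total for a %d-day attack: $%d" % (attack_days, total))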

However much we recycle BCP/DR figures for the outage itself, mitigation of the attack is different:

  • For high-volume attacks, you will need to rely on service providers for mitigation simply because of their capacity.
  • Fail-over to a secondary site means that you now have two sites that are overwhelmed.
  • Restoration of service after the attack is more like recovering from a hacking attack than resuming service at the primary datacenter.


Posted in DDoS, Risk Management, Technical | No Comments »

Realistic NSTIC

Posted August 10th, 2011 by

OK, NSTIC has been out a couple of months now, with the usual “ZOMG it’s RealID all over again” worry-mongers raising their heads.

So we’re going to go through what NSTIC is and isn’t and some “colorful” (or “off-color” depending on your opinion) use cases for how I would (hypothetically, of course) use an Identity Provider under NSTIC.

The Future Looks Oddly Like the Past

There are already identity providers out there doing part of NSTIC: Google Authenticator, Microsoft Passport, FaceBook Connect, even OpenID fits into part of the ecosystem.  My first reaction after reading the NSTIC plan was that the Government was letting the pioneers in the online identity space take all the arrows and then swooping in to save the day with a standardized plan for the providers to do what they’ve been doing all along, plus some compatibility.  I was partially right: NSTIC is the Government looking at what already exists out in the market and helping to grow those capabilities by providing some support in the form of standardization and community management.  And that has been the plan all along, but it makes sense: would you rather have experts build the basic system and then have the Government adopt the core pieces as the technology standard, or would you like the Government to clean-room a standard and a certification scheme and push it out there for people to use?

Not RealID Not RealID Not RealID

Many people think that NSTIC is RealID by another name.  Aaron Titus did a pretty good job of debunking some of these hasty conclusions.  The interesting thing about NSTIC for me is that users can pick which identity or persona they use for a particular purpose.  In that sense, it actually gives the public a better set of tools for determining how they are represented online and ways to keep those personas separate.  For those of you who haven’t seen some of the organizations that were consulted on NSTIC, their numbers include the EFF and the Center for Democracy and Technology (BTW, donate some money to both of them, please).  A primary goal of NSTIC is to help website owners verify that their users are who they say they are while still giving users a set of privacy controls.

 

Stick in the Mud photo by jurvetson.

Now on to the use cases, I hope you like them:

I have a computer at home.  I go to many websites where I have my public persona, Rybolov the Hero, the Defender of all Things Good and Just.  That’s the identity that I use to log into my official FaceBook account, use teh Twitters, log into LinkedIn–basically any social networking and blog stuff where I want people to think I’m a good guy.

Then I use a separate, non-publicized NSTIC identity to do all of my online banking.  That way, if somebody manages to “gank” one of my social networking accounts, they don’t get any money from me.  If I want to get really paranoid, I can use a separate NSTIC ID for each account.

At night, I go creeping around trolling on the Intertubes.  Because I don’t want my “Dudley Do-Right” persona to be sullied by my dark, emoting, impish underbelly or to get an identity “pwned” that gives access to my bank accounts, I use the “Rybolov the Troll” NSTIC  ID.  Or hey, I go without using a NSTIC ID at all.  Or I use an identity from an identity provider in a region *cough Europe cough* that has stronger privacy regulations and is a couple of jurisdiction hops away but is still compatible with NSTIC-enabled sites because of standards.

Keys to Success for NSTIC:

Internet users have a choice: You pick how you present yourself to the site.

Website owners have a choice: You pick the NSTIC ID providers that you support.

Standards: NIST just formalizes and adopts the existing standards so that they’re not controlled by one party.  They use the word “ecosystem” in the NSTIC description a lot for a reason.



Posted in NIST, Technical | Comments Off on Realistic NSTIC

Clouds, FISMA, and the Lawyers

Posted April 26th, 2011 by

Interesting blog post on Microsoft’s TechNet, but the real gem is the case filing and summary from the DoJ (usual .pdf caveat applies).  Basically the Reader’s Digest Condensed Version is that the Department of Interior awarded a cloud services contract to Microsoft for email.  The award was protested by Google for a wide variety of reasons, you can go read the full thing for all the whinging.

But this is the interesting thing to me even though it’s mostly tangential to the award protest:

  • Google has an ATO under SP 800-37 from GSA for its Google Apps Premiere.
  • Google represents Google Apps for Government as having an ATO which, even though 99% of the security controls could be the same, is inaccurate as presented.
  • DOI rejected Google’s cloud because it had state and local (sidenote: does this include tribes?) tenants which might not have the same level of “security astuteness” as DOI.  Basically what they’re saying here is that if one of the tenants on Google’s cloud doesn’t know how to secure their data, it affects all the tenants.

So this is where I start thinking.  I thunk until my thinker was sore, and these are the conclusions I came to:

  • There is no such thing as “FISMA Certification”; there is a risk acceptance process for each cloud tenant.  Cloud providers make assertions about the common controls that they have built across all of their tenants.
  • Most people don’t understand what FISMA really means.  This is no shocker.
  • For the purposes of this award protest, the security bits do not matter because
  • This could all be solved in the wonk way by Google getting an ATO on their entire infrastructure and then no matter what product offerings they add on top of it, they just have to roll it into the “Master ATO”.
  • Even if the cloud infrastructure has an ATO, you still have to authorize the implementation on top of it given the types of data and the implementation details of your particular slice of that cloud.

And then there’s the “back story” consisting of the Cobell case and how Interior was disconnected from the Internet several times and for several years.  The Rybolov interpretation is that if Google’s government cloud potentially has tribes as tenants, it increases the risk to Interior (both in data security terms and just plain politically) beyond what they are willing to accept.

Obligatory Cloud photo by jonicdao.



Posted in FISMA, NIST, Outsourcing | 2 Comments »

Some Comments on SP 800-39

Posted April 6th, 2011 by

You should have seen Special Publication 800-39 (PDF file, also check out more info on Fismapedia.org) out by now.  Dan Philpott and I just taught a class on understanding the document and how it affects security managers out there doing their jobs on a daily basis.  While the information is still fresh in my head, I thought I would jot down some notes that might help everybody else.

The Good:

NIST is doing some good stuff here, trying to get IT Security and Information Assurance out of the “It’s the CISO’s problem, I have effectively outsourced any responsibility through the org chart” mindset and into more of what DoD calls “mission assurance”.  That is, how do we go from point-in-time vulnerabilities (i.e., things that can be scored with CVSS or tested through Security Test and Evaluation) to briefing executives on the risk to their organization (Department, Agency, or even business) coming from IT security problems?  It lays out an organization-wide risk management process and a framework (layer cakes within layer cakes) to share information up and down the organizational stack.  This is very good, and getting the mission/business/data/program owners to recognize their responsibilities is an awesome thing.

The Bad:

SP 800-39 is good in philosophy, with a general theme of the non-IT “business owners” taking ownership of risk, but when it comes to specifics, it raises more questions than it answers.  For instance, it defines a function known as the Risk Executive.  As practiced today by people who “get stuff done”, the Risk Executive is like a board made up of the Business Unit owners (possibly as the Authorizing Officials), the CISO, and maybe a Chief Risk Officer or other senior executives.  But without that context, and without asking around to find out what people are doing to get executive buy-in, the Risk Executive seems like a non sequitur.  There are other things like that, but I think the best summary is “Wow, this is great, now how do I take this guidance and execute a plan based on it?”

The Ugly:

I have a pretty simple yardstick for evaluating any kind of standard or guideline: will this be something that my auditor will understand and will it help them help me?  With 800-39, I think that it is written abstractly and that most auditor-folk would have a hard time translating that into something that they could audit for.  This is both a blessing and a curse, and the huge recommendation that I have is that you brief your auditor beforehand on what 800-39 means to them and how you’re going to incorporate the guidance.



Posted in FISMA, NIST, Risk Management, What Works | 5 Comments »
