DDoS and Elections

Posted May 10th, 2012

I’ve noticed a trend over the past six months: DDoS traffic associated with elections. A quick sampling of the news bears this out.

Last week it picked up again with the re-inauguration of Vladimir Putin.

And then yesterday there was Ustream and their awesome response, which in the Rybolov-paraphrased version reads something like: “We shall go on to the end. We shall fight in France, we shall fight on the Interblagosphere, we shall fight with growing confidence and growing strength in our blocking capabilities, we shall defend our videostreams, whatever the cost may be. We shall fight on the routers, we shall fight on the load balancers, we shall fight in the applications and in the databases, we shall fight by building our own Russian subsite; we shall never surrender!!!!1111” (Ref)

Afghanistan Presidential Elections 2004 photo by rybolov.

So why all this political activity? A few reasons I can point to:

  • Elections are a point in time. The site is critical for one day, and anything with such a short window is a good DDoS target.
  • DDoS is easy to do, especially for the Russians; some of them already have big botnets they’re using for other things.
  • Other DDoS campaigns. Chaotic Actors (Anonymous and their offshoots and factions) have demonstrated that DDoS has, at minimum, PR value and, at maximum, financial and political value.
  • Campaign sites are usually put up very quickly. They don’t have much supporting infrastructure or full-time, paid, professional staffing.
  • Elections are IRL Flash Mobs. Traffic to a campaign site increases slowly at first, then exponentially the closer you get to election day. This stresses whatever infrastructure is in place and exposes design decisions that seemed good at the time but don’t scale with the increased load.

So is this the future of political campaigns?  I definitely think it is.  Just like any other type of web traffic, as soon as somebody figures out how to use the technology for their benefit (information sharing => eCommerce => online banking => political fundraising), a generation later somebody else figures out how to deny that benefit.

How to combat election DDoS:

  • Have a plan. You know the site is going to get flooded the week of the election, so prepare accordingly. *ahem* Expect them.
  • Tune applications and do caching at every layer: database, application, web server, load balancer, content delivery network, etc.
  • Throw out the dynamic site. On election day, people just want to know a handful of things. Put those on a static version of the site and switch to that. Even if you have to publish by hand every 30 minutes, it’s better than taking a huge outage. (A small sketch of automating this follows the list.)
  • Manage the non-web traffic. SYN and UDP floods have been around for years and years and still work in some cases. Fighting them takes lots of bandwidth and something that does blocking, both of which point to a service provider that offers DDoS protection.
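To make the static-fallback idea concrete, here is a minimal sketch of that “publish every 30 minutes” loop. It’s Python, and everything in it is made up for illustration (campaign.example, the page list, and /var/www/static-fallback are placeholders, not anything from a real campaign); the point is just to snapshot the handful of pages people actually need so the web server or CDN can serve them without ever touching the application:

```python
#!/usr/bin/env python3
"""Minimal sketch: periodically snapshot dynamic pages to static HTML.

All URLs and paths are hypothetical. Serve the output directory from
your web server or CDN and flip traffic to it when the flood starts.
"""
import time
import urllib.request
from pathlib import Path

PAGES = {
    # dynamic page                            -> static file to publish
    "https://campaign.example/where-to-vote": "where-to-vote.html",
    "https://campaign.example/candidate-bio": "candidate-bio.html",
    "https://campaign.example/donate":        "donate.html",
}
OUTPUT_DIR = Path("/var/www/static-fallback")
INTERVAL_SECONDS = 30 * 60  # the "every 30 minutes" from the list above

def snapshot_once() -> None:
    """Fetch each dynamic page once and write it out as a static file."""
    OUTPUT_DIR.mkdir(parents=True, exist_ok=True)
    for url, filename in PAGES.items():
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                (OUTPUT_DIR / filename).write_bytes(resp.read())
        except OSError as err:
            # Keep serving the last good snapshot rather than dying.
            print(f"snapshot failed for {url}: {err}")

if __name__ == "__main__":
    while True:
        snapshot_once()
        time.sleep(INTERVAL_SECONDS)
```

On election day you point the load balancer at the static directory, and the databases never see the flood.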

It’s going to be an interesting November.



Posted in Cyberwar, DDoS, Hack the Planet

FedRAMP: It’s Here but Not Yet Here

Posted December 12th, 2011

Contrary to what you might hear this week in the trade press, FedRAMP is not fully unveiled, although there was some much-awaited progress: a memo came out from the administration (PDF caveat). Basically, it lays down the authority and responsibility for the Program Management Office and sets some timelines. This is good, and we needed it a year and a half ago.

However, people need to stop talking about how FedRAMP has solved all their problems because the entire program isn’t here yet.  Until you have a process document and a catalog of controls to evaluate, you don’t know how the program is going to help or hinder you, so all the press about it is speculation.



Posted in DISA, FISMA, NIST, Outsourcing, Risk Management

The “Off The Record” Track

Posted November 21st, 2011

So while I was at some conferences over the past couple of months, I had an awesome idea while sitting in a panel about data breaches, especially breach notification. While streaming conferences is pretty awesome for most content, I keep thinking that, as an industry, we need the exact opposite: a conference track that is completely off-the-record.

Here in DC when we do smaller training sessions, we invoke the Chatham House Rule.  That is, the discussion is for non-attribution.  There are several reasons behind this:

  • You don’t have to worry (too much, anyway) about vendors in attendance selling you something
  • It won’t end up in the press
  • It gets real information to people instead of things that are “fit for public consumption”

My local area has a hackers association (no linkie; if you have minimal skill you can find it) that meets to talk about mostly technical stuff and what folks are working on. I find that, more and more often, when I do a talk there I do it “Off the Record” for a wide variety of reasons:

  • I don’t want the attackers to get more effective
  • I have half-baked ideas where I want/need feedback on whether they are completely off-base
  • The subject matter is in a legal gray area and I’m not a lawyer
  • I talk “on the record” all day every day about the same things
  • I can “test-drive” presentation material to see how it works
  • I can show nuts and bolts

So the point of all this is that maybe we need to start having more frank discussions about what the bad guys are doing “in the wild” if we want to stop them, and that involves talking with peers from other companies in the same industry to see what they are getting hit with.

Chatham House Rule photo by markhillary.



Posted in Public Policy, Speaking, What Doesn't Work, What Works

DHS is Looking for a CISO

Posted November 4th, 2011

Job announcement is here.  Share with anybody you think can do it.



Posted in FISMA, NIST, Odds-n-Sods

Realistic NSTIC

Posted August 10th, 2011

OK, NSTIC has been out a couple of months now, with the usual “ZOMG it’s RealID all over again” worry-mongers raising their heads.

So we’re going to go through what NSTIC is and isn’t and some “colorful” (or “off-color” depending on your opinion) use cases for how I would (hypothetically, of course) use an Identity Provider under NSTIC.

The Future Looks Oddly Like the Past

There are already identity providers out there doing part of NSTIC: Google Authenticator, Microsoft Passport, FaceBook Connect; even OpenID fits into part of the ecosystem. My first reaction after reading the NSTIC plan was that the Government was letting the pioneers in the online identity space take all the arrows and then swooping in to save the day with a standardized plan for the providers to do what they’ve been doing all along, plus some compatibility. I was partially right: NSTIC is the Government looking at what already exists out in the market and helping to grow those capabilities by providing support in the form of standardization and community management. That’s been the plan all along, and it makes sense: would you rather have experts build the basic system and then have the Government adopt the core pieces as the technology standard, or would you rather have the Government clean-room a standard and a certification scheme and push it out there for people to use?

Not RealID Not RealID Not RealID

Many people think that NSTIC is RealID by another name. Aaron Titus did a pretty good job of debunking some of these hasty conclusions. The interesting thing about NSTIC for me is that users can pick which identity or persona they use for a particular purpose. In that sense, it actually gives the public a better set of tools for determining how they are represented online and ways to keep those personas separate. For those of you who haven’t seen some of the organizations that were consulted on NSTIC, their numbers include the EFF and the Center for Democracy and Technology (BTW, donate some money to both of them, please). A primary goal of NSTIC is to help website owners verify that their users are who they say they are while still giving users a set of privacy controls.

Stick in the Mud photo by jurvetson.

Now on to the use cases; I hope you like them:

I have a computer at home.  I go to many websites where I have my public persona, Rybolov the Hero, the Defender of all Things Good and Just.  That’s the identity that I use to log into my official FaceBook account, use teh Twitters, log into LinkedIn–basically any social networking and blog stuff where I want people to think I’m a good guy.

Then I use a separate, non-publicized NSTIC identity to do all of my online banking.  That way, if somebody manages to “gank” one of my social networking accounts, they don’t get any money from me.  If I want to get really paranoid, I can use a separate NSTIC ID for each account.

At night, I go creeping around trolling on the Intertubes. Because I don’t want my “Dudley Do-Right” persona to be sullied by my dark, emoting, impish underbelly, or to get an identity “pwned” that gives access to my bank accounts, I use the “Rybolov the Troll” NSTIC ID. Or hey, I go without using a NSTIC ID at all. Or I use an identity from an identity provider in a region *cough Europe cough* that has stronger privacy regulations and is a couple of jurisdiction hops away, but is still compatible with NSTIC-enabled sites because of standards.
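To show what that persona separation might look like in practice, here is a toy sketch in Python. Everything in it is hypothetical (NSTIC defines no such API, and the provider names are invented); it only illustrates the one rule that matters, which is that the user, not the website, decides which identity gets presented where:

```python
# Hypothetical sketch: one user, several NSTIC-style personas, and a
# user-controlled table of which persona is presented to which site.
# None of these provider names or APIs exist; illustration only.

PERSONAS = {
    # persona              -> identity provider that vouches for it
    "Rybolov the Hero":   "us-social-idp.example",     # social networking and blogs
    "Rybolov the Banker": "us-financial-idp.example",  # online banking only
    "Rybolov the Troll":  "eu-privacy-idp.example",    # stronger privacy regs, jurisdiction hops away
}

# The user, not the website, controls this mapping.
SITE_POLICY = {
    "facebook.com":        "Rybolov the Hero",
    "linkedin.com":        "Rybolov the Hero",
    "mybank.example":      "Rybolov the Banker",
    "troll-forum.example": "Rybolov the Troll",
}

def pick_identity(site: str):
    """Return (persona, provider) for a site, or None to present no ID."""
    persona = SITE_POLICY.get(site)
    if persona is None:
        return None  # going without a NSTIC ID at all is also a valid choice
    return persona, PERSONAS[persona]

print(pick_identity("mybank.example"))       # ('Rybolov the Banker', 'us-financial-idp.example')
print(pick_identity("random-blog.example"))  # None: stay anonymous
```

The banking persona never shows up on a social site, so somebody who “ganks” my FaceBook account gets exactly nothing at my bank.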

Keys to Success for NSTIC:

Internet users have a choice: You pick how you present yourself to the site.

Website owners have a choice: You pick the NSTIC ID providers that you support.

Standards: NIST just formalizes and adopts the existing standards so that they’re not controlled by one party.  They use the word “ecosystem” in the NSTIC description a lot for a reason.



Posted in NIST, Technical

Clouds, FISMA, and the Lawyers

Posted April 26th, 2011

Interesting blog post on Microsoft’s TechNet, but the real gem is the case filing and summary from the DoJ (usual .pdf caveat applies). The Reader’s Digest Condensed Version is that the Department of the Interior awarded a cloud services contract to Microsoft for email. The award was protested by Google for a wide variety of reasons; you can go read the full thing for all the whinging.

But this is the interesting thing to me even though it’s mostly tangential to the award protest:

  • Google has an ATO under SP 800-37 from GSA for its Google Apps Premiere.
  • Google represents Google Apps for Government as having an ATO which, even though 99% of the security controls could be the same, is inaccurate as presented.
  • DOI rejected Google’s cloud because it had state and local (sidenote: does this include tribes?) tenants which might not have the same level of “security astuteness” as DOI.  Basically what they’re saying here is that if one of the tenants on Google’s cloud doesn’t know how to secure their data, it affects all the tenants.

So this is where I start thinking.  I thunk until my thinker was sore, and these are the conclusions I came to:

  • There is no such thing as “FISMA Certification”; there is a risk acceptance process for each cloud tenant. Cloud providers make assertions about which common controls they have built across all of their tenants.
  • Most people don’t understand what FISMA really means.  This is no shocker.
  • For the purposes of this award protest, the security bits do not matter, because (as noted above) they’re mostly tangential to the protest itself.
  • This could all be solved in the wonk way by Google getting an ATO on their entire infrastructure and then no matter what product offerings they add on top of it, they just have to roll it into the “Master ATO”.
  • Even if the cloud infrastructure has an ATO, you still have to authorize the implementation on top of it, given the types of data and the implementation details of your particular slice of that cloud. (A toy sketch of this inheritance model follows the list.)
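As a toy illustration of that “Master ATO” inheritance, here is a sketch in Python. The control IDs are real 800-53 identifiers, but the scenario and numbers are invented; the takeaway is that inheriting the provider’s common controls still leaves a tenant-specific remainder that somebody has to authorize:

```python
# Toy model of the "Master ATO" idea: the provider asserts common controls
# once across the whole infrastructure, and each tenant system still needs
# its own authorization for whatever sits on top. All names are made up.

PROVIDER_COMMON_CONTROLS = {"PE-3", "CP-9", "SC-7", "AU-2"}  # inherited by every tenant

class TenantSystem:
    def __init__(self, name, required_controls, implemented_controls):
        self.name = name
        self.required = set(required_controls)
        self.implemented = set(implemented_controls)

    def gaps(self):
        """Controls the tenant must still satisfy itself; inheritance from
        the provider's Master ATO only covers the common set."""
        return self.required - self.implemented - PROVIDER_COMMON_CONTROLS

    def ready_for_ato(self):
        return not self.gaps()

doi_email = TenantSystem(
    "DOI email on a hosted cloud",
    required_controls={"PE-3", "SC-7", "AC-2", "IA-2", "AU-2"},
    implemented_controls={"AC-2"},
)
print(doi_email.gaps())           # {'IA-2'}: still the tenant's problem
print(doi_email.ready_for_ato())  # False: an authorized cloud != an authorized system
```

Which is the wonk way of saying that an ATO on the infrastructure doesn’t relieve the tenant of authorizing their own slice.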

And then there’s the “back story”: the Cobell case and how Interior was disconnected from the Internet several times, for several years. The Rybolov interpretation is that if Google’s government cloud potentially has tribes as a tenant, it increases the risk (both data-security and just plain political) to Interior beyond what they are willing to accept.

Obligatory Cloud photo by jonicdao.



Posted in FISMA, NIST, Outsourcing
