FedRAMP: It’s Here but Not Yet Here

Posted December 12th, 2011 by

Contrary to what you might hear this week in the trade press, FedRAMP is not fully unveiled, although there was some much-awaited progress. A memo came out from the administration (PDF caveat). Basically, it lays down the authority and responsibility for the Program Management Office and sets some timelines. This is good, and we needed it a year and a half ago.

However, people need to stop talking about how FedRAMP has solved all their problems because the entire program isn’t here yet.  Until you have a process document and a catalog of controls to evaluate, you don’t know how the program is going to help or hinder you, so all the press about it is speculation.

Similar Posts:

Posted in DISA, FISMA, NIST, Outsourcing, Risk Management | No Comments »

DDoS Planning: Business Continuity with a Twist

Posted August 17th, 2011 by

So since I’ve semi-officially been granted the title of “The DDoS Kid” after some of the incident response, analysis, and talks that I’ve done, I’m starting to get asked a lot about how much the average DDoS costs the targeted organization.  I have some ideas on this, but the simplest way is to recycle Business Continuity/Disaster Recovery figures but with some small twists.


  • Plan on a 4-day attack.  A typical attack duration is 2-7 days.
  • Consider an attack on the “main” (www) site and anything else that makes money (shopping cart, product pages)


  • Downtime: one day’s worth of downtime for both peak times (for most eCommerce sites, that’s Thanksgiving to January 5th) and low-traffic times, multiplied by (attack duration).
  • Bandwidth: For services that charge by the bit or CPU cycle, such as cloud computing or some ISP services, the direct cost of the usage bursting.  The cost per bit/CPU cycle/$foo is available from the service provider; multiply your average rate for peak times by 1,000 (small attack) or 10,000 (large attack), then by (attack duration) worth of usage.  This is the only big difference in cost from BCP/DR data.
  • Mitigation Services:  Figure $5K to $10K for a DDoS mitigation service x (duration of attack).


  • Increased callcenter load: A percentage (10% as a starting guess) of user calls to the callcenter x (average dollar cost per call) x (attack duration).
  • Increased physical “storefront” visits: A percentage (10%) of users now have to go to a physical location x (attack duration).
  • Customer churn: customer loss due to frustration.  Figure 2-4% customer loss x (attack duration).

Brand damage costs, which vary from industry to industry and attack to attack:

  • Increased marketing budget: Percentage increase in marketing budget.  Possible starting value is 5%.
  • Increased customer retention costs: Percentage increase in customer retention costs.  Possible starting value is 10%.

Note that it’s reasonably easy to create example costs for small, medium, and large attacks and do planning around a medium-sized attack.
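The figures above can be sketched as a simple cost model. Every input below is a hypothetical placeholder, not a recommendation; substitute your own BCP/DR numbers:

```python
# Rough DDoS cost estimator built from the categories above.
# All default values are illustrative placeholders.

def ddos_cost(days=4.0,                       # typical attack runs 2-7 days
              daily_downtime_revenue=100_000.0,  # one day's revenue at risk
              daily_bandwidth_cost=100.0,        # normal per-day usage cost
              burst_multiplier=1_000,            # 1,000x small attack; 10,000x large
              mitigation_per_day=7_500.0,        # $5K-$10K/day mitigation service
              calls_per_day=500, call_shift=0.10, cost_per_call=6.0,
              customers=20_000, churn_rate=0.03, customer_value=50.0):
    downtime = daily_downtime_revenue * days
    bandwidth = daily_bandwidth_cost * burst_multiplier * days
    mitigation = mitigation_per_day * days
    callcenter = calls_per_day * call_shift * cost_per_call * days
    churn = customers * churn_rate * customer_value * days
    return downtime + bandwidth + mitigation + callcenter + churn
```

Running this with small/medium/large values for `burst_multiplier` and `days` gives the three planning scenarios mentioned above.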

However, while we can recycle BCP/DR figures for the outage itself, mitigation of the attack is different:

  • For high-volume attacks, you will need to rely on service providers for mitigation simply because of their capacity.
  • Fail-over to a secondary site means that you now have two sites that are overwhelmed.
  • Restoration of service after the attack is more like recovering from a hacking attack than resuming service at the primary datacenter.


Posted in DDoS, Risk Management, Technical | No Comments »

Some Comments on SP 800-39

Posted April 6th, 2011 by

You should have seen Special Publication 800-39 (PDF file, also check out more info on Fismapedia.org) out by now.  Dan Philpott and I just taught a class on understanding the document and how it affects security managers out there doing their jobs on a daily basis.  While the information is still fresh in my head, I thought I would jot down some notes that might help everybody else.

The Good:

NIST is doing some good stuff here, trying to get IT Security and Information Assurance out of “It’s the CISO’s problem, I have effectively outsourced any responsibility through the org chart” and into more of what DoD calls “mission assurance”.  I.e., how do we go from point-in-time vulnerabilities (things that can be scored with CVSS or tested through Security Test and Evaluation) to briefing executives on the risk to their organization (Department, Agency, or even business) coming from IT security problems?  It lays out an organization-wide risk management process and a framework (layer cakes within layer cakes) to share information up and down the organizational stack.  This is very good, and getting the mission/business/data/program owners to recognize their responsibilities is an awesome thing.

The Bad:

While SP 800-39 is good in philosophy, with a general theme of the non-IT “business owners” taking ownership of risk, when it comes to specifics it raises more questions than it answers.  For instance, it defines a function known as the Risk Executive.  As practiced today by people who “get stuff done”, the Risk Executive is like a board of the Business Unit owners (possibly as the Authorizing Officials), the CISO, and maybe a Chief Risk Officer or other senior executives.  But without that context, and without asking around to find out what people are doing to get executive buy-in, the Risk Executive seems like a non sequitur.  There are other things like that, but I think the best summary is “Wow, this is great, now how do I take this guidance and execute a plan based on it?”

The Ugly:

I have a pretty simple yardstick for evaluating any kind of standard or guideline: will this be something that my auditor will understand and will it help them help me?  With 800-39, I think that it is written abstractly and that most auditor-folk would have a hard time translating that into something that they could audit for.  This is both a blessing and a curse, and the huge recommendation that I have is that you brief your auditor beforehand on what 800-39 means to them and how you’re going to incorporate the guidance.


Posted in FISMA, NIST, Risk Management, What Works | 5 Comments »

Reinventing FedRAMP

Posted February 15th, 2011 by

“Cloud computing is about gracefully losing control while maintaining accountability even if the operational responsibility falls upon one or more third parties.”
–CSA Security Guidance for Critical Areas of Focus in Cloud Computing V2.1

Now enter FedRAMP.  FedRAMP is a way to share Assessment and Authorization information for a cloud provider with its Government tenants.  In case you’re not “in the know”, you can go check out the draft process and supporting templates at FedRAMP.gov.  So far a good idea, and I really do support what’s going on with FedRAMP, except somewhere along the line we went astray, because we tried to kluge doctrine that most people understand over the top of cloud computing, which most people don’t really understand.

I’ve already done my part to submit comments officially, I just want to put some ideas out there to keep the conversation going. As I see it, these are/should be the goals for FedRAMP:

  • Delineation of responsibilities between cloud provider and cloud tenant.  Also knowing where there are gaps.
  • Transparency in operations.  Understanding how the cloud provider does their security parts.
  • Transparency in risk.  Know what you’re buying.
  • Build maturity in cloud providers’ security program.
  • Help cloud providers build a “Governmentized” security program.

So now for the juicy part: how would I do a “clean room” implementation of FedRAMP on Planet Rybolov, where “All the Authorizing Officials are informed, the Auditors are helpful, and every ISSO is above average”?  This is my “short list” of how to get the job done:

  • Authorization: Sorry, not going to happen on Planet Rybolov.  At least, authorization by FedRAMP, mostly because it’s a cheat for the tenant agencies–they should be making their own risk decisions based on risk, cost, and benefit.  Acceptance of risk is a tenant-specific thing based on the data types and missions being moved into the cloud, baseline security provided by the cloud provider, the security features of the products/services purchased, and the tenant’s specific configuration on all of the above.  However, FedRAMP can support that by helping the tenant agency by being a repository of information.
  • 800-53 controls: A cloud service provider manages a set of common controls across all of their customers.  Really what the tenant needs to know is what is not provided by the cloud service provider.  A simple RACI matrix works here beautifully, as does the phrase “This control is not applicable because XXXXX is not present in the cloud infrastructure”.  This entire approach of “build one set of controls definitions for all clouds” does not really work because not all clouds and cloud service providers are the same, even if they’re the same deployment model.
  • Tenant Responsibilities: Even though it’s in the controls matrix, there needs to be an Acceptable Use Policy for the cloud environment.  A message to providers: this is needed to keep you out of trouble because it limits the potential impacts to yourself and the other cloud tenants.  Good examples would be “Do not put classified data on my unclassified cloud”.
  • Use Automation: CloudAudit is the “how” for FedRAMP.  It provides a structure to query a cloud (or the FedRAMP PMO) to find out compliance and security management information.  Using a tool, you could query for a specific control or get documents, policy statements, or even SCAP assessment content.
  • Changing Responsibilities: Things change.  As a cloud provider matures, releases new products, or moves up and down the SPI stack ({Software|Platform|Infrastructure} as a Service), the balance of responsibilities changes.  There needs to be a vehicle to disseminate these changes.  Normally in the IA world we do this with a Plan of Actions and Milestones, but from the viewpoint of the cloud provider, this is more along the lines of a release schedule and/or roadmap.  Not that I’m personally signing up for this, but a quarterly/semi-annual tenant agency security meeting would be a good way to get this information out.

Then there is the special interest comment:  I’ve heard some rumblings (and read some articles, shame on you security industry press for republishing SANS press releases) about how FedRAMP would be better accomplished by using the 20 Critical Security Controls.  Honestly, this is far from the truth: a set of controls scoped to the modern enterprise (General Support System supporting end users) or project (Major Application) does not scale to an infrastructure-and-server cloud. While it might make sense to use 20 CSC in other places (agency-wide controls), please do your part to squash this idea of using it for cloud computing whenever and wherever you see it.


Ramp photo by ell brown.


Posted in FISMA, Risk Management, What Works | 2 Comments »

Interviewed for the “What It’s Like” Series for CSOOnline

Posted November 23rd, 2010 by

Joan Goodchild interviewed me about some of my experiences in the big sandbox and how I was good enough at avoiding IEDs to make it there and home again–an abstract form of risk management. Go check it out.  And while you’re on the subject, or for visuals to go along with the story, check out my Afghanistan set on Flickr.


Posted in Army, Risk Management | 1 Comment »

Akamai Government Symposium November 10th

Posted October 12th, 2010 by

I’ll be speaking at Akamai’s Government Symposium on November 10th on the security of our platform and incorporating us into a Government IT environment: risk management, regulation, compliance, and delineation of responsibilities.  If you’re interested in Web Security, Government, FISMA, and/or Cloud Computing, it should be of interest to you–even if you’re working in State and Local Government.

Event page is here.

Disclaimer: obviously I work for Akamai.  Nothing I blog about represents the official position of my employer.  From time to time, Akamai even claims me.  =)


Posted in Risk Management, Speaking | 1 Comment »

