FedRAMP: It’s Here but Not Yet Here

Posted December 12th, 2011

Contrary to what you might hear this week in the trade press, FedRAMP is not fully unveiled, although there has been some much-awaited progress.  A memo came out from the Administration (PDF caveat) that lays down the authority and responsibility for the Program Management Office and sets some timelines.  This is good, and we needed it a year and a half ago.

However, people need to stop talking as if FedRAMP has solved all their problems, because the entire program isn’t here yet.  Until you have a process document and a catalog of controls to evaluate, you don’t know whether the program is going to help or hinder you, so all the press about it is speculation.



Posted in DISA, FISMA, NIST, Outsourcing, Risk Management | No Comments »

Clouds, FISMA, and the Lawyers

Posted April 26th, 2011

Interesting blog post on Microsoft’s TechNet, but the real gem is the case filing and summary from the DoJ (usual .pdf caveat applies).  The Reader’s Digest Condensed Version is that the Department of the Interior awarded a cloud email services contract to Microsoft.  Google protested the award for a wide variety of reasons; you can go read the full filing for all the whinging.

But this is the interesting thing to me even though it’s mostly tangential to the award protest:

  • Google has an ATO under SP 800-37 from GSA for its Google Apps Premier offering.
  • Google represents Google Apps for Government as having an ATO, which, even though 99% of the security controls could be the same, is inaccurate as presented.
  • DOI rejected Google’s cloud because it had state and local (sidenote: does this include tribes?) tenants, which might not have the same level of “security astuteness” as DOI.  Basically, what they’re saying here is that if one of the tenants on Google’s cloud doesn’t know how to secure their data, it affects all the tenants.

So this is where I start thinking.  I thunk until my thinker was sore, and these are the conclusions I came to:

  • There is no such thing as “FISMA Certification”; there is a risk acceptance process for each cloud tenant.  Cloud providers make assertions about the common controls they have built across all of their tenants.
  • Most people don’t understand what FISMA really means.  This is no shocker.
  • For the purposes of this award protest, the security bits do not matter because
  • This could all be solved in the wonk way by Google getting an ATO on its entire infrastructure; then, no matter what product offerings it adds on top, it just rolls them into the “Master ATO”.
  • Even if the cloud infrastructure has an ATO, you still have to authorize the implementation on top of it given the types of data and the implementation details of your particular slice of that cloud.

And then there’s the “back story”, consisting of the Cobell case and how Interior was disconnected from the Internet several times, for years at a stretch.  The Rybolov interpretation is that if Google’s government cloud potentially has tribes as tenants, it increases the risk to Interior (both data-security risk and just plain political risk) beyond what they are willing to accept.

Obligatory Cloud photo by jonicdao.



Posted in FISMA, NIST, Outsourcing | 2 Comments »

Some Comments on SP 800-39

Posted April 6th, 2011

You should have seen Special Publication 800-39 (PDF file; also check out more info on Fismapedia.org) out by now.  Dan Philpott and I just taught a class on understanding the document and how it affects security managers out there doing their jobs on a daily basis.  While the information is still fresh in my head, I thought I would jot down some notes that might help everybody else.

The Good:

NIST is doing some good stuff here, trying to get IT Security and Information Assurance out of the “It’s the CISO’s problem, I have effectively outsourced any responsibility through the org chart” mindset and into more of what DoD calls “mission assurance”.  IE, how do we go from point-in-time vulnerabilities (ie, things that can be scored with CVSS or tested through Security Test and Evaluation) to briefing executives on the risk to their organization (Department, Agency, or even business) coming from IT security problems?  It lays out an organization-wide risk management process and a framework (layer cakes within layer cakes) to share information up and down the organizational stack.  This is very good, and getting the mission/business/data/program owners to recognize their responsibilities is an awesome thing.

The Bad:

SP 800-39 is good in philosophy, with a general theme of the non-IT “business owners” taking ownership of risk, but when it comes to specifics, it raises more questions than it answers.  For instance, it defines a function known as the Risk Executive.  As practiced today by people who “get stuff done”, the Risk Executive is more like a board made up of the Business Unit owners (possibly as the Authorizing Officials), the CISO, and maybe a Chief Risk Officer or other senior executives.  But without that context, and without asking around to find out what people are doing to get executive buy-in, the Risk Executive seems like a bit of a non sequitur.  There are other things like that, but I think the best summary is “Wow, this is great, now how do I take this guidance and execute a plan based on it?”

The Ugly:

I have a pretty simple yardstick for evaluating any kind of standard or guideline: will this be something that my auditor will understand, and will it help them help me?  With 800-39, I think it is written so abstractly that most auditor-folk would have a hard time translating it into something they could audit against.  This is both a blessing and a curse, and the huge recommendation that I have is that you brief your auditor beforehand on what 800-39 means to them and how you’re going to incorporate the guidance.



Posted in FISMA, NIST, Risk Management, What Works | 5 Comments »

Reinventing FedRAMP

Posted February 15th, 2011

“Cloud computing is about gracefully losing control while maintaining accountability even if the operational responsibility falls upon one or more third parties.”
–CSA Security Guidance for Critical Areas of Focus in Cloud Computing V2.1

Now enter FedRAMP.  FedRAMP is a way to share Assessment and Authorization information for a cloud provider with its Government tenants.  In case you’re not “in the know”, you can go check out the draft process and supporting templates at FedRAMP.gov.  So far, so good, and I really do support what’s going on with FedRAMP, except that somewhere along the line we went astray, because we tried to kluge doctrine that most people only sort of understand over the top of cloud computing, which most people also don’t really understand.

I’ve already done my part and submitted comments officially; here I just want to put some ideas out there to keep the conversation going.  As I see it, these are (or should be) the goals for FedRAMP:

  • Delineation of responsibilities between cloud provider and cloud tenant.  Also knowing where there are gaps.
  • Transparency in operations.  Understanding how the cloud provider does their security parts.
  • Transparency in risk.  Know what you’re buying.
  • Build maturity in cloud providers’ security programs.
  • Help cloud providers build a “Governmentized” security program.

So now for the juicy part: how would I do a “clean room” implementation of FedRAMP on Planet Rybolov, where “all the Authorizing Officials are informed, the Auditors are helpful, and every ISSO is above average”?  This is my “short list” of how to get the job done:

  • Authorization: Sorry, not going to happen on Planet Rybolov.  At least, not authorization by FedRAMP, mostly because it’s a cheat for the tenant agencies: they should be making their own decisions based on risk, cost, and benefit.  Acceptance of risk is a tenant-specific thing based on the data types and missions being moved into the cloud, the baseline security provided by the cloud provider, the security features of the products/services purchased, and the tenant’s specific configuration on top of all of the above.  However, FedRAMP can support that by serving as a repository of information for the tenant agencies.
  • 800-53 controls: A cloud service provider manages a set of common controls across all of its customers.  Really, what the tenant needs to know is what is not provided by the cloud service provider.  A simple RACI matrix works beautifully here (see the RACI sketch after this list), as does the phrase “This control is not applicable because XXXXX is not present in the cloud infrastructure”.  The whole approach of “build one set of controls definitions for all clouds” does not really work, because not all clouds and cloud service providers are the same, even if they use the same deployment model.
  • Tenant Responsibilities: Even though it’s in the controls matrix, there needs to be an Acceptable Use Policy for the cloud environment.  A message to providers: this is needed to keep you out of trouble, because it limits the potential impacts to yourself and the other cloud tenants.  A good example would be “Do not put classified data on my unclassified cloud”.
  • Use Automation: CloudAudit is the “how” for FedRAMP.  It provides a structure to query a cloud (or the FedRAMP PMO) for compliance and security management information.  Using a tool, you could query for a specific control or retrieve documents, policy statements, or even SCAP assessment content (a rough query sketch also follows this list).
  • Changing Responsibilities: Things change.  As a cloud provider matures, releases new products, or moves up and down the SPI stack ({Software|Platform|Infrastructure} as a Service), the balance of responsibilities changes.  There needs to be a vehicle to disseminate these changes.  Normally in the IA world we do this with a Plan of Action and Milestones, but from the viewpoint of the cloud provider, this is more along the lines of a release schedule and/or roadmap.  Not that I’m personally signing up for this, but a quarterly or semi-annual tenant-agency security meeting would be a good way to get this information out.
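
Since I mentioned a RACI matrix for the 800-53 controls, here is a minimal sketch in Python of what one could look like.  The control IDs are real 800-53 controls, but the provider/tenant assignments below are purely illustrative assumptions on my part, not an official FedRAMP allocation.

    # Minimal sketch of a provider/tenant responsibility matrix for a handful
    # of 800-53 controls.  The assignments are illustrative assumptions only,
    # not an official FedRAMP allocation.
    RESPONSIBILITY = {
        # control: RACI-style roles for provider and tenant
        "PE-3 Physical Access Control": {"provider": "Responsible", "tenant": "Informed"},
        "CP-9 System Backup":           {"provider": "Responsible", "tenant": "Consulted"},
        "AC-2 Account Management":      {"provider": "Consulted",   "tenant": "Responsible"},
        "AT-2 Awareness Training":      {"provider": "Not Applicable", "tenant": "Responsible"},
    }

    def tenant_gaps(matrix):
        """Return the controls the tenant still owns, i.e. what is NOT
        provided by the cloud service provider."""
        return [ctl for ctl, roles in matrix.items()
                if roles["tenant"] == "Responsible"]

    if __name__ == "__main__":
        for ctl in tenant_gaps(RESPONSIBILITY):
            print("Tenant must implement:", ctl)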

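And for the automation bullet, here is a rough sketch of what “query a cloud for its compliance artifacts” could look like.  The host, namespace, and file names are hypothetical placeholders I made up for illustration; go read the actual CloudAudit spec for the real naming rules.

    # Sketch of pulling compliance artifacts from a CloudAudit-style namespace.
    # The host, namespace, and manifest name below are made up for illustration;
    # the real CloudAudit spec defines its own layout and asset names.
    import requests

    BASE = "https://cloud.example.gov/.well-known/cloudaudit"   # hypothetical host
    NAMESPACE = "org.example.nist-800-53"                       # hypothetical namespace

    def fetch_control_evidence(control_id):
        """Fetch whatever the provider publishes for one control (a manifest,
        a policy document, or SCAP content)."""
        url = f"{BASE}/{NAMESPACE}/{control_id.lower()}/manifest.xml"
        resp = requests.get(url, timeout=10)
        resp.raise_for_status()
        return resp.content

    if __name__ == "__main__":
        evidence = fetch_control_evidence("AC-2")
        print(f"Retrieved {len(evidence)} bytes of evidence for AC-2")
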
Then there is the special-interest comment: I’ve heard some rumblings (and read some articles; shame on you, security industry press, for republishing SANS press releases) about how FedRAMP would be better accomplished by using the 20 Critical Security Controls.  Honestly, this is far from the truth: a set of controls scoped to the modern enterprise (a General Support System supporting end users) or project (a Major Application) does not scale to an infrastructure-and-server cloud.  While it might make sense to use the 20 CSC in other places (such as agency-wide controls), please do your part to squash the idea of using them for cloud computing whenever and wherever you see it.


Ramp photo by ell brown.



Posted in FISMA, Risk Management, What Works | 2 Comments »

FedRAMP is Officially Out

Posted November 3rd, 2010

Go check it out.  The project management folks have already been jokingly grilled numerous times for being ~2-3 months late.

However, comments are being accepted until December 2nd.  Do yourselves a favor and submit some comments.



Posted in FISMA, NIST | 2 Comments »

Engagement Economics and Security Assessments

Posted September 29th, 2010

Ah yes, I’ve explained this about a hundred times this week (at that thing that I can’t blog about, but @McKeay, @MikD, and @Sawaba were there, so fill in the gaps), and I thought I should get this down somewhere.

The three factors that determine how much money you will make (or lose) in a consulting practice (a worked sketch follows the list):

  • Bill Rate: how much you charge your customers.  This is pretty familiar to most folks.
  • Utilization: what percentage of your employees’ time is spent being billable.  The trick here is if you can get them to work 50 hours/week because then they’re at 125% utilization and suspiciously close to “uncompensated overtime”, a concept I’ll maybe explain in the future.
  • Leverage: the ratio of bosses to worker bees.  More experienced people are more expensive to have as employees.  Usually a company loses money on these folks because the bill rate is less than what they are paid.  Conversely, the biggest margin is on work done by junior folks.  A highly leveraged ratio is 1:25, a lowly leveraged ratio is 1:5 or even less.
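
To make those three knobs concrete, here is a back-of-the-napkin sketch in Python.  All of the rates, costs, and utilization numbers below are invented for illustration; plug in your own.

    # Back-of-the-napkin consulting economics.  All numbers are invented
    # for illustration only.
    def annual_margin(bill_rate, utilization, loaded_cost, billable_base=2080):
        """Margin per consultant per year: what they bill minus what they cost."""
        revenue = bill_rate * billable_base * utilization
        return revenue - loaded_cost

    # A senior person (the "1" in a 1:5 leverage ratio) vs. a junior worker bee.
    senior = annual_margin(bill_rate=250, utilization=0.50, loaded_cost=260_000)
    junior = annual_margin(bill_rate=150, utilization=0.85, loaded_cost=120_000)

    print(f"Senior margin/year: ${senior:,.0f}")   # often near zero or negative
    print(f"Junior margin/year: ${junior:,.0f}")   # where the profit lives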

Site Assessment photo by punkin3.14.

And then we have the security assessment business, and security consulting in general.  Let’s face it, security assessments are a commodity market.  Since most competitors in the assessment space charge about the same amount (or at least relatively close to each other), that implies some things about the profitability of an assessment engagement:

  • Assuming a Firm Fixed Price for the engagement, the Effective Bill Rate is inversely proportional to the number of hours you spend on the project.  IE, $30K/60 hours = $500/hour and $30K/240 hours = $125/hour (see the sketch after this list).  I know this is a shocker, but the less time you spend on an assessment, the bigger your margin, but you would also expect the quality to suffer.
  • Highly leveraged engagements let you keep margin but over time the quality suffers.  1:25 is incredibly lousy for quality but awesome for profit.  If you start looking at security assessment teams, they’re usually 1:4 or 1:5 which means that the assessment vendor is getting squeezed on margin.
  • Keeping your people engaged as much as possible gives you that extra bit of margin.  Of course, if they’re spending 100% of their time on the road, they’ll get burned out really quickly.  This is not good for both staff longevity (and subsequent recruiting costs) and for work quality.
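
And here is the firm-fixed-price arithmetic from the first bullet above as a tiny script, just to show how fast the effective rate erodes as hours pile up.  The $30K price and the hour counts are the numbers from the bullet; everything else is illustrative.

    # Effective bill rate on a firm-fixed-price assessment: the price is fixed,
    # so every extra hour spent erodes the rate (and the margin).
    def effective_bill_rate(fixed_price, hours):
        return fixed_price / hours

    for hours in (60, 240):
        rate = effective_bill_rate(30_000, hours)
        print(f"{hours:3d} hours -> ${rate:,.0f}/hour effective")
    # 60 hours -> $500/hour, 240 hours -> $125/hour, matching the bullet above.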

Now for the questions that this raises for me:

  • Is there a 2-tier market where there are ninjas (expensive, high quality) and farmers (commodity prices, OK quality)?
  • How do we keep audit/assessment quality up despite economic pressure?  IE, how do we create the conditions where the ninja business model is viable?
  • Are we putting too much trust in our auditors/assessors for what we can reasonably expect them to perform successfully?
  • How can any information security framework focused solely on audit/assessment survive past 5 years? (5-10 years is the SWAG time on how long it takes a technology to go from “nobody’s done this before” to “we have a tool to automate most of it”)
  • What’s the alternative?


Posted in Rants, What Doesn't Work | 3 Comments »
