FedRAMP: It’s Here but Not Yet Here

Posted December 12th, 2011

Contrary to what you might hear this week in the trade press, FedRAMP is not fully unveiled, although there was some much-awaited progress: a memo came out from the Administration (PDF caveat). Basically, what it does is lay down the authority and responsibility for the Program Management Office and set some timelines. This is good, and we needed it a year and a half ago.

However, people need to stop talking about how FedRAMP has solved all their problems, because the entire program isn't here yet. Until you have a process document and a catalog of controls to evaluate, you don't know whether the program is going to help or hinder you, so all the press about it so far is speculation.



Posted in DISA, FISMA, NIST, Outsourcing, Risk Management | No Comments »

NIST Cloud Conference Recap

Posted June 2nd, 2010

A couple of weeks ago I went to the NIST Cloud Conference for the afternoon security sessions.  You can go grab the slides off the conference site.  Good stuff all around.

Come to think of it, I haven't blogged about FedRAMP; maybe it's time to.

FedRAMP is a way to do a security authorization (formerly certification and accreditation, get with the times, man) on a cloud and then let tenant projects use that authorization.  Hmmm, sounds like… a General Support System with common controls and Major Applications that inherit those controls.  This isn't really anything new, just the "bread and butter" security management concepts scoped to a cloud.  Basically, what will happen with FedRAMP is that there are three standards: DoD, DHS, and GSA (most stringent first); cloud providers get authorized against one of those standards.  Then when a project wants to build on that cloud, it can use that authorization for its own authorization package.
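If it helps to see the inheritance mechanics in code, here's a minimal sketch; the control IDs and the split between provider and tenant are invented for illustration, not taken from any actual FedRAMP baseline:

```python
# Hypothetical illustration of authorization inheritance: the cloud provider's
# authorization package covers the common controls, and a tenant project only
# has to satisfy and document whatever the baseline requires beyond that.

# Controls covered by the provider's authorization (invented IDs).
provider_common_controls = {"PE-2", "PE-3", "CP-9", "SC-7"}

# Controls the applicable baseline requires of the tenant system (invented).
baseline_controls = {"AC-2", "AU-2", "CP-9", "PE-2", "PE-3", "SC-7", "SI-2"}

def tenant_responsibilities(baseline, inherited):
    """Return the controls the tenant must still satisfy and document itself."""
    return sorted(baseline - inherited)

print(tenant_responsibilities(baseline_controls, provider_common_controls))
# ['AC-2', 'AU-2', 'SI-2'] -- everything else rides on the provider's package
```

Swap in the DoD, DHS, or GSA baseline and the mechanics stay the same; only the sets change.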

All things considered, FedRAMP is an awesome idea.  Now if we can get the holdout agencies to actually acknowledge their internal common controls, I’ll be happy–the background story being that some number of months ago I was told by my certifier that “we don’t recognize common controls so even though you’re just a simple web application you have to justify every control even if it’s provided to you as infrastructure.”  No, still not bitter at all here, but I digress….

And then there are the pieces that I haven’t seen worked out yet:

  • Mechanism of Sharing: As a service provider, it's hard enough to keep one agency happy.  Add in five of them and it gets nearly impossible.  This hasn't really been figured out yet, but in Rybolov's small, myopic world, a panel of agencies jointly owning an authorization for a cloud provider means that the cloud never gets authorized.  The way this has been "happening in the wild" is that one agency owns the authorization and all the other agencies get the authorization package from that agency.
  • Using FedRAMP is Optional: An agency or project can require its own risk assessment and authorization even though a FedRAMP one is available.  This means that if the agency's auditors don't understand the process, or if the "risk monkeys" (phrase courtesy of My Favorite Govie) decree it, you lose any cost savings and time savings that you would get by participating in FedRAMP.
  • Cloud Providers Rule the Roost: Let's face it, as much as the Government wants to pretend that the cloud providers are satisfying the Government's security requirements, we all know that due to the nature of catalogs of controls and solution engineering, the vendor has the advantage here.  Nothing new, it's been happening that way with outsourcing for years; it's just immediately evident now.  Instead of playing ostrich and sticking our heads in the sand, why don't we look at the incentives for the cloud providers and see what makes sense for their role in all this?
  • Inspector General Involvement: I don't see this happening, and to be honest, that scares the hell out of me.  Let me just invoke Rybolov's Law: "My solution is only as good as my auditor's ability to understand it."  That is, if the IGs and other auditors don't understand FedRAMP, you don't really have a viable solution.

The Big Ramp photo by George E. Norkus.  FedRAMP has much opportunity for cool photos.



Posted in FISMA, NIST, Outsourcing, Risk Management, What Doesn't Work, What Works | 2 Comments »

“Machines Don’t Cause Risk, People Do!”

Posted May 26th, 2010

A few weeks back I read an article on an apparent shift in emphasis in government security, "OMB Outlines Shift on FISMA." Take a moment to give it a read. I'll wait….

That was followed by NASA's "bold move" to change the way they manage risk.

Once again, the over-emphasis on and outright demagoguery about "compliance," "FISMA reports," "paper exercises," and similar concepts that occupy our security geek thoughts have not given way to enlightenment. (At least "compliancy" wasn't mentioned…) I was saddened by a return to the "FISMA BAD" school of thought so often espoused by the luminaries at SANS. Now NASA has leapt from the heights… At the risk of bashing Alan Paller yet again, I am often turned off by the approach of "being able to know the status of every machine at every minute," as if machines by themselves cause bad security… It's way too tactical (and incorrect, IMHO), and it's too easy a claim to make.

Hence the title of this rant – Machines don’t cause risk, people do!

The "people" I'm talking about are everyone from your agency director down to the lowliest sysadmin… The problem? They may not be properly educated or may lack the necessary skills for their position, another (excellent) point brought forth in the first article. Most importantly, even the most seasoned security veteran operating without a strategic vision within a comprehensive security program (trained people, budget, organizational will, technology, and procedures) based upon the FISMA framework will be doomed to failure. Likewise, having all the "toys" in the world means nothing without a skilled labor force to operate them and analyze their output. ("He who dies with the most toys is still dead.") Organizations and agency heads that do not develop and support a comprehensive security program that incorporates the NIST Risk Management Framework as well as the other facets listed above will FAIL. This is nothing new or revolutionary, except I don't think we've really *done* FISMA yet. As I and others have said many times, it's not about the paper or the cost per page; it's about the repeatable processes — and knowledgeable people — behind what the paper describes.

I also note the somewhat disingenuous mention of the risk management program at the State Department in the second article… as if that were all State was doing! What needs to be noted here is that State has approached security in the proper way, IMHO: from a strategic, or enterprise, level. They have not thrown out the figurative baby with the bath water by dumping everything else in their security program in favor of the risk-scoring system or some other bright, shiny object. I know first-hand, from having worked with many elements of the diplomatic security hierarchy at State, that these folks get it. They didn't get to the current level of goodness in the program by decrying (dare I say whining about?) "paper." They made the organizational commitment to providing contract vehicles for system owners to use to develop their security plans and document risk in Plans of Action and Milestones (POA&Ms). Then they provided the money to get it done. Is the State program a total "paragon of virtue"? Probably not, but the bottom line is that it's an effective program.

Mammoth Strategy, Same as Last Year image by HikingArtist.com.

Desiring to know everything about everything may seem to some a worthy goal, but it may be beyond many organizations' budgets. *Everything* is a point-in-time snapshot, no matter how many snapshots you take or how frequently you take them. Continuous, repeatable security processes followed by knowledgeable, responsible practitioners are what government needs. But you cannot develop these processes without starting from a larger, enterprise view. Successful organizations follow this–dare I say it–axiom whether discussing security governance or system administration.

Government agencies need to concentrate on developing agency-wide security strategies that encompass, but do not focus solely on, which patch is on which machine and which firewall has which policy. Likewise, system POA&Ms need to concentrate on higher-level strategic issues that affect agencies — things like changes to identity management schemes that will make working from home more practical and less risky for a larger percentage of the workforce, or a dashboard system that provides the status of system authorization for the agency at a glance. "Burying your head in a foxhole" — becoming too tactical — is akin to burying it in the sand, or like getting lost in a bunch of trees that look like a forest. When organizations behave this way, everything becomes a threat, so they spray their resource firepower on the threat of the day or the hour.

An organization shouldn't worry about patching servers if its perimeter security is non-existent. Developing the larger picture, even while letting some bullets strike you, may allow you to recognize threats and prioritize them, potentially letting you expend minimal resources to solve the largest problems. This is the approach my organization is following today: crawl first, then walk, then run. It has enabled management to identify, segregate, and protect critical information and resources while giving decision-makers solid information to make informed, risk-based decisions. We'll get to the patches, but not until we've learned to crawl. Strangely, we don't spend a lot of time or other organizational resources on "paper drills"; we're actively performing security tasks, strategic and tactical, that follow documented procedures, plans, and workflows! Oh yes, there is the issue of scale. Sorry, I think over 250 sites in every country around the world, with over 62 different government customers, tops most enterprises, government or otherwise, but then this isn't about me or my organization's accomplishments.

In my view, professional security education means providing at least two formal paths for security professionals. The one that SANS instantiates is excellent for administrators, i.e., folks operating on the tactical level, and I believe we have these types of security practitioners in numbers. What we lack are seasoned professionals, inside government, who can approach security strategically, engaging agency management with plans that act both "globally" and "locally." Folks like these exist in government, but they are few; many live in industry or the contractor space. Not even our intelligence community has a career path for security professionals! Government as a whole lacks a means to build competence in the security discipline. Somehow government agencies need to identify security up-and-comers within government and nurture them. What I'm calling for here is a government-sponsored internal mentorship program: having recognized winners in the security game mentor peers and subordinates.

Until we security practitioners can separate the hype from the facts and articulate those facts in terms management can understand and support, we will never get beyond the charlatans, headline grabbers, and other "self-licking ice cream cones." Some might even look upon this new "bold initiative" by NASA as quitting a game they see as "too hard." I doubt seriously that they tried to approach the problem using a non-academic, non-research approach. It needed to be said. Perhaps if the organization taking the "bold steps" were one that had succeeded at implementing the NIST guidance, there might be more followers, in greater numbers.

Perhaps it’s too hard because folks are merely staring at their organization’s navel and not looking at the larger picture?

Lastly, security needs to be approached strategically as well as tactically. As Sun Tzu said, “Tactics without strategy is the noise before defeat.”



Posted in FISMA, NIST, Public Policy, Rants, Risk Management, What Doesn't Work, What Works | 14 Comments »

The Guerilla CISO Rants: Don’t Write a System Security Plan

Posted October 1st, 2009

OK, I know you're shocked… I'm saying something controversial. But hear me out on this one; I'll explain.

Now this is my major beef with the way we write SSPs today: everything in an SSP is information that is already contained in other artifacts, and I have to pay people to cut-and-paste it into an SSP template.  As practiced, we seriously have a problem with polyinstantiation: the same data lives in various lifecycle artifacts and gets cut-and-pasted into the SSP.  Every time you change an upstream document, you create a difference between that document and the SSP.

This is a practice I would like to change, but I can’t do it all by myself.

This is the skeleton outline of an SSP from Special Publication 800-18, the guide to writing an SSP:

  1. Information System Name/Title–On the investment/FISMA inventory, the Exhibit 300/53, etc
  2. Information System Categorization–usually on a FIPS-199 memorandum
  3. Information System Owner–In an assignment memo
  4. Authorizing Official–In an assignment memo
  5. Other Designated Contacts–In an assignment memo
  6. Assignment of Security Responsibility–In assignment memos
  7. Information System Operational Status–On the investment/FISMA inventory, the Exhibit 300/53, etc
  8. Information System Type–On the investment/FISMA inventory, the Exhibit 300/53, etc
  9. General System Description/Purpose–In the design document, Exhibit 300/53
  10. System Environment–Common controls not inside the scope of our system
  11. System Interconnections/Information Sharing–from Interconnection Security Agreements
  12. Related Laws/Regulations/Policies–Should be part of the system categorization but hardly ever is on templates
  13. Minimum Security Controls–800-53 controls descriptions which can easily be done in a Requirements Traceability Matrix
  14. Information System Security Plan Completion Date–specific to each document
  15. Information System Security Plan Approval Date–specific to each document

Now, some of this has changed in practice a little bit: #10 can functionally be replaced with a designation of common controls and hybrid controls.

So my line of thinking is that if we provide a 2-to-6-page system description with the names of the "guilty parties" and some inventory information, a controls-specific Requirements Traceability Matrix, and a System Design Document, then we have the functional equivalent of an SSP.
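To make that concrete, here's a rough sketch of the "SSP by reference" idea in code; the document names and locations are hypothetical placeholders for illustration, not an official 800-18 schema:

```python
from dataclasses import dataclass

@dataclass
class SectionReference:
    """One SSP section satisfied by pointing at its authoritative source."""
    ssp_section: str      # the SP 800-18 section being satisfied
    source_document: str  # the lifecycle artifact that already holds the data
    location: str         # where to look inside that artifact

# The "functional equivalent" of an SSP: a map to the sources, not a copy.
ssp_equivalent = [
    SectionReference("Information System Categorization", "FIPS-199 memorandum", "page 1"),
    SectionReference("Information System Owner", "Assignment memo", "entire memo"),
    SectionReference("System Interconnections", "Interconnection Security Agreements", "all ISAs"),
    SectionReference("Minimum Security Controls", "Requirements Traceability Matrix", "all rows"),
]

def render(refs):
    """Emit the short index that replaces the cut-and-paste SSP body."""
    return "\n".join(f"{r.ssp_section}: see {r.source_document} ({r.location})" for r in refs)

print(render(ssp_equivalent))
```

Update the FIPS-199 memo and there's nothing to re-sync, because the SSP-equivalent only ever pointed at it.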

Why have I declared an InfoSec fatwah against SSPs as currently practiced?

Well, my philosophy for operation is based on some concepts I’ve picked up through the years:

  • Why run when you can walk, why walk when you can sit, why sit when you can lie down.  There is a time to spend effort determining what the security controls are for a project.  You need to have them documented, but it's not cost-effective to worry about format, which we probably do too much of today.
  • Make it easy to do the right thing.  If we polyinstantiate security information, we have made it harder to maintain.  Easier to maintain means it will actually get maintained instead of becoming shelfware.  I would rather have updated and accurate security information than verbose, well-polished documents that are inaccurate.
  • Security is not a "security guy thing"; most problems are actually management and project team problems.  My idea uses the project's SDLC artifacts instead of security-specific versions of those artifacts.  My idea puts the project problems back in the project space where they belong.
  • If I have a security engineer who has a finite number of hours in a day, I have to choose what they spend their time on.  If it's a choice between vulnerability mitigation, patching, etc. and correcting SSP grammar, I know which one I want them doing.  Then again, I'm still an infantryman deep down inside, and I realize I have biases against flowery writing.

Criticisms of not writing a dedicated SSP document:

"My auditors are used to seeing the information in the same format as someplace they worked previously."  Believe it or not, I hear this quite a bit.  My response is that if you make what I'm suggesting your standard for a security plan, then you've met all of the FISMA and 800-53 requirements, plus my personal requirement to "not do stupid stuff if you can help it."

"My auditors will grill me to death if they have to page back and forth between several documents."  I've heard this one too.  There are a couple of ways to deal with it.  One is to reference the source document in your 800-53 Requirements Traceability Matrix.  Most auditors at this point bring up that you need to reference the official name, date of publication, and specific page/section of the reference, and I think they need to get a life, because they've taken us right back to the maintainability problem.

"This is all too new-school and I can't get over it."  Then you are a dinosaur and your kind deserves extinction.  =)


This blog post is for grecs at novainfosecportal.com, who perked up instantly when I mentioned the concept months ago.  I finally got around to putting the text somewhere.

How to Plan the Perfect Dinner Party photo by kevindooley.



Posted in FISMA, NIST | 11 Comments »

A Layered Model for Massively-Scaled Security Management

Posted August 24th, 2009

So we all know the OSI model by heart, right?  Well, I'm offering up my model of technology management.  Really, at this stage I'm looking for feedback.

  • Layer 7: Global Layer. This layer is regulated by treaties with other nation-states or by international standards.  I fit cybercrime treaties in here, along with the RFCs that make the Internet work.  Problem is, security hasn't really reached this level much yet, unless you want to count multinational vendors and top-level CERT coordination centers like CERT-CC.
  • Layer 6: National-Level Layer. This layer is an aggregation of Federations and industries and primarily consists of Federal law and everything lumped into a “critical infrastructure” bucket.  Most US Federal laws fit into this layer.
  • Layer 5: Federation/Community Layer. What I'm talking about with this layer is an industry federated or formed into some sort of community.  Think major verticals such as energy supply.  It's not a coincidence that this layer lines up with DHS's critical infrastructure and key resources breakdown, but it can also refer to self-regulated industries, such as those governed by PCI-DSS or NERC.
  • Layer 4: Enterprise Layer. Most security thought, products, and tools are focused on this layer and the layers below.  This is the realm of the CSO and CISO and roughly equates to a large corporation.
  • Layer 3: Project Layer. Collecting disparate technologies and data into a single system, such as a LAN/WAN or a web application project.  In the Government world, this is the home of the Information System Security Officer (ISSO) or the System Security Engineer (SSE).
  • Layer 2: Integration Layer. Hardware, software, and firmware combine to become products and solutions; this layer is focused primarily on engineering.
  • Layer 1: Code Layer. Down into the code that makes everything work.  This is where the application security people live.

There are tons of ways to use the model. I'm thinking each layer has a set of characteristics like the following:

  • Scope
  • Level of centralization
  • Responsiveness
  • Domain expertise
  • Authority
  • Timeliness
  • Stakeholders
  • Regulatory bodies
  • Many more that I haven’t thought about yet

Chocolate Layer Cake photo by foooooey.

My whole point for this model is that I'm going to try to use it to describe the level at which a particular problem resides and to stimulate discussion on the appropriate level at which to solve it.  For instance, take a technology and you can trace it up and down the stack.  Say, Security Event and Incident Monitoring:

  • Layer 7: Global Layer. Coordination between national-level CERTs in stopping malware and hacking attacks.
  • Layer 6: National-Level Layer. Attack data from Layer 5 is aggregated and correlated to respond to large incidents on the scale of Cyberwar.
  • Layer 5: Federation/Community Layer. Events are filtered from Layer 4 and only the confirmed events of interest are correlated to determine trends.
  • Layer 4: Enterprise Layer. Events are aggregated by a SIEM with events of interest flagged for response.
  • Layer 3: Project Layer. Logs are analyzed in some manner.  This is most likely the highest layer in the model that we consistently reach today.
  • Layer 2: Integration Layer. Event logs have to be written to disk and stored for a period of time.
  • Layer 1: Code Layer. Code has to be programmed to create event logs.

I do have an ulterior motive.  I created this model because most of our security thought, doctrine, tools, products, and solutions work at Layer 4 and below.  What we need is discussion on Layers 5 and above, because when we try to create massively-scaled security solutions, we run into a drought of information on what to do above the Enterprise.  There are other bits of doctrine that I want to bring up, like trying to solve any problem at the lowest level for which it makes sense.  In other words, we can use the model to propose changes to the way we manage security.  Say we have a problem like the lack of data on data breaches: when we say that we need a Federal data breach law, what we're really saying is that, because of the scope and the amount of responsibility and competing interests at Layer 5, we need a solution at Layer 6.  But in any case we should start at the bottom and work our way up the model until we find an adequate scope and scale, as in the sketch below.
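Here's a toy sketch of that "lowest adequate level" doctrine in code; the adequacy data is a made-up stand-in, since a real test would weigh the scope, authority, and timeliness characteristics listed above:

```python
from enum import IntEnum

class Layer(IntEnum):
    """The seven layers of the model, bottom to top."""
    CODE = 1
    INTEGRATION = 2
    PROJECT = 3
    ENTERPRISE = 4
    FEDERATION = 5
    NATIONAL = 6
    GLOBAL = 7

# Invented adequacy data: the minimum layer whose scope and authority can
# actually own a given problem. Real values would come from debate, not code.
MINIMUM_ADEQUATE = {
    "patch management": Layer.ENTERPRISE,
    "sector-wide event correlation": Layer.FEDERATION,
    "data breach disclosure": Layer.NATIONAL,  # Layer 5 has competing interests
}

def lowest_adequate_layer(problem: str) -> Layer:
    """Start at the bottom and walk up until a layer's scope is adequate."""
    for layer in Layer:
        if layer >= MINIMUM_ADEQUATE[problem]:
            return layer
    return Layer.GLOBAL

print(lowest_adequate_layer("data breach disclosure").name)  # NATIONAL
```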

So, this is my question to you, Internet: have I just reinvented enterprise public policy, IT architecture (Federal Enterprise Architecture) and business blueprinting, or did I create some kind of derivative view of technology, security, and public policy that I can now use?



Posted in Public Policy | 6 Comments »

NIST Framework for FISMA Dates Announced

Posted April 10th, 2009

Some of my friends (and maybe I) will be teaching the NIST Framework for FISMA in May and June with Potomac Forum.  This really is an awesome program.  Some highlights:

  • Attendance is limited to Government employees, so you can talk openly with your peers.
  • Be part of a cohort that trains together over the course of a month.
  • The course is 5 Fridays, so you can learn something and then take it back to work the next week.
  • We have a Government speaker every week, from the NIST FISMA guys to agency CISOs and CIOs.
  • No pitching, no marketing, no product placement (OK, maybe we'll go through DoJ's CSAM, but only as an example of what kinds of tools are out there), no BS.

See you all there!



Posted in NIST, Speaking | 1 Comment »
