GAO’s 5 Steps to “Fix” FISMA

Posted July 2nd, 2009

Letter from GAO on how Congress can fix FISMA.  And oh yeah, the press coverage on it.

Now supposedly this was in response to an inquiry from Congress: “Please comment on the need for improved cyber security relating to S.773, the proposed Cybersecurity Act of 2009.”  This is S.773.

GAO is mixing issues and has missed the mark on what Congress asked for.  S.773 is all about protecting critical infrastructure.  It only rarely mentions government internal IT issues.  S.773 has nothing at all to do with FISMA reform.  However, GAO doesn’t have much expertise in cybersecurity outside of the Federal Agencies (they have some, but I would never call it extensive), so they reported on what they know.

The GAO report used the often-cited metric of cybersecurity attacks against Government IT systems growing from “5,503 incidents reported in fiscal year 2006 to 16,843 incidents in fiscal year 2008” as proof that the agencies are not doing anything to fix the problem.  I’ve questioned these figures before; the growth has more to do with the measurement problem and increased reporting requirements than with an actual increase in attacks.  Truth be told, nobody knows if the attacks are increasing and, if so, at what rate.  I would guess they’re increasing, but we don’t know, so quit citing some “whacked” metric as proof.

Reform photo by shevy.

GAO’s recommendations for FISMA Reform:

Clarify requirements for testing and evaluating security controls.  In other words, the auditing shall continue until the scores improve.  Hate to tell you this, but all you can really test at the national level is whether the FISMA framework is in place; the execution of the framework (and by extension, whether an agency is secure or not) is largely untestable using any kind of framework.

Require agency heads to provide an assurance statement on the overall adequacy and effectiveness of the agency’s information security program.  This harkens back to the accounting roots of GAO.  Basically what we’re talking about here is having the agency head attest that the agency has made the best effort it can to protect its IT.  I like part of this because part of what’s missing is “executive support” for IT security.  To be honest, though, most agency heads aren’t IT security dweebs; they would be signing an assurance statement based on what their CIO/CISO put in the executive summary.

Enhance independent annual evaluations.  This has significant cost implications.  Besides, we’re getting more and more evaluations as time goes on, with a corresponding increase in audit burden.  IE, in the Government IT security space, how much of your time is spent providing proof to auditors versus building security?  For some people, it’s their full-time job.

Strengthen annual reporting mechanisms.  More reporting.  I don’t think it needs to get strengthened; I think it needs to get “fixed”.  And by “fixed” I mean real metrics.  I’ve touched on this at least a hundred times; go check out some of it….

Strengthen OMB oversight of agency information security programs.  This one gives me brain-hurt.  OMB has exactly the amount of oversight that it needs to do its job.  Just like with more auditing, if you increase the oversight while the people doing the execution have the same number of people, the same amount of funding, and the same types of skills, do you really expect them to perform differently?

Rybolov’s synopsis:

When the only tool you have is a hammer, every problem looks like a nail, and I think that’s what GAO is doing here.  Since performance in IT security is obviously down, they suggest that more auditing and oversight will help.  But then again, at what point does the audit burden grow so large that nobody is really doing any work at all except for answering audit requests?

Going back to what Congress really asked for, we run up against a problem.  There isn’t a huge set of information about how the rest of the nation is doing with cybersecurity.  There’s the Verizon DBIR, the Data Loss DB, some surveys, and that’s about it.

So really, when you ask GAO to find out what the national cybersecurity situation is, all you’re going to get is a bunch of information about how government IT systems line up and maybe some anecdotes about critical infrastructure.

Coming to a blog near you (hopefully soon): Rybolov’s 5 steps to “fix” FISMA.



Posted in FISMA | 2 Comments »

Why We Need PCI-DSS to Survive

Posted June 9th, 2009

And by “We”, I mean the security industry as a whole.  And yes, this is your public-policy lesson for today, let me drag my soapbox over here and sit for a spell while I talk at you.

By “Survive”, I mean that we need some kind of self-regulatory framework that fulfills the niche that PCI-DSS occupies currently. Keep reading, I’ll explain.

And the “Why” is a magical phrase, everybody say it after me: self-regulatory organization.  In other words, the IT industry (and the Payment Card Industry) needs to regulate itself before it crosses the line into being considered for statutory regulation (ie, making a law) by the Federal Government.

Remember the PCI-DSS hearings with the House Committee on Homeland Security (AKA the Thompson Committee)?  All the Security Twits were abuzz about it, and it did my heart great justice to hear all the cool kids become security and public policy wonks at least for an afternoon.  Well, there’s a little secret here: when Congress gets involved, they’re gathering information to determine whether they need to regulate an industry.  That’s about all Congress can do: make laws that you (and the Executive Branch) have to follow, maybe divvy up some tax money, and bring people in to testify.  Other than that, it’s just positioning to gain favor with other politicians and maybe some votes in the next election.

Regulation means audits and more compliance.  They go together like TCP and IP.  Most regulatory laws have at least some designation for a party who will perform oversight.  They have to do this because, well, if you’re not audited/assessed/evaluated/whatever, then it’s really an optional law, which doesn’t make sense at all.

Yay Audits photo by joebeone.

Another magical phrase that the public policy sector can share with the information security world: audit burden.  Audit burden is how much a company or individual pays both in direct costs (paying the auditors) and in indirect costs (babysitting the auditors, producing evidence for the auditors, taking people away from making money to talk to auditors, “audit requirements”, etc).  I think we can all agree that low audit burden is good and high audit burden is bad.  In fact, I think that’s one of the problems with FISMA as implemented: it has a high audit burden with only moderately tangible results.  But I digress, this post is about PCI-DSS.

There’s even a concept mulling around in the back of my head for a metric that compares a framework’s audit burden to the amount of security it provides and the amount of assurance it provides against statutory regulation.  It almost sounds like the start of a balanced scorecard for security management frameworks; now if I could get @alexhutton to jump on it, his quant brain would churn out great things in short order.
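To make that more concrete, here’s a minimal sketch of what such a scorecard could look like.  Everything in it is my own strawman: the field names, the sample numbers, and the idea of scoring security value and regulatory cover on a 0-10 scale are invented for illustration, not any established framework.

```python
# Strawman "audit burden scorecard" for security management frameworks.
# All fields, weights, and sample numbers are invented for illustration.

from dataclasses import dataclass

@dataclass
class FrameworkScore:
    name: str
    direct_audit_cost: float    # dollars paid to auditors per year
    indirect_audit_cost: float  # babysitting auditors, producing evidence, etc.
    security_value: float       # subjective 0-10: actual security it buys you
    regulatory_cover: float     # subjective 0-10: assurance against statutory regulation

    @property
    def audit_burden(self) -> float:
        """Total audit burden: direct plus indirect costs."""
        return self.direct_audit_cost + self.indirect_audit_cost

    @property
    def burden_per_benefit(self) -> float:
        """Dollars of audit burden per point of combined benefit (lower is better)."""
        benefit = self.security_value + self.regulatory_cover
        return self.audit_burden / benefit if benefit else float("inf")

# Toy comparison -- the numbers are made up.
for fw in (FrameworkScore("PCI-DSS", 150_000, 250_000, 6.0, 7.0),
           FrameworkScore("FISMA as implemented", 300_000, 700_000, 5.0, 9.0)):
    print(f"{fw.name}: burden=${fw.audit_burden:,.0f}, "
          f"${fw.burden_per_benefit:,.0f} per benefit point")
```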

But this is the lesson for today: self-regulation is preferable to legislation.

  • Self-regulation is defined by people in the industry.  Think about the State Bar Association setting the standards for who is allowed to practice law.
  • Standards ideally become codified versions of “best practices”.  OK, this is if they’re done correctly; more to follow.
  • Standards are more flexible than laws.  As hard/cumbersome as it is to change a standard, the time involved in changing a law is prohibitive most of the time unless you’re running for reelection.
  • Standards sometimes can be “tainted” to force out competition; laws are even more so.

The sad fact here is that if we don’t figure out as an industry how to make PCI-DSS or any other form of self-regulation work, Congress will regulate for us.  Don’t like PCI-DSS because of the audit burden?  Wait until you have a law that requires you to follow the same controls framework.  It will be the same thing, only with bigger penalties for failure, larger audit burdens to avoid the larger penalties, and larger industries created to satisfy the market demand for audit.  Come meet the new regulatory body, same as the old, only bigger and meaner. =)

However, self-regulation works if you do it right, and by right I mean this:

  • The process is transparent and not the product of a secret back-room cabal.
  • Representation from all the stakeholders.  For PCI-DSS, that would be Visa/MasterCard, banks, processors, large merchants, small merchants, and some of the actual customers.
  • The standards committee knows how to compromise and come to a consensus.  IE, we can’t have full hard drive encryption, a WAF, code review, and the sacrificing of chickens in the server room all at once, so we’ll make one of the four mandatory.
  • The regulatory organization has a grievance process for its constituency to present valid (AKA “Not just more whining”) discrepancies in the standards and processes for clarification or consideration for change.
  • The standard is “owned” by every member of the constituency.  Right now, people governed by PCI-DSS don’t feel that the standard is their standard, that they have a say in what comprises it, or that they are the ones being helped by it.  Some of that is true, some of that is an image problem.  The way you combat this is by doing the things I mentioned in the previous bullets.

Hmm, sounds like making an ISO standard, which brings its own set of politics.

While we need some form of self-regulation, right now PCI-DSS and ISO 27001 are the closest that we have in the private sector.  Yeah, it sucks, but it sucks the least, just like our form of government.



Posted in Public Policy, Rants | 11 Comments »

Some Thoughts on POA&M Abuse

Posted June 8th, 2009

Ack, Plans of Action and Milestones.  I love them and I hate them.

For those of you who “don’t habla Federali”, a POA&M is basically an IOU from the system owner to the accreditor that yes, we will fix something but for some reason we can’t do it right now.  Usually these are findings from Security Test and Evaluation (ST&E) or Certification and Accreditation (C&A).  In fact, some places I’ve worked, they won’t make new POA&Ms unless they’re traceable back to ST&E results.

Functions that a POA&M fulfills (there’s a quick sketch of a POA&M record after the list):

  • Tracks issues to resolution
  • Serves as a “risk register”
  • Justifies budget requests
  • Generates mitigation metrics
  • Supports data-mining to find common vulnerabilities across systems
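As a sketch of how those functions hang together, here’s a minimal POA&M record and metrics rollup in Python.  The field names and the particular metrics are my own strawman, not an official OMB or NIST schema.

```python
# Strawman POA&M record plus a rollup of the mitigation metrics above.
# Field names are invented for illustration, not an official schema.

from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class POAM:
    weakness: str                  # what needs fixing
    source: str                    # e.g., an ST&E or C&A finding reference
    risk_level: str                # "high", "moderate", or "low"
    scheduled_completion: date
    estimated_cost: float          # feeds the budget justification
    milestones: list = field(default_factory=list)
    closed: Optional[date] = None  # set once evidence of closure is accepted

def mitigation_metrics(poams, today=None):
    """Roll up the kind of metrics reported at the enterprise level."""
    today = today or date.today()
    open_items = [p for p in poams if p.closed is None]
    overdue = [p for p in open_items if p.scheduled_completion < today]
    return {
        "open": len(open_items),
        "overdue": len(overdue),
        "budget_ask": sum(p.estimated_cost for p in open_items),
    }

poams = [POAM("Weak TLS configuration on web tier", "ST&E finding 4.2",
              "moderate", date(2009, 9, 1), 15000)]
print(mitigation_metrics(poams, today=date(2009, 10, 1)))
# {'open': 1, 'overdue': 1, 'budget_ask': 15000}
```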

But today, we’re going to talk about POA&M abuse.  I’ve seen my fair share of this.

Conflicting Goals: The basic problem is that we want POA&Ms to satisfy too many conflicting functions.  IE, if we use the number of open POA&Ms as a metric to determine whether our system owners are doing their job and closing out issues, but we also report those numbers at an enterprise level to OMB or at the department level, then there’s a conflict of interest: the incentive is to get POA&Ms closed as fast as possible, even if it means losing the ability to track things at the system level or to spend the time on things that solve long-term security problems.  Our vulnerability/weakness/risk management process forces us into creating small, easy-to-satisfy POA&Ms instead of long-term projects.

Near-Term vs. Long-Term:  If we set up POA&Ms with due dates of 30-60-90 days (for high, moderate, and low risks), we don’t really have time to turn these POA&Ms into budget support.  If we manage the budget up to 3 years in advance and even the longest due date is 90 days, then POA&Ms will have exactly 0 input into the budget unless we can delay the bugger for 2 years or so, much too long for it to actually be fixable.
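The timing mismatch is easy to sanity-check; here’s a quick back-of-the-envelope sketch, assuming (as above) that the earliest budget cycle you can still influence is roughly two years out:

```python
# Back-of-the-envelope check on the POA&M-to-budget timing mismatch.
# Assumes a roughly two-year lead time on the budget you can still influence.

DUE_DAYS = {"high": 30, "moderate": 60, "low": 90}  # the 30-60-90 scheme
BUDGET_LEAD_DAYS = 2 * 365

for risk, due in DUE_DAYS.items():
    gap = BUDGET_LEAD_DAYS - due
    print(f"{risk}-risk POA&M closes {due} days out; funding it could justify "
          f"wouldn't arrive until {gap} days after it must already be closed")
```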

Bad POA&Ms:  Let’s face it: sometimes the one-for-one mapping of ST&E, C&A, and risk assessment findings to POA&Ms means that you get POA&Ms that are “bad”, and by that I mean they can’t be satisfied or they’re not really something that you need to fix.

Some of the bad POA&Ms I’ve seen (paraphrased from the originals):

  • The solution uses {Microsoft|Sun|Oracle} products, which have a history of vulnerabilities.
  • The project team needs to tell the vendor to put IPV6 into their product roadmap
  • The project team needs to implement X which is a common control provided at the enterprise level
  • The System Owner and DAA have accepted this risk but we’re still turning it into a POA&M
  • This is a common control that we really should handle at the enterprise level but we’re putting it on your POA&M list for a simple web application

Plan of Action for Refresh Philly photo by jonny goldstein.

Keys to POA&M Nirvana:  So over the years, I’ve observed some techniques for success in working with POA&Ms:

  • Agree on the evidence/proof of POA&M closure when the POA&M is created
  • Fix it before it becomes a POA&M
  • Have a waiver or exception process that requires a cost-benefit-risk analysis
  • Start with “high-level” POA&Ms and work down to more detailed POA&Ms as your security program matures
  • POA&Ms are between the System Owner and the DAA, but the System Owner can turn around and negotiate a POA&M as a contract deliverable with an outsourced IT provider

And then the keys to Building Good POA&Ms:

  • Actionable–ie, they have something that you need to do
  • Achievable–they can be accomplished
  • Demonstrable–you can demonstrate that the POA&M has been satisfied
  • Properly-Scoped–absorbed at the agency level, the common control level, or the system level
  • They are SMART: Specific, Manageable, Attainable, Relevant, and within a specified Timeframe
  • They are DUMB: Doable, Understandable, Manageable, and Beneficial

Yes, I stole the last 2 bullets from the picture above, but they make really good sense, in the same way that “know thyself” is awesome advice from the Oracle at Delphi.



Posted in BSOFH, FISMA | No Comments »

Wanted: Some SCAP Wranglers

Posted May 18th, 2009

So I was doing my usual “Beltway Bandit Perusal of Opportunities for Filthy Lucre”, also known as diving into FedBizOps, and I found this gem.  Basically what this means is that sometime this summer, NIST is going to put out an RFP for contractors to further develop SCAP using ARRA funds.

Keep in mind that this isn’t the official list of what NIST wants done under this contract, but it’s interesting to look at from the angle of where SCAP will go over the next couple of years (there’s a small sketch of what consuming SCAP content looks like after the list):

  1. Evolution of the SCAP protocol and specifications thereof
  2. Feasibility studies, development, documenting, prototyping, and road-mapping of SCAP expansions (e.g., remediation capability) and analog protocols (e.g., Network Event Content Automation Protocol)
  3. Implementation and maintenance support for the Security Automation Content Validation Program
  4. Maintenance support for the SCAP Product Validation Program
  5. Pilot, beta, and production support for SCAP and security automation use-cases
  6. Content development, modification, and testing
  7. Infrastructure and reference implementation development in JAVA, C++, and C programming languages
  8. Data trust models and data provenance solutions.
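For a taste of what “content” and “use-cases” mean in practice, here’s a minimal sketch that tallies the rule results out of an XCCDF results file.  It assumes the standard XCCDF 1.1 namespace and element layout; the file name is a hypothetical scanner output.

```python
# Minimal sketch: tally pass/fail rule results from an XCCDF 1.1 results
# file.  "results.xml" is a hypothetical export from an SCAP-capable scanner.

import xml.etree.ElementTree as ET
from collections import Counter

XCCDF = "{http://checklists.nist.gov/xccdf/1.1}"  # XCCDF 1.1 namespace

def tally_results(path: str) -> Counter:
    """Count rule results (pass, fail, notapplicable, ...) in a TestResult."""
    counts = Counter()
    for rule_result in ET.parse(path).iter(XCCDF + "rule-result"):
        result = rule_result.find(XCCDF + "result")
        if result is not None and result.text:
            counts[result.text] += 1
    return counts

if __name__ == "__main__":
    print(tally_results("results.xml"))  # e.g., Counter({'pass': 212, 'fail': 37})
```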

So how do you play?  Well, the first thing is that you respond to the notice with a capabilities statement saying “yes, we have experience in doing what you want”–there is a list of specifics in the original notice.  Then sign up for FedBizOps and follow the announcement so you can get changes and the RFP when it comes out.



Posted in NIST, Outsourcing | 5 Comments »

Ed Bellis’s Little SCAP Project

Posted March 19th, 2009

Way back in the halcyon days of 2008, Dan Philpott, Chris Burton, Ian Charters, and I went to the NIST SCAP Conference.  Just by a strange coincidence, Ed Bellis threw out a twit along the lines of “wow, I wish there was a way to import and export all this vulnerability data” and I replied back with “Um, you mean like SCAP?”

Fast forward 6 months.  Ed Bellis has been busy: he delivered a presentation on the subject at SnowFROC 2009 in Denver.

So some ideas I have about what Ed is doing:

#1 This vulnerability correlation and automation should be part of vulnerability assessment (VA) products.  In fact, most VA products include some kind of ticketing and workflow nowadays if you get the “enterprise edition”. That’s nice, but…

#2 The VA industry is a broken market when it comes to workflow compatibility.  Everybody wants to sell you *their* product to be the authoritative manager. That’s cool and all, but what I really need is connectors to your competitors’ products so that I can have one database of vulnerabilities, one set of charts to show my auditors, and one trouble ticket system. SCAP helps here, but only for static, bulk data transfers, and that gets ugly really quickly.
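To illustrate what those connectors actually have to do, here’s a minimal sketch that normalizes findings from two scanners into one record per host/CVE pair.  The input layouts are invented stand-ins; real connectors would parse each vendor’s export (or its SCAP output) instead.

```python
# Sketch: merge findings from two VA products into one record keyed on
# (host, CVE).  Both input formats are invented stand-ins for vendor exports.

from collections import defaultdict

scanner_a = [  # hypothetical export from vendor A
    {"ip": "10.0.0.5", "cve": "CVE-2008-4250", "severity": 10.0},
    {"ip": "10.0.0.7", "cve": "CVE-2009-0075", "severity": 9.3},
]
scanner_b = [  # hypothetical export from vendor B, different field names
    {"host": "10.0.0.5", "cve_id": "CVE-2008-4250", "cvss": 10.0},
]

def normalize(findings, host_key, cve_key, score_key, source):
    """Map one vendor's field names onto a common (host, cve) -> record shape."""
    for f in findings:
        yield (f[host_key], f[cve_key]), {"cvss": f[score_key], "source": source}

merged = defaultdict(list)
for key, record in normalize(scanner_a, "ip", "cve", "severity", "A"):
    merged[key].append(record)
for key, record in normalize(scanner_b, "host", "cve_id", "cvss", "B"):
    merged[key].append(record)

# One row per host/CVE with every scanner that saw it: the single
# database of vulnerabilities the paragraph above is asking for.
for (host, cve), records in sorted(merged.items()):
    seen_by = ",".join(r["source"] for r in records)
    print(f"{host}  {cve}  cvss={records[0]['cvss']}  seen_by=[{seen_by}]")
```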

#3 Ed’s correlation and automation software is a perfect community project because it’s a conflict of interest for any VA vendor to write it themselves. And to be honest, I wouldn’t be surprised if there are a dozen skunkworks projects that people will admit to creating just in the comments section of this post. I remember 5 years ago trying to hack together some perl to take the output from the DISA SRR Scripts and aggregate it into a .csv.

#4 The web application security world needs to adopt SCAP. So far it’s just been the OS and shrinkwrapped application vendors and the whole race to detection and patching. Now the interesting part to me is that the market is all around tying vulnerabilities to specific versions of software and a patch, where when you get to the web application world, it’s more along the lines of one-off misconfigurations and coding errors. It takes a little bit of a mindshift in the vulnerability world, but that’s OK in my book.

#5 This solution is exactly what the Government needs and is exactly why SCAP was created. Imagine you’re the Federal Government with 3.5 million desktops; the only way you can manage all of those is through VA automation and a tool that aggregates information from various VA products across multiple zones of trust, environments, and even organizations.

#6 Help Ed out! We need this.



Posted in Technical, What Works | 4 Comments »

Comments on the Annual OMB Security Report to Congress

Posted March 11th, 2009

While you were looking the other way, OMB released their Fiscal Year 2008 Report to Congress on Implementation of The Federal Information Security Management Act of 2002.  Mostly it’s just the verbatim responses from the agencies and a rollup of the numbers with scarcely any analysis.

It’s interesting to contrast this with last year’s report which had a huge chunk of analysis.  In my cynical hours, I like to mentally replace “analysis” with “spin”, but not today.  =)

Another interesting thing is that since they published the actual responses, you can get outside analysis such as what Angela Gunn of BetaNews provides.

My opinion: metrics are good, raw data is better.

Government transparency in action?  Maybe.  New staffers at OMB? Also likely.

Another interesting and related article is this one from Federal Computer News on Government security metrics. Yes, they need to be reconsidered, but for the most part the existing metrics are aimed at the major provisions of FISMA the law, which is very high-level and very management-centric.  But hey, that’s what the law is supposed to provide; more on that later.



Posted in FISMA | No Comments »
