DojoCon 2009 Presentation

Posted November 7th, 2009

For those of you who didn’t know the real purpose of DojoCon, it was to raise money and awareness for Hackers for Charity. If you like anything that is in this post, go to HFC and make a donation of time, equipment, tech support, and maybe money. If you’ve never heard of HFC because you’re not one of the “InfoSec Cool Kids”, now is your chance–go read about them.

Here’s the video of my DojoCon presentation. The microphone was off for the first couple of minutes, but I look pretty animated.

And then the compliance panel that I tried not to dominate:

And finally, my slides are up on slideshare:



Posted in FISMA, Speaking | 6 Comments »

The Guerilla CISO Rants: Don’t Write a System Security Plan

Posted October 1st, 2009

OK, I know you’re shocked…I’m saying something controversial.  But hear me out on this one, I’ll explain.

Now this is my major beef with the way we write SSPs today: the SSP is made up of information that is already contained in other artifacts, and I have to pay people to cut and paste it into an SSP template.  As practiced, we have a serious polyinstantiation problem: the same data lives in various lifecycle artifacts and gets cut-and-pasted into the SSP.  Every time you change the upstream document, you create a difference between that document and the SSP.

This is a practice I would like to change, but I can’t do it all by myself.

This is the skeleton outline of an SSP from Special Publication 800-18, the guide to writing an SSP:

  1. Information System Name/Title–On the investment/FISMA inventory, the Exhibit 300/53, etc
  2. Information System Categorization–usually on a FIPS-199 memorandum
  3. Information System Owner–In an assignment memo
  4. Authorizing Official–In an assignment memo
  5. Other Designated Contacts–In an assignment memo
  6. Assignment of Security Responsibility–In assignment memos
  7. Information System Operational Status–On the investment/FISMA inventory, the Exhibit 300/53, etc
  8. Information System Type–On the investment/FISMA inventory, the Exhibit 300/53, etc
  9. General System Description/Purpose–In the design document, Exhibit 300/53
  10. System Environment–Common controls not inside the scope of our system
  11. System Interconnections/Information Sharing–from Interconnection Security Agreements
  12. Related Laws/Regulations/Policies–Should be part of the system categorization but hardly ever is on templates
  13. Minimum Security Controls–800-53 controls descriptions which can easily be done in a Requirements Traceability Matrix
  14. Information System Security Plan Completion Date–specific to each document
  15. Information System Security Plan Approval Date–specific to each document

Now some of this has changed in practice a little bit–# 10 can functionally be replaced with a designation of common controls and hybrid controls.

So my line of thinking is that if we provide a 2-6-page system description with the names of the “guilty parties” and some inventory information, controls-specific Requirements Traceability Matrix, and a System Design Document, then we have the functional equivalent of an SSP.
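To make that concrete, here is a minimal sketch of what a controls-specific Requirements Traceability Matrix row could look like when it points at the upstream artifacts instead of duplicating them. The field names, control picks, and artifact references are purely my own illustration, not anything blessed by 800-18:

```python
import csv

# Hypothetical RTM rows: each 800-53 control points at the SDLC artifact that
# already documents it, instead of pasting that text into an SSP.
rtm_rows = [
    {"control": "AC-2", "title": "Account Management",
     "status": "Implemented",
     "source_artifact": "System Design Document v2.3, section 4.1"},
    {"control": "CA-3", "title": "Information System Connections",
     "status": "Implemented",
     "source_artifact": "Interconnection Security Agreement, Appendix B"},
    {"control": "CP-9", "title": "Information System Backup",
     "status": "Planned",
     "source_artifact": "POA&M item 2009-014"},
]

# Keep it as a plain CSV that lives next to the other project artifacts and
# gets regenerated whenever they change, instead of a polished Word document.
with open("rtm.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(rtm_rows[0]))
    writer.writeheader()
    writer.writerows(rtm_rows)
```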

Why have I declared an InfoSec fatwah against SSPs as currently practiced?

Well, my philosophy for operation is based on some concepts I’ve picked up through the years:

  • Why run when you can walk, why walk when you can sit, why sit when you can lay down.  There is a time to spend effort on determining what the security controls are for a project.  You need to have them documented but it’s not cost-effective to be worried about format, which we do probably too much of today.
  • Make it easy to do the right thing.  If we polyinstantiate security information, we have made something harder to maintain.  Easier to maintain means that it will get maintained instead of being shelfware.  I would rather have updated and accurate security information than overly verbose and well-polished documents that are inaccurate.
  • Security is not a “security guy thing”–most problems are actually a management and project team problem.  My idea uses their SDLC artifacts instead of security-specific versions of artifacts.  My idea puts the project problems back in the project space where they belong.
  • If I have a security engineer who has a finite amount of hours in a day, I have to choose what they spend their time on.  If it’s a matter of vulnerability mitigation, patching, etc, or correcting SSP grammar, I know what I want them to do.  Then again, I’m still an infantryman deep down inside and I realize I have biases against flowery writing.

Criticisms of not writing a dedicated SSP document:

“My auditors are used to seeing the information in the same format at someplace they worked previously.” Believe it or not, I hear this quite a bit.  My response is that if you make what I’m suggesting your standard for a security plan, then you’ve met all of the FISMA and 800-53 requirements, plus my personal requirement to “don’t do stupid stuff if you can help it”.

“My auditors will grill me to death if they have to page back and forth between several documents.”  I’ve heard this one too.  There are a couple of ways to deal with it.  One is to reference the source document in your 800-53 Requirements Traceability Matrix.  Most auditors at this point bring up that you need to reference the official name, date of publication, and specific page/section of the reference, and I think they need to get a life because they’ve taken us back to the maintainability problem.

“This is all too new-school and I can’t get over it”. Then you are a dinosaur and your kind deserves extinction.  =)


This blog post is for grecs at novainfosecportal.com who perked up instantly when I mentioned the concept months ago.  Finally got around to putting the text somewhere.

How to Plan the Perfect Dinner Party photo by kevindooley.



Posted in FISMA, NIST | 11 Comments »

OMB Wants a Direct Report

Posted August 28th, 2009

The big news in OMB’s M-09-29, FY 2009 Reporting Instructions for the Federal Information Security Management Act and Agency Privacy Management, is that instead of fiddling with document files, reporting will now be done directly through an online tool. This has been covered elsewhere, and it is the one big change since last year.  However, having less paper in the paperwork is not the only change.

Piles of Paper photo by °Florian.

So what will this tool be like? It is hard to tell at this point. Some information will be entered directly, but the system appears designed to accept uploads of some documents, such as those supporting M-07-16. Similar to the spreadsheets used for FY 2008, there will be separate questions for the Chief Information Officer, Inspector General, and Senior Agency Official for Privacy. Microagencies will still have abbreviated questions to fill out. Additional information on the automated tool, including full instructions and a beta version, will be available in August 2009.

Given that the required information has changed very little, the automated system is unlikely to significantly ease the reporting burden. It appears primarily designed to ease the data processing requirements for OMB. With Excel spreadsheets no longer holding the data, many concerns relating to file versions, data aggregation, and analysis are greatly eased.

It is worth noting that a common outcome of re-engineering a system to be more efficient is that managers look for ways to use the new efficiency. What does this mean? Now that OMB can easily analyze data that previously took a great amount of effort to process, it may want to improve what is reported. A great deal has been said over the years about the inefficiencies in the current reporting regime. This may be OMB’s opportunity to start collecting information that better reflects agencies’ actual security posture. This is pure speculation, and other factors, such as the reporting burden on agencies, may moderate OMB’s next steps, but it is worth consideration.

One pleasant outcome of the new automated tool is that the reporting deadline has been pushed back to November 18, 2009.

Agencies are still responsible for submitting document files to satisfy M-07-16; the automated tool does not appear to allow direct input of this information. However, the document requirements are slightly different. The breach notification policy document need only be submitted if it has changed. It is no longer sufficient to simply report progress on eliminating SSNs and reducing PII; an implementation plan and a progress update must be submitted. The requirement for a policy document covering rules of behavior and consequences has been removed.

In addition to the automated tool there are other, more subtle changes to OMB’s FY 2009 reporting. Let’s step through them, point by point:

10. It is reiterated that NIST guidance is required. This point has been expanded to state that for legacy systems, agencies have one year to come into compliance with new material in NIST documents. For new systems, agencies are expected to be in compliance upon system deployment.

13 & 15. Wording indicating that disagreements on reports should be resolved prior to submission and that the agency head’s view will be authoritative has been removed. This may have been done to reduce redundancy, as M-09-29’s preface indicates agency reports must reflect the agency head’s view.

52. The requirement for a central web page with working links to agency PIAs and Federal Register-published SORNs has been removed.

A complete side-by-side comparison of changes between the two documents is available at FISMApedia.org.

All in all, the changes to OMB’s guidance this year will not change agencies’ reporting burden significantly. And that may not be a bad thing.



Posted in FISMA, NIST, Public Policy | 1 Comment »

Federated Vulnerability Management

Posted July 14th, 2009

Why hello there, private sector folks.  It’s no big surprise: I work in the US Federal Government space, and we have some unique challenges of scale.  Glad to meet you; I hear you’ve got the same problems, only not at the same kind of scale as the US Federal Government.  Sit back, read, and learn.

You see, I work in places where everything is siloed into different environments.  We have crazy zones for databases, client-facing DMZs, management segments, and then the federal IT architecture itself: a loose federation of semi-independent enterprises that are rapidly coming together in strange ways under the wonderful initiative known as “The TIC”.  We’re also one of the most heavily audited sectors in IT.

And yet, the way we manage patch and vulnerability information is something out of the mid-’80s.

Current State of Confusion

Our current patch management information flow goes something like this:

  • Department SOC/US-CERT/CISOs Office releases a vulnerability alert (IAVA, ISVM, something along those lines)
  • Somebody makes a spreadsheet with the following on it:
    • Number of places with this vulnerability.
    • How many have been fixed.
    • When you’re going to have it fixed.
    • A percentage of completion
  • We then manage by spreadsheets until the spreadsheets say “100%”.
  • The spreadsheets are aggregated somewhere.  If we’re lucky, we have some kind of management tool that we dump our info into like eMASS.
  • We wonder why we get pwned (by either haxxorz or the IG).

Now for how we manage vulnerability scan information:

  • We run a tool.
  • The tool spits out a .csv or worse, a .html.
  • We pull up the .csv in Excel and add some columns.
  • We assign dates and responsibilities to people.
  • We have a weekly meeting to go over what’s been completed.
  • When we finish something, we provide evidence of what we did.
  • We still really don’t know how effective we were.

Problems with this approach:

  • It’s too easy to game.  If I’m doing reporting, the only thing really keeping me reporting the truth is my sense of ethics.
  • It’s slow as hell.  If somebody updates a spreadsheet, how does the change get echoed into the upstream spreadsheets?
  • It isn’t accurate at any given moment in time, mostly because things change quicker than the process can keep up.  What this means is that we always look like liars who are trying to hide something because our spreadsheet doesn’t match up with the “facts on the ground”.
  • It doesn’t tie in with our other management tools like Plans of Action and Milestones (POA&M).  Those usually are managed in a different application than the technical parts, which means we need a human with a spreadsheet to act as the intermediary.

So this is my proposal to “fix” government patch and vulnerability management: Federated Patch and Vulnerability Management through SCAP.

Trade Federation Battle Droid photo by Stéfan.  Roger, Roger, SCAP means business.

Whatchu Talkin’ Bout With This “Federated” Stuff, Willis?

This is what I mean, my “Plan for BSOFH Happiness”:

Really what I want is for every agency to have an “orchestrator” a la Ed Bellis’s little SCAP tool of horrors. =)  Then we federate them so that information can roll up to a top-level dashboard for the entire executive branch.

In my beautiful world, every IT asset reports into a patch management system of some sort.  Servers, workstations, laptops, all of it.  Yes, databases too.  Printers–yep.  If we can get network devices to report config info via an SCAP-enabled NMS, let’s get that pushing content into our orchestrator too. We don’t even really have to push patches using these tools–what I’m primarily concerned with at this point is the ability to pull reports.

I group all of my IT assets in my system into a bucket of some sort in the orchestrator.  That way, we know who’s responsible when something has a problem.  It also fits into our “system” concept from FISMA/C&A/Project Management/etc.

We do periodic network scanning to identify everything on our network and feed the results into the orchestrator.  We do regular vulnerability scans, and any findings feed into the orchestrator.  The more data, the better aggregate information we can get.

Our orchestrator correlates network scans with patch management status and gives us a ticket/alert/whatever where we have unmanaged devices.  Yes, most enterprise management tools do this today, but the more scan results I have feeding them, the better chance I have at finding all my assets.  Thanks to our crazy segmented architecture models, we have all these independent zones that break patch, vulnerability, and configuration management as the rest of the IT world performs it.  Flat is better for management, but failing that, I’ll take SCAP hierarchies of reporting.
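Here’s a toy sketch of that correlation step, assuming the orchestrator can already pull a list of assets the patch management tools claim and a list of addresses the discovery scan actually saw. All the addresses below are made up for illustration:

```python
# Hypothetical exports: what the patch management system claims to manage,
# and what the latest discovery scan actually saw on the wire.
managed_hosts = {"10.1.1.10", "10.1.1.11", "10.1.2.20"}
scanned_hosts = {"10.1.1.10", "10.1.1.11", "10.1.2.20", "10.1.2.99"}

def find_unmanaged(scanned, managed):
    """Return hosts seen on the network that no patch management tool claims."""
    return sorted(scanned - managed)

for host in find_unmanaged(scanned_hosts, managed_hosts):
    # A real orchestrator would cut a ticket/alert here; printing stands in for that.
    print(f"Unmanaged device, needs an owner: {host}")
```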

The Department takes a National Vulnerability Database feed and pushes down to the Agencies what they used to send in an IAVA, only they also send down the check to see if your system is vulnerable.  My orchestrator automagically tests and reports back on status before I’m even awake in the morning.
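In reality the check pushed down would be an SCAP/OVAL definition; this sketch fakes it with a flat list of affected software versions just to show the matching step and the status report that rolls back upstream. Everything here (the alert format, the inventory, the hostnames) is invented for illustration:

```python
import json

# Hypothetical alert pushed down from the Department (a real one would carry
# an OVAL check, not a flat list of affected versions).
alert = {
    "alert_id": "IAVA-2009-A-0001",
    "affected_software": ["openssl 0.9.8g", "openssl 0.9.8h"],
}

# Hypothetical asset inventory as reported by the patch management agents.
inventory = {
    "web01": ["apache 2.2.11", "openssl 0.9.8g"],
    "db01": ["oracle 10.2", "openssl 0.9.8k"],
}

# Match the alert against what each host says it is running.
vulnerable = {
    host: sorted(set(software) & set(alert["affected_software"]))
    for host, software in inventory.items()
    if set(software) & set(alert["affected_software"])
}

# This is the status report that would roll back up to the Department.
print(json.dumps({"alert_id": alert["alert_id"], "vulnerable_hosts": vulnerable}, indent=2))
```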

I get hardening guides pushed from the Department or Agency in SCAP form, then pull an audit on my IT assets and have the differences automagically entered into my workflow and reporting.

I become a ticket monkey.  Everything is in workflow.  I can be replaced with somebody less expensive and can now focus on finding the answer to infosec nirvana.

We provide a feed upstream to our Department, and the Department provides a feed to somebody (NCSD/US-CERT/OMB/Cybersecurity Coordinator) who now has the view across the entire Government.  Want to be bold?  Let Vivek K and the Sunlight Foundation at the data feeds and have a truly open and transparent “Unbreakable Government 2.1”.  Who needs FISMA report cards when our vulnerability data is on display?

Keys to Making Federated Patch and Vulnerability Management Work

Security policy that requires SCAP-compatible vulnerability and patch management products.  Instead of parroting back 800-53, please give me a requirement in your security policy that every patch and vulnerability management tool that we buy MUST BE SCAP-CERTIFIED.  Yes, I know we won’t get it done right now, but if we get it in policy, then it will trickle down into product choices eventually.  This is compliance I can live with, boo-yeah!

Security architecture models (FEA anyone?) that show federated patch and vulnerability management deployments as part of their standard configuration.  OK, I understand what you’re saying with the firewall pictures and zones of trust; now give me patch and vulnerability management flows across all the zones so I can do the other 85% of my job.

Network traffic from the edges of the hierarchy to…somewhere.  OK, you just need network connectivity throughout the hierarchy to aggregate and update patch and vulnerability information, this is basic data flow stuff.  US-CERT in a future incarnation could be the top-level aggregator, maybe.  Right now I would be happy building aggregation up to the Department level because that’s the level at which we’re graded.

Understanding.  Hey, I can’t fix everything all the time–what I’m doing is using automation to make the job of fixing things easier through aggregation, correlation, status reporting, and dashboarding.  These are all concepts behind good IT management, so why shouldn’t we apply them to security management also?  Yes, I’ll have times when I’m behind on something or another, but guess what, I’m behind today and you just don’t know it.  However, with near-real-time reporting, we need a culture shift away from trying to police each other up all the time toward understanding that sometimes nothing is really perfect.

Patch and vulnerability information is all-in.  It has to be reporting 100% across the board, or you don’t have anything–back to spreadsheet hell for you.  And simply put, why don’t you have everything in the patch management system already?  Come on, that’s not a good enough reason.

POA&Ms need to be more fluid.  Face it, with automated patch and vulnerability management, POA&Ms become more like trouble tickets.  And yes, that’s much awesome: smaller, easily-satisfied POA&Ms are much easier to manage, provided that the administrative overhead for each of them is reduced to practically nothing… just like IT trouble tickets.

Regression testing and providing proof becomes easier because it’s all automated.  Once you fix something and it’s marked in the aggregator as completed, it gets slid into the queue for retesting, and the results become the evidence.
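A rough sketch of that retest loop, with the actual scanner call stubbed out. None of the field names or statuses come from a real tool; they just show how a finding marked fixed slides into the retest queue and how the retest result doubles as the evidence:

```python
from datetime import date

def rescan(host, check_id):
    """Stub for the real scan job; an orchestrator would kick off the scanner
    and parse its SCAP results instead of returning a canned answer."""
    return {"host": host, "check_id": check_id, "still_vulnerable": False,
            "scanned_on": date.today().isoformat()}

findings = [
    {"host": "web01", "check_id": "CVE-2009-1234", "status": "fixed-pending-retest"},
    {"host": "db01", "check_id": "CVE-2009-5678", "status": "open"},
]

# Anything marked fixed gets retested automatically; the retest result is
# attached to the finding as the closure evidence.
for finding in findings:
    if finding["status"] == "fixed-pending-retest":
        result = rescan(finding["host"], finding["check_id"])
        finding["evidence"] = result
        finding["status"] = "open" if result["still_vulnerable"] else "closed"

print(findings)
```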

Interfaces with existing FISMA management tools.  This one is tough.  But we have a very well-entrenched software base geared around artifact management, POA&M management, and Security Test and Evaluation results.  This class of software exists because none of the tools vendors really understand how the Government does security management, and I mean NONE of them.  There have to be some weird, unnatural data import/export acts going on here to make the orchestrator of technical data match up with the orchestrator of management data, and this is the part that scares me in a federated world.

SCAP spreads to IT management suites.  They already have a footprint out there on everything, and odds are we’re using them for patch and configuration management anyway.  If they don’t talk SCAP, push the vendor to get it working.

Where Life Gets Surreal

Then I woke up and realized that if I provide my Department CISO with near-real-time patch and vulnerability management information, I suddenly become responsible for patch and vulnerability management instead of playing “kick it to the contractors” and hiding behind working groups.  It could be that if I get Federated Patch and Vulnerability Management off the ground, I’ve given my Department CISO the rope to hang me with.  =)

Somehow, somewhere, I’ve done most of what CAG was talking about and automated it.  I feel so… um… dirty.  Really, folks, I’m not a shill for anybody.



Posted in DISA, NIST, Rants, Technical | 12 Comments »

GAO’s 5 Steps to “Fix” FISMA

Posted July 2nd, 2009

Letter from GAO on how Congress can fix FISMA.  And oh yeah, the press coverage on it.

Now supposedly this was in response to an inquiry from Congress about “Please comment on the need for improved cyber security relating to S.773, the proposed Cybersecurity Act of 2009.”  This is S.773.

GAO is mixing issues and has missed the mark on what Congress asked for.  S.773 is all about protecting critical infrastructure.  It only rarely mentions government internal IT issues.  S.773 has nothing at all to do with FISMA reform.  However, GAO doesn’t have much expertise in cybersecurity outside of the Federal Agencies (they have some, but I would never call it extensive), so they reported on what they know.

The GAO report used the often-cited metric of cybersecurity attacks against Government IT systems growing from “5,503 incidents reported in fiscal year 2006 to 16,843 incidents in fiscal year 2008” as proof that the agencies are not doing anything to fix the problem.  I’ve questioned these figures before; the growth has more to do with the measurement problem and increased reporting requirements than with an actual increase in attacks.  Truth be told, nobody knows if the attacks are increasing and, if so, at what rate.  I would guess they’re increasing, but we don’t know, so quit citing some “whacked” metric as proof.

Reform photo by shevy.

GAO’s recommendations for FISMA Reform:

Clarify requirements for testing and evaluating security controls.  In other words, the auditing shall continue until the scores improve.  Hate to tell you this, but really all you can test at the national level is whether the FISMA framework is in place; the execution of the framework (and by extension, whether an agency is secure or not) is largely untestable using any kind of a framework.

Require agency heads to provide an assurance statement on the overall adequacy and effectiveness of the agency’s information security program.  This harkens back to the accounting roots of GAO.  Basically, what we’re talking about here is for the agency head to attest that his agency has made the best effort it can to protect its IT.  I like part of this because part of what’s missing is “executive support” for IT security.  To be honest, though, most agency heads aren’t IT security dweebs; they would be signing an assurance statement based upon what their CIO/CISO put in the executive summary.

Enhance independent annual evaluations.  This has significant cost implications.  Besides, we’re getting more and more evaluations as time goes on with an increase in audit burden.  IE, in the Government IT security space, how much of your time is spent providing proof to auditors versus building security?  For some people, it’s their full-time job.

Strengthen annual reporting mechanisms.  More reporting.  I don’t think it needs to get strengthened, I think it needs to get “fixed”.  And by “fixed” I mean real metrics.  I’ve touched on this at least a hundred times, go check out some of it….

Strengthen OMB oversight of agency information security programs.  This one gives me brain-hurt.  OMB has exactly the amount of oversight that they need to do their job.  Just like more auditing, if you increase the oversight and the people doing the execution have the same amount of people and the same amount of funding and the same types of skills, do you really expect them to perform differently?

Rybolov’s synopsis:

When the only tool you have is a hammer, every problem looks like a nail, and I think that’s what GAO is doing here.  Since performance in IT security is obviously down, they suggest that more auditing and oversight will help.  But then again, at what point does the audit burden tip to the point where nobody is really doing any work at all except for answering to audit requests?

Going back to what Congress really asked for, we run up against a problem.  There isn’t a huge set of information about how the rest of the nation is doing with cybersecurity.  There’s the Verizon DBIR, the Data Loss DB, some surveys, and that’s about it.

So really, when you ask GAO to find out what the national cybersecurity situation is, all you’re going to get is a bunch of information about how government IT systems line up and maybe some anecdotes about critical infrastructure.

Coming to a blog near you (hopefully soon): Rybolov’s 5 steps to “fix” FISMA.



Posted in FISMA | 2 Comments »

Why We Need PCI-DSS to Survive

Posted June 9th, 2009

And by “We”, I mean the security industry as a whole.  And yes, this is your public-policy lesson for today, let me drag my soapbox over here and sit for a spell while I talk at you.

By “Survive”, I mean that we need some kind of self-regulatory framework that fulfills the niche that PCI-DSS occupies currently. Keep reading, I’ll explain.

And the “Why” is a magical phrase, everybody say it after me: self-regulatory organization.  In other words, the IT industry (and the Payment Card Industry) needs to regulate itself before it crosses the line into being considered for statutory regulation (ie, making a law) by the Federal Government.

Remember the PCI-DSS hearings with the House Committee on Homeland Security (AKA the Thompson Committee)?  All the Security Twits were abuzz about it, and it did my heart great justice to hear all the cool kids become security and public policy wonks at least for an afternoon.  Well, there is a little secret here, and that is that when Congress gets involved, they’re gathering information to determine if they need to regulate an industry.  That’s about all Congress can do: make laws that you (and the Executive Branch) have to follow, maybe divvy up some tax money, and bring people in to testify.  Other than that, it’s just positioning to gain favor with other politicians and maybe some votes in the next election.

Regulation means audits and more compliance.  They go together like TCP and IP.  Most regulatory laws have at least some designation for a party who will perform oversight.  They have to do this because, well, if you’re not audited/assessed/evaluated/whatever, then it’s really an optional law, which doesn’t make sense at all.

Yay Audits photo by joebeone.

Another magical phrase that the public policy sector can share with the information security world: audit burden.  Audit burden is how much a company or individual pays both in direct costs (paying the auditors) and in indirect costs (babysitting the auditors, producing evidence for the auditors, taking people away from making money to talk to auditors, “audit requirements”, etc).  I think we can all agree that low audit burden is good and high audit burden is bad.  In fact, I think one of the problems with FISMA as implemented is that it has a high audit burden with moderately tangible results. But I digress, this post is about PCI-DSS.

There’s even a concept mulling around in the back of my head for a metric that compares the audit burden of a framework to the amount of security it provides and the amount of assurance it provides against statutory regulation.  It almost sounds like the start of a balanced scorecard for security management frameworks; now if I could get @alexhutton to jump on it, his quant brain would churn out great things in short order.
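Just to show the shape of the thing (every number and weight below is invented; this is nowhere near a validated metric), it might start out as simple as benefit per unit of audit burden:

```python
def audit_value_ratio(security_benefit, regulatory_relief, direct_cost, indirect_cost):
    """Toy scorecard: (security delivered + assurance against statutory
    regulation) per unit of audit burden.  All four inputs are subjective
    scores you would have to define and calibrate yourself."""
    burden = direct_cost + indirect_cost
    return (security_benefit + regulatory_relief) / burden if burden else float("inf")

# Made-up example scores for two hypothetical frameworks.
print(audit_value_ratio(security_benefit=7, regulatory_relief=8, direct_cost=5, indirect_cost=6))
print(audit_value_ratio(security_benefit=5, regulatory_relief=9, direct_cost=8, indirect_cost=10))
```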

But this is the lesson for today: self-regulation is preferable to legislation.

  • Self-regulation is defined by people in the industry.  Think about the State Bar Association setting the standards for who is allowed to practice law.
  • Standards ideally become codified versions of “best practices”.  OK, this is if they’re done correctly, more to follow.
  • Standards are more flexible than laws.  As hard/cumbersome as it is to change a standard, the time involved in changing a law is prohibitive most of the time unless you’re running for reelection.
  • Standards sometimes can be “tainted” to force out competition; laws even more so.

The sad fact here is that if we don’t figure out as an industry how to make PCI-DSS or any other form of self-regulation work, Congress will regulate for us.  Don’t like PCI-DSS because of the audit burden?  Wait until you have a law that requires you to do the same controls framework.  It will be the same thing, only with bigger penalties for failure, larger audit burdens to avoid the larger penalties, and larger industries created to satisfy the market demand for audit.  Come meet the new regulatory body, same as the old only bigger and meaner. =)

However, self-regulation works if you do it right, and by right I mean this:

  • The process is transparent and not the product of a secret back-room cabal.
  • Representation from all the stakeholders.  For PCI-DSS, that would be Visa/MasterCard, banks, processors, large merchants, small merchants, and some of the actual customers.
  • The standards committee knows how to compromise and come to a consensus.  IE, we can’t have full hard drive encryption, a WAF, code review, and sacrificing of chickens in the server room all at once, so we’ll make one of the four mandatory.
  • The regulatory organization has a grievance process for its constituency to present valid (AKA “not just more whining”) discrepancies in the standards and processes for clarification or consideration for change.
  • The standard is “owned” by every member of the constituency.  Right now, people governed by PCI-DSS don’t feel that the standard is their standard, that they have a say in what goes into it, or that they are the ones being helped by it.  Some of that is true, some of that is an image problem.  The way you combat this is by doing the things I mentioned in the previous bullets.

Hmm, sounds like making an ISO standard, which brings its own set of politics.

While we need some form of self-regulation, right now PCI-DSS and ISO 27001 are the closest that we have in the private sector.  Yeah, it sucks, but it sucks the least, just like our form of government.



Posted in Public Policy, Rants | 11 Comments »
