Building A Modern Security Policy For Social Media and Government

Posted December 13th, 2009 by

A small presentation that Dan Philpott and I put together for Potomac Forum about getting a sane social media policy out of your security staff. I also recommend reading something I put out a couple of months ago about Social Media Threats and Web 2.0.



Posted in FISMA, NIST, Outsourcing, Risk Management, Speaking | 4 Comments »

I’m on the OWASP Podcast

Posted October 1st, 2009 by

I sat down with Jim Manico a month or so ago when he was in DC and recorded an episode for the OWASP Podcast. It's now live; check it out.



Posted in FISMA, NIST, Public Policy, Rants, Speaking, The Guerilla CISO | No Comments »

OMB Wants a Direct Report

Posted August 28th, 2009 by

The big news in OMB's M-09-29 FY 2009 Reporting Instructions for the Federal Information Security Management Act and Agency Privacy Management is that, instead of fiddling with document files, reporting will now be done directly through an online tool. This has been covered elsewhere, and it is the one big change since last year. However, having less paper in the paperwork is not the only change.

Piles of Paper photo by °Florian.

So what will this tool be like? It is hard to tell at this point. Some information will be entered directly, but the system appears designed to accept uploads of some documents, such as those supporting M-07-16. As with the spreadsheets used for FY 2008, there will be separate questions for the Chief Information Officer, Inspector General, and Senior Agency Official for Privacy. Microagencies will still have abbreviated questions to fill out. Additional information on the automated tool, including full instructions and a beta version, will be available in August 2009.

Given that the required information has changed very little, the automated system is unlikely to significantly ease the reporting burden. It appears primarily designed to ease the data processing requirements for OMB. With Excel spreadsheets no longer holding the data, many concerns relating to file versions, data aggregation, and analysis are greatly eased.

It is worth noting that a common outcome of re-engineering systems to be more efficient is that managers look for ways to utilize the new efficiency. What does this mean? Now that OMB can easily analyze data that previously took a great amount of effort to process, they may want to improve what is reported. A great deal has been said over the years about the inefficiencies in the current reporting regime. This may be OMB's opportunity to start collecting more information that better reflects agencies' actual security posture. This is pure speculation, and other factors, such as the reporting burden on agencies, may moderate OMB's next steps, but it is worth considering.

One pleasant outcome of the implementation of this new automated tool is that the reporting deadline has been pushed back to November 18, 2009.

Agencies are still responsible for submitting document files to satisfy M-07-16. The automated tool does not appear to allow direct input of this information. However, the document requirements are slightly different. The breach notification policy document need only be submitted if it has changed. It is no longer sufficient to simply report progress on eliminating SSNs and reducing PII; an implementation plan and a progress update must be submitted. The requirement for a policy document covering rules of behavior and consequences has been removed.

In addition to the automated tool there are other, more subtle changes to OMB’s FY 2009 reporting. Let’s step through them, point by point:

10. It is reiterated that NIST guidance is required. This point has been expanded to state that, for legacy systems, agencies have one year to come into compliance with new material in NIST documents. For new systems, agencies are expected to be in compliance upon system deployment.

13 & 15. Wording indicating that disagreements on reports should be resolved prior to submission and that the agency head’s view will be authoritative have been removed. This may have been done to reduce redundancy as M-09-29’s preface indicates agency reports must reflect the agency head’s view.

52. The requirement for a central web page with working links to agency PIAs and Federal Register-published SORNs has been removed.

A complete side-by-side comparison of changes between the two documents is available at FISMApedia.org.

All in all, the changes to OMB's guidance this year will not change agencies' reporting burden significantly. And that may not be a bad thing.



Posted in FISMA, NIST, Public Policy | 1 Comment »

Note to the Data People: Give us Some Raw InfoSec Data

Posted August 24th, 2009 by

We have all these data wonks running around now in the information security field thanks to a couple of people (Jaquith, Shostack, Stewart, and our friends at Verizon Business) who brought us some books and some data.

Well, earlier this year, the Government started a website called Data.gov. This is much awesomeness, Viva Las Transparency! However, it's missing something very relevant to my interests: information security management data.

So, I want people to go to data.gov’s “request a dataset” page and request the following:

Complete responses from the Departments and Agencies to the FISMA reporting requirements for FY2004-2009 based on OMB Memoranda 04-25, 05-15, 06-20, 07-19, 08-21, and 09-29.

Raw incident data for years 2005-2007 as reported to OMB and summarized in their report to Congress on FY2007 FISMA performance and published at http://www.whitehouse.gov/omb/inforeg/reports/2007_fisma_report.pdf

Raw incident data for years 2007 and later in any type and format similar to the Verizon Data Breach Incident Report available at http://www.verizonbusiness.com/resources/security/reports/2009_databreach_rp.pdf

This information is necessary for researchers to study the effectiveness of information security management techniques and regulatory schemes, and for industry to propose changes to national-level information security management frameworks and legislation such as FISMA. Most of this information has only been released in summary format to Congress, and the release of the complete dataset on data.gov would greatly aid the information security community.

It might be a fool’s errand at this point, but it doesn’t hurt to ask, and it only takes a couple of minutes to do.  =)



Posted in Public Policy | 6 Comments »

Random Thoughts on “The FISMA Challenge” in eHealthcare

Posted August 4th, 2009 by

OK, so there's this article being bounced all over the place. The basic synopsis is that FISMA is keeping the government from doing any kind of electronic health records stuff because FISMA requirements extend to health care providers and researchers when they take data from the Government.

Read one version of the story here

So the whole solution is that, well, we can’t extend FISMA to eHealthcare when the data is owned by the Government because that security management stuff gets in the way.  And this post is about why they’re wrong and right, but not in the places that they think they are.

Government agencies need to protect the data that they have by providing "adequate security". I've covered this in a bazillion places already. Somewhere along the line we let the definition of adequate security become "You have to play by our rulebook", which is complete and utter bunk. The framework is an expedient and a level-setting exercise across the government. It's not made to be one-size-fits-all, but is instead meant to be tailored to each individual case.

The Government Information Security Trickle-Down Effect is the name I use for FISMA/NIST Framework requirements being transferred from the Government to service providers, whether they're in healthcare or IT or making screws that can sometimes be used on B-2 bombers. It will hit you if you take Government data, but only because you have no regulatory framework of your own with which to demonstrate that you have "adequate security". In other words, if you provide a demonstrable level of data protection equal to or superior to what the Government provides, then you should reasonably be able to take the Government data; the trick is finding the right "Esperanto" to demonstrate your security foo.

If only there were a regulatory scheme already in place that we could use to hold the healthcare industry accountable. Oh wait, there is: HIPAA. Granted, HIPAA doesn't really have a lot of teeth and its effects are only arguably demonstrable, but it does fill most of the legal requirement to provide "adequate security", and that's the important thing, and more importantly, what FISMA requires.

And this is my problem with this whole string of articles: the power vacuum has claimed eHealthcare. Seriously, there should be somebody in charge of the data who can make a decision on what kinds of protections they want for it. In this case, there are plenty of people with different ideas on what that level of protection should be, so they are asking OMB for an official ruling. If you go to OMB asking for their guidance on applying FISMA to eHealthcare records, you get what you deserve, which is a "Yes, it's in scope, how do you think you should do this?"

So what the eHealthcare people are really looking for is a set of firm requirements from their handlers (aka OMB) on how to hold service providers accountable for the data that they are giving them. This isn't a binary question of whether FISMA applies to eHealthcare data (yes, it does); it's a question of "how much is enough?" or even "what level of compensating controls do we need?"

But then again, we're beaten down by compliance at this point. I know I feel it from time to time. After you've been beaten badly for years, all you want is for the batterer to tell you what you need to do so the hurting will stop.

So for the eHealthcare agencies, here is a solution for you.  In your agreements/contracts to provide data to the healthcare providers, require the following:

  • Provider shall produce annual statements of HIPAA compliance
  • Provider shall be certified under a security management program such as ISO 27001, SAS 70 Type II, or even PCI-DSS
  • Provider shall report any incident resulting in a potential data breach of 500 or more records within 24 hours
  • Financial penalties for data breaches based on number of records
  • Provider shall allow the Government to perform risk assessments of their data protection controls

That should be enough compensating controls to provide "adequate security" for your eHealthcare data. You can even put a line through any of these that are too draconian or high-cost. Take that to OMB, tell them how you're doing it, and ask how they would like to spend the taxpayers' money to do anything other than this.

Case Files and Medical Records photo by benuski.



Posted in FISMA, Rants | 1 Comment »

Federated Vulnerability Management

Posted July 14th, 2009 by

Why hello there, private sector folks. It's no big surprise: I work in the US Federal Government space, and we have some unique challenges of scale. Glad to meet you; I hear you've got the same problems, only not at the same kind of scale as the US Federal Government. Sit back, read, and learn.

You see, I work in places where everything is siloed into different environments. We have crazy zones for databases, client-facing DMZs, management segments, and then the federal IT architecture itself: a loose federation of semi-independent enterprises that are rapidly coming together in strange ways under the wonderful initiative known as "The TIC". We're also one of the most heavily audited sectors in IT.

And yet, the way we manage patch and vulnerability information is something out of the mid-80’s.

Current State of Confusion

Our current patch management information flow goes something like this:

  • Department SOC/US-CERT/CISO's Office releases a vulnerability alert (IAVA, ISVM, something along those lines)
  • Somebody makes a spreadsheet with the following on it:
    • Number of places with this vulnerability.
    • How many have been fixed.
    • When you’re going to have it fixed.
    • A percentage of completion.
  • We then manage by spreadsheets until the spreadsheets say “100%”.
  • The spreadsheets are aggregated somewhere.  If we’re lucky, we have some kind of management tool that we dump our info into like eMASS.
  • We wonder why we get pwned (by either haxxorz or the IG).

Now for how we manage vulnerability scan information:

  • We run a tool.
  • The tool spits out a .csv or, worse, a .html.
  • We pull up the .csv in Excel and add some columns.
  • We assign dates and responsibilities to people.
  • We have a weekly meeting to go over what's been completed.
  • When we finish something, we provide evidence of what we did.
  • We still really don’t know how effective we were.

Problems with this approach:

  • It’s too easy to game.  If I’m doing reporting, the only thing really keeping me reporting the truth is my sense of ethics.
  • It’s slow as hell.  If somebody updates a spreadsheet, how does the change get echoed into the upstream spreadsheets?
  • It isn't accurate at any given moment in time, mostly because things change quicker than the process can keep up.  What this means is that we always look like liars who are trying to hide something because our spreadsheet doesn't match up with the "facts on the ground".
  • It doesn't tie in with our other management tools like Plans of Action and Milestones (POA&M).  Those are usually managed in a different application than the technical parts, which means we need a human with a spreadsheet to act as the intermediary.

So this is my proposal to “fix” government patch and vulnerability management: Federated Patch and Vulnerability Management through SCAP.

Trade Federation Battle Droid photo by Stéfan.  Roger, Roger, SCAP means business.

Whatchu Talkin’ Bout With This “Federated” Stuff, Willis?

This is what I mean, my “Plan for BSOFH Happiness”:

Really, what I want is for every agency to have an "orchestrator" a la Ed Bellis's little SCAP tool of horrors. =)  Then we federate them so that information can roll up to a top-level dashboard for the entire executive branch.

In my beautiful world, every IT asset reports into a patch management system of some sort. Servers, workstations, laptops, all of it. Yes, databases too. Printers? Yep. If we can get network devices to report config info via an SCAP-enabled NMS, let's get that pushing content into our orchestrator. We don't even really have to push patches using these tools; what I'm primarily concerned with at this point is having the ability to pull reports.

I group all of my IT assets in my system into a bucket of some sort in the orchestrator.  That way, we know who’s responsible when something has a problem.  It also fits into our “system” concept from FISMA/C&A/Project Management/etc.

We do periodic network scanning to identify everything on our network and feed the results into the orchestrator. We do regular vulnerability scans, and any findings feed into the orchestrator. The more data, the better the aggregate information we can get.

Our orchestrator correlates network scans with patch management status and gives us a ticket/alert/whatever where we have unmanaged devices. Yes, most enterprise management tools do this today, but the more scan results I have feeding them, the better chance I have of finding all my assets. Thanks to our crazy segmented architecture models, we have all these independent zones that break patch, vulnerability, and configuration management as the rest of the IT world performs it. Flat is better for management, but failing that, I'll take SCAP hierarchies of reporting.
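To make that correlation step concrete, here is a minimal sketch in Python. The host lists, addresses, and field names are made up for illustration; a real orchestrator would pull them from SCAP-aware scanner exports and the patch management tool's asset inventory.

```python
# Hypothetical sketch: flag devices the network scan saw that the patch
# management system does not know about. Hosts and addresses are invented.

def find_unmanaged(scanned_hosts, managed_hosts):
    """Return hosts seen on the wire but absent from patch management."""
    return sorted(set(scanned_hosts) - set(managed_hosts))

# Feed from the network/vulnerability scanner
scanned = ["10.1.1.5", "10.1.1.7", "10.1.2.20", "10.1.2.31"]
# Feed from the patch management tool's asset inventory
managed = ["10.1.1.5", "10.1.2.20"]

for host in find_unmanaged(scanned, managed):
    # In the orchestrator this would open a ticket/alert instead of printing
    print(f"Unmanaged device: {host} -- open a ticket and assign an owner")
```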

The Department takes a National Vulnerability Database feed and pushes down to the Agencies what they used to send in an IAVA, only they also send down the check to see if your system is vulnerable.  My orchestrator automagically tests and reports back on status before I’m even awake in the morning.
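Here is a rough sketch of what "the check comes down with the alert" could look like. The alert format, identifier, and software inventory below are invented for illustration; real content would be OVAL/SCAP definitions evaluated by a validated scanner rather than a hand-rolled version comparison.

```python
# Hypothetical sketch: a pushed-down alert carries the product and fixed
# version, and the orchestrator compares it against each asset's inventory.

def parse_version(version):
    """Turn '2.2.11' into (2, 2, 11) so versions compare numerically."""
    return tuple(int(part) for part in version.split("."))

# Alert pushed down from the Department (made-up format and identifier)
alert = {"id": "ALERT-EXAMPLE-001", "product": "httpd", "fixed_in": "2.2.12"}

# Installed-software inventory reported by my assets (also made up)
inventory = {"web01": {"httpd": "2.2.9"}, "web02": {"httpd": "2.2.14"}}

for host, software in inventory.items():
    installed = software.get(alert["product"])
    if installed and parse_version(installed) < parse_version(alert["fixed_in"]):
        # The orchestrator would drop this into workflow before I wake up
        print(f"{host}: {alert['product']} {installed} is vulnerable per {alert['id']}")
```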

I get hardening guides pushed from the Department or Agency in SCAP form, then pull an audit on my IT assets and have the differences automagically entered into my workflow and reporting.
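As a sketch of that audit-to-workflow step, assume a made-up benchmark of expected settings and a made-up audit result; in practice the content would be XCCDF/OVAL benchmarks evaluated by an SCAP-capable tool, with only the deltas pushed into the ticketing system.

```python
# Hypothetical sketch: diff the hardening guide against what the audit found
# and turn each deviation into a workflow item. Settings are invented.

benchmark = {                 # what the Department's hardening guide expects
    "password_max_age": "60",
    "ssh_permit_root_login": "no",
    "auditd_enabled": "yes",
}

audit_result = {              # what the audit pulled off one of my hosts
    "password_max_age": "90",
    "ssh_permit_root_login": "no",
    "auditd_enabled": "no",
}

tickets = [
    {"setting": key, "expected": expected, "found": audit_result.get(key, "MISSING")}
    for key, expected in benchmark.items()
    if audit_result.get(key) != expected
]

for ticket in tickets:
    print(f"Ticket: {ticket['setting']} is {ticket['found']}, "
          f"benchmark wants {ticket['expected']}")
```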

I become a ticket monkey.  Everything is in workflow.  I can be replaced with somebody less expensive and can now focus on finding the answer to infosec nirvana.

We provide a feed upstream to our Department; the Department provides a feed to somebody (NCSD/US-CERT/OMB/Cybersecurity Coordinator) who now has the view across the entire Government. Want to be bold? Let Vivek K and the Sunlight Foundation at the data feeds and have a truly open and transparent "Unbreakable Government 2.1". Who needs FISMA report cards when our vulnerability data is on display?
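The rollup itself is simple arithmetic. Here is a back-of-the-napkin sketch assuming each orchestrator reports patched-versus-total counts per system; the agencies, systems, and numbers are made up, and the real feed would be SCAP result documents rather than hand-entered tuples.

```python
# Hypothetical sketch: aggregate system-level patch status up to the agency
# level and then to a Department-wide number for the dashboard.

from collections import defaultdict

# (agency, system, patched, total) as reported by each agency's orchestrator
system_reports = [
    ("Agency A", "HR system", 180, 200),
    ("Agency A", "Public web", 95, 100),
    ("Agency B", "Case management", 300, 400),
]

agency_totals = defaultdict(lambda: [0, 0])   # agency -> [patched, total]
for agency, _system, patched, total in system_reports:
    agency_totals[agency][0] += patched
    agency_totals[agency][1] += total

for agency, (patched, total) in sorted(agency_totals.items()):
    print(f"{agency}: {patched / total:.0%} patched")

dept_patched = sum(patched for patched, _total in agency_totals.values())
dept_total = sum(total for _patched, total in agency_totals.values())
print(f"Department-wide: {dept_patched / dept_total:.0%} patched")
```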

Keys to Making Federated Patch and Vulnerability Management Work

Security policy that requires SCAP-compatible vulnerability and patch management products.  Instead of parroting back 800-53, please give me a requirement in your security policy that every patch and vulnerability management tool that we buy MUST BE SCAP-CERTIFIED.  Yes, I know we won’t get it done right now, but if we get it in policy, then it will trickle down into product choices eventually.  This is compliance I can live with, boo-yeah!

Security architecture models (FEA, anyone?) that show federated patch and vulnerability management deployments as part of their standard configuration. OK, the firewall pictures and zones of trust are fine, I understand what you're saying; now give me patch and vulnerability management flows across all the zones so I can do the other 85% of my job.

Network traffic from the edges of the hierarchy to…somewhere. OK, you just need network connectivity throughout the hierarchy to aggregate and update patch and vulnerability information; this is basic data flow stuff. US-CERT in a future incarnation could be the top-level aggregator, maybe. Right now I would be happy building aggregation up to the Department level, because that's the level at which we're graded.

Understanding. Hey, I can't fix everything all the time; what I'm doing is using automation to make the job of fixing things easier through aggregation, correlation, status reporting, and dashboarding. These are all concepts behind good IT management, so why shouldn't we apply them to security management also? Yes, I'll have times when I'm behind on something or other, but guess what, I'm behind today and you just don't know it. However, with near-real-time reporting, we need a culture shift away from trying to police each other all the time and toward understanding that sometimes nothing is really perfect.

Patch and vulnerability information is all-in. It has to be reporting in at 100% across the board, or you don't have anything; back to spreadsheet hell for you. And simply put, why don't you have everything in the patch management system already? Come on, that's not a good enough reason.

POA&Ms need to be more fluid.  Face it, with automated patch and vulnerability management, POA&Ms become more like trouble tickets.  But yes, that’s much awesome, smaller, easily-satisfied POA&Ms are much easier to manage provided that the administrative overhead for each of these is reduced to practically nothing… just like IT trouble tickets.

Regression testing and providing proof become easier because it's all automated. Once you fix something and it's marked in the aggregator as completed, it gets slid into the queue for retesting, and the results become the evidence.
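A tiny sketch of that retest loop follows; the check function is a stand-in for re-running the original SCAP or scanner check, and the finding format is invented for illustration.

```python
# Hypothetical sketch: fixed findings go into a retest queue, get rescanned,
# and the rescan result is stored as the evidence artifact.

from collections import deque

retest_queue = deque()

def mark_fixed(finding):
    """An admin claims the finding is remediated; queue it for verification."""
    retest_queue.append(finding)

def retest(finding, run_check):
    """Re-run the original check; the result doubles as the evidence."""
    still_vulnerable = run_check(finding)
    return {"finding": finding, "verified_fixed": not still_vulnerable}

mark_fixed({"host": "web01", "check": "httpd version"})

evidence = []
while retest_queue:
    finding = retest_queue.popleft()
    # run_check is a stand-in; this sketch always reports "no longer vulnerable"
    evidence.append(retest(finding, run_check=lambda f: False))

print(evidence)
```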

Interfaces with existing FISMA management tools. This one is tough. But we have a very well-entrenched software base geared around artifact management, POA&M management, and Security Test and Evaluation results. This class of software exists because none of the tool vendors really understand how the Government does security management, and I mean NONE of them. There have to be some weird, unnatural data import/export acts going on here to make the orchestrator of technical data match up with the orchestrator of management data, and this is the part that scares me in a federated world.

SCAP spreads to IT management suites.  They already have a footprint out there on everything, and odds are we’re using them for patch and configuration management anyway.  If they don’t talk SCAP, push the vendor to get it working.

Where Life Gets Surreal

Then I woke up and realized that if I provide my Department CISO with near-real-time patch and vulnerability management information, I suddenly have become responsible for patch and vulnerability management instead of playing "kick it to the contractors" and hiding behind working groups. It could be that if I get Federated Patch and Vulnerability Management off the ground, I've given my Department CISO the rope to hang me with.  =)

Somehow, somewhere, I’ve done most of what CAG was talking about and automated it.  I feel so… um… dirty.  Really, folks, I’m not a shill for anybody.



Posted in DISA, NIST, Rants, Technical | 12 Comments »
