FedRAMP: It’s Here but Not Yet Here

Posted December 12th, 2011 by

Contrary to what you might hear this week in the trade press, FedRAMP is not fully unveiled, although there was some much-awaited progress: a memo came out from the Administration (PDF caveat) that lays down the authority and responsibility for the Program Management Office and sets some timelines.  This is good, and we needed it a year and a half ago.

However, people need to stop talking about how FedRAMP has solved all their problems because the entire program isn’t here yet.  Until you have a process document and a catalog of controls to evaluate, you don’t know how the program is going to help or hinder you, so all the press about it is speculation.



Posted in DISA, FISMA, NIST, Outsourcing, Risk Management | No Comments »

Federated Vulnerability Management

Posted July 14th, 2009 by

Why hello there, private sector folks.  It’s no big surprise: I work in the US Federal Government space, and we have some unique challenges of scale.  Glad to meet you; I hear you’ve got the same problems, only not at the same scale as the US Federal Government.  Sit back, read, and learn.

You see, I work in places where everything is siloed into different environments.  We have crazy zones for databases, client-facing DMZs, management segments, and then the federal IT architecture itself: a loose federation of semi-independent enterprises that are rapidly coming together in strange ways under the wonderful initiative known as “The TIC”.  We’re also one of the most heavily audited sectors in IT.

And yet, the way we manage patch and vulnerability information is something out of the mid-’80s.

Current State of Confusion

Our current patch management information flow goes something like this:

  • Department SOC/US-CERT/CISO’s office releases a vulnerability alert (IAVA, ISVM, something along those lines)
  • Somebody makes a spreadsheet with the following on it:
    • Number of places with this vulnerability.
    • How many have been fixed.
    • When you’re going to have it fixed.
    • A percentage of completion
  • We then manage by spreadsheets until the spreadsheets say “100%”.
  • The spreadsheets are aggregated somewhere.  If we’re lucky, we have some kind of management tool that we dump our info into like eMASS.
  • We wonder why we get pwned (by either haxxorz or the IG).

Now for how we manage vulnerability scan information:

  • We run a tool.
  • The tool spits out a .csv or, worse, a .html.
  • We pull up the .csv in Excel and add some columns.
  • We assign dates and responsibilities to people.
  • We have a weekly meeting to go over what’s been completed.
  • When we finish something, we provide evidence of what we did.
  • We still really don’t know how effective we were.

Problems with this approach:

  • It’s too easy to game.  If I’m doing reporting, the only thing really keeping me reporting the truth is my sense of ethics.
  • It’s slow as hell.  If somebody updates a spreadsheet, how does the change get echoed into the upstream spreadsheets?
  • It isn’t accurate at any given moment in time, mostly because things change quicker than the process can keep up.  What this means is that we always look like liars who are trying to hide something because our spreadsheet doesn’t match up with the facts on the ground.
  • It doesn’t tie into our other management tools like Plans of Action and Milestones (POA&M).  Those are usually managed in a different application than the technical parts, and this means that we need a human with a spreadsheet to act as the intermediary.

So this is my proposal to “fix” government patch and vulnerability management: Federated Patch and Vulnerability Management through SCAP.

Trade Federation Battle Droid photo by Stéfan.  Roger, Roger, SCAP means business.

Whatchu Talkin’ Bout With This “Federated” Stuff, Willis?

This is what I mean, my “Plan for BSOFH Happiness”:

Really what I want is for every agency to have an “orchestrator” a la Ed Bellis’s little SCAP tool of horrors. =)  Then we federate them so that information can roll up to a top-level dashboard for the entire executive branch.

In my beautiful world, every IT asset reports into a patch management system of some sort.  Servers, workstations, laptops, all of it.  Yes, databases too.  Printers–yep.  If we can get network devices reporting config info via an SCAP-enabled NMS, let’s get that pushing content into our orchestrator too.  We don’t even really have to push patches using these tools–what I’m primarily concerned with at this point is having the ability to pull reports.

I group all of my IT assets in my system into a bucket of some sort in the orchestrator.  That way, we know who’s responsible when something has a problem.  It also fits into our “system” concept from FISMA/C&A/Project Management/etc.

We do periodic network scanning to identify everything on our network and feed the results into the orchestrator.  We do regular vulnerability scans, and any findings feed into the orchestrator.  The more data, the better aggregate information we can get.

Our orchestrator correlates network scans with patch management status and gives us a ticket/alert/whatever where we have unmanaged devices.  Yes, most enterprise management tools do this today, but the more scan results I have feeding them, the better chance I have at finding all my assets.  Thanks to our crazy segmented architecture models, we have all these independent zones that break patch, vulnerability, and configuration management as the rest of the IT world performs it.  Flat is better for management, but failing that, I’ll take SCAP hierarchies of reporting.
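
A minimal sketch of that correlation step, assuming the two asset lists have already been exported somewhere (file names and formats are invented for illustration):

    # Correlate what the scanners see with what the patch system manages.
    # File names and formats are invented; a real orchestrator would pull
    # these lists over an API instead of out of flat files.
    def load_hosts(path):
        with open(path) as f:
            return {line.strip() for line in f if line.strip()}

    scanned = load_hosts("network_scan_hosts.txt")  # everything on the wire
    managed = load_hosts("patch_system_hosts.txt")  # everything we patch

    for host in sorted(scanned - managed):
        # Every one of these becomes a ticket: a device on the network
        # that no patch management system claims responsibility for.
        print("UNMANAGED:", host)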

The Department takes a National Vulnerability Database feed and pushes down to the Agencies what they used to send in an IAVA, only they also send down the check to see if your system is vulnerable.  My orchestrator automagically tests and reports back on status before I’m even awake in the morning.
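
In spirit, the automated part of that overnight test is just matching the advisory’s platform identifiers against your inventory.  A toy sketch (the advisory and the inventory are both invented, and a real orchestrator would run the full SCAP check rather than just matching CPE names):

    # Toy version of the overnight check: does the advisory's list of
    # vulnerable platforms (CPE names) intersect our asset inventory?
    advisory = {
        "id": "IAVA-2009-A-0001",  # hypothetical advisory number
        "vulnerable_cpes": {
            "cpe:/o:microsoft:windows_server_2003",
            "cpe:/a:adobe:acrobat_reader:8.0",
        },
    }
    inventory = {
        "web01": {"cpe:/o:microsoft:windows_server_2003"},
        "ws042": {"cpe:/o:microsoft:windows_xp",
                  "cpe:/a:adobe:acrobat_reader:8.0"},
    }

    affected = {host for host, cpes in inventory.items()
                if cpes & advisory["vulnerable_cpes"]}
    print(advisory["id"], "affects:", sorted(affected))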

I get hardening guides pushed from the Department or Agency in SCAP form, then pull an audit on my IT assets and have the differences automagically entered into my workflow and reporting.

I become a ticket monkey.  Everything is in workflow.  I can be replaced with somebody less expensive and can now focus on finding the answer to infosec nirvana.

We provide a feed upstream to our Department, and the Department provides a feed to somebody (NCSD/US-CERT/OMB/Cybersecurity Coordinator) who now has the view across the entire Government.  Want to be bold?  Let Vivek K and the Sunlight Foundation at the data feeds and have a truly open and transparent “Unbreakable Government 2.1”.  Who needs FISMA report cards when our vulnerability data is on display?
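
The feed itself doesn’t need to be fancy; each level summarizes what it knows and passes it up.  Something like this (record structure invented for illustration):

    import json

    # Each agency's orchestrator emits a summary; the Department aggregates
    # them; whoever sits above the Department repeats the trick. The record
    # structure here is invented.
    agency_feeds = [
        {"agency": "Agency A", "assets": 12000, "open_vulns": 340},
        {"agency": "Agency B", "assets": 45000, "open_vulns": 1210},
    ]

    department_rollup = {
        "department": "Department X",
        "assets": sum(f["assets"] for f in agency_feeds),
        "open_vulns": sum(f["open_vulns"] for f in agency_feeds),
        "feeds": agency_feeds,  # keep the detail around for drill-down
    }
    print(json.dumps(department_rollup, indent=2))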

Keys to Making Federated Patch and Vulnerability Management Work

Security policy that requires SCAP-compatible vulnerability and patch management products.  Instead of parroting back 800-53, please give me a requirement in your security policy that every patch and vulnerability management tool that we buy MUST BE SCAP-CERTIFIED.  Yes, I know we won’t get it done right now, but if we get it in policy, then it will trickle down into product choices eventually.  This is compliance I can live with, boo-yeah!

Security architecture models (FEA anyone?) that show federated patch and vulnerability management deployments as part of their standard configuration.  OK, with the firewall pictures and zones of trust I understand what you’re saying; now give me patch and vulnerability management flows across all the zones so I can do the other 85% of my job.

Network traffic from the edges of the hierarchy to…somewhere.  OK, you just need network connectivity throughout the hierarchy to aggregate and update patch and vulnerability information, this is basic data flow stuff.  US-CERT in a future incarnation could be the top-level aggregator, maybe.  Right now I would be happy building aggregation up to the Department level because that’s the level at which we’re graded.

Understanding.  Hey, I can’t fix everything all the time–what I’m doing is using automation to make the job of fixing things easier through aggregation, correlation, status reporting, and dashboarding.  These are all concepts behind good IT management; why shouldn’t we apply them to security management also?  Yes, there will be times when I’m behind on something or another, but guess what: I’m behind today and you just don’t know it.  With near-real-time reporting, we need a culture shift away from trying to police each other up all the time toward understanding that sometimes nothing is really perfect.

Patch and vulnerability information is all-in.  Everything has to be reporting in, 100% across the board, or you don’t have anything–back to spreadsheet hell for you.  And simply put, why don’t you have everything in the patch management system already?  Come on, that’s not a good enough reason.

POA&Ms need to be more fluid.  Face it, with automated patch and vulnerability management, POA&Ms become more like trouble tickets.  And yes, that’s awesome: smaller, easily-satisfied POA&Ms are much easier to manage, provided that the administrative overhead for each of these is reduced to practically nothing… just like IT trouble tickets.

Regression testing and providing proof becomes easier because it’s all automated.  Once you fix something and it’s marked in the aggregator as completed, it gets slid into the queue for retesting, and the results become the evidence.
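
In workflow terms that’s a tiny state machine.  A sketch (states and field names invented; a real implementation would drive this off the aggregator’s database):

    from collections import deque

    # Tiny state machine for the fix -> retest -> evidence loop.
    retest_queue = deque()

    def mark_fixed(finding):
        finding["status"] = "fixed, pending retest"
        retest_queue.append(finding)

    def run_retests(still_vulnerable):
        """still_vulnerable(finding_id) -> bool, i.e. the rescan result."""
        while retest_queue:
            finding = retest_queue.popleft()
            if still_vulnerable(finding["id"]):
                finding["status"] = "reopened"  # the fix didn't take
            else:
                finding["status"] = "closed"
                finding["evidence"] = "retest passed"  # the scan is the proof

    finding = {"id": "VULN-123", "status": "open"}
    mark_fixed(finding)
    run_retests(lambda finding_id: False)  # pretend the rescan came back clean
    print(finding)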

Interfaces with existing FISMA management tools.  This one is tough.  But we have a very well-entrenched software base geared around artifact management, POA&M management, and Security Test and Evaluation results.  This class of software exists because none of the tools vendors really understand how the Government does security management, and I mean NONE of them.  There have to be some weird, unnatural data import/export acts going on here to make the orchestrator of technical data match up with the orchestrator of management data, and this is the part that scares me in a federated world.
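
For a taste of the unnatural acts, here’s the kind of mapping I mean.  Both record layouts are invented for illustration; the real pain is that every tool has its own:

    # Map a technical finding from the orchestrator into the shape a
    # FISMA-management tool wants for a POA&M entry. Both layouts are
    # invented.
    finding = {
        "id": "VULN-123",
        "host": "web01",
        "title": "Missing security patch",
        "detected": "2009-07-01",
    }

    poam_entry = {
        "weakness": finding["title"],
        "point_of_contact": "system owner",
        "resources_required": "staff time",
        "scheduled_completion": "2009-08-01",
        "milestones": ["Patch " + finding["host"] + ", then retest"],
        "source": "orchestrator finding " + finding["id"],
        "status": "ongoing",
    }
    print(poam_entry)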

SCAP spreads to IT management suites.  They already have a footprint out there on everything, and odds are we’re using them for patch and configuration management anyway.  If they don’t talk SCAP, push the vendor to get it working.

Where Life Gets Surreal

Then I woke up and realized that if I provide my Department CISO with near-real-time patch and vulnerability management information, I suddenly have become responsible for patch and vulnerability management instead of playing “kick it to the contractors” and hiding behind working groups.  It could be that if I get Federated Patch and Vulnerability Management off the ground, I’ve given my Department CISO the rope to hang me with.  =)

Somehow, somewhere, I’ve done most of what CAG was talking about and automated it.  I feel so… um… dirty.  Really, folks, I’m not a shill for anybody.



Posted in DISA, NIST, Rants, Technical | 12 Comments »

Comments on SCAP 2008

Posted September 24th, 2008 by

I just got back from the SCAP 2008 conference at NIST HQ, and this is a collection of my thoughts in a somewhat random order:

Presentation slides are available at the NVD website

I blogged about SCAP a year ago, and started pushing it in conversations with security managers that I came across.  Really, if you’re managing security of anything and you don’t know what SCAP is, you need to get smart on it really fast, if for no other reason than that you will be pitched it by vendors sporting new certifications.

Introduction to SCAP:  SCAP is a collection of XML schemas/standards that allow technical security information to be exchanged between tools.  It consists of the following standards:

  • Common Platform Enumeration (CPE): A standard to describe a specific hardware, OS, and software configuration.  Asset information, it’s fairly humdrum, but it makes the rest of SCAP possible–think target enumeration and you’re pretty close.
  • Common Vulnerabilities and Exposures (CVE): A definition of publicly-known vulnerabilities and weaknesses.  Should be familiar to most security researchers and patch monkeys.
  • Common Configuration Enumeration (CCE): Basically, like CVE but specific to misconfigurations.
  • Common Vulnerability Scoring System (CVSS): A standard for determining the characteristics and impact of security vulnerabilities.  Hmmm, sounds suspiciously like standardization of what is a high, medium, and low criticality vulnerability.  (See the worked scoring example just after this list.)
  • Open Vulnerability and Assessment Language (OVAL):  Actually, 3 schemas to describe the inventory of a computer, the configuration on that computer, and a report of what vulnerabilities were found on that computer.
  • Extensible Configuration Checklist Description Format (XCCDF): A data set that describes checks for vulnerabilities, benchmarks, or misconfigurations.  Sounds like the updates to your favorite vulnerability scanning tool because it is.
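
Since CVSS is the most formula-shaped of the bunch, here’s the v2 base-score arithmetic in code.  The metric weights come from the CVSS v2 specification; the sample vector at the bottom is just for demonstration:

    # CVSS v2 base score, per the equations in the CVSS v2 specification.
    ACCESS_VECTOR     = {"L": 0.395, "A": 0.646, "N": 1.0}
    ACCESS_COMPLEXITY = {"H": 0.35, "M": 0.61, "L": 0.71}
    AUTHENTICATION    = {"M": 0.45, "S": 0.56, "N": 0.704}
    CIA_IMPACT        = {"N": 0.0, "P": 0.275, "C": 0.660}

    def cvss2_base_score(av, ac, au, c, i, a):
        impact = 10.41 * (1 - (1 - CIA_IMPACT[c])
                            * (1 - CIA_IMPACT[i])
                            * (1 - CIA_IMPACT[a]))
        exploitability = (20 * ACCESS_VECTOR[av]
                             * ACCESS_COMPLEXITY[ac]
                             * AUTHENTICATION[au])
        f_impact = 0.0 if impact == 0 else 1.176
        return round((0.6 * impact + 0.4 * exploitability - 1.5) * f_impact, 1)

    # AV:N/AC:L/Au:N/C:C/I:C/A:C -- remotely exploitable, total compromise.
    print(cvss2_base_score("N", "L", "N", "C", "C", "C"))  # prints 10.0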

Hall of Standards inside NIST HQ photo by ME!!!

What’s the big deal with SCAP: SCAP allows data exchanges between tools.  So, for example, you can take a technical policy compliance tool, load up the official Government hardening policy in XCCDF for, say, Windows 2003, run a compliance scan, export the data in OVAL, and load the results into a final application that can help your CISO keep track of all the vulnerabilities.  Basically, imagine that you’re DoD and have 1.5 million desktops–how do you manage all of the technical information on those without having tools that can import and export from each other?
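
To make “data exchanges between tools” less abstract, here’s a minimal sketch that tallies rule results out of an XCCDF result file using nothing but the Python standard library.  It assumes XCCDF 1.1 (that’s the namespace below; adjust for other versions), and the file name is invented:

    import xml.etree.ElementTree as ET
    from collections import Counter

    # Tally pass/fail out of an XCCDF result file. The namespace is XCCDF
    # 1.1's; the file name is invented.
    XCCDF = "{http://checklists.nist.gov/xccdf/1.1}"

    tree = ET.parse("xccdf-results.xml")
    tally = Counter()
    for rule_result in tree.iter(XCCDF + "rule-result"):
        result = rule_result.find(XCCDF + "result")
        tally[result.text] += 1  # pass / fail / notchecked / ...

    print(dict(tally))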

And then there was the Federal Desktop Core Configuration (FDCC): OMB and Karen Evans handed SCAP its first trial-by-fire.  FDCC is a configuration standard that is to be rolled out to every Government desktop.  According to responses received by OMB from the departments in the executive branch (see, Karen, I WAS paying attention =) ), there are roughly 3.5 million desktops inside the Government.  The only way to manage these desktops is through automation, and SCAP is providing that.

He sings, he dances, that Tony Sager is a great guy: So he’s presented at Black Hat, now SCAP 2008 (.pdf caveat).  Basically, while the NSA has a great red-team (think pen-test) capability, they had a major change of heart and realized, like the rest of the security world (*cough*Ranum*cough*), that while attacking is fun, it isn’t very productive at defending your systems–there is much more work to be done for the defenders, and we need more clueful people doing that.

Vendors are jumping on the bandwagon with both feet: The amount of uptake from the vulnerability and policy compliance vendors is amazing.  I would give numbers of how many are certified, but I literally get a new announcement in my news reader every week or so.  For vendors, being certified means that you can sell your product to the Government; not being certified means that you get to sit on the bench watching everybody else have all the fun.  The GSA SAIR Smart-Buy Blanket Purchase Agreement sweetens the deal immensely by having your product easily purchasable in massive quantities by the Government.

Where are the rest of the standards: Yes, FDCC is great, but where are the rest of the hardening standards in cute importable XML files, ready to be snarfed into my SCAP-compliant tool?  Truth be told, this is one problem with SCAP right now because everybody has been focusing on FDCC and hasn’t had time yet to look at the other platforms.  Key word is “yet” because it’s happening real soon now, and it’s fairly trivial to convert the already-existing DISA STIGs or CIS Benchmarks into XCCDF.  In fact, Sun was blindsided by somebody who had made some SCAP schemas for their products and they had no idea that anybody was working on it–new content gets added practically daily because of the open-source nature of SCAP.

Changing Government role: This is going to be controversial.  With NVD/CVE, the government became the authoritative source for vulnerabilities.  So far that’s worked pretty well.  With the rest of SCAP, the Government changes roles to be a provider of content and configurations.  If NIST is smart, they’ll stay out of this because they prefer to be in the R&D business and not the operations side of things.  Look for DHS to pick up the role of being a definitions provider.  Government has to be careful here because they could in some instances be competing with companies that sell SCAP-like feed services.  Not a happy spot for either side of the fence.

More information security trickle-down effect: A repeated theme at SCAP 2008 is that the private sector is interested in what Big SCAP can do for them.  The vendors are using SCAP certification as a differentiator for the time being, but expect to see SCAP for security management standards like PCI-DSS, HIPAA, and SOX–to be honest here, though, most of the vendors in this space cut their teeth on those standards; it’s just a matter of legwork to be able to export in SCAP schemas.  Woot, we all win thanks to the magic that is the Government flexing its IT budget dollars!

OS and Applications vendors: these guys are feeling the squeeze of standardization.  On one hand, the smart vendors (Oracle, Microsoft, Sun, Cisco) have people already working with DISA/NSA to help produce the configuration guides, they just have to sit back and let somebody turn the guides into SCAP content.  Some of the applications vendors still haven’t figured out that their software is about to be made obsolete in the Government market because they don’t have the knowledge base to self-certify with FDCC and later OS standards.  With a 3-year lead time required for some of the desktop applications before a feature request (make my junk work with FDCC) makes it into a product release, there had better be some cluebat work going on in the application vendor community.  Adobe, I’m talking to you and Lifecycle ES–if you need help, just call me.

But how about system integrators: Well, for the time being, system integrators have almost a free ride–they just have to deal with FDCC.  There are some of them that have cool solutions built on the capabilities of SCAP, but for the most part I haven’t seen much movement except for people who do some R&D.  Unfortunately for system integrators, the Federal Acquisition Regulation now requires that anything you sell to the Government be configured IAW the NIST checklists program.  And just how do you think the NIST checklists program will be implemented?  I’ll take SCAP for $5Bazillion, Alex.  Smart system integrators will at least keep an eye on SCAP before it blindsides them 6 months from now.

Technical compliance tools are destined to be a commodity: For the longest time, the vulnerability assessment vendors made their reputation by having the best vulnerability signatures.  In order to get true compatibility across products, standardized SCAP feeds mean that the pure-play security tools will have fewer ways to differentiate themselves from all the other tools, and they fall into a commodity market centered on the accuracy of their checks with reduced false positives and negatives.  While it may seem like a joyride for the time being (hey, we just got our ticket to sell to the Gubmint by being SCAP-certified), that will soon turn into frustration as the business model changes and the margins get smaller.  Smart vendors will figure out ways to differentiate themselves and will survive; the others will not.

Which leads me to this: Why is it that SCAP only applies to security tools?  I mean, seriously, guys like BigFix and NetIQ have crossover from technical policy compliance to network management systems–CPE in particular.  What we need is a similar effort applied to network and data center tools.  And don’t point me at SNMP, I’m talking rich data.  =)  On a positive note, expect some of the security pure-play tools to be bought up and incorporated into enterprise suites if they aren’t already.

Side notes:

I love how the many deer (well over 9000 deer on the NIST campus) all have ear tags.  It brings up all sorts of scientific studies ideas.  But apparently the deer are on birth control shots or something….

Former Potomac Forum students:  Whattayaknow, I met some of our former students who are probably reading this right now because I pimped out my blog probably too aggressively.  =)  Hi Shawn, Marc, and Bob!

Old friends:  Wow, I found some of them, too.  Hi Jess, Walid, Chris, and a cast of thousands.

Deer on NIST Gaithersburg Campus photo by Chucka_NC.



Posted in DISA, FISMA, NIST, Technical, What Works | 2 Comments »

Government Can’t Turn on a Dime, News at 11

Posted February 27th, 2008 by

Are we done with the Federal Desktop Core Configuration yet? Are we compliant with OMB Memo 07-11?? Have we staved off dozens of script-kiddies armed with nmap and some ‘sploits they downloaded from teh Intarweb, all through hardening our desktops to the one true standard?

No? I didn’t think we would. Of course, neither did the CISOs and other security managers out there in the agencies. It’s too much too fast, and the government is too large to turn on a dime. Or even a quarter, for that matter. =)

Now get ready for a blamestorm at the end of the month. By that time, all the agencies are supposed to report on their status to OMB. It’s not going to be pretty, but it’s hardly unexpected.

So why haven’t we finished this yet? Inquiring minds want to know.

Well, it all goes back to the big question of “how many directions can today’s government CISO be pulled in?” Think about it: you’ve got IPv6, HSPD-12, all the PII guidance (Memo 06-16 et al), reducing Internet connections down to 50, aligning your IT systems with the Federal Enterprise Architecture, getting your Internet connections monitored by Einstein, and the usual administrative overhead. That’s too many major initiatives all at the same time, and a good way to be torn in too many directions at once. In government-speak, these are all what we call “unfunded mandates”, and one is bad enough to cripple your budget, much less a handful of them.

Where we are right now with FDCC is that the implementers are finding out which applications break, and we’re starting to impact operations–people not being able to get the job done. Yes, this is the desired effect; it puts the pressure on the OS vendors and the application vendors, and it’s a good thing, IMO–we won’t buy your software if it doesn’t support our security model, and we’ll take our $75B IT budget with us. Suddenly, it’s the gorilla of market pressure throwing its weight around, and the BSOFH inside me likes this.

Now don’t get me wrong, I’m a big believer in FDCC (for the Government now, with a payoff for the civilian world later), and I think it’s security-sound once it’s implemented, but in order for it to work, the following “infrastructure” needs to be in place:

  • An official image shared between agencies
  • Ability to buy a hardened FDCC OS as part of purchasing the hardware
  • Microsoft rolling FDCC into its standard COTS build that it offers to the rest of the world
  • Applications that are certified to run on the “one standard to rule them all” and on a list so I can pick one and know that it works
  • Security people who understand GPOs and that even though it’s a desktop configuration standard, it affects servers, too
  • An automated tool to validate technical policy compliance (there, I said it, and in this space it actually makes sense for a change); see the sketch just after this list
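
On that last point, the core of such a tool is nothing exotic: diff what the baseline says against what the box says.  A toy sketch with invented setting names and values (a real FDCC check would read the observed values out of the registry or the resultant set of policy):

    # Toy compliance check: diff a baseline against observed settings.
    # Setting names and values are invented for illustration.
    baseline = {
        "password_min_length": 12,
        "screensaver_lockout_seconds": 900,
        "guest_account_enabled": False,
    }
    observed = {
        "password_min_length": 8,
        "screensaver_lockout_seconds": 900,
        "guest_account_enabled": True,
    }

    for setting, required in baseline.items():
        actual = observed.get(setting)
        status = "PASS" if actual == required else "FAIL"
        print(status, setting, "required:", required, "actual:", actual)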

Until you have these things, what OMB is asking for means the agencies get squeezed between a vendor who can’t ship a default-hardened OS, lazy application vendors who won’t/can’t fix their software, and the 5+ levels of oversight that are watching over the shoulder of the average ISSO at the implementation level. In short, we’re throwing the implementers under the bus and making them do our dirty work because at the national level we have failed to build the right kind of influence over the vendors.

Gosh, it sounds like this would go so much better if we phased in FDCC along with the next tech refresh of our desktops, doesn’t it? That’s how the “sane world” would tackle something like this. Not a sermon, just a thought. =)




Posted in DISA, FISMA, NIST, Rants, Technical, What Doesn't Work, What Works | 1 Comment »

SCAP for Dummies

Posted October 2nd, 2007 by

SCAP is becoming one of my favorite government acronyms: Security Content Automation Protocol. OK, what does that mean in English? Well, it’s the glue that holds together a whole slew of XML nummie data goodnesses such as the National Vulnerability Database and a standard for asset inventory management.
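
A toy illustration of the “glue” (all data invented): the scanner and the vulnerability database have never heard of each other, but because both key their records by CVE ID, joining them is trivial:

    # Why shared identifiers matter: two tools that know nothing about each
    # other can still be joined on a common key. All data here is invented.
    scanner_findings = [
        {"host": "10.1.2.3", "cve": "CVE-2007-1234"},
        {"host": "10.1.2.7", "cve": "CVE-2007-5678"},
    ]
    nvd_scores = {
        "CVE-2007-1234": 9.3,  # hypothetical CVSS base scores
        "CVE-2007-5678": 6.8,
    }

    for finding in scanner_findings:
        score = nvd_scores.get(finding["cve"], "not in NVD")
        print(finding["host"], finding["cve"], "CVSS:", score)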

I was pretty skeptical of SCAP (and the Federal Desktop Core Configuration–FDCC) when it was first announced–like wow, we have yet another obscure memo from Karen Evans that we have to address.

I had a change of heart after I heard the magical phrase “We know it’s going to break things, and we don’t care”. That made me take notice. I thought about it all weekend–at first I was getting really riled up over such an obviously irresponsible security hard line. But then I found the magic in what they were doing and learned to stop fearing SCAP and embrace the love that it brings. I’ll tell you why.

Imagine you’re Microsoft. You can’t harden down your OS because you have all the applications vendors (including the A-V/Malware guys) raising the big anti-trust flag. And they’re right to do so. Maybe at one point you could have made your software “secure by default”, but that was 20 years ago, and if you had, you would have been last to market.

But that doesn’t work to plug the holes in the OS. In my opinion, it’s the lesson of Vista: if you make it stronger, it breaks applications. We all know that, so a design choice is to either leave the holes or give you a nag-screen or a combination of the two. Speaking strictly from the security side of things, that–along with continuous OS patching–is just “polishing a turd”. Yeah, you can make it all shiny on the outside, but deep down inside it’s still nothing pretty.

But now put yourself in the Government’s shoes: you buy an OS and then spend how much time and effort on hardening it? That’s money you could spend elsewhere. The people at the top of the Government understand this; that’s why they’re always looking at ways to simplify.

OMB and others have been pushing SCAP pretty hard. So far, most of the focus has been on the databases that exist (CVE, NVD) and the desktop configuration (FDCC).

Think about a pre-hardened Government OS. What it does is break applications–applications that are poorly designed. If your application is poorly designed and doesn’t work with the FDCC, then you’re squeezed out of the public sector. The true capitalists would say “let the market decide who the winners are” or something like that. Realistically, if you want a slice of the federal IT budget, then you need to make your software compatible with their hardening standard. They make it easy to do, with tools to test your software and a certification program.

The part that I like about SCAP is that it’s the Government doing what the OS vendors can’t–put pressure on the applications guys. As usual, this should have a trickle-down effect for the private sector, with the beginning being free hardening guides and the vulnerability databases and the end being a comprehensive information security management toolset.

Check out the presentations from the SCAP conference last month. The Tim Grance presentation (.ppt) alone is worth the price of admission.

Right now SCAP is at the national/CISO level. Give it 6 months and it will be at the forefront of what people are doing.




Posted in DISA, FISMA, NIST, What Works | 5 Comments »

My Inbox this Afternoon: Best Practices Checklist

Posted June 21st, 2007 by

Ah, DISA, gotta love it. They give me periodic spam–not as bad as it sounds. =)

This time, I got one that immediately perked my interest:

“DISA FSO is releasing the Best Security Practice Checklist. This checklist was developed to assist during the procurement process for managed services acquisitions.”

What’s interesting to me is that it’s mostly aimed at web application service providers. I don’t think most of it applies to what my guys do, or maybe we’re doing something with a different scope.




Posted in DISA, Outsourcing | No Comments »


