Surprise Report: Not Enough Security Staff

Posted July 22nd, 2009

Some days I feel like people are reading this blog and getting ideas that they turn around and steal.  Then I take my pills and my semi-narcissistic feelings go away.  =)

So anyway, B|A|H threw me for a loop this afternoon.  They released a report on the cybersecurity workforce.  You can check out the article on The Register or you can go get the report from here.  Surprise, we don’t have anywhere near enough security people to go around.  I’ve been saying this for years, I think B|A|H is stealing my ideas by using Van Eck phreaking on my brain while I sleep.

 Some revelations from the executive summary:

  • The pipeline of potential new talent is inadequate.  In other words, demand is growing and the number of people we’re training is not growing to meet it.
  • Fragmented governance and uncoordinated leadership hinder the ability to meet federal cybersecurity workforce needs.  So far, nobody has been able to articulate how we build an adequate supply of security folks to keep up with demand, and most of our efforts have been at the execution level.
  • Complicated processes and rules hamper recruiting and retention efforts.  It takes maybe 6 months to hire a government employee, which is entirely unsatisfactory.  I was cleared on my current project for 3 years, took a 9-month break, and then it took me 6 months to get cleared again.
  • There is a disconnect between front-line hiring managers and government’s HR specialists.  Since the HR folks don’t know what the real job description is, hiring information security people is akin to buzzword bingo.

These are all the same problems the private sector deals with, only in true Government stylie, we have it on a larger scale.

 

He’s Part of the Workforce photo by pfig.

Now for the things that no self-respecting contractor will admit (hmm, what does this say about me?  I’m not sure yet)….

If you do not have an adequate supply of workers in the industry, outsourcing cybersecurity tasks to contractors will not work.  It works something like this:

  • High Demand = High Bill Rate
  • High Bill Rate = More Contractor Interest
  • More Contractor Interest + High Bill Rate + Low Supply = High Rate of Charlatans

Contractors do not have the labor pool to tap into to satisfy their contracts.  If you want to put on your cynic hat (all the Guerilla-CISO staff have theirs permanently attached with wood screws), you could say that the B|A|H report was trying to get the Government to pump more money into workforce development so that they could then hire those people and bill them back to the Government.  It’s a twisted world, folks.

Current contractor labor pools have some of the skills necessary for cybersecurity but not all.  More info in future blog posts, but I think a simple way to summarize it is to say that our current workforce is “tooled” around IT security compliance and that we are lacking in large-scale attack and defense skills.

Not only do we need more people in the security industry, but we need more security people in Government.  There is a set of tasks called “inherently governmental functions” that cannot be delegated to contractors.  Even if you only increase the contractor headcount, you still have to increase the government employee headcount in order to manage the contractors.



Posted in Outsourcing, Public Policy | 9 Comments »

Your Security “Requirements” are Teh Suxxorz

Posted July 1st, 2009

Face it, your security requirements suck.  I’ll tell you why.  You write down controls verbatim from your catalog of controls (800-53, SOX, PCI, 27001, etc.), put them into a contract, and then wonder why, when it comes time for security testing, we just aren’t talking the same language.  Even worse, you put in the cr*ptastic “Contractor shall be compliant with FISMA and all applicable NIST standards”.  Yes, this happens more often than I care to count, and I’ve seen it from both sides.

The problem with quoting back the “requirements” from a catalog of controls is that they’re not really requirements, they’re control objectives–abstract representations of what you need in order to protect your data, IT system, or business.  It’s a bit like brain surgery using a hammer and chisel–yes, it might work out for you, but I don’t really feel comfortable doing it or being on the receiving end.

And this is my beef with the way we manage security controls nowadays.  They’re not requirements; functionally, they’re a high-level needs statement or even a security concept of operations.  Security controls need to be tailored into real requirements that are buildable, testable, measurable, and achievable.

Requirements photo by yummiec00kies.  There’s a social commentary in there about “Single, slim, and pleasant looking” but even I’m afraid to touch that one. =)

Did you say “Wrecks and Female Pigs”?  In the contracting world, we have 2 vehicles that we use primarily for security controls: Statements of Work (SOW) and Engineering Requirements.

  • Statements of Work follow along the lines of activities performed by people.  For instance, “contractor shall perform monthly 100% vulnerability scanning of the $FooProject.”
  • Engineering Requirements are exactly what you want to have built.  For instance, “Prior to displaying the login screen, the application shall display the approved Generic Government Agency warning banner as shown below…”

Let’s have a quick exercise, shall we?

What 800-53 says: The information system produces audit records that contain sufficient information to, at a minimum, establish what type of event occurred, when (date and time) the event occurred, where the event occurred, the source of the event, the outcome (success or failure) of the event, and the identity of any user/subject associated with the event.

How It gets translated into a contract: Since it’s more along the lines of a security functional requirement (i.e., it’s specific functionality, not a task we want people to do), we break it out into multiple requirements (a quick sketch of what this looks like in code follows the list):

The $BarApplication shall produce audit records with the following content:

  • Event description such as the following:
    • Access the $Baz subsystem
    • Mounting external hard drive
    • Connecting to database
    • User entered administrator mode
  • Date/time stamp in ‘YYYY-MM-DD HH:MM:SS’ format;
  • Hostname where the event occurred;
  • Process name or program that generated the event;
  • Outcome of the event as one of the following: success, warn, or fail; and
  • Username and UserID that generated the event.
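
Just to drive home how testable this gets once you write it as a real requirement, here’s a quick-and-dirty Python sketch of an audit record that satisfies the list above.  The dict keys and the audit_record() helper are my own invention for illustration, not anything out of 800-53 or a real $BarApplication:

    import getpass
    import json
    import os
    import socket
    import sys
    from datetime import datetime

    def audit_record(event_description, outcome):
        """Build one audit record with the fields called out in the requirement above."""
        if outcome not in ("success", "warn", "fail"):
            raise ValueError("outcome must be success, warn, or fail")
        return {
            "event": event_description,                 # e.g. "User entered administrator mode"
            "timestamp": datetime.now().strftime("%Y-%m-%d %H:%M:%S"),
            "hostname": socket.gethostname(),           # where the event occurred
            "process": os.path.basename(sys.argv[0]),   # program that generated the event
            "outcome": outcome,
            "username": getpass.getuser(),
            "userid": os.getuid(),                      # POSIX-only; swap in your platform's user ID
        }

    print(json.dumps(audit_record("Connecting to database", "success")))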

For a COTS product (e.g., Windows Server 2003, Cisco IOS), when it comes to logging, I get what I get, and this means I don’t have a requirement for logging unless I’m designing the engineering requirements for Windows.

What 800-53 says: The organization configures the information system to provide only essential capabilities and specifically prohibits and/or restricts the use of the following functions, ports, protocols, and/or services: [Assignment: organization-defined list of prohibited and/or restricted functions, ports, protocols, and/or services].

How It gets translated into a contract: Since it’s more along the lines of a security functional requirement, we break it out into multiple requirements (again, a quick sketch follows the list):

The $Barsystem shall have the software firewall turned on and only the following traffic shall be allowed:

  • TCP port 443 to the command server
  • UDP port 123 to the time server at this address
  • etc…..
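
The beauty of writing the requirement this way is that the test practically writes itself.  A minimal sketch, assuming just the two example rules above; the server names are placeholders I made up:

    from collections import namedtuple

    Rule = namedtuple("Rule", ["proto", "port", "destination"])

    # Allowed outbound traffic, straight from the requirement above; the hostnames
    # are placeholders for whatever your real command and time servers are.
    ALLOWED = [
        Rule("tcp", 443, "command-server.example.gov"),
        Rule("udp", 123, "time-server.example.gov"),
    ]

    def is_allowed(proto, port, destination):
        """Default deny: traffic is permitted only if it matches an explicit rule."""
        return any(r == Rule(proto, port, destination) for r in ALLOWED)

    assert is_allowed("tcp", 443, "command-server.example.gov")
    assert not is_allowed("tcp", 22, "somewhere-else.example.com")  # not on the list, so denied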

If we drop the system into a pre-existing infrastructure, we don’t need firewall rules per se as part of the requirements; what we do need is a SOW along the following lines:

The system shall use our approved process for firewall change control, see a copy here…

So what’s missing, and how do we fix the sorry state of requirements?

This is the interesting part, and right now I’m not sure if we can, given the state of the industry and the infosec labor shortage:  we need security engineers who understand engineering requirements and project management in addition to vulnerability management.

Don’t abandon hope yet, let’s look at some things that can help….

Security requirements are a “best effort” proposition.  By this, I mean that we have our requirements and they don’t fit in all cases, so we throw them out there, and if you can’t meet a requirement, we waive it (live with it, hope for the best) or apply a compensating control (shield it from bad things happening).  This is unnerving because what we end up doing is arguing all the time over whether the requirements that were written need to be done or not.  This drives the engineers nuts.

It’s a significant amount of work to translate control objectives into requirements.  The easiest, fastest way to fix the “controls view” of a project is to scope out things that are provided by infrastructure or by policies and procedures at the enterprise level.  Hmmm, sounds like explicitly stating what our shared/common controls are.

You can manage controls by exclusion or inclusion (see the sketch after this list):

  • Inclusion:  We have a “default null” for controls and we will explicitly say in the requirements what controls you do need.  This works for small projects like standing up a pair of webservers in an existing infrastructure.
  • Exclusion:  We give you the entire catalog of controls and then tell you which ones don’t apply to you.  This works best with large projects such as the outsourcing of an entire IT department.
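
To make the inclusion/exclusion distinction concrete, here’s a throwaway sketch.  The control designators are real 800-53 designators, but the scoping decisions are strictly an example:

    # A handful of controls standing in for the full 800-53 catalog.
    CATALOG = {"AC-2", "AU-2", "CM-7", "CP-9", "IR-4", "PE-3", "RA-5", "SC-7"}

    def scope_by_inclusion(required):
        """Small project: start from nothing (default null) and add only what's named."""
        return set(required) & CATALOG

    def scope_by_exclusion(not_applicable):
        """Large project: start from the whole catalog and carve out what doesn't apply."""
        return CATALOG - set(not_applicable)

    # A pair of webservers in an existing infrastructure:
    print(sorted(scope_by_inclusion({"AC-2", "AU-2", "SC-7"})))
    # Outsourcing a whole IT department, where physical security stays with the enterprise:
    print(sorted(scope_by_exclusion({"PE-3"})))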

We need a reference implementation per technology.  Let’s face it, how many times have I taken the 800-53 controls and broken them down into controls relevant for a desktop OS?  At least 5 in the last 3 years.  The way you really need to do this is that you have a hardening guide and that is the authoritative set of requirements for that technology.  It makes life simple.  Not that I’m saying deviate from doctrine and don’t do 800-53 controls and 800-53A test procedures, but that’s the point of having a hardening guide–it’s really just a set of tailored controls specific to a certain technology type.  The work has been done for you, quit trying to re-engineer the wheel.
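
And since a hardening guide is really just a set of tailored controls expressed as settings, checking a box against one is dead simple.  The baseline values below are hypothetical, not lifted from any real hardening guide:

    # Hypothetical hardening baseline for a desktop OS: setting -> required value.
    BASELINE = {
        "password_min_length": 12,
        "screen_lock_timeout_minutes": 15,
        "guest_account_enabled": False,
    }

    def check_against_baseline(actual_settings):
        """Return the settings that deviate from the hardening guide."""
        findings = {}
        for setting, required in BASELINE.items():
            actual = actual_settings.get(setting)
            if actual != required:
                findings[setting] = {"required": required, "actual": actual}
        return findings

    print(check_against_baseline({
        "password_min_length": 8,               # too short: this becomes a finding
        "screen_lock_timeout_minutes": 15,
        "guest_account_enabled": False,
    }))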

Use a Joint Responsibilities Matrix.  Basically this breaks down the catalog of controls into the following columns (a quick sketch of building one follows the list):

  • Control Designator
  • Control Title
  • Provided by the Government/Infrastructure/Common Control
  • Provided by the Contractor/Project Team/Engineer
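
A Joint Responsibilities Matrix doesn’t need anything fancier than a spreadsheet.  Here’s a sketch that spits one out as CSV; the control designators and titles are from 800-53, but the Government/contractor split shown is purely an example:

    import csv
    import sys

    HEADER = ["Control Designator", "Control Title",
              "Provided by Government/Infrastructure/Common Control",
              "Provided by Contractor/Project Team/Engineer"]

    # Example rows only; your own matrix covers the whole catalog.
    ROWS = [
        ("AC-2", "Account Management",      "",  "X"),
        ("AU-2", "Auditable Events",        "",  "X"),
        ("PE-3", "Physical Access Control", "X", ""),
        ("RA-5", "Vulnerability Scanning",  "X", ""),
    ]

    writer = csv.writer(sys.stdout)
    writer.writerow(HEADER)
    writer.writerows(ROWS)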


Posted in BSOFH, Outsourcing, Technical | 3 Comments »

Wanted: Some SCAP Wranglers

Posted May 18th, 2009

So I was doing my usual “Beltway Bandit Perusal of Opportunities for Filthy Lucre,” also known as diving into FedBizOpps, and I found this gem.  Basically what this means is that sometime this summer, NIST is going to put out an RFP for contractors to further develop SCAP using ARRA funds.

Keep in mind that this isn’t the official list of what NIST wants done under this contract, but it’s interesting to look at from the angle of where SCAP will go over the next couple of years:

  1. Evolution of the SCAP protocol and specifications thereof
  2. Feasibility studies, development, documenting, prototyping, and road-mapping of SCAP expansions (e.g., remediation capability) and analog protocols (e.g., Network Event Content Automation Protocol)
  3. Implementation and maintenance support for the Security Automation Content Validation Program
  4. Maintenance support for the SCAP Product Validation Program
  5. Pilot, beta, and production support for SCAP and security automation use-cases
  6. Content development, modification, and testing
  7. Infrastructure and reference implementation development in JAVA, C++, and C programming languages
  8. Data trust models and data provenance solutions.

So how do you play?  Well, the first thing is that you respond to the notice with a capabilities statement saying “yes, we have experience in doing what you want”–there is a list of specifics in the original notice.  Then sign up for FedBizOpps and follow the announcement so you can get changes and the RFP when it comes out.



Posted in NIST, Outsourcing | 5 Comments »

Analyzing Fortify’s Plan to “Fix” the Government’s Security Problem

Posted April 1st, 2009

So I like reading about what people think about security and the Government.  I know, you’re all surprised, so cue shock and awe amongst my reader population.

Anyway, this week it’s Fortify and a well-placed article in NextGov.  You remember Fortify; they’re the guys with the cool FUD movie about how code scanning is going to save the world.  And oh yeah, there was this gem from SC Magazine: “Fortify’s Rachwald agrees that FISMA isn’t going anywhere, especially with the support of the paper shufflers. ‘It’s been great for people who know how to fill out forms. Why would they want it to go away?’”  OK, so far my opinion has been partially tainted–somehow I think I’m supposed to take something here personally, but I’m not sure exactly what.

Fortify has been trying to step up to the Government feed trough over the past year or so.  In a rare moment of being touchy-feely intuitive, I get the feeling from their marketing that Fortify is a bunch of Silicon Valley technologists who think they know what’s best for DC–digital carpetbagging.  Nothing new; all y’all have been doing this for as long as I’ve been working with the Government.

Now don’t get me wrong, I think Fortify makes some good products.  I think that universal adoption of code scanning, while not as foolproof as advertised, is a good thing.  I also think that software vendors should use scanning tools as part of their testing and QA.

Fortified cité of Carcassonne photo by http2007.

Now for a couple basic points that I want to get across:

  • Security is not a differentiator between competing products unless it’s the classified world. People buy IT products based on features, not security.
  • The IT industry is a broken market because there is no incentive to sell secure code.
  • In fact, software vendors are often rewarded market-wise, because if you arrive first to market with the largest market penetration, you become the de facto standard.
  • The vendors are abstracted from the problems faced by their customers thanks to the terms of most EULAs–they don’t really have to fix security problems since the software is sold with no guarantees.
  • The Government is dependent upon the private sector to provide it with secure software.
  • It is a conflict of interest for the vendors to accurately represent their flaws unless the Government is going to pay to have them fixed.
  • It’s been proposed numerous times that the Government use its “huge” IT budget to require vendors to sell secure products.
  • How do you determine that a vendor is shipping a secure product?

Or more to the point, how do I as a software vendor reasonably demonstrate that I have provided a secure product to the Government without making the economics infeasible for smaller vendors, creating an industry of certifiers ala PCI-DSS and SOX, or dramatically lengthening my development/procurement schedules?  Think of the problems with Common Criteria, because that’s our previous attempt.

We run into this problem all the time in Government IT security, but it’s mostly at the system integrator level.  It’s highly problematic to make contract requirements that are objective, demonstrable, and testable yet still take into account threats and vulnerabilities that do not exist today.

I’ve spent the past month writing a security requirements document for integrated special-purpose devices sold to the Government.  Part of this exercise was the realization that I can require that the vendor perform vulnerability scanning, but it becomes extremely difficult to build any amount of common sense into the requirements when it comes to deciding what to fix.  “That depends” keeps coming back to bite me in the buttocks time and time again.  At this point, I usually tell my boss how I hate security folks, self included, because of their indecisiveness.

The end result is that I can specify a process (Common Criteria for software/hardware, Certification and Accreditation for integration projects) and an outcome (certification, product acceptance, “go live” authorization), leave the decision-making authority with the Government, and put it in the hands of contracts officers and subject-matter experts who know how to manage security.  Problems with this technique:

  • I can’t find enough contracts officers who are security experts.
  • As a contractor, how do I account for the costs I’m going to incur since it’s apparently “at the whim of the Government”?
  • I have to apply this “across the board” to all my suppliers due to procurement law.  This might not be possible right now for some kinds of outsourced development.
  • We haven’t really solved the problem of defining what constitutes a secure product.
  • We’ve just deferred the problem from a strategic solution to a tactical process depending on a handful of clueful people.

Honestly, though, I think that’s as good as we’re going to get.  Ours is not a perfect world.

And as for Fortify?  Guys, quit trying to insult the people who will ultimately recommend your product.  It’s bad mojo, especially in a town where the toes you step on today may be attached to the butt you kiss tomorrow.  =)



Posted in Outsourcing, Technical, What Doesn't Work, What Works | 2 Comments »

When the Feds Come Calling

Posted October 21st, 2008

I’ve seen the scenario about a dozen times in the last 2 months–contractors and service providers of all sorts responding to the Government’s security requirements in the middle of a contract.  It’s almost reached the stage where I have it programmed as a “battle drill” ala the infantryman’s Battle Drill 1A, and I’m here to share the secret of negotiating these things.

Let’s see, without naming names, here’s where I’ve seen this come up:

  • Non-Government Organizations that assist the Government with para-Government services to the citizens
  • Companies doing research and development funded by the Government–health care and military
  • Universities who do joint research with the Government
  • Anybody who runs something that the Government has designated as “critical infrastructure”
  • State and local governments who use Federal Government data for their social programs (unemployment systems, food stamps, and so on) and homeland security-esque activities (law enforcement, disaster response)
  • Health Care Providers who service Government insurance plans

For the purposes of this blog post, I’ll refer to all of these groups as contractors or service providers.  Yes, I’m mixing analogies, making huge generalizations, and I’m not precise at all.  However, these groups should all have the same goals and the approach is the same, so bear with me while I lump them all together.

Really, guys, you need to understand both sides of the story because this is a cause for negotiation.  I’ll explain why in a minute.

On the Government side:  Well, we have some people we share data with.  It’s not a lot, and it’s sanitized so the value of it is minimal except for the Washington Post Front Page Metric.  Even so, the data is PII that we’ve taken an anonymizer to so that it’s just statistical data that doesn’t directly identify anybody.  We’ve got a pretty good handle on our own IT systems over the past 2 years, so our CISO and IG want us to focus on data that goes outside of our boundaries.  Now I don’t expect/want to “own” the contractor’s IT systems because they provide us a service, not an IT system.  My core problem is that I’m trying to take an existing contract and add security requirements retroactively to it and I’m not sure exactly how to do that.

Our Goals:

  • Accomplishing the goals of the program that we provided data to support
  • Protection of the data outside of our boundaries
  • Proving due-diligence to our 5 layers of oversight that we are doing the best we can to protect the data
  • Translating what we need into something the contractor understands
  • Being able to provide for the security of Government-owned data at little to no additional cost to the program

On the contractor/service provider side:  We took some data from the Government and now they’re coming out of the blue saying that we need to be FISMA-compliant.  Now I don’t want to sound whiney, but this FISMA thing is a huge undertaking and I’ve heard that for a small business such as ourselves, it can cripple us financially.  While I still want to help the Government add security to our project, I need to at least break even on the security support.  Our core problem is to keep security from impacting our project’s profitability.

Our Goals:

  • Accomplishing the goals of the program that we were provided data to support
  • Protection of the data given to us to keep the Government happy and continuing to fund us (the spice must flow!)
  • Giving something to the Government so that they can demonstrate due-diligence to their auditors and IG
  • Translating what we do into something the Government understands
  • Keeping the cost of security to an absolute minimum or at least funded for what we do add because it wasn’t scoped into the SOW

Hmm, looks like these goals are very much in alignment with each other.  About the only thing we need to figure out is scope and cost, which sounds very much like a negotiation.

Hardcore Negotiation Skills photo by shinosan.

Little-known facts that might help in our scenario here:

  • Section 2.4 of SP 800-53 discusses the use of compensating controls for contractor and service-provider systems.
  • One of the concepts in security and the Government is that agencies are to provide “adequate security” for their information and information systems.  Have a look at FISMA and OMB Circular A-130.
  • Repeat after me:  “The endstate is to provide a level of protection for the data equivalent or superior to what the Government would provide for that data.”
  • Appendix G in SP 800-53 has a traceability matrix through different standards that can serve as a “Rosetta Stone” for understanding each other.  Note to NIST:  let’s throw in PCI-DSS, Sarbanes-Oxley,  and change ISO 17799 to 27001.

So what’s a security geek to do?  Well, this, dear readers, is Rybolov’s 5-fold path to Government/contractor nirvana:

  1. Contractor and Government have a kickoff session to meet each other and build rapport, starting from common ground such as how you both have similar goals.  The problem really is one of managing each other’s expectations.
  2. Both Government and Contractor perform internal risk assessment to determine what kind of outcome they want to negotiate.
  3. Contractor and Government meet a week later to negotiate on security.
  4. Contractor provides documentation on what security controls they have in place.  This might be as minimal as a contract with the guard force company at their major sites, or it might be just employee background checks and
  5. Contractor and Government negotiate for a 6-month plan-of-action.  For most organizations considering ISO 27001, this is a good time to make a promise to get it done.  For smaller organizations or data, we may not even

Assumptions and dependencies:

  • The data we’re talking about is low-criticality or even moderate-criticality.
  • This isn’t an outsourced IT system that could be considered government-owned, contractor-operated (GO-CO)


Posted in FISMA, Outsourcing | 1 Comment »

Imagine that, System Integrators Doing Security Jointly with DoD

Posted September 11th, 2008

First, some links:

Synopsis: DoD wants to know how its system integrators protect the “Controlled Unclassified Information” that they give them.  Hmm, sounds like the fun posts I’ve done about NISPOM, SBU and my data types as a managed service provider.

This RFI is interesting to me because basically what the Government is doing is collecting “best practices” on how contractors are protecting non-classified data and then they’ll see what is reasonable.

Faustian Contract photo by skinny bunny.

However, looking at the problem, I don’t see this as much of a safeguards issue as I do a contracts issue.  Contractors want to do the right thing; it’s just that they can’t decide which of these things security is:

  • A service that they should include as part of the work breakdown structure in proposals.  This is good, but can be a problem if you want to keep the solution cheap and drop the security services from the project because the RFP/SOW doesn’t specify what exactly the Government wants by way of security.
  • A cost of doing business that they should reduce as much as possible.  For system integrators, this is key:  perform scope management to keep the Government from bleeding you dry with stupid security managers who don’t understand compensating controls.  Problem with this approach is that the Government won’t get all of what they need because the paranoia level is set by the contractor who wants to save money.

Well, the answer is that security is a little bit of both, but most of all it’s a customer care issue.  The Government wants security, and you want to give it to them in the flavor that they want, but you’re still not a dotorg–you want to get compensated for what you do provide and still make a profit of some sort.

Guess what?  It takes cooperation between the Government and its contractors.  This “Contractor must be compliant with FISMA and NIST Guidelines” paragraph just doesn’t cut it anymore, and what DoD is doing is to research how its contractors are doing their security piece.  Pretty good idea once you think about it.

Now I’m not the sharpest bear in the forest, but it would occur to me that we need this to happen in the civilian agencies, too.  Odds are they’ll just straphang on the DoD efforts. =)



Posted in Outsourcing, Risk Management | No Comments »
