Imagine that, System Integrators Doing Security Jointly with DoD

Posted September 11th, 2008 by

First, some links:

Synopsis: DoD wants to know how its system integrators protect the “Controlled Unclassified Information” that it gives them.  Hmm, sounds like the fun posts I’ve done about NISPOM, SBU, and my data types as a managed service provider.

This RFI is interesting to me because what the Government is basically doing is collecting “best practices” on how contractors protect non-classified data, and then it will see what is reasonable.

Faustian Contract photo by skinny bunny.

However, looking at the problem, I don’t see this as much of a safeguards issue as I do a contracts issue.  Contractors want to do the right thing; they just can’t decide which of these things security is:

  • A service that they should include as part of the work breakdown structure in proposals.  This is good, but it can be a problem if you want to keep the solution cheap and drop the security services from the project because the RFP/SOW doesn’t specify exactly what the Government wants by way of security.
  • A cost of doing business that they should reduce as much as possible.  For system integrators, this is key:  perform scope management to keep the Government from bleeding you dry with stupid security managers who don’t understand compensating controls.  The problem with this approach is that the Government won’t get all of what they need because the paranoia level is set by the contractor, who wants to save money.

Well, the answer is that security is a little bit of both, but most of all it’s a customer care issue.  The Government wants security, and you want to give it to them in the flavor that they want, but you’re still not a dotorg–you want to get compensated for what you do provide and still make a profit of some sort.

Guess what?  It takes cooperation between the Government and its contractors.  This “Contractor must be compliant with FISMA and NIST Guidelines” paragraph just doesn’t cut it anymore, and what DoD is doing is researching how its contractors are doing their security piece.  Pretty good idea once you think about it.

Now I’m not the sharpest bear in the forest, but it would occur to me that we need this to happen in the civilian agencies, too.  Odds are they’ll just straphang on the DoD efforts. =)



Posted in Outsourcing, Risk Management | No Comments »

HR 5983–DHS Now Responsible for Contractor Security

Posted May 12th, 2008 by

I’ve said it a million times before:  I don’t care if you switch to $FooFramework; as long as you have the same people executing it with the same skill set, the results will be the same.  Last week’s example, and the near-term’s, is a new bill that replicates the tenets of FISMA and the NIST framework built on it.

Last week, Representative Langevin introduced HR 5983, the “Homeland Security Network Defense and Accountability Act of 2008”.  Some press on the bill:

Now the big question for me on this bill (and really, any proposed law) is this:  How does this provide anything above and beyond what is already required by FISMA, OMB policies, and NIST guidelines?  My short analysis:  Not much, and Rep Langevin is just “stirring the pot” with the big spoon of politics.

HR 5983 requires the following:

  • Re-establishes the role and staffing requirements for the CIO, including network monitoring
  • Requires testing of the DHS networks using “attack-based” protocols
  • Requires IG audits and reporting
  • Adds responsibility for contractor systems

Again, nothing new here that isn’t required already.  The only benefit to this bill that I see is that if it becomes law, the Executive Branch has to request the funding in its budget request and Congress has to (maybe) fund it.  It isn’t that DHS doesn’t have the in-house expertise–they own US-CERT.  It’s not that they have a lack of smart people–they own the Security Line of Business.  It’s that there are only so many hours in the day to get things done, and DHS has had lots of work since its creation in 2002.

A little bit of peeking behind the security kimono at DHS is in order.  DHS consists of subagencies, known as Operational Elements, such as TSA, ICE, CBP, etc.  The heads of these agencies are peers to the DHS CIO and have their own CIO and CISO, even though that’s not what they’re called.  See, the OEs do not have to listen to the DHS CIO, and that’s a huge problem.  Last year, DHS made the DHS CIO the budget approver for the OEs’ IT budgets, which is a step forward, but there is still much room for improvement.  That’s something that Congress can fix.

Now it just isn’t a “Government IT Security News Day” without a comment from Alan Paller of SANS fame…

“One story is missing from this issue because the press hasn’t picked it up yet. Under Chairman Langevin of Rhode Island, the US House of Representatives Subcommittee on Emerging Threats and Cybersecurity just approved a new bill that changes how security will be measured, at least at the Department of Homeland Security. This is the beginning of the end of the huge waste under FISMA and the start of an era of continuous monitoring and automation. Long overdue. Look for news stories over the coming days.
Alan”

Like I say sometimes, I’m a bear of little brain and a recovering infantryman, but why is the answer to a law to make another law saying exactly the same thing?  All I have to say is this:  you’re not on Slashdot, you actually have to read the bill before you comment on it.  I didn’t see anything in it that supports what Alan’s saying.  =)


Capitol at Sunset by vgm8383.

To me, the very interesting thing about this bill is this provision:

“Before entering into or renewing a covered contract, the Secretary, acting through the Chief Information Officer, must determine that the contractor has an internal information systems security policy that complies with the Department’s information security requirements, including with regard to authentication, access control, risk management, intrusion detection and prevention, incident response, risk assessment, and remote access, and any other policies that the Secretary considers necessary to ensure the security of the Department’s information infrastructure.”

I have an issue with the language of this provision.  It’s one of scope.

But perhaps an explanation is in order.  Most (OK, maybe half or a little bit more, this isn’t a scientific number) government IT systems are contractor-operated.  These contractors have “Government data” on their corporate networks.  Some of this is fairly benign:  contracting collateral, statements of work, staffing plans, bill rates, etc.  Some of this is really bad:  PII, Privacy Act data, mission data, etc.  Some of this is “gray area”:  trouble tickets, event data, SIEM data, etc.

Now taking this back to cost-effective, adequate security:  what the Langevin bill means is that you’re taking the FISMA framework and applying it to all contractors without any bounds on what you consider within your realm of protection.  In other words, according to the language of the bill, if I’m any contractor supporting DHS in an outsourcing engagement, you can audit my network whether or not it has Government data on it.  This is a problem because your oversight cuts into my margins and in some cases does not provide the Government with the desired level of security.

My response as a contractor is the following:

  • Increase my rates to compensate for the cost of demonstrating compliance
  • Do not bid DHS contracts
  • Adopt a policy that says that DHS policies apply to the systems containing government mission data and meta-data
  • Charge the Government at Time and Materials for any new requirements that they levy on you for mitigation

Unfortunately, this is a game that the Government will win at with respect to controlling the contractor’s network and lose at with respect to cost.

Good contractors understand the liability and keep separation between Government data and their own network.  Back in my CISO role, that was the #1 rule–do not put Government data on the corporate network or “cross the streams” (Thanks, Vlad).  I wrote a whole chunk of blog posts last year about outsourcing; go check them out.  In fact, we would give the customer anything that could be built in a dedicated mode specifically for them.  The dedicated network sections used the customer’s policies, procedures, and standards, and the customer got to test them whenever they wanted.  In back of that was a shared piece for things that needed large economy of scale, like the STK 8500 and the NOC dashboards that put all the performance data on one screen.

Having said that, some data does need to cross over to the contractor’s network (or, even better, a separate management network) in order to provide economy of scale.  In our case, it was trouble tickets:  to split field technicians across different contracts and keep them billable, the only cost-effective way is to have tickets go into a shared system.  Any other solution costs the Government a ton of money because they would be paying for full-time field techs to be on-site doing nothing.
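To make this concrete, here’s a minimal sketch of the kind of placement rule I’m describing.  It is not our actual tooling, and the data types and zone names are illustrative assumptions:  mission data stays in the customer’s dedicated enclave, and only economy-of-scale data like trouble tickets rides on the shared management network.

```python
# Illustrative placement policy: which network zone a data type may live in.
# Data types and zone assignments are examples, not a definitive standard.

DEDICATED_ENCLAVE = "customer dedicated enclave"
SHARED_MGMT = "shared management network"

PLACEMENT_POLICY = {
    "customer mission data": DEDICATED_ENCLAVE,
    "security incident data": DEDICATED_ENCLAVE,
    "trouble tickets": SHARED_MGMT,
    "performance/monitoring data": SHARED_MGMT,
}

def allowed_zone(data_type):
    """Return the most permissive zone a data type may occupy.

    Unknown data types default to the dedicated enclave: when in doubt,
    don't cross the streams.
    """
    return PLACEMENT_POLICY.get(data_type, DEDICATED_ENCLAVE)

for dt in ("customer mission data", "trouble tickets", "some new data feed"):
    print(dt, "->", allowed_zone(dt))
```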

The problem is that our guidance on contractor systems is grossly outdated and highly naive.  The big book of rules that we are using for contractor security is NISPOM.  Unfortunately, NISPOM only applies to classified data, and we’re left with a huge gap when it comes to unclassified data.

What we need is the unclassified version of NISPOM.

The NIST answer is in section 2.4 of SP 800-53:

The assurance or confidence that the risk to the organization’s operations, assets, and individuals is at an acceptable level depends on the trust that the authorizing official places in the external service provider. In some cases, the level of trust is based on the amount of direct control the authorizing official is able to exert on the external service provider with regard to the employment of appropriate security controls necessary for the protection of the service and the evidence brought forth as to the effectiveness of those controls. The level of control is usually established by the terms and conditions of the contract or service-level agreement with the external service provider and can range from extensive (e.g., negotiating a contract or agreement that specifies detailed security control requirements for the provider) to very limited (e.g., using a contract or service-level agreement to obtain commodity services such as commercial telecommunications services).

Hmmm, in a classic ploy of stealing lines from my Guerilla CISO Bag-o-Tricks ™, NIST has said “Well, it depends”.  And yes, it depends, but how do you implement that when OMB dictates that what NIST says is THE standard?



Posted in FISMA, NIST, Rants | No Comments »

And in This Corner, Special Publication 800-60

Posted September 6th, 2007 by

Remember the last post I did about the Business Reference Model? If you don’t, go read it now. We’re about to get freaky on it.

Have you ever given everybody the opportunity to categorize their business functions for confidentiality, integrity, and availability as high, moderate, or low a la FIPS-199? You’ll know the effect if you’ve ever had to build a corporate website–everybody wants a link on the front page. The purchasing team wants a link so that prospective vendors can reach them, and the HR people want the entire corporate employee manual online.

It’s a little organizational behavior factoid: everybody thinks that the silo that they operate in makes the business run. Translate that into security and we’ll find out that, left to their own devices to determine criticality, every business department lists its IT systems as high-criticality. Don’t tell my HR department, but if the personnel management system disappeared for a week, we would still continue to monitor networks. Even if the payroll system were to implode and we had to rebuild it from scratch, it wouldn’t matter unless it happened on or just before a payday–we would still be making money as a business. Yes, have an outage for 2 weeks and our employees would pretend to work because we’re pretending to pay them, and it would be just like the Brezhnev era. =)

Add into the mix the fact that if your system is high-criticality, you get more money (in theory, that’s how the government determines the budget; reality may be a little different), and suddenly everybody in the government has a high-criticality system. In a classic case of “NIST to the rescue”, NIST has developed a totally awesome document that nobody reads called Special Publication 800-60. It’s more than everything a data and system classification geek would need to fall in love with.

Don’t try to read it from cover to cover. You won’t make it through. =) Instead, SP 800-60 is a reference to determine the criticality of systems using… wait for it… the BRM. As you might expect, it’s good reference material, like a government-wide data criticality dictionary. It level-sets the security expectations and definitions of what criticality is government-wide.

How do we use this thing? Well, the SP has a “stuffy” process description complete with phrases like “provisional impact level”, but this is the Guerilla’s Guide to Data Categorization, so we’ll keep it simple (there’s a small sketch of the process after this list):

  • Take a business function.
  • Locate it on the BRM. It might be in as many as 15 different places.
  • Find the BRM section in 800-60.
  • Read the description of the data and see if it fits what you’re looking for.
  • Look at the criticality description. Usually it’s a “fielder’s choice”: if the data matches <this description>, then it’s <this> critical; if it matches <that description>, then it’s <that> critical.
  • Assign a criticality based on the 800-60 definitions.
  • Use the “Common Sense Test”–does what you’ve assigned make sense? You might have other factors such as data aggregation that might make it worth more to you. Feel free to deviate.
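Here’s a minimal sketch of that workflow, assuming a hand-built lookup table. The BRM entries and provisional impact levels below are placeholders rather than values copied out of SP 800-60, so substitute the real entries for your own business functions.

```python
# Sketch of the SP 800-60-style lookup: business function -> BRM entry ->
# provisional (C, I, A) impact levels -> common-sense adjustments.
# The table contents are illustrative placeholders, not SP 800-60 text.

PROVISIONAL_LEVELS = {
    # "BRM entry": (confidentiality, integrity, availability)
    "Help Desk Services": ("Low", "Low", "Low"),
    "Personnel Management": ("Moderate", "Moderate", "Low"),
    "Incident Response": ("High", "High", "Moderate"),
}

def categorize(brm_entry, overrides=None):
    """Look up the provisional levels, then apply documented deviations
    (data aggregation, evidence handling, etc.) as overrides."""
    c, i, a = PROVISIONAL_LEVELS[brm_entry]
    levels = {"confidentiality": c, "integrity": i, "availability": a}
    if overrides:
        levels.update(overrides)  # the "Common Sense Test" adjustments
    return levels

# Example: bump integrity because we aggregate this data as evidence.
print(categorize("Help Desk Services", overrides={"integrity": "Moderate"}))
```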

So now, going back to my own private list of data types, let’s assign a criticality and a justification (after the list is a small sketch of how the C/I/A shorthand rolls up to a system categorization):

  • Customer Mission Data–this inherits the criticality from what the customer says it is, which is usually MMM. But then again, I rate the criticality for my company and my business unit, which might be different from what the government thinks it’s worth.
  • Security Incident Data–we own the systems that this data is on, but the government says that it’s high for criticality because they do not want to expose this data to the press until they’re ready and they have high integrity requirements so they can produce it as evidence and send the perpetrator to a “Federal Pound You .* Prison”. We segregated this data type from network monitoring and performance data because doing so means that we have a smaller pool/quantity of high-criticality data. Criticality is HHM.
  • Internal Purchasing Data–usually LML criticality except for the SOX separation of duties for purchasing, etc. I don’t typically deal with this kind of data and rely on my company as a support vendor, so I have trace amounts under my control. Note that if your sole business is purchasing things for other people, this is probably MMH or thereabouts.
  • IT Infrastructure Management Data–network monitoring data. Usually MML, but it inherits some criticality from the clients’ systems in that a monitoring system for a classified system is also classified even though it carries only trace amounts of mission data.
  • Contracts Collateral Data–proposals and the like. This is MLL for CIA because we work with the government and most of it is public record but we still want to be able to keep our trade secrets secret if we’re working on a bid.
  • Billing Data–MML. Our rates, where they came from, and the markup is for internal use only. It’s bad manners to show the customer your financial guts. =)
  • HR Data–MML only because it contains private information that is the basis for data breach nightmares.
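As a quick aside, the three-letter shorthand above is just (confidentiality, integrity, availability). When you roll several data types up into a system-level categorization, the usual FIPS-199 approach is a high-water mark per security objective. A minimal sketch, using the shorthand from this list:

```python
# Roll individual data-type categorizations (the MMM/HHM/LML shorthand)
# up to a system-level category using the FIPS-199 high-water mark:
# the highest impact level per security objective.

RANK = {"L": 0, "M": 1, "H": 2}

def high_water_mark(*categories):
    """Each category is a three-character C/I/A string like 'HHM'."""
    result = []
    for objective in range(3):  # confidentiality, integrity, availability
        levels = [category[objective] for category in categories]
        result.append(max(levels, key=RANK.get))
    return "".join(result)

# Example: a system holding Security Incident Data (HHM) and
# IT Infrastructure Management Data (MML) categorizes as HHM.
print(high_water_mark("HHM", "MML"))  # -> HHM
```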

Notice what I didn’t explicitly state but hinted around? You can have types of data that are covered under some regulation (yes, the infamous c*mpliance buzzwords). These can become data types in their own right. For example, health information covered under HIPAA should be its own category of data.

Carrying this a bit further, I have a matrix with the data types running down the left side and the following columns across the top (a sketch of the matrix as a data structure follows the list):

  • Criticality for client, corporation, and business unit
  • Location of the data by IT system type (laptops, trouble ticket system, shared files, etc)
  • Governing regulations (FISMA, NISPOM, Privacy Act, specific classifications)
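A minimal sketch of what a couple of rows of that matrix could look like as a data structure; the values are illustrative and not the contents of my actual spreadsheet:

```python
# One row per data type, with the three column groups described above.
# All values here are made-up examples.

matrix = [
    {
        "data_type": "Security Incident Data",
        "criticality": {"client": "HHM", "corporation": "HHM", "business_unit": "HHM"},
        "locations": ["SIEM", "trouble ticket system"],
        "regulations": ["FISMA"],
    },
    {
        "data_type": "HR Data",
        "criticality": {"client": "n/a", "corporation": "MML", "business_unit": "MML"},
        "locations": ["HR system", "laptops"],
        "regulations": ["Privacy Act"],
    },
]

# Useful views fall out almost for free, e.g. audit scope by regulation:
fisma_scope = [row["data_type"] for row in matrix if "FISMA" in row["regulations"]]
print(fisma_scope)  # -> ['Security Incident Data']
```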

It’s hard to describe in prose, so I’ve posted it online for the world to see. I’ll add this into my “Book of Death” for the next revision. No, I didn’t spill the whole cookie jar; I just gave you a framework to build your own. If you want to pay me obscene amounts of money to come fill out a spreadsheet for you, I would be glad to, but you shouldn’t need that much help.

So what have we accomplished with this BRM exercise? Well, for a couple hours’ worth of work, I have the following things:

  • A data criticality classification guide that I can hand to the security engineers to match up with security controls
  • A security business impact assessment that I can hand to managers
  • An azimuth check on which assets have the most value for us as a company
  • A prioritization on who gets the biggest security budget
  • A governance list to tell me which data types are governed by which regulations
  • A list to exclude some assets from some types of regulations (i.e., a smaller scope means less audit when I have to pay for one)
  • The start of a way to break down my enterprise into “bite-sized pieces”

And that’s a beautiful thing.



Posted in FISMA, The Guerilla CISO, What Works | 1 Comment »

