Federated Vulnerability Management

Posted July 14th, 2009

Why hello there, private-sector folks.  It’s no big surprise: I work in the US Federal Government space, and we have some unique challenges of scale.  Glad to meet you; I hear you’ve got the same problems, just not at the same scale as the US Federal Government.  Sit back, read, and learn.

You see, I work in places where everything is siloed into different environments.  We have crazy zones for databases, client-facing DMZs, management segments, and then the federal IT architecture itself: a loose federation of semi-independent enterprises that are rapidly coming together in strange ways under the wonderful initiative known as “The TIC”.  We’re also one of the most heavily audited sectors in IT.

And yet, the way we manage patch and vulnerability information is something out of the mid-’80s.

Current State of Confusion

Our current patch management information flow goes something like this:

  • Department SOC/US-CERT/CISOs Office releases a vulnerability alert (IAVA, ISVM, something along those lines)
  • Somebody makes a spreadsheet with the following on it:
    • Number of places with this vulnerability.
    • How many have been fixed.
    • When you’re going to have it fixed.
    • A percentage of completion.
  • We then manage by spreadsheets until the spreadsheets say “100%”.
  • The spreadsheets are aggregated somewhere.  If we’re lucky, we have some kind of management tool that we dump our info into like eMASS.
  • We wonder why we get pwned (by either haxxorz or the IG).

Now for how we manage vulnerability scan information:

  • We run a tool.
  • The tool spits out a .csv or, worse, a .html.
  • We pull up the .csv in Excel and add some columns.
  • We assign dates and responsibilities to people.
  • We have a weekly meeting to go over what’s been completed.
  • When we finish something, we provide evidence of what we did.
  • We still really don’t know how effective we were.

Problems with this approach:

  • It’s too easy to game.  If I’m doing reporting, the only thing really keeping me reporting the truth is my sense of ethics.
  • It’s slow as hell.  If somebody updates a spreadsheet, how does the change get echoed into the upstream spreadsheets?
  • It isn’t accurate at any given moment in time, mostly because things change quicker than the process can keep up.  What this means is that we always look like liars who are trying to hide something because our spreadsheet doesn’t match up with the “facts on the ground”.
  • It doesn’t tie into our other management tools, like Plans of Action and Milestones (POA&Ms).  Those are usually managed in a different application than the technical parts, and this means that we need a human with a spreadsheet to act as the intermediary.

So this is my proposal to “fix” government patch and vulnerability management: Federated Patch and Vulnerability Management through SCAP.

Trade Federation Battle Droid photo by Stéfan.  Roger, Roger, SCAP means business.

Whatchu Talkin’ Bout With This “Federated” Stuff, Willis?

This is what I mean, my “Plan for BSOFH Happiness”:

Really, what I want is for every agency to have an “orchestrator” a la Ed Bellis’s little SCAP tool of horrors. =)  Then we federate them so that information can roll up to a top-level dashboard for the entire executive branch.

In my beautiful world, every IT asset reports into a patch management system of some sort.  Servers, workstations, laptops, all of it.  Yes, databases too.  Printers–yep.  If we can get network devices reporting config info via an SCAP-enabled NMS, let’s get that pushing content into our orchestrator too.  We don’t even have to push patches using these tools–what I’m primarily concerned with at this point is the ability to pull reports.

I group all of my IT assets in my system into a bucket of some sort in the orchestrator.  That way, we know who’s responsible when something has a problem.  It also fits into our “system” concept from FISMA/C&A/Project Management/etc.
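
To make the bucket idea a little more concrete, here’s a minimal sketch (Python, and every name in it is mine, not from any real product) of how an orchestrator might model assets and the system buckets that own them:

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    """One IT asset reporting into the orchestrator: server, workstation, printer, whatever."""
    hostname: str
    ip_address: str
    asset_type: str              # e.g. "server", "workstation", "network-device"
    patch_managed: bool = False  # does a patch-management agent report on this box?

@dataclass
class SystemBucket:
    """A FISMA-style 'system' boundary: the bucket that owns a set of assets."""
    name: str
    owner: str                   # who gets the ticket when something in here has a problem
    assets: list[Asset] = field(default_factory=list)

    def add(self, asset: Asset) -> None:
        self.assets.append(asset)

# Toy usage: group assets so responsibility is obvious later.
gss = SystemBucket(name="General Support System", owner="infrastructure team")
gss.add(Asset("web01", "10.1.1.5", "server", patch_managed=True))
gss.add(Asset("printer-3f", "10.1.2.77", "printer"))
```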

We do periodic network scanning to identify everything on our network and feed the results into the orchestrator.  We do regular vulnerability scans, and any findings feed into the orchestrator.  The more data, the better the aggregate information we can get.

Our orchestrator correlates network scans with patch management status and gives us a ticket/alert/whatever where we have unmanaged devices.  Yes, most enterprise management tools do this today, but the more scan results I have feeding them, the better chance I have at finding all my assets.  Thanks to our crazy segmented architecture models, we have all these independent zones that break patch, vulnerability, and configuration management as the rest of the IT world performs it.  Flat is better for management, but failing that, I’ll take SCAP hierarchies of reporting.
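
The correlation step itself is not rocket science.  Here’s a rough sketch, assuming you can dump a host inventory out of both the scanner and the patch tool; all the data is made up:

```python
def find_unmanaged_hosts(scanned_hosts: set[str], patch_managed_hosts: set[str]) -> set[str]:
    """Hosts the scanner saw on the wire that the patch-management system has never heard of."""
    return scanned_hosts - patch_managed_hosts

# Inventories would normally come from your scanner and patch tool exports; these are fake.
scanned = {"10.1.1.5", "10.1.1.9", "10.1.2.40"}
managed = {"10.1.1.5", "10.1.2.40"}

for host in sorted(find_unmanaged_hosts(scanned, managed)):
    print(f"ticket: unmanaged device {host}")
```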

The Department takes a National Vulnerability Database feed and pushes down to the Agencies what they used to send in an IAVA, only they also send down the check to see if your system is vulnerable.  My orchestrator automagically tests and reports back on status before I’m even awake in the morning.
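
Here’s roughly all the “automagic” has to do, assuming the Department’s push includes machine-readable affected-platform (CPE) names and the orchestrator already knows what software runs where; the alert format and every value in it are invented for illustration:

```python
import json

# Hypothetical IAVA-style alert pushed down from the Department, now machine-readable.
alert = json.loads("""
{
  "alert_id": "IAVA-2009-A-0042",
  "cve": "CVE-2009-0001",
  "affected_cpes": ["cpe:/a:example:webserver:2.1"]
}
""")

# What the orchestrator already knows about installed software (normally fed by scans/agents).
installed = {
    "web01": ["cpe:/a:example:webserver:2.1", "cpe:/o:example:linux:5"],
    "db01":  ["cpe:/a:example:database:9.0"],
}

# Match affected platforms against the inventory and report status upstream.
vulnerable = [host for host, cpes in installed.items()
              if any(cpe in alert["affected_cpes"] for cpe in cpes)]

print(f"{alert['alert_id']} ({alert['cve']}): {len(vulnerable)} asset(s) affected -> {vulnerable}")
```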

I get hardening guides pushed from the Department or Agency in SCAP form, then pull an audit on my IT assets and have the differences automagically entered into my workflow and reporting.
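
For the hardening-guide piece, the workflow entry point can be as dumb as parsing the XCCDF results file and opening a ticket per failed rule.  A sketch, with the ticketing call stubbed out because that part depends entirely on whatever workflow tool you run:

```python
import xml.etree.ElementTree as ET

def failed_rules(results_path: str) -> list[str]:
    """Pull the rule ids that came back 'fail' from an XCCDF results file.
    Matches on local tag names so it doesn't care which XCCDF namespace version is in use."""
    failures = []
    for elem in ET.parse(results_path).iter():
        if elem.tag.split("}")[-1] == "rule-result":
            result = next((c.text for c in elem if c.tag.split("}")[-1] == "result"), None)
            if result == "fail":
                failures.append(elem.get("idref", "unknown-rule"))
    return failures

def open_ticket(rule_id: str) -> None:
    """Stub: wire this to whatever ticketing/workflow system you actually run."""
    print(f"ticket opened: failed hardening rule {rule_id}")

# for rule in failed_rules("xccdf-results.xml"):
#     open_ticket(rule)
```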

I become a ticket monkey.  Everything is in workflow.  I can be replaced with somebody less expensive and can now focus on finding the answer to infosec nirvana.

We provide a feed upstream to our Department, the Department provides a feed to somebody (NCSD/US-CERT/OMB/Cybersecurity Coordinator) who now has the view across the entire Government.  Want to be bold?  Let Vivek K and the Sunlight Foundation at the data feeds and have a truly open and transparent “Unbreakable Government 2.1”.  Who needs FISMA report cards when our vulnerability data is on display?
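
And the upstream feed doesn’t need to carry raw host data; a counts-only roll-up is plenty for a Department (or Government-wide) dashboard.  Another sketch, with fabricated findings:

```python
import json
from collections import Counter

# Fabricated local findings: (system bucket, severity, status).
findings = [
    ("web-farm", "high", "open"),
    ("web-farm", "high", "closed"),
    ("db-enclave", "medium", "open"),
]

# Roll up only what the next level of the hierarchy needs: open counts by system and severity.
open_counts = Counter((system, sev) for system, sev, status in findings if status == "open")
feed = [{"system": s, "severity": sev, "open_findings": n} for (s, sev), n in open_counts.items()]

print(json.dumps({"agency": "example-agency", "summary": feed}, indent=2))
```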

Keys to Making Federated Patch and Vulnerability Management Work

Security policy that requires SCAP-compatible vulnerability and patch management products.  Instead of parroting back 800-53, please give me a requirement in your security policy that every patch and vulnerability management tool that we buy MUST BE SCAP-CERTIFIED.  Yes, I know we won’t get it done right now, but if we get it in policy, then it will trickle down into product choices eventually.  This is compliance I can live with, boo-yeah!

Security architecture models (FEA anyone?) that show federated patch and vulnerability management deployments as part of their standard configuration.  OK, I understand what you’re saying with the firewall pictures and zones of trust; now give me patch and vulnerability management flows across all the zones so I can do the other 85% of my job.

Network traffic from the edges of the hierarchy to…somewhere.  OK, you just need network connectivity throughout the hierarchy to aggregate and update patch and vulnerability information, this is basic data flow stuff.  US-CERT in a future incarnation could be the top-level aggregator, maybe.  Right now I would be happy building aggregation up to the Department level because that’s the level at which we’re graded.

Understanding.  Hey, I can’t fix everything all the time–what I’m doing is using automation to make the job of fixing things easier by aggregation, correlation, status reporting, and dashboarding.  These are all concepts behind good IT management; why shouldn’t we apply them to security management also?  Yes, I’ll have times when I’m behind on something or another, but guess what, I’m behind today and you just don’t know it.  However, with near-real-time reporting, we need a culture shift away from trying to police each other up all the time and toward understanding that sometimes nothing is really perfect.

Patch and vulnerability information is all-in.  Everything has to be reporting in, 100% across the board, or you don’t have anything–back to spreadsheet hell for you.  And simply put, why don’t you have everything in the patch management system already?  Come on, that’s not a good enough reason.

POA&Ms need to be more fluid.  Face it, with automated patch and vulnerability management, POA&Ms become more like trouble tickets.  But yes, that’s much awesome, smaller, easily-satisfied POA&Ms are much easier to manage provided that the administrative overhead for each of these is reduced to practically nothing… just like IT trouble tickets.

Regression testing and providing proof becomes easier because it’s all automated.  Once you fix something and it’s marked in the aggregator as completed, it gets slid into the queue for retesting, and the results become the evidence.

Interfaces with existing FISMA management tools.  This one is tough.  But we have a very well-entrenched software base geared around artifact management, POA&M management, and Security Test and Evaluation results.  This class of software exists because none of the tools vendors really understand how the Government does security management, and I mean NONE of them.  There have to be some weird, unnatural data import/export acts going on here to make the orchestrator of technical data match up with the orchestrator of management data, and this is the part that scares me in a federated world.

SCAP spreads to IT management suites.  They already have a footprint out there on everything, and odds are we’re using them for patch and configuration management anyway.  If they don’t talk SCAP, push the vendor to get it working.

Where Life Gets Surreal

Then I woke up and realized that if I provide my Department CISO with near-real-time patch and vulnerability management information, I suddenly have become responsible for patch and vulnerability management instead of playing “kick it to the contractors” and hiding behind working groups.  It could be that if I get Federated Patch and Vulnerability Management off the ground, I’ve given my Department CISO the rope to hang me with.  =)

Somehow, somewhere, I’ve done most of what CAG was talking about and automated it.  I feel so… um… dirty.  Really, folks, I’m not a shill for anybody.



Posted in DISA, NIST, Rants, Technical | 12 Comments »

The World Asks: is S.773 Censorship?

Posted May 15th, 2009

Here in the information assurance salt mines, we sure do loves us some conspiracies, so here’s the conspiracy of the month: S.773 gives the Government the ability to view your private data and gives the President disconnect authority over the Internet, which means he can censor it.

Let’s look at the sections and paragraphs that would seem to say this:

Section 14:

(b) FUNCTIONS- The Secretary of Commerce–

(1) shall have access to all relevant data concerning such networks without regard to any provision of law, regulation, rule, or policy restricting such access;

Section 18: The President–

(2) may declare a cybersecurity emergency and order the limitation or shutdown of Internet traffic to and from any compromised Federal Government or United States critical infrastructure information system or network;

(6) may order the disconnection of any Federal Government or United States critical infrastructure information systems or networks in the interest of national security;

Taken completely by itself, it would seem like this gives the President the authority to do all sorts of wrong stuff: all he has to do is declare something critical infrastructure and declare it compromised or in the interests of national security.  And some people have taken it exactly that way:

And some movies (we all love movies):

Actually, Shelly is pretty astute and makes some good points; she just doesn’t have the background in information security.

It makes me wonder when people started considering social networking sites, or the Internet as a whole, to be “critical infrastructure”.  Then the BSOFH in me thinks, “Ye gods, when did our society sink so low?”

Now, as far as going back to Section 14 of S.773, it exists because most of the critical infrastructure is privately-held.  There is a bit of history to understand here: the critical infrastructure owners and operators are very reluctant to give the information on their piece of critical infrastructure to the Government.  Don’t blame them; I had the same problem as a contractor: if you give the Government information, the next step is them telling you how to change it and how to run your business.  Since the owners/operators are somewhat non-helpful, the Government needs more teeth to get what it needs.

But as far as private data traversing the critical infrastructure?  I think it’s a stretch to say that’s part of the requirements of Section 14; it’s to collect data “about” (the language of the bill) the critical infrastructure, not data “processed, stored, or forwarded” on the critical infrastructure.  But yeah, let’s scope this a little bit better, CapHill Staffers.

On to Section 18.  Critical infrastructure is defined elsewhere in law.  Let’s see the definitions section from HSPD-7, Critical Infrastructure Identification, Prioritization, and Protection:

In this directive:

The term “critical infrastructure” has the meaning given to that term in section 1016(e) of the USA PATRIOT Act of 2001 (42 U.S.C. 5195c(e)).

The term “key resources” has the meaning given that term in section 2(9) of the Homeland Security Act of 2002 (6 U.S.C. 101(9)).

The term “the Department” means the Department of Homeland Security.

The term “Federal departments and agencies” means those executive departments enumerated in 5 U.S.C. 101, and the Department of Homeland Security; independent establishments as defined by 5 U.S.C. 104(1); Government corporations as defined by 5 U.S.C. 103(1); and the United States Postal Service.

The terms “State,” and “local government,” when used in a geographical sense, have the same meanings given to those terms in section 2 of the Homeland Security Act of 2002 (6 U.S.C. 101).

The term “the Secretary” means the Secretary of Homeland Security.

The term “Sector-Specific Agency” means a Federal department or agency responsible for infrastructure protection activities in a designated critical infrastructure sector or key resources category. Sector-Specific Agencies will conduct their activities under this directive in accordance with guidance provided by the Secretary.

The terms “protect” and “secure” mean reducing the vulnerability of critical infrastructure or key resources in order to deter, mitigate, or neutralize terrorist attacks.

And referencing the Patriot Act gives us the following definition for critical infrastructure:

In this section, the term “critical infrastructure” means systems and assets, whether physical or virtual, so vital to the United States that the incapacity or destruction of such systems and assets would have a debilitating impact on security, national economic security, national public health or safety, or any combination of those matters.

Since it’s not readily evident from this what we really consider to be critical infrastructure, let’s look at the implementation of HSPD-7.  They’ve defined critical infrastructure sectors and key resources, each of which has a sector-specific plan on how to protect them.

  • Agriculture and Food
  • Banking and Finance
  • Chemical
  • Commercial Facilities
  • Communications
  • Critical Manufacturing
  • Dams
  • Defense Industrial Base
  • Emergency Services
  • Energy
  • Government Facilities
  • Healthcare and Public Health
  • Information Technology
  • National Monuments and Icons
  • Nuclear Reactors, Materials and Waste
  • Postal and Shipping
  • Transportation System
  • Water

And oh yeah, S.773 doesn’t mention key resources, only critical infrastructure.  Some of this critical infrastructure isn’t even networked (*cough* icons and national monuments *cough*).  Also note that “Teh Interblagosphere” isn’t listed, although you could make a case that the Information Technology and Communications sectors might include it.

Yes, this is not immediately obvious, you have to stitch about half a dozen laws together, but if we didn’t do pointers to other laws, we would have the legislative version of spaghetti code.

Going back to Section 18 of S.773, what paragraph 2 does is give the President the authority to disconnect critical infrastructure or government-owned IT systems from the Internet if they have been compromised.  That’s fairly scoped, I think.  I know I’ll get some non-technical readers on this blog post, but basically one of the first steps in incident response is to disconnect the system, fix it, then restore service.

Paragraph 6 is the part that scares me, mostly because it has the same disconnect authority as paragraph 2 and the same scope (critical infrastructure and Federal Government systems), but the only justification is “in the interest of national security”.  In other words, we don’t have to tell you why we disconnected your systems from the Internet because you don’t have the clearances to understand.

So how do we fix this bill?

Section 14 needs an enumeration of the types of data that we can request from critical infrastructure owners and operators. Something like the following:

  • Architecture and topology
  • Vulnerability scan results
  • Asset inventories
  • Audit results

The bill has a definitions section–Section 23.  We need to adopt the verbiage from HSPD-7 and include it in Section 23.  That takes care of some of the scoping issues.

We need a definition for “compromise” and we need a definition for “national security”. Odds are these will be references to other laws.

Add a recourse for critical infrastructure owners who have been disconnected: At the very minimum, give them the conditions under which they can be reconnected and some method of appeal.



Posted in Public Policy, Rants | 3 Comments »

Blow-By-Blow on S.773–The Cybersecurity Act of 2009–Part 1

Posted April 14th, 2009

Rybolov Note: this is such a long blog post that I’m breaking it down into parts.  Go read the bill here.  Go read part two here.  Go read part three here.  Go read part four here.  Go read part 5 here. =)

So the Library of Congress finally got S.773 up on http://thomas.loc.gov/.  For those of you who have been hiding under a rock, this is the Cybersecurity Act of 2009, a bill introduced by Senators Rockefeller and Snowe that, depending on your political slant, will either allow us to “sock it to the hackers and send them to a federal pound-you-in-the-***-prison” or “vastly erode our civil liberties”.

A little bit of pre-reading is in order:

Timing: Now let’s talk about the timing of this bill.  There is the 60-day Cybersecurity Review that is supposed to be coming out Real Soon Now (TM).  This bill is an attempt by Congress to head it off at the pass.

Rumor mill says that the Cybersecurity Review will not only be unveiled at RSA (possible, but strange) but also that it won’t bring anything new to the debate (more possible, but then again, nothing’s really new; we’ve known about this stuff for at least a decade).

Overall Comments:

This bill is big.  It really is an omnibus Cybersecurity Act and has just about everything you could want and more.  There’s a fun way of doing things in the Government, and it goes something like this: ask for 300% of what you need so that you will end up with 80%.  And I see this bill is taking this approach to heart.

Pennsylvania Ave – Old Post Office to the Capitol at Night photo by wyntuition.

And now for the good, bad, and ugly:

SEC. 2. FINDINGS. This section is primarily a summary of testimony that has been delivered over the past couple of years.  It really serves as justification for the rest of the bill.  It is a little bit on the FUD side of things (as in “omigod, they put ‘Cyber-Katrina’ in a piece of legislation”), but overall it’s pretty balanced and what you would expect for a bill.  Bottom line here is that we depend on our data and the networks that carry it.  Even if you don’t believe in Cyberwar (I don’t really believe in Cyberwar unless it’s just one facet of combined arms warfare), you can probably agree that the costs of insecurity on a macroeconomic scale need to be looked at and defended against, and our dependency on the data and networks is only going to increase.

No self-respecting security practitioner will like this section, but politicians will eat it up.  Relax, guys, you’re not the intended audience.

Verdict: Might as well keep this in there, it’s plot development without any requirements.

SEC. 3. CYBERSECURITY ADVISORY PANEL. This section creates a Cybersecurity Advisory Panel made up of Federal Government, private sector, academia, and state and local government.  This is pretty typical so far.  The interesting thing to me is “(7) whether societal and civil liberty concerns are adequately addressed”… in other words, are we balancing security with citizens’, corporations’, and states’ rights?  More to come on this further down in the bill.

Verdict: Will bring a minimal cost in Government terms.  I’m very hesitant to create new committees.  But yeah, this can stay.

SEC. 4. REAL-TIME CYBERSECURITY DASHBOARD. This section is very interesting to me.  On one hand, it’s what we do at the enterprise level for most companies.  On the other hand, this is specific to the Commerce Department –“Federal Government information systems and networks managed by the Department of Commerce.”  The first reading of this is the internal networks that are internal to Commerce, but then why is this not handed down to all agencies?  I puzzled on this and did some research until I remembered that Commerce, through NTIA, runs DNS, and Section 8 contains a review of the DNS contracts.

Verdict: I think this section needs a little bit of rewording so that the scope is clearer, but sure, a dashboard is pretty benign, it’s the implied tasks to make a dashboard function (ie, proper management of IT resources and IT security) that are going to be the hard parts.  Rescope the dashboard and explicitly say what kind of information it needs to address and who should receive it.

SEC. 5. STATE AND REGIONAL CYBERSECURITY ENHANCEMENT PROGRAM. This section calls for Regional Cybersecurity Centers, something along the lines of what we call “Centers of Excellence” in the private sector.  This section is interesting to me, mostly because of how vague it seemed the first time I read it, but the more times I look at it, I go “yeah, that’s actually a good idea”.  What this section tries to do is to bridge the gap between the standards world that is NIST and the people outside of the beltway–the “end-users” of the security frameworks, standards, tools, methodologies, what-the-heck-ever-you-want-to-call-them.  Another interesting thing about this is that while the proponent department is Commerce, NIST is part of Commerce, so it’s not as left-field as you might think.

Verdict: While I think this section is going to take a long time to come to fruition (5+ years before any impact is seen), I see that Regional Cybersecurity Centers, if properly funded and executed, can have a very significant impact on the rest of the country.  It needs to happen, only I don’t know what the cost is going to be, and that’s the part that scares me.

SEC. 6. NIST STANDARDS DEVELOPMENT AND COMPLIANCE. This is good.  Basically this section provides a mandate for NIST to develop a series of standards.  Some of these have been sitting around for some time in various incarnations; I doubt that anyone would disagree that these need to be done.

  1. CYBERSECURITY METRICS RESEARCH:  Good stuff.  Yes, this needs help.  NIST are the people to do this kind of research.
  2. SECURITY CONTROLS:  Already existing in SP 800-53.  Depending on interpretation, this changes the scope and language of the catalog of controls to non-Federal IT systems, or possibly a fork of the controls catalog.
  3. SOFTWARE SECURITY:  I guess if it’s in a law, it has come of age.  This is one of the things that NIST has wanted to do for some time but they haven’t had the manpower to get involved in this space.
  4. SOFTWARE CONFIGURATION SPECIFICATION LANGUAGE: Part of SCAP.  The standard is there, it just needs to be extended to various pieces of software.
  5. STANDARD SOFTWARE CONFIGURATION:  This is the NIST configuration checklist program a la SP 800-70.  I think NIST ran short on manpower for this also and resorted back to pointing at the DISA STIGs and FDCC.  This so needs to be further developed into a uniform set of standards and then (here’s the key) rolled back upstream to the software vendors so they ship their products pre-configured.
  6. VULNERABILITY SPECIFICATION LANGUAGE: Sounds like SCAP.

Now for the “gotchas”:

(d) COMPLIANCE ENFORCEMENT- The Director shall–

(1) enforce compliance with the standards developed by the Institute under this section by software manufacturers, distributors, and vendors; and

(2) shall require each Federal agency, and each operator of an information system or network designated by the President as a critical infrastructure information system or network, periodically to demonstrate compliance with the standards established under this section.

This section basically does 2 things:

  • Mandates compliancy for vendors and distributors with the NIST standards listed above.  I’m surprised this hasn’t been talked about elsewhere.  This clause suffers from scope problems because if you interpret it BSOFH-stylie, you can take it to mean that anybody who sells a product, regardless of who’s buying, has to sell a securely-configured version.  IE, I can’t sell XP to blue-haired grandmothers unless I have something like an FDCC variant installed on it.  I mostly agree with this in the security sense but it’s a serious culture shift in the practical sense.
  • Mandates an auditing scheme for Federal agencies and critical infrastructure.  Everybody’s talked about this, saying that since designation of critical infrastructure is not defined, this is left at the discretion of the Executive Branch.  This isn’t as wild-west as the bill’s opponents want it to seem; there is a ton of groundwork laid out in HSPD-7.  But yeah, HSPD-7 is an executive directive and can be changed “at the whim” of the President.  And yes, this is auditing by Commerce, which has some issues in that Commerce is not equipped to deal with IT security auditing.  More on this in a later post.

Verdict: The standards part is already happening today; this section just codifies it and justifies NIST’s research.  Don’t task Commerce with enforcement of NIST standards; it leads down all sorts of inappropriate roads.



Posted in Public Policy, What Doesn't Work, What Works | 7 Comments »

Beware the Cyber-Katrina!

Posted February 19th, 2009

Scenario: American Internet connections are attacked.  In the resulting chaos, the Government fails to respond at all, primarily because of infighting over jurisdiction issues between responders.  Mass hysteria ensues–40 years of darkness, cats sleeping with dogs kind of stuff.

Sound similar to New Orleans after Hurricane Katrina?  Well, this scenario now has a name: Cyber-Katrina.

At least, this is what Paul Kurtz talked about this week at Black Hat DC.  Now I understand what Kurtz is saying:  that we need to figure out the national-level response while we have time so that when it happens we won’t be frozen with bureaucratic paralysis.  Yes, it works for me; I’ve been saying it ever since I thought I was somebody important last year.  =)

But Paul…. don’t say you want to create a new Cyber-FEMA for the Internet.  That’s where the metaphor you’re using fails–if you carry it too far, what you’re saying is that you want to make a Government organization that will eventually fail when the nation needs it the most.  Saying you want a Cyber-FEMA is just an ugly thing to say after you think about it too long.

What Kurtz really meant to say is that we don’t have a national-level CERT that coordinates between the major players (DoD, DoJ, DHS, state and local governments, and the private sector) for large-scale incident response.  What Kurtz is really saying, if you read between the lines, is that US-CERT needs to be a national-level CERT and needs funding, training, people, and connections to do this mission.  In order to fulfill what the administration wants, needs, and is almost promising to the public through their management agenda, US-CERT has to get real big, real fast.

But the trick is, how do you explain this concept to somebody who doesn’t have either the security understanding or the national policy experience to understand the issue?  You resort back to Cyber-Katrina and maybe bank on a little FUD in the process.  Then the press gets all crazy on it–like breaking SSL means Cyber-Katrina Real Soon Now.

Now for those of you who will never be a candidate for Obama’s Cybersecurity Czar job, let me break this down for you big-bird stylie.  Right now there are 3 major candidates vying to get the job.  Since there is no official recommendation (and there probably won’t be until April when the 60 days to develop a strategy is over), the 3 candidates are making their move to prove that they’re the right person to pick.  Think of it as their mini-platforms, just look out for when they start talking about themselves in the 3rd person.

FEMA Disaster Relief photo by Infrogmation. Could a Cyber-FEMA coordinate incident response for a Cyber-Katrina?

And in other news, I3P (with ties to Dartmouth) has issued their National Cyber Security Research and Development Challenges document, which um… hashes over the same stuff we’ve seen from the National Strategy to Secure Cyberspace, the Systems and Technology Research and Design Plan, the CSIS Recommendations, and the Obama Agenda.  Only the I3P report has all this weird psychologically-oriented mumbo-jumbo that made my eyes glaze over when I read it.

Guys, I’ve said this so many times I feel like a complete cynic: talk is cheap, security isn’t.  It seems like everybody has a plan but nobody’s willing to step up and fix the problem.  Not only that, but they’re taking each other’s recommendations, throwing them in a blender, and reissuing their own.  Wake me up when somebody actually does something.

It leads me to believe that, once again, those who talk don’t know, and those who know don’t talk.

Therefore, here’s the BSOFH’s guide to protecting the nation from Cyber-Katrina:

  • Designate a Cybersecurity Czar
  • Equip the Cybersecurity Czar with a $100B/year budget
  • Nationalize Microsoft, Cisco, and one of the major all-in-one security companies (Symantec)
  • Integrate all the IT assets you now own and force them to write good software
  • Public execution of any developer who uses strcpy() because who knows what other stupid stuff they’ll do
  • Require code review and vulnerability assessments for any IT product that is sold on the market
  • Regulate all IT installations to follow Government-approved hardening guides
  • Use US-CERT to monitor the military-industrial complex
  • ?????
  • Live in a secure Cyber-World

But hey, that’s not the American way–we’re not socialists, damnit! (well, except for mortgage companies and banks and automakers and um yeah….)  So far all the plans have called for cooperation with the private sector, and that’s worked out just smashingly because of an industry-wide conflict of interest–writing junk software means that you can sell upgrades or new products later.

I think the problem is fixable, but I predict these are the conditions for it to happen:

  • Massive failure of some infrastructure component due to IT security issues
  • Massive ownage of Government IT systems that actually gets publicized
  • Deaths caused by massive IT Security fail
  • Osama Bin Laden starts writing exploit code
  • Citizen outrage to the point where my grandmother writes a letter to the President

Until then, security issues will always be second fiddle to wars, the economy, presidential impeachments, and a host of a bazillion other things.  Because of this, security conditions will get much, much worse before they get better.

And then the cynic in me can’t help but think that, deep down inside, what the nation needs is precisely an IT Security Fail along the lines of 9-11/Katrina/Pearl Harbor/Dien Bien Phu/Task Force Smith.



Posted in BSOFH, Public Policy, Rants | 6 Comments »

Security Assessment Economics

Posted June 12th, 2008

I’ve spent a couple of days traveling around to agencies to teach.  It was fun but tiring, and the best part of it is that since I’m not teaching pure doctrine, I can include the “here’s how it works in real life” parts and some of the BSOFH parts–what I refer to as the “security management heretic thoughts”.

Some basic statements, the rest of this post will explain:

  • C&A is a commodity market
  • Security controls assessment is a commodity market
  • PCI assessment is a commodity market
  • Most MSSP (or rather, Security Device Management Service Providers) services are commodity markets

Now my boss said the first one to me about 4 months ago, and it really took some time for me to grasp the implications.  What we mean by “commodity market” is that since there isn’t really much of a difference between vendors, the vendors have to compete on having the lowest price.

Now what the smart people will try to do is to take the commodity service and try to make it more of a boutique service by increasing the value.  Problem is that it only works if the customers play along and figure out how your service is different–usually what happens is you lose in the market simply because now you’re “too expensive”.

Luxury, Boutique, Commodity

Where Boutique Sits by miss_rogue.

Since the security assessment world is a services business, the only way to compete in a commodity market is to pay your people less and try to charge more. But oh yeah, we compete on price, so that only leaves the paychecks as the way to keep the margin up.

Some ways that vendors will try to keep the assessment costs down:

  • Hire cheaper people (yes, paper CISSPs)
  • Try to reduce the engagement to a formula/methodology (ack, a checklist)
  • It’s all about billability:  what percentage of your people’s time is not billable to clients? 
  • Put people on assessments who have tangential skills just to keep them billable
  • Use Cost-Plus-Margin or Time-Plus-Materials so that you can work more hours
  • Use Firm-Fixed-Price contracts with highly reduced services ($150 PCI assessments)

Now inside Government contracting, there’s a fact that’s not known outside of the beltway:  your margins are fixed by the Government.  In other words, they only allow you to have around a 13-15% margin.  The way to make money is that the pie is much bigger, even though you only get a small piece of it.  And yes, they do look at your accounting records, and yes, there are loopholes, but for the most part, you can only collect this little margin.  If you stop and think about it, the Government almost forces the majority of its contractors into a commodity market.

Then we wonder why C&A engagements go so haywire…

The problem with commodity markets and vulnerability/risk/pen-test assessments is that your results, and by extension your ability to secure your data, are only as good as the skills and creativity of the people that the vendor sends.  Sounds like a problem?  It is.

So knowing this, how can you as the client get the most out of your service providers? This is a quick list:

  • Every year (or every other), get an assessment from somebody who has a good reputation for being thorough (ie, a boutique)
  • Be willing to pay more for services than the bottom of the market but be sure that you get quality people to go along with it, otherwise you’ve just added to the vendor’s margin with no real improvements to yourself
  • Get assessments from multiple vendors across the span of a year or two–more eyes means different checklists
  • Provide the assessors with your own checklists so you can steer them (tip from Dave Mortman)
  • Self-identify vulnerabilities when appropriate (especially with vulnerabilities from previous assessments)
  • Typical contracting fixes such as scope management, reviewing resumes of key personnel, etc
  • Get lucky when the vendor hires really good people who don’t know how much they’re really worth (that was me 5 years ago)
  • More than I’m sure will end up in the comments to this post  =)

And the final technique is that it’s all about what you do with the assessment results.  If you feed them into a mitigation plan (goviespeak: POA&M) and improve your security, it’s a win.



Posted in Outsourcing, Rants, Risk Management, The Guerilla CISO | 6 Comments »

Papers and Presentations

Posted June 8th, 2008

We’re always open to speaking invitations on any topic you see on the blog, just ask via our contact page. In fact, you could say at this point that we are a mini speaker’s bureau for US Federal Government Information Security issues.

Have a look at some work we’ve put out:

Meta-Metrics: Building a Scorecard for the Evaluation of Security Management and Control Frameworks, Metricon 5 2010 (.ppt presentation, August 2010)

Building A Modern Security Policy For Social Media and Government, Potomac Forum 2009 (.ppt presentation, December 2009)

Compliancy, Why Me? Living with the Compliance Staff, a BSOFH Guide, DojoCon 2009 (.ppt presentation, November 2009)

Massively Scaled Security Solutions for Massively Scaled IT: SecTor 2009 (.ppt presentation, October 2009)

Making the Choice: ATO, IATO, or Denial: Role of the Authorizing Official/Designated Approving Authority and the Accreditation Decision (.ppt presentation, February 2009)

The Evolution of Digital Forensics: Civilizing the Cyber Frontier (.pdf document, January 2009)

Backtrack 3 and USB: The Quick and Dirty Pictorial Guide to a Non-Persistent USB Installation (.ppt presentation, September 2008)


FISMA, NIST, and OMB Oh My: Why information security in the Government succeeds or fails and what you can learn from it (.ppt presentation, June 2008)



Posted in | Comments Off on Papers and Presentations
