Let’s roleplay, shall we?
You’re the US government. You have an IT budget of $65 ba-ba-ba-ba-billion (stuttering added for additional effect) every year (2007 budget). If you wanted to, you might be able to make an offer to buy Microsoft based on one year’s worth of budget.
So how do you manage security risks associated with such a huge amount of cash? Same way you would manage those IT systems in the non-security world:
- Break it all down into bite-sized pieces
- Have some sort of methodology to manage the pieces effectively
- Delegate responsibility for each piece to somebody
- Use metrics to track where you are going
- Focus on risks to the business and the financial investment
- Provide oversight on all of the pieces that you delegated
- Evaluate each piece to see how well it is doing
Hmm, sounds exactly like what the government has done so far: an agency’s investment (system) inventory/portfolio, the OMB budget process, and the GAO metrics.
Now how would you manage each bite-sized piece? This is roughly the way a systems engineer would do it:
- Define needs
- Define requirements
- Build a tentative design
- Design review
- Build the system
- Test that the requirements are met
- Flip the switch to take it live
- Support anything that breaks
Hmm, that’s suspiciously like a system development life-cycle, isn’t it? There’s a reason we use project management and SDLC–in order to get from here to there, you need to have a plan or methodology to follow, and SDLC makes sense.
So then let’s do the same exercise and add in the security pieces of the puzzle.
- Define needs: Determine how much the system and the information are worth–categorization (FIPS-199 and NIST SP 800-60)
- Define requirements (FIPS-200 and NIST SP 800-53, along with a ton of tailoring)
- Build a tentative design (first security plan draft)
- Design review (security plan approval)
- Build the system
- Test that the needs and requirements are met (security test and evaluation)
- Flip the switch to take it live (accreditation decision)
- Support anything that breaks (continuous monitoring)
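The “define needs” step above boils down to FIPS-199’s high-water-mark rule: rate the impact of losing confidentiality, integrity, and availability, and the overall system category is the highest of the three. A minimal sketch (the function and variable names are mine, not from any NIST document):

```python
# Sketch of FIPS-199 security categorization via the high-water mark.
# Impact ratings per objective would come from NIST SP 800-60 guidance
# on the information types the system handles.

LEVELS = ["LOW", "MODERATE", "HIGH"]  # ordered lowest to highest impact

def high_water_mark(confidentiality, integrity, availability):
    """Overall category = highest impact level across the three objectives."""
    impacts = (confidentiality, integrity, availability)
    return max(impacts, key=LEVELS.index)

# Example: a hypothetical payroll system where integrity matters most.
category = high_water_mark("MODERATE", "HIGH", "LOW")
print(category)  # HIGH
```

That single category then drives the FIPS-200/SP 800-53 baseline you tailor in the “define requirements” step.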
Guess what? That’s C&A in a nutshell. All this other junk is just that–junk. If you’re not managing security risk throughout the SDLC, what are you doing except posturing for the other security people to see and arguing about trivia?
This picture (blatantly stolen from NIST SP 800-64, Security Considerations in the Information System Development Life Cycle) shows you how the core components of C&A fit in with the rest of the SDLC:
My theory is that the majority of systems have already been built and are in the O&M phase of their SDLC. That means we are trying to do C&A for these systems too late to really change anything. It also means that, for the most part, we will be doing C&A on systems that have already been built–so, just as people confused war communism with pure communism, we confuse the emergency state of after-the-fact C&A with the pure state of C&A.
Now let’s look at where C&A typically falls apart:
- Confusing compliance (check the box) with risk management (are we providing “adequate security”?)
- Focusing too much on the certification statement and accreditation decision, which should be a gate instead of the entire process
- Not hiring smart people
- Failure to associate technical and system-specific risks with the business case
- Disassociation from the rest of the SDLC
- Disassociation from reality (liarware)
- Trying to certify and accredit systems in the implementation phase of the SDLC
- Risk-averse decision-making
- Using C&A as quality assurance
Keys to success at this game roughly follow what ISM-Community has proposed as an ISM Top 10. Those ISM guys, they’re pretty smart. =)