Ooh ooh, the review is supposed to be announced tomorrow!
One of the best things about being almost older than dirt is that I’ve seen several cycles within the security community. Just like fashion and ladies’ hemlines, if you pay attention long enough, you’ll see history repeat itself, or something that closely resembles history. Time for a short trip “down memory lane…”
In the early days of computer security, all eyes were fixed on Linthicum and the security labs associated with the NSA. In the late 80’s and early 90’s, the NSA evaluation program was notoriously slow – glacial would be a word one could use… Bottom line, the process just wasn’t responsive enough to keep up with the changes and improvements in technology. Products would be in evaluation for years before coming out of the process with their enabling technology nearly obsolete. It didn’t matter; it was the only game in town until NIST and the Common Criteria labs came onto the scene. This has worked well; however, the reality is it’s not much better at vetting and moving technology from vendors to users. The problem is that the evaluation process takes time, and time means money – but it also means that the code submitted for evaluation will most likely be several revisions old by the time it emerges from evaluation. Granted, it may only be 6 months, or it might take a year – regardless, this is far better than before.
So… practically speaking, if the base version of FooOS submitted for evaluation is, say, Version 5.0.1, several revisions — each solving operational problems affecting the organization — may have been released since then. We may find that we need to run Version 5.6.10r3 in order to pass encrypted traffic via the network. Because we encrypt traffic, we must use FIPS 140-2 certified code – but in the example above, the validated version of FooOS will not work in our network… What does the CISO do? We’ll return to this in a moment; it gets better!
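To make the gap concrete, here’s a minimal sketch of the check a CISO’s staff ends up doing by hand. All version strings, set contents, and function names are hypothetical — no vendor’s actual tooling works this way:

```python
import re

# Hypothetical data: the version listed on the FIPS 140-2 validation
# certificate versus the version the network operationally requires.
FIPS_VALIDATED = {"5.0.1"}
OPERATIONALLY_REQUIRED = "5.6.10r3"

def parse_version(v):
    """Split a dotted version like '5.6.10r3' into comparable numeric pieces."""
    return tuple(int(x) for x in re.findall(r"\d+", v))

def is_validated(version):
    """True only if this exact version appears on the certificate."""
    return version in FIPS_VALIDATED

# The CISO's dilemma in two lines: the version we need is newer than the
# validated one, and it is not on the certificate.
assert parse_version(OPERATIONALLY_REQUIRED) > parse_version("5.0.1")
if not is_validated(OPERATIONALLY_REQUIRED):
    print("Gap: required version", OPERATIONALLY_REQUIRED,
          "is not on the validation certificate")
```

The point of the sketch: exact-match validation means any operationally driven upgrade, however minor, drops you off the certificate.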
In order to reach levels of FIPS-140 goodness, one vendor in particular has instituted “FIPS Mode.” What this does is require administration of the box from a position directly in front of the equipment, or at the length of your longest console cable… Clearly, this is not suitable for organizations with equipment deployed worldwide to locations that do not have qualified administrators or network engineers. Further, having to fly a technician to Burundi to clear sessions on a box every time it becomes catatonic is ridiculous at best; at worst, it’s not in accordance with the network concept of operations. How does the CISO propose a workable, secure solution?
Standard Hill photo by timparkinson.
Now to my point. (about time Vlad) How does the CISO approach this situation? Allow me to tell you the approach I’ve taken….
1. Accept the fact that once FooOS has achieved a level of FIPS-140 goodness, the modules of code within the OS implementing cryptographic functionality have most likely not been changed in follow-on versions. This also means you have to assume the vendor has done a good job of documenting the changes to their baseline in their release notes, and that they HAVE modular code…
2. Delve into vendor documentation and FIPS-140 to find out exactly what “FIPS Mode” is, its benefits and the requirement. Much of the written documentation in the standard deals with physical security of the cryptographic module itself (e.g., tamper-evident seals) – but most helpful is Table 1.
| Requirement Area | Security Level 1 | Security Level 2 | Security Level 3 | Security Level 4 |
|---|---|---|---|---|
| Cryptographic Module Specification | Specification of cryptographic module, cryptographic boundary, Approved algorithms, and Approved modes of operation. Description of cryptographic module, including all hardware, software, and firmware components. Statement of module security policy. *(Levels 1–4)* | | | |
| Cryptographic Module Ports and Interfaces | Required and optional interfaces. Specification of all interfaces and of all input and output data paths. *(Levels 1–2)* | | Data ports for unprotected critical security parameters logically or physically separated from other data ports. *(Levels 3–4)* | |
| Roles, Services, and Authentication | Logical separation of required and optional roles and services. | Role-based or identity-based operator authentication. | Identity-based operator authentication. *(Levels 3–4)* | |
| Finite State Model | Specification of finite state model. Required and optional states. State transition diagram and specification of state transitions. *(Levels 1–4)* | | | |
| Physical Security | Production grade equipment. | Locks or tamper evidence. | Tamper detection and response for covers and doors. | Tamper detection and response envelope. EFP or EFT. |
| Operational Environment | Single operator. Executable code. Approved integrity technique. | Referenced PPs evaluated at EAL2 with specified discretionary access control mechanisms and auditing. | Referenced PPs plus trusted path evaluated at EAL3 plus security policy modeling. | Referenced PPs plus trusted path evaluated at EAL4. |
| Cryptographic Key Management | Key management mechanisms: random number and key generation, key establishment, key distribution, key entry/output, key storage, and key zeroization. *(Levels 1–4.)* Secret and private keys established using manual methods may be entered or output in plaintext form. *(Levels 1–2)* | | Secret and private keys established using manual methods shall be entered or output encrypted or with split knowledge procedures. *(Levels 3–4)* | |
| EMI/EMC | 47 CFR FCC Part 15, Subpart B, Class A (business use). Applicable FCC requirements (for radio). *(Levels 1–2)* | | 47 CFR FCC Part 15, Subpart B, Class B (home use). *(Levels 3–4)* | |
| Self-Tests | Power-up tests: cryptographic algorithm tests, software/firmware integrity tests, critical functions tests. Conditional tests. *(Levels 1–4)* | | | |
| Design Assurance | Configuration management (CM). Secure installation and generation. Design and policy correspondence. Guidance documents. | CM system. Secure distribution. Functional specification. | High-level language implementation. | Formal model. Detailed explanations (informal proofs). Preconditions and postconditions. |
| Mitigation of Other Attacks | Specification of mitigation of attacks for which no testable requirements are currently available. *(Levels 1–4)* | | | |
Summary of Security Requirements From FIPS-140-2
Bottom line — some “features” are indeed useful, but this particular vendor’s “one-size-fits-all” implementation limits the usefulness of the feature in some operational scenarios (most notably, the one your humble author is dealing with). BTW, changing vendors is not an option.
3. Upon analyzing the FIPS requirements against operational needs, and (importantly) the environment the equipment is operating in, one has to draw the line between “operating in vendor FIPS Mode” and using FIPS 140-2 validated encryption.
4. Document the decision and the rationale.
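One way to gain some assurance for step 1 — a sketch under the assumption that the vendor ships its cryptographic code as discrete, identifiable files; the filenames and directory layout below are invented for illustration:

```python
# Sketch: hash the (hypothetical) crypto module files in the validated
# release and in the release we actually run, then compare. Identical
# hashes support -- but do not prove -- the assumption that the
# cryptographic code is unchanged between versions.
import hashlib
import os

# Illustrative filenames only; a real vendor's module layout will differ.
CRYPTO_MODULE_FILES = ["libcrypto.so", "fips_selftest.ko"]

def module_hashes(release_dir):
    """Return {filename: sha256 hex digest} for the crypto module files."""
    hashes = {}
    for name in CRYPTO_MODULE_FILES:
        with open(os.path.join(release_dir, name), "rb") as f:
            hashes[name] = hashlib.sha256(f.read()).hexdigest()
    return hashes

def crypto_unchanged(validated_dir, running_dir):
    """True if every crypto module file is byte-identical in both releases."""
    return module_hashes(validated_dir) == module_hashes(running_dir)
```

This only corroborates the vendor’s release notes; it says nothing about code the vendor does not expose as separate files, which is exactly why step 1 remains an assumption you accept and document rather than a fact you prove.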
Once again, security professionals have to help managers strike a healthy balance between “enough” security and operational requirements. You would think that using approved equipment, operating systems, and vendors vetted through the CC evaluation process would be enough. Reading the standard, we see the official acknowledgement that “Your Mileage May Indeed Vary”™:
“While the security requirements specified in this standard are intended to maintain the security provided by a cryptographic module, conformance to this standard is not sufficient to ensure that a particular module is secure. The operator of a cryptographic module is responsible for ensuring that the security provided by a module is sufficient and acceptable to the owner of the information that is being protected and that any residual risk is acknowledged and accepted.” FIPS 140-2 Sec 15, Qualifications
The next paragraph constitutes validation of the approach I’ve embraced:
“Similarly, the use of a validated cryptographic module in a computer or telecommunications system does not guarantee the security of the overall system. The responsible authority in each agency shall ensure that the security of the system is sufficient and acceptable.” (Emphasis added.)
One could say, “it depends,” but you wouldn’t think so at first glance – it’s a Standard for Pete’s sake!
Then again, nobody said this job would be easy!
So I was doing my usual “Beltway Bandit Perusal of Opportunities for Filthy Lucre,” also known as diving into FedBizOpps, and I found this gem. Basically, what this means is that sometime this summer, NIST is going to put out an RFP for contractors to further develop SCAP using ARRA funds.
Keep in mind that this isn’t the official list of what NIST wants done under this contract, but it’s an interesting look at where SCAP will go over the next couple of years:
So how do you play? Well, the first thing is to respond to the notice with a capabilities statement saying “yes, we have experience in doing what you want” – there is a list of specifics in the original notice. Then sign up for FedBizOpps and follow the announcement so you can get changes and the RFP when it comes out.
Here in the information assurance salt mines, we sure do loves us some conspiracies, so here’s the conspiracy of the month: S.773 gives the Government the ability to view your private data and gives the President disconnect authority over the Internet, which means he can censor it.
Let’s look at the sections and paragraphs that would seem to say this:
(b) FUNCTIONS- The Secretary of Commerce–
(1) shall have access to all relevant data concerning such networks without regard to any provision of law, regulation, rule, or policy restricting such access;
Section 18: The President–
(2) may declare a cybersecurity emergency and order the limitation or shutdown of Internet traffic to and from any compromised Federal Government or United States critical infrastructure information system or network;
(6) may order the disconnection of any Federal Government or United States critical infrastructure information systems or networks in the interest of national security;
Taken completely by itself, it would seem like this gives the President the authority to do all sorts of wrong stuff: all he has to do is declare something critical infrastructure and declare it compromised, or in the interests of national security. And some people have:
And some movies (we all love movies):
Actually, Shelly is pretty astute and makes some good points; she just doesn’t have the background in information security.
It makes me wonder: since when have people considered social networking sites, or the Internet as a whole, to be “critical infrastructure”? Then the BSOFH in me thinks, “Ye gods, when did our society sink so low?”
Now, as far as going back to Section 14 of S.773, it exists because most of the critical infrastructure is privately-held. There is a bit of history to understand here and that is that the critical infrastructure owners and operators are very reluctant to give the information on their piece of critical infrastructure to the Government. Don’t blame them, I had the same problem as a contractor: if you give the Government information, the next step is them telling you how to change it and how to run your business. Since the owners/operators are somewhat non-helpful, the Government needs more teeth to get what it needs.
But as far as private data traversing the critical infrastructure? I think it’s a stretch to say that’s part of the requirements of Section 14, it’s to collect data “about” (the language of the bill) the critical infrastructure, not “processed, stored, or forwarded” on the critical infrastructure. But yeah, let’s scope this a little bit better, CapHill Staffers.
On to Section 18. Critical infrastructure is defined elsewhere in law. Let’s see the definitions section from HSPD-7, Critical Infrastructure Identification, Prioritization, and Protection:
In this directive:
The term “critical infrastructure” has the meaning given to that term in section 1016(e) of the USA PATRIOT Act of 2001 (42 U.S.C. 5195c(e)).
The term “key resources” has the meaning given that term in section 2(9) of the Homeland Security Act of 2002 (6 U.S.C. 101(9)).
The term “the Department” means the Department of Homeland Security.
The term “Federal departments and agencies” means those executive departments enumerated in 5 U.S.C. 101, and the Department of Homeland Security; independent establishments as defined by 5 U.S.C. 104(1); Government corporations as defined by 5 U.S.C. 103(1); and the United States Postal Service.
The terms “State,” and “local government,” when used in a geographical sense, have the same meanings given to those terms in section 2 of the Homeland Security Act of 2002 (6 U.S.C. 101).
The term “the Secretary” means the Secretary of Homeland Security.
The term “Sector-Specific Agency” means a Federal department or agency responsible for infrastructure protection activities in a designated critical infrastructure sector or key resources category. Sector-Specific Agencies will conduct their activities under this directive in accordance with guidance provided by the Secretary.
The terms “protect” and “secure” mean reducing the vulnerability of critical infrastructure or key resources in order to deter, mitigate, or neutralize terrorist attacks.
And referencing the Patriot Act gives us the following definition for critical infrastructure:
In this section, the term “critical infrastructure” means systems and assets, whether physical or virtual, so vital to the United States that the incapacity or destruction of such systems and assets would have a debilitating impact on security, national economic security, national public health or safety, or any combination of those matters.
Since it’s not readily evident what we really consider to be critical infrastructure, let’s look at the implementation of HSPD-7. They’ve defined critical infrastructure sectors and key resources, each of which has a sector-specific plan on how to protect them.
And oh yeah, S.773 doesn’t mention key resources, only critical infrastructure. Some of these key resources aren’t even networked (*cough* icons and national monuments *cough*). Also note that “Teh Interblagosphere” isn’t listed, although you could make a case that the information technology and communications sectors might include it.
Yes, this is not immediately obvious, you have to stitch about half a dozen laws together, but if we didn’t do pointers to other laws, we would have the legislative version of spaghetti code.
Going back to Section 18 of S.773, what paragraph 2 does is give the President the authority to disconnect critical infrastructure or government-owned IT systems from the Internet if they have been compromised. That’s fairly scoped, I think. I know I’ll get some non-technical readers on this blog post, but basically one of the first steps in incident response is to disconnect the system, fix it, then restore service.
Paragraph 6 is the part that scares me, mostly because it has the same disconnect authority as paragraph 2 and the same scope (critical infrastructure), but the only justification is “in the interests of national security.” In other words: we don’t have to tell you why we disconnected your systems from the Internet, because you don’t have the clearances to understand.
So how do we fix this bill?
Section 14 needs an enumeration of the types of data that we can request from critical infrastructure owners and operators. Something like the following:
The bill has a definitions section–Section 23. We need to adopt the verbiage from HSPD-7 and include it in Section 23. That takes care of some of the scoping issues.
We need a definition for “compromise” and we need a definition for “national security”. Odds are these will be references to other laws.
Add a recourse for critical infrastructure owners who have been disconnected: At the very minimum, give them the conditions under which they can be reconnected and some method of appeal.
Check out this blog post. Wow, all sorts of crazies descend out of the woodwork when Bruce talks about something that’s been around for years, and suddenly everyone’s redesigning the desktop from the ground up.
Quick recap on comments:
Proving once again that you can’t talk about Windows desktop security without it evolving into a flamewar. Might as well pull out “vi vs. emacs” while you’re at it, Bruce. =)
Computer Setup photo by karindalziel. Yes, one of them is a linux box, I used this picture for that very same reason. =)
But there is one point that people need to understand. The magic of FDCC is not in the fact that the Government used its IT-buying muscle to get Microsoft to cooperate. Oh no, that’s to be expected–the guys at MS are used to working with a lot of people now on requests.
The true magic of FDCC is getting the application vendors to play along. To wit:
In other words, if your software works with FDCC, it’s probably built to run on a security-correct operating system in the first place. This is a good thing; in this case the Government is using its IT budget to bring application vendors up to some sort of minimal security baseline for the rest of the world.
This statement is from the FDCC FAQ; comments in parentheses are mine:
“How are vendors required to prove FDCC compliance?
There is no formal compliance process; vendors of information technology products must self-assert FDCC compliance. They are expected to ensure that their products function correctly with computers configured with the FDCC settings. The product installation process must make no changes to the FDCC settings. Applications must work with users who do not have administrative privileges, the only acceptable exception being information technology management tools. Vendors must test their products on systems configured with the FDCC settings, they must use SCAP validated tools with FDCC Scanner capability to certify their products operate correctly with FDCC configurations and do not alter FDCC settings. The OMB provided suggested language in this memo: http://www.whitehouse.gov/omb/memoranda/fy2007/m07-18.pdf, vendors are likely to encounter similar language when negotiating with agencies.”
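In practice, the core of that “must make no changes to the FDCC settings” rule boils down to a before-and-after diff of the mandated settings. Here’s a hedged sketch: the real FDCC settings live in the Windows registry and local security policy, but the setting names and snapshot format below are invented purely for illustration:

```python
# Sketch: snapshot the FDCC-mandated settings before and after an
# application install, then diff. Any changed setting means the product
# violates the "no changes to the FDCC settings" requirement.
# Setting names here are illustrative, NOT the real FDCC baseline.

def diff_settings(before, after):
    """Return {setting: (old, new)} for settings whose values changed."""
    return {k: (before.get(k), after.get(k))
            for k in set(before) | set(after)
            if before.get(k) != after.get(k)}

before = {"PasswordComplexity": "Enabled", "FirewallState": "On"}
# Hypothetical installer helpfully turned the firewall off:
after = {"PasswordComplexity": "Enabled", "FirewallState": "Off"}

violations = diff_settings(before, after)
if violations:
    print("FDCC violations:", violations)
```

A vendor self-asserting compliance is claiming this diff comes back empty on an FDCC-configured machine, as checked with an SCAP-validated tool rather than a toy script like this.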
So really what you get out of self-certification is something like this:
April Fools’ Day pranks aside, I’m wondering what happened to the 60-day Cybersecurity Review. Supposedly, it was turned in to the President on the 17th. I guess all I can do is sigh and say “So much for transparency in Government.”
I’m trying hard to be understanding here, I really am. But isn’t the administration pulling the same Comprehensive National Cybersecurity Initiative thing again, telling the professionals out in the private sector that it depends on: “You can’t handle the truth!”?
And this is the problem. Let’s face it, our information sharing from Government to private sector really sucks right now. I understand why this is–when you have threats and intentions that come from classified sources, if you share that information, you risk losing your sources. (ref: Ultra and Coventry, although it’s semi-controversial)
Secret Passage photo by electricinca.
Looking back at one of the weaknesses of our information-sharing strategy so far:
In my opinion, Government can’t figure out if they are a partner or a regulator. Choose wisely, it’s hard to be both.
As a regulator, we just establish the standard and, in theory anyway, the private sector folks don’t need to know the reasoning behind the standard. It’s fairly easy to manage but not very flexible–you don’t get much innovation and new technology if people don’t understand the business case. This is also a more traditional role for Government to take.
As a partner, we can share information and consequences with the private sector. It’s more flexible in response but takes much more effort and money to bring them information. It also takes participation from both sides–Government and private sector.
Now to tie it all off by going back to the 60-Day Cybersecurity Review…. The private sector needs information contained in the review. Not all of it, mind you, just the parts that they need to do their job. They need it to help the Government. They need it to build products that fit the Government’s needs. They need it to secure their own infrastructure.