When Standards Aren’t Good Enough
Posted May 22nd, 2009 by Vlad the Impaler

One of the best things about being almost older than dirt is that I’ve seen several cycles within the security community. Just like fashion and ladies’ hemlines, if you pay attention long enough, you’ll see history repeat itself, or something that closely resembles history. Time for a short trip “down memory lane…”
In the early days of computer security, all eyes were fixed on Linthicum and the security labs associated with the NSA. In the late 80s and early 90s the NSA evaluation program was notoriously slow – glacial is the word one could use. Bottom line: the process just wasn’t responsive enough to keep up with the changes and improvements in technology. Products would sit in evaluation for years, emerging with their enabling technology nearly obsolete. It didn’t matter; it was the only game in town until NIST and the Common Criteria labs came onto the scene. This has worked reasonably well, but the reality is that it’s not much better at vetting and moving technology from vendors to users. The evaluation process takes time, and time means money – it also means that the code submitted for evaluation will most likely be several revisions old by the time it emerges. Granted, it may only be six months, or it might take a year; regardless, this is far better than before.
So… practically speaking, if the base version of FooOS submitted for evaluation is, say, Version 5.0.1, several revisions – each solving operational problems affecting the organization – may have been released by the time the evaluation completes. We may find that we need to run Version 5.6.10r3 in order to pass encrypted traffic on the network. Because we encrypt traffic, we must use FIPS 140-2 Level 2 certified code – but in the example above, the validated version of FooOS will not work in our network… What does the CISO do? We’ll return to this in a moment; it gets better!
In order to reach levels of FIPS-140 goodness, one vendor in particular has instituted “FIPS Mode.” What this does is require administration of the box from a position directly in front of the equipment, or at the length of your longest console cable… Clearly, this is not suitable for organizations with equipment deployed worldwide to locations that do not have qualified administrators or network engineers. Further, having to fly a technician to Burundi to clear sessions on a box every time it becomes catatonic is ridiculous at best; at worst, it’s not in accordance with the network concept of operations. How does the CISO propose a workable, secure solution?
[Photo: Standard Hill, by timparkinson]
Now to my point. (About time, Vlad!) How does the CISO approach this situation? Allow me to tell you the approach I’ve taken…
1. Accept the fact that once FooOS has achieved a level of FIPS-140 goodness, the modules of code within the OS that implement cryptographic functionality most likely have not changed in follow-on versions. This also means you have to assume the vendor has done a good job of documenting changes to their baseline in their release notes, and that they HAVE modular code… (one way to spot-check that assumption is sketched after this list).
2. Delve into the vendor documentation and FIPS-140 itself to find out exactly what “FIPS Mode” is, its benefits, and its requirements. Much of the written standard deals with the physical security of the cryptographic module itself (e.g., tamper-evident seals) – but most helpful is Table 1.
| Requirement Area | Levels | Requirement |
|---|---|---|
| Cryptographic Module Specification | 1–4 | Specification of cryptographic module, cryptographic boundary, Approved algorithms, and Approved modes of operation. Description of cryptographic module, including all hardware, software, and firmware components. Statement of module security policy. |
| Cryptographic Module Ports and Interfaces | 1–2 | Required and optional interfaces. Specification of all interfaces and of all input and output data paths. |
| | 3–4 | Data ports for unprotected critical security parameters logically or physically separated from other data ports. |
| Roles, Services, and Authentication | 1 | Logical separation of required and optional roles and services. |
| | 2 | Role-based or identity-based operator authentication. |
| | 3–4 | Identity-based operator authentication. |
| Finite State Model | 1–4 | Specification of finite state model. Required and optional states. State transition diagram and specification of state transitions. |
| Physical Security | 1 | Production grade equipment. |
| | 2 | Locks or tamper evidence. |
| | 3 | Tamper detection and response for covers and doors. |
| | 4 | Tamper detection and response envelope. EFP or EFT. |
| Operational Environment | 1 | Single operator. Executable code. Approved integrity technique. |
| | 2 | Referenced PPs evaluated at EAL2 with specified discretionary access control mechanisms and auditing. |
| | 3 | Referenced PPs plus trusted path evaluated at EAL3 plus security policy modeling. |
| | 4 | Referenced PPs plus trusted path evaluated at EAL4. |
| Cryptographic Key Management | 1–4 | Key management mechanisms: random number and key generation, key establishment, key distribution, key entry/output, key storage, and key zeroization. |
| | 1–2 | Secret and private keys established using manual methods may be entered or output in plaintext form. |
| | 3–4 | Secret and private keys established using manual methods shall be entered or output encrypted or with split knowledge procedures. |
| EMI/EMC | 1–2 | 47 CFR FCC Part 15, Subpart B, Class A (business use). Applicable FCC requirements (for radio). |
| | 3–4 | 47 CFR FCC Part 15, Subpart B, Class B (home use). |
| Self-Tests | 1–4 | Power-up tests: cryptographic algorithm tests, software/firmware integrity tests, critical functions tests. Conditional tests. |
| Design Assurance | 1 | Configuration management (CM). Secure installation and generation. Design and policy correspondence. Guidance documents. |
| | 2 | CM system. Secure distribution. Functional specification. |
| | 3 | High-level language implementation. |
| | 4 | Formal model. Detailed explanations (informal proofs). Preconditions and postconditions. |
| Mitigation of Other Attacks | 1–4 | Specification of mitigation of attacks for which no testable requirements are currently available. |

Summary of Security Requirements from FIPS 140-2 (Table 1). Where a requirement spans several levels, it applies unchanged at each of those levels.
Bottom line – some “features” are indeed useful, but this particular vendor’s one-size-fits-all implementation effectively precludes using the feature at all in some operational scenarios (most notably, the one your humble author is dealing with). BTW, changing vendors is not an option.
3. Upon analyzing the FIPS requirements against operational needs and (importantly) the environment the equipment operates in, one has to draw the line between “operating in the vendor’s FIPS Mode” and using FIPS 140-2 validated encryption.
4. Document the decision and the rationale.
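If the vendor really does have modular code, you can spot-check the “crypto hasn’t changed” assumption from Step 1 rather than taking the release notes entirely on faith. Below is a minimal sketch, assuming you can extract the filesystem from both the validated image (5.0.1 in our example) and the version you actually need to run (5.6.10r3), and that the vendor’s FIPS security policy document names the files inside the cryptographic boundary. Every path, file name, and module name here is hypothetical.

```python
#!/usr/bin/env python3
"""Hypothetical spot-check: hash the (vendor-identified) cryptographic
module files in two extracted FooOS firmware trees and report which
ones changed between the validated release and the one you must run.
All file names below are invented for illustration."""

import hashlib
import os
import sys


def sha256_of(path):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()


def compare_crypto_modules(old_root, new_root, module_files):
    """Split module_files into (unchanged, changed) between two trees."""
    unchanged, changed = [], []
    for rel in module_files:
        same = (sha256_of(os.path.join(old_root, rel)) ==
                sha256_of(os.path.join(new_root, rel)))
        (unchanged if same else changed).append(rel)
    return unchanged, changed


if __name__ == "__main__":
    # Invented module list: the vendor's FIPS security policy (part of
    # the validation paperwork) defines the real cryptographic boundary.
    MODULES = ["lib/libfoocrypto.so", "bin/foo_keymgr"]
    old_tree, new_tree = sys.argv[1], sys.argv[2]
    unchanged, changed = compare_crypto_modules(old_tree, new_tree, MODULES)
    print("Unchanged crypto files:", unchanged)
    print("Changed crypto files:  ", changed)
```

If the hashes match across releases, the vendor’s claim that the validated module is untouched at least isn’t contradicted; if they differ, the “FIPS 140-2 validated” label no longer describes the code you would actually be running, and the documented decision in Step 4 needs to say so.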
Once again, security professionals have to help managers strike a healthy balance between “enough” security and operational requirements. You would think that using approved equipment and operating systems from vendors vetted through the CC evaluation process would be enough. Reading the standard, though, we find the official acknowledgement that Your Mileage May Indeed Vary™:
“While the security requirements specified in this standard are intended to maintain the security provided by a cryptographic module, conformance to this standard is not sufficient to ensure that a particular module is secure. The operator of a cryptographic module is responsible for ensuring that the security provided by a module is sufficient and acceptable to the owner of the information that is being protected and that any residual risk is acknowledged and accepted.” (FIPS 140-2, Sec. 15, Qualifications)
The next paragraph constitutes validation of the approach I’ve embraced:
“Similarly, the use of a validated cryptographic module in a computer or telecommunications system does not guarantee the security of the overall system. The responsible authority in each agency shall ensure that the security of the system is sufficient and acceptable.” (Emphasis added.)
One could say, “it depends,” but you wouldn’t think so at first glance – it’s a Standard for Pete’s sake!
Then again, nobody said this job would be easy!
Vlad
Comments (4):
May 22nd, 2009 at 1:47 pm
Right on the money!
May 25th, 2009 at 10:01 am
[…] #1 – History Repeating: Vlad the Impaler opened his “When Standards Aren’t Good Enough” post with the sentence “[o]ne of the best things about being almost older than dirt is that I’ve seen several cycles within the security community. Just like fashion and ladies’ hemlines, if you pay attention long enough, you’ll see history repeat itself, or something that closely resembles history.” Definitely one of the best openings we’ve ever seen, because it’s totally true that history repeats itself. Even in the relatively ‘new’ field of security, there have been trends that have fallen into obscurity only to become the latest rage in a few years time. As Vlad points out, one of those things is standards. While they have improved over time, they still have a long way to go. I highly encourage you to take a “short trip ‘down memory lane,’” (as Vlad puts it) and see how the past is still impacting the present by reading Vlad’s post. […]
May 25th, 2009 at 8:00 pm
Vlad,
We’ve been having some of the very same discussions in my “Hell’s Half Acre” in DoD-land.
Like yours, our conversation bled into similar conversations about how patching might affect compliance with Common Criteria, and now FDCC.
Unlike your conversation, ours took a turn toward how a CISO/DAA can verify that FIPS 140 (for lack of a better example) is being complied with. Given that many compliance standards are configuration-specific, how is one to know that the configuration steps are being followed without looking over someone’s shoulder (a poor use of time)?
Liked the post. Keep ’em flowing.
stradageezer
May 29th, 2009 at 3:01 pm
As far as looking over people’s shoulder goes, we’ve implemented OpsWare here as part of our OSS (Operations Support System). It’s really powerful once you determine your actual baseline — it can detect variance/deviations and automagically correct them.
My problem is that it automagically disabled one of my monitoring systems by clobbering the firewall config lines that allowed remote monitoring!
(file that under “Things We Do to Ourselves”, and “There is No Silver Bullet”)
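For anyone who hasn’t watched that failure mode up close, here’s a toy sketch – plain Python, and emphatically not OpsWare’s actual logic or API – of why naive line-level baseline enforcement eats legitimate local changes like my monitoring rules. The ACL lines are invented for illustration.

```python
"""Toy illustration of line-level config drift 'remediation': anything
not in the baseline is treated as drift and clobbered, even lines a
human added on purpose. Not OpsWare's real behavior or API."""


def enforce_baseline(baseline_lines, running_lines):
    """Return (config_to_push, clobbered_drift_lines)."""
    baseline = set(baseline_lines)
    drift = [line for line in running_lines if line not in baseline]
    # Naive remediation pushes exactly the baseline, so every drift
    # line -- including the intentional one below -- is silently lost.
    return list(baseline_lines), drift


BASELINE = [
    "permit tcp any host 10.0.0.1 eq 22",
    "deny ip any any",
]
RUNNING = [
    "permit tcp any host 10.0.0.1 eq 22",
    "permit udp host 10.9.9.9 any eq 161",  # added so the NMS can poll SNMP
    "deny ip any any",
]

pushed, clobbered = enforce_baseline(BASELINE, RUNNING)
print("Lines clobbered as 'drift':", clobbered)
```

The fix, of course, is to fold intentional local changes into the baseline before you turn auto-remediation loose – which is exactly the “determine your actual baseline” step above.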
Cheers,
Vlad