Micro Digital Signatures Howto

Posted February 22nd, 2011 by

With RSA wrapping up, I figured I would do something fun with Alice, Bob, and crypto.  There is a need for small digital signatures (Micro Digital Signatures, or “MicroDigiSigs” if I can be so bold as to try to start a nerdy meme) and tools to support them over small message spaces such as The Twitters, SMS/text messaging, barcodes, jabber/XMPP, and probably tons of other things I haven’t even thought of.

Elliptic Curve Cryptography (ECC) provides a solution because of some inherent traits in the algorithms:

  • Speed to compute
  • Low processor load
  • Small keys
  • Small signatures

Some general-type info to know before we go further with this:

  • OpenSSL 1.0.0 supports ECC functions.  This is teh awesome, thank you OpenSSL peoples.
  • You can check out the OpenSSL HOWTO; I derived a ton of info from this resource: http://www.madboa.com/geek/openssl/
  • Issues with ECC support in OpenSSL:
    • ECC is poorly documented in OpenSSL.  Pls fix kthanx.
    • Some targets are missing from OpenSSL (e.g., one-step ECDSA signatures with SHA-256).

Now on to the step-by-step process.  Feel free to shoot holes in this; I’m sure there are tons of other ways to do things.

Show all the available curves:
rybolov@ryzhe:~$ openssl ecparam -list_curves
secp112r1 : SECG/WTLS curve over a 112 bit prime field
secp112r2 : SECG curve over a 112 bit prime field
secp128r1 : SECG curve over a 128 bit prime field
secp128r2 : SECG curve over a 128 bit prime field
secp160k1 : SECG curve over a 160 bit prime field
secp160r1 : SECG curve over a 160 bit prime field
secp160r2 : SECG/WTLS curve over a 160 bit prime field
secp192k1 : SECG curve over a 192 bit prime field
secp224k1 : SECG curve over a 224 bit prime field
secp224r1 : NIST/SECG curve over a 224 bit prime field
secp256k1 : SECG curve over a 256 bit prime field
secp384r1 : NIST/SECG curve over a 384 bit prime field
secp521r1 : NIST/SECG curve over a 521 bit prime field
prime192v1: NIST/X9.62/SECG curve over a 192 bit prime field
prime192v2: X9.62 curve over a 192 bit prime field
prime192v3: X9.62 curve over a 192 bit prime field
prime239v1: X9.62 curve over a 239 bit prime field
prime239v2: X9.62 curve over a 239 bit prime field
prime239v3: X9.62 curve over a 239 bit prime field
prime256v1: X9.62/SECG curve over a 256 bit prime field
sect113r1 : SECG curve over a 113 bit binary field
sect113r2 : SECG curve over a 113 bit binary field
sect131r1 : SECG/WTLS curve over a 131 bit binary field
sect131r2 : SECG curve over a 131 bit binary field
sect163k1 : NIST/SECG/WTLS curve over a 163 bit binary field
sect163r1 : SECG curve over a 163 bit binary field
sect163r2 : NIST/SECG curve over a 163 bit binary field
sect193r1 : SECG curve over a 193 bit binary field
sect193r2 : SECG curve over a 193 bit binary field
sect233k1 : NIST/SECG/WTLS curve over a 233 bit binary field
sect233r1 : NIST/SECG/WTLS curve over a 233 bit binary field
sect239k1 : SECG curve over a 239 bit binary field
sect283k1 : NIST/SECG curve over a 283 bit binary field
sect283r1 : NIST/SECG curve over a 283 bit binary field
sect409k1 : NIST/SECG curve over a 409 bit binary field
sect409r1 : NIST/SECG curve over a 409 bit binary field
sect571k1 : NIST/SECG curve over a 571 bit binary field
sect571r1 : NIST/SECG curve over a 571 bit binary field
c2pnb163v1: X9.62 curve over a 163 bit binary field
c2pnb163v2: X9.62 curve over a 163 bit binary field
c2pnb163v3: X9.62 curve over a 163 bit binary field
c2pnb176v1: X9.62 curve over a 176 bit binary field
c2tnb191v1: X9.62 curve over a 191 bit binary field
c2tnb191v2: X9.62 curve over a 191 bit binary field
c2tnb191v3: X9.62 curve over a 191 bit binary field
c2pnb208w1: X9.62 curve over a 208 bit binary field
c2tnb239v1: X9.62 curve over a 239 bit binary field
c2tnb239v2: X9.62 curve over a 239 bit binary field
c2tnb239v3: X9.62 curve over a 239 bit binary field
c2pnb272w1: X9.62 curve over a 272 bit binary field
c2pnb304w1: X9.62 curve over a 304 bit binary field
c2tnb359v1: X9.62 curve over a 359 bit binary field
c2pnb368w1: X9.62 curve over a 368 bit binary field
c2tnb431r1: X9.62 curve over a 431 bit binary field
wap-wsg-idm-ecid-wtls1: WTLS curve over a 113 bit binary field
wap-wsg-idm-ecid-wtls3: NIST/SECG/WTLS curve over a 163 bit binary field
wap-wsg-idm-ecid-wtls4: SECG curve over a 113 bit binary field
wap-wsg-idm-ecid-wtls5: X9.62 curve over a 163 bit binary field
wap-wsg-idm-ecid-wtls6: SECG/WTLS curve over a 112 bit prime field
wap-wsg-idm-ecid-wtls7: SECG/WTLS curve over a 160 bit prime field
wap-wsg-idm-ecid-wtls8: WTLS curve over a 112 bit prime field
wap-wsg-idm-ecid-wtls9: WTLS curve over a 160 bit prime field
wap-wsg-idm-ecid-wtls10: NIST/SECG/WTLS curve over a 233 bit binary field
wap-wsg-idm-ecid-wtls11: NIST/SECG/WTLS curve over a 233 bit binary field
wap-wsg-idm-ecid-wtls12: WTLS curve over a 224 bit prime field
Oakley-EC2N-3:
IPSec/IKE/Oakley curve #3 over a 155 bit binary field.
Not suitable for ECDSA.
Questionable extension field!
Oakley-EC2N-4:
IPSec/IKE/Oakley curve #4 over a 185 bit binary field.
Not suitable for ECDSA.
Questionable extension field!

ECC keys are specific to curves.  Make a key for prime256v1 (aka secp256r1 or NIST P-256); it’s fairly standard (i.e., one of the NIST-recommended curves in FIPS 186, the Digital Signature Standard (DSS), as are several of the secp* curves).

rybolov@ryzhe:~$ openssl ecparam -out key.test.pem -name prime256v1 -genkey
rybolov@ryzhe:~$ cat key.test.pem
-----BEGIN EC PARAMETERS-----
BggqhkjOPQMBBw==
-----END EC PARAMETERS-----
-----BEGIN EC PRIVATE KEY-----
MHcCAQEEIGkhtOzaKTpxETF9VNQc7Nu7SMX5/klNvObBbJo/riKsoAoGCCqGSM49
AwEHoUQDQgAEXmD6Hz/c8rxVYe1klFTUVOxxKwT4nLRcOLREQnC5GL+qNayqx7d0
Q+yal6sVSk013EbJr9Ukw/aiQzbrlcU1VA==
-----END EC PRIVATE KEY-----

Make a public key.  This is poorly documented and I had to extrapolate from the RSA key generation process.
rybolov@ryzhe:~$ openssl ec -in key.test.pem -pubout -out key.test.pub
read EC key
writing EC key

rybolov@ryzhe:~$ cat key.test.pub
-----BEGIN PUBLIC KEY-----
MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEXmD6Hz/c8rxVYe1klFTUVOxxKwT4
nLRcOLREQnC5GL+qNayqx7d0Q+yal6sVSk013EbJr9Ukw/aiQzbrlcU1VA==
-----END PUBLIC KEY-----

Make a test message:
rybolov@ryzhe:~$ echo "destroy all monsters" > msg.test
rybolov@ryzhe:~$ cat msg.test
destroy all monsters

Generate MD5, SHA-1, and SHA-256 hashes:

rybolov@ryzhe:~$ openssl dgst -md5 msg.test
MD5(msg.test)= a4a5e7ccfda28fdeb43697b6e619ed45
rybolov@ryzhe:~$ openssl dgst -sha1 msg.test
SHA1(msg.test)= 4d1d1b917377448a66b94e1060e3a4c467bae01c
rybolov@ryzhe:~$ openssl dgst -sha256 msg.test
SHA256(msg.test)= efd54922696e25c7fed4023b116882d38cd1f0e4dcc35e38548eae9947aedd23

Make a signature.  Note that every time you make an ECC (ECDSA) signature it will be different, because the algorithm uses a fresh random number for each signature; every one of them still verifies against the same key.
rybolov@ryzhe:~$ cat msg.test | openssl dgst -sha1 -sign key.test.pem -out test.sha1.sig

rybolov@ryzhe:~$ cat msg.test | openssl dgst -sha1 -sign key.test.pem
(raw binary signature, unprintable)
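To see the per-signature randomness for yourself, sign the same message twice and compare; a quick sketch assuming the key.test.pem and msg.test files from above (sig1.bin and sig2.bin are made-up names):

```shell
# Sign the same message twice; ECDSA's per-signature random nonce
# means the raw signatures should differ even though both verify.
openssl dgst -sha1 -sign key.test.pem -out sig1.bin msg.test
openssl dgst -sha1 -sign key.test.pem -out sig2.bin msg.test
cmp -s sig1.bin sig2.bin || echo "signatures differ"
```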

Make the signature readable/text by encoding it with Base64:
rybolov@ryzhe:~$ openssl enc -base64 -in test.sha1.sig
MEUCIGbR7ftdgICMZCGefKfd6waMvOM23DJo3S0adTvNH5tYAiEAuJ6Qumt83ZsL
sxDqJ1JNH7XzUl28M/eYf52ocMZgyrk=

rybolov@ryzhe:~$ openssl enc -base64 -in test.sha1.sig > test.sha1.sig.asc

rybolov@ryzhe:~$ wc -m test.sha1.sig.asc
98
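For a sense of scale, here is a rough comparison against RSA (rsa.test.pem and test.rsa.sig are made-up names; a 2048-bit RSA signature is always 256 bytes raw, roughly 350 base64 characters, versus the 98 characters above):

```shell
# Illustrative size comparison: RSA-2048 vs the prime256v1 ECDSA signature.
openssl genrsa -out rsa.test.pem 2048
openssl dgst -sha1 -sign rsa.test.pem -out test.rsa.sig msg.test
wc -c test.rsa.sig    # 256 bytes raw
wc -c test.sha1.sig   # ~71 bytes raw for prime256v1
```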

Validate the signature:
rybolov@ryzhe:~$ openssl dgst -sha1 -verify key.test.pub -signature test.sha1.sig msg.test
Verified OK

OpenSSL is dumb here: dgst -verify wants the raw binary (DER) signature and can’t read base64:
rybolov@ryzhe:~$ openssl dgst -sha1 -verify key.test.pub -signature test.sha1.sig.asc msg.test
Error Verifying Data
3077905144:error:0D0680A8:asn1 encoding routines:ASN1_CHECK_TLEN:wrong tag:tasn_dec.c:1320:
3077905144:error:0D07803A:asn1 encoding routines:ASN1_ITEM_EX_D2I:nested asn1 error:tasn_dec.c:382:Type=ECDSA_SIG

So we use openssl enc with the -d (decode) flag to turn the base64 back into binary:
rybolov@ryzhe:~$ openssl enc -base64 -d -in test.sha1.sig.asc -out test.sha1.sig.bin
rybolov@ryzhe:~$ cat test.sha1.sig.bin
(raw binary signature, unprintable)
rybolov@ryzhe:~$ openssl dgst -sha1 -verify key.test.pub -signature test.sha1.sig.bin msg.test
Verified OK

We can also do a prverify, which verifies the signature using the private key:
rybolov@ryzhe:~$ openssl dgst -sha1 -prverify key.test.pem -signature test.sha1.sig.bin msg.test
Verified OK

Now, to use this whole thing, you’ll need to concatenate the signature with the message and add a delimiter, or send two messages: one with the message, the other with the signature.  Any kind of special character like | ! ^ % works great as a delimiter, so something like this works:

MEUCIGbR7ftdgICMZCGefKfd6waMvOM23DJo3S0adTvNH5tYAiEAuJ6Qumt83ZsLsxDqJ1JNH7XzUl28M/eYf52ocMZgyrk=destroy all monsters

destroy all monsters|MEUCIGbR7ftdgICMZCGefKfd6waMvOM23DJo3S0adTvNH5tYAiEAuJ6Qumt83ZsLsxDqJ1JNH7XzUl28M/eYf52ocMZgyrk=
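On the receive side you would just reverse the process: split on the delimiter, base64-decode the signature back to binary, and verify.  A sketch assuming key.test.pem, key.test.pub, and msg.test from the steps above (PAYLOAD and the received.* file names are invented):

```shell
# Build a "message|signature" payload, then split it and verify.
# The -A flag keeps the base64 on a single line (and is needed to decode it).
PAYLOAD="destroy all monsters|$(openssl dgst -sha1 -sign key.test.pem msg.test | openssl enc -base64 -A)"

printf '%s\n' "${PAYLOAD%%|*}" > received.msg                    # text before the first |
printf '%s' "${PAYLOAD#*|}" | openssl enc -base64 -d -A > received.sig

openssl dgst -sha1 -verify key.test.pub -signature received.sig received.msg
```

This works because base64 never contains the | character, so splitting on the first | is unambiguous as long as the message itself contains none.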

Topics for further research:

I haven’t talked at all about key distribution.  This gets real hard real fast for the simple fact that you have to get an initial key to both ends of the conversation.  You can do key rotation in-band, but that first hookup is a logistical effort.  I’d be glad to hear ideas on this.

To get a smaller signature, use MD5 and secp112r1.  Normally you wouldn’t create digital signatures using MD5 (the US Government standard is moving to SHA-256), but it’s a tradeoff of paranoia/crackability against signature size.  You have to do each of the steps manually because OpenSSL’s built-in ECDSA signing objects only use SHA-1:

  • Hash the message
  • Sign the hash using the private key
  • Convert the signature to base64
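Those three steps might look something like this; treat it as a sketch, not gospel (the file names are invented, and pkeyutl is one way to sign a raw hash when dgst won’t combine MD5 with an EC key):

```shell
# Generate a secp112r1 key, hash with MD5, sign the raw hash, base64 the result.
openssl ecparam -out key.small.pem -name secp112r1 -genkey

# 1. Hash the message (raw binary digest)
openssl dgst -md5 -binary msg.test > msg.test.md5

# 2. Sign the hash using the private key
openssl pkeyutl -sign -in msg.test.md5 -inkey key.small.pem -out test.md5.sig

# 3. Convert the signature to base64 (one line, no wrapping)
openssl enc -base64 -A -in test.md5.sig
```

The resulting signature is roughly 34 bytes of DER, so somewhere around 46 base64 characters, versus 98 for prime256v1 with SHA-1.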

You can use the OpenSSL interactive prompt to save some keystrokes: just run openssl and enter commands at the OpenSSL> prompt.  You can also call OpenSSL as a C library, which should work nicely for embedded code.

I’m interested in building a comparison table of the following items; I just haven’t had time to build a script to gather all the data for me:

  • ECC Curve
  • Time to Compute a Signature
  • Size of Signature
  • Relative key and signature strength


Posted in Hack the Planet, NIST, Technical, What Works | 3 Comments »

When Standards Aren’t Good Enough

Posted May 22nd, 2009 by

One of the best things about being almost older than dirt is that I’ve seen several cycles within the security community.  Just like fashion and ladies’ hemlines, if you pay attention long enough, you’ll see history repeat itself, or something that closely resembles history.  Time for a short trip “down memory lane…”

In the early days of computer security, all eyes were fixed on Linthicum and the security labs associated with the NSA.  In the late 80s and early 90s the NSA evaluation program was notoriously slow; glacial would be a word one could use.  Bottom line, the process just wasn’t responsive enough to keep up with the changes and improvements in technology.  Products would be in evaluation for years before coming out of the process with their enabling technology nearly obsolete.  It didn’t matter; it was the only game in town until NIST and the Common Criteria labs came onto the scene.  This has worked well; however, the reality is that it’s not much better at vetting and moving technology from vendors to users.  The problem is that the evaluation process takes time, and time means money, but it also means that the code submitted for evaluation will most likely be several revisions old by the time it emerges from evaluation.  Granted, it may only be 6 months, or it might take a year; regardless, this is far better than before.

So, practically speaking: if the base version of FooOS submitted for evaluation is, say, Version 5.0.1, several later revisions (each solving operational problems affecting the organization) may have been released.  We may find that we need to run Version 5.6.10r3 in order to pass encrypted traffic via the network.  Because we encrypt traffic we must use FIPS 140-2 Level 2 validated code, but in the example above the validated version of FooOS will not work in our network.  What does the CISO do?  We’ll return to this in a moment; it gets better!

In order to reach levels of FIPS-140 goodness, one vendor in particular has instituted “FIPS Mode.”  What this does is require administration of the box from a position directly in front of the equipment, or at the length of your longest console cable.  Clearly, this is not suitable for organizations with equipment deployed worldwide to locations that do not have qualified administrators or network engineers.  Further, having to fly a technician to Burundi to clear sessions on a box every time it becomes catatonic is ridiculous at best; at worst, it’s not in accordance with the network concept of operations.  How does the CISO propose a workable, secure solution?


Standard Hill photo by timparkinson.

Now to my point.  (about time Vlad)   How does the CISO approach this situation?  Allow me to tell you the approach I’ve taken….

1. Accept the fact that once FooOS has achieved a level of FIPS-140 goodness, the modules of code within the OS implementing cryptographic functionality in follow-on versions have most likely not been changed.  This also means you have to assume the vendor has done a good job of documenting the changes to their baseline in their release notes, and that they HAVE modular code…

2. Delve into the vendor documentation and FIPS-140 to find out exactly what “FIPS Mode” is, its benefits, and its requirements.  Much of the written documentation in the standard deals with physical security of the cryptographic module itself (e.g., tamper-evident seals), but most helpful is Table 1.

Summary of Security Requirements from FIPS 140-2, Table 1 (requirements accumulate from Security Level 1 through Security Level 4):

Cryptographic Module Specification (Levels 1-4): Specification of cryptographic module, cryptographic boundary, Approved algorithms, and Approved modes of operation.  Description of cryptographic module, including all hardware, software, and firmware components.  Statement of module security policy.

Cryptographic Module Ports and Interfaces (Levels 1-2): Required and optional interfaces.  Specification of all interfaces and of all input and output data paths.  (Levels 3-4): Data ports for unprotected critical security parameters logically or physically separated from other data ports.

Roles, Services, and Authentication (Level 1): Logical separation of required and optional roles and services.  (Level 2): Role-based or identity-based operator authentication.  (Levels 3-4): Identity-based operator authentication.

Finite State Model (Levels 1-4): Specification of finite state model.  Required and optional states.  State transition diagram and specification of state transitions.

Physical Security (Level 1): Production grade equipment.  (Level 2): Locks or tamper evidence.  (Level 3): Tamper detection and response for covers and doors.  (Level 4): Tamper detection and response envelope.  EFP or EFT.

Operational Environment (Level 1): Single operator.  Executable code.  Approved integrity technique.  (Level 2): Referenced PPs evaluated at EAL2 with specified discretionary access control mechanisms and auditing.  (Level 3): Referenced PPs plus trusted path evaluated at EAL3 plus security policy modeling.  (Level 4): Referenced PPs plus trusted path evaluated at EAL4.

Cryptographic Key Management (Levels 1-4): Key management mechanisms: random number and key generation, key establishment, key distribution, key entry/output, key storage, and key zeroization.  (Levels 1-2): Secret and private keys established using manual methods may be entered or output in plaintext form.  (Levels 3-4): Secret and private keys established using manual methods shall be entered or output encrypted or with split knowledge procedures.

EMI/EMC (Levels 1-2): 47 CFR FCC Part 15, Subpart B, Class A (Business use).  Applicable FCC requirements (for radio).  (Levels 3-4): 47 CFR FCC Part 15, Subpart B, Class B (Home use).

Self-Tests (Levels 1-4): Power-up tests: cryptographic algorithm tests, software/firmware integrity tests, critical functions tests.  Conditional tests.

Design Assurance (Level 1): Configuration management (CM).  Secure installation and generation.  Design and policy correspondence.  Guidance documents.  (Level 2): CM system.  Secure distribution.  Functional specification.  (Level 3): High-level language implementation.  (Level 4): Formal model.  Detailed explanations (informal proofs).  Preconditions and postconditions.

Mitigation of Other Attacks (Levels 1-4): Specification of mitigation of attacks for which no testable requirements are currently available.

Bottom line: some “features” are indeed useful, but this one particular vendor’s one-size-fits-all implementation tends to limit the use of the feature at all in some operational scenarios (most notably, the one your humble author is dealing with).  BTW, changing vendors is not an option.

3. Upon analyzing the FIPS requirements against operational needs, and (importantly) the environment the equipment is operating in, one has to draw the line between “operating in vendor FIPS Mode,” and using FIPS 140-2 encryption.

4. Document the decision and the rationale.

Once again, security professionals have to help managers to strike a healthy balance between “enough” security and operational requirements.   You would think that using approved equipment, operating systems, and vendors using the CC evaluation process would be enough.  Reading the standard, we see the official acknowledgement that “Your Mileage May Indeed Vary:” TM

“While the security requirements specified in this standard are intended to maintain the security provided by a cryptographic module, conformance to this standard is not sufficient to ensure that a particular module is secure. The operator of a cryptographic module is responsible for ensuring that the security provided by a module is sufficient and acceptable to the owner of the information that is being protected and that any residual risk is acknowledged and accepted.”  FIPS 140-2 Sec 15, Qualifications

The next paragraph constitutes validation of the approach I’ve embraced:

“Similarly, the use of a validated cryptographic module in a computer or telecommunications system does not guarantee the security of the overall system. The responsible authority in each agency shall ensure that the security of the system is sufficient and acceptable.”  (Emphasis added.)

One could say, “it depends,” but you wouldn’t think so at first glance – it’s a Standard for Pete’s sake!

Then again, nobody said this job would be easy!

Vlad



Posted in Rants, Risk Management, Technical | 4 Comments »

FIPS and the Linux Kernel

Posted March 5th, 2009 by

Recently I was building a new kernel for my firewall and noticed an interesting new option in the Cryptographic API: “FIPS 200 compliance”.

You can imagine how very interesting and somewhat confusing this is to a stalwart FISMA practitioner. Reading through FIPS 200 it’s hard to find mention of cryptography, much less a technical specification that could be implemented in the Linux kernel. FIPS 140, FIPS 197, FIPS 186, FIPS 46 and FIPS 180 standards would be natural fits in the Cryptographic API but FIPS 200? The kernel help description didn’t clear things up:

CONFIG_CRYPTO_FIPS:

This options enables the fips boot option which is
required if you want to system to operate in a FIPS 200
certification. You should say no unless you know what
this is.

Symbol: CRYPTO_FIPS [=n]
Prompt: FIPS 200 compliance
Defined at crypto/Kconfig:24
Depends on: CRYPTO
Location:
-> Cryptographic API (CRYPTO [=y])
Selected by: CRYPTO_ANSI_CPRNG && CRYPTO

Given that examining the kernel code was a little beyond my ken, and I couldn’t test to discover what it did, I turned to the third of the 800-53A assessment methods: interview. A little digging on kernel.org turned up the man behind this kernel magic, Neil Horman. He was able to shed some light on what is called the fips_enabled flag.

As it turns out the FIPS 200 compliance function wasn’t as exciting as I’d hoped but it does point to interesting future possibilities.

So what does it do? In the words of Neil Horman, it is a “flag for determining if we need to be operating in some fips_compliant mode (without regard to the specific criteria)”. This means it is sort of a place holder for future developments so the kernel can operate in a mode that uses a FIPS 140-2 cryptographic module.
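On a running system you can check for this flag from userspace; a quick sketch (the flag is normally switched on by booting with fips=1 on the kernel command line):

```shell
# 1 = kernel crypto is in FIPS mode, 0 = not; the file only exists when the
# kernel was built with CONFIG_CRYPTO_FIPS.
if [ -e /proc/sys/crypto/fips_enabled ]; then
    cat /proc/sys/crypto/fips_enabled
else
    echo "kernel built without CONFIG_CRYPTO_FIPS"
fi
```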

Did you notice the word that wasn’t included in the last paragraph? Validated. Yes, there are no validated cryptographic modules in the Linux upstream kernel. If you look at the kernel’s Cryptographic API you will find listed the “AES cipher algorithms” and “DES and Triple DES EDE cipher algorithms”. These may be compliant with FIPS standards but they are not validated.

This raises the question: why have a FIPS 200 compliance flag if you can’t meet the FIPS 140-2 requirement? This is the interesting part. Let’s say a distro decides it wants to become very FISMA-friendly and get its kernel’s FIPS 140-2 cryptographic module validated. Well, if the validation of the OpenSSL VCM is an apt example, the distro’s Linux kernel will need to operate in a FIPS-compliant mode to verifiably load the cryptographic module. So the inclusion of the fips_enabled flag enables future compliance.

Sadly, any single Linux distro getting its cryptographic module validated is unlikely to translate into the upstream kernel having a validated cryptographic module. If you look at the catalog of FIPS 140-2 VCMs, the modules are only validated for particular code versions and operating modes. As the upstream kernel code won’t see the revisions made by the downstream distro to achieve the VCM until after the VCM is issued, it doesn’t inherit the validation.

Polyester Resin Kernel photo by  Marshall Astor – Food Pornographer.

Two possible scenarios were discussed with Neil to allow for upstream Linux kernel incorporation of a VCM.

The first scenario would be that the upstream kernel gets all the revisions made by the downstream distro to gain the VCM designation. It then goes through the process to gain the VCM itself. Unfortunately as the code is under constant revision and can’t be locked as soon as a revision was committed to the code base the VCM would be invalidated. Only a particular build of the Linux kernel could claim to be validated.

The second scenario would be a revision to the Linux kernel that allowed for the downstream’s Linux distro’s VCM to be loaded instead of the standard Linux Cryptographic API. When asked about this scenario Neil had this to say:

“That said, theres no reason the crypto api couldn’t be ripped out and replaced with a different implementation, one that is maintained independently and its certification kept up. Of course, anyone so doing would need to keep up with the pace of kernel development, and that in turn brings the need for recertification, so its rather a lost effort in my opinion. I certainly wouldn’t recommend doing so, its just too much work.”

So the solution would either be short lived and costly or long lived and insecure.

Sadly, this means that there is no easy way to include a FIPS 140-2 VCM within the upstream Linux kernel. But each distro can modify its Cryptographic API and validate a cryptographic module to allow for FIPS 200 compliance. With the FIPS 200 compliance flag now in the Linux kernel it is possible for this to be verified. And that’s a happy thought for Federal Linux users.

My many thanks to Neil Horman, without whom I’d have nothing to write.



Posted in FISMA, Technical | No Comments »

