Stress-Test Apache with Intent to Tune: BSOFH Tip for the Software Masochist

Posted August 28th, 2009

So I’ve been having problems with my server for a month or so: periodically the number of apache processes would skyrocket and the box would get so overloaded (load average around 50) that I couldn’t even run simple commands on it.  I would have to get into the hardware console and give the box a hard boot (a graceful reboot wouldn’t work).

Root cause is I’m a dork, but more about that later.

Anyway, I needed a way to troubleshoot and fix it.  The biggest problem was that the issue was sporadic: sometimes it would be two weeks between crashes, other times it would crash three times in one day.  This was begging for a stress test.  Looking around the Internet, I found a couple of articles about running a load-tester against apache, and information on the tuning settings, but not much about a methodology (yeah, yeah, I work for a Big 4 firm; the word still makes me shudder even though it’s the right one to use here) for actually solving the problem of apache tuning.

So the “materials” I needed:

  • One server running apache.  Mine runs Apache2 under Debian Stable.  This is a little different from the average distro in that the process is apache2 and the control command is apache2ctl, where normally you would have httpd and apachectl.  If you try this at home on another distro, substitute those commands.
  • An apache tuning guide or 3.  Here’s the simplest/most straightforward one I’ve seen.
  • A stress-tester.  Siege is awesome for this.
  • Some simple shell commands: htop (top works here too), ps, grep, and wc.

Now for the method to my madness…

I ssh into my server using three different sessions.  In one I run htop.  Htop is a version of top that gives you colored output and supports multiple processors.  Without stress-testing, the output looks something like this:

[Screenshot: htop output with apache at idle]

I keep one session free to edit files and do an emergency “killall apache2” if things get out of hand (and they will, really quickly; I had to pull the plug about 20 times throughout this process).   In the third session I run a simple command to count how many apache processes I have running:

rybolov@server:~$ ps aux | grep apache2 | grep start | wc -l
11

OK, so far so good.  I’ve got 11 processes running with no load and RAM usage of 190MB.  I need the extra “grep start” because it filters out the text editor I have open on apache2.conf and anything else I might be doing in the background (the real server processes all show up as apache2 -k start).

I also killed apache, waited 10 seconds, and looked at the typical RAM use.  With no apache running, I use about 80MB just for the OS and everything else I’m running.  That means 110MB of RAM for the 11 apache processes, or ~10MB of RAM per process.  Now that’s something important I can use.
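
If you don’t trust the back-of-the-napkin math, here’s a rough sketch that measures it directly (assuming Linux, where “ps aux” reports RSS in kilobytes in column 6; note that prefork children share copy-on-write pages, so treat the average as a ceiling):

# Rough sketch: average resident memory of the running apache2 processes.
# Assumes Linux "ps aux" output, where RSS is column 6 in kilobytes.
ps aux | grep apache2 | grep start | \
  awk '{sum += $6; n++} END {printf "%d procs, ~%.0fMB each\n", n, sum/n/1024}'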

I took my tuning settings in apache2.conf (httpd.conf for most distros) and set them to the defaults listed in the tuning guide.  (My Apache2 uses the prefork MPM, which serves each connection from a separate child process; read the tuning guide for more info.)  They became something like the following:

<IfModule prefork.c>
  StartServers            8
  MinSpareServers         5
  MaxSpareServers        20
  MaxClients            150
  MaxRequestsPerChild  1000
</IfModule>

Notice how MaxClients is set at 150?  This will prove to be my downfall later.  Turns out that my server is RAM-poor for how much processor it has, or WordPress is a RAM hog (or both, which is the case =) ).  I’ll eventually upgrade my server, but since it’s a cloud server from Mosso, I pay by the RAM and drive space.

After each edit of apache2.conf, you need to give apache a configuration test and a restart:

server:~# apache2ctl configtest
Syntax OK                        <- If something else comes back, fix it!!
server:~# apache2ctl restart

I’m now ready to stress-test the default setup.  This is the awesome part.  First, I need to simulate a load, so I make a URL seedfile for siege to bounce around between a handful of pages: a file called siege.urls.txt with a collection of URLs that looks like the following:

http://www.guerilla-ciso.com/
http://www.guerilla-ciso.com/about
http://www.guerilla-ciso.com/contact
http://www.guerilla-ciso.com/papers-and-presentations
....<about 20 lines deleted here, you get the point>
http://www.guerilla-ciso.com/page/2
http://www.guerilla-ciso.com/page/3
http://www.guerilla-ciso.com/page/4

I’m sure there is an efficient and fun way to make this, like, say, a text-only sitemap or sproxy (made by the same guy who wrote siege), but since I only needed about 30 URLs, I just cut-n-pasted them off the blog homepage.
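
For the script-minded, something like this would scrape the homepage into a seedfile.  This is a sketch only; it assumes wget is installed, that your grep is a GNU grep built with PCRE support (-oP), and that the links on the page are absolute hrefs:

# Sketch: harvest same-site links from the homepage into siege.urls.txt.
# Assumes wget plus GNU grep with -oP; adjust the domain for your own site.
wget -q -O - http://www.guerilla-ciso.com/ \
  | grep -oP 'href="\Khttp://www\.guerilla-ciso\.com/[^"]*' \
  | sort -u > siege.urls.txt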

I fire up siege and give my webserver a thorough drubbing: 50 concurrent connections for 10 minutes, using my URL seedfile.  BTW, I’m running siege on the webserver itself, so network latency isn’t in the way.  <enter sinister laugh of evil as I sadistically torture my apache and the underlying OS>

server:~# siege -c 50 -t 600s -f siege.urls.txt
** SIEGE 2.66
** Preparing 50 concurrent users for battle.    <-The guy writing siege has a wicked sense of humor.
The server is now under siege...                <-Man the ramparts, Apache, they're coming for you!
HTTP/1.1 200   1.08 secs:   16416 bytes ==> /
HTTP/1.1 200   1.07 secs:   16416 bytes ==> /
....<about 2 bazillion lines deleted here, you get the idea>
HTTP/1.1 200   4.66 secs:    8748 bytes ==> /about
HTTP/1.1 200   3.92 secs:    8748 bytes ==> /about
Lifting the server siege...      done.

Transactions:                  61 hits   <-No, this isn't actual, I abbreviated the siege output
Availability:              100.00 %      <-with a ctrl-c just to get some results so I didn't
Elapsed time:                6.70 secs   <-have to scroll through all that output from the real test.
Data transferred:            0.87 MB
Response time:                3.27 secs
Transaction rate:            9.10 trans/sec
Throughput:                0.13 MB/sec
Concurrency:               29.75
Successful transactions:          61
Failed transactions:               0
Longest transaction:            5.61
Shortest transaction:            1.07

Now I watch the output of htop.  Under stress, the output looks something like this:

[Screenshot: htop output under siege load]

Hmm, looks like I have a ton of apache processes soaking up all my RAM.  What happens is that within about 30 seconds the OS starts swapping, and swap use keeps growing until the OS is unresponsive.  This is a very interesting cascade failure: writing to swap itself incurs load, which makes the OS swap even more.  So I need to limit either the amount of RAM used per apache process or the maximum number of processes that apache spawns.  The tuning guide tells us how…

There is one setting that matters most in tuning apache: MaxClients.  This is the maximum number of simultaneous requests apache will serve: child processes under the prefork MPM, or total threads under the worker MPM.  Looking at my apache tuning guide, I get a wonderful formula: ($SizeOfTotalRAM – $SizeOfRAMForOS) / $RAMUsePerProcess = MaxClients.  So in my case, (512 – 80) / 10 = ~43.  Oops, this is a far cry from the 150 that comes as default.  I also know that the RAM-per-process number I measured was with no load on apache, so under load and generating dynamic content (aka WordPress), I’ll probably use ~15MB per process.
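
Here’s the same formula as a throwaway script, sketch only; the 80MB and 15MB figures are my own measurements from above, so substitute yours:

# Sketch of the MaxClients formula; Linux-only because of /proc/meminfo.
TOTAL_MB=$(awk '/^MemTotal/ {print int($2/1024)}' /proc/meminfo)
OS_MB=80         # RAM in use with apache stopped (measured earlier)
PER_PROC_MB=15   # per-process estimate under load, not at idle
echo "MaxClients should be roughly $(( (TOTAL_MB - OS_MB) / PER_PROC_MB ))"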

One other trick I can use: since I think what’s killing me is the number of apache processes, I can run siege with a reduced number of simultaneous connections and watch htop.  When htop shows that the box has just started to write to swap, I run my ps command to find out how many apache processes are running.

rybolov@server:~$ ps aux | grep apache2 | grep start | wc -l
28

Now this is about what I expected: with 28 processes going, I tipped over into swap.  Reversing my tuning formula, I get (28 processes x 15MB/process) + 80MB for the OS = 500MB used.  This makes sense, since the OS starts swapping when I use ~480MB of RAM.
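
If you’d rather not eyeball three terminals at once, a sketch like this (assuming the stock procps watch and free) polls both numbers for you while siege runs:

# Sketch: every 5 seconds, show the apache2 process count plus RAM/swap use.
watch -n 5 'echo -n "apache2 procs: "; ps aux | grep apache2 | grep start | wc -l; free -m'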

So I go back to my prefork module tuning.

<IfModule mpm_prefork_module>
  StartServers          8
  MinSpareServers       5
  MaxSpareServers      10
  MaxClients           25
  MaxRequestsPerChild 2000
</IfModule>

I set MaxClients at 25 because 28 seems to be the tipping point; that gives me a little bit of “wiggle room” in case something else happens while I’m serving under a huge load.  I also tweaked some of the other settings slightly.

Then it’s time for another siege torture session.  I run the same command as above and watch the htop output.  With the tuning settings I have now, the server dips about 120MB into swap and survives the full 10 minutes.  I’m sure performance is somewhat degraded by going into swap, but I’m happy with it for now because the server stays alive.  It wasn’t all that smooth; I did a little bit of trial and error first, starting with MaxClients 25 and working my way up to 35 under a reduced siege load (-c 25 -t 60s) to see what would happen, then increasing the siege load (-c 50 -t 600s) and ratcheting MaxClients back down to 25.
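
If you want to script the ratcheting instead of doing it by hand, a quick sketch (the concurrency steps are just the ones I bounced between, so tune to taste):

# Sketch: step up siege concurrency in short runs while watching htop elsewhere.
for c in 25 30 35 40 50; do
  echo "=== siege with $c concurrent users ==="
  siege -c "$c" -t 60s -f siege.urls.txt
done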

And as far as me being a dork… well, aside from the huge MaxClients setting (that’s the default, don’t blame me), I had set MaxRequestsPerChild to 100 instead of 1000, meaning that after every 100 HTTP requests a child would exit and apache would fork a new one.  Under load, all that process churn leads straight to cascade failure. (duh!)




A Short History of Cyberwar Lookalikes

Posted June 17th, 2009

Rybolov’s Note: Hello all, I’m venturing into an open-ended series of blog posts aimed at starting conversation. Note that I’m not selling anything *yet*, just ideas and maybe some points for discussion.

Let’s get this out there from the very beginning: I agree with Ranum that full-scale, nation-vs-nation cyberwar is not a reality.  Not yet anyway, and hopefully it never will be.  However, on a smaller scale and with well-defined objectives, cyberwar is not only happening now, it is also a natural progression of the past century.

[Video: DojoSec Monthly Briefings – March 2009 – Marcus J. Ranum, from Marcus Carey on Vimeo]

Looking at the existing models and techniques for activities similar to cyberwar frames our present state very nicely:

Electronic Countermeasures. This has been happening for some time.  The first recorded use of electronic countermeasures (ECM) was in 1905, when the Russians tried to jam the radio signals of the Japanese fleet besieging Port Arthur.  If you think of ECM as denial of service over radio, sonar, etc, then cyberwar is just an extension of the same denial of communications that we’ve been practicing since communication was “invented”.

Modern Tactical Collection and Jamming. This is where Ranum’s point about spies and soldiers falls apart, mostly because we don’t have clandestine operators doing electronic collection at the tactical level: the same people do both collection and “attack”.  The typical battle flow goes something like this: scan for items of interest, collect on a specific target, then jam once hostilities have begun.  Doctrinally, collection is called Electronic Support and jamming is called Electronic Attack.  What you can expect in a cyberwar is an extended period of reconnaissance and surveillance followed by “direct action” during other “kinetic” hostilities.

Radio Station Jamming. This is a wonderful little world that most of you never knew existed.  The Warsaw Pact used to jam Voice of America and the other sorts of fun propaganda that we would send at them.  Apparently we’ve had some interesting radio jamming since the end of the Cold War, with China, Cuba, North Korea, and South Korea implicated to some degree or another.

Website Denial-of-Service. Since only old people listen to radio anymore and most news is on the Internet, it makes sense to DOS news sites with an opposing viewpoint.  This happens all the time, with attacks ranging from script kiddies doing ping floods to massive DOSBots and some kind of racketeering action… “You got a nice website, it would be pretty bad if nobody could see it.”  Makes me wonder why the US hasn’t taken Al Jazeera off the Internet.  Oh, that’s right, somebody already tried it.  However, in my mind, jamming something like Al Jazeera is very comparable to jamming Voice of America.

Estonia and Georgia DOS. These worked pretty well from a denial-of-communications standpoint, but only because of the size of the targets.  And so what if it did block the Internet?  When it comes to military forces, that’s at best an annoyance; at most it slows you down just enough.  Going back to radio jamming: blocking out a signal only works when you have more network to throw at the target than the target has network to communicate with the other end.  Believe it or not, there are calculators to determine this.

Given this evolution of communications denial, it’s not unthinkable that people would launch electronic attacks at each other via radar, radio, carrier pigeon, IP, or any other medium they can.

However, as with the previous precedents, and more to some of the points of Ranum’s talk at DojoSec, electronic attacks by themselves achieve only limited objectives.  The most likely type of attack is a physical attack that uses the electronic attack, whether against radio, radar, or IT assets, to delay the enemy’s response.  This is why you have to take an electronic attack seriously if it’s launched by a country with a military capable of attacking you physically: it might be just a jamming attack, or it might be the precursor to an invasion.

Bottom line here is this: if you use it for communication, it’s a target and has been for some time.




Working with Interpreters, a Risk Manager’s Guide

Posted June 3rd, 2009

So how does the Guerilla-CISO staff communicate with the locals on jaunts to foreign lands such as Delaware, New Jersey, and Afghanistan?  The answer is simple: we use interpreters, known in infantrese as “terps”.  Yes, you might not trust them deep down inside, because they harbor loyalties so complex that you could spend the rest of your life figuring them out, but you can’t do the job without them.

But in remembering how we used our interpreters, I’m reminded of some basic concepts that might be transferable to the IT security and risk management world.  Or maybe not; at least kick back and enjoy the storytelling while it’s free. =)

Know When to Treat Them Like Mushrooms: And by that, we mean “keep them in the dark and feed them bullsh*t”.  What we really mean is to tell potentially adversarial people the least amount of information they need to do their job, in order to limit the frequency and impact of them doing something nasty.  When you’re planning a patrol, the worst way to ruin your week is to tell the terps when you’re leaving and where you’re going.  That way, they can call their Taliban friends when you’re not looking and they’ll have a surprise waiting for you.  No, it won’t be a birthday cake.  The way I would get a terp is that one would be assigned to me by our battalion staff, and the night before the patrol I would tell that specific terp that we were leaving in the morning, give them a time that I would come by to check up on them, and tell them to bring enough gear for 5 days.  Before they got into my vehicles and we rolled away, I would look through their gear to make sure they didn’t have any kind of communications device (radio or telephone) to let their buddies know where we were.

Fudge the Schedule to Minimize Project Risk: Terps–even the good ones–are notorious for being on “local time”, which for a patrol means one hour later than you told them you were leaving.  The good part about this is that it’s way better than true local time, which has a margin of error of a week and a half.  In order to keep from being late, always tell the terps when you’ll need them an hour and a half before you really do, then check up on them every half hour or so.  Out on patrol, I would cut that margin down to half an hour because they didn’t have all the typical distractions to make them late.

Talk Slowly, Avoid Complex Sentences: The first skill to learn when using terps is to say things that their understanding of English can handle.  When they’re doing their job for you, simple sentences work best.  I know I’m walking down the road of heresy, but this is where quantitative risk assessment done poorly doesn’t work for me, because now I have something that’s entirely too complex to interpret to the non-IT crowd.  In fact, it probably is worse than no risk assessment at all because it comes across as “consultantspeak” with no tangible link back to reality.

Put Your Resources Where the Greatest Risk Is: To a vehicle patrol out in the desert, most of the action happens at the front of the patrol.  That’s where you need a terp.  That way, the small stuff, such as asking a local farmer to move his goats and sheep out of the road so you can drive through, stays small–without a terp up front, a 2-minute conversation becomes 15 minutes of hassle as you first have to get the terp up to the front of the patrol then tell them what’s going on.

Pigs, Chicken, and Roadside Bombs: We all know the story about how in the eggs and bacon breakfast, the chicken is a participant but the pig is committed.  Well, when I go on a patrol with a terp, I want them to be committed.  That means riding in the front vehicle with me.  It’s my “poison pill” defense in knowing that if my terp tipped off the Taliban and they blow up the lead vehicle with me in it, at least they would also get the terp.  A little bit of risk-sharing in a venture goes a long way at getting honesty out of people.

Share Risk in a Culturally-Acceptable Way: Our terps would balk at the idea of riding in the front vehicle most of the time.  I don’t blame them; it’s the vehicle most likely to be turned into 2 tons of slag metal thanks to pressure plates hooked up to IEDs.  The typical American response is something along the lines of “It’s your country, you’re riding up front with me so if I get blown up, you do too”.  Yes, I share that ideal, but the Afghanis don’t understand country loyalties; the only things they understand are their tribe, their village, and their family.  The Guerilla-CISO method here is to get down inside their heads by saying “Come ride with me; if we die, we die together like brothers”.  You’re saying basically the same thing, but you’re framing it in a cultural context that they can’t say no to.

Reward People Willing to Embrace Your Risks: One of the ways I was effective in dealing with the terps was that I would check in occasionally to see if they were doing alright during down-time from missions.  They would show me some Bollywood movies dubbed into Pashto; I would give them fatty American foods (Little Debbie FTW!).  They would play their music.  I would make fun of their music and amaze them because they never figured out how I knew that the song had drums, a stringed instrument, and somebody singing (hey, all their favorite songs have that).  They would share their “foot bread” (the bread is stamped flat by people walking on it before it’s cooked; I was too scared to ask if they washed their feet first) with me.  I would teach them how to say “Barbara (their assignment scheduler back on an airbase) was a <censored> for putting them out in the middle of nowhere on this assignment” and other savory phrases.  These forays weren’t for my own enjoyment, but to build rapport with the terps so that they would understand when I gave them some risk management love, Guerilla-CISO style.

Police, Afghan Army and an Interpreter photo by ME!  The guy in the baseball cap and glasses is one of the best terps I ever worked with.




Blow-By-Blow on S.773–The Cybersecurity Act of 2009–Part 2

Posted April 16th, 2009

Rybolov Note: this is part 2 in a series about S.773.  Go read the bill here. Go read part 1 here. Go read part 3 here. Go read part 4 here. Go read part 5 here. =)

SEC. 7. LICENSING AND CERTIFICATION OF CYBERSECURITY PROFESSIONALS. This section has received quite a bit of airtime around the blagosphere.  Everybody thinks they’ll need some kind of license from the Federalies to run Nessus.  Hey, maybe this is how it will all end up, but I think this provision will end up stillborn.

I know the NIST folks have been working on licensing and certification for some time, but they usually run into the same problems:

  • Do we certify individuals as cybersecurity professionals?
  • Do we certify organizations as cybersecurity service providers?
  • What can the Government do above and beyond what the industry provides? (ISC2, SANS, 27001, etc)
  • NIST does not want to be in the business of being a licensure board.

Well, here are my answers (I don’t claim that these are my opinions):

  • Compulsory: the Government can require certifications/licensure for certain job requirements.  Right now this is managed by HR departments.
  • Existing Precedent: We’ve been doing this for a couple of years with DoDI 8570.01M, which is mandatory for DoD contracts.  As much as I think industry certification is a pyramid scheme, it makes sense in Government contracting: if the Government won’t pay for contractor training (and it shouldn’t), and the contractor won’t pay for employee training because their turnover rate is 50% in a year, mandatory certification is the only way to ensure some kind of training and professionalization of the security staff.  Does this scale to the rest of the country?  I’m not sure.
  • Governance and Oversight: The security industry has too many different factions.  A Government-ran certification and license scheme would provide some measure of uniformity.

Honestly, this section of the bill might make sense (it opens up a bigger debate) except for one thing:  we haven’t defined what “Cybersecurity Services” are.  Let’s face it, most of what we think of as “security” services are really basic IT management services… why should you need a certification to be the goon on the change control board?  However, this does solve the “problem” of hackers who turn into “researchers” once they’re caught doing something illegal.  I just don’t see this as that big of a problem.

Verdict: Strange that this isn’t left up to industry to handle.  It smells like lobbying by somebody in ISC2 or SANS to generate a higher demand for certs.  Unless this section is properly scoped and extensively defined, it needs to die on the cutting room floor–it’s too costly for almost no value above what industry can provide.  If you want to provide the same effect with almost no cost to the taxpayers, consider something along the 8570.01 approach in which industry runs the certifications and specific certifications are required for certain job titles.

SEC. 8. REVIEW OF NTIA DOMAIN NAME CONTRACTS. Yes, there is a bunch of drama-llama-ing going on between NTIA, ICANN, Verisign, and a cast of a thousand.  This section calls for a review of DNS contracts by the Cybersecurity Advisory Panel (remember them from section 3?) before they are approved.  Think managing the politics of DNS is hard now?  It just got harder–you ever try to get a handful of security people to agree on anything?  And yet, I’m convinced that either this needs to happen or NTIA needs to get some clueful security staffers who know how to manage contracts.

Verdict: DNSSEC is trendy thanks to Mr. Kaminsky.  I hate it when proposed legislation is trendy.  I think this provision could be axed from the bill if NTIA had the authority to review the security of its own contracts.  Maybe this could be a job for the Cybersecurity Advisor instead of the Advisory Panel?

SEC. 9. SECURE DOMAIN NAME ADDRESSING SYSTEM. OK, the Federal Government has officially endorsed DNSSEC thanks to some OMB mandates.  Now the rest of the country can play along.  Seriously, though, this bill has some scope problems, but basically what we’re saying is that Federal agencies and critical infrastructure will be required to implement DNSSEC.

Once again, though, we’re putting Commerce in charge of the DNSSEC strategy.  Commerce should only be on the hook for the standards (NIST) and the changes to the root servers (NTIA).  For the Federal agencies, this should be OMB in charge.  For “critical infrastructure”, I believe the most appropriate proponent agency is DHS because of their critical infrastructure mission.

And as for the rest of you, well, if you want to play with the Government or critical infrastructure (like the big telephone and network providers), it would behoove you to get with the DNSSEC program because you’re going to be dragged kicking and screaming into this one.  Isn’t the Great InfoSec Trickle-Down Effect awesome?

Verdict: If we want DNSSEC to happen, it will take an act of Congress because the industry by itself can’t get it done–too many competing interests.  Add more tasks to the agencies outside of Commerce here, and it might work.

Awesome Capitol photo by BlankBlankBlank.

SEC. 10. PROMOTING CYBERSECURITY AWARENESS. Interesting in that this is tasked to Commerce, meaning that the focus is on end-users and businesses.

In a highly unscientific, informal poll with a limited sample of security twits, I confirmed that nobody has ever heard of Dewie the Webwise Turtle.  Come on, guys, “Safe at any speed”, how could you forget that?  At any rate, this already exists in some form, it just has to be dusted off and get a cash infusion.

Verdict: Already exists, but so far efforts have been aimed at users.  The following populations need awareness: small-medium-sized businesses (SMBs), end-users, owners of critical infrastructure, technology companies, software developers.  Half of these are who DHS is dealing with, and this provision completely ignores DHS’s role.

SEC. 11. FEDERAL CYBERSECURITY RESEARCH AND DEVELOPMENT. This section is awesome to read: it adds new types of research that NSF can fund and extends funding for the existing types.  It’s pretty hard to poke holes in, and based on back-of-the-envelope analysis, there isn’t much missing by way of topics that should be added to the research priorities.  What I would personally like to see is a better audit system, one not designed around the accounting profession’s way of doing things.  =)

Verdict: Keep this section intact.  If we don’t fund this, we will run into problems 10+ years out–some would say we’re already running into the limitations of our current technology.

SEC. 12. FEDERAL CYBER SCHOLARSHIP-FOR-SERVICE PROGRAM. This is an existing program, and it’s pretty good.  Basically you get a scholarship with a Government service commitment after graduation.  Think of it as ROTC-light scholarships without bullets and trips to SW Asia.

Verdict: This is already there.  This section of the bill most likely is in to get the program funded out to 2014.




Blow-By-Blow on S.773–The Cybersecurity Act of 2009–Part 1

Posted April 14th, 2009

Rybolov Note: this is such a long blog post that I’m breaking it down into parts.  Go read the bill here. Go read part 2 here. Go read part 3 here. Go read part 4 here. Go read part 5 here. =)

So the Library of Congress finally got S.773 up on http://thomas.loc.gov/.  For those of you who have been hiding under a rock, this is the Cybersecurity Act of 2009 and is a bill introduced by Senators Rockefeller and Snowe and, depending on your political slant, will allow us to “sock it to the hackers and send them to a federal pound-you-in-the-***-prison” or “vastly erode our civil liberties”.

A little bit of pre-reading is in order:

Timing: Now let’s talk about the timing of this bill.  There is the 60-day Cybersecurity Review that is supposed to be coming out Real Soon Now (TM).  This bill is an attempt by Congress to head it off at the pass.

The rumor mill says not only that the Cybersecurity Review will be unveiled at RSA (possible, but strange) but also that it won’t bring anything new to the debate (more likely; then again, nothing’s really new, we’ve known about this stuff for at least a decade).

Overall Comments:

This bill is big.  It really is an omnibus Cybersecurity Act and has just about everything you could want and more.  There’s a fun way of doing things in the Government, and it goes something like this: ask for 300% of what you need so that you will end up with 80%.  And I see this bill is taking this approach to heart.

Pennsylvania Ave – Old Post Office to the Capitol at Night photo by wyntuition.

And now for the good, bad, and ugly:

SEC. 2. FINDINGS. This section is primarily a summary of testimony that has been delivered over the past couple of years.  It really serves as justification for the rest of the bill.  It leans a little toward FUD (as in “omigod, they put ‘Cyber-Katrina‘ in a piece of legislation”), but overall it’s pretty balanced and what you would expect for a bill.  Bottom line here is that we depend on our data and the networks that carry it.  Even if you don’t believe in cyberwar (I don’t really believe in cyberwar unless it’s just one facet of combined-arms warfare), you can probably agree that the costs of insecurity on a macroeconomic scale need to be examined and defended against, and our dependency on the data and networks is only going to increase.

No self-respecting security practitioner will like this section, but politicians will eat it up.  Relax, guys, you’re not the intended audience.

Verdict: Might as well keep this in there, it’s plot development without any requirements.

SEC. 3. CYBERSECURITY ADVISORY PANEL. This section creates a Cybersecurity Advisory Panel made up of Federal Government, private sector, academia, and state and local government.  This is pretty typical so far.  The interesting thing to me is “(7) whether societal and civil liberty concerns are adequately addressed”… in other words, are we balancing security with citizens’, corporations’, and states’ rights?  More to come on this further down in the bill.

Verdict: Will bring a minimal cost in Government terms.  I’m very hesitant to create new committees.  But yeah, this can stay.

SEC. 4. REAL-TIME CYBERSECURITY DASHBOARD. This section is very interesting to me.  On one hand, it’s what we do at the enterprise level for most companies.  On the other hand, this is specific to the Commerce Department: “Federal Government information systems and networks managed by the Department of Commerce.”  The first reading of this is the networks internal to Commerce, but then why is this not handed down to all agencies?  I puzzled on this and did some research until I remembered that Commerce, through NTIA, runs DNS, and Section 8 contains a review of the DNS contracts.

Verdict: I think this section needs a little bit of rewording so that the scope is clearer, but sure, a dashboard is pretty benign, it’s the implied tasks to make a dashboard function (ie, proper management of IT resources and IT security) that are going to be the hard parts.  Rescope the dashboard and explicitly say what kind of information it needs to address and who should receive it.

SEC. 5. STATE AND REGIONAL CYBERSECURITY ENHANCEMENT PROGRAM. This section calls for Regional Cybersecurity Centers, something along the lines of what we call “Centers of Excellence” in the private sector.  This section is interesting to me, mostly because of how vague it seemed the first time I read it, but the more times I look at it, I go “yeah, that’s actually a good idea”.  What this section tries to do is to bridge the gap between the standards world that is NIST and the people outside of the beltway–the “end-users” of the security frameworks, standards, tools, methodologies, what-the-heck-ever-you-want-to-call-them.  Another interesting thing about this is that while the proponent department is Commerce, NIST is part of Commerce, so it’s not as left-field as you might think.

Verdict: While I think this section is going to take a long time to come to fruition (5+ years before any impact is seen), I see that Regional Cybersecurity Centers, if properly funded and executed, can have a very significant impact on the rest of the country.  It needs to happen, only I don’t know what the cost is going to be, and that’s the part that scares me.

SEC. 6. NIST STANDARDS DEVELOPMENT AND COMPLIANCE. This is good.  Basically this section provides a mandate for NIST to develop a series of standards.  Some of these have been sitting around for some time in various incarnations, I doubt that anyone would disagree that these need to be done.

  1. CYBERSECURITY METRICS RESEARCH:  Good stuff.  Yes, this needs help.  NIST are the people to do this kind of research.
  2. SECURITY CONTROLS:  Already existing in SP 800-53.  Depending on interpretation, this changes the scope and language of the catalog of controls to non-Federal IT systems, or possibly a fork of the controls catalog.
  3. SOFTWARE SECURITY:  I guess if it’s in a law, it has come of age.  This is one of the things that NIST has wanted to do for some time but they haven’t had the manpower to get involved in this space.
  4. SOFTWARE CONFIGURATION SPECIFICATION LANGUAGE: Part of SCAP.  The standard is there, it just needs to be extended to various pieces of software.
  5. STANDARD SOFTWARE CONFIGURATION:  This is the NIST configuration checklist program ala SP 800-70.  I think NIST ran short on manpower for this also and resorted back to pointing at the DISA STIGS and FDCC.  This so needs further development into a uniform set of standards and then, here’s the key, rolled back upstream to the software vendors so they ship their product pre-configured.
  6. VULNERABILITY SPECIFICATION LANGUAGE: Sounds like SCAP.

Now for the “gotchas”:

(d) COMPLIANCE ENFORCEMENT- The Director shall–

(1) enforce compliance with the standards developed by the Institute under this section by software manufacturers, distributors, and vendors; and

(2) shall require each Federal agency, and each operator of an information system or network designated by the President as a critical infrastructure information system or network, periodically to demonstrate compliance with the standards established under this section.

This section basically does 2 things:

  • Mandates compliance with the NIST standards listed above for vendors and distributors.  Surprised this hasn’t been talked about elsewhere.  This clause suffers from scope problems, because if you interpret it BSOFH-stylie, you can take it to mean that anybody who sells a product, regardless of who’s buying, has to sell a securely-configured version.  IE, I can’t sell XP to blue-haired grandmothers unless I have something like an FDCC variant installed on it.  I mostly agree with this in the security sense, but it’s a serious culture shift in the practical sense.
  • Mandates an auditing scheme for Federal agencies and critical infrastructure.  Everybody’s talked about this, saying that since designation of critical infrastructure is not defined, it is left to the discretion of the Executive Branch.  This isn’t as wild-west as the bill’s opponents want it to seem; there is a ton of groundwork laid out in HSPD-7.  But yeah, HSPD-7 is an executive directive and can be changed “at the whim” of the President.  And yes, this is auditing by Commerce, which has some issues in that Commerce is not equipped to deal with IT security auditing.  More on this in a later post.

Verdict: The standards part is already happening today; this section just codifies it and justifies NIST’s research.  Don’t task Commerce with enforcement of NIST standards; it leads down all sorts of inappropriate roads.




Analyzing Fortify’s Plan to “Fix” the Government’s Security Problem

Posted April 1st, 2009

So I like reading about what people think about security and the Government.  I know, you’re all surprised, so cue shock and awe amongst my reader population.

Anyway, this week it’s Fortify and a well-placed article in NextGov.  You remember Fortify: they’re the guys with the cool FUD movie about how code scanning is going to save the world.  And oh yeah, there was this gem from SC Magazine: “Fortify’s Rachwald agrees that FISMA isn’t going anywhere, especially with the support of the paper shufflers. ‘It’s been great for people who know how to fill out forms. Why would they want it to go away?'”  OK, so far my opinion has been partially tainted; somehow I think I’m supposed to take something here personally, but I’m not sure exactly what.

Fortify has been trying to step up to the Government feed trough over the past year or so.  In a rare moment of being touchy-feely intuitive, I get the feeling from their marketing that Fortify is a bunch of Silicon Valley technologists who think they know what’s best for DC: digital carpetbagging.  Nothing new; all y’alls been doing this for as long as I’ve been working with the Government.

Now don’t get me wrong, I think Fortify makes some good products.  I think that universal adoption of code scanning, while not as foolproof as advertised, is a good thing.  I also think that software vendors should use scanning tools as part of their testing and QA.

Fortified cité of Carcassonne photo by http2007.

Now for a couple basic points that I want to get across:

  • Security is not a differentiator between competing products unless it’s the classified world. People buy IT products based on features, not security.
  • The IT industry is a broken market because there is no incentive to sell secure code.
  • In fact, software vendors are often rewarded by the market, because if you arrive first with the largest market penetration, you become the de facto standard.
  • The vendors are abstracted from the problems faced by their customers thanks to the terms of most EULAs–they don’t really have to fix security problems since the software is sold with no guarantees.
  • The Government is dependent upon the private sector to provide it with secure software.
  • It is a conflict of interest for the vendors to accurately represent their flaws unless the Government is going to pay to have them fixed.
  • It’s been proposed numerous times that the Government use its “huge” IT budget to require vendors to sell secure products.
  • How do you determine that a vendor is shipping a secure product?

Or more to the point: how do I, as a software vendor, reasonably demonstrate that I have provided a secure product to the Government without making the economics infeasible for smaller vendors, creating an industry of certifiers ala PCI-DSS and SOX, or dramatically lengthening my development/procurement schedules?  Think of the problems with Common Criteria, because that was our previous attempt.

We run into this problem all the time in Government IT security, but it’s mostly at the system integrator level.  It’s highly problematic to make contract requirements that are objective, demonstrable, and testable yet still take into account threats and vulnerabilities that do not exist today.

I’ve spent the past month writing a security requirements document for integrated special-purpose devices sold to the Government.  Part of this exercise was the realization that I can require that the vendor perform vulnerability scanning, but it becomes extremely difficult to include an amount of common sense into requirements when it comes to deciding what to fix.  “That depends” keeps coming back to bite me in the buttocks time and time again.  At this point, I usually tell my boss how I hate security folks, self included, because of their indecisiveness.

The end result is that I can specify a process (Common Criteria for software/hardware, Certification and Accreditation for integration projects) and an outcome (certification, product acceptance, “go live” authorization), leave the decision-making authority with the Government, and put it in the hands of contracts officers and subject-matter experts who know how to manage security.  Problems with this technique:

  • I can’t find enough contracts officers who are security experts.
  • As a contractor, how do I account for the costs I’m going to incur since it’s apparently “at the whim of the Government”?
  • I have to apply this “across the board” to all my suppliers due to procurement law.  This might not be possible right now for some kinds of outsourced development.
  • We haven’t really solved the problem of defining what constitutes a secure product.
  • We’ve just deferred the problem from a strategic solution to a tactical process depending on a handful of clueful people.

Honestly, though, I think that’s as good as we’re going to get.  Ours is not a perfect world.

And as for Fortify?  Guys, quit trying to insult the people who will ultimately recommend your product.  It’s bad mojo, especially in a town where the toes you step on today may be attached to the butt you kiss tomorrow.  =)



