NIST Security Automation Conference

Posted September 13th, 2010

It’s at the end of September, check it out.  Even if you’re not in the vulnerability/patch rat race on a daily basis, it would “behoove” you to go check out what’s new.  If you’ve been paying attention to OMB Memo M-10-15, you’ll notice that CyberScope takes some SCAP input.




Posted in FISMA, NIST, Technical | 1 Comment »

Barcode Hacking Process

Posted April 12th, 2010

This is something I’ve been working on in my spare brain cycles:  building a process for barcode hacking.

Limitations with barcode hacking:

  • Feedback: hard to get, and it depends on the scanner and the scanner app.  In other words, you really need access to a working setup to test any kind of technique.  This isn’t web-based SQLi where you can compare the output against other results; you have to look “inside the guts” to see if a change happened.
  • Reflections and Noise: Laser-based scanners have problems with reflection on phone screens.  This *almost* limits you to printed barcodes and reduces some of the interactivity.
  • UPC: This symbology sucks for barcode hacking because you’re limited to 12 digits; no letters are supported.

Kernels of nummieness:

  • Most modern barcode scanners attach via USB and are recognized as a keyboard.
  • Read the previous sentence again.  =)  You know what to do here (see the sketch after this list).
  • The USPS uses DataMatrix barcodes for postage.  These include command characters that “freak out” anything I read them on.  This has much potential; now if only I can figure out how to harness it for the powers of mischief.
  • I have a Symbol 2D barcode reader; you can buy them on eBay for ~$120.
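
Since the scanner shows up as a keyboard, anything you can encode in a barcode becomes keystrokes on the host.  Here’s a minimal sketch of generating such a payload as a QR code, assuming the third-party segno library (pip install segno); the payload string is purely hypothetical and just stands in for whatever you’d type if you had the keyboard:

    # Minimal sketch: encode an arbitrary "keystroke" payload into a QR code.
    # The scanner, enumerating as a USB keyboard, will "type" whatever we encode.
    import segno

    payload = "cmd.exe /c net user"       # hypothetical keystroke payload

    qr = segno.make_qr(payload, error="h")   # high error correction survives bad prints
    qr.save("payload_qr.png", scale=8)       # print it and wave it at the scanner

Print it out, scan it, and watch the host “type” the payload into whatever has focus.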

The process should run something like this:

  • Configuration injection: given the make and model of the scanner, turn on all available symbologies to increase the reader’s attack surface. These command sets are available from the manufacturer, and there is a wealth of untapped firmware vulns in them.
  • Discovery test: determine which symbologies the barcode scanner supports.  The goal is to get something that supports the full ASCII set.  Code 128 (1D), PDF-417, QR, Aztec, and DataMatrix are your friends here.  For discovery, you can use “all 1’s” or something along those lines (there’s a generation sketch after this list).
  • Command injection: attempt to pass OS commands to the reader application, then download and install a payload onto the OS via browser, FTP, etc., or gain a shell on the box.
  • Application escape: Attempt to escape out of the application and into the OS.  Then it’s just a simple matter of regular exploits *or* if you’re lucky, you’re already admin.  At least try a ctrl-alt-del and see what happens.
  • SQL injection: you know this one, string concatenation that’s passed to the database.  The problem is that, depending on the system, you might not get feedback, so blind SQLi is harder.  “' or 1=1;--” probably won’t work because there isn’t really a login, and when you’re scanning barcodes you’re already past that point anyway.  I think the goal here should be command execution: add users, exec OS commands, and turn on additional services.
  • Malformed barcode: as a last resort, try fuzzing with non-standards-compliant barcodes to get either the scanner or the application to barf.
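
For the discovery step, it helps to pre-generate one “all 1’s” test card per symbology and see which ones the target scanner reads.  A rough sketch, assuming the third-party treepoem library (a wrapper around BWIPP; it needs Ghostscript installed); the encoder names below are BWIPP’s:

    # Rough sketch: one "all 1's" discovery card per full-ASCII-capable symbology.
    import treepoem

    symbologies = ["code128", "pdf417", "qrcode", "azteccode", "datamatrix"]

    for sym in symbologies:
        img = treepoem.generate_barcode(barcode_type=sym, data="1111111111")
        img.convert("1").save(f"discovery_{sym}.png")   # print and test each one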

BTW, all the kids with their barcodes that say “' or 1=1;--” crack me up because they’re being barcode skiddies and don’t understand how barcodes are really used.  =)

[Barcode image: “SQLi Test”.  SQL Injection Bogus Example by ME!  Only you can stop the stupidity.]
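
To see why the login-bypass payload is mostly a joke in this context, here’s a toy sketch of where a scanned value usually ends up: a lookup query built by string concatenation.  The table and column names are invented for illustration, and I’ve dropped the ;-- terminator since this is a single-statement demo:

    # Toy sketch: the scanned value lands in a lookup, not a login check,
    # so a tautology payload just returns every row -- no shell, no glory.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE products (upc TEXT, name TEXT);
        INSERT INTO products VALUES ('012345678905', 'Widget');
    """)

    scanned = "' OR '1'='1"   # same idea as ' or 1=1;-- off a printed barcode
    # The classic mistake: concatenating the scanned value into the query.
    query = "SELECT name FROM products WHERE upc = '" + scanned + "'"
    print(conn.execute(query).fetchall())   # every row comes back

If you want more than a dump of the products table, you’re looking at stacked queries and database-specific command execution, which is exactly the point above.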




Posted in Hack the Planet, Technical | 1 Comment »

Barcode Hacking

Posted January 13th, 2010

A little presentation I did for NoVA Hackers.  The basic intent was to be more workshop than formal talk and to give everybody the tools to do their own experimentation at home.

I even inspired Jack to write a blog post.

Caveat: this has nothing to do with FISMA or Government InfoSec.  =)

Links in the Presentation:

Links of interest:




Posted in Hack the Planet, Speaking, Technical | 6 Comments »

Save a Kitten, Write SCAP Content

Posted August 7th, 2009

Apparently I’m the Internet’s SCAP Evangelist according to Ed Bellis, so at this point all I can do is shrug and say “OK, I’ll teach people about SCAP”.

Right now there is a “pretty OK” framework for SCAP; i.e., we have published standards, and there are some SCAP-certified tools out there to do patch and vulnerability management.

What’s missing right now is SCAP content.  I don’t think this is going to get solved en masse; it’s more that there needs to be an awareness campaign directed at end users, vulnerability researchers, and people who write small-ish tools.

So I sat around at home trying to figure out how to get people to use/write more SCAP content and finally settled on “Every time you use SCAP content, a kitten runs free”.
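
And to show how low the bar is for *using* SCAP content once you have it, here’s a minimal sketch that walks an XCCDF 1.1 benchmark and lists its rules; the file name is hypothetical, but the namespace is the real XCCDF 1.1 one:

    # Minimal sketch: list the rules in an XCCDF 1.1 benchmark.
    import xml.etree.ElementTree as ET

    NS = {"xccdf": "http://checklists.nist.gov/xccdf/1.1"}

    tree = ET.parse("some-benchmark-xccdf.xml")   # hypothetical file name
    for rule in tree.iter("{http://checklists.nist.gov/xccdf/1.1}Rule"):
        title = rule.find("xccdf:title", NS)
        print(rule.get("id"), "-", title.text if title is not None else "(untitled)")

Every time you run something like that instead of re-reading a hardening guide PDF, a kitten runs free.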

Anyway, this is a presentation I gave at my local OWASP chapter.




Posted in NIST, Speaking, Technical | 4 Comments »

Federated Vulnerability Management

Posted July 14th, 2009

Why hello there, private sector folks.  It’s no big surprise: I work in the US Federal Government space, and we have some unique challenges of scale.  Glad to meet you; I hear you’ve got the same problems, only not at the same kind of scale as the US Federal Government.  Sit back, read, and learn.

You see, I work in places where everything is siloed into different environments.  We have crazy zones for databases, client-facing DMZs, management segments, and then the federal IT architecture itself: a loose federation of semi-independent enterprises that are rapidly coming together in strange ways under the wonderful initiative known as “The TIC”.  We’re also one of the most heavily audited sectors in IT.

And yet, the way we manage patch and vulnerability information is something out of the mid-’80s.

Current State of Confusion

Our current patch management information flow goes something like this:

  • Department SOC/US-CERT/CISOs Office releases a vulnerability alert (IAVA, ISVM, something along those lines)
  • Somebody makes a spreadsheet with the following on it:
    • Number of places with this vulnerability.
    • How many have been fixed.
    • When you’re going to have it fixed.
    • A percentage of completion
  • We then manage by spreadsheets until the spreadsheets say “100%”.
  • The spreadsheets are aggregated somewhere.  If we’re lucky, we have some kind of management tool that we dump our info into like eMASS.
  • We wonder why we get pwned (by either haxxorz or the IG).

Now for how we manage vulnerability scan information:

  • We run a tool.
  • The tool spits out a .csv or worse, a .html.
  • We pull up the .csv in Excel and add some columns.
  • We assign dates and responsibilities to people.
  • We have a weekly meeting to go over what’s been completed.
  • When we finish something, we provide evidence of what we did.
  • We still really don’t know how effective we were.

Problems with this approach:

  • It’s too easy to game.  If I’m doing reporting, the only thing really keeping me reporting the truth is my sense of ethics.
  • It’s slow as hell.  If somebody updates a spreadsheet, how does the change get echoed into the upstream spreadsheets?
  • It isn’t accurate at any given moment in time, mostly because things change quicker than the process can keep up.  What this means is that we always look like liars who are trying to hide something because our spreadsheet doesn’t match up with the “facts on the ground”.
  • It doesn’t tie into our other management tools, like Plans of Action and Milestones (POA&Ms).  Those are usually managed in a different application than the technical parts, and this means that we need a human with a spreadsheet to act as the intermediary.

So this is my proposal to “fix” government patch and vulnerability management: Federated Patch and Vulnerability Management through SCAP.

Trade Federation Battle Droid photo by Stéfan.  Roger, Roger, SCAP means business.

Whatchu Talkin’ Bout With This “Federated” Stuff, Willis?

This is what I mean, my “Plan for BSOFH Happiness”:

Really what I want is for every agency to have an “orchestrator” à la Ed Bellis’s little SCAP tool of horrors. =)  Then we federate them so that information can roll up to a top-level dashboard for the entire executive branch.

In my beautiful world, every IT asset reports into a patch management system of some sort.  Servers, workstations, laptops, all of it.  Yes, databases too.  Printers–yep.  If we can get network devices reporting config info via an SCAP-enabled NMS, let’s get that pushing content into our orchestrator too.  We don’t even really have to push patches using these tools–what I’m primarily concerned with at this point is the ability to pull reports.

I group all of my IT assets in my system into a bucket of some sort in the orchestrator.  That way, we know who’s responsible when something has a problem.  It also fits into our “system” concept from FISMA/C&A/Project Management/etc.

We do periodic network scanning to identify everything on our network and feed the results into the orchestrator.  We do regular vulnerability scans, and any findings feed into the orchestrator.  The more data, the better the aggregate information we can get.

Our orchestrator correlates network scans with patch management status and gives us a ticket/alert/whatever where we have unmanaged devices.  Yes, most enterprise management tools do this today, but the more scan results I have feeding them, the better chance I have at finding all my assets.  Thanks to our crazy segmented architecture models, we have all these independent zones that break patch, vulnerability, and configuration management as the rest of the IT world performs it.  Flat is better for management, but failing that, I’ll take SCAP hierarchies of reporting.
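
The correlation itself is not rocket science.  A sketch, with invented feeds standing in for what your scanner and patch tool would actually export:

    # Sketch: anything the network scan sees that the patch system doesn't
    # know about becomes a ticket for an unmanaged device.
    scanned_hosts = {"10.1.1.5", "10.1.1.9", "10.1.2.40"}   # from network discovery
    managed_hosts = {"10.1.1.5", "10.1.2.40"}               # from patch management

    for host in sorted(scanned_hosts - managed_hosts):
        print(f"TICKET: unmanaged device {host} -- enroll it or explain it")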

The Department takes a National Vulnerability Database feed and pushes down to the Agencies what they used to send in an IAVA, only now they also send down the check to see if your system is vulnerable.  My orchestrator automagically tests and reports back on status before I’m even awake in the morning.
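
As a sketch of the feed half of that, here’s what pulling vulnerability data looks like against the NVD REST API (v2) — an assumption on my part standing in for the pushed-down feed; assumes the third-party requests library:

    # Sketch: pull recent vulnerability data from the NVD REST API (v2).
    import requests

    resp = requests.get(
        "https://services.nvd.nist.gov/rest/json/cves/2.0",
        params={"keywordSearch": "openssl", "resultsPerPage": 5},
        timeout=30,
    )
    resp.raise_for_status()

    for item in resp.json().get("vulnerabilities", []):
        cve = item["cve"]
        print(cve["id"], "-", cve["descriptions"][0]["value"][:80])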

I get hardening guides pushed from the Department or Agency in SCAP form, then pull an audit on my IT assets and have the differences automagically entered into my workflow and reporting.
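
The diff step is the easy part.  A sketch with invented setting names; real content would arrive as XCCDF/OVAL, and real checks would query the host:

    # Sketch: compare pushed-down expected settings against actual settings
    # and drop the deltas straight into workflow.
    expected = {"PasswordMinLength": "12", "AuditLogonEvents": "enabled"}
    actual   = {"PasswordMinLength": "8",  "AuditLogonEvents": "enabled"}

    for setting, want in expected.items():
        got = actual.get(setting)
        if got != want:
            print(f"WORKFLOW: {setting} is {got!r}, benchmark says {want!r}")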

I become a ticket monkey.  Everything is in workflow.  I can be replaced with somebody less expensive and can now focus on finding the answer to infosec nirvana.

We provide a feed upstream to our Department, and the Department provides a feed to somebody (NCSD/US-CERT/OMB/Cybersecurity Coordinator) who now has the view across the entire Government.  Want to be bold?  Let Vivek K and the Sunlight Foundation at the data feeds and have a truly open and transparent “Unbreakable Government 2.1”.  Who needs FISMA report cards when our vulnerability data is on display?

Keys to Making Federated Patch and Vulnerability Management Work

Security policy that requires SCAP-compatible vulnerability and patch management products.  Instead of parroting back 800-53, please give me a requirement in your security policy that every patch and vulnerability management tool that we buy MUST BE SCAP-CERTIFIED.  Yes, I know we won’t get it done right now, but if we get it in policy, then it will trickle down into product choices eventually.  This is compliance I can live with, boo-yeah!

Security architecture models (FEA anyone?) that show federated patch and vulnerability management deployments as part of their standard configuration.  OK, with the firewall pictures and zones of trust I understand what you’re saying; now give me patch and vulnerability management flows across all the zones so I can do the other 85% of my job.

Network traffic from the edges of the hierarchy to…somewhere.  OK, you just need network connectivity throughout the hierarchy to aggregate and update patch and vulnerability information, this is basic data flow stuff.  US-CERT in a future incarnation could be the top-level aggregator, maybe.  Right now I would be happy building aggregation up to the Department level because that’s the level at which we’re graded.

Understanding.  Hey, I can’t fix everything all the time–what I’m doing is using automation to make the job of fixing things easier by aggregation, correlation, status reporting, and dashboarding.  These are all concepts behind good IT management, so why shouldn’t we apply them to security management also?  Yes, I’ll have times when I’m behind on something or another, but guess what, I’m behind today and you just don’t know it.  However, with near-real-time reporting, we need a culture shift away from trying to police each other up all the time and toward understanding that sometimes nothing is really perfect.

Patch and vulnerability information is all-in.  Everything has to be reporting in, 100% across the board, or you don’t have anything–back to spreadsheet hell for you.  And simply put, why don’t you have everything in the patch management system already?  Come on, that’s not a good enough reason.

POA&Ms need to be more fluid.  Face it, with automated patch and vulnerability management, POA&Ms become more like trouble tickets.  But yes, that’s much awesome, smaller, easily-satisfied POA&Ms are much easier to manage provided that the administrative overhead for each of these is reduced to practically nothing… just like IT trouble tickets.

Regression testing and providing proof becomes easier because it’s all automated.  Once you fix something and it’s marked in the aggregator as completed, it gets slid into the queue for retesting, and the results become the evidence.
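
A sketch of that retest loop, with check() standing in for whatever SCAP-driven test found the issue in the first place:

    # Sketch: marking a finding fixed just means "queue it for verification";
    # the rescan result becomes the evidence.
    from collections import deque

    retest_queue = deque()

    def mark_fixed(finding_id):
        retest_queue.append(finding_id)

    def run_retests(check):
        while retest_queue:
            finding = retest_queue.popleft()
            passed = check(finding)          # re-run the original check
            print(f"{finding}: {'evidence: pass' if passed else 'reopened'}")

    mark_fixed("CVE-2009-1234 on 10.1.1.5")
    run_retests(check=lambda f: True)        # pretend the fix took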

Interfaces with existing FISMA management tools.  This one is tough.  But we have a very well-entrenched software base geared around artifact management, POA&M management, and Security Test and Evaluation results.  This class of software exists because none of the tools vendors really understand how the Government does security management, and I mean NONE of them.  There have to be some weird, unnatural data import/export acts going on here to make the orchestrator of technical data match up with the orchestrator of management data, and this is the part that scares me in a federated world.

SCAP spreads to IT management suites.  They already have a footprint out there on everything, and odds are we’re using them for patch and configuration management anyway.  If they don’t talk SCAP, push the vendor to get it working.

Where Life Gets Surreal

Then I woke up and realized that if I provide my Department CISO with near-real-time patch and vulnerability management information, I suddenly have become responsible for patch and vulnerability management instead of playing “kick it to the contractors” and hiding behind working groups.  It could be that if I get Federated Patch and Vulnerability Management off the ground, I’ve given my Department CISO the rope to hang me with.  =)

Somehow, somewhere, I’ve done most of what CAG was talking about and automated it.  I feel so… um… dirty.  Really, folks, I’m not a shill for anybody.




Posted in DISA, NIST, Rants, Technical | 12 Comments »

Security Automation Developers Conference Slides

Posted July 2nd, 2009

Eh? What’s that mean?  Developer Days is a weeklong conference where they get down into the weeds about the various SCAP schemas and how they fit into the overall program of security automation. 

Highlights and new ideas:

Remedial Markup Language: Fledgling schema to describe how to remediate a vulnerability.  A fully automated security system would scan and then use the RML content to automagically fix the finding… say, changing a configuration setting or installing a patch.  This would be much awesome if combined with CVE/CWE so you have a vulnerability scanner that scans and fixes the problem.  It also needs to be kept in a bottle, because the operations guys will have a heart attack if we do this without any human intervention.
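
Since RML is only fledgling, here’s a purely hypothetical sketch of what consuming remediation content might look like, with a human approval gate so the operations guys keep breathing; the finding IDs and commands are invented:

    # Hypothetical sketch: map a finding to a remediation action; nothing
    # runs without explicit approval.
    import subprocess

    remediations = {
        "oval:example:def:1001": ["sysctl", "-w", "net.ipv4.ip_forward=0"],
    }

    def remediate(finding_id, approved=False):
        cmd = remediations.get(finding_id)
        if cmd is None or not approved:
            return "needs human review"
        return subprocess.run(cmd, check=False).returncode

    print(remediate("oval:example:def:1001"))   # no approval, no surprise changes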

Computer Network Defense: There is a pretty good scenario slide deck on using SCAP to automate hardening, auditing, monitoring, and defense.  The key from this deck is how the information flows using automation.

Common Control Identifier:  This schema is basically a catalog of controls (800-53, 8500.2, PCI, SoX, etc) in XML.  The awesomeness with this is that one control can contain a reference implementation for each technology and the checklist to validate it in XCCDF.  At this point, I get all misty…

Open Checklist Interactive Language: This schema captures questionnaires.  Think managerial controls, operational controls, policy, and procedure captured in electronic format and fed into the regular mitigation and workflow tools that you use, so that you can view “security of the enterprise at a glance” across technical and non-technical security.

Network Event Content Automation Protocol:  This is just a concept floating around right now on using XML to describe and automate responses to attacks.  If you’re familiar with ArcSight’s Common Event Format, this would be something similar but on steroids, with workflow and a pony!

Attendance at Developer Days is limited, but thanks to the “Powar of teh Intarwebs,” you can go here and read the slides!




Posted in NIST, Technical | 3 Comments »


