But you probably knew that already, didn’t you? =)
I spent last night writing a press release for ISM-Community Top Ten. Press of the world, be warned, you will be hearing from me soon.
Anyway, I learned a few lessons from writing a press release.
Seriously, though, they’re good skills to learn, even if you think you’ll never need them again.
Highly loveable movie about zombies as pets and assistants. Until their collar stops working, thus proving my theory that zombies just want to eat your brains!
Trailer here at Daily Motion. I wouldn’t recognize Carrie Ann Moss if I didn’t read the credits. =)
Now, to tie this back into security: Fido is a classic example of what happens when you build a security architecture around a single control and that control becomes a single point of failure. Then it’s time for the shovel.
Let me be one of the first to congratulate you. Whether your title is CISO, ISSO, Manager, or Consultant, being a security manager is an accomplishment.
Now for the bad news: you need to go into the job knowing that you will always be short on people, time, and money. Good people are hard to come by, and as soon as you get them trained up, they’ll change jobs because they’ve outgrown what you hired them to do. Time is critical because effective security requires cooperation with all the other business disciplines, which takes time and effort. Security is seen as a cost center, so any good business will try to limit security spending in order to maximize its profit.
My friends at ISM-Community have developed an Information Security Management Top 10 document with some very solid practical advice for how to survive in today’s security environment. Think of it as a list of meta-themes that all successful security managers and programs have in common.
The ISM Top 10 doesn’t solve all of your people, time, and money problems, but it can help you recognize trends and set a long-term strategy for winning.
Computer lab that I cared for and kept running as a side job to keep from going crazy from the heat. Check out the layer of dust.
You can read about my satellite adventures here.
If you haven’t heard of The Long Tail by now, you’re either not a student of Web x.0, only read the mainstream mass media, or you live under a rock. Or all 3. I was going to do some “esspraining”, but wikipedia does it way better than I can.
Here, this is the picture from Wikipedia:
In this picture, the part in green represents the high-demand, high-sales products/services and the yellow represents “The Long Tail” or low-demand, low-sales products/services that actually constitute the majority of sales. So in other words, if you’re a Netflix, you rent more movies simply by having all the obscure titles that a brick-and-mortar video store can’t afford the shelf space for.
This concept has also been used to explain blogs, where blogs represent The Long Tail and are free to talk about the niche subjects that the mainstream mass media ignores because the mainstreamers are constrained by time and applicability to their readership.
As with just about everything I write, by now you’re thinking “What does this have to do with information security?” Yes, I hear this quite a bit, so don’t be worried if it’s not immediately apparent.
Imagine the same drawring with “Level of Effort” (LOE) as the X-Axis and “Return on Investment” (ROI) (what I really want to say is “payoff” but I’m trying to be pseudo-scientific, so humor me) as the Y-Axis. It would look something like this:
Anything that is green represents “high-payoff activities” or “common sense security”–the easy controls that provide a high level of security or other benefits. In this group, we have change control, automated patching, and testing backup tapes. You probably have a handful of similar controls that come to mind.
Anything in yellow represents “excessive spending” or “you must be out of your mind”. In other words, the amount of resources you would have to expend to build the control outweighs the benefits you would get.
But there’s one catch: what we are trying to do in deciding if/how to implement a security control is to make a decision based on cost, benefit, and risk. We have cost and benefit, how do we account for risk?
If you take a look at where the division is between green and yellow, that line represents what we would call “acceptable risk”–it’s a sliding scale along the X-Axis. Where that tipping point lies depends on the nature of the system, the mission that it supports, and the types of data that it stores, processes, or forwards.
For high-criticality systems, you move the line to the right and you actually become less efficient in the types of security controls that you build–you’re working The Long Tail for all it’s worth.

But for low-criticality systems, all you really have to focus on is the high-payoff activities, because your level of acceptable risk is higher.
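To make the model concrete, here’s a toy sketch in Python. The control names, effort scores, and payoff scores are all invented for illustration–they’re not from any real assessment–but it shows how sliding the “acceptable risk” line along the Level of Effort axis changes which controls make the cut.

```python
# Toy model of the Long Tail of security controls.
# All names and numbers below are made up for illustration.

controls = [
    # (name, level_of_effort, payoff) -- arbitrary 1-10 scales
    ("automated patching",      2, 9),
    ("change control",          3, 8),
    ("testing backup tapes",    3, 7),
    ("full-disk forensics lab", 8, 3),
    ("custom kernel hardening", 9, 2),
]

def controls_to_build(controls, acceptable_loe):
    """Return the controls left of the 'acceptable risk' line:
    everything whose level of effort is within what the system's
    criticality justifies (the green zone)."""
    return [name for name, loe, _payoff in controls
            if loe <= acceptable_loe]

# Low-criticality system: the line sits far left, so only the
# high-payoff "common sense" controls make the cut.
low_crit = controls_to_build(controls, acceptable_loe=3)

# High-criticality system: the line slides right, pulling in
# Long Tail controls even though they're less efficient.
high_crit = controls_to_build(controls, acceptable_loe=8)
```

The point of the sketch is that nothing about the controls themselves changes between the two cases–only where you draw the line.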
Now when you’re in a compliance information security management model, what’s happening is that somebody else is setting that level of acceptable risk for you. I think this is the reason there is such a backlash against most compliance frameworks. What is low LOE for somebody else might be high LOE for me because of the technology I have in play or due to other externalities, and if you hold me to that pre-determined level of risk acceptance, then I’m back to spending inefficiently. As a business, I hate it when people tell me to spend inefficiently “for my own good”.
What do I expect you to do with this model? Not much, I’m just building on the ideas from Jacquith, Earl Crane, and other people that I know. I just figured it would help somebody explain acceptable risk and compliance in a format that was easier to understand.