Wednesday 13 April 2016

How to prepare sensibly for a disclosure of nothing

If Badlock has taught us anything, it is how not to disclose a security vulnerability. If you get a logo, a snazzy name, a URL, a website and a marketing agency, and then ramp up the suspense for about three weeks ("Mark the date," "Please get yourself ready to patch all systems on this day" - courtesy of Wired, who wisely kept a copy of the older site), you have to deliver the goods.
SerNet certainly delivered a bug and a vulnerability but, I think, failed to properly appreciate how difficult the threat vector is to exploit and how limited the impact of a successful exploit would be, and hence underwhelmed on overall risk.
Let me briefly explain. If the threat vector is 'man in the middle' and the impact is 'privilege escalation' or 'denial of service' (as opposed to the holy grail of remote code execution), then in many environments a successful exploit adds nothing new to what you've got already. If you can MITM something, you can stop it from delivering service, and you probably already have elevated privileges on the network or on one of the endpoints. A successful exploit may therefore only add some accounts to your arsenal, and not deliver more rights.
Should you patch? Absolutely. Should you pull out all stops to patch now? Perhaps not. At least not everywhere.
To sum up, I, like many other security professionals, am somewhat peeved at SerNet for kicking up a stink and then not delivering the goods. That may sound like masochism, as some have suggested, but these sorts of goings-on are an actual threat to security operations, to the degree that they breed mistrust in the infosec profession and complacency everywhere else.
A problem for people running security teams is how to prepare for threats like this. It is easy to do a SerNet, pull out all the stops, and get our community ready for disaster to strike. But any failure to then deliver the goods on the day will only set the business up for complacency in an area of security that is already notoriously difficult: patching and maintenance. And it can, in one day, destroy a reputation that has taken the security team years to build.
I am fortunate. One of my messages to the business was that I did not discount the possibility that this would turn out to be a storm in a teacup, as indeed it was. We were prepared, even for something like this.
I think there are a few lessons to learn here if you run an operational team:
  1. Have discovery infrastructure for your own environment. By that I mean that you have to be ready to find out what systems are running SMB ports on your network right now. Not what is in the asset register. Not what the architects think you have. Be able to discover what you have with minimal effort, so that you can prepare with minimal effort (see the sketch below this list).
  2. Communicate with care. Threat inflation is not a strategy for success. It is the road to failure.
  3. A corollary: be aware that in the infosec profession there is a fine line between being seriously 'leet with 'attitude' and being a jerk. Always prepare for the possibility that the announcing party is a jerk. The chances of that increase as the secrecy and the alarming language ramp up.
  4. Recognise the signs. The serious vulnerabilities - Heartbleed, for example - were patched on the quiet first and only got the website and logo once the vulnerability was ready to be released (or very shortly before, if my memory serves). There was no three-week 'ramp up the heat' period.
Of these, I think 1 and 2 are the most important to manage threats like this one.
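
To make point 1 concrete, here is a minimal sketch in Python of the kind of ad-hoc discovery I mean. It simply checks which hosts in a subnet answer on the SMB port; the subnet, timeout and port are illustrative assumptions rather than a prescription, a real setup would more likely drive nmap or a dedicated scanner, and you should of course only scan networks you are authorised to scan.

```python
# A minimal sketch, assuming you are authorised to scan: find hosts in a
# subnet that answer on the SMB port (445/tcp). A real discovery setup
# would more likely drive nmap or a dedicated scanner.
import ipaddress
import socket

SMB_PORT = 445               # 139/tcp also matters for legacy NetBIOS
SUBNET = "192.168.1.0/24"    # hypothetical range; substitute your own
TIMEOUT = 0.5                # seconds; tune for your network

def smb_hosts(subnet: str):
    """Yield addresses in `subnet` that accept a TCP connection on 445."""
    for host in ipaddress.ip_network(subnet).hosts():
        try:
            with socket.create_connection((str(host), SMB_PORT), timeout=TIMEOUT):
                yield str(host)
        except OSError:
            continue  # closed, filtered, or host down

if __name__ == "__main__":
    for host in smb_hosts(SUBNET):
        print(host)
```

The point is not this particular tool, but the readiness: being able to run something like it within minutes of an announcement, rather than days.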

Thursday 9 July 2015

How smart organizations implement security

There are a number of ways in which organizations can improve their cyber security - a few smart ones and many dumb and expensive ones. It is the role of an effective security leader to ensure that their organization gets the most out of the smart ways to improve security, and avoids the dumb ones as much as possible.

In the last decade, I have come across [1] four methods: three (dumb ones) seem to be in widespread use, and the fourth (smart one) seems to go largely unnoticed. The four methods are:
  • The FUD bomb
  • The Risk based strategy
  • The Compliance strategy
  • Guerilla tactics
The first three are the losing methods. The FUD bomb used to be popular with vendors of security technology, and in its normal incarnation consisted of hour-long sales talks in which the first half hour was devoted to scaring the pants off the potential client, after which the product to be hawked was delivered - deus ex machina fashion - as the savior with blinking lights to make it all go away. The security holes left by this approach need no laying out to anyone who has spent more than a few months working in cyber security. Executives have now cottoned on to this sales tactic and largely immunized themselves against it, so it can be considered ineffective at this point.

The Risk based and Compliance based methods are somewhat complementary in their effects, though both are failures. The problem with using Risk and Compliance as drivers for security spend is that they tie the security leader up in ROI [2] discussions that are impossible to win. Usually, the compliance-based discussion ends up with a certain percentage coverage of pre-set security controls, which is then deemed 'acceptable' from a risk perspective.

Even the run-of-the-mill, not-so-serious cyber security event cannot be tackled with the cyber defense resulting from a risk and compliance approach: this is the sort of cyber defense a good hacker usually runs rings around. Hackers are simply experts at finding the 15% of your 'compliance' that is not yet covered by controls. A daily report of failed authentications, for instance, is utterly ineffective against an intruder who pivots hourly. Of course, in the event, you do not get a 'penetration testing report' that lets you fix your problems at leisure. Instead, you're pwned, and it's up to you, whatever logs and data you have, and the victims to figure out what just happened.
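
To make the cadence problem concrete, here is a toy sketch - mine, not part of any framework, with invented field names and thresholds - that alerts on failed authentications within a short sliding window instead of summarising them once a day. Something at least this responsive is needed to notice an intruder who pivots hourly.

```python
# Toy sketch: correlate failed authentications in a sliding window so an
# attacker who pivots hourly shows up while still active, rather than in
# tomorrow's batch report. Window and threshold values are hypothetical.
from collections import defaultdict, deque
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=15)   # hypothetical correlation window
THRESHOLD = 20                   # hypothetical failures-per-window alert level

recent = defaultdict(deque)      # source address -> timestamps of failures

def on_failed_auth(source: str, when: datetime) -> None:
    """Record one failed authentication and alert if the window overflows."""
    events = recent[source]
    events.append(when)
    while events and when - events[0] > WINDOW:
        events.popleft()         # drop events that fell out of the window
    if len(events) >= THRESHOLD:
        print(f"ALERT {when:%H:%M} {source}: {len(events)} failures in window")
```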

Serious cyber security events with the potential to be company-terminating are typical long-tail, black-swan events that do not fit risk frameworks at all. A company-terminating event is the moment at which the risk and compliance discussion loses its meaning entirely, and such events are therefore 'out of scope' of those discussions.

This leaves us with guerilla tactics to run a security team. I will have much more to say about that later, but for now, consider this: the main tactic of guerilla warfare is using someone else's resources, augmented with agile and highly pluggable minimal infrastructure, to achieve your goals. Effective security teams work this way, and that is what makes an effective security team a somewhat different beast in any IT organization.

[1] And have also tried to use, to my shame.
[2] Return on investment

Thursday 29 January 2015

What is philosophy of cyber security yet again?

The short answer to that question is that it's something that's not yet there but that ought to be invented. It is no secret that we have, in the last twenty years or so, built a world that is radically different from what came before, and that we are in the middle of some sort of technical curve to an unknown destination.

We now have the internet, and many of the 'disintermediations' that were driving the 'dotcom' boom of the late 1990s are now more or less reality. If you doubt this, ask your local author, musician or filmmaker (and yes, in New Zealand these sorts of people are 'locals'). And where have all the bookshops gone?

Along the way we have learned that we have a problem with 'security' in that world, and it is unclear what to do about it and how a possible solution might work. To many it seems that the powers that be are always just asking for more and more control over data and metadata, and run undisclosed 'dragnet' operations on our connections. At the same time, judging only from our mail inboxes, virus and spam writers run unchecked. Little wonder that many feel that these 'powers that be' are asking for more control, and then delivering little.

It is a good question whether a philosophy of all of this is possible. I think it is. Like many good philosophical questions, this problem can be phrased in terms of a dilemma between two fairly easily understood extremes. The truth then at first seems to lie somewhere in the middle. But good philosophy usually gives it a little twist.

In the case of security versus surveillance I think the extremes are this:
  1. A (Hobbesian) state of nature, in which each internet user is on their own. A lot of innovation happens, along with a lot of good self-organisation and a lot of bad stuff. The closest we have to something like this in real life is probably the deep web, or darknets. Contrary to public opinion, there is life on the deep web apart from weapons and drugs, but it's not a place to go (digitally) unarmed.
  2. A total surveillance state, in which internet crime is quickly stamped out, along with dissent and free speech. The closest we have to this world is the Chinese firewall. And there seems to be no shortage of politicians in the West who want to take us in this direction too.
Can the future lie somewhere between these two extremes? A first answer to this question would be that it has to - neither alternative is on its own acceptable as a future model for the internet.

A second answer, and one that I've heard given, is that it makes sense to give up some of our freedoms from the Hobbesian internet in order to have some security, and that a Hobbesian 'social contract' with a central power is required to keep the net relatively secure and free. For several reasons that answer - when it comes in the form of the Hobbesian deal - doesn't satisfy me. Hobbes was, for starters, not a democrat, so it is hard to see how such a future state of the internet would align with our wider democratic institutions. My hunch is that it probably can't, and that a democratically uncontrolled central power would, little by little, take us into the total surveillance horn of the dilemma.

A third answer, and I think the right one, is that we need to rethink the notions of 'security', 'freedom', 'dissent' and so on for a digital world. We sacrifice a lot in the name of security, yet it seems pertinent to me that we do not even have a robust candidate philosophical definition of what 'security' actually is. As such, the concept is woolly and vague - an ideal frame on which to hang all kinds of ill-conceived 'security measures'.

So I think what's necessary is a philosophical twist. That twist has to start by asking some pertinent questions, and coming up with some new principles governing our digital beings. That is what philosophy of cybersecurity is about.

Wait, another new research area?

As time goes by, I'm somewhat painfully aware that I'm using this blog to announce new research topics, like this post on theoretical terms, that then subsequently don't happen. I could of course delete these posts, since I'm pretty sure they are interesting to no one and not informative. However, I'll leave them for now - maybe I'll want to pick them up again in the future.

With the increasing interest in cybersecurity, which is what I've been doing for the past 10 years or so, I find that the spare time I have to solve these sorts of 'pure play' academic puzzles is somewhat lacking.

That doesn't mean these puzzles are not interesting, but for now, they just are not interesting enough.

Operational Implications of the Cyber Security Threat Stack

Cross posted from the Leiden Safety and Security blog:

Looking back, one can say that 2014 was the year of the hacker. The world over, cyber security agencies and cyber security companies are reporting an increase in the number and the complexity of cyber-attacks. In the University of Auckland's IT Security Team, we had to deal with more, and with more complex, attacks in 2014 than ever before. Such developments place significant demands on cyber security teams. If one thing stands out about cyber attacks, it is that they do not come in one variety. Another thing that can be said is that a single cyber attack is rarely lonely: many incidents now consist of multiple 'attacks' using a variety of tools.

For people in business, universities and government, and for individuals, the question then arises of how to prepare for yet another increase in the number and complexity of cyber security incidents. One particular tool that we use in our team is the threat stack. Used with some caution, the threat stack allows forward planning of our defences against the sorts of attacks that we can expect in the next 12 months.

The threat stack is a categorisation of attacks indexed by likely actor and motivation. It indexes cyber threats from fairly innocuous experimentation, primarily by researchers, up to advanced cyber crime and advanced persistent threats. In 2013, Richard Stiennon extended it by adding surveillance. At its simplest, the threat stack can be interpreted as a measure of the motivation and sophistication of a particular group of attackers. It is also possible to attach an approximate timeline to the threats, indicating when each threat was most prominent, and the maturity level of the threat.
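
As a rough illustration of how a team might operationalise the stack for planning, here is a sketch that encodes the tiers named above as a simple data structure. The actor and motivation labels are my illustrative glosses, not the canonical table, and a real threat stack carries more levels and detail.

```python
# Illustrative sketch of a threat stack as a planning data structure.
# Only tiers named in the text are included; labels are glosses, not canon.
from dataclasses import dataclass

@dataclass
class Threat:
    level: int        # position in the stack; low = least sophisticated
    name: str
    actor: str        # likely actor behind attacks at this level
    motivation: str

THREAT_STACK = [
    Threat(1, "experimentation", "researchers", "curiosity"),
    Threat(2, "advanced cyber crime", "criminal groups", "financial gain"),
    Threat(3, "advanced persistent threat", "well-resourced groups", "espionage"),
    Threat(4, "surveillance", "state agencies", "intelligence collection"),  # Stiennon's 2013 extension
]

def threats_up_to(max_level: int):
    """Return the tiers a team plans its defences against, up to max_level."""
    return [t for t in THREAT_STACK if t.level <= max_level]
```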

Friday 24 October 2014

Hacker we can see you: getting through to security incidents

The announcement is simple:

"Time: 8 AM to 9 AM
Location: University of Auckland Business School
Event Date: Nov 13, 2014
Organization: New Zealand Information Security Forum (NZISF)

In this talk I will focus on how to detect the groups behind our incidents, and some of the methods that we use in the security team at the University to detect hacking early, preferably before it has done any damage. We have developed a number of ‘predictive controls’ that have proven successful in detecting and deterring compromises of University data. I provide an overview of some external research outlining why such predictive controls are now a necessity for any security team. I then discuss the sort of security skills and security operations that are required to implement and maximise the usefulness of predictive controls."

And that's it. The topic is not so simple, but unfortunately much of it is not something I'd post on my blog. The upshot is that if you run a security team and do your security operations well, then you'll know what I'm talking about. Many organisations discover, after getting hacked, that they had the data pointing to an impending attack all along. It is searching for, and operationalising, this data that is the hard bit.

The key to doing this well is to abstract from incidents. Incidents are one-offs, which you open when they happen and close when done. But the majority of our 'incidents' are generated by groups who keep coming back for more. My talk is about how to identify these groups, and then how to use your incidents in a constructive manner to predict when they'll strike next.
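
As a hedged illustration of what 'abstracting from incidents' can look like, the sketch below clusters closed incidents by a crude actor signature, so that repeat visitors become visible across one-off tickets. The records and fields are invented for the example; real correlation uses far richer indicators than source network and tooling.

```python
# Illustrative sketch: group one-off incidents by a crude actor signature
# (shared source network and tooling) to surface groups that keep coming
# back for more. All records and field names are invented.
from collections import defaultdict

incidents = [  # hypothetical closed incident records
    {"id": 1, "src_net": "203.0.113.0/24", "tool": "phish-kit-A"},
    {"id": 2, "src_net": "203.0.113.0/24", "tool": "phish-kit-A"},
    {"id": 3, "src_net": "198.51.100.0/24", "tool": "ssh-bruteforce"},
]

groups = defaultdict(list)
for inc in incidents:
    signature = (inc["src_net"], inc["tool"])   # crude actor signature
    groups[signature].append(inc["id"])

for signature, ids in groups.items():
    if len(ids) > 1:   # the same actor showing up across 'separate' incidents
        print(f"repeat actor {signature}: incidents {ids}")
```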

Thursday 17 July 2014

A revival and a shift of focus

If further evidence were needed that our lives seldom unfold according to plan, here is some. Thinking I would safely disappear into writing about theoretical terms and similar niceties of scientific theories, I have instead become very interested in the philosophy of cyber security. So this post marks a shift of focus, and also serves as an announcement (and promise!) that I'll blog more regularly from now on.

Sunday 10 June 2012

New Project: Theoretical Terms

With the thesis now more or less done, the book on the Tractatus well underway, and (with one exception) all the material that didn't make it into the thesis turned into papers, submitted and partially accepted, I can turn some initial attention to my next project: a book and some spin-off papers on Theoretical Terms.

As I've written on my 'About' page over at academia.edu, this project is currently more a gleam in my eye than anything concrete. I've decided to blog about the planning and progress of this project, for three reasons: first, it provides some motivation to go and get the project done; second, it will help keep some sort of structure to the project; and lastly, looking back, it may be interesting to see how such a (largish) project can be undertaken in the half-academic world. I say half-academic because, with my recent job change, it probably would no longer be a good idea to identify as post-academic. Perhaps (very perhaps) more about that later.

The working title of the project at the moment is something along the lines of 'Theoretical Terms - what theories tell us about nature', and its theme is to give an account and characterisation of the theoretical core of theories of physical science (the aim is more or less general; it's just that physical science is what I know most about). I plan to start with Kant, and end with a computational characterisation of theories.

Friday 23 September 2011

Chemical objects as iterated Kantian objects

In a previous post, I referred to iterated Kantian objects, without really explaining what they were. I can remedy that now. The iterated Kantian object is the end result of my thinking on what theoretical terms and objects featuring in chemical theories really amount to, and hence it is my proposal for how I think the ontology of chemistry might just function.

The idea starts with the discussion that we've had recently, in book-length form, on Kant's transcendental idealism. For non-philosophers: transcendental idealism is really about how to conceive of the relation between the 'thing in itself' and the 'thing as it appears to us' - in Kant's terms, the noumena and the phenomena. For Kant, we don't have access to the thing in itself, and our access to the thing as we perceive it is epistemically mediated by what our mind brings to the table in perception. Here's the discussion point: the 'thing in itself' is only minimally accessible, if at all. I know, I do gross injustice to Kant here, on several levels.

This is where the trouble starts. For one might argue that if the thing in itself is not accessible, then by the same token we don't know that it exists. In 'Kant's Transcendental Idealism', Henry Allison argues that the 'thing in itself' and the 'thing as we perceive it' really refer to two modes in which we may reflect on concepts and objects. The 'thing in itself' is what there is beyond the appearances, so to say. In 'Kantian Humility', Rae Langton argues that the 'thing in itself' and the 'thing as we perceive it' really refer to two non-overlapping sets of properties of an object - the intrinsic properties, which are inaccessible to us, and the relational properties, which we perceive through our receptivity to the thing observed.

All this degenerates pretty quickly into a debate on what exactly Kant might have meant when he made the distinction. That discussion is of historical interest, but as a philosopher of chemistry with a nasty ontological problem to solve, I propose that we read Kant here in the sense in which Derek Parfit proposes we read him in 'On What Matters' - roughly, as a philosopher with a large number of creative ideas, but also one who sometimes lacks coherence.

Specifically, I propose that we read the 'thing in itself' as an interesting scientific challenge, and read Kant as a philosopher inviting us to do some probing, even while he maintains that we're trying to peel an onion with an infinite number of layers. As a scientist, I'm happy with that. I am still somewhat uncertain of exactly how much of Nineteenth-Century science may be read as a debate between scientists probing for this 'thing in itself' and neo-Kantians' admonitions that such probing was somehow inadmissible. I think the development of atomic theory and the tetrahedral carbon atom pose some interesting examples, but for now I digress.

That approach can, I think, make sense of some puzzling issues in current philosophy of chemistry. For the ontological issue plaguing philosophy of chemistry is that in a sense we have two cooperating - or, some might say, competing - theories of matter: chemistry and physics.

From the viewpoint of Kantian objects, and taking Kant as a creative but perhaps not entirely coherent philosopher, we can then develop the view that chemistry and physics form two different sets of epistemic conditions placed on matter, and hence develop two different sets of pointers to the 'thing in itself'.

The suggestion is that chemistry, as a theory of 'transformation' and 'stuff', does not probe as deeply into matter as physics, which tries to delve deeper towards a foundational theory. These are gross oversimplifications, but they will have to do for now. What this suggests is that what's lacking in the Kantian object is a notion of depth - or, more precisely, an account of how things in themselves may sometimes break through their shells of epistemic conditioning if we ask the same question in a different way and then compare notes.

To develop the notion of depth, it is useful to borrow some concepts from object oriented programming. In object oriented programming, a large, complex program is split up into 'objects' - say, transactions in a banking system, or personal records - which perform certain functions. The object 'person' in the computer program may 'expose' certain methods, such as 'age', 'address', 'sex' and 'income' (if the program in question is run by the IRD). Other parts of the program can 'consume' these methods, but do not have to know how they are implemented: the method itself is 'encapsulated'. The internal definition is hidden from view, but the results are accessible to the component that wants to use the method.
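
A minimal sketch of that encapsulation in Python: the Person object below exposes an 'age' method while hiding the fact that it is computed from a stored date of birth, so consumers get the result without the implementation. The class and its fields are, of course, purely for illustration.

```python
# Minimal sketch of encapsulation: consumers of Person see 'age', not how
# it is derived. The class and fields are illustrative only.
from datetime import date

class Person:
    def __init__(self, name: str, date_of_birth: date):
        self._name = name
        self._dob = date_of_birth    # internal detail, hidden from consumers

    @property
    def age(self) -> int:
        """Exposed method: an age, without revealing that it is computed
        from a stored date of birth."""
        today = date.today()
        had_birthday = (today.month, today.day) >= (self._dob.month, self._dob.day)
        return today.year - self._dob.year - (0 if had_birthday else 1)

# A consumer uses only the exposed method:
print(Person("Ada", date(1990, 6, 15)).age)
```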

This is what the idea that I'm developing on chemical objects amounts to: both chemistry and physics populate their relevant concepts - say, the atom - with encapsulated methods. Physicists developed 'orbitals', 'nucleus', 'electron' and so on; chemists developed the concepts of 'valency', 'directed bonding', and so on. From inside the science itself this makes sense, but ultimately these concepts get 'imported' somewhere else in an encapsulated state, and relevant context is lost in the process. By the way, at the epistemic level, I think that Brown and Priest's 'Chunk and Permeate' approach is one example of what I mean here.

The concept of 'iteration' really means that this importing and exporting over time serves to refine the concepts, and to develop further relevant contexts for this process. That is, at some point the process iterates out to a more or less coherent view of nature. A large part of the ontological problems currently existing in the philosophy of chemistry arise because this process is not finished, though a significant amount of ontological debate also ensues (orbitals, I'm looking at you here) because the philosophers involved have no clue what they're dealing with. I've read more silliness about the ontology of orbitals in the last year than I care to mention.

So, in a nutshell, this approach depends on a number of things:
  • A reading of Kantian 'things in themselves' and 'things as they appear' that's more like Allison's than Langton's
  • The idea that some of Nineteenth-Century science may be read as an attempt to probe the inaccessible 'thing in itself'
  • The idea that chemistry and physics place different epistemic conditions upon how we perceive objects
  • The idea that 'deeper' concepts are encapsulated in the ontology of the object.
  • The idea that progress is made through a process of iterating the imports / exports.
By the way, programming has a few more terms that are of interest to this sort of philosophy of science. To get a sense of how suggestive some of them are, consider 'refactoring' and 'reverse engineering' (the latter is not from object oriented programming, but is a key hacking / security technology).