In our everyday lives, we constantly have to manage risks. Going down the stairs, you might slip and fall; riding in a car, you might get into an accident; the groceries you buy may be past their date and give you food poisoning.
And yet, despite all the risks, you do take the stairs, get into a car and eat. Why on Earth?
The reason is that all these risks are calculated and there are controls in place to minimize them. You have had plenty of practice walking up and down the stairs, a car is stuffed with brakes, airbags and other safety features, and the grocery store is expected to monitor the best-before dates (and so are you).
Of course, staying far away from cars is the only 100% sure way of not getting into a car accident, but in doing so you would also lose the benefit of fast transportation.
Thus every time you encounter a risky situation, you perform a little cost-benefit analysis. What can I gain by taking the risk, and what is its expected price?
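This little analysis can be made explicit. As a toy sketch with entirely made-up numbers, comparing the yearly price of a safety measure against the expected loss it prevents might look like this:

```python
# Toy cost-benefit check (all numbers are invented for illustration):
# a safety measure is worth buying if it costs less than the expected
# loss it prevents.
p_incident = 0.001          # assumed yearly probability of the incident
loss_if_incident = 50_000   # assumed cost if the incident happens
measure_cost = 40           # yearly price of the safety measure

expected_loss = p_incident * loss_if_incident  # 50.0 per year
worth_it = measure_cost < expected_loss

print(expected_loss)  # 50.0
print(worth_it)       # True
```

The same comparison, with much fuzzier numbers, is what you do implicitly every time you decide whether a precaution is worth its price.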
This decision is not always a conscious one. You may rely on experience gained long ago, or on rules and traditions that someone else has set for society. But one way or another, this cost-benefit reasoning is always there underneath.
In many cases the risk decision is really clear-cut. You have to eat to survive; food poisoning is rare, and medical assistance is usually near enough and well-equipped, so you will survive with very high probability.
But what do you do when the decision is not so straightforward? This situation is more likely to occur when there is little personal or societal experience, or when the domain is changing so rapidly that yesterday's best practices are not of much help today. The prime example of such a domain is of course information technology -- computers, networks, smart homes, intelligent robots, etc.
For instance, should I install an antivirus program on my computer? The cost is clear -- the license will have some price tag attached, and typically I even have to pay it every year. But what do I gain in return? Maybe some assurance that my files will be safe from damage or being silently stolen. But hey, I don't have much critical data on my home gaming PC anyway, right? Or maybe I can be more certain that there will be no botnet agent installed as part of malware. But then again -- the bots are used to attack someone else, not me, so why should I care?
Rightness or wrongness of this reasoning aside, we can see that the question of whether, or how much, one should invest in protecting the IT infrastructure is not an easy one to answer. This is already true for your home network, let alone large industrial infrastructures.
As we already said earlier, one big problem with the corresponding cost-benefit analysis is that the benefits are often, if not always, very hard to estimate. The ultimate goal of defense investments is to make sure that no bad things happen. But how can we estimate something that does not happen in the first place?
The short answer is -- you can't. In the worst case the best you can do is to use your intuition, and this is what makes IT risk assessment look a lot like art. You try to create something that corresponds to your intuitive notion of order or beauty, and hope that the result will be good enough in practice.
But as always, there is a longer answer, too, and that one is a bit more complicated. For one thing, complex infrastructures are composed of smaller components, and there may be something known about them.
For example, a company may keep its valuable digital assets, like product source code, on a server that is kept in a physically secured rack space. To get unauthorized access, the attacker would need to fool the guard and get through the door, and later make a successful exit holding a rack server in his hands. We may roughly estimate the success probability of each of these actions and conclude that the probability that ALL of them succeed is acceptably low. If not, we can invest additional resources in guard training, better doors, or video surveillance.
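Under the simplifying assumption that the steps are (roughly) independent, the attack succeeds only when every step does, so the per-step probabilities multiply. A minimal sketch with invented numbers:

```python
import math

# Hypothetical per-step success probabilities for the attack described
# above (the numbers are made up for illustration).
steps = {
    "fool_guard": 0.1,        # talk past the guard
    "open_door": 0.05,        # get through the secured door
    "exit_with_server": 0.2,  # walk out carrying the server
}

# If the steps are independent, the whole attack succeeds only when
# all of them do, so the probabilities multiply.
p_attack = math.prod(steps.values())
print(round(p_attack, 6))  # 0.001 -- about one in a thousand
```

If that number is still too high for comfort, lowering any single step's probability (better doors, better training) lowers the product accordingly.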
Assuming that risk assessment of the individual system components is a somewhat more manageable task, the question of deriving the risk level of the whole infrastructure becomes more scientific. The soundness of such a derivation can be studied, reasoned about, tested, confirmed and overturned -- everything we expect of a scientific method.
Of course, having a sound method of deriving the system risk level from its components does not yet solve all the problems. Some of the components may be hard to assess themselves (in which case we may try to split them further). All the low-level estimates contain some margin of error that may lead to large imprecisions when amplified to the system level. And last but not least -- our underlying system model itself may be incorrect, giving rise to results that make no sense in practice.
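To see how such amplification can happen, consider a sketch with invented numbers: a chain of ten components whose individual failure probabilities are each uncertain by a factor of two in either direction. The system-level estimate inherits, and compounds, that uncertainty.

```python
# Sketch of error amplification (all numbers invented): ten components,
# each estimated to fail with probability 0.01, but the true value could
# plausibly be anywhere between half and double that estimate.
n = 10
est = 0.01
low, high = est / 2, est * 2

def p_any_failure(p: float, n: int) -> float:
    """Probability that at least one of n independent components fails."""
    return 1 - (1 - p) ** n

print(p_any_failure(low, n))   # ~0.049
print(p_any_failure(est, n))   # ~0.096
print(p_any_failure(high, n))  # ~0.18
```

A factor-of-two uncertainty in each component thus turns into nearly a factor-of-four spread at the system level, and the gap only widens as the system grows.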
Nevertheless, the need for more rigorous estimates as the basis for security investment decisions is definitely there. We are far from solving this problem, but the first steps we have managed to make look promising. And without difficult problems to solve, society would have no need for researchers in the first place.