People are very poor at risk analysis. Because of this, people “in the biz” come up with metrics to determine return on investment (ROI). This might seem cold-hearted, but since money is not infinite, we need some tool we can use to measure the effectiveness of each dollar we put into mitigating risks.
Let’s imagine that your mom had been killed by a falling piano. If we could mitigate this threat, and it would cost two-thirds of our budget, would you? What if random lunchmeat explosions also cost two-thirds of the budget to mitigate? What if falling pianos claim 10 people a year, and random lunchmeat explosions claim 100,000?
When you look at events through the correct lens, it’s possible to start to understand what’s really going on. These are the tools that actuaries use to model the world, and they’re also the tools that security folks should use to model their spending against threats.
There is general agreement about risk, then, in the established regulatory practices of several developed countries: risks are deemed unacceptable if the annual fatality risk is higher than 1 in 10,000 or perhaps higher than 1 in 100,000, and acceptable if the figure is lower than 1 in 1 million or 1 in 2 million. Between these two ranges is an area in which risk might be considered “tolerable.”
These established considerations are designed to provide a viable, if somewhat rough, guideline for public policy. In all cases, measures and regulations intended to reduce risk must satisfy essential cost-benefit considerations. Clearly, hazards that fall in the unacceptable range should command the most attention and resources. Those in the tolerable range may also warrant consideration — but since they are less urgent, they should be combated with relatively inexpensive measures. Those hazards in the acceptable range are of little, or even negligible, concern, so precautions to reduce their risks even further would scarcely be worth pursuing unless they are remarkably inexpensive.
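The banding described above is easy to make concrete. The sketch below encodes the thresholds from the text (1 in 100,000 as the unacceptable boundary, 1 in 1 million as the acceptable one); the function name and example inputs are mine, chosen for illustration.

```python
def classify_annual_fatality_risk(risk):
    """Classify an annual fatality risk (probability per person per year)
    into the regulatory bands described in the text."""
    if risk >= 1 / 100_000:
        return "unacceptable"   # warrants the most attention and resources
    if risk <= 1 / 1_000_000:
        return "acceptable"     # further precautions scarcely worth pursuing
    return "tolerable"          # combat with relatively inexpensive measures

# Hypothetical hazards at different risk levels:
print(classify_annual_fatality_risk(1 / 10_000))     # unacceptable
print(classify_annual_fatality_risk(1 / 500_000))    # tolerable
print(classify_annual_fatality_risk(1 / 2_000_000))  # acceptable
```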
As can be seen, annual terrorism fatality risks, particularly for areas outside of war zones, are less than one in one million and therefore generally lie within the range regulators deem safe or acceptable, requiring no further regulations, particularly those likely to be expensive. They are similar to the risks of using home appliances (200 deaths per year in the United States) or of commercial aviation (103 deaths per year). Compared with dying at the hands of a terrorist, Americans are twice as likely to perish in a natural disaster and nearly a thousand times more likely to be killed in some type of accident. The same general conclusion holds when the full damage inflicted by terrorists — not only the loss of life but direct and indirect economic costs — is aggregated. As a hazard, terrorism, at least outside of war zones, does not inflict enough damage to justify substantially increasing expenditures to deal with it.
To border on becoming unacceptable by established risk conventions — that is, to reach an annual fatality risk of 1 in 100,000 — the number of fatalities from terrorist attacks in the United States and Canada would have to increase 35-fold; in Great Britain (excluding Northern Ireland), more than 50-fold; and in Australia, more than 70-fold. For the United States, this would mean experiencing attacks on the scale of 9/11 at least once a year, or 18 Oklahoma City bombings every year.
Makes you think, eh?