Most organisations use a 5x5 risk matrix of some description when assessing the level of risk across the business. On one axis is a sliding scale of expected frequency over a period of time (typically 12 months); on the other is the impact of the incident, which is itself a composite of different loss dimensions such as financial, customer impact, regulatory, staff health, safety and morale, and so forth. This works wonderfully well in highlighting whether a particular event or scenario will land the business inside or outside of risk appetite, and therefore whether a risk requires further attention or should simply be entered into the risk register, accepted and monitored. When I say this works wonderfully well, what I mean is it works well when you have suitable data on which to base your assessment. If, for example, you work in OH&S and want to risk assess an existing piece of machinery, such as a car, you can look at historical data on the frequency of accidents and the amount of loss. All of this information is already being recorded in the loss database, or within an incident report form awaiting entry into it. This makes life nice and easy, as you can make a reasonable assumption that, left unchanged, whatever frequency and impact occurred last year will occur again in the following year within a small margin of variance.
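The matrix mechanics above can be sketched in a few lines. This is a minimal illustration only; the scale labels and the appetite threshold are hypothetical placeholders, as every organisation calibrates these against its own risk appetite statement.

```python
# Hypothetical 1-5 scales; real organisations define their own labels.
LIKELIHOOD = ["rare", "unlikely", "possible", "likely", "almost certain"]
IMPACT = ["insignificant", "minor", "moderate", "major", "severe"]

# Assumed threshold: scores above this fall outside risk appetite.
RISK_APPETITE_THRESHOLD = 12

def risk_score(likelihood: str, impact: str) -> int:
    """Score = likelihood rank x impact rank, each on a 1-5 scale."""
    return (LIKELIHOOD.index(likelihood) + 1) * (IMPACT.index(impact) + 1)

def within_appetite(likelihood: str, impact: str) -> bool:
    """Inside appetite: register, accept and monitor. Outside: treat."""
    return risk_score(likelihood, impact) <= RISK_APPETITE_THRESHOLD

# A "likely" event with "moderate" impact scores 4 x 3 = 12: monitor.
# The same event with "major" impact scores 4 x 4 = 16: needs treatment.
```

The multiplication of the two ranks is one common convention; some matrices instead look scores up in a hand-tuned grid so that, for instance, any "severe" impact is escalated regardless of likelihood.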
The challenge arises when estimating IT losses. Few organisations have suitable measures in place to record and capture frequency and impact, let alone traceability back to how effective existing controls are in reducing impact and/or likelihood. In typical organisational segmentation, architecture and design respond to new business requirements, and operations are left to run whatever is handed over from the build team. Rarely does the business understand, or care about, the operational impact of its new project, until of course something doesn't work and everyone gets in a big meeting room and looks at each other to discuss how the situation can be remediated.
While insurance companies can assess likelihood and impact from historical data, down to the street and house number, to adjust insurance premiums, the same cannot be said for IT actuarial data. Finance companies can use predictor variables and historical customer data to identify, with reasonable accuracy, customers who are likely to default on loans before the default takes place. Yet ask any organisation the likelihood and impact of unintentional data loss over the next 12 months and sadly you will get either wild speculation or blank stares. Likewise, ask the same organisation how much the existing firewall infrastructure reduced losses over the prior 12 months in maintaining customer trust and confidence, and you will probably be met with silence. For a variety of reasons the industry cannot draw on third-party aggregated data to assist in investment decisions.
Organisations should be looking internally to collect and aggregate this data. By tracking actual frequency and loss data over time, the organisation can build a profile of historical risk. By expanding this tracking, existing controls can be tested for effectiveness. Time-driven activity-based costing is one tool that could be leveraged to assess control cost using a total cost of ownership (TCO) model. With effectiveness measures in place, risk owners are better informed about the lifecycle cost of implementing controls versus the potential loss. This can flow through to a relaxation or tightening of specific IT policies, standards and guidelines.
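The control-cost-versus-loss comparison described above can be sketched as a simple annualised loss expectancy (ALE) calculation. All figures and record structures here are hypothetical; the point is only to show how tracked loss data and a control's annual TCO feed a tighten-or-relax decision.

```python
from dataclasses import dataclass

@dataclass
class LossEvent:
    """A hypothetical record from an internal loss database."""
    year: int
    loss: float  # direct financial loss in dollars

def annualised_loss(events: list[LossEvent], years: int) -> float:
    """Average yearly loss over the observation window."""
    return sum(e.loss for e in events) / years

def control_is_justified(ale_before: float, ale_after: float,
                         annual_control_tco: float) -> bool:
    """A control pays for itself when the loss reduction exceeds its TCO."""
    return (ale_before - ale_after) > annual_control_tco

# Hypothetical figures: three years of incidents before the control...
before = [LossEvent(2021, 90_000), LossEvent(2022, 60_000),
          LossEvent(2023, 120_000)]
ale_before = annualised_loss(before, 3)  # 90,000 per year

# ...an estimated residual ALE of 20,000 with the control in place,
# and an annual control TCO of 40,000: the 70,000 reduction justifies it.
print(control_is_justified(ale_before, 20_000, 40_000))
```

In practice the "after" figure is the hard part, which is exactly why the effectiveness measures argued for above matter: without them, the residual ALE is a guess rather than a measurement.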
With a robust actuarial database, an organisation can not only clearly articulate the value of existing controls such as firewalls, IPS, DR plans and anti-virus, but can also evaluate the risk of new projects and technology effectively, rather than relying on guesstimates or blank faces.