
Creating Effective Incident Reporting


BY DR. MIKE HUMPHREY, FORMER HEAD OF SECURITY, UK NATIONAL CRIME AGENCY

Effective reporting of security incidents is vital. However, today, many incidents still go unreported.

What are the barriers to good‐quality reporting, and how can institutions overcome them? The lessons are broad and are not simply for information security; indeed, they apply to any area where individuals must report when things go wrong.


The Case for Good‐Quality Incident Data

Security professionals and academics believe that the true scale of information security incidents is unknown due to under‐reporting. For example, at a conference of chief information security officers (CISOs) in July 2017, members of the audience were asked the following question: “How confident are you that your staff know how to report strange activity or a potential security incident?” Only 22% said they were very confident, while 50% were fairly confident and 28% had low confidence.

This is a real problem for risk management, which relies on accurate and comprehensive empirical incident report data to make informed risk assessment and risk management judgements. When such data are partial, decisions related to resourcing and expenditure may be focussed on the wrong issues. Moreover, there is a danger that incidents that are reported are given higher prominence, simply because that is the only available information.

For example, electronically gathered incident reports from audit logs or intrusion detection systems (IDS) are automatically generated and are therefore more readily visible. Since they are tangible, these incident logs are often used to justify risk‐based decisions. However, without a wider perspective of the true nature of the type and volume of incidents and near misses, this may give undue prominence to the electronic log indicators while masking the real threat.
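To see how this distortion can arise, consider a deliberately simple illustration. The figures below are invented purely for the sake of the example, not drawn from any real environment: suppose staff errors make up the majority of actual incidents, but intrusion detection systems capture almost all network intrusions while staff report only a small fraction of their own mistakes. A few lines of Python show how the observed picture inverts the true one:

    # All numbers here are hypothetical, chosen only to illustrate reporting bias.
    true_incidents = {"network_intrusion": 40, "staff_error": 60}      # actual annual mix
    capture_rate = {"network_intrusion": 0.95, "staff_error": 0.10}    # share that gets reported

    observed = {kind: count * capture_rate[kind] for kind, count in true_incidents.items()}
    total = sum(observed.values())
    for kind, count in observed.items():
        print(f"{kind}: {count:.0f} reports ({count / total:.0%} of observed incidents)")

    # Intrusions account for roughly 86% of the reports, even though staff
    # errors are 60% of what actually happens.

In this hypothetical case, a risk manager relying only on what is logged would conclude that network intrusions are the dominant threat, when the larger exposure lies elsewhere.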

This perceived lack of data could also undermine efforts to share incident and threat information between communities. While the incidents that are reported provide some basis for risk assessment and management, they may contain unknown biases that could affect any such assessments. And if organizations have little to share, then there is little to gain.

But there are also increasing pressures from regulators in the form of mandatory reporting of security incidents (e.g., under GDPR legislation). So, for both reputational and regulatory reasons, firms need to know what is happening to their data assets and which security incidents could lead to breaches. For these tasks, they need staff to report incidents. Indeed, when assessing risk, incident reports that rely on staff are just as important as those gathered electronically.


Why Don't People Report Incidents?

There are a host of reasons people don't report incidents. For starters, if you expect to be blamed for making an error, or expect that no one will notice, there is an incentive simply not to report it.

But there are many other reasons for not reporting. Often, people don't think an incident is serious enough to bother reporting. This was confirmed by research by Plews and Ogan, who also found that if a mistake is corrected for the future, individuals often decide that they don't need to report it.

Making incident reporting mandatory doesn't always work, either. For example, Soderburg observed that patient safety incidents at health care facilities were going unreported, despite reporting being a mandatory requirement.

Rank can affect reporting too. Indeed, research has shown that doctors are far less likely to report incidents than nurses.

Another study found that new recruits were treated differently from existing employees, with mistakes made by new staff less likely to be recorded as security incidents. Without a central record of these incidents, it is harder to learn lessons and improve training for new recruits. Moreover, there is evidence that users often interpret formal work requirements in a local way, adapting processes to suit the informal norms of their environment.

The complexity of a system can be a key factor inhibiting reporting. For example, if you can't understand how the whole system works, then you are unlikely to understand the significance of a local failure. So, you may not see the need to report – or you may not understand what needs to be reported.


Overcoming Barriers: Critical Success Factors

Most organizations offer staff numerous avenues for reporting incidents, including forms, email, the intranet, phone lines and direct communication with line managers. But are these channels fit for purpose?

An effective reporting system needs to factor in human behaviour and attitudes to risk. Through a series of studies with information security professionals, our research identified four critical success factors (CSFs) for effective incident reporting:

  • Recognition by senior management that incidents will happen, and that employees must play a full and active part in the incident management process
  • Easy processes for creating and submitting a report. If reporting incidents is difficult, individuals will be less likely to submit them; this may particularly affect the reporting of near misses (see the sketch after this list)
  • Rapid, useful, accessible and intelligible feedback to the reporting community
  • Incident analysis that considers root causes and wider systems and processes, not just the initial impact assessment
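Of these, the second factor is the easiest to make concrete. The sketch below is a hypothetical illustration, not a description of any particular organization's tooling: a minimal Python reporting channel in which a short free‐text description is the only required field, everything else is optional or generated automatically, and the reporter immediately receives a reference number as acknowledgement.

    # Hypothetical sketch of a low-friction incident reporting channel.
    # The design goal is minimal effort: one mandatory free-text field,
    # with everything else optional or filled in automatically.

    import json
    import uuid
    from datetime import datetime, timezone
    from pathlib import Path

    REPORT_LOG = Path("incident_reports.jsonl")  # illustrative local store

    def submit_report(description: str, category: str = "unclassified",
                      near_miss: bool = False, reporter: str = "anonymous") -> str:
        """Record an incident or near miss with minimal required input."""
        if not description.strip():
            raise ValueError("A short description is the only required field.")
        report = {
            "id": str(uuid.uuid4()),                        # generated, not typed
            "submitted_at": datetime.now(timezone.utc).isoformat(),
            "description": description.strip(),
            "category": category,
            "near_miss": near_miss,                         # near misses matter too
            "reporter": reporter,                           # anonymity lowers the barrier
        }
        with REPORT_LOG.open("a", encoding="utf-8") as f:
            f.write(json.dumps(report) + "\n")
        return report["id"]

    if __name__ == "__main__":
        # A complete report, including a near miss, takes a single call:
        ref = submit_report("Received a convincing phishing email; clicked nothing.",
                            category="phishing", near_miss=True)
        print(f"Thank you. Your report reference is {ref}.")

The acknowledgement line is not decoration: echoing a reference number back to the reporter is the smallest first instance of the third factor, rapid and useful feedback.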

These factors complement the five stages of the British Standard ISO/IEC 27035 approach to managing incidents: plan and prepare; detection and reporting; assessment and decision; responses; and lessons learned.


Parting Thoughts

Incidents, like accidents, will happen. They are often preventable, but they still occur. Accepting that your organization inevitably will be, or already has been, subject to a security incident, the key thing is to make sure you are ready.

Keep in mind that you have to worry about third‐party vendors as well as your internal data and systems. If one of your key suppliers gets hacked, your company's sensitive data could very well be compromised. In any outsourcing arrangement, your organization is still responsible for the privacy and security of its data (including customer information) and still must report incidents.


FOUNDATIONAL QUESTIONS

For Practitioners

  • Do you have a clear and well‐understood incident reporting system?
  • Is it supported – and, importantly, also followed – by senior management?
  • Does your company have a blame culture or a learning culture?
  • Are those who report incidents supported to demonstrate to others that it is a learning culture?
  • Do you have a tested plan to put in place when a breach occurs?
  • Do you have prepared media lines to answer questions in the immediate aftermath of an incident and to hold the fort until more facts are known?

For Regulators

  • Is the intention to punish companies subject to a data breach, or to help them improve?
  • Do you encourage learning?
  • Do you have rules in place to ensure that organizations that commit infractions are not just punished but actually learn from their mistakes?
  • Do you focus on the overall process of incident reporting, as opposed to the incident that was the subject of regulatory intervention?

 

About the Author

Dr. Mike Humphrey was a police officer for 30 years, working in a variety of operational, research and planning, and IT roles, including being Head of Security at the UK National Crime Agency. He is a fellow of the Institute of Information Security Professionals and an elected member of the UK Information Assurance Advisory Council's (IAAC) management committee.
