Time to Scan for Emerging Risks

Directors and officers have a duty to address reasonably foreseeable risks. They need to be satisfied that their risk framework, risk appetite and management reporting are appropriate for the circumstances.

So what’s on the horizon that you need to consider?

Data privacy breaches – we know the impact such breaches have had on entities such as Medibank and Optus. We also know companies are collecting significantly more personal data that may be captured by the Privacy Act. At a recent AICD governance conference, the First Assistant Director-General of the Australian Signals Directorate (ASD) indicated cyber attacks now cost more, cause longer interruptions and carry greater downstream reputational risks. The key question you need to ask is “Do you know what you would do if your data were hacked?”

To prepare, companies need to understand what data they hold, where it is held and how secure the systems that store it are. Critically, what controls do you have in place, e.g., regular whole-of-team scenario testing, penetration testing, phishing training, use of multi-factor authentication, strong patching procedures, backup processes and relationships with relevant advisers? To help benchmark the effectiveness of your controls, the ASD provides a list of recommended cyber security frameworks organisations should consider – Cyber Security | Australian Signals Directorate (asd.gov.au)
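Even a rough data inventory can be started with simple tooling before dedicated data-discovery platforms are engaged. The sketch below is purely illustrative: it scans a hypothetical folder of exported text files for patterns that resemble personal data (email addresses, Australian phone numbers and TFN-like numbers). The folder path, patterns and file types are assumptions for the example; a real exercise would also cover databases, SaaS platforms and backups.

```python
# Illustrative sketch only: scan a (hypothetical) shared-drive export for text
# files containing patterns that look like personal data. Real data-discovery
# tooling is far more sophisticated; this simply shows the kind of inventory
# question boards should expect management to be able to answer.
import re
from pathlib import Path

# Hypothetical location of exported documents to review.
SCAN_ROOT = Path("./shared_drive_export")

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "au_phone": re.compile(r"\b(?:\+61|0)[2-478](?:[ -]?\d){8}\b"),
    "tfn_like_number": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{3}\b"),
}

def scan(root: Path) -> dict[str, int]:
    """Count files under `root` that contain each personal-data pattern."""
    counts = {name: 0 for name in PATTERNS}
    for path in root.rglob("*.txt"):
        text = path.read_text(errors="ignore")
        for name, pattern in PATTERNS.items():
            if pattern.search(text):
                counts[name] += 1
    return counts

if __name__ == "__main__":
    if SCAN_ROOT.exists():
        for name, count in scan(SCAN_ROOT).items():
            print(f"{name}: found in {count} file(s)")
    else:
        print(f"Nothing to scan: {SCAN_ROOT} does not exist")
```

The output of a scan like this is a starting point for the board conversation about what data is held, where it lives and whether it still needs to be retained.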

AI technology – it’s the buzzword for the next big thing that can create significant operational efficiencies and personalise customer experiences. In essence, it is the development of computer systems able to perform tasks that normally require human intelligence, using machine learning algorithms. Generative AI tools, such as ChatGPT, typically use large language models trained on vast amounts of real-world data to create human-like responses.
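To make the mechanics concrete, the sketch below shows how an application typically calls a hosted generative AI model. It assumes the `openai` Python package (version 1 or later) and an `OPENAI_API_KEY` environment variable; the model name and prompts are examples only, not a recommendation of any particular vendor or product.

```python
# Illustrative only: how an application might call a hosted generative AI model.
# Assumes the `openai` Python package (v1+) is installed and OPENAI_API_KEY is
# set in the environment; the model name is an example, not a recommendation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[
        {"role": "system", "content": "You are a customer-service assistant."},
        {"role": "user", "content": "Summarise our refund policy in two sentences."},
    ],
)

print(response.choices[0].message.content)
```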

However, as with any new technology, there are risks. These include data poisoning (manipulation of AI model data with inaccuracies), AI hallucinations (output that may not be factually correct), AI model stealing (where a malicious actor replicates a model and steals the IP) and data privacy breaches. All have the potential to cause serious harm and reputational damage. Companies that intend to leverage the technology need to consider establishing an appropriate controls framework, supported by an AI governance policy that addresses legal and ethical implications, data privacy, data security, code/algorithm development and auditing of outputs.
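One practical element of such a framework is auditing of outputs. As a minimal sketch, with field names, the flagging rule and the log location all assumptions for illustration, the snippet below records every prompt and response with a timestamp and a simple “needs review” flag, so a human reviewer can sample and check answers later.

```python
# Illustrative sketch of an AI output audit log: each prompt/response pair is
# recorded with a timestamp and a simple "needs_review" flag. Field names, the
# flagging heuristic and the log location are assumptions for the example.
import json
from datetime import datetime, timezone
from pathlib import Path

AUDIT_LOG = Path("ai_audit_log.jsonl")  # hypothetical log location

def log_interaction(prompt: str, response: str) -> dict:
    """Append one prompt/response pair to the audit log and return the record."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "response": response,
        # Crude heuristic: flag responses that cite no source for human review.
        "needs_review": "source:" not in response.lower(),
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

if __name__ == "__main__":
    rec = log_interaction(
        "What is our standard warranty period?",
        "Our standard warranty period is 12 months. Source: warranty policy v3.",
    )
    print("Flagged for review:", rec["needs_review"])
```

A log of this kind also gives internal audit something concrete to sample when testing whether the AI governance policy is operating as intended.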

Given how rapidly the above risks are evolving, not to mention climate change, financial sustainability and regulatory compliance, it is important that organisations frequently scan their environment for changing risks that may impact their ability to achieve their strategic plan. Foreseeing emerging risks enables organisations to anticipate disruptions so they can build control capacity and preparedness to act, turning risk into opportunity.
