In an era where artificial intelligence is woven into every aspect of business, the hidden dangers of unsanctioned AI tools are increasingly coming to light. In the following article, cyber specialist Anita Molitor examines the phenomenon of “Shadow AI” and highlights the urgent need for awareness and vigilance across organisations.
Nowadays, we all use AI in some form, regardless of age or profession. It is everywhere.
It’s easier, faster and saves us a lot of time. The risks have been widely publicised, but one little-known risk is creeping to the fore: Shadow AI.
What is Shadow AI?
Shadow AI is not a new product that we need to take a closer look at. Rather, it describes the uncoordinated spread of multiple AI applications across one organisation, with different departments implementing different programmes.
Imagine you run a company and you are excited about, let’s say, Copilot. You implement it, train your employees, and think you’re prepared for anything. But what you may have forgotten is that not every department needs only Copilot. The marketing department wants something different, the sales department wants something else again, and the creativity and enthusiasm of individual employees will drive them to download all kinds of AI programmes based on personal preference. On average, companies register more than six particularly risky GenAI applications in their infrastructure.
Why is this a problem?
This scenario is Shadow AI. Employees install their own AI tools without informing the IT department. If the IT department does not know about these programmes, how can it protect the company or react if necessary?
Training should be paramount
Training is paramount here. Employees need to know about and fully understand the risks. They need to be aware that even when they install a programme on their personal devices, they cannot feed it company data indiscriminately.
According to a recent article in Infosecurity Magazine, “nearly 74% of ChatGPT usage in corporate environments happens through personal accounts. That means enterprise controls like data loss prevention (DLP), encryption, or logging are nowhere in sight. Combine that with the 38% of employees who admit to inputting sensitive work data into AI tools without permission, and you’ve got a significant insider threat. While accidental, it’s no less dangerous than a user clicking on a link in a phishing email.”
What are the biggest risks?
There are several significant risks associated with Shadow AI that organisations must address. For instance:
- Meeting tools can store confidential conversations offsite, potentially exposing sensitive information.
- Developers who utilise AI for coding may inadvertently inject insecure code into applications, heightening vulnerabilities.
- Customer support teams relying on AI-powered chatbots face privacy concerns regarding the management of sensitive customer data.
These risks underscore the importance of safeguarding against unintended consequences when integrating AI into corporate environments.
What can we do?
Education, education, education, and transparency. Transparency is built on clear AI use policies that spell out in detail what is acceptable. Employees need to be told clearly which tools are allowed and which could pose a risk to the company.
It is also essential that businesses educate employees on how to use AI effectively. This includes guidance on appropriate prompting with AI tools, and on why careless prompting can lead to data leaks and compliance breaches.
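To make this concrete: one technical safeguard some organisations pair with prompting guidance is a pre-submission filter that redacts obviously sensitive patterns before a prompt leaves the company. The sketch below is purely illustrative — the patterns, placeholder labels, and function name are assumptions, not a reference to any specific product.

```python
import re

# Illustrative patterns a company might treat as sensitive before a
# prompt is sent to an external AI service (assumed examples only).
SENSITIVE_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace each sensitive match with a placeholder like [EMAIL]."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

example = "Summarise this mail from anna.client@example.com about account DE44500105175407324931."
print(redact_prompt(example))
# Output: Summarise this mail from [EMAIL] about account [IBAN].
```

A filter like this cannot catch everything, which is why the article’s emphasis on education remains the first line of defence.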
Monitoring systems are better than a total ban: if a company blocks a tool, employees will simply use their private devices to ask AI for help with company-related topics. IT departments must therefore educate themselves about employees’ needs and, where possible, work with them to build usage rules.
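As a minimal sketch of what monitoring (rather than blocking) could look like, the snippet below counts visits to known GenAI services in simplified proxy-log lines. The domain list and the “user domain” log format are illustrative assumptions, not a real log schema.

```python
from collections import Counter

# Assumed watchlist of GenAI service domains an IT team might monitor.
GENAI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}

def summarise_genai_usage(proxy_log_lines):
    """Count visits per GenAI domain from simple 'user domain' log lines."""
    counts = Counter()
    for line in proxy_log_lines:
        user, domain = line.split()
        if domain in GENAI_DOMAINS:
            counts[domain] += 1
    return counts

log = [
    "alice chat.openai.com",
    "bob intranet.example.com",
    "carol claude.ai",
    "alice chat.openai.com",
]
print(summarise_genai_usage(log))
# Output: Counter({'chat.openai.com': 2, 'claude.ai': 1})
```

The point of such a summary is visibility, not punishment: it tells the IT department which tools employees actually reach for, so usage rules can be built around real needs.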
More than corporate protections
It’s time to create a safe environment for AI because Shadow AI’s greatest risk lies in human behaviour, not the technology. This is about more than corporate protection – it ensures trust, compliance, and innovation. Act now: invest in education, foster transparency, and build frameworks to responsibly leverage AI’s potential while minimising risks.
HORIZON Risk Thought >> Fast Forward
The complexity of today’s risk environment is changing at an accelerating pace, making risk management even more challenging. We have created HORIZON, firstly as a print publication and now as a page for sharing the latest insights about ongoing transformations. Our risk specialists will continue to provide their expertise and knowledge to shine a light on the challenges of the future.
