‘Shadow AI poses a risk to companies’ data security and compliance’

Employees want to use AI tools, whether or not their employer provides them. This shadow AI entails risks – but also opportunities, as Beni Eugster, Cloud Operation and Security Officer at Swisscom, explains in an interview.

April 2025, text: Andreas Heer, 5 min.

Generative artificial intelligence (GenAI) is useful in daily work. And employees want to use tools such as Microsoft Copilot, ChatGPT, Perplexity.ai and Google Gemini. If employers do not provide the relevant resources, employees help themselves, using the tools of their choice through private accounts. Microsoft’s Work Trend Index 2024 found that about 80 per cent of those who use GenAI bring their own tools to work. This ‘shadow AI’ presents companies with similar issues to shadow IT, i.e. the use of private, non-approved devices and cloud services for work. In this interview, Beni Eugster explains where the most important risks lie – and why shadow AI also offers opportunities.

Beni Eugster, how big is the problem of shadow AI in business?

This varies greatly depending on the industry, company size, IT structure and the criticality of the data and services involved. However, as more and more companies adopt AI technologies, the risk of unsupervised AI use that has not been explicitly authorised is also increasing. Shadow AI creates significant risks in respect of data security, compliance and company reputation. The main problem is that, as with shadow IT, there are a lot of free and cheap AI services on the internet. This increases the risk of employees turning to non-approved services to work more efficiently – a desire that, in itself, also represents an opportunity for companies.


What is the risk of shadow AI?

The main problems with shadow AI are data security and compliance. Depending on the purpose of use, a variety of data may be affected, including customer data, employee information, in-house communication, internal vulnerabilities, passwords, annual reports, code and other sensitive information that is normally protected by IT security policies and measures. AI tools should be treated like any other form of data processing. This means the same regulations apply as, for example, for personal data. We therefore have to carefully analyse the risks and the flow of data.

We are also seeing new, AI-specific regulations emerging that cover the use of artificial intelligence in connection with sensitive data and, more generally, the deployment of AI models trained on specific data.

From a data security and compliance perspective, what is the issue with employees using shadow AI with business data?

When this happens, IT departments often have no overview or control over the data processed by these AI tools. This can lead to confidential data being saved on insecure platforms, the data being used to train the models or unauthorised persons gaining access to the data. In addition, the use of shadow AI may violate legal and contractual compliance regulations governing the secure handling and storage of data.

If companies detect a security incident related to shadow AI, what should they do?

In that case, they should take immediate action to investigate the incident and minimise its impact – as with any security incident. This could include a forensic analysis and a risk assessment, for example together with the SOC or CSIRT. The analysis may involve new questions, such as whether the data entered will be used for training and whether it can be deleted again. Measures should then be taken to prevent similar incidents in the future.

Employees use shadow AI because they don’t have official access to the AI tools of their choice at work. How can companies better respond to their needs when it comes to using GenAI?

Companies can develop formal processes to review and approve new AI applications and tools, and to provide training and support for employees on how to use AI tools securely. Instead of imposing an absolute ban, which often proves to be ineffective, organisations should draw up a strategic plan for the safe use of these applications. Such a plan could, for example, include providing employees with an internal, private GPT model rather than having them use one available to the general public.
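To illustrate that last point, the sketch below shows how an employee-facing tool could be pointed at a company-hosted model instead of a public service, using an OpenAI-compatible client. The gateway URL, model name and environment variable are hypothetical placeholders; the actual setup depends on how the internal model is hosted.

# Minimal sketch: routing requests to an internally hosted model via an
# OpenAI-compatible API instead of a public GenAI service. All names below
# (gateway URL, model name, environment variable) are hypothetical.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://ai-gateway.internal.example.com/v1",  # hypothetical internal gateway
    api_key=os.environ["INTERNAL_AI_API_KEY"],              # credential issued by the IT department
)

response = client.chat.completions.create(
    model="internal-gpt",  # placeholder name for the company-hosted model
    messages=[
        {"role": "user", "content": "Summarise the key points of this meeting transcript."},
    ],
)
print(response.choices[0].message.content)

In practice, such an offering would typically sit behind the company’s own identity management and logging, so the IT department keeps the overview it lacks with shadow AI.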

We know from experience that open communication between employees, IT departments and security teams is helpful in finding out which AI tools are useful and beneficial for a company, and then defining safety measures for the risks at the same time. If a company understands the needs of its employees and the business, it can offer useful AI tools within a regulated framework.

 ‘Employees want to use AI tools, whether or not their employer provides them. This shadow AI entails risks – but also opportunities.’

Beni Eugster, Cloud Operation and Security Officer at Swisscom

What technical and organisational measures can companies take to restrict the use of shadow AI?

Technical measures might include implementing solutions such as secure browsers and network proxies to identify or block unauthorised AI tools, as well as tools with SASE (Secure Access Service Edge) or CASB (Cloud Access Security Broker) features. Such tools were built to address issues such as data loss prevention (DLP) and can therefore help ensure the safe use of AI. There is also a new generation of tools that focus specifically on shadow AI and support its secure deployment.
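To make the DLP idea concrete, here is a deliberately simplified sketch of the kind of check a proxy or gateway might apply before forwarding a prompt to an external AI service. The patterns and the block/allow logic are illustrative assumptions only, not the configuration of any specific SASE or CASB product.

import re

# Deliberately simplified DLP-style check: the patterns and decision logic
# are illustrative assumptions, not a real SASE/CASB configuration.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal_marker": re.compile(r"\b(?:confidential|internal only)\b", re.IGNORECASE),
    "password": re.compile(r"(?i)password\s*[:=]\s*\S+"),
}

def findings(prompt: str) -> list[str]:
    """Return the names of all sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

def forward_allowed(prompt: str) -> bool:
    """Block the request if any sensitive pattern is detected."""
    matches = findings(prompt)
    if matches:
        print(f"Blocked: prompt matched {', '.join(matches)}")
        return False
    return True

# Example: this prompt would be blocked because it contains a password.
forward_allowed("Please debug this config for me, password=Sup3rSecret!")

Real DLP engines in SASE or CASB products work with far richer classifiers and policies, but the principle is the same: inspect outbound data before it leaves the company’s control.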

Organisational measures might include policies and procedures that require IT department approval for the adoption of new AI tools, as well as training on the risks associated with shadow AI.

How can cybersecurity even keep pace with the rapid development of AI?

Companies should invest in new security technologies as quickly as possible, conduct ongoing security training and establish strong policies and procedures for the safe use of AI. They can also hire or train cybersecurity professionals who specialise in AI security risks.

I think there’s still a lot ahead of us here. And because the technology is developing so quickly, it’s important for companies to get on board early and familiarise themselves with it, for example through small-scale trials. They can then use the results to make better-informed decisions about company-wide deployment.

Beni Eugster

Beni Eugster is Cloud Operation and Security Officer at the Swisscom Outpost in Silicon Valley. He deals with the increasingly critical issue of AI security and tests new solutions such as automated pentesting tools and guardrails for AI applications.
