Companies have to navigate the tricky balance of minimising the risks of shadow AI while getting the most out of GenAI. The solution lies in the use of sanctioned tools that can be combined with targeted rollout measures to create a secure and efficient working environment.
Key facts at a glance
What are the risks associated with shadow AI?
- Employees are increasingly using unauthorised GenAI tools (shadow AI) and thereby putting data protection, compliance and the company’s reputation at risk.
- Public tools may store the data entered and use it for training purposes.
What are the reasons behind this careless use of AI?
- GenAI tools are easily accessible, easy to use and free in some cases.
- The apparent ‘empathy’ of the AI promotes trust, even though the tools are continuously collecting and storing data.
Solution: using authorised GenAI tools
- Companies should offer their employees tested, secure tools such as Microsoft 365 Copilot Chat that meet data protection and compliance requirements.
Why the successful introduction of GenAI requires more than just technology
- Introducing the technology alone is not enough – change management and active support for employees are crucial.
- Raising awareness and training promote secure practices.
Discover the world of Microsoft 365 Copilot with Swisscom
We offer companies comprehensive support for all aspects of Microsoft 365 Copilot. We accompany you during the introduction and work with you to develop deployment scenarios.
Generative artificial intelligence (GenAI) has long since become a part of everyday working life – at a pace that has often run ahead of strategies, guidelines and governance. Employees use GenAI tools intuitively and with the best of intentions. But this is precisely where an underestimated risk arises: shadow AI. When AI applications are deployed outside the controlled IT environment, companies lose control over sensitive data, which in the worst case can lead to violations of data protection law.
For companies, shadow AI is not a theoretical threat, but rather a management issue. The problem is not the use of AI per se, but the lack of a framework. What data is appropriate for use? Which AI applications are allowed? And who bears responsibility if business or personal data is used in unauthorised systems? A ban alone is not enough – users would simply circumvent it, and the risk would remain.
A responsible approach to GenAI therefore requires companies to provide their employees with secure, compliant alternatives while at the same time defining clear guidelines for the use of AI. Actively managing GenAI not only protects data and the company’s reputation, but also lays the groundwork for using the productivity potential of AI in a responsible manner.
Significant increase in shadow AI
As the use of artificial intelligence increases, so does the risk of shadow AI. According to Microsoft’s 2024 Work Trend Index, 78% of AI users are already bringing their own tools to work. It’s important to understand that AI technologies are subject to the same regulations as other data processing operations, especially when it comes to personal data. This underscores the need for companies to strengthen their IT departments and ensure that they provide their employees with sanctioned offerings – i.e. GenAI tools that are secure, fall under IT governance and meet compliance requirements.
Why are we careless with GenAI tools?
Why do many employees use various GenAI tools with little apparent awareness of the risks? Factors such as targeted marketing, the unavailability of suitable tools in companies and a real lack of knowledge about data protection, compliance, artificial intelligence and more can lead to careless handling of technology, and GenAI in particular. The pressure to perform and the temptation to achieve rapid efficiency gains make shadow AI all the more appealing. This is particularly the case when there are no clear internal guidelines or official AI tools in place. Two additional aspects also play a key role in the use of shadow AI: easy accessibility and the apparent empathy of AI.
Easy and free access: GenAI tools are easily accessible via the web, appear to cost nothing and are extremely user-friendly – in short: they’re just plain practical. Users can fire off a question and receive a response just as fast, getting quick wins that support the daily workflow. In scenarios like this, they are focused on the efficiency gains and not on data protection.
AI – the empathetic friend and helper: GenAI tools appear to be patient, extremely friendly and empathetic helpers looking to take the stress out of office life. The quick wins and positive user experience foster parasocial attachment and trust in GenAI tools. We can begin to feel like there are no consequences to sharing information with them – an assumption that entails considerable risks, especially in a business context.
Authorising GenAI tools in Swiss companies
Shadow AI cannot be contained with technical measures alone. Even approved and safe GenAI tools are only beneficial if they are embedded in a clearly defined framework. For companies, this means that the decisive factor isn’t just which AI is used, but how it is allowed to be used. An approach based on three key guiding principles has proven successful:
- Offering employees secure GenAI tools: employees need an attractive, compliant alternative to public GenAI services.
- Defining clear rules: what information can be processed with AI, and what can’t?
- Supporting the introduction: training, raising awareness and clear communication reduce the use of shadow AI more effectively than bans.
The key to a data protection-compliant solution is to offer employees an AI tool with built-in security and compliance functions. Such a tool provides high data protection standards and, when used correctly, can be monitored by the IT department or the IT service provider to ensure that internal data security and compliance policies are observed.
Use of shadow AI in larger companies
Regulatory implications
Shadow AI increases the risk of compliance and data protection breaches, particularly in the context of international regulations (e.g. EU AI Act) and cross-border business models.
Binding AI governance is indispensable
Clear roles, responsibilities and company-wide guidelines (e.g. AI acceptable use policies) are essential in order to implement AI in a controlled and scalable manner.
Additional technical monitoring
Authorised GenAI tools need to be complemented by technical protection and monitoring measures to effectively detect and mitigate unauthorised use of AI.
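As a minimal illustration of what such technical monitoring could look like, the sketch below flags potential shadow-AI traffic by matching proxy-log entries against a list of known GenAI domains. The domain list, log format and function name are assumptions made for this example, not part of any official tooling.

```python
# Illustrative sketch only: flag unsanctioned GenAI use in simple proxy logs.
# Domain lists and the log format are assumptions for demonstration purposes.

GENAI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}
SANCTIONED_DOMAINS = {"copilot.microsoft.com"}  # approved company offering

def flag_shadow_ai(log_lines):
    """Return (user, domain) pairs where a user reached an unsanctioned GenAI service.

    Each log line is assumed to look like: '<timestamp> <user> <domain>'.
    """
    hits = []
    for line in log_lines:
        parts = line.split()
        if len(parts) < 3:
            continue  # skip malformed lines
        user, domain = parts[1], parts[2]
        if domain in GENAI_DOMAINS and domain not in SANCTIONED_DOMAINS:
            hits.append((user, domain))
    return hits

logs = [
    "2025-01-15T09:12 alice chat.openai.com",
    "2025-01-15T09:14 bob copilot.microsoft.com",
    "2025-01-15T09:20 carol claude.ai",
]
print(flag_shadow_ai(logs))  # alice and carol used unsanctioned services
```

In practice this role is typically filled by dedicated cloud access security tools rather than custom scripts, but the principle is the same: visibility first, then targeted follow-up.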
Management rather than prohibition
Measurable indicators (use of authorised tools, level of training, reduction of external AI services) enable effective management and continuous improvement with regard to shadow AI.
The right GenAI tools for companies
Microsoft has recognised this challenge and provides two solutions: Microsoft 365 Copilot Chat and Microsoft 365 Copilot, which is more deeply integrated into company data and the Microsoft 365 universe. Both are available to business customers with a Microsoft 365 subscription (free of charge in the case of Copilot Chat), giving internal IT departments extensive control over company data and reducing the risks of shadow AI.
Determining the right GenAI tool depends on the intended use. For conventional uses like brainstorming, formulating raw texts or researching a topic, a web-based tool such as Copilot Chat is sufficient. However, if employees need to be able to access company data like e-mails, Teams chats, documents, and so on, a comprehensive solution such as the fee-based Copilot is required.
On the other hand, if you want to tap into your internal – and potentially confidential – documents in order to offer your employees day-to-day support, you can use assistants such as the Swiss AI Assistant from Swisscom. It makes it possible to browse through manuals, instructions and other materials using natural language – in various languages – to find the answers to questions from day-to-day work situations.
What to consider when introducing GenAI tools
GenAI tools can be used to best advantage when their introduction is not reduced to the technical implementation alone. It’s important to bridge the gap between technological potential and actual use – in other words, change management is key. Companies can get the most out of these tools by actively guiding and supporting their employees in introducing and using them in line with the three guiding principles.
This support effort includes building awareness of data security and compliance as well as providing targeted training in the practical use of GenAI tools. Such measures are essential to exploit their full technological potential and to ensure that employees use GenAI profitably and responsibly without resorting to shadow AI.
White paper: AI regulation and AI governance
Artificial intelligence brings with it new legal and ethical challenges. A clear understanding of the applicable regulations is crucial to ensure compliant use of the technology. Our white paper provides you with an overview of the current AI regulations in Switzerland and the EU, explains their relevance for companies and authorities in Switzerland, and illustrates the benefits of effective AI governance using Swisscom as an example.