From gatekeeper to enabler: how CISOs are strategically managing AI risks

Artificial intelligence is changing the rules of cybersecurity – turning known attack patterns into highly scalable fraud scenarios. For CISOs, it’s no longer just a question of defence, but a strategic response: how can AI risks be managed without slowing down innovation?

May 2026, Text Andreas Heer            4 Min.

In this article:

  • How AI is accelerating and professionalising attacks (e.g. CEO fraud, deepfakes, automated reconnaissance) and why response time is becoming a strategic factor.
  • How integrating AI in operations expands a company’s target area: chatbots, shadow AI and autonomous AI agents in particular pose new risks for cyberdefence.
  • This is why the most effective approach is neither bans nor uncontrolled growth, but a ‘golden path’ with clear AI governance combined with AI literacy in security awareness.

It’s Friday night, just after 5.00 p.m. The deputy CFO receives a call from the CEO. The voice sounds tense, but controlled. A strategically important deal is about to be concluded and the counterparty is demanding a last-minute down payment. Absolute confidentiality is crucial. Details to follow via e-mail.

The e-mail arrives a few minutes later. Error-free, in a familiar tone, with correct internal references. Everything seems consistent. The deputy CFO approves the payment.It’s not until Monday morning that it becomes clear that the CEO wasn’t even in a meeting at the time. The voice had been created synthetically, and the e-mail had been written with AI. The background information had come from publicly available sources and a small amount of targeted social engineering preparatory work.

What happened here wasn’t a highly complex cyberattack. It was a standardised, AI-supported fraud scenario that was implemented with minimal technological hurdles.

Attacks are getting faster – and better

This example of CEO fraud exemplifies a fundamental change in the threat landscape. Artificial intelligence allows attackers to scale social engineering, personalise content and vary attacks with high frequency. Quality and speed are increasing at the same time. AI is a fast-growing trend in cybersecurity. This is also reflected in the latest issue of the Swisscom Cybersecurity Threat Radar.

The situation is changing for CISOs. Cybercriminals can use AI to design more than just convincing phishing e-mails. ‘They also automate the detection of vulnerabilities, or reconnaissance, and can develop more sophisticated malware,’ says Collin Geisser, Lead Security Architect at Swisscom. This is demonstrated by VoidLink, a sophisticated Linux malware presumably created with AI that features functions that only resource-intensive APT groupings are normally capable of developing. And then there are audio and video deepfakes, which are becoming increasingly difficult to detect. The barrier to entry is getting lower even as the operational impact is rising.

These changes are impacting the Security Operation Center as well: manual triage processes are reaching their limits in terms of time. If attacks are orchestrated by machines, defences must also be machine-assisted, for example through AI-assisted alert triage or automated playbooks. Time is becoming a strategic factor for CISOs. Those who respond in hours will be outmatched by attackers who operate in minutes.

The new target: chatbots, shadow AI and AI agents

At the same time, companies are using AI operationally – with chatbots, automation solutions and agents. This poses a twofold challenge: AI intensifies both the threat on the part of attackers and the area of attack for companies. CISOs are therefore confronted with the question as to how to strategically manage AI risks without hindering the company’s innovation.

Alongside the heightened external threat situation, new internal areas of attack are emerging within the company. Employees are using LLM services (large language models), departments are experimenting with autonomous AI agents, chatbots are gaining access to internal knowledge databases. Innovation happens – often faster than governance can follow suit. And this is particularly the case when AI functions are increasingly included with the SaaS services being used.The risks arise primarily from uncontrolled use.

Typical risk areas include:

  • Uploading sensitive business data to public models 
  • Unregulated use of business data by GenAI providers, e.g. for LLM training 
  • Potential violation of data protection laws or industry-specific regulations 
  • Incomplete audit trails if GenAI service activities cannot be logged

Agentic AI – when software acts on its own

Agent-based systems create a new risk class. These systems not only generate content, but also independently execute actions on the web, in applications and in the local file system.

This leads to new threat scenarios. Security researchers recently demonstrated what these threats can look like. With Morris II, they developed an AI worm that can exfiltrate data and spread itself further via AI agents. 

AI agents pose the following risks:

  • Prompt injection through malicious e-mails and websites  
  • Installation of malware through malicious extensions (skills) for agents 
  • Data exfiltration, e.g. by entering login details on malicious websites

Finding the golden middle ground for the use of AI

Experience has shown that simply banning such services leads to shadow IT or, in this case, shadow AI. A ‘golden path’ approach is more fruitful: companies provide secure, business-compliant AI environments and define usage guidelines for them. ‘In practice, this means that flanking measures have already been implemented, such as a proxy alert when accessing AI services or blocking AI browsers. In general, however, AI tools aren’t blocked,’ says Geisser.

The key factor here is a differentiated policy that takes into account the different degrees of risk; for example:

  • Internal RAG systems with a controlled data basis as knowledge chatbots 
  • Use of public GenAI services with clear limitations 
  • Strict governance requirements for autonomous AI agents

This raises questions that every company has to answer for itself, says Geisser. ‘There’s a fine line between blocking innovation and strengthening security. Security needs to address this issue in a targeted manner.’

This includes answering questions such as:

  • What kinds of AI use do we consciously accept as a residual risk? 
  • Where is technical control mandatory, and where is governance sufficient? 
  • What AI risks are business risks, not security risks?

Alongside technical and organisational risks, the regulatory dimension of AI is also becoming increasingly important. Data protection laws, industry-specific requirements and new AI-specific regulations require transparency about where and how AI is used in a company, what data is processed and who is responsible for it. 
For CISOs, this means that AI risks cannot be handled in isolation as an IT issue, but must be integrated into existing governance, risk and compliance structures. It is less important to anticipate every regulatory detail than to establish clear responsibilities, risk classifications and documentation. This transforms regulation from a mandatory, reactive factor into a strategic tool that builds trust – both internally and externally – and enables the controlled use of AI in the first place.

Governance in the age of agents

Governance of AI agents must go further than for traditional applications. Every AI instance needs its own identity that can be used not only to identify it, but also to regulate it. The principles of least privilege should also apply to AI agents. Identities are the basis for making the actions of agents traceable and auditable. This also requires keeping an inventory of the deployed agents.

One possible approach for defining AI governance is a committee consisting of CISOs, data protection officers and business representatives. The aim is a common risk assessment and clear delineation of responsibilities.

Heightened security awareness

The benefits and convenience of GenAI mean that employees will use such services regardless of whether or not they are officially approved. Using such tools is not only a question of governance, but also an issue of security awareness in employees’ day-to-day work. It‘s less a matter of awareness in the strict sense of the word than AI literacy, i.e. skills in dealing with AI. Employees need to understand the strengths and limitations of models, including hallucinations and bias. This also includes employees being familiar with and able to understand the governance rules.

In the same way, employees need to be aware of the threats posed by AI-assisted cyberattacks.
Traditional phishing simulations fall short here. Awareness programs should therefore be expanded to include:

  • Handling AI-generated content: not everything that sounds plausible is 
  • Raising awareness of deepfakes: recognise clues, consider control questions 
  • Verification mechanisms for unusual requests: Review process for suspicious e-mails and messenger enquiries, queries via another channel  
  • Promoting a ‘human-in-the-loop’ culture: reviewing the proposed steps of an AI agent, checking the results

Geisser has also had good results with conducting targeted prompt injection sessions as part of the awareness measures: ‘Seeing how easy it is to fool even established players opens our eyes to the risks.’

The CISO as an enabler instead of a blocker

CISOs are facing a change in their role: those who instinctively reach for AI bans impede innovation and promote shadow AI. Yet those who allow AI unchecked risk losing data and control. The CISO’s strategic task is to define guardrails, enable innovation and build resilience against AI-based cyberattacks. The key is to find the golden path, regulating how privileged AI identities are handled and strengthening employees’ skills.

At the same time, CISOs need to explore the potential of GenAI in their own cyberdefence efforts. This ensures that the next attempt at CEO fraud can be detected and blocked at an early stage.

Swisscom Cybersecurity Threat Radar 2026: AI risks, supply chain attacks, digital sovereignty and OT security – an overview of the most important cyber trends.

More interesting articles