Analysts and agents: AI in cyberdefence 

The strength of artificial intelligence lies in its ability to process large volumes of data – such as those generated in cyberdefence. But what are the possible applications, what do the trends show, and how can AI agents support professionals? We take a look into the not-so-distant future. 

Text: Andreas Heer, Image: Swisscom, Date: September 2025 – 12 min.

Artificial intelligence (AI) lends itself perfectly to cyberdefence, for example in Security Operations Centres (SOCs), where analysts are inundated with alerts and log file warnings. This is where AI comes into its own – in the processing and analysis of large volumes of data. So is AI a cure for the dreaded alert fatigue – when important messages get overlooked due to their sheer volume? This article explores how (generative) artificial intelligence can be used for cyberdefence and what developments and trends are emerging. 

Which AI to use? 

SOCs have been using AI for a long time, though it is worth clarifying what is meant by that. Machine learning (ML) models have consistently proven their worth in generating comprehensible forecasts from large volumes of data, assessing the risk posed by threats and detecting anomalies. 

‘We’re seeing a trend towards AI agents capable of handling routine incidents on their own’

Dusan Vuksanovic, CEO of the Swisscom Outpost in Palo Alto

That is not yet the case for the large language models (LLMs) used in generative artificial intelligence, or GenAI for short. While this rather new technology is also undergoing rapid development, it has so far only knocked at the (well-secured) SOC door. This is not so much about the familiar chatbots like ChatGPT and Copilot – rather, analysts are receiving support from digital assistants. ‘We’re seeing a trend towards AI agents capable of handling routine incidents on their own,’ says Dusan Vuksanovic, CEO of the Swisscom Outpost in Palo Alto. 

Using AI agents to lighten the load on people 

An AI agent is software that can independently collate information from different sources and evaluate it with the help of machine learning or an LLM. Based on the result, the virtual helper can then trigger an action. For example, an AI agent might use an API to retrieve information about a security event from a SIEM system, enrich it with threat intelligence and, once the incident is confirmed, execute the corresponding SOAR playbook by itself. An infected laptop, for instance, might be automatically disconnected from the network. A SOC analyst would only have to confirm the process.
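
The following is a minimal sketch of such a triage loop in Python. All function names, data fields and thresholds are illustrative assumptions – a real integration would replace the placeholder functions with the specific APIs of the SIEM, threat intelligence and SOAR products in use.

```python
from dataclasses import dataclass

# Hypothetical alert structure; real SIEM fields differ by product.
@dataclass
class Alert:
    alert_id: str
    host: str
    indicator: str   # e.g. a file hash or domain observed on the host
    severity: str

def fetch_alert_from_siem(alert_id: str) -> Alert:
    """Placeholder for a SIEM API query; returns sample data for the sketch."""
    return Alert(alert_id, host="laptop-042", indicator="bad-domain.example", severity="high")

def enrich_with_threat_intel(indicator: str) -> dict:
    """Placeholder for a threat intelligence lookup against a feed or database."""
    return {"indicator": indicator, "known_malicious": True, "confidence": 0.93}

def run_soar_playbook(playbook: str, host: str) -> None:
    """Placeholder for triggering an existing SOAR playbook, e.g. endpoint isolation."""
    print(f"[SOAR] playbook '{playbook}' triggered for host {host}")

def triage(alert_id: str, analyst_confirms: bool) -> str:
    """Minimal agent loop: collect, enrich, decide – and act only after human confirmation."""
    alert = fetch_alert_from_siem(alert_id)
    intel = enrich_with_threat_intel(alert.indicator)

    confirmed = intel["known_malicious"] and intel["confidence"] > 0.9
    if confirmed and analyst_confirms:
        run_soar_playbook("isolate-endpoint", alert.host)
        return "contained"
    if confirmed:
        return "awaiting analyst confirmation"
    return "closed as benign"

if __name__ == "__main__":
    print(triage("INC-1234", analyst_confirms=True))
```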

AI agents are therefore not replacing professionals, but relieving them of routine work by reducing the number of manual tasks required. In this scenario, control remains with the experts. Current developments among providers of security solutions are moving in this direction, as Vuksanovic notes: ‘AI for security operations has been the industry’s focus for several months now. The first projects are already underway – and are showing promising results.’ Oliver Stampfli, Head of Cyberdefence B2B at Swisscom, also sees this trend: ‘AI agents can help experts gather information that they can then use as the basis for making decisions or proposing solutions for incident response.’ 

Examples of the use of AI in cyberdefence 

‘A big trend at the moment is the use of AI agents as autonomous analysts. They help automate first-level SOC tasks and can support professionals with data preparation and analysis, for example,’ says Beni Eugster, Cloud Operation and Security Officer at Swisscom. But the virtual helpers are also useful for more than just simple tasks. Eugster and Vuksanovic are observing further trends: 

  • Provision of support in complex cases: When large amounts of data from different sources need to be correlated during a security incident, an AI agent can take on this task. This relieves professionals of routine work and allows them to make greater use of their expertise. 
  • Managed Detection and Response (MDR): AI agents can carry out the steps from threat detection to response independently under the control of a human. ‘We believe that up to 80 per cent of incidents can be resolved automatically through this,’ estimates Vuksanovic. ‘This also reduces MDR costs, which makes it all the more interesting for SMEs.’ 
  • Automated reporting: Incident response reports have two characteristics: they are somewhat standardised and tend to be among the less popular tasks for cybersecurity specialists. AI can assist here in a variety of ways. A GenAI chatbot can help with the wording and summary for reports or translate them into other languages. At a more advanced level, an AI agent can independently compile the required information and create the reports. 
  • Vulnerability management: An AI agent can collate information from sources such as threat intelligence feeds, security advisories and exploit databases to create a risk assessment (see the sketch after this list). This proactive approach can help identify new threats at an early stage.
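
As an illustration of the vulnerability management example, the sketch below combines a vendor severity score, exploit availability and internal exposure into a single priority. The field names and weighting are illustrative assumptions, not an established scoring standard.

```python
from dataclasses import dataclass

# Illustrative inputs an agent might collect from advisories, exploit databases
# and an internal asset inventory; the field names are assumptions, not a real feed format.
@dataclass
class VulnerabilityReport:
    cve_id: str
    cvss_base: float       # vendor/NVD base score, 0-10
    exploit_public: bool   # exploit code observed in public sources
    affected_assets: int   # matching hosts in the internal inventory

def risk_score(report: VulnerabilityReport) -> float:
    """Toy prioritisation: weight severity by exploit availability and exposure."""
    score = report.cvss_base
    if report.exploit_public:
        score *= 1.5                                   # actively exploitable issues rank higher
    score *= min(report.affected_assets, 100) / 100 + 1  # scale by internal exposure
    return round(min(score, 10.0), 1)

def prioritise(reports: list[VulnerabilityReport]) -> list[tuple[str, float]]:
    """Return CVEs ordered by the computed risk, highest first."""
    return sorted(((r.cve_id, risk_score(r)) for r in reports), key=lambda x: x[1], reverse=True)

if __name__ == "__main__":
    sample = [
        VulnerabilityReport("CVE-2025-0001", cvss_base=9.8, exploit_public=True, affected_assets=40),
        VulnerabilityReport("CVE-2025-0002", cvss_base=7.5, exploit_public=False, affected_assets=5),
    ]
    for cve, score in prioritise(sample):
        print(cve, score)
```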

How cyberdefence work is changing 

Artificial intelligence will support rather than replace human security analysts, who will be able to use the results of AI-driven automated analysis to make an assessment more quickly and initiate appropriate measures based on their specialist knowledge and experience. With GenAI, junior analysts will also have a virtual coach available to them at all times and ready to support them in their day-to-day work. By leaving certain routine tasks to AI, security analysts will have more time to contribute their expertise in complex cases. Vuksanovic sees a clear advantage here: ‘The experts will be able to focus on the analysis and actionable recommendations, which will allow them to make better decisions.’ 

This way of working with virtual assistants will change not only the type of work, but also the demands placed on professionals. Communication in particular is becoming increasingly important for validating AI suggestions, interpreting them and sharing the results with other team members. A basic understanding of how AI models and agents work is also needed to assess the quality of results.  

Addressing AI transformation in cyberdefence 

AI agents are a recent technological advancement and still in their infancy. ‘This development is firmly in the security industry spotlight among start-ups, established providers and investors,’ says Vuksanovic. On the customer side, companies are launching initial projects and proofs of concept to explore the possibilities – ‘with promising results,’ adds Vuksanovic. Two approaches to integrating AI agents into security operations are currently emerging: 

  • New agent-based SIEM and SOAR solutions: This approach offers extensive integration of AI agents, but requires the replacement of existing systems. These projects are correspondingly large. 
  • Expansion of existing solutions: These may be further developments of existing products that support AI agents for certain work steps. Or they could be separate agentic AI solutions that are implemented in addition and access existing systems via APIs (a minimal sketch of this adapter pattern follows the list).  
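
A hedged sketch of the second approach: existing SIEM and SOAR capabilities are wrapped as named ‘tools’ that a separate agentic layer can call via API. The tool names, endpoints and payloads shown are placeholders, not the interface of any specific product.

```python
from dataclasses import dataclass
from typing import Callable

# A thin adapter layer: capabilities of the existing systems are exposed to an
# agent as named "tools". The functions below are stubs standing in for real API calls.
@dataclass
class Tool:
    name: str
    description: str
    run: Callable[[dict], dict]

def search_siem(params: dict) -> dict:
    # In a real integration this would call the existing SIEM's search API.
    return {"hits": [], "query": params.get("query", "")}

def isolate_host(params: dict) -> dict:
    # In a real integration this would trigger the existing SOAR playbook via its API.
    return {"status": "playbook queued", "host": params.get("host", "")}

TOOLS = {
    t.name: t
    for t in (
        Tool("search_siem", "Query the existing SIEM for events", search_siem),
        Tool("isolate_host", "Run the endpoint-isolation playbook in the existing SOAR", isolate_host),
    )
}

def dispatch(tool_name: str, params: dict) -> dict:
    """Entry point an agentic layer would call after choosing a tool."""
    return TOOLS[tool_name].run(params)

if __name__ == "__main__":
    print(dispatch("search_siem", {"query": "failed logins last 24h"}))
    print(dispatch("isolate_host", {"host": "laptop-042"}))
```

The appeal of this pattern is that the existing systems stay in place: only the thin tool layer has to be maintained as the agentic solution evolves, which keeps such projects much smaller than a full replacement.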

Cyberdefence is being expanded 

GenAI and AI agents will change cyberdefence processes and tasks. ‘The use of agentic AI in SOCs offers the potential for greater efficiency, shorter response times and comprehensive threat analysis,’ emphasises Oliver Stampfli. ‘This can lead to a significantly improved security situation overall.’ The expected benefits are obvious. But it’s also important to be aware of the risks – not just in relation to data security, but also to the answers and decisions of AI agents. As Stampfli explains: ‘It’s important to consider the ethical and security implications in order to minimise risks such as wrong decisions and unintended consequences.’  