Well thought-out AI governance is key to using AI effectively in the company.

The journey to becoming an AI-driven organisation

Artificial intelligence (AI), particularly GenAI, and data analysis are key drivers of digital transformation. Christof Zogg, Head of Business Transformation, and Anne-Sophie Morand, Data Governance Counsel, both from Swisscom, explain the success factors needed to transform into an AI-driven organisation.

AI regulation white paper

This white paper provides an overview of current AI regulations in Switzerland and the EU, and what’s important when it comes to AI governance.

How far along are Swiss companies with AI and data analysis?

Christof Zogg: There is no uniform picture, but a joint market study with the HWZ Zurich University of Applied Sciences in Business Administration (German PDF) shows the trend: Swiss companies are at roughly the same stage in the introduction of generative AI (38%) as in the systematic use of data analysis (35%). Larger companies are, on average, further along than SMEs, and certain sectors, such as banking, insurance, industry and commerce, are further ahead than others. The most important driver, however, is and will remain innovation: innovative companies, regardless of their size and industry, adopt data and AI more consistently and more quickly, and can therefore draw greater competitive benefit from them.

How is competitiveness currently manifested through data and AI?

Christof Zogg: Companies that use professional data analysis have already been able to manage their business better, for instance through comprehensive management dashboards, and thus gain a business-critical knowledge edge. These benefits can be increased with the latest AI methods in areas such as forecasting (e.g. predictive maintenance for infrastructure) or pattern recognition (e.g. unsupervised learning for customer segmentation). However, these applications require a high degree of maturity in the company’s own data platforms. Nevertheless, all companies can now benefit from generative language and multimodal models, as the heavy burden of data provision has largely been borne by model operators such as OpenAI and Google.
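To make the pattern-recognition example concrete: unsupervised customer segmentation is often done with k-means clustering. The sketch below is a deliberately minimal, dependency-free illustration of the idea, not any specific Swisscom system; the toy customer features (monthly spend, shop visits) are invented for the example.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means: repeatedly assign each point to its nearest
    centroid, then move each centroid to the mean of its points."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(
                range(k),
                key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])),
            )
            clusters[nearest].append(p)
        # Recompute centroids; keep the old one if a cluster went empty.
        centroids = [
            tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

# Hypothetical toy data: (monthly spend in CHF, shop visits per month)
customers = [(20, 1), (25, 2), (22, 1), (300, 12), (280, 10), (310, 11)]
centroids, clusters = kmeans(customers, k=2)
```

On this toy data the algorithm separates the low-spend and high-spend customers into two segments; in practice one would use a mature library implementation (e.g. scikit-learn) on properly prepared platform data rather than hand-rolled code.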

To what extent does the automation of processes with the support of GenAI have transformational potential? What role do AI agents play in this?

Christof Zogg: Ideally, digital transformation and automation always lead to improvements in efficiency or quality – regardless of the technology used. Just think of all the examples from the area of self-service, such as buying digital tickets on public transport, ordering documents from public authorities (eGovernment) or scanning goods yourself in retail shops.

Generative artificial intelligence (GenAI) brings powerful new capabilities for automating processes that were previously reserved for humans. These include editorial skills (writing reply emails in customer service), text editing and analysis (summarising and structuring medical admission interviews) and image understanding (categorising tax receipts).

‘Generative artificial intelligence brings powerful new capabilities for automating processes that were previously reserved for humans.’

Christof Zogg, Head of Business Transformation at Swisscom

Agents extend generative AI with active decision-making and the ability to act. In addition to processing content, they can autonomously access environments (such as enterprise systems) and execute actions. For example, an agent can analyse a customer enquiry received by email and automatically open a case in the Customer Service Management application or, if the enquiry is incomplete or unclear, automatically email the customer to clarify.
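The triage behaviour described above can be sketched as a simple decision step. This is a hypothetical illustration of the control flow only: the field names are invented, and `extract_missing_fields` stands in for what would in reality be an LLM extraction call, while `open_case` and `send_email` stand in for enterprise-system integrations.

```python
from dataclasses import dataclass

@dataclass
class Enquiry:
    customer_email: str
    body: str

# Details the agent needs before it may open a case (invented for the sketch).
REQUIRED_FIELDS = ("contract number", "description of the problem")

def extract_missing_fields(enquiry):
    """Stand-in for an LLM extraction step: naive keyword check for
    which required details the enquiry text already contains."""
    return [f for f in REQUIRED_FIELDS if f.split()[0] not in enquiry.body.lower()]

def triage(enquiry, open_case, send_email):
    """Agent step: open a case if the enquiry is complete, otherwise
    email the customer asking for the missing details."""
    missing = extract_missing_fields(enquiry)
    if not missing:
        return open_case(enquiry)
    send_email(
        enquiry.customer_email,
        f"Could you please provide: {', '.join(missing)}?",
    )
    return None
```

Passing `open_case` and `send_email` in as callables keeps the decision logic testable in isolation from the actual enterprise systems the agent would act on.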

A strong data culture is essential for implementing AI technologies. How can this be promoted in companies?

Christof Zogg: Fostering a data culture starts with a company recognising the relevance and value of its own data. Secondly, it needs to understand that data should not be hoarded in organisational silos but, with an appropriate role and access system of course, made available throughout the company via a data catalogue; this requires support from the highest possible level of management.

Ultimately, experience shows that this will only work if the company recognises that it needs a centralised analytical platform for structured and unstructured data. The idea that the core process application (ERP) could serve as this data hub no longer holds in practice: business-relevant data sources, from e-commerce and social media to IoT sensor data, are too diverse and dispersed.

How important is data governance in organisations?

Anne-Sophie Morand: Data governance is a key component of modern organisations and crucial for the successful implementation of digital transformation. It supports companies such as Swisscom in establishing a responsible and legally compliant data culture that influences both strategic and operational decisions. If an organisation has not yet focused on this, it risks losing valuable competitive advantage and should take steps to implement it. Continuous adaptation and monitoring of data governance is essential in order to be able to respond to new legal and technological developments. Ultimately, strong data governance helps to increase customer benefit and shareholder value.

Why is AI governance needed in addition to data governance?

Anne-Sophie Morand: AI governance is an independent discipline that is required alongside data governance, as it deals with the specific challenges of artificial intelligence. While data governance primarily governs the handling of static data, the highly dynamic nature of AI technology requires specialised governance. In times of complex laws such as the EU’s AI Act, AI governance can help companies meet legal requirements efficiently and build trust among customers, partners and employees.

It is therefore important to view AI governance internally not as a hurdle, but as a tool that enables the full benefits of AI technologies to be exploited, while ensuring compliance with security standards and regulatory requirements.

What are the core principles of robust AI governance?

Anne-Sophie Morand: When implementing AI governance, it can be useful to define principles that guide how AI is handled. Since there is no one-size-fits-all approach, each company has to define its own relevant principles and fill them with content. Swisscom takes a risk-based approach to its AI governance, similar to the EU’s AI Act. A comprehensive audit is being carried out for the ‘high-risk AI systems’ risk category to assess compliance with the following six principles: responsibility, compliance, transparency, quality and precision, security and fairness.

‘It is important to view AI governance internally not as a hurdle, but as a tool that enables the full benefits of AI technologies to be exploited.’

Anne-Sophie Morand, Data Governance Counsel, Swisscom

These principles were selected with a view to Swisscom’s values and the uses and development of AI relevant to the company; they are therefore tailored to the corporate context. For example, Swisscom has had a Data Ethics Board since 2019, which is now also involved in reviewing high-risk AI systems: it examines the principle of fairness on the basis of six specific sub-principles, including ‘respect for personality’ and ‘no discrimination’.

To what extent is the EU’s AI Act even relevant for a company based in Switzerland that develops or uses AI technology?

Anne-Sophie Morand: Although Switzerland is not an EU member, the AI Act has extraterritorial effect and can therefore apply directly to companies domiciled in Switzerland, even if they have neither their registered office nor a branch in the EU. This is the case if a company in Switzerland acts as the provider of an AI system that is placed on the market or put into operation in the EU, or if it markets a GPAI model (general-purpose AI model). The AI Act may also apply if a company in Switzerland is a provider or operator of an AI system whose output (e.g. a forecast, recommendation or decision) is used in the EU. This rule aims to prevent EU companies from outsourcing high-risk systems to third-country providers that then have an impact on people in the EU. Finally, the AI Act is also relevant for product manufacturers who place an AI system on the EU market, or put it into operation in the EU, together with their product and under their own name. Switzerland has an open economy, so for anyone exporting AI technology to the EU, the AI Act is highly relevant.

What measures can companies take to strengthen and promote their customers’ trust in the use of AI systems? 

Anne-Sophie Morand: The use of AI systems requires trust, especially when they are used for topics that are complex, have far-reaching effects and are not visible or comprehensible to outsiders. Transparency is a key principle for building trust when using AI systems. Transparency in this context means, firstly, that it should be obvious to humans that they are not interacting with a human, but with a machine, or that an AI system has generated content (recognisability). Secondly, it means that humans can adequately understand how an AI system came to a certain prediction, recommendation or decision, or how the AI system created certain content (traceability).

Transparency also plays an important role in the EU’s AI Act. For example, when developing so-called high-risk AI systems, the provider is required to create instructions for use. These must contain concise, complete, correct and clear information, such as the characteristics, capabilities and performance limits of the high-risk AI system, including the technical capabilities that contribute to the traceability of its output.

Who actually makes the better decisions – people or AI algorithms?

Christof Zogg: In the book Noise, a great read, Nobel laureate Daniel Kahneman and his co-authors show that human experts, be they judges, forensic scientists or insurers, systematically overestimate their ability to make accurate and consistent judgements. Among other things, they show that judges hand down significantly harsher sentences the day after their favourite team loses a game. From this perspective, algorithmic support could help make human decisions less biased, i.e. less dependent on random circumstances.

Anne-Sophie Morand: Who actually makes the better decisions – human beings or machines – depends heavily on the context and therefore cannot be answered in principle. The use of automated decision-making systems offers numerous advantages that make them a valuable tool in decision-making. AI systems are highly efficient and can analyse large data volumes in a very short time in order to draw objective conclusions. Ideally, these conclusions are free of emotional influences and personal prejudices – provided, of course, that the underlying training data and the algorithm itself are free of prejudices. In addition, AI systems can continuously improve the quality of their decisions by learning from new data. An exciting development can be seen, for example, in the medical sector, where automated decision-making systems are becoming increasingly important. With their ability to analyse large data volumes and recognise complex patterns, AI systems are revolutionising medical decision-making. I am convinced that the decision-making power of AI systems in medicine will be so strong in the future that specialists will be forced to use them in certain use cases in order to achieve optimal results.

On the other hand, human decision-making processes have qualities that machines cannot offer, such as intuition and genuine empathy. People are able to draw on implicit knowledge and emotional aspects, and have the ability to be flexible and creative in unpredictable situations. These are skills that are crucial for ethical considerations and are difficult to grasp on a purely data-based basis. Even though artificial empathy (AE) has long been a reality and AI systems can simulate emotions to promote our well-being, it is still the domain of humans to establish authentic emotional connections and feel deep empathy. This unique ability will continue to be an indispensable part of certain decision-making processes.

Finally, let’s take a look into the future: quantum computing is making huge progress, and it will be possible to combine it with AI. What impact could this development have on AI technologies and AI governance?

Christof Zogg: Processing power is an important prerequisite for further improving the performance of AI models, especially when training new models, but also when operating them (inference), because the more models are used, the more resources are required. OpenAI, the publisher of ChatGPT, recently illustrated this when it announced that 700 million images had been generated with the greatly improved new model GPT-4o within a month.

Quantum computing could make a contribution here in the medium to long term and replace the graphics card architecture (GPUs) that has dominated to date. Still, I would venture the prediction that quantum technology will play only a minor role in AI disruption.

For the time being, traditional AI chips are also becoming more powerful at Moore’s Law speed, meaning that the limiting factors on the path to artificial general intelligence (AGI) are likely to be the maturity of learning algorithms and the availability of training data.

Anne-Sophie Morand: AI is evolving at a rapid pace, and quantum computing is making spectacular progress at the same time. While the performance of today’s AI systems in analysis and decision-making is impressive, they still reach their limits with exponentially growing data volumes and complex optimisation tasks. Quantum computing, by contrast, could solve problems that would take conventional computers millions of years. Quantum artificial intelligence (QAI) would significantly extend the capability of today’s AI systems and enable complex data analyses, first-class optimisations and precise simulations. This would also fundamentally change the way people work and live.

In terms of AI governance, this technological development means that we must constantly rethink and develop governance structures. Given the potential power and speed of QAI, I believe ethics would become even more important. Only in this way can we control the rapidly shifting boundaries of what is technologically feasible and ensure that these developments are in line with human values and social well-being. Governance structures need to be flexible enough to handle rapid innovation while being robust enough to prevent misuse. With regard to QAI, far-sighted AI governance will therefore be essential in order to proactively and quickly manage both opportunities and risks.