ⓘ This page has been translated using artificial intelligence.

Reading time: 12 minutes

Artificial intelligence: opportunities, risks and applications

AI-generated content has become an integral part of everyday life. The colleague who structures his annual report with ChatGPT. The student who has a graphic created for her presentation. Or the promotional video for an SME that was created without a single day of filming. Artificial intelligence is changing the way we work, learn and communicate. And faster than we would have expected. But what is behind the technology? What can it really do, and where are its limits? And what does that mean for us?



How does generative AI work?

Generative artificial intelligence is based on algorithms designed to solve specific tasks. There are several approaches, such as machine learning and deep learning. This may sound complicated, but at its core it always comes down to artificial intelligence attempting to mimic how our brain works.

Machine learning, deep learning, computer vision or natural language processing – what do these terms mean?

In machine learning, AI attempts to identify specific patterns in huge amounts of data. By analysing this data, it learns to find the most likely answer to a given question or task. However, the most likely answer is not necessarily the correct one: it is always possible that generative AI will invent an answer, i.e. hallucinate. That is why it is important to cross-check important information against other sources.
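As a toy illustration of this pattern-finding idea, the following Python sketch counts which word most often follows another in a tiny invented corpus and then predicts the statistically most likely next word. This is a drastic simplification (real models learn from billions of examples), and the corpus and names here are made up for illustration.

```python
from collections import Counter, defaultdict

# Tiny invented corpus; real models learn from billions of sentences.
corpus = [
    "the cat sat on the mat",
    "the cat sat on the sofa",
    "the dog chased the cat",
]

# Count which word follows which across the corpus.
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1

def most_likely_next(word):
    """Return the statistically most frequent follower of `word`."""
    return follows[word].most_common(1)[0][0]

print(most_likely_next("the"))  # → cat
```

Note that "cat" is simply the most frequent follower in this particular data: with different (or biased) training data, the "most likely" answer changes, which is exactly why it can be wrong.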

As in the human brain, deep learning involves many artificial brain cells (neurons, which are linked together into so-called neural networks) sending signals to each other. The AI builds up its knowledge by linking these artificial brain cells. We humans also learn by linking our brain cells together.

One early application of AI was natural language processing. Here, AI (for example in voice assistants or chatbots) analyses and processes human language and learns to respond intelligently to questions.  

Image AIs work fundamentally differently from language models. For a long time, so-called diffusion models dominated: they start with random image noise. Imagine a grey, grainy surface. This is gradually refined until an image emerges that matches the prompt. Models such as Midjourney or Stable Diffusion work according to this principle and are particularly strong at creating atmospheric, artistically designed images.
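The refinement idea behind diffusion models can be sketched in a few lines of Python: start from random noise and repeatedly nudge each "pixel" a little closer to a target pattern. The target values and step size below are invented for illustration; a real diffusion model instead learns to predict and subtract noise, guided by the prompt, at every step.

```python
import random

random.seed(0)

# Stand-in for what the prompt describes: a target pattern of pixel values
# (invented for illustration; real images have millions of pixels).
target = [0.1, 0.9, 0.5, 0.3]

# Start from pure random noise, just like a diffusion model does.
image = [random.random() for _ in target]

# Refinement loop: nudge every pixel a little closer to the target.
for _ in range(50):
    image = [pixel + 0.2 * (goal - pixel) for pixel, goal in zip(image, target)]

# After many small steps, the noise has turned into the target image.
print([round(pixel, 2) for pixel in image])  # → [0.1, 0.9, 0.5, 0.3]
```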

Newer models such as GPT Image or Nano Banana Pro, on the other hand, are autoregressive models: they do not generate an image by refining noise, but build it up step by step (similar to how a language model builds up a text). Since these models understand text and images together, they follow instructions more precisely and are easier to adjust in conversation: ‘Make the background darker’ or ‘Add a person’. An autoregressive model understands this directly.

Modern language models such as ChatGPT, Gemini and Claude are based on what is known as the transformer architecture, a breakthrough from 2017 that fundamentally changed AI development. A transformer does not simply process individual words, but grasps their meaning in context. It recognises that ‘bank’ in the sentence ‘I am sitting on the bank of the river’ means something different than in ‘I am going to the bank to withdraw money’. This understanding of context is what makes modern language models so powerful.
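A toy version of this context mechanism can be sketched in plain Python: the ambiguous word "bank" is blended with its context words, weighted by similarity, as a drastically simplified stand-in for the attention mechanism. The two-dimensional "meaning" vectors below are invented for illustration; real models use thousands of dimensions and learn them from data.

```python
import math

def softmax(scores):
    """Turn raw similarity scores into weights that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Invented 2-dimensional "meaning" vectors: dimension 0 = river-related,
# dimension 1 = money-related.
embeddings = {
    "bank":  [0.5, 0.5],   # ambiguous on its own
    "river": [1.0, 0.0],
    "money": [0.0, 1.0],
}

def contextualise(word, context):
    """Blend a word's vector with its context, weighted by similarity
    (a stripped-down sketch of the attention mechanism)."""
    words = [word] + context
    scores = [sum(a * b for a, b in zip(embeddings[word], embeddings[w]))
              for w in words]
    weights = softmax(scores)
    return [sum(wt * embeddings[t][d] for wt, t in zip(weights, words))
            for d in range(2)]

river_bank = contextualise("bank", ["river"])
money_bank = contextualise("bank", ["money"])
print(river_bank, money_bank)  # first leans "river", second leans "money"
```

The same word ends up with two different blended vectors depending on its neighbours, which is the essence of "understanding in context".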

Building on this architecture, large language models (LLMs) were developed. These are models that have been trained with enormous amounts of text and have thus learned to recognise and predict linguistic patterns. Put simply, an LLM calculates which word is most likely to fit next in each response and does so step by step until a complete text has been created.
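This step-by-step prediction loop can be sketched as follows. The next-word table is invented for illustration and reduced to a single most likely word per step; a real LLM scores tens of thousands of candidate tokens at every step and samples among them.

```python
# Invented next-word table, standing in for an LLM's learned probabilities
# (collapsed to one most likely word per step for clarity).
next_word = {
    "<start>": "the",
    "the": "cat",
    "cat": "sleeps",
    "sleeps": "<end>",
}

# Autoregressive loop: append the most likely next word and repeat
# until the model emits an end-of-text token.
text, word = [], "<start>"
while (word := next_word[word]) != "<end>":
    text.append(word)

print(" ".join(text))  # → the cat sleeps
```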

Machine learning, deep learning, language processing, transformer architecture and image generation together form the basis for what you experience today as generative AI in everyday life: so-called multimodal systems that can read, write, analyse and generate simultaneously.


Which generative AI tools are available for images, text and video?

Since the first version of ChatGPT was released in 2022, the range of AI software on the market has virtually exploded and, driven by high demand, has been developing rapidly ever since. Here you will find an overview of the most popular models currently available.

Language models

ChatGPT(opens in new tab) is a multimodal model from OpenAI that originated as a language model. Its main task is to understand natural language and respond to text-based queries. This also includes response formats such as tables, code or mathematical formulas. Even diagrams are possible.

ChatGPT as a service offers different models (including GPT-5.2, GPT-4.1 and gpt-oss), which differ in their functions and strengths. Since December 2025, the image model GPT Image has been available in ChatGPT for creating images. Previous models such as GPT-4o and DALL·E have been replaced.

DeepSeek(opens in new tab) is a language model from China that caused a stir in early 2025: it achieved top performance at a fraction of the usual development costs.

The DeepSeek R1 model (released in January 2025) was developed for complex reasoning tasks, specialising in mathematical problem solving, challenging programming tasks and logical reasoning. For the first time, it was possible to follow the AI's thought process: the AI reveals its intermediate steps, reflects on them and corrects them independently.

This step-by-step method (chain of thought) leads to precise results and has now become standard in many language models.

The all-round DeepSeek V3 model for everyday tasks such as text creation, conversations and general questions was introduced in December 2024; DeepSeek V3.2 is currently in operation (as of February 2026). As an open-weights model (MIT licence), DeepSeek can be used via its website and smartphone app, or downloaded (e.g. from Hugging Face) and installed locally.

Google Gemini(opens in new tab) is Google's multimodal AI model. Originally released in March 2023 under the name ‘Google Bard’, the model is now a direct competitor to ChatGPT after a bumpy start and rebranding to ‘Google Gemini’.
Gemini processes not only text, but also images, audio, video and code. This allows you to analyse photos, summarise complex documents or solve code problems, for example.

Currently (February 2026), Google offers different model families: Gemini Flash for fast tasks, Gemini Pro for demanding queries and Gemini DeepThink for complex thinking tasks. For image generation, Google offers Nano Banana (Gemini 2.5 Flash Image) and Nano Banana Pro (Gemini 3 Pro Image). And Google is no slouch when it comes to video generation either: with Veo 3 (May 2025), the company enables all users with a Google One AI Premium Plan to generate videos including synchronised audio: 8-second video clips in 4K quality can be created from text descriptions or images.

Gemini can also be directly integrated into Google Workspace apps (Docs, Gmail, Sheets, etc.) with the AI Premium subscription. Without a subscription, Gemini can be used as a chatbot free of charge, but with limited functionality.

Llama(opens in new tab) (Large Language Model Meta AI) is Meta's AI model family, which has been under continuous development since 2023. The current generation, Llama 4 (as of February 2026), is multimodal (text and image) and, like DeepSeek, uses a mixture-of-experts architecture: only those parts of the model that are needed for a specific task are activated. This makes the AI faster and more resource-efficient.
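The routing idea can be illustrated with a loose Python analogy: a "router" activates only the small specialised function (expert) that a task needs, while the others stay idle. This is not the actual mixture-of-experts architecture (which routes individual tokens to learned sub-networks inside one model), just a sketch of the "only activate what you need" principle; all names and tasks below are invented.

```python
# Invented "experts": small specialised functions standing in for the
# specialised sub-networks of a mixture-of-experts model.
def maths_expert(query):
    """Toy arithmetic expert: handles simple additions like '2 + 3'."""
    left, _, right = query.partition("+")
    return int(left) + int(right)

def translation_expert(query):
    """Toy translation expert with a one-entry dictionary."""
    return {"Hallo": "Hello"}.get(query, "?")

experts = {"maths": maths_expert, "translation": translation_expert}

def route(task, query):
    """Activate only the expert this task needs; the rest stay idle."""
    return experts[task](query)

print(route("maths", "2 + 3"))        # → 5
print(route("translation", "Hallo"))  # → Hello
```

Because only one expert runs per request, the total work per query stays small even as more experts are added, which is the efficiency gain the architecture aims for.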

The Llama models are openly available and are primarily aimed at developers, researchers and companies.

Since March 2025, Meta AI, Llama's chatbot, has also been available in Switzerland. You can find it on WhatsApp, Instagram, Facebook and Messenger: simply search for the blue circle icon or type @MetaAI in a chat. For data protection reasons, the European version has limited functions compared to the US version.

Claude is Anthropic's AI model family, which is particularly known for its safety focus. The current generation, Claude 4.5 (as of February 2026), offers three variants: Haiku (fast and inexpensive), Sonnet (balanced) and Opus (highest performance). Opus 4.6 followed in February 2026 with enhanced capabilities.

Claude excels at creative writing, code generation, analytical reports and structured documents. With the Extended Thinking feature, the model shows its thought process and reflects on intermediate steps to provide more precise answers. The new Agent Teams feature (available from Opus 4.6) makes it possible to divide complex tasks among multiple AI agents working in parallel.

Anthropic attaches great importance to data protection and develops Claude according to ethical principles.

Mistral AI(opens in new tab) is considered a leading European company for AI language models. The French start-up was founded in April 2023 by former Google DeepMind and Meta researchers. Since then, Mistral AI has deliberately positioned itself as a European, privacy-friendly alternative to US chatbots with its open-weight models.

The current Mistral Large 3 model (as of February 2026) and the Ministral 3 family offer multimodal capabilities (text, image and audio) and support dozens of languages. The chatbot ‘Le Chat’ was launched in February 2024 and offers features such as Voice Mode for speech recognition, Deep Research for structured research and Think Mode for complex thinking tasks.

Mistral AI attaches great importance to GDPR compliance and offers both free and enterprise versions for businesses.

Microsoft 365 Copilot(opens in new tab) is Microsoft's AI assistant and is integrated directly into all Microsoft apps (Teams, Word, Excel, PowerPoint, Outlook). The assistant is based on OpenAI technology (currently GPT-5.2, as of February 2026).

Since 2026, Copilot has even been operating in Agent Mode as an autonomous agent that can perform multi-step tasks independently and across multiple apps. Instead of just responding to commands, Copilot works iteratively with you: creating drafts, refining them and adjusting documents step by step. Thinking Mode activates advanced reasoning for complex analyses, Voice Mode enables voice control, and Implicit Grounding automatically uses the context of open emails or highlighted text to provide more accurate responses.

This makes Copilot a great sparring partner for everyday tasks such as creating Excel formulas, designing presentations with information from Word documents, summarising email conversations or collecting meeting notes. Copilot respects your permissions and security settings.

myAI(opens in new tab) is Swisscom's AI assistant for everyday life in Switzerland. It is based on Anthropic's Claude language model and is specifically tailored to Switzerland.

myAI helps you, for example, to write, translate and summarise texts, analyses uploaded documents (PDF, Word, Excel) and generates images. The integrated web search provides up-to-date information, with Swiss sources being prioritised.

Swiss services are also integrated, such as SBB timetables, weather forecasts from MeteoSchweiz, Swiss spelling and, because myAI is a Swiss product, the assistant also understands and speaks Swiss German.
The data remains under Swisscom's complete control. It is not passed on to third parties or used for AI training. You can delete all your conversations at any time. myAI is available to all Swiss residents with a Swisscom login.

Apertus(opens in new tab) is Switzerland's first large language model. It was developed by EPFL, ETH Zurich and the Swiss National Supercomputing Centre (CSCS) and released in September 2025.

The name says it all: apertus is Latin and means open. A fitting choice, as the language model is completely open, meaning that anyone can see what data the AI was trained on, what method was used and how it works technically. This makes Apertus the first large language model to comply with the transparency requirements of the EU AI Act, and it was developed in accordance with Swiss data protection and copyright law.

Apertus is available for download on Hugging Face(opens in new tab).

PublicAI(opens in new tab) serves as a free chat and API demo interface.

Perplexity(opens in new tab) is an AI-powered search engine that, unlike traditional chatbots, is based on real-time web searches rather than stored (trained) knowledge. Perplexity specialises in fact-based information retrieval and is particularly valued by students and researchers.

Perplexity is a multi-model platform and incorporates different models such as GPT-5, Claude, Gemini, etc., depending on the version. With Deep Research, Perplexity creates comprehensive reports, cites sources cleanly and exports them as PDFs. So-called Focus Modes tailor the search results specifically.

Perplexity is free to use. The Pro version offers advanced features and model selection.

Image AIs

GPT Image(opens in new tab) is OpenAI's current model family for image generation. GPT Image is the successor to DALL·E, which still functioned as a separate diffusion model. GPT Image is now natively integrated into the language model. This means that text and images are created in the same neural network. This is convenient for you because you can continue working with the image directly in the chat, formulate adjustments in natural language, and the model understands the entire context of your conversation.

With GPT Image, you can also edit images. The model makes very precise changes to what you request (e.g. image content) while keeping exposure, composition and facial features consistent.

GPT Image is available in ChatGPT, but it can also be accessed via Microsoft Copilot or Apple Intelligence.

Since its launch in 2022, Midjourney(opens in new tab) has been considered by many to be the gold standard in AI image generation. The model is known for its exceptional stylistic depth and artistic quality. This makes the model ideal for you if you want to create something visually expressive and unique rather than just a functional image.

The current version 7 (as of February 2026) has been rebuilt from the ground up, with more precision in the details and more realistic textures and lighting. You can also formulate your image requests via voice input, and Midjourney adapts to your visual taste through personal style calibration.

Since June 2025, video generation has also been possible: you can animate an existing image into short video clips. Midjourney is accessible via the web interface and Discord.

Nano Banana Pro(opens in new tab) is Google's new image generation and editing model based on Gemini 3 Pro. It stands out from other models such as GPT Image or Midjourney with its photorealistic, natural-looking results. You can also influence the camera angle, lighting, depth of field and colouring in a very targeted manner. Images can be generated in up to 4K resolution, and text within images can be correctly displayed in different languages, fonts and styles.

Nano Banana Pro is available in the Gemini app, Google Slides, NotebookLM and other Google products. The image model can be used free of charge with a limited quota.

Adobe Firefly(opens in new tab) is Adobe's AI image generator and is integrated directly into Photoshop, Illustrator and Premiere. All Firefly models have been trained exclusively with licensed content, meaning that images generated with them can be used commercially without hesitation. Firefly also functions as a model hub, integrating partner models such as GPT Image and Nano Banana Pro.

Adobe Firefly is available via your browser and Adobe Creative Cloud. Limited generations are possible with a free Adobe account.

Canva(opens in new tab) is the most popular design platform for beginners and students, and its Dream Lab now features a powerful image AI. You can generate your images directly in your flyer, presentation or social media post without having to leave the platform. Since the acquisition of Leonardo.ai, Dream Lab delivers significantly better results than before.

Canva AI is accessible via browser and app. With a free account, only limited generations are possible.

In addition to commercial image AI, there is a vibrant world of open-source models, including Stable Diffusion, FLUX and HunyuanImage. They are all free to use, can be freely customised and run on your own computer. This makes them particularly interesting for tech-savvy users and data protection-conscious institutions that do not want to send data to external servers.

Setting up and maintaining such AI image models requires technical expertise and powerful hardware. Those who opt for open-source models benefit from maximum control and an active community that continuously publishes new models, fine-tuning and enhancements.

Video generators

With Sora 2(opens in new tab), OpenAI not only released an improved video generation model in September 2025, but also a standalone social app called ‘Sora by OpenAI’. Similar to TikTok, its feed shows short, AI-generated videos that you can create, remix and share via text prompts.

Sora 2 generates videos with synchronised dialogue and sound effects and understands how people or objects behave and move in space. With the ‘Characters’ feature, you can insert yourself or friends (caution: data protection) directly into any scene after a one-time video recording.

The app is available in selected countries for iOS and Android (Switzerland is not yet included). If you want to access the video model without the social app, you can do so via sora.com or ChatGPT.

Veo(opens in new tab) is Google DeepMind's video generation model and can create videos from text or images. The model is characterised by particularly realistic motion physics and precise camera control.

Since version 3, Veo can also generate audio directly with the video. This means you can describe sound effects, ambient noise and even dialogue in the prompt, and Veo will generate a synchronised soundtrack that matches the image. Since October 2025, Veo 3.1 has further improved lip movements and speech, allowing characters to talk to each other.

With a Google AI Pro subscription, you can use Veo 3.1 Fast in the Gemini app; Google AI Ultra is required for full access. Developers can access Veo via API or Vertex AI.

Runway(opens in new tab) is one of the pioneers of AI video generation and its models are aimed at professional filmmakers, content creators and studios. The Gen-4.5 model (since December 2025) supports the familiar text-to-video, image-to-video and video-to-video input modes.

Gen-4.5 is even better than its predecessors in terms of physical accuracy: people and objects move with realistic weight and momentum, and liquids flow naturally. The result is footage that looks deceptively real. Complex scenes with multiple elements, precise composition and expressive characters that remain consistent in their gestures and facial expressions across multiple shots are the strengths of the model.

Runway can be used via the web and app. Gen-4.5 is also integrated into Adobe Firefly, which means the model can also be used in professional editing tools such as Adobe Premiere and Adobe After Effects.

WAN(opens in new tab) originates from China and is also one of the most advanced video AIs. It can create fluid videos from text, images and short clips. The current version, WAN 2.5 (as of February 2026), builds on the strengths of earlier models and adds native audio generation. Dialogues, sound effects and ambient noises are generated together with the video.

As an open-source model, WAN is available to developers via the API and can be integrated into their own applications. For everyone else, there are various cloud-based subscription models.

In addition, there are many other types of artificial intelligence for a wide variety of applications, such as medical or financial assistants. AI also helps with intelligent data processing and analysis in research.

Here is an overview of other AI tools.(opens in new tab)


AI on social media

Social media platforms have also begun to integrate their own AI assistants into their ecosystems. You can chat with these digital companions and have them generate platform-specific content.

Meta AI is Meta's AI assistant and is integrated into Facebook, Instagram, WhatsApp and Messenger. The assistant has also been available in Switzerland since March 2025.
 
The assistant is based on Llama and helps you with questions, text improvements and research directly in the chat. You can activate the AI via the blue circle icon or by writing @MetaAI in the message.

For data protection reasons (GDPR), the EU version of Meta AI is only available as text chat without image generation and without a memory function. Unlike in the US, the models were not trained with European user data. The AI assistant is free to use for all Meta users.

Grok is the AI assistant from Elon Musk's company xAI, which is integrated into X (formerly Twitter). This gives the AI assistant real-time access to the social network's data. Grok 4.1 has been in use since November 2025 in Thinking and Non-Thinking variants.

In addition to text, Grok can also generate meme-like video clips. Simply tag Grok in posts to get answers. You can also write to Grok directly via the chat box in the left-hand menu of your profile. Grok responds in a humorous, direct tone that clearly distinguishes itself from the neutral style of other language models.

Grok came under criticism at the end of 2025 because its image editing function was misused for non-consensual sexualised image manipulation – especially of women and minors. This led to legal action and tighter security measures.


What opportunities does generative AI offer?

The potential of generative artificial intelligence is vast and diverse. Generative AI can enable personalised experiences, increase your efficiency or create new creative realities.

Generative AI is fed enormous amounts of data during its training. This large data pool and the targeted analysis of the data create unprecedented potential for you personally, but also for us as a society and for research and science.

These are the potential opportunities offered by AI:

Generative AI takes over time-consuming routine tasks and gives you valuable time. For example, if you are a teacher and need to write a letter to parents at the end of the term, you can generate an initial structure with just a few inputs and immediately focus on formulating the content.

AI can also serve as an idea generator and help you overcome creative blocks. For example, let AI suggest different concepts for a project at work or in your free time and use these as a starting point to develop your own ideas further. (Please remember to observe data protection regulations and do not prompt any sensitive information.)

With the help of generative AI, content can be tailored to your personal needs. This already happens today with the algorithms behind the adverts you see on social networks. But think, for example, of a language course that, thanks to generative AI, is tailored precisely to your level and offers you customised exercises so that you get the most out of the course.

Generative AI also democratises access to creative tools and knowledge. Perhaps you want to design a family album with beautiful captions or a picture to hang above your sofa in your living room? With generative AI, you can design like a pro without having to be a gifted designer.

Generative AI can also help you develop and deepen new interests. Perhaps you want to work in the garden for your personal well-being, but don't know where to start? AI can not only make planting suggestions for your location, but also explain how you can design your garden so that something is always in bloom throughout the year.


What are the dangers of generative AI?

However, generative AI also has its downsides, which should not be ignored. As with any new technology, it is important to be aware of these downsides in order to deal with them consciously. So what risks should you keep an eye on?

Generative AI struggles to say ‘I don't know’. Instead, it prefers to invent a probable solution (a so-called hallucination) – which may not necessarily be correct. So if you are researching information with a language model and need to be able to rely on the accuracy of the information, the basic rule is: always check important information using several reliable sources.

If realistic images, videos or audio files can be generated with little effort, how can you tell the difference between what is real and what is fake? This is indeed an important question, and one that is becoming increasingly difficult to answer as the quality of generative AI improves. The key is to be aware of this fact and use your common sense. Here are a few tips on how to spot deepfakes in videos.

A look at current technical developments: Adobe, Arm, Intel, Microsoft and Truepic have formed an alliance to certify the origin of media content using watermarks. This Coalition for Content Provenance and Authenticity (C2PA)(opens in new tab) has already developed and implemented initial technical standards that can be used to track the origin and history of media content (e.g. images or videos). This measure is intended to help expose fake news and deepfakes.

Even though the companies behind generative AI generally try to train their models in a value-neutral manner, certain opinions can gain the upper hand due to biased training data. Problems can also arise if the AI itself does not uphold any values and has been programmed to praise and confirm every opinion a user expresses (including morally questionable, racist or discriminatory statements). Without any counter-argument or discussion, such opinions can become increasingly radical and extremist.

In principle, generative AI can reuse the information you share with it for other purposes (such as training). AI is only as good as its training data. That's why companies like OpenAI are interested in having access to large amounts of data to optimise their AI models. The good news is that you can usually object to the use of your data in the privacy settings of the AI you use. However, the best protection is still not to share any personal data with generative AI in the first place.

 It is often forgotten that training and operating generative AI consumes enormous amounts of energy. When you ask an AI a question such as ‘What should I cook for dinner today?’, energy-hungry data centres are running at full speed in the background. Conversing with an AI typically consumes significantly more energy than a conventional internet search.

Want to know more about this topic? In early May 2025, Watson wrote: ‘This tool shows how much electricity your AI queries consume’(opens in new tab). And SRF also published an article on the topic in 2023: ‘Artificial intelligence is a huge power guzzler’(opens in new tab).

To save energy, our brain trains and retains only those skills that we use regularly. While it may be convenient to have AI do all our (home)work for us, there is a risk that we will forget how to do these tasks ourselves. We may also fail to develop our skills further. Children in particular are at risk of missing out on developing their own thinking skills – the ability to solve problems through their own reasoning.

The European Parliament(opens in new tab) has also commented on the opportunities and risks of artificial intelligence.


What regulations are in place?

The phenomenon of generative artificial intelligence is still relatively new. Regulations and guidelines for generative AI are therefore only just emerging. Here is a brief overview.

Switzerland signs Council of Europe convention on AI (March 2025)

By signing the Council of Europe Convention(opens in new tab), Federal Councillor Albert Rösti has reaffirmed Switzerland's commitment to the responsible use of AI technologies in accordance with fundamental rights. Switzerland will now prepare the necessary legislative amendments. An initial consultation draft is expected to be available by the end of 2026.

An assessment of the guidelines set by the Federal Council in February 2025 by Isabelle Oehrli, lecturer and project manager at the Lucerne University of Applied Sciences and Arts: AI regulation: Switzerland takes a stand – or does it?(opens in new tab)

EU Artificial Intelligence Act ‘AI Act’

The EU AI Act is the world's first comprehensive set of rules for regulating artificial intelligence. It came into force on 1 August 2024 and will be fully effective in stages by August 2027. Detailed information on the timetable and step-by-step implementation can be found on the Implementation Timeline(opens in new tab).

The regulations take a risk-based approach. The EU AI Act classifies AI applications into four risk categories: 

  1. Unacceptable risk – AI systems that violate EU fundamental values are prohibited.
  2. High risk – Systems in critical areas are subject to strict requirements.
  3. Limited risk – Systems such as chatbots must meet transparency requirements.
  4. Minimal risk – Systems such as spam filters or AI games are not subject to any special regulations.

As a general rule, the greater the impact on security, democracy, health and the environment, the stricter the rules.

SRF Arena discussed the EU AI Act in November 2024 under the title ‘Artificial intelligence: The debate over regulation begins’. An explanatory video on the EU AI Act(opens in new tab) is available from the programme.


What should parents be aware of when it comes to AI?

Children and young people are already using AI in their everyday lives: in voice assistants, games with AI elements or personalised recommendations from streaming services, for example.

You may now have all sorts of questions running through your head: Will my child only do their homework with the help of AI tools? Should AI be banned or integrated into schools? How can exam topics be adapted so that AI does not diminish the learning effect? These and other questions are a real challenge today that we as a society must address.

 It is important that you, as parents, support your children in using intelligent systems and teach them how to use them responsibly and safely. Test AI applications together and pay attention to the following:

Familiarise yourselves with the technology together. Try out different AI models and discuss how they work. For example, ask the same question to several models and compare the answers. Or test which formulations can be used to optimise search results. How does an answer change when you change the question?  

Encourage your child to question the results of generative AI. For example, play a fact-checking game where you compare AI results with other sources from Google searches, books or your own knowledge. Ask your child, ‘How do you know if that's true?’ and encourage a healthy scepticism towards AI answers.

Is it acceptable to present a generated image as your own work? Discuss questions like this with your child. For example, you could talk about why and when it might be problematic to use AI to do homework – or, to be fair, tasks from your everyday working life.

Clear guidelines that the whole family adheres to can help ensure responsible use of generative AI. It is best to decide together what the AI may and may not be used for, and why. An example? ‘AI can help with research, but it may not do my homework for me, because I want to learn something myself.’


To help your child understand AI systems, find explanatory media that is suitable for their age group. There are technically accurate learning media that have been specially designed for children. For example, ZDFtivi: ‘Artificial intelligence explained simply’(opens in new tab) or ‘Digital voice assistants’(opens in new tab).  

We humans (and children in particular) learn consciously and unconsciously by observing others. So show your child how you use AI as a tool, but are not totally dependent on it. For example, show them how you use AI to summarise information in a practical way. But also make it clear that you consciously choose not to use AI for certain tasks and explain why.  

And yes, privacy and data protection are also important topics to discuss. It is best to work with clear examples or analogies. For example: ‘You write a letter to a dear friend. But it ends up at the wrong address and is shared with lots of strangers instead. What information would you not want it to contain?’ Then discuss together what information is best kept private when prompting.

This is important

  • The most widely used approach in everyday life is natural language processing.
  • Tools such as ChatGPT, Nano Banana Pro or Veo have different qualities.
  • AI brings advantages such as automation and new opportunities for research and education. But AI also brings challenges, e.g. around data protection or discrimination as a result of training that is never free of bias.
  • Supporting our children in their use of generative AI is not a one-off task, but an ongoing dialogue. As a parent, remain open to your child's questions and experiences and learn how to deal with AI together.

Useful links

Further content

Would you like more information on the topic of AI? We have compiled the most important blog posts and links here.

Videos

What does AI mean and where do we encounter it?

This is how you can integrate ChatGPT into your everyday life.

Practical Gemini hacks.

Other interesting topics

Ask Marcel

Marcel is a trainer at Swisscom. He is available to answer any questions you may have about AI.

Marcel

Trainer at Swisscom