Application Scenario

Protecting AI Systems from Criminal Use

AI systems enable innovative developments in a wide range of application areas and thus make our everyday lives and work easier. Like all IT solutions, however, they can also be misused. The possible consequences are particularly far-reaching: malicious attacks can manipulate AI systems, and with them the actions of people who base decisions on the technology. Likewise, in the absence of safeguards, AI systems can be used to monitor people, for industrial espionage, or as weapons. Protecting AI systems from misuse by criminals, terrorists, competitors, or employers is therefore a highly relevant task for the responsible use of the technology.

The following three examples illustrate possible scenarios for misuse of AI systems and show which protective measures can prevent it.

  • Autonomous Vehicle as a Weapon

  • Drone Attack on a Soccer Stadium

  • Impermissible Performance Monitoring

Scenario 1: Attack on an Autonomous Vehicle

In the road traffic of the future, numerous vehicles will drive autonomously - as passenger cars or as part of local public transport. These vehicles are embedded in highly complex mobility systems in which they communicate with the traffic infrastructure. This enables them to process sensor and traffic information and to learn continuously during their journeys. The networked mobility system places high demands on the functional reliability of hardware and software as well as on security against cyber attacks. At the same time, a balance must be maintained between individual freedom and necessary control.

An accompanying graphic illustrates how malicious attackers can be prevented from manipulating an autonomous vehicle to cause damage.
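
One building block of such protection is ensuring that the vehicle only acts on traffic information whose origin it can verify. The following Python sketch illustrates the idea with a shared-key HMAC and a freshness check; the message format, key handling, and thresholds are illustrative assumptions, and real vehicle-to-infrastructure deployments use certificate-based signature schemes rather than a single shared key.

```python
import hashlib
import hmac
import json
import time

# Illustrative shared key; in practice, keys would be provisioned through
# a secure process rather than hard-coded.
SHARED_KEY = b"example-key-provisioned-out-of-band"
MAX_MESSAGE_AGE_S = 2.0  # reject stale messages to limit replay attacks

def verify_infrastructure_message(raw: bytes, tag: str) -> dict | None:
    """Accept a traffic-infrastructure message only if its authentication
    tag is valid and the message is fresh; otherwise return None."""
    expected = hmac.new(SHARED_KEY, raw, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, tag):  # constant-time comparison
        return None  # forged or corrupted message: ignore it
    message = json.loads(raw)
    if time.time() - message["timestamp"] > MAX_MESSAGE_AGE_S:
        return None  # replayed or stale message: ignore it
    return message

# Example: a simulated speed-limit update from a roadside unit.
payload = json.dumps({"type": "speed_limit", "value_kmh": 30,
                      "timestamp": time.time()}).encode()
tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()

assert verify_infrastructure_message(payload, tag) is not None   # genuine
assert verify_infrastructure_message(payload, "0" * 64) is None  # forged
```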

Scenario 2: Manipulation of Aerial Drones at a Major Event

In hard-to-reach, dangerous, or complex environments, AI systems can take over routine tasks. They explore areas, monitor infrastructures, or support the communication of specialists. To enable these AI systems to act as independently as possible - for example, when inspecting or repairing infrastructure - they are equipped with special capabilities: they act (semi-)autonomously. They are used both as individual systems and in interaction with each other, for example as flying drones in a swarm.

In the future, aerial drones can help ensure safety at major events. Their integrated sensors capture important information and forward it. However, since airspace offers hardly any effective defense systems or physical protective barriers, flying drones can also be misused for targeted attacks on people, critical areas, or objects. Our scenario outlines the World Cup final in a major city: the organizers have the stadium monitored from the air by a swarm of AI-based drones that communicate with each other and cooperate when necessary.

An accompanying graphic illustrates how malicious attackers can be prevented from manipulating the drones and causing damage.
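
One protective measure such a scenario suggests is enforcing hard safety limits on the drones themselves, so that even a compromised operator or ground station cannot steer them into the crowd. The following Python sketch shows an on-board geofence check; the coordinate frame, zone geometry, and command handling are illustrative assumptions, not a real flight-control API.

```python
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class NoFlyZone:
    """Circular keep-out area in a local stadium coordinate frame (metres)."""
    center_x: float
    center_y: float
    radius_m: float

# Illustrative zone covering the spectator stands.
ZONES = [NoFlyZone(center_x=0.0, center_y=0.0, radius_m=120.0)]

def waypoint_allowed(x: float, y: float) -> bool:
    """True only if the commanded waypoint lies outside every no-fly zone.
    Running this check on the drone itself means a manipulated ground
    station cannot override it."""
    return all(math.hypot(x - z.center_x, y - z.center_y) > z.radius_m
               for z in ZONES)

def handle_waypoint_command(x: float, y: float) -> None:
    if not waypoint_allowed(x, y):
        # Fail safe: ignore the command and hold position instead.
        print(f"rejected waypoint ({x}, {y}): inside no-fly zone, holding")
        return
    print(f"accepted waypoint ({x}, {y})")

handle_waypoint_command(10.0, 20.0)   # rejected: over the stands
handle_waypoint_command(300.0, 0.0)   # accepted: outside the zone
```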

Scenario 3: Impermissible Performance Monitoring of Employees

AI systems can relieve workers of tedious or repetitive tasks and help them avoid mistakes. By continuously collecting and analyzing data from the work process, collaborative systems adapt to workers and their routines. For example, self-learning robotic tools in production will in the future record the step sequences and work techniques of skilled workers and be able to assist them at the appropriate moment. By also processing physiological parameters such as employees' stress level, fatigue, or concentration, AI systems further optimize the collaboration and can flag excessive or insufficient demands. Evaluated at an aggregate level, such data can thus increase both individual job satisfaction and the efficiency of the company.

However, AI systems can also be misused to monitor employee performance. How this can be prevented is outlined in the following example of a company that uses self-learning and interconnected robotic tools in production.
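
One technical safeguard in such a setting is to release workplace data only in aggregate, and only when enough employees contribute that no individual can be singled out. The following Python sketch illustrates this with a minimum-group-size threshold; the data layout and the threshold value are illustrative assumptions.

```python
from statistics import mean

# Illustrative threshold: publish a statistic only if at least this many
# employees contribute, so no individual's data can be singled out.
MIN_GROUP_SIZE = 5

def team_average(readings_by_employee: dict[str, list[float]]) -> float | None:
    """Return a team-level average of sensor readings (e.g. fatigue), or
    None if too few employees contribute for the value to be released."""
    if len(readings_by_employee) < MIN_GROUP_SIZE:
        return None  # releasing the value could expose individual data
    all_readings = [r for rs in readings_by_employee.values() for r in rs]
    return mean(all_readings)

# A query over a whole shift is answered ...
shift = {f"worker_{i}": [0.4, 0.5, 0.6] for i in range(8)}
print(team_average(shift))                           # 0.5

# ... but a query targeting a single employee is refused.
print(team_average({"worker_3": [0.4, 0.5, 0.6]}))   # None
```

This is a deliberate simplification of ideas from k-anonymity and related privacy techniques; a production system would also need access controls, audit logs, and safeguards against repeated overlapping queries.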

For all three scenarios, the question of which measures are suitable for effectively preventing the misuse of AI systems is addressed in depth in the white paper Protecting AI systems, preventing misuse by the IT Security, Privacy, Law and Ethics working group.