Perspectives on AI

One topic, different perspectives: Experts from the interdisciplinary Plattform Lernende Systeme assess current developments in the field of Artificial Intelligence from their respective specialist backgrounds.

Learning robotics

Robots capable of learning are no longer science fiction. Robotic systems equipped with AI are being used in more and more areas of application beyond industry. Whether in care, in disaster response or in the household, robots can support us in many areas and take over strenuous or dangerous tasks. Their potential for the economy and society is enormous. How do robots work together successfully? How do robots learn? And what progress can we expect from adaptive systems in the near future? Experts from Plattform Lernende Systeme provide answers.

  • Barbara Deml
    Karlsruher Institut für Technologie

  • Sven Behnke
    Universität Bonn

  • Dorothea Koert
    Technische Universität Darmstadt

Prof. Dr.-Ing. Barbara Deml | Karlsruhe Institute of Technology

Barbara Deml is head of the Institute of Ergonomics and Industrial Organization at KIT and a member of the Future of Work and Human-Machine Interaction working group of Plattform Lernende Systeme.

Friend and helper: How humans and social robots work together

How can Artificial Intelligence (AI) enable socially interactive robotics? To answer this, we must first ask: what is meant by social robots? Social robots are the antithesis of industrial robots: they are not autonomous tools but interactive partners. They include toy robots such as robot dogs, service robots in care or therapy, collaborative robots (so-called cobots) in industrial settings, and also software robots such as chatbots. Often their shape resembles a human body or they have human-like characteristics; they are then referred to as humanoid robots. Social robots should be able to communicate with us in order to build a trusting relationship. Above all, this requires social intelligence, for which AI is an essential prerequisite:

  • AI makes it possible to understand human language and the context of conversations. Ideally, social robots will be able to have a human-like conversation.
  • AI can be used to recognize faces and analyse facial expressions or gestures. This allows social robots to recognize emotions or hand signals and react appropriately.
  • Machine learning allows a robot to learn from experience or observation. Social robots can thus adapt to the individual preferences and needs of their users. This enables personalized interactions.

How can a robot recognize human emotions such as stress? How do we humans recognize whether our fellow human beings are stressed, happy or angry? As a rule, we unconsciously interpret the context as well as various verbal and non-verbal signals such as smiles, frowns, raised eyebrows or other facial movements. The way someone speaks, including tone of voice, speed and emphasis, can also reveal a lot about their emotional state. There is a long tradition of research on this in psychology. Many of these behavioral indicators are well described today and can now also be observed by a robot's technical sensors. Advances in speech, image and video analysis enable robots to recognize body postures, facial expressions, pupil reactions, and changes in tone of voice or speaking rate. Robots can also analyze movement patterns or how a person interacts with technical devices. Combining several such sensors then makes it possible to draw conclusions about emotions. Of course, this is not always one hundred percent successful; we humans are not always right in our emotion recognition either.
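The combination of sensor channels described above is often realized as a late fusion of modality-specific classifiers. The following is a minimal illustrative sketch, not a system from the article: each hypothetical classifier (face, voice) outputs a probability distribution over the same emotion classes, and a weighted average yields the combined estimate.

```python
import numpy as np

# Hypothetical emotion classes; each modality-specific classifier
# outputs one probability per class in this order.
EMOTIONS = ["neutral", "happy", "angry", "stressed"]

def fuse_emotions(modality_probs: dict, weights: dict) -> str:
    """Combine per-modality probability vectors into one emotion label."""
    total = np.zeros(len(EMOTIONS))
    for name, probs in modality_probs.items():
        total += weights.get(name, 1.0) * np.asarray(probs)
    total /= total.sum()  # renormalize to a probability distribution
    return EMOTIONS[int(np.argmax(total))]

# Example: the face channel is fairly sure about "stressed",
# the voice channel is undecided; fusion still favors "stressed".
estimate = fuse_emotions(
    {"face":  [0.1, 0.1, 0.1, 0.7],
     "voice": [0.3, 0.2, 0.2, 0.3]},
    weights={"face": 0.6, "voice": 0.4},
)
print(estimate)  # "stressed"
```

Weighting the channels reflects that some signals (e.g. facial expression) may be more reliable than others in a given situation; the weights here are made-up values.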

How can humanoid robotics be used in care in a humane way?

Human-centered technology design always focuses on the needs, abilities and preferences of the user. The same applies when humanoid robots are to be used in care. The top priority is the question: what are the needs, abilities and preferences of care staff and those being cared for when interacting with a robot?

  • Humanoid robots can be developed to help elderly people or people in need of care with everyday tasks, such as getting up, getting dressed, preparing meals and other basic activities. This can promote people's independence and at the same time relieve the burden on care staff.
  • Humanoid robots can be equipped with sensors to monitor the environment and raise the alarm if unusual activity or emergencies are detected. This is particularly useful in care facilities or for elderly people who live alone.
  • Robots can be used to dispense medication and remind people to take it. This is particularly important for people with complex medication schedules.
  • Humanoid robots can be designed to enable social interaction and provide companionship. This can be particularly important for older people who may feel lonely.

It is important that humanoid robots are used ethically and sensitively in care. This includes respecting privacy and ensuring safety. Humanoid robots are not intended to replace human caregivers, but rather to support them and improve the quality of care.


Dr. Dorothea Koert | Technical University of Darmstadt

Dr. Dorothea Koert is head of the IKIDA junior research group at the Intelligent Autonomous Systems Lab at TU Darmstadt and a member of the Learning Robotic Systems working group of Plattform Lernende Systeme.

How robots learn

The possible tasks that robots will be able to perform in everyday life in the future are diverse, as are the preferences of their users as to how they want to be supported by a robot. This makes it almost impossible to fully pre-program future robots. The ability to learn new tasks in interaction with humans is therefore becoming a key component in the development of intelligent robotic systems.

If a broad section of society is to benefit from robots that are capable of learning, it is essential that robots can also learn new tasks from everyday users without prior programming knowledge.

Learning from demonstrations and feedback

Two promising approaches to how robots can learn from humans are learning from demonstrations and interactive reinforcement learning. When learning from demonstrations, robots can either be "taken by the hand" by humans and guided through the task, or they can observe humans performing a task themselves and then try to understand and copy what they have seen. Human demonstrations can be used to recognize familiar subtasks and perform them in a new order, as well as to learn completely new movement and task sequences.
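In its simplest form, learning a new movement from kinesthetic demonstrations can be sketched as averaging several time-aligned guided motions into one reference trajectory. This is an illustrative assumption, not a specific system from the article; real methods (e.g. probabilistic movement models) are considerably richer.

```python
import numpy as np

def learn_trajectory(demonstrations: list) -> np.ndarray:
    """Average time-aligned demonstrations (each: steps x dims)
    into a single reference trajectory the robot can replay."""
    stacked = np.stack(demonstrations)  # (n_demos, steps, dims)
    return stacked.mean(axis=0)         # mean position per time step

# Three slightly different hand-guided demonstrations of a
# 2-D reaching motion (made-up example data).
demos = [
    np.array([[0.0,  0.0], [0.5, 0.4], [1.0, 1.0]]),
    np.array([[0.0,  0.1], [0.5, 0.6], [1.0, 0.9]]),
    np.array([[0.0, -0.1], [0.5, 0.5], [1.0, 1.1]]),
]
reference = learn_trajectory(demos)
print(reference[-1])  # mean end point: [1. 1.]
```

Averaging smooths out the small variations between individual demonstrations; the resulting mean motion is what the robot would then execute.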

In interactive reinforcement learning, on the other hand, robots use feedback gained through interaction with humans to iteratively improve what they have previously learned. Humans can evaluate robots during the execution of their tasks. In this way, robots can also learn their users' personal preferences for task execution. Feedback can either be given explicitly, for example via tablet or voice input, or robots can learn through implicit feedback, i.e. by how their behavior influences human behavior or the success of task execution.
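The feedback loop just described can be sketched as a tiny value-learning example. Everything here (action names, learning rate, reward values) is an illustrative assumption: the robot chooses among candidate ways of executing a task and nudges a value estimate toward each explicit human rating (+1 approval, -1 disapproval).

```python
import random

class FeedbackLearner:
    """Minimal interactive-RL sketch: value estimates per action,
    updated from explicit human feedback."""

    def __init__(self, actions, learning_rate=0.2):
        self.values = {a: 0.0 for a in actions}
        self.lr = learning_rate

    def choose(self, explore=0.1):
        if random.random() < explore:                 # occasionally try something new
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)  # current favourite

    def give_feedback(self, action, reward):
        # Move the value estimate a step toward the human's rating.
        self.values[action] += self.lr * (reward - self.values[action])

learner = FeedbackLearner(["hand over fast", "hand over slowly"])
for _ in range(20):  # this user consistently prefers the slow handover
    learner.give_feedback("hand over slowly", +1.0)
    learner.give_feedback("hand over fast", -1.0)
print(learner.choose(explore=0.0))  # "hand over slowly"
```

Implicit feedback would replace the explicit ratings with signals derived from the human's behavior (e.g. whether they accept the handed-over object), but the update rule stays the same.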

Human error sources in learning

Adaptive robotic systems that learn through direct interaction with humans and can improve on what they have previously learned have great potential in many areas of application, provided the robots are safe. An important question in current research is therefore how robots and the algorithms they use can be protected against incorrect or undesired human demonstrations. In contrast to classically programmed robots, robotic systems that are capable of learning should, for example, be able to detect potential uncertainties or inconsistencies in human feedback. It is equally important that the robots cannot leave a previously defined core task area, even through human demonstrations.
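One simple way to guarantee that learned behavior stays inside a predefined core task area is a hard safety layer: whatever position the learning component proposes, it is projected back into an allowed workspace before reaching the robot. The limits below are made-up values for illustration.

```python
import numpy as np

# Predefined core task area as an axis-aligned box (assumed limits).
WORKSPACE_MIN = np.array([0.0, -0.5, 0.0])  # x, y, z in metres
WORKSPACE_MAX = np.array([0.8,  0.5, 0.6])

def enforce_workspace(command: np.ndarray) -> np.ndarray:
    """Clip a commanded position into the allowed core task area,
    regardless of what the learned policy or a demonstration proposed."""
    return np.clip(command, WORKSPACE_MIN, WORKSPACE_MAX)

# A faulty demonstration would send the arm far outside the workspace:
unsafe = np.array([1.5, 0.0, -0.2])
print(enforce_workspace(unsafe))  # [0.8 0.  0. ]
```

Because the clipping happens after learning, no human demonstration, however erroneous, can drive the robot outside the box; real systems combine such hard limits with velocity and force constraints.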

The development of safe and human-centered future learning algorithms therefore requires interdisciplinary research in cognitive science, robotics and machine learning. The aim is to understand how people want to give and receive demonstrations and feedback and to explore how the robots of the future can best learn from this.

Learning robots in action

  • Prof. Dr. Elsa Kirchner

  • Dipl.-Ing. Gunnar Bloss

  • Prof. Dr. Oskar von Stryk

  • Dr. Sirko Straube

  • Dr.-Ing. Armin Wedler

  • David Reger

  • Prof. Dr.-Ing. Sami Haddadin