Ms. Bittner, what fascinates you about your field of research?
Eva Bittner: The current challenges show that the design of AI systems is not only a technical question, but very centrally also a social and cultural one. Socio-technical system design takes the interactions between humans and technology in their organizational context into account to ensure that technology is used in a meaningful, human-centered way. If we do not want to leave the decisions about the rules by which, and the benefits for which, AI systems are designed and deployed solely in the hands of technology corporations, we need strong inter- and transdisciplinary research. What motivates me is that we often work with practice partners and real users on genuinely novel challenges. Working together in mixed teams at the interface of (business) informatics, psychology, human-computer interaction (HCI), ethics and other disciplines is incredibly enriching, because we learn from each other every day. It requires strong communication and teamwork skills, and an openness to engage with other perspectives.
How will humans and machines work together optimally in the future?
Eva Bittner: For me, optimal cooperation between humans and machines means putting human needs and capabilities at the center. Machines should be designed and integrated into work processes in such a way that they relieve people of repetitive tasks and enable and support them in using their strengths. I think we will see more and more so-called "hybrid intelligence" in the future, in which human and machine work are closely and interactively interwoven and humans and machines continuously learn from each other. For many knowledge work tasks, such as solving complex problems or developing creative solutions, full automation is not an option. Combining human judgment, empathy and creativity with machine speed and data analysis capabilities, however, can add real value here. How exactly the division of labor and the handovers between humans and machines play out in the concrete work process should be consciously designed with a view to issues such as transparency, trust, privacy and data protection, and responsibility for the results.
What are the challenges? How can they be solved?
Eva Bittner: AI systems, which are not always recognizable as such or transparent in how they work and what data they are based on, are entering more and more areas of activity, both professional and private. Their impressively convincing and sometimes human-like appearance (see ChatGPT) makes it increasingly difficult for us to realistically assess their actual capabilities and limitations and to muster an appropriate degree of trust, but also of critical caution. People tend to perceive IT systems as social actors and to attribute human behavior and intelligence to them. This can lead us, for example, to overestimate the substance of suggestions made by a generative AI system and to adopt them unchecked when they are presented eloquently. It can also lead us, in some cases, to disclose private or internal company information to a friendly chatbot because its use promises convenient support and time savings. We can meet these challenges by making AI systems themselves as trustworthy and transparent as possible, in line with ethical standards negotiated for the respective purpose, wherever we as researchers, developers or decision-makers can influence their development. But we also need to better understand how humans and machines function as teams, and how these dynamics differ from collaboration among humans or from the use of classic IT tools. On this basis, we can equip people to use AI systems competently and critically, and design human-machine collaboration holistically.
This interview is released for editorial use, provided the source is acknowledged (© Plattform Lernende Systeme).