Law and Ethics


Wanted: New regulations and open dialogue

As with virtually all technological innovations, self-learning systems raise new legal issues that must be addressed in a regulatory framework. These issues concern liability and the handling of personal data. The ethical and moral implications of surrendering human control to machines must also be evaluated.

The introduction of self-learning systems brings new opportunities but also risks, making it necessary either to establish new regulations or to adapt existing legislation. New issues, for example in liability law, will arise as learning systems increasingly take over tasks from humans. Self-learning systems have no legal personality of their own; legal regulations address natural and legal persons. Nevertheless, a malfunction of a self-learning system cannot always be traced back fully to human action, because such systems continue to acquire knowledge while in operation. It is therefore not always possible to determine how a particular learning outcome came about.

Other questions concern data and privacy protection, namely people's ability to control when and how much personal information is divulged or kept private. Many applications of self-learning systems, and much of the progress in the field, are based on using large amounts of data – some of it personal and personally identifiable – to acquire new knowledge or to train the systems. Self-learning systems support humans in making purchasing decisions, for example, or as assistance systems in the workplace. They could also be used for surveillance, or different data sets could be combined to reveal information about individuals against their will.

When self-learning systems take over tasks or make decisions with societal or ethical dimensions, they must also meet the relevant societal and ethical requirements. However, the systems are not able to make their own moral decisions or to judge the decisions they make against any moral compass. Ethical standards therefore focus on how self-learning systems are programmed and used. The criteria of fairness and non-discrimination in AI decisions must also be considered, as well as how to give these criteria a formal definition. AI-based decisions could discriminate against people even though this was not the intention of the systems' developers. This raises the question of how society can have a say in the ways in which self-learning systems are used. A worked example of one such formal definition follows below.
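To illustrate what a formal definition of fairness can look like, the following sketch computes the demographic parity difference – the gap in favourable-decision rates between two groups – one widely used (and debated) fairness metric. This is a minimal, hypothetical example: the function name, the toy data, and the choice of metric are illustrative assumptions, not drawn from the working group's recommendations or any regulation.

```python
from typing import Sequence

def demographic_parity_difference(decisions: Sequence[int],
                                  groups: Sequence[str]) -> float:
    """Gap in favourable-decision rates between the two groups present.

    decisions: 1 = favourable outcome (e.g. loan approved), 0 = unfavourable.
    groups:    group-membership label for each decision (e.g. "A", "B").
    """
    labels = sorted(set(groups))
    assert len(labels) == 2, "this sketch compares exactly two groups"
    rates = []
    for label in labels:
        outcomes = [d for d, g in zip(decisions, groups) if g == label]
        rates.append(sum(outcomes) / len(outcomes))
    return abs(rates[0] - rates[1])

# Hypothetical audit data: a gap of 0.0 would mean both groups
# receive favourable decisions at the same rate.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_difference(decisions, groups)
print(f"demographic parity difference: {gap:.2f}")  # 0.75 - 0.25 = 0.50
```

Which metric is appropriate, and what size of gap counts as discriminatory, are themselves normative choices – which is precisely why the formal definition of such criteria needs societal debate rather than being settled implicitly by developers.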

These issues are the focus of Working Group 3 (Law and Ethics) of the Plattform Lernende Systeme, headed by Frauke Rostalski (University of Cologne) and Jessica Heesen (Eberhard Karls Universität Tübingen).