Trust is good, oversight is better
Rapidly growing technical capabilities, however, demand a watchful eye for the risks that accompany them. Could AI cause mass job losses? How will consumer behavior change if more and more people follow the shopping recommendations of smart search engines? What happens to the media if AI contributes to the production of fake news, or if it does not dissolve ideological filter bubbles but rather expands and reinforces them? What could happen if governments use AI, for example for predictive policing, for drafting regulations, or for reducing the workload of the courts? And how should research and education respond to the opportunities and risks of AI? Which competences are particularly relevant for today's researchers and tomorrow's decision makers if AI is to be put to the best possible use for society?
These and similar questions were addressed in the TA-SWISS study by an interdisciplinary project team led by Markus Christen (Digital Society Initiative, University of Zurich), the Empa researchers Clemens Mader, Claudia Som and Lorenz Hilty, and Johann Čas (Institute for Technology Assessment, Austrian Academy of Sciences). The team developed its findings using methods such as targeted literature reviews, workshops and interviews involving more than 300 international experts.
Recommendations for research, education and policy
This work resulted in nine recommendations for the sectors examined: work, education and research, consumption, media and administration. In the education sector, for example, it is important not only to train experts who can develop and implement AI systems, but also to build the competences needed to judge the legal, ethical and social effects of AI. In sectors where the level of risk is unknown, the experts call for more research to identify those risks, ideally supported by university or third-party funding.
The experts also address the lack of transparency of AI systems and their potentially discriminatory behavior. They discuss possible control mechanisms for such systems, as well as legal questions arising from the use of AI, such as liability and data protection.