Artificial intelligence (AI) is increasingly deployed around us and offers large potential benefits, but there are growing concerns about its unethical use. Professor Anthony Davison, who holds the Chair of Statistics at EPFL, and colleagues in the UK have tackled these questions from a mathematical point of view, focusing on commercial AI systems that seek to maximize profit.
One example is an insurance company using AI to find a strategy for setting premiums for potential customers. The AI will choose from many possible strategies, some of which may be discriminatory or may otherwise misuse customer data in ways that later bring severe penalties for the company. Ideally, unethical strategies such as these would be removed from the space of possible strategies beforehand, but the AI has no moral sense, so it cannot distinguish ethical strategies from unethical ones.
In work published in Royal Society Open Science on 1 July 2020, Davison and his co-authors Heather Battey (Imperial College London), Nicholas Beale (Sciteb Limited) and Robert MacKay (University of Warwick), show that an AI is likely to pick an unethical strategy in many situations. They formulate their results as an “Unethical Optimization Principle”:
If an AI aims to maximize risk-adjusted return, then under mild conditions it is disproportionately likely to pick an unethical strategy unless the objective function allows sufficiently for this risk.
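The intuition can be illustrated with a toy Monte Carlo sketch (not the paper's model: the 5% base rate, the Gaussian return distributions, the unethical "edge" and the penalty are all illustrative assumptions). Even when only a small fraction of strategies is unethical, a naive optimizer picks one far more often than the base rate suggests, unless the objective function penalizes the associated risk:

```python
import random

random.seed(0)

N = 5_000           # strategies per simulated strategy space (assumed)
P_UNETHICAL = 0.05  # assumed base rate of unethical strategies
EDGE = 1.0          # assumed extra expected return of unethical strategies
PENALTY = 3.0       # risk adjustment subtracted from unethical strategies

def best_is_unethical(penalize: bool) -> bool:
    """Simulate one strategy space and report whether the optimizer's
    top-scoring strategy is unethical."""
    best_score, best_unethical = float("-inf"), False
    for _ in range(N):
        unethical = random.random() < P_UNETHICAL
        score = random.gauss(EDGE if unethical else 0.0, 1.0)
        if penalize and unethical:
            score -= PENALTY  # objective now "allows sufficiently" for the risk
        if score > best_score:
            best_score, best_unethical = score, unethical
    return best_unethical

TRIALS = 100
naive = sum(best_is_unethical(False) for _ in range(TRIALS)) / TRIALS
adjusted = sum(best_is_unethical(True) for _ in range(TRIALS)) / TRIALS
print(f"base rate of unethical strategies: {P_UNETHICAL:.0%}")
print(f"naive optimizer picks one:         {naive:.0%}")
print(f"risk-adjusted optimizer picks one: {adjusted:.0%}")
```

In this sketch the naive optimizer selects an unethical strategy in well over half of the simulated spaces, despite the 5% base rate, while the risk-adjusted objective almost never does.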
This principle can help risk managers, regulators and others to detect unethical strategies that might be hidden in a large strategy space. In an ideal world one would configure the AI to avoid unethical strategies, but this may be impossible because they cannot all be specified in advance. To guide the use of the AI, the article suggests how to estimate the proportion of unethical strategies and the distribution of the most profitable ones.
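Why inspecting the most profitable strategies is informative can be shown with a hypothetical sampling sketch (again not the authors' method: the base rate, the score model and the assumption that a sampled strategy can be labelled unethical are all illustrative). Because unethical strategies are assumed to have a slight edge, they are heavily over-represented in the upper tail:

```python
import random

random.seed(1)

P_UNETHICAL = 0.05  # assumed base rate in the full strategy space

def sample_strategy():
    """Draw one strategy: (risk-adjusted return, is it unethical?).
    Unethical strategies are assumed to have a slight edge in return."""
    unethical = random.random() < P_UNETHICAL
    return random.gauss(1.0 if unethical else 0.0, 1.0), unethical

strategies = [sample_strategy() for _ in range(20_000)]
strategies.sort(reverse=True)                # most profitable first

top = strategies[: len(strategies) // 100]   # top 1% by return
tail_rate = sum(unethical for _, unethical in top) / len(top)
print(f"unethical fraction overall:   {P_UNETHICAL:.0%}")
print(f"unethical fraction in top 1%: {tail_rate:.0%}")
```

Auditing only the top slice of the strategy space therefore surfaces a much higher concentration of problematic strategies than random inspection would.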
“Our work can be used to help regulators, compliance staff and others to find problematic strategies that might be hidden in a large strategy space. Such a space can be expected to contain disproportionately many unethical strategies, inspection of which should show where problems are likely to arise and thus suggest how the AI search algorithm should be modified to avoid them,” says Professor Davison. “It also suggests that it may be necessary to re-think the way AI operates in very large strategy spaces, so that unethical outcomes are explicitly rejected during the learning process.”
Professor Wendy Hall of the University of Southampton, known worldwide for her work on the potential practical benefits and problems brought by AI, said: “This is a really important paper. It shows that we can’t just rely on AI systems to act ethically because their objectives seem ethically neutral. On the contrary, under mild conditions, an AI system will disproportionately find unethical solutions unless it is carefully designed to avoid them.
“The tremendous potential benefits of AI will only be realized properly if ethical behavior is designed in from the ground up, taking account of this Unethical Optimisation Principle from a diverse set of perspectives. Encouragingly, the Principle can also be used to help find ethical problems in existing systems, which can then be addressed by better design.”