A new report from UCL School of Management MBA students and the consultancy firm Capgemini argues not only that businesses must implement artificial intelligence ethically, but also that this AI must be measured frequently to ensure it remains ethical. Produced at the UCL School of Management's Analytics Lab, the report explores why an ethical approach is required and examines the challenges of measuring AI ethics.
As artificial intelligence becomes more prevalent in everyday life, organisations are frequently exposed to new questions and issues regarding the ethical use of the technology. As the authors note, without a clear approach to AI ethics, the use of artificial intelligence would fall to business users and technical teams without a clear understanding of the risks involved or the means of mitigating them.
Measuring AI implementation is therefore essential, and these measures must accommodate all ways of implementing AI: by professional development teams and data scientists, through the procurement of software products or services that incorporate AI, or through 'no code/low code' tools, which give non-professional developers access to AI.
However, the authors also acknowledge that measuring AI ethics poses challenges, both theoretical and practical. Implementing AI may have unanticipated operational consequences, for example, and the measurement itself can be complex. Easy measurement mechanisms may not be readily available, which means that AI measurement may be sporadic rather than routine. Human judgement is therefore required, though this is problematic in itself, the authors note, as human judgement is not consistent and does not guarantee objectivity.
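The report does not prescribe specific metrics, but one way to make such measurement routine rather than sporadic is to automate simple checks. As a minimal, hypothetical sketch (not drawn from the report), the demographic parity difference is a common fairness measure: the gap in positive-outcome rates between two groups, where values near zero suggest similar treatment.

```python
def demographic_parity_difference(outcomes, groups, positive=1):
    """Absolute difference in positive-outcome rate between two groups.

    outcomes: list of model decisions (e.g. 1 = approved, 0 = declined)
    groups:   parallel list of group labels, exactly two distinct values
    """
    rates = {}
    for g in set(groups):
        # Collect the outcomes for this group and compute its positive rate.
        group_outcomes = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(1 for o in group_outcomes if o == positive) / len(group_outcomes)
    rate_a, rate_b = rates.values()
    return abs(rate_a - rate_b)

# Hypothetical model decisions and group labels for illustration only.
outcomes = [1, 0, 1, 1, 0, 1, 0, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(outcomes, groups))  # 0.5
```

A check like this could run on every batch of model decisions, turning one narrow aspect of ethical measurement into a routine, repeatable process, though, as the authors caution, no single automated metric removes the need for human judgement.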