A new report from UCL School of Management MBA students Ian Cooper and Wendy Kent, produced with the consultancy firm Capgemini, argues that explainability is key to the success of AI. Carried out at the UCL School of Management's new Analytics Lab, the report explains why people need to understand what AI is doing, and why, for the technology to succeed.
As AI and machine learning technologies advance, new issues arise regarding ethics and the use of these technologies. Explainability and understanding how the technologies work are key elements of the new regulatory environment emerging for AI, but AI and machine learning challenge many deeply embedded habits and assumptions about technology and its uses. The report shows that implementing AI without understanding these habits will cause unexpected consequences.
AI and machine learning technologies can be a huge asset to organisations, improving their processes and identifying issues previously unknown to them. However, the authors argue that for companies to successfully implement AI and machine learning technologies, people must understand how the technology works and why it has arrived at a specific conclusion.
They continue that the relationship with organisational knowledge works both ways. Machine learning may also contribute to organisational knowledge by finding patterns in the data that experts were not aware of. This may leave some people in the uncomfortable position of having their expertise challenged. Establishing trust in such cases is especially important when implementing AI: explainability should enhance human expertise, not threaten it.
The report suggests organisations should zoom out to consider the wider context in which explainability sits when using AI. The authors recommend considering five important themes: trust; value; lifecycle and knowledge; skills; and organisational change.
Read the full report for their top tips on avoiding the pitfalls of explainability.