Artificial intelligence continues to drive innovation in business, from enhancing diagnostic imaging to translating languages with greater accuracy than ever before. While advances in AI are helping a number of companies reduce costs and increase competitiveness, there is still a lot of hype around the technology. Gary Richardson, MD of Emerging Technology at 6point6, discusses the ethical considerations AI will require us to make.
Combined with a lack of real-world use cases, and a reluctance to engage in the ethical debate around AI without being drawn into hubris, this hype makes it difficult to put forward a compelling business case for AI.
While it’s unlikely that AI will cause the destruction of all humanity, the introduction of AI into the enterprise does give rise to a number of ethical questions and concerns. Although recent developments in data science and AI in industries such as financial services, agriculture and medicine promise significant benefits to companies, the public and society at large, they should be framed within a constructive debate about AI. Practical questions such as “how do we ensure AI is applied responsibly?” and “just because we can deploy AI, should we?”, through to philosophical ones like “what place do we want AI to have in our society?”, should be debated openly and transparently.
It will be essential for every organisation that deploys AI to have this debate as a business. However, we do need to recognise the subtle difference between ethics and controls. Issues such as model bias, accuracy and correlation are subject to risk and control frameworks, and have no place in the ethics debate.
Say we wanted to drive an increase in diversity across new job applicants at an organisation, so we train an AI algorithm to scan CVs, using the last 10 years of the company's hiring history as training data. In theory, this should work. However, if 90% of applicants over the last 10 years were white males, then the AI algorithm will naturally be biased towards white males and won’t help in increasing diversity. The AI needs to be trained on a wider data set than simply that which can be gathered from the company internally. This isn’t actually an ethical issue, though; it is one of risk and compliance. We can put numerous controls in place to ensure we have a controlled business process. But the true ethical question here is, just because we can use an AI algorithm to scan CVs, should we?
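The mechanism described above can be illustrated with a toy sketch. The data, the "golf" proxy token and the frequency-based scorer below are all hypothetical stand-ins for a real CV screener; the point is only that a model trained on skewed historical hires rewards the trait that dominates the data, not the skills we care about:

```python
from collections import Counter

# Hypothetical historical data: CVs of past hires and rejections.
# 90% of past hires share the token "golf" — a stand-in for any
# incidental trait of the dominant demographic group.
hired = [
    "python sql golf", "java golf leadership", "golf python testing",
    "golf sql devops", "golf java cloud", "golf python ml",
    "golf sql testing", "golf java devops", "golf python cloud",
    "python sql leadership",  # the rare hire without the proxy token
]
rejected = [
    "python sql", "java leadership", "python testing",
    "sql devops", "java cloud",
]

def token_freqs(docs):
    """Relative token frequencies — a crude stand-in for model training."""
    counts = Counter(tok for d in docs for tok in d.split())
    total = sum(counts.values())
    return {tok: n / total for tok, n in counts.items()}

hired_freq = token_freqs(hired)
rejected_freq = token_freqs(rejected)

def score(cv):
    """Higher score = more 'hire-like' under the historical data."""
    return sum(hired_freq.get(t, 0.0) - rejected_freq.get(t, 0.0)
               for t in cv.split())

# Two candidates with identical skills; one also mentions "golf".
plain = score("python sql leadership")
with_proxy = score("python sql leadership golf")
print(with_proxy > plain)  # True: the proxy token alone lifts the score
```

Identical skills, different scores: the model has learned the historical skew, exactly the outcome that risk and compliance controls (broader training data, bias audits) are designed to catch.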
As society becomes increasingly aware of the acuteness of global issues such as climate change and data privacy, boards will have to think about both the business efficiencies and the ethical impact of harnessing AI. What we must be careful to avoid is conflating issues of risk and controls with those of ethics. Risk and controls are the domain of engineers, while AI ethics should be debated at board level and as part of a wider societal conversation.
Undoubtedly, there are some fundamental concerns about the AI algorithms themselves and the data used to train them. In financial services, AI is already taking on a number of repetitive and fairly mundane tasks, such as automating insurance claims, freeing up employees to take on more complex work that requires critical thinking and creativity. However, this involves relying on an AI to make decisions that can have a significant impact on people’s lives.
While important, it is essential that we don’t get sidetracked by worrying about whether a model’s results are 100% transparent and can be analysed to ensure they align with the problem at hand. Vast amounts of resources are being poured into model explainability, but this is essentially an engineering challenge to be solved, not an ethical one.
When thinking about whether to deploy AI in the enterprise, from an ethical standpoint we actually need to ask a very simple question first: should we be doing this? In the case of CVs, it might actually make more sense for them to be analysed by humans, not an AI. It’s much easier for a human HR manager to decide whether to interview someone when they can take the context and layout of a CV into account, instead of relying on an AI algorithm that analyses only the text.
The ultimate ethical question then is, just because we can integrate AI into almost every facet of enterprise, should we?