Ethics in artificial intelligence

Whether you’re chatting with an online customer service bot, using your smartphone, applying for a job or loan, scrolling on social media, or riding in an autonomous car, we now find ourselves interacting with artificial intelligence (AI) in more ways than ever before. As those interactions increase, and as AI begins to play a role in more important parts of our lives, the need for responsible AI ethics and oversight becomes more pronounced.

Artificial intelligence applications are making decisions that impact our privacy, finances, safety, employment, health and much more. With so much at stake, it’s important that those who utilise this advanced technology understand the risks, have a plan for mitigating them, abide by a responsible code of ethics and are held accountable if they fail to do so.

Research indicates the global AI software market is expected to expand from $10 billion in 2019 to $125 billion by 2025. With organisations making sizeable financial investments in this field, the application and responsible use of AI is increasingly relevant for the internal audit profession. Over the last few years, The Institute of Internal Auditors (IIA) has produced several thought leadership pieces on artificial intelligence, including articles on understanding and auditing AI, and a three-part research report.

The need for AI strategy and standards

From an internal audit perspective, everything starts with the organisation’s AI strategy. Without a well-developed and frequently updated strategy, an organisation is at considerable financial and reputational risk if something should go wrong. Before AI can – or rather, before it should – be utilised, organisations must not only develop such a strategy, but also the robust governance needed to execute that strategy.

Yet a significant challenge stands in the way – the lack of uniform standards. The technology is advancing and being implemented faster than standards can be developed to guide it. Several organisations – including the International Ethics Standards Board for Accountants (IESBA), the World Health Organisation (WHO) and tech companies such as Microsoft, Google and Twitter – have developed, or are currently developing, standards focused on specific areas of AI, including data privacy, ethical use and technical design. While undoubtedly helpful, these piecemeal efforts are likely to result in a patchwork of guidance that is difficult to navigate and varies widely across industries, organisations and geographies.

As with any technological development, appropriate safeguards are needed to protect privacy, ensure accuracy and prevent misuse. The first step is to ensure users cannot misuse AI’s capabilities, whether intentionally or unintentionally. But here’s where AI gets particularly interesting: we then have to go several steps further than we would with a more traditional technology. We need to consider how to protect the data and privacy of those who may not even be aware they’re interacting with AI. And, going further still, because artificial intelligence is by its very nature only as good as the data that informs its algorithms, how do we account for the biases and flaws in those algorithms to ensure accuracy and fairness?

If we accept that artificial intelligence is only as good as the inputs with which it’s programmed, then we should think of AI outputs not as ‘indisputable truths’ but rather as ‘predictions’ or ‘recommendations’ – and those outputs should be verified before being acted upon.

For example, there are clear benefits to a company utilising a bot to screen hundreds of job applications – automating that process frees up your HR team to focus on higher-value services, such as training and developing your current workforce. But that bot is only as useful as the guidelines with which it was programmed. It has no way, for example, of overcoming the programmer’s unconscious biases, and it could also struggle to measure a candidate’s emotional intelligence or other intangible traits. How many qualified potential employees might that company be missing out on because of faulty parameters or unconsciously biased keywords? And, if the company has no human oversight to back up the newly automated process, how long will it be before this flaw is even noticed? These are just some of the considerations that should be mapped out as part of an organisation’s AI strategy.
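
To make that flaw concrete, below is a minimal, hypothetical sketch of such a screening bot in Python. The keywords, weights and threshold are invented purely for illustration – no real product is described – but they show how a programmer’s parameter choices flow straight into the outcome, and why each output should be treated as a recommendation for human review rather than a verdict.

```python
# A minimal, hypothetical sketch of a keyword-based CV screener.
# The keyword list, weights and threshold are illustrative assumptions,
# not any real product's configuration. Whatever bias sits in these
# parameters flows directly into the screening outcome.

KEYWORDS = {
    "python": 2.0,
    "leadership": 1.5,
    "agile": 1.0,
    # A seemingly neutral keyword choice can encode bias: weighting
    # jargon, or a particular university's name, would penalise equally
    # qualified candidates who simply phrase things differently.
}
THRESHOLD = 2.5  # arbitrary cut-off chosen by the programmer


def score_application(text: str) -> float:
    """Sum the weights of the keywords found in an application."""
    lowered = text.lower()
    return sum(weight for kw, weight in KEYWORDS.items() if kw in lowered)


def screen(applications: list[str]) -> list[dict]:
    """Return a recommendation per application - never a final verdict."""
    results = []
    for text in applications:
        score = score_application(text)
        results.append({
            "score": score,
            "recommendation": "advance" if score >= THRESHOLD else "reject",
            "needs_human_review": True,  # a person confirms or overrides
        })
    return results


if __name__ == "__main__":
    sample = [
        "Python developer with leadership experience in agile teams.",
        "Seasoned engineer; mentored junior staff and shipped reliable systems.",
    ]
    for result in screen(sample):
        print(result)
```

Run as written, the second applicant – plainly capable – scores zero and is marked ‘reject’, purely because their wording doesn’t match the expected keywords. That is exactly the kind of candidate a fully automated process would silently discard, which is why the sketch flags every result for human review.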

Protecting privacy

Even if you’re not a business leader, AI touches your life in plenty of ways, including the facial recognition many of us use to unlock our smartphones, the recommendations you see on your streaming services, and much more.

Artificial intelligence not only helps automate processes, but it also influences the products we see and the decisions we make. Many of us have had a conversation with a friend – let’s say you talked about wanting to take a Caribbean cruise – and then noticed that your social media newsfeeds and online searches started showing ads for cruise lines and vacation packages. With smartphones, virtual assistants and other smart technologies integrated into so much of our daily lives, we’re never far away from AI or from its impact. So, what happens to all that data being collected and used to streamline applications, process requests and send you targeted recommendations?

According to Statista, the total amount of data created, captured, copied and consumed globally reached 64.2 zettabytes in 2020 – that’s more than 64 trillion gigabytes. By 2025, global data creation is projected to grow to more than 180 zettabytes. That’s a lot of information to protect.

With so much of our personal information being collected and stored, it’s vital that the companies that build and utilise these technologies take appropriate steps to protect your privacy, safely store your information and ensure your security.

A consistent code of ethics

While the need for ethics in AI is clear, how those ethics are determined is less so. Ethics can be a shifting and hard-to-define concept, varying significantly across cultures, societies and value systems. Although there have been many attempts at drafting ethical AI guidelines, what remains unclear is whether everyone – regardless of demographic, industry, socioeconomic status, culture or religion – can ever agree on those guidelines.

What’s ethical to citizens in the US and Europe may not be so to those in other countries. Protection from facial recognition may be important to people in countries that regard privacy as a human right, but matter far less to those in countries more accustomed to surveillance.

Tone at the top

With or without a formal global code of ethics or regulatory oversight, it’s our responsibility, as leaders, to ensure the ethical application of AI, just as it is for any technology, product or service our organisations provide. Until consistent standards are in place, we have to rely on the individual ethics of the leaders who choose when and how this technology is applied. It is our job to ensure our organisations have a robust AI strategy that safeguards accuracy, privacy and safety.

Closing recommendations

Artificial intelligence offers endless possibilities for efficiency, security, wellbeing and more. The long-term benefits of this technology, as well as its limitations, are ultimately up to those who determine its use. As society tests and implements new AI capabilities, we should keep a few guidelines in mind:

Have a plan – Organisations that utilise AI must have a plan in place to govern its use, appropriate measures/checkpoints to ensure accuracy and identify violations, and a strategy for addressing any issues that might occur

Exercise judgement – We cannot assume that technology will always arrive at the correct solution. It’s vital that we have appropriate safeguards, human verifications and mechanisms in place to confirm or challenge AI decisions (one such mechanism is sketched after this list)

Push for consistent standards and ethics – While we’d like to believe that organisations have the public’s best interests at heart, we know that’s not always the case. Any time there exists a tool that could serve the greater good, there also exists the potential for someone to misuse that tool for their own benefit. Regulatory requirements, oversight, and an AI-specific code of ethics are essential for the responsible adoption of this promising technology

Commit to continuous learning – Technology continues to bring us further into a world of ‘unknowns’: unknown possibilities, risks, unintended consequences and more. The further we venture into these unknowns, the more cautious we need to be. Technological innovation still involves a great deal of trial and error, so as AI brings about more unknowns and ‘never-before-seen scenarios’, we need to adopt a mindset of humility and continuous learning. We must admit that we don’t know everything about how this capability could be utilised and, as we learn more, we must be willing to address errors, amend our guidelines and improve processes so that our safeguards continue to expand in parallel with our body of knowledge.
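
As a companion to the ‘exercise judgement’ recommendation above, here is a minimal Python sketch of one such safeguard: a confidence-threshold gate that escalates uncertain AI decisions to a person instead of applying them automatically. The model, threshold and review step are all illustrative stand-ins invented for this example, not a prescription.

```python
# A minimal, hypothetical human-in-the-loop safeguard. 'model_predict',
# the threshold and the review step are stand-ins invented for
# illustration; substitute whatever model and escalation policy apply.

from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.90  # illustrative policy choice


@dataclass
class Decision:
    outcome: str        # what the model recommends
    confidence: float   # the model's self-reported certainty, 0.0-1.0
    decided_by: str     # 'model' or 'human'


def model_predict(case: str) -> tuple[str, float]:
    """Stand-in for a real model; returns (recommendation, confidence)."""
    # Toy heuristic purely so the sketch runs end to end.
    return ("approve", 0.95) if "complete" in case else ("reject", 0.60)


def ask_human(case: str, suggestion: str) -> str:
    """Stand-in for routing a case to a human reviewer's queue."""
    print(f"Escalated for human review: {case!r} (model suggested {suggestion!r})")
    return suggestion  # in practice, a person confirms or overrides here


def decide(case: str) -> Decision:
    outcome, confidence = model_predict(case)
    if confidence < CONFIDENCE_THRESHOLD:
        # Treat a low-confidence output as a recommendation, not a verdict.
        return Decision(ask_human(case, outcome), confidence, "human")
    return Decision(outcome, confidence, "model")


if __name__ == "__main__":
    for case in ["application complete", "application missing documents"]:
        print(decide(case))
```

The threshold itself is a governance decision: set it too low and flawed outputs go unchallenged; set it too high and the automation saves no one any time. That trade-off belongs in the organisation’s AI strategy, not in a developer’s default.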
