By Dr Andrea Bonime-Blanc – Founder and Chief Executive Officer of GEC Risk Advisory
In the recent past – especially in the last year – the world has witnessed a barrage of articles, pronouncements, edicts, warnings, promises and alarms about the imminent, pervasive, liberating and dangerous nature of artificial intelligence (AI).
This article examines the potential reputation risks and opportunities associated with the AI phenomenon. To do so effectively, we deploy an AI environmental, social and governance (ESG) lens, which helps us to understand and categorise AI ESG risk and opportunity, provide a snapshot of who the key actors and stakeholders are and suggest some of the questions that management and the board should be asking.
The future of AI is now. We haven’t seen anything yet – we learn daily and with lightning speed what AI’s actual and potential short, medium and long-term impacts are and might be on every aspect of society, economy, politics, science and even biology. Some forms of AI – machine learning (ML) and deep learning (DL) – are still in their early stages but are progressing rapidly. See Table 1 and Figure 1 for some basics on AI, ML and DL and on how they interrelate with one another.
The ‘big five’ AI tech actors and key emerging AI stakeholders
The big five AI technology companies – Amazon, Facebook, Google, Apple, Microsoft – not to mention other key market players globally (especially in China) have deep challenges and amazing opportunities, with technological breakthroughs and improvements occurring daily. The massive change, together with its breakneck speed, could become a runaway train without proper attention from the public, governments, academia and business.
It is incumbent on all of us – but in the business world especially on the powerful tech firms – to shape the development of AI in all its forms and nuances in such a way that the runaway train does not crash to the detriment of the many existing and emerging stakeholders. Instead, we need to find ways to harness the energy of AI for the good of society.

The tech firms’ biggest AI ESG reputation risk lies in designing flawed algorithms with built-in biases or unethical or illegal choices that impact stakeholders adversely, or worse. Examples range from autonomous cars designed to make morally questionable choices in the case of a potential accident, to algorithms created by a thin sliver of society (mostly young, mostly white, mostly western, mostly male engineers) that embed bias and potential discrimination into every aspect of business and social life, with the very real potential of perpetuating and worsening stereotypes and inequalities.

The big five AI tech firms’ greatest opportunity to enhance their reputation is to lead in designing and creating responsible AI products and services from their inception. Doing so would include having cross-functional, deeply diverse and expert AI ethics committees or councils, made up of internal and external leaders, to guide the creation of ethical and socially responsible products and services. Two of the big five – Google and Microsoft – have started to do just this. This is what it means to embrace reputation opportunity. Figure 2 below highlights just a few of the key stakeholders with important expectations of, investments in, or potential concerns about, AI.
Reputation risk and opportunity in the AI context
Before we tackle a conceptual understanding of AI reputation risk and opportunity, I would like to provide a basic definition of ‘reputation risk’ from my book The Reputation Risk Handbook: Reputation risk is an amplifier risk that layers on or attaches to other risks – especially ESG risks – adding negative or positive implications to the materiality, duration or expansion of the other risks on the affected organisation, person, product or service.
In my Deploying Reputational Risk 2.0 article in Ethical Boardroom last year, I provided an update, with more context and detail, on the importance of reputation risk in our era of hyper-transparency, super-connectivity, fake news and cyber war, which may help the reader understand what reputation risk is all about. So, what then is AI reputation risk? I would offer the following: AI reputation risk occurs when the underlying AI ESG risk of an entity (that is creating or properly integrating purchased AI – or not creating or properly integrating purchased AI when it should be for market and/or other strategic reasons) is not understood or properly identified, managed or mitigated when it can or should be.
“It is incumbent on all of us – but in the business world especially on the powerful tech firms – to shape the development of AI in all its forms and nuances and in such a way that the runaway train does not crash to the detriment of the many existing and emerging stakeholders”
Conversely, AI reputation opportunity would occur under the following circumstances: AI reputation opportunity occurs when the underlying AI ESG issue or risk of an entity (that is creating or properly integrating purchased AI – or not creating or properly integrating purchased AI when they should be for market or other strategic reasons) is well-understood and properly identified, managed and mitigated, thus providing the entity with a reputation value creation opportunity.
AI & ESG: What’s the connection?
Table 2 (below) provides an overview of some of the ESG risks associated with AI that my co-author Anastassia Lauterbach and I gleaned in The Artificial Intelligence Imperative, several examples of which are also provided in the book. Companies – both executives (including chief risk officers) and boards – should be thinking about which of these ESG issues (risks and opportunities) are relevant to their businesses so that they can chart the important AI-related ESG risks and opportunities that apply to their business and strategy.
When analysing possible AI reputation risk from an ESG standpoint, executives and boards should be asking these basic questions:
1. What are the environmental risks and opportunities associated with the AI that we have or need in our company?
2. What are the social risks and opportunities associated with the AI that we have or need in our company?
3. What are the governance risks and opportunities associated with the AI that we have or need in our company?
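One simple way to operationalise these three questions is to record the answers in a risk register. The sketch below is a minimal, hypothetical illustration in Python – the category codes, fields and severity scale are my own illustrative assumptions, not a framework prescribed by the article or the book:

```python
from dataclasses import dataclass, field

# Hypothetical AI ESG risk/opportunity register.
# Categories ("E", "S", "G"), the 1-5 severity scale and all sample
# entries are illustrative assumptions, not a prescribed framework.

@dataclass
class AIESGItem:
    category: str     # "E", "S" or "G"
    description: str
    kind: str         # "risk" or "opportunity"
    severity: int     # 1 (low) to 5 (high), illustrative scale

@dataclass
class AIESGRegister:
    items: list = field(default_factory=list)

    def add(self, category, description, kind, severity):
        self.items.append(AIESGItem(category, description, kind, severity))

    def top_risks(self, n=3):
        """Return the n highest-severity risk entries."""
        risks = [i for i in self.items if i.kind == "risk"]
        return sorted(risks, key=lambda i: i.severity, reverse=True)[:n]

register = AIESGRegister()
register.add("S", "Biased training data in a hiring algorithm", "risk", 5)
register.add("G", "No board-level AI ethics oversight", "risk", 4)
register.add("E", "AI-optimised energy use in data centres", "opportunity", 3)

for item in register.top_risks():
    print(item.category, item.description, item.severity)
```

A register like this gives management and the board a shared artefact to review: each E, S and G answer becomes an entry, and the highest-severity risks surface first for discussion.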
An expansive and useful ESG issue table was recently provided by MSCI (see Figure 3).
Summarised below is a simple but, I believe, useful way for management and boards to capture the essence of AI ESG reputation risk and AI ESG reputation opportunity:
AI, ESG, risk and opportunity
Next are two illustrations of AI ESG risk and opportunities in the context of a crisis event or a value-creation situation.
AI ESG reputation risk example: traditional healthcare company purchases new AI product

A traditional healthcare company with little experience in deploying AI acquires an AI program from a relatively new player whose product may be lauded but lacks the track record and the quality and safety protocols necessary to properly protect privacy data from cyber hacking.
Such a situation would present an AI privacy risk or cybersecurity risk (which would fall under the social (‘S’) or governance (‘G’) categories of ESG). Because the quality or effectiveness of the AI program was not properly triangulated prior to its acquisition (maybe because management was in a hurry to adopt anything, or maybe because it did not get good advice), the situation leads not only to financial risk but also possibly to reputational risk affecting a variety of stakeholders adversely.
AI ESG reputation opportunity example: sophisticated data processing company acquires vetted AI product

An AI ESG reputation opportunity may arise when a more sophisticated data processing company, with serious cybersecurity protections and protocols already in place (thanks to its more advanced enterprise risk management (ERM) system), acquires a vetted and tested AI program that does a proper and expected job of protecting privacy data.
When a cyberattack occurs, because the appropriate security protections, protocols and crisis management are in place, the crisis is mostly averted and negative consequences are largely avoided or mitigated. In the process, the company may not only avert the most serious downsides of the cyber hack but also gain the greater confidence and trust of its key stakeholders – consumers, third parties, the public in general – translating into greater reputational and even financial value creation.

While some of the concepts outlined in this article are simple given the complexity of the topic, the intent of this whirlwind tour of AI ESG risk and opportunity is to signal to management and the board that they need to be industrious about examining their AI ESG risk and opportunity, taking into account all environmental, market and strategic factors that apply to their company.
It is beyond the scope of this article to delve into the details of potential risk and opportunity at each level of AI, but it is possible to state generally that when companies and other types of organisations have some form of ERM in place and incorporate a taxonomy of possible AI ESG risks and opportunities into this ERM system, they will be best positioned to compete and create value, both reputational and financial, for their key stakeholders.
The reverse is true as well: when entities are not well prepared to understand their ESG issues and risks, do not have an appropriate risk management system in place (let alone an ERM system) and don’t factor environmental situational awareness into their business strategy, they are vulnerable to incurring not only a variety of risks but also AI ESG risk. This risk can range from small incidents all the way up to existential risks associated, for example, with complete digital disruption in an industry by a competitor that better understands its overall ERM and particular AI ESG-related risks and opportunities.
The bottom line is that every entity – no matter how big or small, or what sector it is in – needs to have some form of risk management (preferably ERM), including a serious understanding of all ESG issues, where AI risks and opportunities are considered and a reputation risk analysis is layered on top of the ESG and AI ESG risk consideration.
The central and most critical role of leaders on AI today – whether they are business people (both executives and board members), elected officials, researchers, inventors, investors or academics – is to quickly gain a grasp of the basics of AI and to invest thought and effort now, up front and for the long term, specifically focussed on the ethical and socially responsible design and development of AI products and services, as well as products and services containing AI. Hence the importance of also understanding the reputation risks and opportunities that exist side by side with AI ESG issues.
About the Author:
Dr. Andrea Bonime-Blanc is founder and CEO of GEC Risk Advisory, a strategic governance, risk, reputation and ethics advisor to business, NGOs and government. A former senior executive at Bertelsmann, Verint and PSEG, she is author of The Reputation Risk Handbook (2014), The Artificial Intelligence Imperative (April 2018) and Gloom to Boom: How Leaders Transform Risk into Resilience and Value (late 2018). She serves as Independent Ethics Advisor to the Financial Oversight and Management Board for Puerto Rico, start-up mentor at Plug & Play Tech Center, life member of the Council on Foreign Relations and faculty at the NACD, NYU, IEB and Glasgow Caledonian University. She tweets as @GlobalEthicist.
1. For a deeper dive on what my co-author, Anastassia Lauterbach, and I call the ‘AI Imperative’, see our forthcoming book, The Artificial Intelligence Imperative: A Practical Roadmap for Business. Praeger Publishers: April 2018. http://bit.ly/2kKhDEu
2. Andrea Bonime-Blanc. The Reputation Risk Handbook: Surviving and Thriving in the Age of Hyper-Transparency. UK: Greenleaf, 2014.
3. Adapted from: Lauterbach & Bonime-Blanc. The Artificial Intelligence Imperative.