Andrei Rogobete: Generative AI is Transforming Business – What are the Moral Implications?

A recent report by McKinsey & Co. addresses the impact of generative AI on the future of business. It makes several startling predictions:

  • Generative AI’s impact on productivity could add trillions of dollars in value to the global economy. Generative AI could add the equivalent of $2.6 trillion to $4.4 trillion annually across the 63 use cases we analyzed—by comparison, the United Kingdom’s entire GDP in 2021 was $3.1 trillion.
  • About 75 percent of the value that generative AI use cases could deliver falls across four areas: customer operations, marketing and sales, software engineering, and R&D. The acceleration in the potential for technical automation is largely due to generative AI’s increased ability to understand natural language, which is required for work activities that account for 25 percent of total work time. Thus, generative AI has more impact on knowledge work associated with occupations that have higher wages and educational requirements than on other types of work.
  • Generative AI can substantially increase labor productivity across the economy, but that will require investments to support workers as they shift work activities or change jobs. Generative AI could enable labor productivity growth of 0.1 to 0.6 percent annually through 2040, depending on the rate of technology adoption and redeployment of worker time into other activities.
  • Generative AI has the potential to change the anatomy of work, augmenting the capabilities of individual workers by automating some of their individual activities. Current generative AI and other technologies have the potential to automate work activities that absorb 60 to 70 percent of employees’ time today.

What are some of the moral and ethical considerations of these potential changes?

The first issue is fairness and transparency. How should we integrate generative AI solutions in a manner that is conducive to healthy competition whilst safeguarding favourable outcomes for both internal and external stakeholders (i.e. employees and consumers)? The symbiotic relationship between worker and machine that generative AI entails also raises questions of authenticity: at what point does work created (or completed) with the assistance of AI tools become plagiarism? Within academia, for instance, the use of AI is considered plagiarism – though not in the traditional sense. AI-generated content is not ‘stolen’ from someone else, but it does represent work that the individual has not written or created themselves.

A second problem is the issue of privacy and data gathering. This carries deep implications for the evaluation and monitoring of employee performance. Will the early adopters of AI tools (in areas such as decision-making or process optimisation) gain an unethical competitive advantage? Companies need to think very carefully about striking the right balance between productivity gains, employee training and the use of private data. Indeed, determining what exactly constitutes private data is a problem in and of itself. Businesses need to develop an ethical, values-based approach to handling the sensitive data that AI tools will invariably generate in the future. Such an approach must take heed of human dignity and individual freedom.

A third and final issue raises a more subtle, yet equally profound question: will the synergy with conversational AI strengthen or diminish our humanity? Particularly within a business setting, we must think about the ways in which new dimensions of work will impact interpersonal relationships. The involuntary and subconscious anthropomorphisation of many AI tools – treating them as if they had human characteristics – will undoubtedly have some bearing on the changing cultural environment of a firm. Employee development in the long run will be intrinsically linked to the use of, and reliance upon, AI – particularly within data-intensive fields.

A set of moral values should therefore be embedded within the AI programmes themselves. Philosopher Nick Bostrom argues that we need to make sure AI systems learn “what we value” and are “fundamentally on our side” (BBC News). It is perhaps also not a bad idea to start distinguishing between what is created by AI and what is not. This applies to text, images, voice-overs and even video. Watermarking AI-generated content is something that OpenAI is currently exploring within ChatGPT – the success of which remains to be seen.

We are faced with distinct areas of enquiry that provide much food for thought yet remain largely unexplored – much work needs to be done by AI practitioners and those in the social sciences to address them. One of the most immediate challenges facing policymakers today is establishing a regulatory framework with the dual goal of protecting users whilst also promoting innovation. More on this to come.

Andrei E. Rogobete is Associate Director at the Centre for Enterprise, Markets & Ethics. For more information about Andrei, please click here.

Image used under CC Licence