A recent report by McKinsey & Co. addresses the impact of generative AI on the future of business and makes several startling predictions.
What are some of the moral and ethical considerations of these potential changes?
The first issue is fairness and transparency. How should we integrate generative AI solutions in a manner that is conducive to healthy competition whilst safeguarding favourable outcomes for both internal and external stakeholders (i.e. employees and consumers)? The collaborative nature of working with generative AI also raises questions of authenticity: at what point does work created (or completed) with the assistance of AI tools become plagiarism? Within academia, for instance, the use of AI is considered plagiarism – though not in the traditional sense. AI-generated content is not ‘stolen’ from someone else, but it does represent work that the individual has not written or created themselves.
A second problem is privacy and data gathering, which carries deep implications for the evaluation and monitoring of employee performance. Will early adopters of AI tools (in areas such as decision-making or process optimisation) gain an unethical competitive advantage? Companies need to think very carefully about striking the right balance between productivity gains, employee training and the use of private data. Indeed, determining what exactly constitutes private data is a problem in and of itself. Businesses need to develop an ethical, values-based approach to handling the sensitive data that AI tools will invariably generate – an approach that takes heed of human dignity and individual freedom.
A third and final issue raises a more subtle, yet equally profound question: will our synergy with conversational AI strengthen or diminish our humanity? Within a business setting in particular, we must think about how new dimensions of work will affect interpersonal relationships. The involuntary, often subconscious anthropomorphisation of many AI tools (treating them as though they had human characteristics) will undoubtedly have some bearing on the changing cultural environment of a firm. Employee development in the long run will be intrinsically linked to the use of, and reliance upon, AI – particularly within data-intensive fields.
A set of moral values should therefore be embedded within the AI programs themselves. Philosopher Nick Bostrom argues that we need to make sure AI systems learn “what we value” and are “fundamentally on our side” (BBC News). It is perhaps also not a bad idea to start distinguishing between what is created by AI and what is not – whether text, images, voice-overs or even video. Watermarking AI-generated content is something that OpenAI is currently exploring within ChatGPT – the success of which remains to be seen.
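To make the idea of watermarking more concrete: one approach discussed in the research literature (notably Kirchenbauer et al., 2023) is to nudge a model’s word choices towards a pseudorandom ‘green list’ that a detector can later check for statistically. OpenAI has not published the details of its own scheme, so the Python sketch below is purely illustrative – the toy vocabulary, bias strength and detection threshold are assumptions, not details of any real system.

```python
# Illustrative sketch of 'green list' text watermarking (after Kirchenbauer
# et al., 2023). NOT OpenAI's method; vocabulary, bias and threshold are
# assumptions chosen for demonstration only.
import hashlib
import math
import random

VOCAB = [f"tok{i}" for i in range(1000)]  # toy vocabulary (assumption)
GREEN_FRACTION = 0.5                      # half the vocabulary is 'green'
Z_THRESHOLD = 4.0                         # detection cut-off (assumption)

def green_list(prev_token: str) -> set[str]:
    """Pseudorandomly partition the vocabulary, seeded by the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * GREEN_FRACTION)))

def generate(length: int, bias: float = 0.9) -> list[str]:
    """Generate toy 'text', preferring green tokens with probability `bias`."""
    out = ["<start>"]
    for _ in range(length):
        greens = green_list(out[-1])
        if random.random() < bias:
            out.append(random.choice(sorted(greens)))
        else:
            out.append(random.choice(VOCAB))
    return out[1:]

def detect(tokens: list[str]) -> float:
    """Return a z-score: how far the green-token count exceeds chance."""
    hits = sum(
        tok in green_list(prev)
        for prev, tok in zip(["<start>"] + tokens, tokens)
    )
    n = len(tokens)
    expected = n * GREEN_FRACTION
    sd = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (hits - expected) / sd

watermarked = generate(200)
z = detect(watermarked)
print(f"z-score: {z:.1f} (flagged as AI-generated: {z > Z_THRESHOLD})")
```

The appeal of a scheme like this is that detection needs only the text and the seeding rule, not access to the model itself – though heavy editing or paraphrasing of the output can weaken the statistical signal, which is partly why the success of such efforts remains an open question.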
We are faced with distinct areas of enquiry that provide much food for thought yet remain largely unexplored – much work needs to be done by AI practitioners and those in the social sciences to address them. One of the most immediate challenges facing policymakers today is establishing a regulatory framework with the dual goal of protecting users whilst also promoting innovation. More on this to come.
Andrei E. Rogobete is Associate Director at the Centre for Enterprise, Markets & Ethics.