Ethical Machines by Reid Blackman is one of the more recent books that seek to make sense of the intersection between artificial intelligence, machine learning, and the moral questions raised by these technologies, all applied within the context of a company and its use of AI.
Reid Blackman is currently the CEO and founder of Virtue, an AI ethics consultancy that specialises in mitigating ethical risk in the implementation of artificial intelligence. Prior to this, Blackman was a professor of philosophy at Colgate University and the University of North Carolina, Chapel Hill.
Contrary to what the cover would suggest, Ethical Machines is concerned more with dispelling the myths surrounding the field of ethics than with a technical discussion of AI itself. This perhaps comes as no surprise, since the author’s background is primarily in philosophy, not computer science.
The book comprises seven chapters. Blackman makes extensive use of the first person and writes in an informal tone, so the book sometimes reads more like a personal diary than a specialised piece of literature (which, paradoxically, it is). Readers will no doubt have their own preferences, so this is not necessarily a negative feature.
Unlike most of the other books reviewed on this website, Ethical Machines is not aimed at the general public or even the inquisitive reader, but rather at the business community and, in particular, business leaders and managers who are trying to get their heads around implementing ethics in their firm’s use of AI.
The book sets out to dispel the myths and scepticism surrounding ethics as a subject, arguing that it is something that can be defined and even implemented in a quantifiable way. Blackman’s argument is that managers and those in decision-making positions struggle to implement ethics because they are trying to “…build Structure around something they still find squishy, fuzzy, and subjective. They’re doing a lot of AI risk mitigation, and while they may know a lot about AI and risk mitigation, they don’t know much about ethics” (page 14).
The book’s thesis therefore is that an organisation can never achieve a “comprehensive and robust Structure” if it fails to understand ethics – i.e. the “Content side of things” (Ibid). In this case “Structure” refers to a governance structure: the “policies, processes, role-specific responsibilities […] a set of mechanisms in place to identify and mitigate the ethical risks it may realise in the development, procurement, and deployment of AI” (page 16). “Content” on the other hand refers to the ethical risks that the company wants to avoid (Ibid).
Chapter I opens the discussion by attempting to transform the general preconception of ethics as something “squishy” or “subjective” into something concrete (pages 23-24). Blackman argues that in order to better understand ethics one must first ask a series of questions that most people would consider ethical, such as: “What is a good life?”, “Do people have equal moral worth?”, “Is it ever ethically permissible to lie?”, and so on (page 26).
Chapters II – IV deal primarily with three larger problems surrounding the field of AI. The first is the issue of bias in AI and the challenges that data interpretation brings. The second is “explainability” (page 61), which concerns the journey between the input data and the resulting output. The third and final issue is the problem of privacy in the use of AI and, more importantly, the interplay between AI, privacy, and ethics (page 87).
Chapters V – VII focus more on the Structure side of implementing an AI ethical risk programme. Chapter V looks at how to construct an AI ethics statement that actually changes behaviour rather than serving as another PR exercise. Chapter VI builds upon this, examining what an effective structure, or AI ethical risk programme, actually looks like within the firm. The author emphasises that “…there is no such thing as a viable and robust AI ethical risk program without leadership and ownership from the top” (page 162). Chapter VII ends the discussion by paying specific attention to the development team and its approach to implementing AI ethics in the product creation process. Blackman argues that rather than applying any particular moral theory, developers should instead focus on wrongs, i.e. “the avoidance of harming people” (page 185).
In conclusion, Ethical Machines is a curious addition to the literature on AI ethics. It will no doubt raise eyebrows amongst some readers, particularly those with a background in business who might be less convinced of the feasibility of Blackman’s approach. Some may doubt that trying to discover what is morally wrong will help clarify a firm’s ethical stance. This is particularly problematic when there are likely to be disagreements within the firm over where the ethical barometer sits on any given issue.
Nonetheless, the book offers plenty of food for thought. Those with an interest in how AI ethics can be better understood and applied within the context of a firm will likely find it a worthwhile read; those looking for a discussion of AI itself, however, would be better served elsewhere.
Ethical Machines: Your Concise Guide to Totally Unbiased, Transparent, and Respectful AI by Reid Blackman was first published in 2022 by Harvard Business Review Press (ISBN 1647822815, 9781647822811), 224pp.
Andrei E. Rogobete is the Associate Director of the Centre for Enterprise, Markets & Ethics.