As part of the new government’s effort to increase housebuilding and spur economic growth, Labour ministers plan to allow homebuilders to receive planning permission for projects currently blocked by nutrient neutrality rules, which require that new construction in areas with high levels of nutrients in waterways contribute no additional nutrients. Permission would be granted with so-called Grampian conditions (the name derives from a Scottish legal case) that allow homebuilding to begin subject to future off-site mitigation, rather than the status quo, which requires the mitigation to be worked out before homebuilding begins. This is one quick way for the government to address the fact that, with nutrient neutrality rules affecting much of the country, homes can only be built after the details of mitigation are settled. This is a huge administrative burden and is holding up something like 160,000 homes from being built. Few think that this will solve the issue, but it is a positive development and one of a number of efforts dealing with environmental concerns. Past efforts have failed to fix the issue and descended into rancorous debate, even though the specific pollution attributable to new homes is minimal.
The crux of the issue in question is that in recent years the environmental concerns surrounding nutrients like phosphates and nitrates in rivers have meant that because of new court rulings, new homes in a large proportion of the UK must mitigate all run-off that they would create.[1] Nutrient runoff into rivers causes algal blooms which consume oxygen and set off a chain of species die offs. European and British court rulings have widened the impact of the rules, including most recently applying the rules to projects which had already received planning permission.
This plan was suggested by Angela Rayner and Steve Reed last September when the last government attempted to overhaul the rules surrounding nutrient neutrality more comprehensively. That effort failed when the House of Lords defeated the Government’s plan to remove the legal requirements on homebuilders while increasing taxpayer funding of a more comprehensive scheme. At the time, the Labour Party was expected to support the reform but altered course just days before the vote. For those who haven’t been following the issue closely, much of the commentary on it in the main newspapers offers little insight and instead treats it as a simple choice between a good environmental outcome and a bad one. Rather than going through the tangled legal history—I’ll leave that to those who charge by the hour (e.g. Zack Simons or Simon Ricketts)—I want to focus on how this issue and the failure to fix it has been emblematic of a broader problem with environmental issues.
What is the Problem with the Status Quo?
The problem with the current situation, one recognized by many experts, is that it places an extraordinary burden on a socially useful function: residential development. Taking as a given that the levels of nutrient pollution outlined in the rules are sensible, it is reasonable to want to limit any pollution that might occur beyond that point. When rivers reach a point where added pollution is unacceptable—as Natural England claims to be the case in 74 local authorities across the UK including most of Norfolk, much of Wiltshire and Somerset, and the area around the Solent—it makes sense to stop it from getting worse. Yet in service of this worthy cause, the rules are estimated to be holding up over 160,000 homes (a number that will continue to grow).
In principle, it could be reasonable to stop the market from building more much-needed houses because once the harm of the marginal nutrients is considered there is no net value being created. However, the marginal pollution created by people living in new homes is tiny. In fact, the pollution directly created by all structures (i.e. including existing structures) is supposedly just 5% of the total.[2] Basic economics suggests that there are many ways to regulate such that valuable uses like homebuilding proceed while paying for the reduction in pollutants in whatever way can be done at the lowest cost.
In fact, this is what is supposed to be happening now. The rules are supposed to set a budget for the total amount of nutrients in an impacted area and allow new homes if the developers offset the added pollutants by the same amount. This should encourage bargaining between homebuilders and other polluters. Homebuilders should be able to pay agricultural users, who are the biggest polluters and who could reduce pollution at much lower abatement cost than homebuilders. This abatement could be achieved by farming less intensively, by using fewer polluting fertilizers or by letting land go fallow. Alternatively, homebuilders could pay for methods to capture agricultural producers’ pollutants, or pay water companies, whose disposal of wastewater is one of the key mechanisms by which disparate users’ waste ends up in waterways. The benefit of this type of regulation is that it creates market incentives to deal with the environmental problem at the lowest cost.
These are examples of the type of offsite mitigation schemes encouraged by the proposed reforms that (at a minimum) will net out the impact of the new development by reducing the relevant pollutants that new housing contributes. This would allow much-needed new homes at a much lower cost than catching all added nutrients via onsite mitigation, while also keeping rivers limited to the same level of nutrient runoff. The rules allow nutrient offsetting schemes (and some have been worked out) but they don’t work well in practice due to procedural barriers. There is no environmental or economic reason not to encourage this.
However, as Zack Simons writes:
Nutrient neutrality involves quantifying a ‘nutrient budget’ for both phosphorous and nitrogen, and then using either on or off-site mitigation measures to show that your scheme will not cause any net harm to the protected sites – see some guidance from Natural England here. Measures might include e.g. creating new wetlands, retrofitting sustainable urban drainage systems and making arable farmland fallow to reduce nitrates. But in many authorities, there simply is no standard nutrient neutrality strategy. Or no strategy at all. Very often, nutrient neutrality simply cannot yet be achieved – either viably, or at all.
As bad as the status quo which emerged from legal machinations is, perhaps more worrying is the fact that this issue, like many others, has become mired in unthinking partisan debate. Perhaps the worst offenders are the environmental groups whose interest in the issue suggests that they must realize the need for a better, more comprehensive regulatory system, but who instead depict the problem as a result of homebuilders’ actions.
Like many other countries in Europe, the UK is and has been facing a dramatic set of economic headwinds. Real reforms are needed to enable investment in green energy generation and transmission, but these too face opposition from those who would be expected to support them. Let us hope that this government can fix what the last couldn’t.
[1] The rules allow pragmatic schemes to mitigate nutrient pollution offsite, but the process for doing so is unnecessarily complicated.
[2] Baroness Willis of Summertown suggests that this number may be closer to 30%. But this makes little difference to the argument: the marginal addition would still be quite small, since the number of existing homes is far larger than the number of new homes held up.

John Kroencke is a Senior Research Fellow at the Centre for Enterprise, Markets and Ethics.
The company American Rounds is supplying vending machines from which gun owners can buy bullets, with machines currently available in food shops in the states of Alabama, Oklahoma and Texas. There are plans to expand this provision to states where hunting is popular, such as Louisiana and Colorado. Customers simply select the ammunition that they would like to buy using a touchscreen, scan their identification and collect their bullets below, the machine having used ‘built-in AI technology, card scanning capability and facial recognition software’ to match the buyer’s face to his or her ID and to ensure that he or she is over 18 years old.
The states in which such machines are available at present place no minimum age limit on the purchase of ammunition, do not require the vendor to keep a record of the purchaser, impose no licensing regime for the sale or purchase of ammunition and do not prohibit those disqualified from purchasing or owning firearms from buying ammunition (though federal laws might impose such a restriction, without necessarily obliging vendors to check whether a customer is in fact disqualified). It would therefore seem that in checking the ID of a purchaser and maintaining a record of the transaction, the machines provided by American Rounds arguably do more than state law requires. This might be for the purposes of ensuring that the machines are unquestionably within the law, or, by ensuring sales are made to adults only, it might be an exercise in reputation management – perhaps both – but it does mean that the machines are likely to be legally compliant when installed in other states where tighter restrictions may apply.
Artificial Intelligence, Risk and Trust
Without entering into the wider issue of gun ownership and its regulation, there are nonetheless moral questions regarding the provision of something so potentially dangerous by way of a vending machine. Can we be certain that the technology will always perform as it is supposed to? We might ask whether such machines are capable of discerning a forged ID from a genuine one. Moreover, will they identify buyers correctly? After all, numerous cases of wrongful arrest as a result of facial recognition technology have been documented (at least seven in the US last year), and it would appear that some technologies of this kind are prone to reflecting and perpetuating biases in the data with which they are trained. Whether the technology in American Rounds’ vending machines will accurately match the purchaser’s face to a photograph on an identity document is therefore a legitimate question. These concerns raise the much broader question of responsibility.
Decisions, Decisions…
Where there exists a right to own firearms and ammunition, there is no prima facie reason to disallow sales of ammunition by technological means, provided that the technology is reliable and ensures that sales are only ever made to the right people. What, then, is the role of people in such transactions? In a jurisdiction in which would-be buyers of ammunition were checked against a register of individuals disqualified from buying or owning guns, one would expect purchases to be carefully monitored – not least because the shop-owner’s livelihood is likely to be at risk for breaches of regulations. Such verification would doubtless be conducted by means of access to a database, such that the checks, while instigated and concluded by a human being who makes a decision ‘in store’, would nonetheless be dependent on technology. Ultimately, therefore, while relying on the information provided, the individual vendor would be responsible for the sale. The question, then, is whether this decision, based on the same information, might safely be deferred to a machine that uses facial recognition software and searches databases itself.
The risks involved are different, but a similar question can be asked about the sale of alcohol. Practices vary but in some countries, alcoholic drinks can be bought from vending machines, with the identification of the buyer being verified either by biometric data gained by scanning the customer’s fingerprint, or by simply supplying the purchaser with a wristband to show that his or her ID has been checked by a member of staff. In other countries, alcohol can only be bought at certain times from state approved vendors.
Decisions and Responsibility
Whether the sale is of alcohol or ammunition, are those businesses and states who continue to require and rely upon a human decision at some stage in the transaction doing so based on an unjustified or outdated mistrust of technology, or because they acknowledge that responsibility can ultimately only be attributed to free human beings, who recognise the potential consequences of error? The question, therefore, becomes one not only of trust, but also of responsibility in relation to technology. Where certain decisions are handed over to technology – which, of course, can be done more easily and more safely in some areas than in others – we are left with the matter of where responsibility lies, particularly when the technology ‘gets it wrong’. Other things being equal, the owner of a hunting supplies store will be liable if he or she sells a firearm to someone who is underage or disqualified from purchasing guns. Where does responsibility lie if a vending machine sells alcoholic drinks to children in error? Does this rest with the corporate owners or suppliers of the machine? If the machine is on licensed premises, is the landlord responsible? Perhaps there is a case for holding the suppliers of the technology used by the machine liable. This might not be a straightforward matter, as fatalities involving self-driving cars demonstrate: in one case, the back-up driver of the vehicle was convicted while the operating company was judged not to be criminally liable. When an algorithm becomes involved in decisions relating to sentencing for criminal misdemeanours or the provision of social security, where does responsibility for those decisions lie?
Regardless of the scenario, responsibility, as a moral category, must always reside with a person or (human) organisation, never a machine. Machines, however ‘intelligent’, are neither conscious nor free and as such, they are not moral agents. Where decisions are devolved to technology – and that technology ‘decides’ incorrectly – the challenge is for us to identify the responsible subject.

Neil Jordan is Senior Editor at the Centre for Enterprise, Markets and Ethics.
This article on Thatcher’s Monetarism and Timothy Lankester’s ‘Inside Thatcher’s Monetarism Experiment’ originally appeared at TheArticle
Sir Timothy Lankester has had a distinguished career of public service. He has served as Permanent Secretary at the Overseas Development Administration and the Department of Education; Director of the School of Oriental and African Studies; President of Corpus Christi College, Oxford; and Chairman of the Council, London School of Hygiene and Tropical Medicine. Inside Thatcher’s Monetarism Experiment (Policy Press, University of Bristol, 240 pp, £19.99) is about his time seconded as a member of the Administrative (not Government) Civil Service from H.M. Treasury to 10 Downing Street to serve as the Prime Minister’s Secretary for Economic Affairs, initially for James Callaghan (7 months) and then Margaret Thatcher (two and a half years).
The purpose of the book is to fill a gap in the economic history of the late 1970s and early 1980s and to show how Mrs Thatcher became infatuated with monetarism as an economic doctrine and implemented it as a laboratory experiment. This, however, came at a vast cost: nearly one and a half million people became unemployed and thousands of firms went out of business. The author writes as a “disbelieving monetarist”: namely, someone who went along with monetarism because markets believed in it, even though he personally, in the best traditions of the UK Civil Service, did not.
While I am critical of Lankester’s understanding and assessment of monetarism, I thoroughly enjoyed reading the book. Its highlights are his relationship with the Prime Minister and colleagues, as well as his comments on the role and views of politicians, civil servants, special advisers, Bank of England officials, journalists, academic commentators and others.
He writes with admiration and affection for Mrs Thatcher, even though his view of the role and potential of government and the failures and imperfections of markets was very different from hers. “Mrs Thatcher and I got on well from day one,” is the way he describes his working relationship with the Prime Minister. He found her a kind and generous boss, including being invited regularly to the study for a drink in the early evening or supper in the flat with Denis. He admired her in many ways, not least because, as he puts it, “she greatly valued those of us who worked most closely with her … there was a strong chemistry between us … the closeness of our relationship surprised me then and it surprises me to this day.” Yet he recognises that she had “a schizophrenic attitude to the Civil Service”, largely because of its ineffective delivery of services and its failure to get to grips with an increasingly bloated public sector.
The author makes clear that he approaches the subject with some personal history: he grew up in the lengthened shadow of the Great Depression, because his grandfather, a medical doctor, was forced to close his medical practice in 1930, leaving his father with no money to finish his schooling or attend university. In the Thatcher years, his wife’s family-run cotton textile manufacturing business in West Yorkshire was also forced to close.
This personal background and the great increase in unemployment from 1979 onwards leads him to a positively Augustinian confession: “Although only a minor player in this sad saga, I have always found it difficult to come to terms with the part I both wittingly and unwittingly played in it. … with hindsight, my admiration for her at a personal level, and my wish for her to succeed, made me work almost too hard on her behalf … I put to one side my reservations about monetarism and made myself see the world through her monetarist lens … I might have done more to push back on what Lawson would later call her ‘primitivist monetarism’ … this book is, in part, my attempt to achieve some kind of personal resolution.”
One issue which he addresses inadequately is why Mrs Thatcher, as someone proud of her training as a scientist, became so committed to the importance of monetary policy in controlling inflation. Her conviction was far from some beatific vision: it took the best part of a decade to develop.
When Mrs Thatcher became Prime Minister in May 1979, she inherited an economy described at the time as “the sick man of Europe” and suffering from “the British disease”: low productivity, high inflation, rising unemployment, stagflation, militant trade unions, strikes, and so on. The real pre-tax rate of return on trading assets in the UK manufacturing sector, which averaged around 10% in the 1960s, fell to 2.7% in 1974 and 1.9% in 1975; in textiles and metal manufacture, the real return was negative. Between 1974 and 1979, inflation averaged 16% annually, productivity was stagnant and the Government found it easier to borrow through the nationalised industries than on the Government’s own credit. The conventional Keynesian orthodoxy in terms of policy-making was also proving equally bankrupt: the long-run trade-off between unemployment and inflation had broken down; extra public spending or lower taxes could not be relied on to create more jobs; and a succession of incomes policies, voluntary and statutory, had proved ineffective in controlling inflation under previous Conservative and Labour governments.
After sitting around Edward Heath’s Cabinet table for four years, Mrs Thatcher had become convinced that if inflation was to be brought down, it needed some overall financial discipline. Subsequently, in 1976, while she was Leader of the Opposition, the IMF granted a loan to the UK (when it was effectively bust), but did so only on the condition that the Government would place ceilings on public sector borrowing and money supply growth (in the form of domestic credit expansion). In the academic world, distinguished scholars such as Hayek, Friedman, Johnson, Brunner, Walters and others had conducted extensive research which established that money supply growth affected prices in the long term, but in the short term, mainly output and employment. In addition, the explanation put forward by the Bundesbank and the Swiss National Bank to account for the success of their policies in controlling inflation was their control of money supply growth in their respective countries. All of this was in marked contrast to the part that money played in the intellectual approach of the UK Treasury, Bank of England and distinguished members of the then highly influential Cambridge University economics department.
The author deserves credit for recognising some of this, but then concludes with two observations: first, that the assumptions according to which monetarism should work were incorrect; and, second, that the cost of implementing the policy in increased unemployment was unacceptably high.
Lankester is certainly right to point out that the optimism of some of the early monetarists, myself included, was unfounded. Over the short term there was no systematic relationship between money supply growth and prices – the time lag was two years or longer. There were also differences of outcome when using different measures of the money supply. In addition, the regulatory environment in which monetary policy was conducted was constantly changing. Innovations such as the introduction of Competition and Credit Control, regulation by the “corset” and the abolition of exchange controls made time series analysis difficult. However, the demand for money (the inverse of the velocity of circulation) has turned out to be stable over the longer term, such that central banks are able to control a measure of broad money which will affect prices after a time lag of around two years. The best advice for a central bank is to aim for a steady growth of the money stock which is in line with the trend growth of money income.
The additional objection the author has to monetarism is its enormous cost in human suffering: namely, over a million jobs lost.
When she became Prime Minister in 1979, Mrs Thatcher made a point of honouring the pay settlements carried over from the “Winter of Discontent”, as well as the public sector pay awards recommended by the Clegg Commission, which the previous Government had instituted. Both were factors which inevitably led to some increase in unemployment, as was the new policy of switching revenues from high rates of income tax to a higher rate of VAT. However, over the next six years, unemployment in the UK continued to rise: from 6% to 12% of the labour force.
What is equally remarkable is that between 1980 and 1985 unemployment increased on average in all European Economic Community (EEC) countries — and by a large amount, from 5.8% to 11.2%. In Belgium, Italy and the Netherlands it rose above 12%. Even in Germany the unemployment rate more than doubled from 3.4% to 8.4%. Among the causes of this increase were the quadrupling of the oil price following the Iranian revolution and “sticky” real wages, because trade union bargaining power was strong, especially in public sector industries. But research has shown that fiscal tightening was not the cause. In other words, even without Mrs Thatcher’s policy revolution, executed by her Chancellor of the Exchequer Geoffrey Howe, unemployment would have risen significantly, if not doubled.
Lankester also acknowledges that research by Stephen Nickell and Jan van Ours had estimated that the “natural” rate of unemployment over these years for the UK — that is, the level at which the rate of inflation would remain stable, whether it was 0%, 2% or 10% — had risen from 3.8% (1969-73) to 7.5% (1974-89) and then to 9.5% (1981-86).
He concludes by recognising that there are positives from the growth of monetarism: that money does matter, in both analysis and policy, although we are not told how exactly that is so; that there is a “natural” rate of unemployment and so no sustainable trade-off between unemployment and inflation; that monetary policy is to be preferred to fiscal policy in the management of aggregate demand; and that unacceptable levels of unemployment must be tackled through micro-economic not macro-economic policies.
Should Mrs Thatcher have introduced an incomes policy which might have restrained wage increases over these years? Incomes policy had been introduced on numerous occasions since the early 1960s. The evidence from all incomes policies — whether introduced by Labour or Conservative governments and regardless of whether they were voluntary or statutory — is that they had no lasting impact on inflation. While they did initially lead to some wage restraint, this was subsequently undone, usually accompanied by strikes and industrial unrest.
Could the “British disease” of high inflation and high unemployment have been remedied without shock treatment? In principle, of course, it could — but in practice I very much doubt it. The most difficult challenge for the Thatcher Government in 1979-81 was to confront the economic malaise and the entrenched expectations of future inflation by trade unions, companies and the general public. This required a belief that the Government had a clear policy, that it would stick to the policy even though unemployment was rising, and that it would not change course. This it did. Its anti-inflationary policy was strengthened later by trade union reforms which reduced their power to disrupt the economy.
The irony of all this is that the 1981 Budget, which was roundly condemned by 364 economists because it put up taxes in order to reduce public borrowing (which was already greater than planned), actually marked the moment from which the UK economy recovered. It was a recovery that endured for the rest of the decade.

Brian Griffiths (Lord Griffiths of Fforestfach) is a Senior Research Fellow at the Centre for Enterprise, Markets and Ethics (CEME) and Founding Chair of CEME (serving as Chair until 2023). Among other things he served at No. 10 Downing Street as head of the Prime Minister’s Policy Unit from 1985 to 1990 and Chair of the Centre for Policy Studies (CPS) from 1991 to 2001.
Photograph at top: Valentin Poleac; reproduced from Wikimedia Commons in accordance with a Creative Commons Attribution-Share Alike 4.0 International licence.
When someone like me from an older generation is confronted with a new piece of technology, inevitably we must turn to a younger person for help. What is described as ‘user friendly’ is usually so only to those who are already familiar with the ways of the machines. ‘Well, they have grown up with the new technology’ we say, by way of excuse. And that is true, but might it also be true that they have grown up, not only with, but also in competition with, the new technology? Consider the experience of infants in recent decades. They have learned from experience that their cries get the attention of parents, but at the same time they have found themselves in competition with mobile phones whose ring tones are set to call attention to themselves even against the background of considerable ambient noise, just like babies’ cries. And they have seen these gadgets lifted up to mothers’ cheeks, and those mothers looking attentively (lovingly?) at screens, just as the infants desired to be so regarded. Even in the most intimate moments of mothers’ quality time with their children, that third other is always present, and always likely to interrupt with its incessant demand for attention.
I am not trying to lay blame, to accuse mothers of harming their children (though here the notorious line from Philip Larkin’s poem might be quoted) but am simply asking what happens to people who learn how to be human, how to love and relate, in this context. What happens to children formed and raised in a milieu in which they must compete for attention, not with other children, but with mysterious talking and crying machines? What do they learn about priorities in relationships, about securing their own identity and interests and desires in this complex world? How is interpersonal communication fostered or frustrated when it is so structured by the mediating technology? This is the kind of question that arises when we consider AI in the context of common goods.
The consequences of the denial of face-to-face encounter of pupils with teachers and children with their peers, required by lockdown in response to Covid-19, are becoming evident. Teachers now observe the effects of this interruption to the normal processes of socialisation. Children lack the ordinary skills of social interaction that they would formerly have been expected to bring to their school experience. But might it be the case that our reliance on gadgets for communication and socialising is also likely to have a negative impact on our culture, in some way draining it of the shared capabilities, skills and knowledge that make a decent social existence possible? This question can be sharpened specifically in relation to Artificial Intelligence and its increased usage in various domains of social life. Is our common good at risk from AI and its applications? To deal with this question, we need to specify what is involved in our common goods, and how AI might jeopardise them.
Distinction of Common Goods
We can consider two cases of common goods, practical and perfective. The practical sense is that wherever people cooperate, they have a good in common, a common good. That good in common might be a private good (school places for our children), a club good (networks for alumni of our school), a collective good (any school’s ambition for its students), or a public good (high levels of educational attainment conditioning political discourse and respect for the rule of law). Perhaps the less obvious but more important way in which cooperation is for a common good is the perfective sense of good.
Again, taking schooling as an example, we can see how education as accomplishment of persons and communities, enables people to be more and to realise to a greater extent their human potential. What fulfils people is for their good, enabling them to flourish. Hence a perfective sense of the good is relevant, that might not be at the forefront of our thinking when we collaborate in some project. Then we focus on the task in hand, but our performance also shapes us and our relationships.
When considering the relationship between AI and common goods it is understandable that people would spontaneously begin with common goods in the practical sense of the objectives they hope to achieve by relying on AI. There is the project of reducing drudgery and repetitiveness in work, so that machines can do what humans have had to do. There are projects of increasing effectiveness and efficiency as more accurate analyses and diagnoses are made possible, factoring out the fallibility of human processes. There are ambitions of increasing fairness when the processing of vast quantities of paper such as application forms, whether for jobs, or for mortgages, or for credit, or university places, can be done without risk of human tiredness or boredom or prejudice distorting the process. The list goes on. Many worthwhile objectives can be pursued with the use of AI bringing accuracy, reliability, efficiency, and fairness to the undertaking.
But what about the perfective common goods at stake? What impact is the use of AI having or likely to have on the development of human persons, and on the quality of the relationships between persons who interact with one another mediated by the relevant technology? What is it doing or likely to do to community, to the quality of the cooperation itself that is in turn capable of being an instance of flourishing, a perfective realization of human potential? In various areas in which AI is currently being deployed we find questions being raised that touch on these perfective goods in common. These are still questions, but sufficiently concerning as to suggest that we cannot be indifferent to the possible answers.
One area of concern is that signalled by the potential of large language models that are so sophisticated they can produce very plausible and convincing text in several genres. ChatGPT, developed by OpenAI, fascinates with its ability to engage in a conversation with the user, answering questions and producing convincing answers. This can be very useful, but what is its impact on our understanding of what is going on in a conversation, or in written communication? If I can no longer assume that there is another human being collaborating with me in such interaction, does that affect how I participate in communication when other persons are involved? If language is no longer exclusively a medium between people, do I then hear words differently when I’m aware they could be generated by a machine and not spoken by a person? If the voice I hear on the phone might not be that of a person, does that reinforce a tendency to treat the speaker in an instrumental way, whether a machine or not?
The absence of persons from relevant decision making when it is aided by AI is another concern. The superiority of AI-aided medical diagnoses (because they are standardised on the basis of large data inputs) over those made by physicians is well established. But patients are concerned about the implications for treatment when it is not another person, a physician with compassion as well as competence, making the decision. Similarly, when decisions about the granting of credit, or mortgages for house-buying, or jobs, are made by machines benefiting from analyses of large databases, the people whose applications are rejected can be upset that decisions with life-changing consequences for them are taken by a machine and not another person. Even international human rights adjudication can now be facilitated by automated management of documentation. One might argue that the benefits of fairer and more reliable decisions outweigh the distress occasioned for some. But that is not the issue here. The issue is what we are doing to our common life, and to the willingness of people to collaborate, and comply, and accept the burdens along with the benefits of social cooperation, when machines and not human partners seem to be in control.
Among the burdens to be accepted in social life is losers’ consent, the willingness to accept unfavourable outcomes of democratic decision making, a fundamental precondition for peaceful democracy. Is it also jeopardised by the undermining of social bonds occasioned by the replacement of human decision makers with AI-powered machines? Formation for human relationships and its reinforcement through social interaction is a perfective common good that is also a public good. Human capacities for bonding are formed and strengthened through daily encounters. Now we must face the possibility that those capacities are not reinforced but are instead jeopardised when our daily social encounters are increasingly with machines, and not with people. Have some of the infants who once competed with iPhones for a touch of mother’s cheek become adults who prefer to relate online?

Dr Patrick Riordan, SJ, an Irish Jesuit, is Senior Fellow for Political Philosophy and Catholic Social Thought at Campion Hall, University of Oxford. Previously he taught political philosophy at Heythrop College, University of London. His 2017 book, Recovering Common Goods (Veritas, Dublin) was awarded the ‘Economy and Society’ prize by the Centesimus Annus Pro Pontifice Foundation in 2021. His most recent books are Human Dignity and Liberal Politics: Catholic Possibilities for the Common Good (Georgetown UP, 2023) and Connecting Ecologies: Integrating Responses to the Global Challenge (edited with Gavin Flood [Routledge, 2024]).
Are drugs like semaglutide a quick fix, or might they be opportunities to practise virtue?
Obesity is thought to affect over 800 million adults worldwide and, according to the World Health Organisation, its prevalence has tripled since 1975. Indeed, estimates are that half of the world’s population will be overweight or obese by 2035, and very few currently have access to long-term treatment to address obesity or the conditions that accompany it. However, the development of several drugs that deliver significant weight loss has the potential to revolutionise treatment. Semaglutide, for example, better known by its brand name, Wegovy, brings about a reduction in weight of up to 15 per cent in recipients. Given the clinical advantages, not least in the treatment of obesity-related conditions such as diabetes or kidney disease, it has been approved for use within the National Health Service, where, in spite of soaring demand, it is prioritised for use by high-risk patients who need to lose weight prior to receiving surgery for cancer or organ transplants. Owing to the potential success of such drugs, pharmaceutical companies are keen to find a share of a market that some recent reports estimate will be worth between $100 billion and $200 billion by 2030.
Some might argue that since, in most cases, obesity is caused by poor diet and lack of exercise, it is a consequence of a failure of self-restraint on the part of the individual. It therefore constitutes a problem of willpower and should be treated as such. However, it can plausibly be argued that the emergence of drugs such as semaglutide, far from being a ‘quick fix’ for those who have failed to take responsibility for their own well-being, in fact represents an opportunity to practise virtues such as temperance.
Virtue Theory and the Question of Character
To adopt the language of the virtue ethics tradition, obesity can be seen as a failure of the virtue of temperance. As a virtue, temperance is recognised by both the ancient Greek philosopher Aristotle and the mediaeval theologian and philosopher St Thomas Aquinas, with Aristotle characteristically identifying it as an ‘excellence’ of character that lies between two vices: the deficiency of insensibility and the excess of self-indulgence. For neither Aquinas nor Aristotle is temperance a virtue that relates purely to the consumption of food and drink. Like the other virtues, it rests on the capacity to correctly apprehend one’s situation and respond appropriately. In Aquinas’ terms, this would mean being informed by ‘right reason’ and having a grasp of the truth. As such, temperance – sometimes better understood by the term ‘moderation’ – is a trait of character that pertains to various areas of life. For instance, it might be applied to the emotions, with the suggestion that someone should temper his anger (which of course is not to say that he shouldn’t ever be angry, but only that in the given situation, his anger is excessive). Temperance, then, helps to produce order and balance – and in connection with the body, this means health. In failing to grasp the truth of his situation, with regard to the order of goods (such as physical health, spiritual wellbeing, food and pleasure) or their respective value, the subject falls into self-indulgence. He fails to control or moderate his natural desires – for food or pleasure, in this case – and his well-being is sacrificed to transient goods. This outlook also reflects the wider teaching of Scripture on avoiding excess, developing character and personal responsibility. From such a perspective, then, one might argue that obesity is indeed a moral problem, or, rather, a problem of character, and must be addressed accordingly, with guidance, education and self-discipline.
Drugs as an Opportunity for Responsibility
This might very well be true in many cases, but it is not clear that the existence of weight-loss drugs does in fact undercut the exercise and training of virtue. Might it rather be the case that such treatments represent an opportunity to exercise moderation in a way that the subject has lately found impossible, his situation having become chronic and his attitude having degenerated into hopelessness? By way of comparison, one might say that smoking can be overcome by willpower alone – and for some people it can. For those who are heavily addicted to nicotine, however, and have been smokers for some thirty years, perhaps this is to expect too much. Nicotine patches, nicotine gum and e-cigarettes are, for some, a necessary aid to enable them to overcome their habit and, hopefully, to give up smoking for good, the idea being that they eventually rely on their own willpower. Obesity resulting from lack of exercise and excess calorie consumption is arguably different with regard to the question of physical addiction – and the cost of weight-loss drugs is far higher than that of e-cigarettes – but a similar principle can still be said to apply: at some point, the patient must rely on strength of will. Indeed, the nature of such treatments suggests as much. They are not to be taken forever; rather, they reduce weight to a certain point, after which it is for the individual to take responsibility. One of the major benefits reported by those researching the effects of the drug orforglipron (an oral GLP-1 medication) was that once they had lost a certain amount of weight, patients changed the way they thought about food and found that they were no longer constantly feeling hunger or thinking about it. What is this but an opportunity to begin to exercise temperance in a manner that had become impossible?
While there will remain questions about the desirability, costs and effects of an obesity market, it would appear that such a market is not of itself necessarily inimical to the exercise of the self-restraint that is so often central to maintaining health. Based on the indications of what certain treatments can achieve, it might well be the case that the development of weight-loss drugs provides some individuals with the means not only of avoiding some of the worst effects of obesity on their health, but, with judicious use, to regain the responsibility and personal agency which had become difficult for them.

Neil Jordan is Senior Editor at the Centre for Enterprise, Markets and Ethics. For more information about Neil please click here.
Adam Smith’s virtue ethics is of enduring relevance within contemporary business and societal discourse, including in the realm of e-commerce. Despite the widespread recognition of Smith’s contributions to economic theory and business ethics, there has been a gap in the exploration of Smith’s virtue ethics in the context of modern societal and technological advancements, such as digitalization. Smith’s virtue theory, particularly his emphasis on prudence, offers valuable insights for understanding and evaluating ethical behaviors in e-commerce as well. Prudence, as articulated by Smith, serves as a foundational virtue that directs self-interested passions toward the preservation of wealth, health, rank, and reputation. In the commercial society envisioned by Smith, prudence generates attitudes like industry, frugality, parsimony, and thrift, essential for capital accumulation through saving. Alongside his contributions to economic theory, contemporary scholarship positions Smith as a virtue ethicist, highlighting his focus on virtues of character rather than prescriptive moral rules. In this sense, Smith portrays praiseworthy characters and their qualities, both in The Theory of Moral Sentiments and The Wealth of Nations. Prudence stands over and above the other virtues in Smith’s moral framework because it plays a pivotal role in regulating self-interest, especially in economic activities. While justice is crucial in Smith’s philosophy, he sees justice as a negative virtue primarily concerned with avoiding harm, whereas prudence is the virtue guiding decision-making and ethical reasoning. Smith’s concept of prudence, in contrast to justice, fosters economic activity and societal flourishing and, as a moral and economic virtue, regulates self-interest within the framework of a just society. Smith’s virtue of prudence, as expounded in his works, has garnered significant attention from scholars in recent years.
When we apply Smith’s virtue ethics to the contemporary context of e-commerce, we can make the Scottish Enlightenment thinker’s notion of prudence fruitful for guiding ethical behaviors among buyers and sellers in digital transactions. E-commerce, with its unique features and global reach, creates new forms of societal interaction that can benefit from Smith’s ethical principles. Just as Smith updated classical virtue ethics in the context of his commercial society, we can do the same with Smith’s notions in contemporary e-commercial society. Smith’s approach to prudence diverges from classical Christian ethics, where it is the central moral virtue regulating all aspects of life. Instead, Smith focuses on its role in regulating selfish passions and bodily desires within the bounds of health and fortune. While classical ethics emphasizes prudence as the charioteer of all virtues, guiding actions towards a proper end, Smith confines its scope to sentiments necessary for one’s good standing in a commercial society.
In portraying the prudent person, Smith emphasizes traits such as security-seeking, genuineness, and moderation in speech and social interactions. The prudent individual avoids unnecessary risks, values truthfulness, and maintains decorum in society. This depiction aligns with Smith’s belief in the importance of reputation and propriety in fostering social cohesion. Moreover, Smith discusses the impact of prudence on business activities, advocating for steady industry, frugality, and foresight in decision-making. Prudence, for Smith, involves sacrificing present desires for future interests and entails a careful balance between present enjoyment and long-term security. Smith’s conception of prudence extends into his economic theory as well. In The Wealth of Nations, he emphasizes the importance of saving, investment, and frugality in fostering economic growth and social progress. Prudent management of capital, Smith argues, leads to increased productivity, employment, and specialization, benefiting society. Smith’s advocacy for prudence in both moral and economic spheres reflects his belief in the interconnectedness of individual and societal well-being. While prudence serves the self-interest of individuals, it also contributes to the common good by fostering economic prosperity and social harmony. Smith’s theory was written for different times and circumstances. We thus need to acknowledge his limitations in addressing modern challenges. Nevertheless, we can judiciously apply his ideas to understand the dynamics of e-commerce.
The landscape of e-commerce is evolving, both in its technical capabilities and societal impacts. In business research, technical aspects are widely explored, whereas ethical considerations tend to be underrepresented. Smith can supply some conceptual tools to help navigate the uncharted ethical waters of e-commerce.
We propose two propositions that link Smith’s detailed traits for prudent commercial activity with e-commerce. We also introduce two propositions elucidating how prudent e-commerce influences societal well-being. Smith breaks down the virtue of prudence into the following traits of the prudent person: balance, steadiness, sacrifice, living on current income, self-command. Applying these traits to e-commercial activity, we discover that e-commerce platforms can either foster prudence or reinforce vices, depending on users’ pre-existing moral dispositions.
The two propositions (A1 and A2) thus state that e-commerce can cultivate prudence in users who are already so disposed, and can reinforce vice in those who are not.
Security, genuineness, moderation in speech, friendship, and observance of decency are explored as traits that positively impact societal well-being when cultivated by e-commerce users. Conversely, neglecting these traits can hinder societal flourishing. Two additional propositions (B1 and B2) summarize these insights, affirming that cultivating Smithian traits in e-commerce supports societal flourishing, while neglecting them detracts from it.
Practically speaking, the study of Smith’s concept of prudence in e-commerce would advocate the integration of virtue ethics education into business curricula and emphasize the importance of ethical considerations beyond those concerned with legal compliance.

Professor Msgr. Martin Schlag is the Alan W. Moss Chair in Catholic Social Thought and the Director of the John A. Ryan Institute for Catholic Social Thought at the University of St. Thomas
This paper is part of a series of essays that seek to explore the current and prospective impact of AI on business. A PDF version can be accessed here.
The advent of Artificial Intelligence (AI) upon the business world raises a myriad of challenges and opportunities for management theorists. The first of these is a matter of considered choice: Which are the most suitable theoretical lenses that one might apply in understanding the novel phenomena that AI represents? Might one start with Taylorism and the scientific management approach, or perhaps rather turn to Henri Fayol and his pioneering work on administrative management theory? Still, it may be wise to go back and consider Weber’s work on hierarchy and the resulting Bureaucracy Theory, or perhaps Elton Mayo’s advancements in Human Relations Theory and the creation of ‘humanistic organisations’.[1] Modern managerial thought (post-WW2) brought us the pioneering work of Joan Woodward and Contingency Theory which cannot be ignored. In the realm of psychology and the broader expansion of behavioural science and personnel management, we have Maslow’s influential Hierarchy of Needs and Douglas McGregor’s Theory X and Theory Y. Far from being exhaustive, this list illustrates the plethora of avenues that are available to the inquisitive researcher. This paper, however, elects what many might consider a less obvious choice – that is, an analysis of AI through Peter Drucker’s writings and more specifically, through his concept of the ‘Knowledge Worker’.
It is important to note from the outset that when referring to AI here we are referring specifically to Generative AI, which represents a branch of the wider field that is artificial intelligence, the main distinction being that generative AI has the capacity to learn and produce novel output autonomously.
In an article in the California Management Review during the winter of 1999, Drucker made a compelling statement:
The most important, and indeed the truly unique, contribution of management in the 20th century was the fifty-fold increase in the productivity of the manual worker in manufacturing. The most important contribution management needs to make in the 21st century is similarly to increase the productivity of knowledge work and knowledge workers. The most valuable asset of a 20th-century company was its production equipment. The most valuable asset of a 21st-century institution (whether business or non-business) will be its knowledge workers and their productivity.[2]
Throughout his lifetime Peter Drucker proved to be a prolific writer, having published some 41 books and countless articles, essays and lectures. The totality of his work amounts to over ten million words which, as one scholar put it, is the equivalent of 12 Bibles or 11 Complete Works of Shakespeare.[3] No wonder, then, that ‘Father of Modern Management’ is a fitting title for him.
Drucker was born in Vienna in 1909 into a Lutheran protestant family.[4] His father was a lawyer and civil servant, and his mother studied medicine – both parents were considered intellectuals at the time. His house often served as a place of congregation for scientists, academics, and government officials, who would meet and discuss new ideas.[5] Yet his formative years were spent at Hamburg University, where he read international law and became heavily influenced by the works of Kierkegaard, Dostoevsky, Aquinas, Luther, Calvin and Weber. Here he developed a sense of Christian responsibility in tackling life’s challenges and made it his life’s mission to discover a society ‘…in which its citizens could live in freedom and with a purpose’.[6] Interestingly, he was not swayed by Marxism because, in his view, the will of the collective came at the expense of the freedom and purpose of the individual: ‘there was no capacity for individual purpose in a collective society’, Drucker remarked.[7]
Amongst scholars and business executives he is perhaps best known for his work on decentralisation and a management approach that emphasises the value of employees and their contributions in achieving the shared goals of the organisation. His most celebrated theory is Management by Objectives (MBO Theory), initially presented in his 1954 book, The Practice of Management,[8] which was later refined in his 1974 magnum opus, Management: Tasks, Responsibilities, Practices.[9] Yet, Peter Drucker’s most distinguished and lasting contribution came in bringing about a novel way of understanding the field of management as an integrative whole. Previous writers such as Rathenau, Fayol, and Urwick drew connections between the varied functions of management, but it took Drucker to tie in all the strings and establish Management as a standalone discipline of study and practice.[10]
Drucker also fundamentally changed how employees were to be viewed by the company. He was the first to argue that they represent assets, not liabilities, and that within the modern economy, employee value and development is crucial to the well-being of the organisation.[11] Indeed, it is in the company’s best interest to invest and support career learning and the continual growth of its employees.
The Effectiveness of Knowledge Workers
Drucker first introduced the concept of a ‘knowledge worker’ in his 1967 book, The Effective Executive, where he defined it as ‘…the man who puts to work what he has between his ears rather than the brawn of his muscles or the skill of his hands’.[12]
He understood and foresaw the seismic shift that the well-developed economies of the West would experience in transitioning from a largely manual workforce to a predominantly knowledge-driven economy. The kickstart to all of this was, of course, the revolution in Information Technology (IT) from the 1950s onwards. Drucker explains that: ‘Today, however, the large knowledge organisation is the central reality. Modern society is a society of large organised institutions. In every one of them, including the armed forces, the centre of gravity has shifted to the knowledge worker’.[13]
What, then, makes the knowledge worker valuable? It is, put rather crudely, his or her ability to make a contribution to the firm. Unlike manual workers of the past, the knowledge worker benefits from a degree of heightened indispensability since the driving source of their effectiveness lies not in machinery or even in skill, but in the knowledge and judgement found between their ears. The concept of effectiveness becomes a key theme in Drucker’s writing on the knowledge worker: ‘…[those] schooled to use knowledge, theory and concept rather than physical force or manual skill work in an organisation and are effective only in so far as they can make a contribution to the organisation’.[14]
This raises questions surrounding the measurability of the effectiveness of knowledge workers. It is important and interesting to note that throughout the 1950s the term ‘productivity’ was not yet in widespread use, hence Drucker’s reliance on ‘effectiveness’ as an early substitute. The traditional methods of measurement applied to manual work would no longer apply to knowledge work. The ‘yardsticks’ used for manual work such as industrial quality control or total output generation are ill-fitted to the knowledge worker. The knowledge worker also cannot be monitored ‘closely or in detail’; such an effort is futile for the organisation.[15] Instead, all efforts must be concentrated on the effectiveness of the knowledge worker. Drucker here usefully points out that unlike the manual worker, the knowledge worker produces immaterial things: knowledge, ideas, and concepts that remain unquantifiable in a physical sense. Instead, the ultimate task of the knowledge worker is to convert these abstract intangibles into tangible effectiveness for the organisation, in stark contrast to the manual worker, who needn’t undergo this step of conversion, their contribution being already justified by the goods produced. Therefore, ‘Knowledge work is not defined by quantity. Neither is knowledge work defined by its costs. Knowledge work is defined by results’.[16]
If the knowledge worker ‘thinks’ in his or her contribution to the firm and this ‘thinking’ yields favourable results for the organisation, then surely the principal aim of the knowledge worker is to develop and grow their thought processes. In this sense they are all executives because they possess the capacity as well as the permission (given by their superiors or the company in general), to enact impactful decisions that are a direct result of their thinking.[17] Surely, then, the following challenge is one of discernment in differentiating the right decisions amidst the wrong ones. Within such a context, how might knowledge worker effectiveness be gained?
Drucker argues that it has nothing to do with personality traits: ‘Among effective executives I have known and worked with, there are extroverts and aloof, [others] even morbidly shy. Some are eccentrics, others painfully correct conformists. Some are worriers, some are relaxed. […] Some are men of great charm and warmth, some have no more personality than a frozen mackerel’.[18]
If personality traits have little to no bearing on effectiveness, or at least there is no evidence to prove the contrary, what does have an impact on effectiveness? Drucker argues that effectiveness ‘…is a habit, that is a complex of processes’. There is no silver bullet when it comes to seeking knowledge worker effectiveness; rather, it represents a collection of practices and habits that collectively amount to favourable results for the employee as well as the organisation. The beauty of it is that practices and habits can be learned, meaning that any knowledge worker has the capacity to become effective.
However, as Drucker points out, ‘practices are simple, deceptively so; […] practices are always exceedingly hard to do well’.[19] There are five key practice areas that executives and knowledge workers need to master should they wish to become ‘effective’. The first is time – an effective knowledge worker knows what their time is mostly spent on and controls the allocated time that they have at work. The second is a focus on outward contribution – keeping one’s ‘eye on the ball’ so to speak. The effective knowledge worker always maintains an awareness of the overarching goal which helps direct the smaller practices and offers mental guideposts in achieving the desired outcomes. The third area is a sober awareness of one’s strengths and weaknesses. Effective workers build upon their strengths – be those inherent, personal strengths or the strengths conferred on them by their position within the organisation. The fourth area is the ability to distinguish what approaches are likely to yield the most impactful results and focus primarily on them. The fifth and final area is a fundamental understanding of the decision-making process and how to navigate it to make effective decisions. They are aware of operating within a system where too many, sometimes hasty, decisions, can lead to poor outcomes. Only a carefully thought-through strategy will result in favourable outcomes in the long-run.[20]
Drucker and Technology: The Impact of AI Upon the Knowledge Worker
What would the likely impact of AI be on the knowledge worker within such a context? The aforementioned five areas of practice offer multiple viewpoints for one to postulate how AI might augment (or replace), the daily activities of the knowledge worker.
When it comes to matters of automation and the arrival of new technologies, Peter Drucker warns against a position of extremes: technology is seldom a total panacea or an absolute disaster.[21] Indeed, in 1973 he pointed out that, ‘The technology impacts which the experts predict almost never occur’.[22] Drucker would have experienced the early hype surrounding digitalisation and the purported gifts of computing in the 50s and 60s. In some of his earlier writings he branded the computer a ‘mechanical moron’ – one that is very able at storing and processing precise data yet omits all that represents unquantifiable data, the problem of course being that it is often exactly this ‘unquantifiable data’ that becomes essential to the success of the organisation in the long-run.[23] It is often not the trends themselves that dictate a company’s future but rather changes in trends and the unique events which, at least in the early stages, are yet to be quantifiable. They are too nascent to become ‘facts’ and by the time they do become facts it is often too late. Drucker points out that the logical ability of computers represents both their biggest strength and their biggest weakness. One advantage that humans hold over the machine is their enhanced sense of perception and intuition. However, there is a serious risk that executives (i.e. all knowledge workers), might lose this sense of perception if they rely too heavily on quantifiable, computable data at the expense of unquantifiable, qualitative data.[24] This is a behavioural challenge that needs emphasising.
AI: Data versus Information
The key theme that emerges in Drucker’s writing is the notion of data versus information. Its relevance to analysing the potential consequences of AI lies within the wider scope of using software to manage data effectively. The crux of the problem is as follows: data, in its raw form, is inconsequential until it is interpreted and acted upon. Too many knowledge workers are ‘computer literate’ but not ‘information literate’: they know how to access data but aren’t adept at using it.[25]
For over half a century, Drucker argues, there has been an overwhelming focus on the ‘T’ in IT and the development of technology that stores, processes, transmits and receives data, but not enough effort has been placed on the ‘I’: What does this data mean to me? What does it mean to my business? What purpose does it serve? These are all fundamental questions that haven’t been given the prominence they deserve.[26] The main challenge is to ‘…convert data into usable information that is actually being used’.[27] This has ultimately resulted in decades of computer technology serving as a producer of data and not a producer of information. Drucker, quite rightly, points out that computer-generated information has had practically no impact on a business deciding whether or not to build a new office, or a county council deciding to build a new hospital, a prison, a school and so on.[28] The computer has had minimal impact on high-level decisions in business.
Yet this is not just a failure of technology or even of some form of stubbornness amongst knowledge workers and executives; it is principally a failure to provide the relevant information that is needed to perform and/or change the direction of any given task.[29] This effort is personalised and applies to each individual worker or executive. The focus then shifts from data gathering to data interpretation, and to an astute discernment in organising and acting upon said data: the availability of data becomes secondary to its usability.
As information is the principal resource of knowledge workers, Drucker suggests three broad organisational methodologies. We will briefly detail each in turn and thereafter consider the potential implications of AI.
The first is called the Key Event method, which looks at one or more important events that play a major contributing role in the end performance of the knowledge worker.[30] This can be a single event or, as is often the case, a series of key events that may direct certain outcomes. The event(s) in this case act as a ‘hinge’ upon which performance is dependent. Any executive or knowledge worker stands to benefit substantially in his or her career if he or she is able to identify, interpret and act upon such events.
The second methodological concept is based on modern Probability Theory and its resulting Total Quality Management (TQM).[31] This approach looks at a variety of possible outcomes that are expected to fit within a given range (i.e. within the normal probability distribution), and singles out the outliers (those that do not meet the criteria). These exceptional events automatically move from being data (where no action is needed) to being information, which necessitates immediate action.[32] This approach is useful when overseeing something like a large manufacturing process but can also be applied to the provision of services, for instance, a client going bankrupt, a deal falling through, a project yielding unexpectedly poor results, etc.
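The logic of this second methodology can be sketched in a few lines of code: readings that fall within the expected range remain mere data, while out-of-range results are promoted to actionable information. This is a minimal illustration only; the function name, labels and range are assumptions, not anything from Drucker or the essay.

```python
# Sketch of the TQM idea: measurements inside the expected range stay data;
# only the exceptions become information demanding action.

def promote_to_information(measurements, lower, upper):
    """Return only the exceptional readings that fall outside the expected range."""
    return [(label, value) for label, value in measurements
            if not (lower <= value <= upper)]

# Hypothetical defect rates (%) from a manufacturing line; the expected
# range (0.5-2.0) is an illustrative assumption.
readings = [("Line A", 1.2), ("Line B", 1.4), ("Line C", 4.8), ("Line D", 0.9)]
exceptions = promote_to_information(readings, lower=0.5, upper=2.0)
print(exceptions)  # → [('Line C', 4.8)]
```

On this view the filter itself is trivial; the managerial skill lies in setting the range so that genuine exceptions, and only genuine exceptions, are surfaced.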
The third methodology for organising information is similar to the second and is based upon the Threshold Phenomenon and the field of perception psychology pioneered by the German physicist Gustav Fechner (1801–1887).[33] This holds that humans only perceive events as a phenomenon once they cross a certain ‘threshold’ – and the threshold itself varies from individual to individual. We experience physical pain, for example, only once the stimuli reach such an intensity that they are categorised as ‘pain’. Similarly, it is the intensity and/or frequency of certain data points that leads to their recognition as phenomena. Drucker argues that accurately identifying the phenomena can assist knowledge workers (or managers, executives) in the early prediction of trends. The threshold concept is highly useful in identifying which sequences of events are likely to become trends and require immediate attention.
Conclusions: striving toward AI as a generator of useful information
How might AI assist within this context? The methodologies of organising data are effectively attempts to filter and sieve the critical information from what is otherwise a plethora of largely useless noise. AI has a major role to play not merely in data monitoring and gathering but increasingly in extraction and accurate interpretation. Here lies the biggest challenge: which AI Large Language Model (LLM) will emerge as the most capable and useful to the knowledge executive?
The reality is that there are likely to be several dominant LLMs with differing traits and characteristics. It is becoming increasingly clear that a multimodal system will benefit from the advantage of being able to receive and work across differing types of data, including text, images, sound, and video. However, multimodality alone won’t suffice if the AI performs poorly at data interpretation and reasoning (e.g. hallucinations, general black box optimisation issues, and so on). We are currently in the nascent stages of a more in-depth, multi-layered reasoning approach, with companies such as Meta and OpenAI investing heavily in the ability of AI chatbots to reason, memorise, and comprehend more complex challenges. OpenAI’s chief operating officer Brad Lightcap said that in the near future, ‘We’re going to start to see AI that can take on more complex tasks in a more sophisticated way. […] I think we’re just starting to scratch the surface on the ability that these models have to reason. [Today’s AI systems] are really good at one-off small tasks, [but they are still] pretty narrow in their capabilities’.[34]
Again, for AI to have a significant impact on the decisions of knowledge workers, it needs to possess the capacity to provide a consistent supply of relevant, actionable information. We can already see the underpinnings of a technological infrastructure that may facilitate this: continued growth in the Internet of Things (IoT), the proliferation of AI hardware and artificial neural engines in a rising number of products and services, the consolidation of reliable datasets used to train LLMs, the fine-tuning of AI chatbots with specific characteristics and so on. This also creates a pool of moral and ethical challenges for executives: issues around data privacy, misinformation, bias, fraud, manipulation (e.g. impersonating people to promote products or services via ‘deepfakes’), the recurring problem of AI hallucinations and so on. All of these issues require careful consideration. However, at this stage the importance of AI’s primary function as a provider of useful information cannot be overstated. It may well represent the pivotal element in determining the success or failure of generative AI within business and beyond.
Andrei E. Rogobete is Associate Director at the Centre for Enterprise, Markets & Ethics. For more information about Andrei please click here.
This blog has also been published by the Catholic Social Thought website at St Mary’s University, where there are also many other useful resources on Christian social thought.
Despite the strong interest in property rights in Catholic social thought and teaching, their importance is rarely linked to the topic of the preservation of the natural environment. There is a clear prima facie case for doing so. It starts with what is often described as the ‘tragedy of the commons’.
Imagine, we have a forest, and nobody owns the forest: that is, it is a ‘common’. What will happen? People will come and harvest the trees for firewood, for sale or for industrial use, and they will not replace them. They will take as much as they can without restraint because, if one person or corporation does not harvest the timber, another will. And will anybody plant trees to replace the ones harvested? Of course not. If anybody plants trees, there is no chance they will be there in 20 years’ time for the person to harvest. We cannot expect all people to behave altruistically all the time and certainly should not organise our institutions assuming that they will do so.
If we do not have some way of controlling use, ascribing ownership and usage rights, and enforcing those rights, environmental resources will be exhausted.
If we have a private owner of the forest resources, that private owner will get the benefit of harvesting the trees in the indefinite future. The owner will want to make sure that there is sufficient replanting done to ensure that the forest is self-sustaining. The trees are a valuable resource. But the land is even more valuable if it carries on growing trees for harvesting year after year.
This does not just apply to private ownership: government or community ownership might work in some circumstances. Indeed, it might be necessary at times, though it would be highly inefficient if the government owned all our environmental resources.
At the same time, property rights have to be enforced. The World Wildlife Fund, for example, estimates that, in Peru, illegal logging is 80 per cent of total logging; it is 85 per cent of total logging in Myanmar; and nearly 65 per cent of total logging in the Democratic Republic of Congo. Illegal logging is the leading cause of degradation of the world’s forests.
We need both strong property rights and strong institutions to protect those rights.
This table is instructive. The top half has the top seven countries by their rank for the protection of property rights in 2020 and their reforestation rate from 1990 to 2020. The bottom half has seven of the bottom eight countries in the international property rights index (the exception being Yemen, which hardly has any trees at all!) and their reforestation rates, which are negative – in almost all cases, they have high levels of deforestation.
| Country | Property rights rank 2020 | Reforestation rate (%) 1990-2020 |
|---|---|---|
| Finland | 1 | 2.4 |
| Switzerland | 2 | 10.0 |
| Singapore | 3 | 6.7 |
| New Zealand | 4 | 5.6 |
| Japan | 5 | -0.1 |
| Australia | 6 | 0.1 |
| Netherlands | 7 | 7.3 |
| Country (Yemen, the bottom country, has not been included) | Property rights rank 2020 – places from bottom | Reforestation rate (%) 1990-2020 |
|---|---|---|
| Venezuela | 1 | -11.1 |
| Bangladesh | 2 | -1.9 |
| Nigeria | 3 | -18.5 |
| Madagascar | 4 | -9.2 |
| Zimbabwe | 5 | -7.3 |
| Nicaragua | 6 | -46.7 |
| Pakistan | 7 | -25.3 |
The second table just looks at those South and Central American countries for which there are data and ranks them by security of property rights and levels of reforestation (so a higher rank means higher levels of reforestation or lower levels of deforestation).
| Country | Property rights index 2007 (rank) | Reforestation rank (higher means higher reforestation or lower deforestation) | Example reforestation 1990-2020 (%) |
|---|---|---|---|
| Uruguay | 3 | 1 | 155% |
| Dominican Republic | 10 | 2 | |
| Chile | 1 | 3 | |
| Costa Rica | 2 | 4 | 4.0% |
| Peru | 9 | 5 | |
| Mexico | 7 | 6 | |
| Panama | 4 | 7 | |
| Colombia | 5 | 8 | |
| Honduras | 13 | 9 | -9% |
| Haiti | 19 | 10 | |
| Venezuela | 18 | 11 | |
| Bolivia | 17 | 12 | |
| Ecuador | 11 | 13 | -15% |
| Brazil | 6 | 14 | |
| El Salvador | 14 | 15 | |
| Argentina | 8 | 16 | |
| Guatemala | 12 | 17 | |
| Paraguay | 15 | 18 | |
| Nicaragua | 16 | 19 | -47% |

Pearson rank correlation coefficient: 0.6
There is a high correlation between the protection of property rights and reforestation levels. Even the exceptions are instructive. Although the Dominican Republic has a poor record in general for the protection of property rights, it has had a focused, government-led scheme to protect and promote forest growth. Government intervention can work in this field but, in general, it is the package of institutions (private property rights, well-functioning and uncorrupt courts and criminal justice systems, and an efficient state operation for where government intervention is needed) that is necessary.
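For readers curious how a rank correlation of this kind is computed, a minimal sketch follows. It uses the standard Spearman formula for correlating two rankings (a Pearson correlation applied to ranks), demonstrated on toy rank lists rather than the table’s actual data, so the numbers below are illustrative only.

```python
# Spearman's rank correlation: rho = 1 - 6*sum(d^2) / (n*(n^2 - 1)),
# where d is the difference between the two ranks for each country.

def spearman_rho(ranks_x, ranks_y):
    """Correlation between two rankings of the same n items (+1 to -1)."""
    n = len(ranks_x)
    d_squared = sum((x - y) ** 2 for x, y in zip(ranks_x, ranks_y))
    return 1 - 6 * d_squared / (n * (n * n - 1))

# Perfectly agreeing rankings give 1.0; perfectly reversed rankings give -1.0.
print(spearman_rho([1, 2, 3, 4, 5], [1, 2, 3, 4, 5]))  # → 1.0
print(spearman_rho([1, 2, 3, 4, 5], [5, 4, 3, 2, 1]))  # → -1.0
```

A coefficient of 0.6 on this scale indicates a strong positive association: countries ranked higher for property rights tend also to rank higher for reforestation.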
More generally, private property rights are not a panacea. There may be limited situations in which governments should legitimately intervene to protect natural resources that a private owner might destroy for commercial or other reasons. However, such particular interventions are far more likely to be effective if they take place in a situation in which there are effective legal institutions for the protection of property rights more generally.
This reasoning applies to all environmental resources – forests, the conservation of water, the conservation of fish and ensuring that farming is sustainable. Effective protection of property rights is a vital stepping stone to promoting good environmental outcomes. Iceland, for example, has transformed its fishing grounds by establishing private property rights in fisheries. More generally, good institutions, the rule of law and private property are the foundations of harmonious and prosperous societies.
Catholics often cite St. Thomas Aquinas’s justifications for private property. He argued that: ‘Private property encourages people to work harder because they are working for what they would own’. It is a short step from that to the related: ‘Private property encourages people to conserve environmental resources because they are looking after and conserving property, the fruits of which efforts they will own’.
If we wish to tackle deforestation (or the degradation of many other environmental resources), we need to examine a range of institutions related to property rights. As Pope John Paul II put it in Centesimus Annus: ‘Economic activity, especially the activity of a market economy, cannot be conducted in an institutional, juridical or political vacuum. On the contrary, it presupposes sure guarantees of individual freedom and private property, as well as a stable currency and efficient public services. Hence the principal task of the State is to guarantee this security, so that those who work and produce can enjoy the fruits of their labours and thus feel encouraged to work efficiently and honestly. The absence of stability, together with the corruption of public officials and the spread of improper sources of growing rich and of easy profits deriving from illegal or purely speculative activities, constitutes one of the chief obstacles to development and to the economic order.’
Philip Booth is professor of finance, public policy, and ethics and director of Catholic Mission at St. Mary’s University, Twickenham (the U.K.’s largest Catholic university). He has a B.A. in economics from the University of Durham and a Ph.D. from City University.
Reasoning Robots
It was announced recently that developers of artificial intelligence models at Meta are working on the next stage of ‘artificial general intelligence’, with a view to eliminating mistakes and moving closer to human-level cognition. This will allow chatbots and virtual assistants to complete sequences of related tasks and was described as the beginning of efforts to enable AI models to ‘reason’, or to uncover their ability to do so.
This represents a significant advance. One example given, of a digital personal assistant being able to organise a trip from an office in Paris to another in New York, would require the AI model to seek, store, retrieve and process data recognised as relevant to a task. It would have to integrate ‘given’ information (for instance, the destination and the planned time of arrival), with stored information (such as the traveller’s home address) and search for new information (flight durations, train timetables and perhaps the morning’s traffic reports, for example), which it would have to select as relevant to the task of organising the journey. Following processing of the data, it would then have to instigate further processes (such as making reservations or purchasing tickets with third parties).
Human Reasoning and Moral Problems
Remarkable as such a development is, the concept of ‘reasoning’ in play is limited. Compare this to the kind of reasoning of which human beings are capable, particularly in morally ‘difficult’ situations.
Consider the case of someone who, while shopping in a supermarket, catches sight of a toy that he knows his daughter would love for her birthday, but which, down on his luck, he is unable to pay for. The thought crosses his mind that he could simply make off with it. He might think about how this is to be done. Maybe he could run through the door with an armful of goods before the security personnel have time to intervene. This is not certain to succeed, particularly if there are staff outside who might be alerted. Even if he escapes, he is still likely to have been recorded by in-store security systems and having drawn attention to himself by his flight, he risks identification and subsequent arrest. Perhaps, then, he could just hide the toy in his coat, or even use a self-service checkout but fail to scan some of the items that he wishes to leave with. He might then wonder whether he should steal. On the one hand, having lost his job some weeks ago and having been unable to find work, he is very short of money and he has children to feed – and there is a birthday coming; on the other, should he be arrested, he risks punishment under the law, which will in all likelihood worsen his family’s situation. Perhaps he will then consider whether he has any right to act in the manner that he has been considering, reflecting on whether his level of poverty justifies what he had been intending to do. The supermarket chain makes millions in profits every year, while he is only trying to provide for his family and make his daughter happy – and with an item that the shop probably won’t even miss. Ultimately, a consideration of principle alone – a conviction about the general immorality of theft – leads to a resolve to return some of the food items to the shelf, put the toy in his basket and to pay for everything he has selected.
Recognising Reasons
In this case, the subject has undoubtedly engaged in reasoning of an interesting and complex variety. First, it should be noted, the man has not processed data, but considered reasons. Moreover, he has considered not only a number of reasons, but reasons of different kinds. He has moved from an initial motive based on a desire (to please his daughter), through considerations of possible means and outcomes, to issues of familial obligation, distributive justice and moral principle. He has even considered both prudential and moral questions in relation to the matter of whether he ‘should’ steal. Just as easily, he could also have reflected on questions of political obligation and his duty to obey the law.
There are two points of importance here. The first is that in spite of their differences – a concern about the consequences of arrest for one’s family being different in kind from a speculation about the material injustice of one’s situation – the agent was able to recognise the relevance of these considerations as reasons. The second is that of all of the reasons under consideration, his action was ultimately motivated by a single consideration of a moral nature.
It is clear from this example that reasoning (at least as it relates to questions about how to act) extends far beyond calculation or processing data in order to reach a defined end. The man did not follow a set process to reach a pre-determined goal. Indeed, part of his dilemma involved reflection on what the goal should properly be, such that there was even reasoning about the desirable outcome. (He might, for instance, have decided to complete his shopping and leave the toy on the shelf, in the hope that it might soon be reduced in price.) Moreover, owing to the ‘qualitative’ differences between the reasons that he considered, he did not simply ‘pile them up’ and reach a decision based on a form of ‘difference calculation’. Indeed, it is not obvious that there is any intelligible sense in which one could ascribe a calculable ‘value’ either to the man’s conviction that stealing was wrong, or to his desire to please his daughter.
The Requirements of Reasoning
These two points reveal some important characteristics of human reasoning about action. The ability to deliberate about what constitutes a (good) reason for acting and to decide which reasons ultimately matter requires consciousness. This is surely one of the differences between data and reasons. Unlike machines processing data, we are aware of our reasons. Moreover, they have meaning for us, usually in relation to a projected aim or in light of our values – and quite often both. (The fact that a shop has CCTV, for instance, becomes meaningful if one is considering or planning theft.) In order for the man to reach the decision he did, he had to be aware of various reasons and to admit the overwhelming salience of the moral conviction that ultimately held sway because it meant more than the others. (To say that it ‘meant more’ is not to say that each reason has a measure of ‘meaning’ and that the moral consideration ‘weighed’ more than the others. This would be to reintroduce the idea of ‘calculation’. Rather, the moral consideration was a reason of a different kind and had a particular importance. Indeed, it is often the case that moral convictions will limit the courses of action that we are prepared to consider, such that a different person in a similar situation would not even have entertained the possibility of stealing.)
Thus, we are conscious of reasons and adopt them as the ground of our actions, such that they are usually realised or expressed in what we do. The fact that we are conscious of reasons and acquiesce in them when we act is what makes them reasons rather than causes: they are our reasons and make our actions meaningful. This is what renders our actions actions, rather than mere ‘behaviour’. In addition, they make our actions ours, such that we are responsible for them. This is why AI models functioning on data, without consciousness, cannot be considered agents and are not themselves deemed to be morally responsible.
Reasoning and Processing
Following a process or executing a calculation are of course central to certain types of reasoning, but it is doubtful whether in themselves they should be described as reasoning. Properly speaking, this surely requires consciousness. Where a student successfully solves an equation using a prescribed formula, but without understanding what is being done, what the formula achieves, why the answer is right or what it means, would we say that he or she had ‘reasoned’? This is debatable, but the student is conscious, has a concept of what equations are and grasps the notion of number. The tasks of which the most advanced AI models are soon to be capable will be completed without any conscious awareness of the task to be fulfilled or the meaning of the data deployed. This is not to denigrate what artificial intelligence is able to achieve: the technology is advancing quickly and with impressive results. In the absence of consciousness, however, its remarkable capacities might better be referred to as processing or calculation rather than ‘reasoning’ at all – and without reasons, it remains a long way from true human-level intelligence.
Neil Jordan is Senior Editor at the Centre for Enterprise, Markets and Ethics. For more information about Neil please click here.
On Thursday, 23rd May 2024 the Centre for Enterprise, Markets & Ethics (CEME) hosted an online event on the topic of Artificial Intelligence: Challenges and Opportunities.
The event was chaired by Revd Dr Richard Turnbull and our speakers were:
On Saturday, February 10th, in Chinatown in San Francisco, a crowd of people attacked and burnt a driverless car operated by Waymo (Google’s self-driving car project). This represents an escalation of activism against autonomous vehicles led by a group calling itself the Safe Street Rebels, who generally seek to disable driverless cars by placing a traffic cone on the bonnet (or hood, for our American readers), which renders the vehicle immobile until a company employee can attend to it. The group’s website expresses a variety of grievances regarding the introduction of autonomous vehicles on public roads, among them being claims regarding increased congestion, safety, surveillance, lack of accessibility and a dearth of legal accountability. Taken together, they amount to a moral case against driverless cars, but each one of the arguments is potentially capable of being addressed by way of advances in technology, the introduction of new legislation or changes in the conduct of the companies that operate driverless cars. There are, however, other arguments to consider – ones that focus less on the practicalities, dangers and risks of AI-controlled vehicles as they currently stand, and more on the value of human capacities and skill.
Traffic, Legislation and Accessibility
The argument that driverless cars (particularly robotaxis) encourage the use of individual vehicles rather than other forms of mobility is salient, particularly in cities that already suffer from congestion and in which there are concerns about air quality (as well as pollution more generally), but this is really an argument about our transport decisions more broadly and is not specific to driverless cars. That is to say, it is an argument against introducing more private transport, rather than specifically autonomous private transport. Similarly, the assertion that autonomous vehicles are exempt from citations for certain motoring offences (a result of the assumption in existing legislation that vehicles have drivers) could easily be addressed via the introduction of laws that treat as drivers either the safety operators present in certain cars or the companies that operate them, who can therefore be held culpable for traffic violations. (Safety operators are already liable to prosecution, as demonstrated by the charges brought against one such driver after a fatal accident in Arizona.) It is contended that self-driving cars are not accessible for those with disabilities. There do indeed seem to be problems with their ability to pull over to the kerb to pick up passengers. Operators are under pressure to rectify this issue (not least because their tendency to stop in traffic lanes creates obstructions) but on the matter of accessibility for those who use wheelchairs, Google suggests that accessible cars can be summoned, with safety operators to assist passengers. Not all driverless vehicles are accessible, but it could be argued that this issue is being addressed.
Surveillance and Sales
Two fundamental concerns are connected with surveillance and safety. With regard to the former, driverless cars do collect various types of data, whether connected with location and the journey itself, or information about passengers. The sensors on the car will also record information about journeys, including objects encountered in the course of travel, such as other cars, humans or animals, such data being necessary for improving safety and avoiding collisions. The use of this data is an area of legitimate concern. If this information is passed to authorities, people can rightly be worried about invasions of privacy and the growth of the surveillance state. Moreover, as Matthew Crawford argues in Why We Drive: On Freedom, Risk and Taking Back Control, citing in this regard the importance of Shoshana Zuboff’s work on surveillance capitalism, it is also quite possible that the technology firms developing driverless cars will be able to use passenger data in order to build a user profile and employ it for the purposes of ‘managing’ more of our activity and selling products and services to users – data being a valuable commodity in contemporary capitalism. This is of course a risk associated with advancing technology but need not be an argument against autonomous vehicles themselves. After all, there are concerns about the extent to which smart TVs and even Alexa devices record information about users. Were technology companies to behave differently, or were there to be rigorous data protection laws in place – laws which were actually enforced with penalties for companies found in breach of them – perhaps the capture, storage and use of data needn’t be of such concern.
Safety
Quite reasonably, safety is the fundamental reason for opposing the introduction of self-driving cars. There have been numerous incidents involving such vehicles. As a result of one fatal accident, Uber ceased testing autonomous vehicles in Arizona, while in the wake of a serious collision, Cruise, a subsidiary of General Motors, had its test licences revoked by the regulator (the DMV) in California on the grounds of ‘unreasonable risk to public safety’. Until such time as deaths and injuries caused by driverless cars can be avoided, it is quite proper to argue that such vehicles should not be on public roads. This is an argument based on the current state of technology rather than driverless cars per se. Without fundamentally redesigning cities and traffic management, there are good reasons to doubt whether self-driving cars will ever be completely safe – or at least no less safe than cars driven by humans – but should the technology ever advance to this point, this argument would no longer serve as a reason to keep autonomous vehicles off public roads.
Driving as a Skill
So far, all arguments, though compelling in their own way, are in a sense ‘time-limited’ and stand to lose their force were suitable changes to occur in legislation, technology or the behaviour of companies. However, an argument advanced by Crawford is rather more stubborn in the face of such changes because it is centred not on the state of technology or regulatory frameworks, but on the nature of driving itself, and human beings as drivers. This argument states that in a world in which driverless cars are the norm, we are rendered (even more) dependent on technology companies, thus impoverishing us as human beings. Much technology – dishwashers, for instance – has the advantage of freeing us from mundane chores to focus on other, more rewarding or fulfilling tasks. Autonomous vehicles are not like this. Driving is not like washing up: it is a skill that requires judgement and the honing of certain capacities, including the ability to negotiate solutions with other road-users to emergent situations through the use of accepted social cues (a feat that it is hard to see autonomous vehicles ever achieving). Driving grants us autonomy and (in some cases) enjoyment but is also a learned ability. Stripping us of this and replacing it with a form of automated transport, controlled by large companies, is to deprive us of something worthwhile and valuable.
Value and Flourishing
It might be replied that driving simply isn’t important: after all, we don’t object to not being allowed to drive trains when we catch them. This is true, but it does not address the value of an acquired skill and its place in human flourishing. As we read about the capacity of AI to generate poems, we might wonder whether, at some point, the technology will be able to produce material comparable to that of the greats. Would we equally argue that since software can create poetry that is every bit as impressive as that written by a human, then humans might just as well give up poetry? Surely the answer is that we should not: a skill, an ability or an excellence, provided it is not in some way intrinsically disordered, is of value and is worth preserving. Driving might not be as rarefied as great poetry (though Formula 1 aficionados might beg to differ) but this does not make it valueless. Being able to execute a three-point turn might not be an achievement comparable to the work of Thomas Hardy or Virgil, but it is also true that most poetry is not of this kind either. If a computer can produce a poem that is ‘better’ than that of a primary school child, should we simply leave poetry to the software and, once the children have decided on the subject, set them some other task while the computer writes the poem for them? Most would think not – because even if a skill is one in which we are not expertly proficient, or is less impressive than some other activity we might attempt, it can still quite properly be considered worthwhile and a ‘good’.
Conclusion
Were driverless cars to become so advanced that they had almost no environmental impact, were entirely accessible, were never involved in accidents and were subject to laws of the road just like human drivers; were the operating companies to work within a strict social and moral code; and were regulations to prevent the misuse of data always and everywhere enforced, there would still be the question of whether, in dispensing with cars driven by humans, we were in some (perhaps small, but still significant) way, having an adverse effect on human flourishing. This is a moral consideration for us to ponder.
Photograph ©Robin Hamman, downloaded from Flickr using a CC BY-NC 2.0 Deed creative commons licence.
Neil Jordan is Senior Editor at the Centre for Enterprise, Markets and Ethics. For more information about Neil please click here.