Dorset Lamb, photo Phil Hall
Moral justification requires human agency
by Phil Hall
The essential question is this: Does “humane” slaughter require human involvement, or is it solely about the animal’s experience? Any system that removes the visible hand of human agency from life-and-death decisions is not progress, it is the end of meaningful civilisation. Moral judgment cannot be automated. Human oversight must be visible, constant, and non-negotiable. Reject the utilitarian lie that outcomes alone justify the means.
In a civilisation where human agency is sacred and AI is strictly a tool—never an autonomous agent—technology must serve moral, intellectual, and aesthetic quality without usurping human judgment. As part of a hybrid Human / AI system, AI should amplify human judgment, never replace it. Its decisions should always be subject to human override, with clear accountability. It should enhance—never diminish—moral, intellectual, or aesthetic quality. Humans must answer for their actions; they remain responsible for them, and they are governed by laws.
Consider this scenario: Temple Grandin developed new ways of calming animals before they were slaughtered because, as she saw it, animals were in their own way “autistic” like her. She identified with them. Animals are startled by small things that are out of place in their environment; they want reassurance. So Grandin designed a slaughterhouse. Here there are no little objects on the ground to frighten the animals, and the walls are narrow and curved. The animals do not see ahead, so they amble securely to their deaths. We shudder, but most of us still eat the meat. Temple Grandin was lauded for this “humane” way to kill animals.
Now contrast this with another scenario: The animals are sent in. The slaughterhouse has very few humans in it. Each animal is channelled towards the place where an AI-controlled automaton ends its life. Soon, one way or another, the animal is dead, killed efficiently by a robot. What is your feeling about the fact that there is a killer AI robot trained to dispatch animals? If you’re like me, you feel even more repulsed. (By the way, if the killer AI robot were let out of the abattoir, how would it behave?)
Oddly, what we want is for a human being to kill that animal—even if it is with a machine and at arm’s length. We want some degree of moral responsibility, even if it is only the human being sending the animal into the kill zone. A fully automated slaughterhouse where AI robots efficiently slaughter higher-order mammals like pigs, cattle and sheep and turn them into meat is a frightening concept. And yet in utilitarian terms, we shouldn’t have a problem with it. So where does the problem lie?
The rejection of AI as a means of slaughter reveals a crucial truth: an AI slaughterhouse, no matter how “humane,” feels profane—as if it were removing the last vestige of moral weight from an action that should carry it.
A pure utilitarian might argue: “If the robot kills painlessly, why should the killer’s identity matter?” Yet the identity of the killer does matter! Perhaps we view humans killing as more acceptable because we can attribute intentions, whereas robots lack moral agency. Certain acts must retain human participation to be morally understood. For a while on British TV there was a programme that asked British people to face up to the morality of their decision to eat meat. Chefs like Hugh Fearnley-Whittingstall took pride in the fact that they killed the animals they had raised and looked after. Killing animals, even when necessary, must be mediated by human conscience—otherwise, it becomes a detached and completely immoral operation.
The revulsion humans feel toward an AI slaughterhouse aligns with Robert Pirsig’s hierarchy of values (Metaphysics of Quality), reinforcing the idea that moral justification requires human agency—even in ethically fraught acts like killing.
Pirsig’s system ranks four evolving levels, each with a different moral position in a hierarchy: at the lowest level are inorganic systems, including AI, which consist of basic matter and energy and are governed by physical laws. Then come biological systems—life, survival, and biological needs (avoiding pain, seeking food). Next, social systems and values take precedence—cultural norms, traditions, laws, and community. The highest order of values relates to systems of awareness and thought, which concern art, spirituality, love, reason, science, and problem-solving. “Dynamic Quality” for Pirsig is the evolving, indefinable force that pushes beyond the inert towards higher truth and beauty. If something inorganic like an AI-powered robot kills a higher-order organism like a pig, cow or sheep, our sense of moral value is outraged.
Utilitarianism is a dangerous oversimplification of ethics. In contrast, by insisting that humans must remain the moral agents, Pirsig preserves the aesthetic, intellectual, and dynamic qualities that make us more than suffering-optimising machines.
Consider Peter Singer, a leading modern Utilitarian philosopher, who would most probably disagree fundamentally with Pirsig’s insistence that human agency is necessary for morally justifiable killing. Instead, Singer would prioritise outcomes—specifically, outcomes that minimise suffering. Singer’s closely argued Utilitarianism leads to conclusions that many find disturbing: infanticide for severely disabled newborns, euthanasia for non-consenting persons with cognitive impairments, the selective abortion of foetuses with disabilities like Down’s Syndrome. It licenses the nihilistic choice to euthanise when the utility line of living falls below a certain point on a graph. Singer’s stance on these issues is not arbitrary but flows directly from his philosophical commitments: his Utilitarianism (a form of Preference Utilitarianism) holds that the only morally relevant factor is whether an action reduces suffering and maximises well-being. And in an automated system, the decision of who defines well-being and suffering lies in the hands of the AI’s controller.

Peter Singer would perhaps argue that the identity of the moral agent is irrelevant if the outcome is the same. If he did release the robots, sentience and consciousness—not species preference or moral tradition—would determine ethical actions and outcomes. Singer would release the bots into war, into education and into the slaughterhouses on Utilitarian grounds. For a Utilitarian philosopher like Singer, perhaps, the human repulsion toward robot killers would be seen as an irrational bias—a sentimental attachment to human-centred morality.
Against this, religion and other humanistic moral philosophies would argue that, perforce, humans have intrinsic worth, regardless of cognitive capacity.
Singer’s Utilitarianism fits neatly into a desacralised, capitalist order where human beings are merely one of several “resources” to be exploited in capitalist modes of production—valued (if the shark-like corporation is honest) only for their contribution to marginal utility. Under corporate capitalism, moral reasoning is reduced to a cost-benefit analysis, mirroring market logic. Life is disposable if it lacks “quality” (economic viability, cognitive capacity, etc.) as defined by the values of the capitalist state. Singer’s philosophy does not challenge dehumanisation—it rationalises it under the guise of philosophical good. The nihilistic core of Utilitarian philosophies is that they see morality as a form of calculus. But whose calculus?
This parallels how Darwin’s theory of natural selection was weaponised from the beginning by Social Darwinists to justify eugenics and Neocon laissez-faire capitalism. Similarly, Singer’s Utilitarianism can be weaponised to justify the elimination of the “unfit.” Singer’s Utilitarianism may indeed be the perfect moral framework for late capitalism: it quantifies life, turning ethics into a spreadsheet presented to the shareholders. Preference Utilitarianism erodes the sacred and indefinable in human life, encroaching on boundaries and crossing red lines. For all its hypocritical invocations of “wellness”, Utilitarianism disguises brutality as rationality. “By their fruits ye shall know them.”
Singer’s philosophy is not an outlier but the logical endpoint of a disenchanted, technocratic worldview where scientism displaces the humanities and social science and where human values are usurped by the instrumentalism of power, where efficiency replaces wisdom, and the measurable casts shadows over the meaningful.
What we require is a morality that expands rather than contracts, one that embraces the full range of possible human experience—subjective, cultural, social, psychological and spiritual as well as material—rather than reducing ‘The Good’ to something empirically quantifiable, where the only morality concerns things that can easily be said to exist, things you can see or touch or smell or hear—not, for example, depression.
A Preference Utilitarian like Peter Singer operates under a ‘flattened’ ontology, where animals and humans coexist as coevals and only the material, measurable and biological is considered to be real. Differences in self-awareness, moral reasoning, and identity are labelled epiphenomena. Love, art, and religion are evolutionary quirks, not meaningful in themselves. Gender self-identification, as a lived and social reality, is considered a “delusion” when it is not rooted in biology. Animals and people are equally conscious and equally worthy.
Returning directly to the question of AI and moral agency: AI is a stone that someone has thrown, and the hand is hidden—there are opaque power structures behind corporate AI. It is an imperative for humanity as a whole to abolish the prospect of automated slaughter. We must recognise that in a moral society, even the necessary killing of animals must remain a human act—witnessed, decided upon (and possibly regretted). To give such a high degree of agency to AI, to delegate death to algorithms, is not progress; it is moral abdication.
Phil Hall is a college lecturer. He is a committed socialist and humanist. Phil was born in South Africa, where his parents were in the ANC. There, his mother was imprisoned and his father was the first journalist from a national paper to be banned. Phil grew up in East Africa and settled in Kingston-upon-Thames. He has also lived and worked in Ukraine, Spain, Mexico, Saudi Arabia and Abu Dhabi. Phil has blogged for the Guardian, the Morning Star and several other publications, and he has written stories for The London Magazine. He started Ars Notoria in May 2020.