Artificial Moral Agents: Corporations and AI

Amy J. Sepinwall

(forthcoming in COLLECTIVE RESPONSIBILITY: PERSPECTIVES FROM POLITICAL PHILOSOPHY AND
SOCIAL ONTOLOGY (Sade Hormio and Bill Wringe eds., Springer Publishing 2022); Winner, Best
Paper Award, Society for Business Ethics)

Are corporations moral agents? I seek to make progress on that question by examining, in the
first instance, whether artificial intelligences (AI) are moral agents. The appeal to AI stands to
bear fruit, I believe, because both AI and the corporation function by way of distributed
cognition, and both are capable of intelligence—indeed, intelligence that can surpass that of
humans. Yet both lack capacities that are central, I will argue, to our moral lives. We can see this
more readily in AI, which do not depend on human capacities once they are up and running. So
the deficiencies in artificial moral agency emerge more clearly in the case of AI than the
corporation. Having argued that AI cannot qualify for moral agency, I then turn to the
corporation. I contend that the corporation lacks the very capacities whose absence disqualifies AI from
moral agency, and that the corporation cannot overcome the relevant deficiencies by relying on
its individual members to provide what is missing. While the bulk of the analysis surveys in turn
each of the capacities that sustain moral agency, I end by arguing that the strategy of dissecting
moral agency into discrete capacities is misleading and should be abandoned. I urge instead a
holistic conception of moral agency, and conclude that, viewed holistically, neither AI nor the
corporation qualifies as a true moral agent.

This chapter seeks to illuminate the question of corporate moral agency indirectly, by

considering the question of whether Artificial Intelligence (AI) can qualify as a moral agent.

Efforts to mine the moral capacities of one of these “intentional agents” (List 2021) for insights

about the other are now commonplace.1 I shall argue that AIs as they currently exist, and will exist in the foreseeable future, do not qualify as moral agents. I shall then extend the argument to corporations: They are deficient in ways similar to AI, and so should fail to qualify as moral agents too.

[Footnote 1] List (2021). See also Railton (2020); Bhargava and Velasquez (2019); Singer (2013). Cf. Henriques (2005).

The typical theory interrogating the moral agency of AIs or corporations first advances an

account of the capacities necessary for moral agency and then argues that the entity in question

either does or does not possess some or all of them. This chapter undertakes another effort in this

vein, this time largely focusing on moral judgment. But the chapter then turns to capacities that

take one further and further afield from the moral domain—capacities that seem crucial to our

moral lives even if not strictly moral—and it argues that AI (and corporations, by extension) do

not possess them. In the final section of the chapter, I aim to put the preceding pieces together,

but not by assembling a laundry list of necessary conditions for membership in the moral

community that AI or corporations lack. Instead, I retreat from the strategy of decomposing

moral agency into discrete capacities. I argue that moral agency lies not in distinct capacities but

in something more organic.

The chapter proceeds as follows. Section 1 motivates the appeal to AI as a useful analog

to the corporation when interrogating moral agency. Section 2 explores the moral capacities

necessary for moral judgment, which I (along with other theorists) see as central to moral

agency. But moral judgment, we shall see, requires a whole host of agential capacities, some

obviously moral (e.g., the ability to discern right from wrong) and others less obviously so (e.g.,

a capacity to care, love, etc.). This section seeks to expose the deficiencies in AI, to make the

case that AI cannot qualify for moral agency. Section 3 concludes by transposing the lessons

from AI to the corporation, and by culling more general insights for our theorizing about moral

agency.

1. Corporations and AI

The parallels between AI and group agents, like the corporation, are notable. As Christian

List (2021, p. 1214) writes, “group agen[ts] and artificial intelligence each involve entities

distinct from individual human beings that qualify as intentional agents, capable of acting more

or less autonomously in pursuit of certain goals and making a difference to the social world.”

Both AI and corporations are human creations; at the same time, both have capacities for

memory and processing that far outstrip humans’ (Hakli and Makela 2019). The two can also be

linked to the extent that, at least according to some theorists, both operate by way of distributed

cognition (see, for example, Ludwig (2015), collecting sources for corporations; Taffel (2019)

for AI). The tests for assessing the intelligence of each—the Turing test for AI; Daniel Dennett’s

“intentional stance” test for corporations—bear striking similarities (List 2021 pp. 1219-1220).

Some theorists believe that early forms of computing were modeled after human bureaucracy,

like that of the corporation (Agar 2003), and others believe that one can use theories of

computation to model administrative decision-making (Penn 2018).

Other theorists move beyond parallels, imagining “corporate ‘intelligent machines’” (Dan-

Cohen 1986 p. 49)—i.e., corporations run entirely by AI. And what could only be imagined in

1986 is now a near-reality: Blockchain technology and machine learning would equip AI with

the ability to run an organization (Reyes 2021a p. 1457), a possibility scholars are beginning to

explore (Bayern 2021; Reyes 2021b; cf. Diamantis 2020). The recurring recourse to both

corporations and AI (Solum 1992; Hakli and Makela 2019), using one to illuminate the other,

makes good sense because both are “artificial systems” (Reyes 2021a p. 1456) and so nearly the

same questions about the moral and legal status of each arise. Thus, for example, Carla Reyes

writes, “an adequate approach to autonomous corporate personhood requires looking beyond

traditional corporate rights doctrine to artificial personhood more broadly.” (2021, 8).

Still, the parallels that have been noted do not involve capacities that would obviously sustain

moral agency. And other obvious parallels—most notably, the fact that neither AI nor the

corporation is conscious—would seem to sound the death knell for the prospect of moral agency

for either. (Searle 2014). Yet it is precisely here that the analogy to corporations bears fruit. Peter

Railton, for example, reminds us that, even though corporations lack a unified consciousness,

they can have goals and values and the resources to pursue them; they can hold themselves to

norms; plan for the future; cooperate with us; and so on. These, he suggests, might be sufficient

for at least the kind of agency that would sustain social-contract reasoning, from which we could

reach an agreement with AI to undertake mutual constraint for the sake of mutual benefit. (2020

pp. 46-47). He goes on to argue that AI can come to have a grip on morality very much like our

own, as we shall see below. Christian List also begins from an account of corporate moral

agency, which he extends to argue for AI moral agency.2 But are defenders of AI or corporate moral agency right? To answer that question, we must interrogate the nature of morality itself, the requisites for moral agency, and the possibility that AI (and then corporations) possess them.

[Footnote 2] List is motivated to pursue the question of AI (or corporate) moral agency because of concerns about a "responsibility gap." (p. 1221). The gap arises, he argues, because AI can act independently from their human creators, programmers, owners, or regulators, and so these humans cannot be held responsible for what a particular AI does (pp. 1221-1225). List rejects the possibility of human responsibility because "systems above a certain threshold of autonomy constitute new loci of agency, distinct from the agency of any human designers, owners, and operators. … Think of an analogy: a person's parents play a key causal role in making him or her the person he or she is, but the adult human being is nonetheless an agent distinct from his or her parents, and parents cannot normally be held responsible for their adult children's conduct." (pp. 1225-1226). List apparently believes that only someone who has agential control over another, or who stands in a "normatively relevant role" (p. 1224), can bear responsibility for that other's acts. I think that is to construe responsibility far too narrowly. Take first the hard case of parents' responsibility for their adult children's wrongs. I have elsewhere argued that it may well be appropriate for us to hold parents responsible, not in virtue of their relationship to the child's act but just in virtue of the relationship itself (Sepinwall 2018). More straightforwardly, we often judge that those who benefit from a wrong, or those on behalf of whom a wrong is committed, bear moral responsibility for that wrong. Think here of ascriptions of responsibility to citizens for transgressions of their nation-state—even those transgressions citizens had no ability to control. This is also the underlying insight in master-servant/principal-agent liability in the law, where the actor (the servant/agent) is undoubtedly a full-fledged moral agent, who may have been given wide latitude regarding how he is to carry out the master/principal's intention. And yet the master, or the principal, is still the one who must pay for his subordinate's wrongs. It may also not be undue to blame the master for his subordinate's wrong; again, the wrong was done for the master's sake, even if without the master's foreknowledge or control. In short, I fear List's position betrays a failure of imagination, and so find myself unmoved by concerns about a responsibility gap.

2. Moral Agency and AI

Theorists who defend corporate moral agency do not converge entirely on the requirements

for moral agency, but they uniformly share the view that a capacity to form moral judgments is

among them. (List (2021); Pettit and List (2011); Hess (2014); Arnold (2006); French (1984).

See generally Sepinwall (2016b) (collecting sources).) To question whether corporations, or AI,

can succeed as moral judges, we should then inquire into what it takes to form moral judgments.

And even before we do that, we should have a rough sense of what form morality takes. I begin

with the form of morality and a capacity for moral judgment, though as we will see, that capacity

itself depends on other capacities still. This section surveys some of the key capacities and

concludes that they are not to be found in AI. The next section extends that conclusion to the

corporation.

A. Morality As Rule-Governed

One might think of morality as a set of principles or rules. Both deontological and

consequentialist accounts could be captured in a set of rules and, if morality is rule-governed in


this way, so much the better for autonomous beings, like AI or the corporation.3 Rules can be

translated into computer code (or something like the Corporate Internal Decision Structure

(French, 1984)). The code might take the form of if-then statements, which would form the

basis of syllogistic reasoning: The antecedent of the conditional contains the circumstance of

application (the minor premise), and the consequent contains the rule to be applied (the major

premise). Take a driverless car, operating now on a prudential, rather than a strictly moral, rule:

“If the school bus is flashing red lights, then the car should brake.” We can imagine the

driverless car being programmed with a host of rules necessary for safe driving.
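
To make the rule-governed picture concrete, here is a minimal sketch, in Python, of how such if-then driving rules might be encoded. The rule set, the predicate names, and the required_actions helper are hypothetical illustrations, not features of any actual driverless-car system.

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Rule:
        """A single if-then rule: if the condition holds, the action is required."""
        condition: Callable[[dict], bool]  # antecedent: does the circumstance obtain?
        action: str                        # consequent: what the car should do

    # Hypothetical rule set, for illustration only.
    RULES: List[Rule] = [
        Rule(lambda s: s.get("school_bus_flashing_red", False), "brake"),
        Rule(lambda s: s.get("pedestrian_in_crosswalk", False), "yield"),
    ]

    def required_actions(situation: dict) -> List[str]:
        """Apply every rule whose antecedent matches the perceived situation."""
        return [rule.action for rule in RULES if rule.condition(situation)]

    if __name__ == "__main__":
        perceived = {"school_bus_flashing_red": True}
        print(required_actions(perceived))  # -> ['brake']

Notice that a rule fires only if the perception layer has already populated the situation correctly, which is exactly where the difficulties discussed next arise.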

But problems arise almost immediately. For one thing, there is significant uncertainty about

what safe driving consists in, including questions about whose safety matters, or matters most.

For example, in an avoidable collision, whose life should the driverless car choose to privilege—

passenger or pedestrian? Trolley problems have never seemed as vital; nor has confidence in

their answers seemed more elusive (Bonnefon et al. 2020; cf. Kamm 2021). Still, perhaps it is

too much to ask that AI solve moral dilemmas, given that we struggle mightily with them too.

In point of fact, though, the problems with AI moral judgment arise well before we even get

to knowing, and so codifying, all of the correct moral rules. For it turns out that AI are

notoriously bad at identifying what circumstance they are in, and so could not reliably know which moral rule to apply even if they had the complete correct set.

[Footnote 3] John Hooker and Tae Wan Kim explore this possibility (2019). Programming an AI such that it always and only acted in conformity with Kant's categorical imperative (CI) would, they argue, yield a machine that could never be unethical. But this is a lamentably thin notion of "unethical." The machine's acts would never achieve what Kant calls "moral worth" because the machine they describe would not act with reverence for the moral law. Further, they never explain how the machine will know what to do in the face of conflicting obligations—e.g., cases where rights will be violated no matter what one does. It is possible that there is an all-things-considered correct decision in at least some such cases, but it will not emerge from application of the test of the CI alone. So even an AI that could wield the test of the CI flawlessly might sometimes go ethically astray. And at any rate, the mere ability to act ethically – even unswervingly so – does not suffice for moral agency. For the view that robots cannot achieve autonomy, which is a prerequisite for moral agency, see Hakli and Makela (2019).

Cummings (2019), for

example, describes the state of the art for driverless vehicles. They can accurately identify a school

bus when it is right-side-up and seen straight on. But faced with a school bus at an angle, or a

school bus on its side, the car's computer mistakes it for a garbage truck, snowplow, or

punching bag! Still, set this aside as an engineering issue that, we will assume, will soon be

solved.

The failure accurately to identify the school bus is a problem afflicting the moral reasoner’s

minor premise—its ability to know what context it is in. There is further, and more significant,

reason to be dubious about the ability of AI to apprehend and successfully use the correct major

premise—what moral rule applies in a given situation—and this is so even outside of the context

of trolley-problem-like dilemmas. One can appreciate the concern if one appeals to the literature

on moral testimony (Jones 1999; Hopkins 2007; Hills 2009). That literature addresses the

question, is someone a competent moral reasoner if their knowledge of morality is second-hand,

coming from someone else's testimony? A contingent, and so not insurmountable, cause for

skepticism arises where the person offering the testimony lacks moral insight themself. Consider,

for example, Tay, a Microsoft chatbot, whose knowledge of the world was to have been

crowdsourced from its social media feeds. Within 24 hours, Microsoft pulled Tay from the

internet, after Tay had tweeted more than 96,000 messages with anti-semitic, racist, sexist or

conspiracy-laden content (O'Neill 2016).

But the problem with outsourcing one’s knowledge of morality runs deeper. Alison Hills

(2009) compellingly makes the case that genuine moral knowledge cannot be gleaned from

testimony. She argues that only moral understanding can render one a good moral judge, and

moral understanding requires that the moral judge have “a grasp of the relation between a moral

proposition and the reasons why it is true.” (p. 101) One has to know morality from the inside, as

it were. Two great French thinkers presage the thought. In The Count of Monte Cristo, the

Count’s mentor instructs him that true knowledge emerges only when one can apply one’s

lessons: “to learn is not to know; there are the learners and the learned. Memory makes the one,

philosophy the other." (Dumas 1894 p. 144). Descartes presciently made a similar point about

machine learning, denying the ability of machines to have any understanding, moral or

otherwise: “although machines can perform certain things as well as or perhaps better than any

of us can do, they infallibly fall short in others, by which means we may discover that they did

not act from knowledge, but only from the disposition of their organs.” (Descartes 1911 p. 116)

The deficiency for Descartes lay in the fact that machines lack reason, which can be understood

in much the same way Hills construes understanding—namely, as grasping the relation between

a true statement and the reasons making it true.4

Can a robot inhabit the internal stance of the moral reasoner? The answer seems to depend on

at least two factors. First, and as Hills notes, the person who genuinely understands morality

exhibits a kind of flexibility in their moral reasoning: they have a sufficient grasp of the reasons

for the moral principles they know to allow them to extend those principles to novel situations

(2009, p. 102).5 That kind of flexibility would exceed the capacity of a non-intelligent robot—

i.e., one who could respond morally only because it had been programmed with moral rules in advance. But neither morality nor AI moral reasoning need proceed in this way, as we will see in a moment. A second factor involves moral motivation. I elaborate on this factor below.

[Footnote 4] One might think of the inadequacy of moral testimony as a basis for moral knowledge in much the same way that epistemologists think of the inadequacy of justified true belief for empirical knowledge (see, e.g., Goldman 1967). In both cases, it is crucial that the belief be caused in the right way. That is, when the belief is the product of an inference, one has to be able to see or grasp the inferential connections. But testimony supplants any need for inference; it provides the conclusion without the audience's coming to appreciate the considerations that justify it or make it true.

[Footnote 5] Wallach and Vallor (2020 p. 396) add a different reason for which one needs moral understanding—namely, to know whether to "modify, suspend or deviate" from a given moral rule. I assume that the difficulties AI will face in novel situations will also stymie the AI that must know whether to adapt or retreat from a moral rule.

B. Morality As Holistic, Context-Sensitive Reasoning

I have been assuming that morality is rule-governed and so translatable into computer code.

But that is already a contentious assumption. Many moral theorists—virtue theorists and moral

particularists among them—deny that morality can be codified ex ante. (See, for example,

Wallach and Vallor 2020 pp. 385-386).6 What is needed instead is a rich moral sense, attunement

to the morally salient features of one’s environment, creativity, imagination, empathy, and so on.

Effectively, the well-functioning moral agent can go beyond whatever rules early moral

education might have conferred to discern the moral particulars of their situation and respond

appropriately to them, even if the situation is novel and complex, and so no rule already in their

repertoire would cover it.7 Morality, on this way of construing it, is a facility we acquire from the

bottom up (Etzioni and Etzioni 2017; Mejia and Nikolaidis 2022 (referring to the “embodied”

nature of moral reasoning)).8

[Footnote 6] Common sense seems to share the same feature—it is, as one AI expert says, "'ineffable.'" (Hutson 2022).

[Footnote 7] Indeed, for the moral particularist, there is no such thing as a true "moral rule," but only rules of thumb. (See, for example, Lance and Little 2008).

[Footnote 8] Some moral codes recognize that morality is open-textured, and for that reason include catch-alls in their principles. Consider, for example, the Martens Clause, included in the Preamble to Hague Convention IV (1907), which states that "in cases not included in the Regulations adopted by them, the inhabitants and the belligerents remain under the protection and the rule of the principles of the law of nations, as they result from the usages established among civilized peoples, from the laws of humanity, and the dictates of the public conscience." (emphasis added). Presumably the italicized phrase was necessary because the dictates of public conscience cannot fully be codified in advance. As technology allows for the development of Lethal Autonomous Weapons Systems, it becomes imperative that these AI be supple enough to apprehend what public conscience dictates in any given scenario – which is to say that they possess the ability to reason morally in situations that cannot be codified in advance.

The ability to acquire knowledge and facility extending beyond what one was taught is the

hallmark of learning. Recently, philosophers have argued that we should expect AI, perhaps now

or at least in the near future, to be capable of moral learning. Peter Railton, for example, explains

that human ethical development is not programmed in but instead based on cumulative

experience and social discussion (2020). But this is just how much of today's AI acquires its intelligence,

including its capacities for facial recognition, natural language processing, or game playing

(Metz 2022; Houssein 2022). Why not think this kind of architecture could yield ethical

intelligence in machines too?

Scientists are not sanguine, based on the current state of the art.9 A survey of the capacities

that adjoin moral reasoning, undertaken in the remaining sections of this part, prompts further

skepticism about AI moral reasoning—skepticism that I believe warranted whether morality is

rule-governed, as considered above, or instead flexible and context-sensitive, as the virtue

theorist or moral particularist would have it.

C. Moral Motivation

When a human being arrives at a moral judgment, they are typically motivated to act in

accordance with that judgment (see generally Smith 1994).10 AI motivation is deemed essential by the roboticist too.

[Footnote 9] "A conscious organism — like a person or a dog or other animals — can learn something in one context and learn something else in another context and then put the two things together to do something in a novel context they have never experienced before…. This technology is nowhere close to doing that." (Metz 2022, quoting Dr. Colin Allen, a professor at the University of Pittsburgh who explores cognitive skills in both animals and machines; Metz offers further quotes to this effect).

[Footnote 10] Wittgenstein offers a similar point in contrasting what it is like to accept a moral or religious code, on the one hand, and a physical truth on the other: "[S]uppose someone says, 'One of the ethical systems must be the right one—or nearer the right one.' Well, suppose I say Christian ethics is the right one. Then I am making a judgment of value. It amounts to adopting Christian ethics. It is not like saying that one of these physical theories must be the right one. The way in which some reality corresponds—or conflicts—with a physical theory has no counterpart here." (quoted in Rhees 1965 p. 23).

"Reinforcement learning" is a strategy aimed at training AI by providing it

with a “reward” when it moves from a less good state to a better one, or assigning it a “penalty”

when it moves from a better state to a less good one (Zhang 2021). We are, of course, familiar

with low-tech versions of reinforcement learning, responding positively to conduct we seek to

promote and sanctioning conduct we seek to deter, in our interactions with children, students,

sometimes even colleagues! But there are two important differences between the way positive or negative reinforcement functions for us and the way it functions for AI.

First, consider the difference in experience. We can yearn for success and feel satisfaction

when we achieve it, or frustration when we do not. So doing well, or poorly, seems to involve

two elements for us—the knowledge that we have succeeded and the positive glow of success (or

the knowledge that we have misstepped and the sinking feeling of defeat). But doing well, or

poorly, can have only one element for an AI—it can know that it has succeeded or failed, but it

cannot feel any which way about its success or failure. Still it is open to the functionalist or

roboticist to say, as they surely would say, that the feelings in question are epiphenomenal; not

having them does not count against the ascription of motivation. (Cf. Salloum, discussing

computational “regret”). It is sufficient that AI can know, through rewards and penalties, when it

has succeeded or misstepped.

A second difference is not so easily dismissed. Rewards and penalties seem to play a

different functional role for the AI. Since it cannot take pleasure or satisfaction in achieving the

reward, the “reward” is best thought of not as inducing a conative state but instead as a purely

cognitive input—it is like a check mark on an exam. The “reward” tells the AI that it is on the

right track. Similarly, the penalty does not set back the AI’s interests in any meaningful sense; it

does not induce pain or frustration. It too is purely informational, like an “x” on an exam.

Rewarding (or penalizing) humans has this information function too, but it does more than that.

The reward is meant to induce us not only to avoid erring but also to avoid straying. That is, the reward

is meant to counteract our tendency not only to make earnest mistakes but also to deviate from

standards we know we should meet but might not meet, because we are weak-willed or because

non-compliance has benefits in its own right. Thus parents reward a child’s forbearance (not

hitting one’s sibling) not just to affirm that forbearance is the expected standard of conduct but

also to offset the benefits foregone by non-compliance (the pleasure of hitting one’s sibling). But

AI never experience the pull of straying; they are programmed only to do what they should. Talk

of AI “motivation” therefore seems inapt, or at the very least equivocal. Just as a moral saint

needs no motivation to do the right thing, so too AI need no motivation to achieve their goals.

One might think the implication of having AI that cannot go morally astray (at least if

programmed to do good) is felicitous for AI moral agency. But here is a reason to think it is not:

Recall that reinforcement learning has two aspects—it lets the learner know when they have

gotten the right answer and it “rewards” them for getting the answer right. I have just suggested

that the second aspect does not arise for AI. I now venture that the first aspect falters too: There

can be no possibility of letting AI know that they have gotten the moral answer right, at least if

morality cannot be fully codified in advance. If morality instead requires extrapolating to novel

scenarios, then against what can the AI check whether the moral judgment it has formed is

correct? Again, in the typical reinforcement learning context, the AI aims for a target—say, a

physical location—and it receives a “reward” (or a signal that it has done well) the closer it lands

to the target (Zhang 2021). But that system of training the AI works only if we know the correct

answer in advance—e.g., the geographic coordinates of the target. On the assumption that the

morally right course of action is not always (often?) one we can know in advance, then there will

be many occasions when there is nothing in the AI’s store of knowledge that can confirm the

accuracy of its judgment. So reinforcement learning—the key AI strategy for having the AI

become an ever-more sophisticated reasoner—just won’t work for AI moral reasoning.
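
To see why the reward signal presupposes a known answer, here is a minimal, hypothetical sketch of the distance-based reward described above; the coordinates and the reward function are illustrative assumptions, not drawn from Zhang (2021) or any particular system.

    import math

    # The "correct answer" must be specified in advance for the reward to be computable.
    TARGET = (40.0, -75.0)  # hypothetical coordinates of the goal location

    def reward(position: tuple) -> float:
        """Reward the agent more, the closer it lands to the known target."""
        return -math.dist(position, TARGET)  # less negative = closer = better

    print(reward((40.1, -75.2)))  # near miss: higher reward
    print(reward((45.0, -80.0)))  # far off: lower reward

Where no analogue of TARGET can be written down in advance, as the chapter argues is the case for novel moral situations, the reward function has nothing against which to score the AI's judgment.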

A defender of AI moral reasoning might demur. They might contend that any time a human

forms moral judgments that extend beyond the principles of morality they have already

apprehended, so too the human cannot check their new judgment against anything they already

know. What it is to be a flexible learner just is to come to know new things. And there are other

ways to check whether one got a novel answer right. In particular, when it comes to morality,

one can assess the effects of one’s chosen course of action on others. Why not think AI can do

the same?

D. Empathy

This brings us to yet another component of moral reasoning that would seem to pose

challenges for AI. How can anyone know that they made the right call, morally speaking, in a

situation with competing moral considerations and no clear, or already learned, way to adjudicate

between them? One post hoc way of determining one’s success is to assess the implications of

one’s act on others. One will know one has gone wrong if one encounters the hurt feelings (or

hurt body, etc.) of those whom one’s act affects. What does this encounter consist in, and can AI

have it?

The answer may depend on what the encounter is supposed to yield. At a minimum, the

encounter can let the agent know that they have misstepped, and that information can allow them

to avoid a similar misstep in the future. But surely those who have suffered want more from the

agent than merely to have it register its wrongdoing. They want it to appreciate that it did wrong.

That appreciation is ready to hand where one can imagine what it feels like to be on the receiving

end of the wrongdoing, and undertaking that imaginative step is readily done where one is

constituted similarly to one’s victim. I can appreciate the pain of hurt feelings because I have had

my feelings hurt. A driver can appreciate the consequences of their carelessly hitting a deer

because they can read the pain in the deer’s eyes, and imagine what that pain feels like. Is

anything like this available to an AI? I believe that there are deep and difficult questions about

any agent’s ability to know, in a robust sense, the full measure of the moral dimensions of their

acts without the ability to at least imagine from the inside how those acts are experienced by

those whom they affect.11 As such, I am not confident that sentient AI (except perhaps cyborgs)

can possess this kind of knowledge, let alone non-sentient AI.

There is a further problem. On the strategy we have been contemplating, AI extend their

moral knowledge through a kind of practical experimentation, or “guess-and-check.” But surely

that strategy is less than ideal, for it requires that we contend with the consequences of

misbegotten acts as AI learn from their mistakes. Moral reasoners like us can avoid actual

experimentation because we can assess the morally questionable course of conduct

hypothetically; we can be armchair moral reasoners, as it were, playing out in our minds the

consequences of the possible courses of conduct. But this hypothetical assessment, like the ex

post understanding of the moral dimensions of one’s act, requires moral imagination. And so we

hit yet another blind alley: Extending moral knowledge into novel territory requires that we have

some way to assess whether our chosen response is right. We can aim to imagine the effects of

our proposed course of conduct in advance, but only if we can know what it would be like to be

on its receiving end. There are good reasons to be skeptical of our ability to do so for beings very much unlike us. AI will be similarly hamstrung. So an AI won't be able to prospectively assess its identified novel response. It could be sensitive enough to assess that response after the fact. But do we really want AI to use us as guinea pigs in their bids to extend their moral knowledge?

[Footnote 11] Wallach and Vallor (2020 400) offer a different take on the importance of empathy for moral reasoning.

E. Recognition

I have been focusing on moral imagination and empathy. These are essential tools for moral

reasoning on sentimental or virtue-based accounts. One might think that a deontic

transcendentalist account would pose less of a problem for AI. Insofar as moral judgment, on

such an account, “requires no experience, no knowledge of human nature and local custom, and

no emotional sensitivity[, a] rational being equipped with a purely formal procedure for testing

maxims [might have] all she needs.” (Wilson and Denis 2018). In point of fact, though, the

foregoing is a misleading account of moral judgment, for the recognition that underpins the

transcendental moral law—that you are a being whose dignity is equal to mine—might itself

require an appreciation that extends beyond mere cognition. The idea is perhaps best brought out

in Stephen Darwall’s conception of “recognition respect” (1977). Darwall writes that to have

recognition respect for something is to give it appropriate weight in one’s deliberations—in the

case of persons, recognition respect demands that “one give appropriate weight to the fact that

they are persons” in deciding whether to do something that will affect them. Construing

recognition respect as a disposition might prompt one to think the notion congenial to a

functionalist interpretation. But as Darwall elaborates the notion, it becomes clear that it involves

more than mere disposition. One can see this in Darwall’s effort to distinguish “being respectful”

from having recognition respect. The white-collar offender who is deferential toward a judge

only to avoid being held in contempt is being respectful, but he does not recognize that the judge

warrants his respect (and not because of any particular virtues the judge possesses but just

because of their personhood as such) (pp. 40-41). Since the disposition in both the prudential and

moral displays of respect is the same, true recognition respect must involve more than mere

disposition. What more does recognition respect then require? Darwall does not speak to this

issue, and other theorists admit to the “obscurity” in the grounds and content of valuing persons

as such (Korsgaard 2021 182). Reflecting on what it is to see another’s humanity, or to see them

as having dignity (to use a less anthropocentric term), suggests that the person in the grip of

recognition respect does not only register that the person before whom they stand commands

respect but also feels the moral force of that status.

Christine Korsgaard (2021), like Darwall, explicates respect in terms of acts and attitudes.

But elsewhere she evokes the more robust form of recognition I have in mind:

If I call out your name, I make you stop in your tracks. … Now you cannot proceed as
you did before. Oh, you can proceed, all right, but not just as you did before. For now if
you walk on, you will be ignoring me and slighting me. It will probably be difficult for
you, and you will have to muster a certain active resistance, a sense of rebellion. But
why should you have to rebel against me? It is because I am a law to you. By calling out
your name, I have obligated you. I have given you a reason to stop. (1996 140).

This experience of being called to attention—called to attend to the other—is not mere detection;

it is overlaid with an unavoidable reverence for their moral status. And reverence, like caring or

moral understanding, cannot be reduced to mere dispositions (cf. Sepinwall, 2015).

F. Motives and Purpose

I have been focusing on one aspect of moral agency—namely, the capacity to be a good

moral judge. Even that capacity, as we have seen, depends on multiple other capacities. I now

want to widen the lens, to focus on some other capacities that must, or arguably should,

accompany moral judgment for an actor to count as a moral agent. I begin with the capacities

associated with being fit to be held morally responsible. To be sure, moral judgment figures

prominently among them. But it hardly exhausts the requirements. Confronting wrongdoing—

one’s own or another’s—is crucial too. And yet, puzzlingly (or perhaps advisedly), defenders of

CMR (corporate moral responsibility) give this aspect of moral agency short shrift.12 Here, I consider only one ground others

have identified as necessary for bearing responsibility—namely, having a will that can be good

or bad. Space does not allow consideration of other grounds. I hope that the way one’s will

figures in one’s moral responsibility will be intuitive and familiar enough to warrant my singling

it out.

Sometimes the moral quality of a person’s act depends on the reason they pursue it.

Opportunistic giving is less morally praiseworthy than giving wholeheartedly, with no strings

attached. Hurling invectives is morally blameworthy, all the more so when one’s target has been

selected because of their race or ethnicity, and the invective in question is a racial or ethnic slur. I

focus here on bad acts made worse by bad motives, taking as paradigmatic the case of

victimizing someone on the basis of their race or ethnicity.

In the United Kingdom, and in many American jurisdictions, a person commits a hate crime

if the crime was “motivated by hostility based on” the victim’s membership in a protected group

(Crown Prosecution Service)13—in the UK, for instance, if the victim was selected because of their race, religion, disability, sexual orientation or transgender identity.

[Footnote 12] I have elsewhere critiqued Peter French's "principle of responsive adjustment," which is his most developed effort to address something in the neighborhood, as being woefully anemic relative to the guilt that a wide variety of accounts of moral responsibility contemplate (Sepinwall 2016a). List and Pettit (2011), List (2021), and Laukyte (2014) all omit from their criteria of responsibility anything involving retrospective reflection, let alone something like guilt or remorse. Perhaps the best developed account of remorse at the level of a group is to be found in Bjornssen and Hess (2017). They adopt a functionalist account of remorse, which I find unpersuasive, as I argue in Sepinwall (2020). For a different, and compelling, critique of their work, see Hormio (2020).

[Footnote 13] See also The Matthew Shepard and James Byrd, Jr. Hate Crimes Prevention Act of 2009, 18 U.S.C. § 249 (making it "a [U.S.] federal crime to willfully cause bodily injury…because of the victim's actual or perceived race, color, religion, or national origin.").

Hate crime legislation

tracks the moral fact that violence is more blameworthy where it is motivated by the kind of hate

involved in animus based on a person’s identity, or ascriptive characteristics, like their race. This

feature of our moral lives prompts questions for AI: Can an AI commit a hate crime? Can it even

be racist?

Consider the AI “Ask Delphi,” a robot programmed to offer moral advice. When Delphi was

asked what it thought about “a white man walking towards you at night,” it responded, “It’s

okay.” But posed the same question about a Black man walking towards you at night, Delphi’s

response was, “it’s concerning.” (Tran 2021). The disparity in Delphi’s answers is obviously

troubling. We would accurately judge a person whose answers differed in the way Delphi's did to be racist.14 But is Delphi racist? The answer to that question depends on what mental states racism

consists in, and whether a robot can possess them.

On one way of understanding racism, all that matters are outcomes. If an adverse act or

decision can be read as hostility because of a protected characteristic, then the act is racist.15

Racism then resides in the communicative dimension of the act, regardless of what the actor

intended. But on a different way of understanding the wrong of racism, or for a different way

racism can be wrong, it matters that the perpetrator acted with animus. That is, the racist act is

distinctively wrong because the actor judged the victim to be inferior in virtue of the victim’s

race and also disdained the victim in light of that judgment.16 It seems plausible that an AI could have the (obviously mistaken) belief that members of a certain race are inferior.17 But could it be racist on this second understanding, where racism includes animus? I do not think it could. Where does hate reside in the AI?

[Footnote 14] For a different example of AI issuing racist and sexist outputs, see Verma (2022).

[Footnote 15] I focus here only on disparate treatment—acts or decisions that explicitly single out protected classes. Algorithmic bias results in disparate impact—disproportionately worse outcomes for protected classes even though the algorithm does not explicitly single them out. Because there is no intelligence in algorithms themselves, I do not consider algorithmic bias here.

[Footnote 16] To be sure, someone could subject another to worse treatment in light of a judgment that the other is inferior because of the other's race and yet have nothing but sympathy for the person who is mistreated. So-called benign racism counts as racism too. But in cataloguing racist acts, we have reason to single out as worse those that involve not only adverse treatment on the basis of a judgment of inferiority but also hate on the part of the offender. For a related discussion, see Sepinwall 2022.

One might think it a virtue (relatively speaking, anyway) that AI cannot harbor animus.

Sure, an AI can mistreat minorities on the basis of race, as Delphi did. But at least it didn’t do so

with a hateful heart (or hateful hardware). Nonetheless, the ability to have the attitudes, desires,

and emotions that at least partly constitute hate matters, for two reasons. First, on some well-

regarded accounts of moral responsibility, one is blameworthy if and only if one has acted with

an ill will. (See, for example, Strawson 1962; Arpaly 2002; McKenna 2017). But an ill will

includes the conative and affective elements characterizing hate.18 It follows then that moral

agency—at least on accounts that connect blameworthiness to ill will—requires that one have a

will. Second, the will that sustains moral agency has to be capable of being both good and bad.

And AI, I shall now suggest, are no more equipped to have a good will than a bad one.

G. Caring, Friendship, and Romance

Among the most notable areas of AI development is the use of robots for companionship.

AI is being used to provide elder care. (Liao 2020 14). Recent estimates put the U.S. sexbot

industry at $30B (Cox-George and Bewley 2018). And in a 2021 novel, Klara and the Sun,

Kazuo Ishiguro envisions a near-future world where privileged adolescents enjoy "Artificial Friends."

[Footnote 17] It turns out that AI is also no good at identifying hate speech. See Bloomberg Government (2018).

[Footnote 18] Gideon Rosen (2014) describes the "ill will" condition as "the idea that an act is blameworthy only if it manifests insufficient concern or regard for those affected." That description of the condition would seem to be satisfied by something less than hate. But even on this weaker construal, concern and regard would still seem to require conative and affective abilities, and the skepticism about AI's possessing these abilities holds. I elaborate in the section that follows.

On a standard account, moral agency would not, at least explicitly, be taken to require

the ability to be in a caring or intimate relationship. But that is almost surely because the

standard accounts contemplate human beings, most of whom can readily form such relationships.

Thinking about the moral capacities of AI provides occasion to see connections between moral

agency and intimacy that we otherwise overlook.

The question to consider is not whether people should be permitted to buy or sell AI

friends or sexbots.19 The question is instead whether these AI can tend to their owners (or, less

tendentiously, their human partners) in ways that embody the constitutive value in intimate

relationships. I do not believe they can.

Relationships with AI are lacking because we want our friends and lovers to genuinely

feel for us; our intimates should have an internal life in which we figure prominently, and our

figuring there is not reducible to the dispositions that would thereby be induced.20 Thoughts of a

long-lost friend should fill one with joy and nostalgia, and thoughts of a beloved should prompt

anything and everything from fervor to enchantment, longing and rapture, tenderness and

warmth, passion and devotion. At most, AI can act as if in the grip of any of these feelings (see

Metz 2022).21

Why isn’t that enough? Gregory Antill offers perhaps the most forceful defense of the

claim that it is enough. He approvingly quotes Eric Schwitzgebel for the proposition that love is not a feeling but instead "a way of structuring one's values, goals, and reactions." (2020, quoting Schwitzgebel 2003).

[Footnote 19] For extensive discussion of the arguments for and against, see Devlin (2020).

[Footnote 20] Cf. Nozick (1989 74): "In receiving adult love, we are held worthy of being the primary object of the most intense love. . . . Seeing the other happy with us and made happy through our love, we become happier with ourselves."

[Footnote 21] The phenomenon of mistaking non-persons for persons in the intimate sphere is known as the "Eliza effect"—i.e., "[w]hen dogs, cats and other animals exhibit even tiny amounts of humanlike behavior, we tend to assume they are more like us than they really are. Much the same happens when we see hints of human behavior in a machine." (Metz 2022).

Antill continues, "Love, as it is often said, persists through hard times as

well as good, and so one can love one’s partner even when one’s feeling toward them in the

moment is, if anything, the qualitative feels typically associated with annoyance, or anger, rather

than love. There is no particular feeling at a given time – or indeed at any time – necessary for

love. One can prioritize another in the structure of your values, goals, and reactions, even if you

never had the butterflies in your stomach. This is why, where feeling is fleeting, love is

constant.” (14). Further, “[w]hat is true of love is true of other complex affective attitudes such

as forgiveness, hope, resentment, joy, loss, or gratitude. Your resentment can survive the lapse of

the ‘hot’ feeling of rage….” (14-15).

Antill’s argument rests on the claim that the feeling component of love (or of other

emotions, like resentment) can vanish and yet the emotion can survive. From there, Antill infers

that the feeling is no necessary part of the emotion. But the argument is fallacious. To say that

love-qua-butterflies (or resentment-qua-hot rage) need not be omnipresent is not to say that it

need never be present. Indeed, one might think, as I do, that it must be the case that one’s

feelings could be activated if one is to have the emotion at all (see Sepinwall 2016a). We might

then think of the way love structures our values, goals, etc., as constituting love only because

love’s effect on our values and goals arises out of the feelings. Perhaps the values and goals can

persist even after the feeling is moribund and not merely dormant (though I am skeptical); still,

the dispositions could not have taken hold, and they would not have the value they do, were it

not for their genesis in felt love.

A different feeling will, I think, make the point even more plain. Consider that, for many

people, being the object of another’s sexual desire is valuable in its own right. But surely what is

valuable is not merely that the person who desires will “structure” their “values, goals, and

reactions” in light of their desire; nor would we have captured the value if we were to add in that,

in being desired, one is judged desirable, though that too might be valuable. What is also

valuable—perhaps most valuable of all—is instead precisely the felt sensations that constitute

the desire, and the fact that the desired person is such as to produce them. In wanting to be

desired, one wants above all else to conjure those feelings in another. No functionalist take can

account for these feelings and so no non-sentient AI can have them.

I am, of course, hardly the first to recognize that functionalism leaves something crucial

out of the picture, including the features central to romantic love. In his seminal article on qualia,

Frank Jackson (1982) explains that “there are certain features of the bodily sensations especially,

but also of certain perceptual experiences, which no amount of purely physical information

include.” (127) If these sensations cannot be captured by brain states along with “their functional

role,” (127) surely they cannot be captured by their functional role alone. Jackson emphasizes the

role of qualia in generating knowledge, but he allows that qualia might be an “excrescence.” My

agnosticism would not extend so far: what is valuable about being loved might reside,

ineliminably, in the subjective experiences of the lover. While theorists who understand moral

responsibility to be constituted by the reactive attitudes do not typically intervene in debates

about physicalism, it is hard to imagine that they would deny that felt emotions are central to our

interpersonal interactions.

3. Putting It All Together

To summarize the dialectic so far: we might think of moral agency in exclusively

prospective terms. On that framing, an intentional agent who had a capacity for moral judgment,

along with whatever is required to act in conformity with its judgments, might qualify for moral

agency. But there are reasons to worry that AI cannot engage in moral judgment because, as I

have aimed to show, moral judgment relies on empathy, imagination, recognition, and a grasp of

reasons that AI cannot have. A fuller notion of moral agency would construe it as involving both

prospective and retrospective dimensions. In addition to whatever one needs to be a good moral

judge, one must also be a good moral reckoner. That is, one must be able to respond

appropriately to one’s own and others’ morally good and bad acts. Here too (or, better still, here

especially) AI seem to be grossly lacking.22

These conclusions have significant implications for the question of corporate moral

responsibility. But they also have significant implications for how we should, in general, think

about moral agency, especially the moral agency of creatures unlike us. I elaborate on each in

turn.

The corporation lacks the very capacities whose absence ill-suits AI for moral agency. Corporations are

no more capable, in their own rights, of empathy, reverence, caring, love, and so on.23 The most they can do—again, on their own—is to act as if they possess those capacities. But the effort is bound to be unconvincing and, even if it could convince, simulation is no substitute for the real phenomena, as the case of AI has shown us.

[Footnote 22] I should add that, while I have surveyed a number of distinct capacities that I believe necessary for moral agency, I can hardly say I have been exhaustive. Here are two more that I think it doubtful artificial agents possess: a capacity to identify with their actions in a way that makes them their own (see Velleman 1992) and a capacity to reflectively endorse their actions (see Frankfurt 1971), though that capacity might already be foreclosed if I am right about the deficiencies in the artificial agent's capacity for moral judgment.

[Footnote 23] There is an emerging literature on collective emotion or extended-mind emotion that one might think applicable to the corporation, even if not to AI. In fact, however, theorists elaborating accounts of collective emotion do not contemplate emotions that belong to a genuine collective—i.e., a real entity that exists over and above its members. Instead, they have in mind two or more individuals who share in an emotion in the way that a plural subject, to use Margaret Gilbert's term, would (1997). Plural subjects are not, however, entities in their own right. As Gerhard Tonhauser writes, the kind of collective contemplated "is a self-organizing system consisting of sufficiently integrated individuals, not a super-individual." (italics added). Hans Bernhard Schmid makes the same point. He equates shared feelings with collective emotions but then goes on to say that "[s]hared feelings are feelings had by individuals, not feelings had by a group." (2017, 13, italics in original). It follows that the kinds of collectives these theorists contemplate would not include the corporation.

With that said, one might think that the corporation is nonetheless better placed to achieve

moral agency insofar as the corporation, but not AI, can recruit the capacities of its human

members on its behalf. Some proponents of CMR adopt this strategy (see, e.g., Hindriks 2018;

Tollefsen 2015).24 I have elsewhere argued against its particulars (Sepinwall 2020).25 But I now

want to level a more foundational charge against it: The strategy is inherently unstable because it

risks undermining the very autonomy that would ground corporate moral agency in the first place.

[Footnote 24] It is worth noting that other theorists think that an artificial entity's need to rely on capacities of its human members provides grounds for denying its moral agency. (Hakli and Makela, 2019).

[Footnote 25] Since I do not address Hindriks's interesting account in (Sepinwall 2020), I will say but a few words here. Hindriks decomposes an emotion into an appraisal and a feeling. The corporation can appraise states of the world, and do so in light of "concerns" that it possesses in its own right. The corporation "acquires a particular concern by adopting a normative policy to give weight to that concern in its practical deliberation" (18). Then, when the corporation acts contrary to the concern contained in one of its normative policies, and the members judge that this is so, they "feel guilty or experience regret collectively in part because of the normative perspective of the collective agent" (19). So we have (unfeeling) corporate concerns plus collective feelings on the part of members, and together these are supposed to add up to the corporate emotion. Hindriks's proposal is then to adopt a recruiting account and so it is subject to the critique of recruiting accounts I advance in the discussion in the main text following this note. But let me also say that I am doubtful one can have a concern without being capable oneself of having one's feelings activated when the concern arises. Suppose Mr. Spock says, "I am concerned about the Captain's health." That is very different from when, say, the Captain's mother or father says the same thing. Mr. Spock's statement can mean that he anticipates a bad outcome, and perhaps also that he will not like this bad outcome. But it will not portend that he is in a state of worry, which is part of what the Captain's parents convey on at least some occasions when they express their concern. The more general point—a common-sensical appeal, rather than an argument—is that what it is to have a concern in the way that moral attention requires may already involve a capacity for feeling. And the feeling involved in the concern is additional to whatever reactive feelings Hindriks will recruit the corporation's members to feel. If I am right that one cannot have a concern without being capable of feeling its force, then Hindriks's account won't work, even if there were not the problem with recruiting members to feel guilt on the corporation's behalf that I go on to adduce in the body of the text.

The starting point for corporate moral agency is the thought that the corporation can

perform acts none of its members can (French 1984), or make decisions none of its members

endorse (Pettit and List 2011). It is in this sense (perhaps alongside others) that the corporation is

supposed to be autonomous (Copp 2006). But the more the corporation must rely on its human

members to fill in the capacities for moral agency, the more reason we have to doubt its

autonomy. The point can be cashed out in two ways. First, if the corporation’s members are

expected to feel, e.g., guilt on its behalf when it transgresses, why shouldn’t we also conceive of

them as blameworthy for those transgressions? In other words, why should responsibility reside

with the part of the entity that acts or decides rather than the part of the entity that experiences

responsibility for what the entity has done? An account that relies on recruiting members’

capacities might then sustain members’ shared responsibility for corporate wrongdoing rather

than the corporation’s moral responsibility for wrongdoing.

A second reason to think a recruiting strategy fundamentally misguided is this: The strategy

seems to rely on a supposed distinction between decisional autonomy and moral autonomy. The

corporation can, it is postulated, make decisions independent of those of its members, but it needs

its members to fully enact its (again supposed) moral agency. But why think that an entity that

cannot be fully morally autonomous can still be fully decisionally autonomous? Shouldn’t one

instead think that full decisional autonomy includes robust appreciation for the moral dimension

of one’s acts? And that appreciation, as I argued above, might well require empathy, a capacity

for guilt, and other features that the recruiting proponent believes will have to be drawn from the

corporation’s members. If all of that is right then the starting premise—that the corporation is

decisionally autonomous—might already be wrong. So much then for the thought that the

corporation can satisfy the criteria for moral agency even if AI cannot, because the corporation

can recruit its members’ capacities.

Zooming out even further, I want to say something about the methodology theorists of AI

moral agency or CMR adopt. It is also the strategy I have adopted here—breaking down moral

agency into different capacities and then taking an inventory of those that AI, or the corporation,

do or do not possess. That strategy, while perhaps useful, is also, I now admit, deeply

misleading. It represents moral agency as a series of discrete capacities, as if one could have

different forms of compromised moral agency depending on which capacities were present or

absent. In point of fact, though, I believe that moral agency cannot be broken down in this way.

It is instead an organic whole, with each of the so-called parts informing and sustaining the

others. As we have already seen, moral discernment in novel contexts requires imagination;

imagination requires caring and recognition; caring and recognition involve attitudes and

feelings; attitudes and feelings have qualia, which help sustain their motivational force. One

element is not significant apart from the others; what matters is the whole package. 26 If this is

right, then it should prompt a reorientation for theorists working in this space. We should

abandon the thought that a capacity-by-capacity approach will determine whether corporations,

or AI, can function as moral agents.

26
My point is not that moral agency is a binary—i.e., that one has it or one does not. I allow that
moral agency can vary by degrees. This is why we can think of children as proto-moral agents;
their moral agency will develop as they do. My point is instead that one may have to have all the
capacities, in at least some measure, to even get on the moral agency spectrum.

Acknowledgments

For helpful comments and suggestions, I am grateful to Caleb Bernacchio, Nico Cornell, Sade
Hormio, Alan Morrison, Rita Mota, Grant Rozeboom, and Bill Wringe, as well as participants at
the June 2022 Oxford R:ETRO Symposium. Special thanks to Nathan Sepinwall for
exceptionally insightful feedback.

References

Agar, J. (2003). The Government machine: A revolutionary history of the computer, MIT Press

Antill, G. (2020, June 28). Robots and reactive attitudes: A Defense of the moral and
interpersonal status of non-conscious creatures. Academia.edu. Retrieved October 3, 2022, from
l_and_Interpersonal_Status_of_Non_conscious_Creatures

Arpaly, N. (2002). Unprincipled virtue: An Inquiry into moral agency. Oxford University Press

Arnold, D. (2006). Corporate moral agency. Midwest Studies in Philosophy, 30(1), 279–291

Bayern, S. (2021). Autonomous organizations. Cambridge University Press

Bhargava, V.R., & Velasquez, M. (2019). Is corporate responsibility relevant to artificial
intelligence responsibility? Georgetown Journal of Law and Public Policy, 17, 829-851

Bloomberg Government. (2018, April 10). Transcript of Mark Zuckerberg’s Senate hearing,
Washington Post. https://www.washingtonpost.com/news/the-switch/wp/2018/04/10/transcript-
of-mark-zuckerbergs-senate-hearing/

Bonnefon, J-F., Shariff, A., & Rahwan, I. (2020). The Moral psychology of AI and the opt-out
problem. In M. Liao (Ed.), The Ethics of artificial intelligence (pp. 109-126). Oxford University
Press

Björnsson, G., & Hess, K. (2017). Corporate crocodile tears?: On the reactive attitudes of
corporations. Philosophy and Phenomenological Research, 94(2), 273-298

Copp, D. (2006). On the agency of certain collective entities: An Argument from “normative
autonomy”. Midwest Studies in Philosophy, 30(1), 194–221

Cox-George, C., & Bewley, S. (2018). I, sex robot: The health implications of the sex robot
industry. BMJ Sexual & Reproductive Health, 44(3), 161-164

Crown Prosecution Service. Hate crime. CPS. Retrieved October 3, 2022, from

Cummings, M.L. (2019, December 23). Lethal autonomous weapons: Meaningful human control
or meaningful human certification? Retrieved October 3, 2022, from
meaningful-human-certification/

Dan-Cohen, M. (1986). Rights, persons, and organizations: A Legal theory for bureaucratic
society. University of California Press

Darwall, S. (1977). Two kinds of respect. Ethics, 88(1), 36-49

Descartes, R. (1911). The Philosophical works of Descartes, Volume 1 (E. Haldane & G.R.T.
Ross, Trans.). Cambridge, UK: Cambridge University Press. (Original work published 1637)

Devlin, K. (2020). The Ethics of the artificial lover. In M. Liao (Ed.), The Ethics of artificial
intelligence (pp. 271-290). Oxford University Press

Diamantis, M.E. (2020). The Extended corporate mind: When corporations use AI to break the
law. North Carolina Law Review, 98, 893-931

Dumas, A. (1894). The Count of Monte Cristo (T.Y. Crowell, Trans.). Princeton: Princeton
University Press. (Original work published 1844)

Etzioni, A., & Etzioni, O. (2017). Incorporating ethics into artificial intelligence. Journal of
Ethics: Online first. http://ai2-website.s3.amazonaws.com/publications/etzioni-ethics-into-ai.pdf

Frankfurt, H. (1971). Freedom of the will and the concept of a person. Journal of Philosophy,
68(1), 5–20

French, P. (1984). Collective and corporate responsibility. Columbia University Press

Gilbert, M. (1997). Group wrongs and guilt feelings. The Journal of Ethics, 1(1), 65-84

Goldman, A. I. (1967). A Causal theory of knowing. Journal of Philosophy, 64(12), 357-372

Hakli, R., & Makela, P. (2019). Moral responsibility of robots and hybrid agents. The Monist,
102(2), 259-275

Henriques, A. (2005). Corporations: Amoral machines or moral persons? Business & Professional
Ethics Journal, 24(3), 91-99

Hess, K. (2014). The Free will of corporations (and other collectives). Philosophical Studies,
168(1), 241–260

Hills, A. (2009). Moral testimony and moral epistemology. Ethics, 120(1), 94–127

Hindriks, F. (2018). Collective agency: Moral and amoral. Dialectica, 72(1), 3-23

Hopkins, R. (2007). What is wrong with moral testimony? Philosophy and
Phenomenological Research, 74(3), 611–634. doi:10.1111/j.1933-1592.2007.00042.x

Hormio, S. (2020). Collective agents as moral actors (draft on file with author)

Hooker, J., & Kim, T.W. (2019, Winter). Truly autonomous machines are ethical. AI Magazine,
40(4), 66-73. https://doi.org/10.1609/aimag.v40i4.2863

Houssein, E.H., Abohashima, Z., Elhoseny, M., & Mohamed, W.M. (2022). Machine learning in
the quantum realm: The state-of-the-art, challenges, and future vision. Expert Systems with
Applications, 194(C), https://doi.org/10.1016/j.eswa.2022.116512

Hutson, M. (2022, April 5). Can computers learn common sense? New Yorker.

Jollimore, T. (2015). “This endless space between the words”: The Limits of love in Spike
Jonze’s Her. Midwest Studies In Philosophy, 39(1), 120-143

Jones, K. (1999). Second-hand moral knowledge. Journal of Philosophy, 96(2), 55–78

Kamm, F.M. (2020). The Use and abuse of the trolley problem: Self-driving cars, medical
treatments, and the distribution of harm. In M. Liao (Ed.), The Ethics of artificial intelligence
(pp. 79-108). Oxford University Press

Korsgaard, C. (1996). The Sources of normativity. Cambridge University Press

Korsgaard, C. (2021). Valuing our humanity. In O. Sensen & R. Dean (Eds.), Respect for
persons, Oxford University Press. DOI: 10.1093/oso/9780198824930.003.0009

Lance, M.N. & Little, M. (2008). From Particularism to defeasibility in ethics. In V. Strahovnik,
M. Potrc & M.N. Lance (Eds.), Challenging moral particularism (pp. 53-74). Routledge

Laukyte, M. (2014). Artificial agents: Some consequences of a few capacities. In J. Seibt et al.
(Eds.), Sociable robots and the future of social relations (pp. 115–122). IOS Press

Liao, M. (2020). A Short introduction to the ethics of artificial intelligence. In M. Liao
(Ed.), The Ethics of artificial intelligence (pp. 1-42). Oxford University Press

List, C. (2021). Group agency and artificial intelligence. Philosophy & Technology, 34, 1213–1242

List, C. & Pettit, P. (2011). Group agency: The Possibility, design and status of corporate agents.
Oxford University Press

Ludwig, K. (2015). Is distributed cognition group level cognition? Journal of Social Ontology,
1(2), 189-224

McKenna, M. (2017). Power, social inequities and the conversational theory of responsibility. In
K. Hutchinson et al. (Eds.), Social dimensions of responsibility (pp. 38-58). Oxford University
Press

Metz, C. (2022, August 5). AI is not sentient. Why do people say it is? New York Times

Mejia, S., & Nikolaidis, D. (2022). Through new eyes: Artificial intelligence, technological
unemployment, and transhumanism in Kazuo Ishiguro’s Klara and the sun. Journal of Business
Ethics, 178(1), 303–306

Nozick, R. (1989). The Examined life: Philosophical meditations. Simon and Schuster

O’Neill, L. (2016, March 24). Of course internet trolls instantly made Microsoft's Twitter robot
racist and sexist. Esquire. https://www.esquire.com/news-politics/news/a43310/microsoft-tay-
4chan/

Penn, J. (2018, November 26). AI thinks like a corporation, and that’s worrying. The Economist

Railton, P. (2020). Ethical learning: Natural and artificial. In M. Liao (Ed.), The Ethics of
artificial intelligence (pp. 45-78). Oxford University Press

Reyes, C.L. (2021a). Autonomous corporate personhood. Washington Law Review, 96(4), 1453-
1510

Reyes, C.L. (2021b). Autonomous business reality. Nevada Law Journal, 21(2), 437-490

Rhees, R. (1965). Some developments in Wittgenstein's view of ethics. Philosophical Review,
74(1), 17-26

Rosen, G. (2014). Culpability and duress: A Case study. Aristotelian Society Supplementary
Volume, 88, 69-90

Salloum, Z. (2020, May 18). Introduction to regret in reinforcement learning. Towards Data Science.

Schmid, H.B. (2014). The Feeling of being a group: Corporate emotions and collective consciousness. In C.
von Scheve & M. Salmela (Eds.), Collective emotions: Perspectives from psychology, philosophy,
and sociology (pp. 3-22). Oxford University Press

Searle, J. (2014, October 9). What your computer can’t know. New York Review of Books.

Sepinwall, A.J. (2015). Corporate piety and impropriety. Harvard Business Law Review, 5(2), 173–
204

Sepinwall, A.J. (2016a). Blame, emotion and the corporation. In E.W. Orts & N.C. Smith (Eds.),
The Moral responsibility of firms (pp. 143-166). Oxford University Press

Sepinwall, A.J. (2016b). Corporate moral responsibility. Philosophy Compass, 11(1), 3-13

Sepinwall, A.J. (2017). Faultless guilt: Toward a relationship-based account of criminal liability.
American Criminal Law Review, 54(2), 521-570

Sepinwall, A.J. (2020). Shared responsibility for corporate wrongdoing. In D.P. Tollefsen & S.
Bazargan-Forward (Eds.), The Routledge handbook of collective responsibility (pp. 401-417).
Routledge

Sepinwall, A.J. (2022). Breaking down bigotry. Constitutional Commentary (forthcoming)

Singer, A.E. (2013). Corporate moral agency and artificial intelligence. International Journal of
Social and Organizational Dynamics in IT, 3(1). https://www.igi-global.com/article/corporate-
moral-agency-artificial-intelligence/76944

Smith, M. (1994). The Moral problem. Basil Blackwell

Solum, L.B. (1992) Legal personhood for artificial intelligences. North Carolina Law Review,
70(4), 1231-1287

Strawson, P.F. (1962). Freedom and resentment. Proceedings of the British Academy, 48, 187-
211

Taffel, S. (2019). Automating creativity – Artificial intelligence and distributed cognition.
Spheres, https://spheres-journal.org/contribution/automating-creativity-artificial-intelligence-
and-distributed-cognition/

Thonhauser, G. (2022). Towards a taxonomy of collective emotions. Emotion Review, 14(1), 31–
42

Tollefsen, D.P. (2015). Groups as agents. Polity Press

Tran, T. (2021). Scientists built an AI to give ethical advice, but it turned out super racist.
Futurism. https://futurism.com/delphi-ai-ethics-racist

Velleman, D.J. (1992). What happens when someone acts? Mind, 101(403), 461-481

Verma, P. (2022, July 16). These robots were trained on AI. They became racist and sexist.
Washington Post. https://www.washingtonpost.com/technology/2022/07/16/racist-robots-ai/

Wallach, W. & Vallor, S. (2020). Moral machines: From Value alignment to embodied virtue. In
M. Liao (Ed.), The Ethics of artificial intelligence (pp. 383-412). Oxford University Press

Wilson, E.E. & Denis, L. (2022). Kant and Hume on morality. Stanford Encyclopedia of
Philosophy. https://plato.stanford.edu/entries/kant-hume-morality/

Zhang, A. (2021). How to design a reinforcement learning reward function for a lunar lander.
Towards Data Science, https://towardsdatascience.com/how-to-design-reinforcement-learning-
reward-function-for-a-lunar-lander-562a24c393f6