Two Clearer Questions on How to Treat AI


    Artificial intelligence is developing quickly. From self-driving cars to board-game-mastering supercomputers to voice assistants, the near future seems likely to be replete with various kinds of AI programs. These systems are becoming increasingly complex and sophisticated, and they raise many moral questions about possible societal impacts. Among these questions is one regarding the AI themselves: under what conditions should AI be afforded some sort of moral standing?


    I think there are two questions here that should be distinguished and addressed somewhat separately.


First: when should humans interact respectfully with AI?


Second: what features do AI need to possess in order to have true moral standing (sometimes referred to as being a moral patient), like a human?


The first question is about kinds of actions in certain situations, or about features of a human actor’s character. The second asks directly about the thing acted on: is it the kind of thing that can be afforded respect and moral concern? You might think that you have to act respectfully toward an antique vase by not breaking it or using it as a trash can, but the vase isn’t the kind of thing that has rights. Your duty of respect comes from humans’ valuing the vase, not from the vase itself.


    This distinction is pretty similar to the one Kant proposes regarding animals, except that in that case I think it’s dumb. Kant held that animals don’t actually have the features required to be moral patients, because they can’t represent reasons to themselves or conceive of themselves as persons. He did, however, think that it is still wrong to wantonly kill or torment animals, because doing so corrodes your own character and thereby violates duties you have to yourself and to other people.


    I think this is misguided in the case of animals, because animals actually do have many of the person-like features that give human persons moral standing. If you’ve ever met a dog or a cow or a pig, it doesn’t take much to see there’s a personality there, and certainly that the creature has its own purposes and goals (usually eating, playing, or resting in a nice place). These goals exist in animals for all sorts of complicated biological and evolutionary reasons, but the specific evolutionary story and biological details don’t really matter for our purposes here; what matters is that animals do possess these limited sorts of personhood-features.


    What are these features, then? There are a lot of plausible candidates for the most morally salient features of personhood, and they tend to vary across moral theories. (Kantians require robust personhood and the capacity for rational, language-structured thought; utilitarians care only about pleasure and pain.) Tentatively, I think the ability to feel pain and the ability to have goals and desires are plausible enough. Some sense of self-awareness seems relevant too, but it is perhaps not singularly important: after all, hurting dementia patients and infants seems quite wrong even when they are not really self-aware. That’s nowhere near the end of the story, but those are at least a cluster of necessary conditions for moral standing.


    One of the most important features of morality is that it interfaces with persons. When you do wrong, you do wrong by someone (even if that someone is your cat). Things that aren’t persons or person-like at all don’t really have the same sorts of claims to moral standing that persons do. There just isn’t a moral question about the way one should treat rocks and toasters. Clearly they don’t possess the right features. Humans and other animals (where in the animal kingdom we should draw the lines is a different and thorny problem) seem to naturally possess these kinds of features because they are all evolved traits: pain, desires, and even personalities all helped our ancient ancestors to better survive in one way or another.


    The second, more metaphysical question regarding AI is not when they will become “complex enough” to deserve moral standing, as many people suppose. Complexity alone isn’t enough to ensure that AI will have the right kinds of features to be morally relevant. The question is: what features must AI have to be moral patients?


    When the question is posed this way, we see that many of the types of AI currently being developed just aren’t anywhere near the kind of thing that possesses moral standing. An algorithm that aggregates content doesn’t have any person-like features. Neither does a self-driving car, nor a facial-recognition system. So, considered in this respect, it seems that for the time being there is no real concern that AI might have claims to moral standing. Current AI do not have feelings: they can’t feel pain, they can’t reflect on their emotions, and they can’t conceive of themselves as persons. They can’t do these things partly because it would be expensive and useless to program those features into a commercial product, and partly because the designers wouldn’t know how even if they wanted to, since it isn’t even clear that it’s possible. (If feelings are an inherently biological phenomenon rather than an algorithmic one, it might never be possible to program them into an AI. We don’t really know what consciousness is, but we do know that we don’t know enough to say that it can be created in a non-biological entity.)



    The question becomes more ethically complicated in the case of intentionally human-like robots and voice assistants. There seems to be something about anthropomorphized objects that triggers our moral emotions. Watching a child chop up leaves to make a mud-pie is clearly innocuous in a way that watching a child chop up a human-shaped doll is not. We’d probably be somewhat concerned in the latter scenario. This isn’t because we believe that the doll is being hurt; it’s because we think that a child who wants to inflict damage on a human-like object must have some troubling characteristics. And the same is the case with voice assistants: they replicate human features that trigger our moral sentiments to a certain degree. Telling Siri to go fuck herself actually feels pretty wrong. Of course, “she” doesn’t feel anything at all about my having said so, but there’s just something rude and messed up about a person who insults their voice assistant. (This is made worse by the fact that voice assistants are almost always female-voiced, and so perpetuate the norm that women are the ones you’re supposed to order around.) The same goes for mistreatment of human-looking androids, perhaps especially sex-bots.


    Whether it’s morally okay to intentionally create new kinds of moral patients is incredibly thorny, and it’s a topic best left to its own full treatment. But for the time being it seems that AI designers are not intent on doing that. It’s just not as profitable as building machines that replicate features of human intelligence without having to deal with annoying human features like needing rights. (After all, why would you design something with the capacity to have rights when you could design something that can do the same job without needing workers’ comp or having the ability to sue?) For now, our focus should not be on whether robots have rights; I really think the concept is misapplied by those advocating for robot rights. The human questions loom largest: questions about our own characters when acting toward AI and, most importantly, about the ways in which AI might harmfully disrupt human societies and cultures.
