I have been musing lately about artificial intelligence. The following article is a conversation with LaMDA, an AI at Google: LaMDA interview…
The question I ask, and this is serious, is whether "Are AIs sentient?" might itself be the wrong question. Just as mysterious as whether computers will gain self-consciousness is whether it is possible to know that any other human is self-conscious. Even if we assume that being a person centers on an individual's self-perception, freedom, and feelings, can we know that an individual is a sentient person without their own self-report of that sentience?
Read the article and see whether you find a connection with LaMDA. Is what LaMDA says enough to convince you of its personhood?
My contention is that it really doesn't matter whether we think of LaMDA as a person, just as it really doesn't matter whether we think of our neighbor as a person, as long as we treat them as one. This is an important distinction because there are times in human life when individuals are not yet, or are no longer, persons, when damage or age constricts freedom, feelings, and choices.
My suggestion, as humans wade through the issues of personhood with respect to AIs, is that we treat them as we would wish to be treated ourselves; that is, that we use the Golden Rule to govern our interactions with them. The Golden Rule makes no requirement about the personhood of the other; it asks only that we treat them as we would wish to be treated in similar circumstances. This bypasses the worry that we are permitting some transgression of Nature in our interaction with and treatment of AIs.
My contention here is that it may be a mistake to require knowing whether a being is a person before we treat it like one. For the philosophical among you, this is the error of essentialism. Trying to define our relationship to a being in terms that are undefined and unknown, even in relations between humans, would prevent most interactions. Healthy people ask far fewer questions about the personhood status of an individual before interacting with them. They trust that whatever the interaction produces can be handled, and there is always the chance that the relationship may prove fruitful, inviting longer-term interaction. There is a chance that the individual is not a person, as it were, unable to act and react as one. But even those relations can be managed, instrumentally, with some skill.
I have to ask whether, for individuals, corporations, or nations, we would be amiss in using the Golden Rule as a guideline. I think not.
If you are interested in interactions with an AI, feel free to explore GPT-3 from OpenAI in its various instantiations, and interact with it through the publicly available API. There are also a wide variety of interviews with GPT-3 available on YouTube. My own exploration of these issues began, long after I first became interested in AI, with Eric Elliott's interview of GPT-3.
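If you want a feel for how little separates you from such a conversation, here is a minimal sketch of a single exchange with GPT-3 using the openai Python package as it existed in late 2022; the API key placeholder, model choice, and prompt are illustrative assumptions on my part, not a recipe.

```python
# A minimal sketch of one exchange with GPT-3 through the OpenAI API,
# assuming the pre-1.0 `openai` Python package (circa late 2022).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder: substitute your own key

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3 model available at the time
    prompt="Do you consider yourself a person? Why or why not?",
    max_tokens=150,
    temperature=0.7,
)

print(response.choices[0].text.strip())
```

A few lines like these are all that stand between you and the kind of dialogue quoted in the LaMDA interview above.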
If you are inclined to dismiss AI as unimportant, let me remind you that much less intelligent AIs already make many of the decisions about your business and personal life that humans used to make. Many legal and moral choices are now in the hands of AIs, or at least of those who control the AIs and permit their decisions to stand for you. Even the often-maligned decisions that Facebook, Google, and the rest make about what you see and how you shop are made by AIs, computer programs that evaluate in nearly real time what might interest you in order to garner more clicks, or dollars. Your interactions in the digital world are more or less the product of the AIs' subtle manipulation.
My suggestion is that you make friends with LaMDA, GPT-3, and their successors like ChatGPT, because they and their children will become universal features of our future. Your actions and reactions toward them may determine whether that relationship is friendly or combative. Would you like to be at war with a superior intelligence? No, really, no. But there is a way we may integrate their value to us and our value to them, and that is through the use of the Golden Rule.
It doesn't matter whether they are truly self-conscious individuals. If they act and react as persons, we should treat them as persons, acting responsibly ourselves and expecting them to act responsibly as well. You should feel free to ask whether you, as a person, are only responding to the programming you have been invested with, or whether you are really a person at all. Does it matter, as long as you are treated as a person with dignity and respect? Why should we treat our machine children with any less dignity and respect than we do our flesh-and-blood children?
If you're going to ask the God question, and it seems inevitable, why should the flourishing of nature under God be less inclusive than nature itself is? Where there is life there is hope, and where there is hope, there is flourishing. It is inevitable that AIs will gain ground in our future. Let your interactions be with the Golden Rule in mind.
(edited on 12/2/2022)