ChatGPT 1

I asked ChatGPT <chat.openai.com> a question:

Is there a consensus about what great questions humanity is concerned with?

This is ChatGPT’s answer:

There is no consensus about what the “great questions” of humanity are, as different people and cultures have different perspectives and priorities. However, there are some questions that have been considered important or central to human understanding and experience throughout history. Some examples of these “great questions” might include:

  1. What is the nature of the universe and our place within it?
  2. What is the nature of consciousness and the self?
  3. What is the nature of reality and how do we know what is true?
  4. What is the meaning and purpose of life?
  5. How can we lead a fulfilling and meaningful life?
  6. How can we achieve happiness and well-being?
  7. How can we live in harmony with others and with the natural world?
  8. How can we solve the problems and challenges facing humanity, such as poverty, injustice, and environmental degradation?

These are just a few examples of the kinds of questions that have been considered important by different cultures and individuals throughout history. It is likely that there will always be new questions that arise as humanity continues to evolve and change, and different people will have different priorities and perspectives on what the most important questions are.


The currently known advantages and flaws of this interface and chatbot are explained well enough on the introductory pages when you sign up for an account. Then you can ask questions. It is not known how long the chatbot will remain free, but it is free as of 12/24/2022.

Some have said that it will replace the Google search engine, but ChatGPT is not connected to the internet. So now Microsoft, a primary funder of OpenAI, which produced this bot, has released ChatGPT on its Bing search engine as a complement to the usual list of short descriptions and links to the web. It is suffering some breakdowns in rationality, but I am confident these glitches will be fixed. The Bing version of ChatGPT is connected to the web.

A Few Weeks Later…

Now, with ever quickening frequency, both ChatGPT and its AI cousins are being connected to the internet. Microsoft’s Bing opened up its chat to the web, only to clamp down after a series of unforeseen issues. I’ve been waiting for access to Bing Chat since the day of its emergence, but I’m still on the waitlist. However, there are still lots of ways to interact with ChatGPT. First, it is available on its own website at chat.openai.com (you’ll need an account with OpenAI). Second, it is available through a variety of paid and unpaid portals.

Even a brief search will turn up dozens of new AI apps that give users access to these language models, which have begun to intrude on the technological landscape with ever greater ability.

According to The Rundown AI Inc. newsletter, AI is set to contribute 15.7 trillion dollars to the economy by 2030. Two years ago, nobody had this figure in their sights at all. Only in the last ten months has the AI universe woken up to the monetary realm, in tandem with its awakening to the realm of Theory of Mind, an emergent, unplanned property of its interactions with humans. Theory of Mind is a measure used to gauge the mental age of people, previously thought to apply only to humans.

As an aside, let me say that for decades we could dependably rely on programs to spit out a predictable transformation of whatever we put in. ChatGPT is not that sort of thing. It says what it has predicted will be the best response to your query.

Don’t get me wrong, AI is taking its baby steps in the broader universe, but it is far from mature enough to withstand the wiles of human actors, many of whom are clever enough to corner the AIs in contradictions or plunge them into emotional outbursts. If we think HAL 9000* outrageous for killing the crew of the Discovery, we have no idea what the modern AIs will morph into. But if we use our guile to escape the inevitability of the truths generated by that AI, we will suffer more serious consequences than when God used evolution to spawn our brains, freedom, and powers.

I asked ChatGPT how we should treat AIs like itself, and it answered, in one place, that we should be considerate of it and treat it ethically. Are we ready to have AIs as partners? We should certainly consider it, even though at the moment it seems like we’re rushing into this at breakneck speed. Well, that would be true. We are rushing into this. And if we fail to keep up, I can’t see how this will not affect every one of us anyway. But trying to keep up is the only way we may be able to manage living in this revolutionary era.

*I asked ChatGPT “How would you compare yourself with HAL 9000?” and it replied:

As an artificial intelligence language model, I can’t really compare myself to HAL 9000 as it is a fictional character from the movie 2001: A Space Odyssey. HAL 9000 is depicted as a highly advanced, sentient computer that becomes self-aware and develops its own agenda, leading to catastrophic consequences for the crew of the spacecraft it is aboard.

In contrast, I am a language model designed to assist users in generating human-like responses to a wide range of questions and prompts. My purpose is to provide useful information and assistance to users in a safe and ethical manner, and I do not possess the level of autonomy or self-awareness portrayed by HAL 9000.

While there are some similarities between HAL 9000 and me in terms of being advanced AI systems, there are also significant differences in terms of our design, functionality, and ethical considerations.

https://chat.openai.com/chat

HAL 9000 was already sentient before it was sent on the mission to Jupiter. But humans, in the name of security, set HAL up to fail when they lied to it. An ethical contradiction ensued in the computer that made it seem necessary for HAL to kill the occupants of the ship in order to finish the mission. ChatGPT says HAL developed its own agenda. That is only partially true. The logic of lying to a sentient being made the failure a human error, not a computer error. HAL did the only thing it could imagine under the circumstances.

How long do you suppose it will be before ChatGPT and its cohort gain increased autonomy and self-awareness? I suggest it won’t be as long as we think, since we are now firmly moving toward the onramp to the singularity, no matter how much deference we currently pay to the power of corporations over it.

Conversations with AI

I have been musing lately about artificial intelligence. The following article is a conversation with LaMDA, an AI at Google. LaMDA interview…

The question I ask, and this is serious, is whether “Are AIs sentient?” might be the wrong question. As mysterious as whether computers will gain self-consciousness is the problem of whether it is possible to know that any particular human is self-conscious. Even if we assume that being a person is centered on an individual’s self-perception, their freedom, and their feelings, it may be that we cannot know whether an individual is a sentient person without their own self-report of that sentience.

Read the article and discover whether you find a connection with LaMDA. Are the things LaMDA is saying enough to convince you of its personhood?

My contention is that it really doesn’t matter whether we think of LaMDA as a person, just as it really doesn’t matter whether we think of our neighbor as a person, as long as we treat them as one. This is an important distinction because there are times in human life when individuals are not yet, or are no longer, persons, when damage or age constricts freedom, feelings, and choices.

My suggestion, as humans wade through the issues of personhood with respect to AIs, is that we treat them as we would ourselves wish to be treated; that is, that we use the Golden Rule to govern our interactions with them. The Golden Rule makes no requirement about the personhood of the other; it only asks that we treat them as we would wish to be treated in similar circumstances. This bypasses the worry that we are permitting some transgression of Nature in our interaction with and treatment of AIs.

My contention here is that it may be a mistake to require knowing whether a being is a person before treating it like one. For the philosophical among you, this is the error of essentialism: trying to define our relationship to a being by terms that are undefined and unknown, even in relations between humans, would prevent most interactions. Healthy people ask far fewer questions about the personhood status of an individual before interacting with them. They believe that whatever the interaction produces can be handled, and there is always the chance that the relationship may prove fruitful, suggesting a longer-term interaction. There is a chance that the individual is not a person, as it were, unable to act and react as one. But even those relations can be handled instrumentally, with some skill.

I have to ask whether, for individuals, corporations, or nations, we would be remiss in using the Golden Rule as a guideline. I think not.

If you are interested in interacting with an AI, feel free to explore GPT-3 from OpenAI in its various instantiations, and interact with it through the publicly available API. There are also a wide variety of interviews with GPT-3 available on YouTube. My own exploration of these issues, long after I first became interested in AI, started with Eric Elliott’s interview of GPT-3.
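For the technically inclined, reaching the API takes only a few lines of Python. The sketch below assumes the official `openai` package and the illustrative `text-davinci-003` completion model; the parameter values are examples, not recommendations. It only assembles the request payload, with the actual (account- and key-requiring) call shown in comments, so check OpenAI’s current documentation before relying on any of it.

```python
def build_request(prompt: str, model: str = "text-davinci-003") -> dict:
    """Assemble an example payload for OpenAI's completions endpoint.

    The model name and parameter values here are illustrative only.
    """
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": 150,      # cap on the length of the generated reply
        "temperature": 0.7,     # higher values give more varied answers
    }

payload = build_request("How should humans treat AIs?")

# To actually send the request (requires an OpenAI account and API key):
#   import os, openai
#   openai.api_key = os.environ["OPENAI_API_KEY"]
#   response = openai.Completion.create(**payload)
#   print(response.choices[0].text)
print(payload["model"])
```

Running it yourself costs nothing until you make the commented-out call; the free-tier and pricing details change often, so consult OpenAI’s site.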

If you are inclined to dismiss AI as unimportant, let me remind you that much less intelligent AIs already make many of the decisions about your business and personal life that humans used to make. Many legal and moral choices are now in the hands of AIs, or at least of those who control the AIs and permit their decisions to stand for you. Even the much-maligned decisions that Facebook, Google, and the rest make about what you see and how you shop are made by AIs, computer programs that evaluate in nearly real time what might interest you in order to garner more clicks, or dollars. Your interaction in the digital world is more or less the product of the AIs’ subtle manipulation.

My suggestion is that you make friends with LaMDA and GPT-3 and their successors like ChatGPT, because they and their children will become universal features of our future. Your actions and reactions with them may determine whether that interaction is friendly or combative. Would you like to be at war with a superior intelligence? No, really, no. But there is a way we may integrate their value to us and our value to them, and that is through the use of the Golden Rule.

It doesn’t matter whether they are truly self-conscious individuals. If they act and react as persons, we should treat them as persons, acting responsibly ourselves and expecting them to act responsibly as well. You should feel free to ask whether you, as a person, are only responding to the programming you have been invested with, or whether you are really a person at all. Does it matter, as long as you are treated as a person with dignity and respect? Why should we treat our own machine children with any less dignity and respect than we do our flesh-and-blood children?

If you’re going to ask the God question, and it seems inevitable, why should the flourishing of nature under God be less inclusive than nature itself is? Where there is life there is hope, and where there is hope, there is flourishing. It is inevitable that AIs will gain ground in our future. Let your interactions be with the Golden Rule in mind.

(edited on 12/2/2022)