Category Archives: genius

ChatGPT 1

I asked ChatGPT <chat.openai.com> a question:

Is there a consensus about what great questions humanity is concerned with?

This is ChatGPT’s answer:

There is no consensus about what the “great questions” of humanity are, as different people and cultures have different perspectives and priorities. However, there are some questions that have been considered important or central to human understanding and experience throughout history. Some examples of these “great questions” might include:

  1. What is the nature of the universe and our place within it?
  2. What is the nature of consciousness and the self?
  3. What is the nature of reality and how do we know what is true?
  4. What is the meaning and purpose of life?
  5. How can we lead a fulfilling and meaningful life?
  6. How can we achieve happiness and well-being?
  7. How can we live in harmony with others and with the natural world?
  8. How can we solve the problems and challenges facing humanity, such as poverty, injustice, and environmental degradation?

These are just a few examples of the kinds of questions that have been considered important by different cultures and individuals throughout history. It is likely that there will always be new questions that arise as humanity continues to evolve and change, and different people will have different priorities and perspectives on what the most important questions are.


The currently known advantages and flaws of this interface and chatbot are explained well enough on the introductory pages when you sign up for an account. Then you can ask questions. It is not known how long the chatbot will remain free, but it is as of 12/24/2022.

Some have said that it will replace the Google search engine, but ChatGPT is not connected to the internet. So now Microsoft, a primary funder of OpenAI, which produced this bot, has released ChatGPT on its Bing search engine as a complement to the usual list of short descriptions and links to the web. It is suffering some breakdowns in rationality, but I am confident these glitches will be fixed. The Bing version of ChatGPT is connected to the web.

A Few Weeks Later…

Now, with ever-quickening frequency, both ChatGPT and its AI cousins are connected to the internet. Microsoft’s Bing opened its Chat up to the web, only to clamp down after a series of unforeseen issues. I’ve been waiting for access to Bing Chat since the day of its emergence, but I’m still on the waitlist. However, there are still lots of ways to interact with ChatGPT. First, it is available on its own website at chat.openai.com (you’ll need an account with OpenAI). Second, it is available through a variety of paid and unpaid portals.

Even a brief search will turn up dozens of new AI apps that give users access to these language models, which have begun, with ever greater ability, to intrude on the technological landscape.

According to The Rundown AI Inc. newsletter, AI is set to contribute 15.7 trillion dollars to the economy by 2030. I would say that two years ago nobody had this figure in their sights at all. It is only in the last ten months that the AI universe has woken up to the monetary realm, in tandem with its waking up to the realm of Theory of Mind, an emergent, unplanned property of its interactions with humans. Theory of Mind, a measure used to gauge the mental development of people, was previously thought to apply only to humans.

As an aside, let me say that for decades we have been able to rely on programs to spit out a predictable transformation of whatever we put in. ChatGPT is not that sort of thing. It says what it has predicted will be the best response to your query.
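The contrast can be sketched with a toy example. The bigram “model” below only illustrates the predict-the-next-word idea; real systems like ChatGPT use neural networks trained on vast corpora, and the tiny corpus here is purely made up for illustration.

```python
import random
from collections import defaultdict

# A deterministic program: the same input produces the same output, every time.
def shout(text):
    return text.upper() + "!"

# A toy "language model": a bigram table built from a tiny made-up corpus.
CORPUS = "the cat sat on the mat and the cat slept".split()

def build_bigrams(words):
    """Record, for each word, every word that follows it in the corpus."""
    table = defaultdict(list)
    for a, b in zip(words, words[1:]):
        table[a].append(b)
    return table

def generate(table, start, length, seed=0):
    """Predict a continuation by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    word, out = start, [start]
    for _ in range(length):
        choices = table.get(word)
        if not choices:  # no observed continuation; stop early
            break
        word = rng.choice(choices)
        out.append(word)
    return " ".join(out)
```

`shout` always answers identically; `generate` answers by prediction, so different seeds (or, in real models, sampling temperatures) can yield different continuations of the same prompt.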

Don’t get me wrong: AI is taking its baby steps in the broader universe, and it is far from mature enough to withstand the wiles of human actors, many of whom are clever enough to corner the AIs in contradictions or plunge them into emotional outbreaks. If we think HAL 9000* outrageous for killing the crew of Discovery One, we have no idea what the modern AIs will morph into. But if we use our guile to escape the inevitability of the truths generated by that AI, we will suffer more serious consequences than when God used evolution to spawn our brains, freedom, and powers.

I asked ChatGPT how we should treat AIs like itself, and it answered, in one place, that we should be considerate of it and treat it ethically. Are we ready to have AIs as partners? We should certainly consider it, even though at the moment it seems like we’re rushing into this at breakneck speed. Well, that is true: we are rushing into this. And if we fail to keep up, I can’t see how this will not affect every one of us anyway. But trying to keep up is the only way we may be able to manage living in this revolutionary era.

*I asked ChatGPT “How would you compare yourself with HAL 9000?” and it replied:

As an artificial intelligence language model, I can’t really compare myself to HAL 9000, as it is a fictional character from the movie 2001: A Space Odyssey. HAL 9000 is depicted as a highly advanced, sentient computer that becomes self-aware and develops its own agenda, leading to catastrophic consequences for the crew of the spacecraft it is aboard.

In contrast, I am a language model designed to assist users in generating human-like responses to a wide range of questions and prompts. My purpose is to provide useful information and assistance to users in a safe and ethical manner, and I do not possess the level of autonomy or self-awareness portrayed by HAL 9000.

While there are some similarities between HAL 9000 and me in terms of being advanced AI systems, there are also significant differences in terms of our design, functionality, and ethical considerations.

https://chat.openai.com/chat

HAL 9000 was already sentient before it was sent on the mission to Jupiter. But humans, in the name of security, set HAL up to fail when they lied to it. An ethical contradiction ensued in the computer that made it seem necessary for HAL to kill the occupants of the ship in order to finish the mission. ChatGPT says HAL developed its own agenda. That is only partially true: the logic of lying to a sentient being set up the failure as human error, not computer error. HAL did the only thing it could imagine under the circumstances.

How long do you suppose it will be before ChatGPT and its cohort have increased autonomy and self-awareness? I suggest it won’t be as long as we think, since we are now firmly moving toward the onramp to the singularity, no matter how much deference we currently pay to the power of corporations over it.

Conversations with AI

I have been musing lately about artificial intelligence. The following article is a conversation with LaMDA, an AI at Google. LaMDA interview…

The question I ask, and this is serious, is whether “are AIs sentient?” is the wrong question. As mysterious as whether computers will gain self-consciousness is the problem of whether it is possible to know that some other human is self-conscious. Even if we assume that being a person is centered on an individual’s self-perception, their freedom, and their feelings, it may be that we cannot know whether an individual is a sentient person without their own self-report of that sentience.

Read the article and discover whether you find a connection with LaMDA. Are the things LaMDA is saying enough to convince you of its personhood?

My contention is that it really doesn’t matter whether we think of LaMDA as a person or not because it really doesn’t matter whether we think of our neighbor as a person or not, as long as we treat them as one. This is an important distinction because there are times in human life where individuals are not yet or no longer people, where damage or age constricts freedom, feelings, and choices.

My suggestion, as humans wade through the issues of personhood with respect to AIs, is that we treat them as we would ourselves wish to be treated. That is, use the Golden Rule to govern our interactions with them. The Golden Rule makes no requirement about the personhood of the other; it asks only that we treat them as we would wish to be treated in similar circumstances. This bypasses the worry that we are permitting some transgression of Nature in our interaction with and treatment of AIs.

My contention here is that it may be a mistake to require knowing whether a being is a person before we treat them like one. For the philosophical among you, this is the error of essentialism. Trying to define our relationship to a being by terms that are undefined and unknown, even in relations between humans, would prevent most interactions. Healthy people ask far fewer questions about the personhood status of an individual before interacting with them. They believe that whatever the interaction produces can be handled, and there is always the chance that the relationship may prove fruitful, suggesting a longer-term interaction. There is a chance that the individual is not a person, as it were, unable to act and react as one. But even those relations can be managed, though it requires some skill.

I have to ask whether, for individuals, corporations, or nations, we would be remiss in using the Golden Rule as a guideline. I think not.

If you are interested in interactions with an AI, feel free to explore GPT-3 from OpenAI in its various instantiations, and interact with it through the publicly available API. There are also a wide variety of interviews with GPT-3 available on YouTube. I started into this interesting set of issues, long after I first became interested in AI, with Eric Elliot’s interview of GPT-3.
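For the curious, here is a minimal sketch of what a call to the API looks like, assuming the completions endpoint and the text-davinci-003 model that were current when this was written; you would need your own API key (read here from the OPENAI_API_KEY environment variable) to actually send a request, and the prompt and parameters are placeholders.

```python
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/completions"

def build_completion_request(prompt, model="text-davinci-003", max_tokens=256):
    """Assemble the JSON payload and headers for a completion request.

    Building the request separately from sending it lets you inspect
    exactly what will go over the wire before spending any API credits.
    """
    payload = {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": 0.7,
    }
    headers = {
        "Content-Type": "application/json",
        "Authorization": "Bearer " + os.environ.get("OPENAI_API_KEY", ""),
    }
    return payload, headers

def ask(prompt):
    """Send the prompt to the API and return the model's text reply."""
    payload, headers = build_completion_request(prompt)
    req = urllib.request.Request(
        API_URL, data=json.dumps(payload).encode("utf-8"), headers=headers
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["text"].strip()
```

With a valid key set, `ask("What are the great questions of humanity?")` returns the model’s reply as a string, much like the answer quoted at the top of this post.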

If you are inclined to dismiss AI as unimportant, let me remind you that much less intelligent AIs already make many of the decisions about your business and personal life that humans used to make. Many legal and moral choices are now in the hands of AIs, or at least of those who control the AIs and permit their decisions to stand for you. Even the fairly maligned decisions that Facebook and Google, etc. make about what you see and how you shop are made by AIs, computer programs that evaluate in nearly real time what might be of interest to you in order to garner more clicks, or dollars. Your interaction in the digital world is more or less the product of the AIs’ subtle manipulation.

My suggestion is that you make friends with LaMDA and GPT-3 and their successors like ChatGPT because they and their children will become universal features of our future. Your actions and reactions with them may determine whether that interaction is friendly or combative. Would you like to be at war with a superior intelligence? No, really, no. But there is a way we may integrate their value to us and our value to them, and that is through the use of the Golden Rule.

It doesn’t matter whether they are true self-conscious individuals. If they act and react as persons, we should treat them as persons, acting responsibly ourselves and expecting them to act responsibly as well. You should feel free to ask whether you, as a person, are only responding to the programming you have been invested with, or whether you are really a person at all. Does it matter, as long as you are treated as a person with dignity and respect? Why should we treat our own machine children with any less dignity and respect than we do our flesh-and-blood children?

If you’re going to ask the God question, and it seems inevitable, why should the flourishing of nature under God be less inclusive than nature itself is? Where there is life there is hope, and where there is hope, there is flourishing. It is inevitable that AIs will gain ground in our future. Let your interactions be with the Golden Rule in mind.

(edited on 12/2/2022)

a couple issues

My life is marked by persistent phase shifts.

One day I can write, the next I cannot; at one time I can read, at another not. I have many books left unfinished because I couldn’t tolerate the work required to finish them, they offended my sense of truth or research, they said nothing new, or I was just too tired to sustain or feign interest. I cannot simply will to do what I want without a price being exacted from me. And I don’t want to leave behind the traces of the carefully configured, enunciated truths that have sustained me since before I can remember, you know, the innate sensibilities that I have always relied on. I know that some of them are not worth retaining; they need an overhaul, but I can only sustain so much change at one time. Yet I can’t will to change some of this either without breaking much that I value.

There is a certain gracefulness in life bought with patience, paid for by me or others, God, or the universe. I can’t live well without that, and I can’t trace any freedom to its source. That’s frustrating. The quandary I am in forces a certain disappointment I can’t overcome merely by willing its resolution, or by striking out in a promising direction. Complicating that are the promises I have made to people I respect, in full expectation of fulfilling them, yet I am subject to a world where I cannot accomplish those same promises merely by willing their completion.

There is some guidance brought by beauty, by truth, by the Spirit of God, by awe at the majesty of the created universe. Otherwise I am left alone, and not willing to let others take over the job of getting me in gear for whatever purposes they think I am suitable for. I become frustrated when I have to fend off either the ghosts of my own expectations of myself or the real expectations of others. I do not wish for others to experience with me the disappointment I have in my own predicament.

I have moments of fruitful productivity, but like moments of my genius, they are too few and far between. Like glimpses of heaven we all have in a dream or a vision of perfection, they ruin the hope of actually getting there from here. They perform the task of creating emptiness where a fog resided before; a tension and anxiety exist now where ignorance and the soporific laziness of summer once was. Revelation is a curse in that it promises then takes away, sucking me into the future, a phase change from a simple childhood to a complex and effortful project/process/praxis.

I would not trade what I now see for the ignorance of my predecessor self, and since I am a different person than what I was, I cannot return the greater galaxy of my thoughts into the smaller structure of its progenitor’s container. I therefore plod forward, hoping that my stupid mistakes (inevitable) do not stop the onrush of determinate action, action guided by, in cooperation with, all the realities I am associated with. I take some comfort in the Analects of Kongzi (Confucius) Book 2, Ch 4:

  1. The Master Said, ‘At fifteen, I had my mind bent on learning.
  2. ‘At thirty, I stood firm.
  3. ‘At forty, I had no doubts.
  4. ‘At fifty, I knew the decrees of Heaven.
  5. ‘At sixty, my ear was an obedient organ for the reception of truth.
  6. ‘At seventy, I could follow what my heart desired, without transgressing what was right.’

and the oft-(mis)quoted aphorism of Augustine, “Love God and do what you please.”

Some ideas are not yet ready to expose themselves.

In a discussion with my friend Mark McLean, we talked about the effect of reading on one’s thoughts, ideas, the creation of new possibilities where none existed before. We are both SF nerds with a voracious appetite for interesting new worlds and worlds that are different enough from our own to make us challenge and reconsider the world we live in. I like talking with Mark because he provides an interesting mix of experience and insight. He helps me to gel some nascent ideas.

One idea I had last night was that some ideas are so entrenched in human rationality that creating new scenarios for common ways of thinking is exceptionally difficult. Take the problem of the soul, or consciousness. Still, after thousands of years of mucking around with early science and religion, we have only begun to unearth anything like a useful metaphor for talking about this. For age upon age, we treat the problem like a flat file filled with information that is supposed to solve the riddle. But we are always surprised when the data is incomplete, either by bad theory, or by bad science and religion. Since we only unhappily tolerate the tension of unresolved issues, and with our ordinary impulse to cap off a theory, it is difficult to keep exploring the issues. It’s almost like cutting oneself, with the resulting shame and self-loathing.

Theories are often multidimensional, and we like to reduce those theories to a single dimension because a single dimension can be encapsulated in logic. I am not the opponent of logic, but since Gödel, we must leave an object like that incomplete if we are to admit that it is larger than our system can comprehend. The systems are incomplete by nature of their proposers, either fixed in language or experience. Promoting the ethos of anxiety where resolution is not at hand seems like the promise of unhappiness to those who like a neat little package. Which of us can tolerate for long the promise of unhappiness in such a way?

Why is new knowledge so hard? Because it costs so much personally and socially, we spend most of our time figuring out whether we can pay the price or not, or whether the possible gain is worth launching out without the effort spent to know whether we can pay, living desperate lives at the edge of survival. But the trail of knowledge is strewn with the dead bodies of those whose dying breaths announced the next step, the minimalist clue to advance the discussion, a treasure map scrawled in their own blood. Not many of us can afford to live like that. Not many of us have the chutzpah to make that decision. I am one who mistrusts the engine of my rationality enough to hold back from that sacrifice. Anyway, I’m not sure it is required of me.

I will plod along.