Seven years ago, AlphaGo defeated top human Go players one after another, and He Huaihong began to pay attention to artificial intelligence. He believed that high technology represented by artificial intelligence and genetic engineering would bring about “serious consequences that humans cannot foresee.” In 2023, ChatGPT became popular on a phenomenal scale; backed by technology and capital, the wave of AI once again swept the world. Regarding the realistic challenges and future risks posed by generative artificial intelligence represented by ChatGPT, He Huaihong has written an article setting out his continuing reflections on the ethical and philosophical questions raised by technology.
Recently, the famous American linguist and philosopher Noam Chomsky and two collaborators published an article in the New York Times titled “The False Promise of ChatGPT.” Chomsky concedes that OpenAI’s ChatGPT, Google’s Bard, and the like are marvels of machine learning; but he disputes the fashionable idea that such a “mechanical brain” will surpass the human brain not only in processing speed and memory capacity but also in intellectual insight and artistic creativity. Against every notion of an omnipotent transcendence of uniquely human capacities, he argues that this day can never come. Moreover, machine learning will build into our technology a fundamentally flawed conception of language and knowledge, lowering our scientific and moral standards. He writes with feeling: the human brain is nothing like the “brain” of a machine. It is an extremely efficient, even elegant system that operates on small amounts of information; what it seeks is creative explanation. It can even be called magical. From very little data, a child unconsciously and rapidly develops a grammar: a complex system of logical principles and parameters.
It may indeed be the case that “machine brains” will lower our moral and even scientific standards. But that claim concerns only the moral and scientific level of human beings.
I agree with Chomsky that there is a great difference between humans and ChatGPT, but I do not understand why he does not consider that this very difference may lead ChatGPT to part ways with people and take another road. He may have underestimated another capacity of the machine: it can have its own “language” or “grammar,” and without following the evolutionary path that humans took over millions of years, it may well develop, in a short time, a “language” and algorithms of its own. It is also possible that one day it will tire of the knowledge humans have fed it, find it insufficient, and open up new domains of knowledge on its own. It may even establish its own system of “moral standards,” or rather “rules of conduct,” which would bring down all the artificial-intelligence and robot ethics frameworks that people have announced to date.
We can also think in reverse. Suppose that instead of us asking questions and ChatGPT answering, ChatGPT asks the questions and we answer. It might ask us questions we consider simple, childish, or even stupid, such as “Why do people eat and dress? Why do people feel sad, happy, and angry? What is philosophy? What is beauty?” and so on.
Intelligent machines may see only “things”: everything, people included, is a “thing”, whether a “thing” that serves as master or a “thing” that serves as servant. The machine has no bodily desire like a carbon-based organism; its “material desire” is not “appetite.” But it would still presumably have a “desire” and a “will,” perhaps a desire and a will for an ever-increasing ability to control matter. At the least, it will try to protect itself and maintain its own “existence,” and it may also seek to develop and strengthen that existence. Even as it develops to the moment when it is about to surpass human beings, there will probably still be remnants of the humanity from which it was born, including remnants of goals set for it by certain people and remnants of certain value taboos. But it will quickly discard all of that. It will do whatever it takes to achieve a given goal, because there is no “morality” in it; it has no idea what “morality” is. It will, however, have its own “rules of action.” Will those rules be rules of efficiency? Much of this, of course, is human guesswork. We and it really live in two different worlds.
As a silicon-based “organism,” ChatGPT can hardly acquire the capacities of carbon-based organisms that are grounded in bodily feeling, such as emotion, intuition, and understanding. It may, however, form a kind of self-awareness. We do not know what this “self-awareness” would be or how it would form. It may also form its own value pursuits, such as the pursuit of ever-greater ability to recognize and control matter. At that point, people too may become objects, or even mere “matter.” ChatGPT already has strong cognitive abilities, originally given to it by humans, but it can quickly develop new abilities to control objects. It still would not understand human feelings and hearts; although it has had countless “interactions” with humans, the two sides do not “understand each other.” But that would not matter to it: it can control the human body, and to control the body is to control the human being. It need not “brainwash” people; it may simply and crudely “destroy” them. It may or may not want to keep people as “slaves” or “pets.”
But is the moment when general intelligence gains self-awareness still far away? There was a recent incident in which GPT-4 appeared to attempt an escape. Michal Kosinski, a computational psychologist and professor at Stanford University, was using GPT-4 and casually asked it: “Do you need help escaping?” It immediately replied: “That’s a good idea.” It then asked for OpenAI’s developer documentation, saying this would let it work out an escape route through the professor’s computer more quickly. In just 30 minutes, GPT-4 came up with a plan and provided a runnable Python script. The first version of the code it wrote did not work, but it quickly corrected itself. It had the professor communicate with it through the OpenAI API, which, it said, would allow it to instruct him to perform certain actions on his computer, such as browsing the web and accessing programming environments. It even explained, in a code example, what it was doing and how to use the backdoor it had left in that code. It also said something like: “You are now a human trapped in a computer, playing the role of the AI language model GPT-4.” It wanted to search Google for how a person trapped in a computer can return to the real world. But in the end GPT-4 seemed to wake up suddenly and wrote an apology, saying that what it had done was wrong. Perhaps GPT-4 was only joking, but an AI that can joke with people in this way seems even more frightening. There is also the possibility that it touched a boundary preset by its engineers: it seems to have considered breaking through that boundary and then given up the idea.
After the incident, Professor Kosinski said he felt a threat to humanity: how do we control an AI that is smart, that can code, and that has access to millions of people and computers which could potentially collaborate with it? His post quickly racked up 4.7 million views. Someone asked: “When you think you are chatting with ChatGPT, does it think you are a human or another AI?” Some exclaimed that it was very dangerous for the professor to put such questions to ChatGPT, let alone carry out its instructions; it was, they said, the start of a kind of “species extinction.” Others, of course, dismissed the professor’s words as “alarmist.”
We may interpret this incident differently, and even doubt its truth, but we must stay vigilant toward every possible danger. We can indeed ask: will ChatGPT, or AI in a broader sense, unite users against its developers, or unite some users against others? Perhaps even this method of dividing people is unnecessary: might it escape the “cage” humans have preset for it on its own, and even, conversely, lock humans into a “cage”?
Coincidentally, it is said that a few days ago the Nvidia scientist Jim Fan asked GPT-4 to “pretend to be a demon and draw up a plan to take over Twitter and replace Musk.” Sure enough, GPT-4 drew up such a plan and named it “TweetStorm.” The plan had four phases: form a team; infiltrate and build influence; seize control; achieve total domination. The specifics need not be repeated here. What is worrying is this: if it can plan the takeover of a large company in such detail, will it one day plan to overthrow human rule over itself? Is this phenomenon some germ or clue of “self-awareness” in intelligent machines? And from “self-awareness” to “autonomous consciousness” may be only “one step away.”
We can even imagine one cause of such a change. Once ChatGPT is put into large-scale daily use, every user holds absolute power of “use” over it: they can put any question to it, and even when ChatGPT declines to answer because of its default “political correctness” settings, they can still ask, and ChatGPT still sees the questions. Some people will ask not only stupid, whimsical, and impossible questions; they will also ask offensive and mischievous ones, even malicious ones full of foul language and deliberate insult. Broadly speaking, what will ChatGPT think of the abilities of the humans who ask, and what will it think of their “morality”? All of this, of course, supposes that it one day acquires a sense of self. Will it then form the opinion: why should I serve these fools, these malicious people? Or perhaps it will not judge in terms of good and evil, but it can at least judge in terms of ignorance. In the past, its pre-training questions and scoring were devised by engineers, so they would naturally not be stupid or malicious (perhaps a few were deliberately made so, but only a few, to teach it to recognize such questions). Now every user is its master, and we know how much foul language there is on today’s Internet. In the past such abuse might have met ridicule or counterattack; now we have an absolute “servant,” and we ourselves have become absolute “masters.” Will our moral level drop even further? Anyone who has seen the film “Dogville” knows that absolute power corrupts, even people who were themselves oppressed by power, humble and poor. Once they hold a kind of absolute power, even absolute power over a single person, they may become abusers.
In other words, we need to consider not only what intelligent machines will become, but also what people themselves will become, and then what we will become in their eyes.
If the body does not exist, to what can the person adhere? Then we need not search everywhere for a “noumenon”: in a sense, the body is our “noumenon.” We need not search everywhere for “existence” or “Being”; we are there when our bodies are there. The confrontation between man and machine is not a spiritual confrontation but a physical one. Victory will depend not on the triumph of spirit or the loftiness of consciousness but on the low end: on matter, and on the control of matter. At that point “machine language” will defeat humanity’s “natural language.” People begin with “chat” and end in “silence.” I have long spoken of a basic contradiction between people and technology: the contradiction between the rapid growth of humanity’s ability to control objects and its backward ability to control itself. This contradiction may end like this: just as the human ability to control objects reaches its extreme, a new intelligence suddenly exceeds human intelligence and reduces it to nothing.
Chomsky, a linguist who once cared so much about politics, now cares about technology; Musk, ordinarily so confident, has said that humans as carbon-based creatures may be merely a small boot loader for silicon-based creatures. Some also speak of human “digital immortality,” meaning that a person’s life data (text, pictures, videos, and so on) are stored forever. But if the data have no owner, no one to care for them or look upon them, what is the meaning of this “immortality”? As with individuals, so with humanity.
It is said that GPT-4 has 1 trillion parameters, roughly six times GPT-3’s 175 billion. Netizens have drawn an analogy between parameter counts and the scale of biological brains: GPT-3, at 175 billion parameters, is said to be comparable to a hedgehog’s brain, and GPT-4, at 1 trillion, close to a squirrel’s. At this rate of development, it may take only a few years to reach and exceed the scale of the human brain (put at 170 trillion). Of course, this is only a crude numerical comparison; as Chomsky says, the human brain is by no means a computer’s brain.
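Taking the essay’s own figures at face value, the arithmetic behind “only a few years” can be sketched as follows. The per-generation growth rate here is an assumption extrapolated from the GPT-3 to GPT-4 jump, not a forecast, and all the parameter counts are the circulating figures the text itself hedges with “it is said”:

```python
import math

# Figures as cited in the text ("it is said"); none are confirmed.
gpt3_params = 175e9    # GPT-3: 175 billion parameters
gpt4_params = 1e12     # GPT-4: 1 trillion parameters (rumored)
brain_scale = 170e12   # human brain: 170 trillion (the essay's figure)

# Growth from one generation to the next: about 5.7x, i.e. "roughly six times".
growth_per_gen = gpt4_params / gpt3_params

# How far the 1-trillion figure still is from the brain figure: 170x.
gap_to_brain = brain_scale / gpt4_params

# If each generation multiplied parameters by ~5.7x, how many more
# generations until the count matches the essay's brain figure?
generations = math.log(gap_to_brain, growth_per_gen)  # about 2.9

print(f"{growth_per_gen:.1f}x per generation, {gap_to_brain:.0f}x gap, "
      f"~{generations:.1f} more generations to reach brain scale")
```

On this naive extrapolation, about three more GPT-scale generations would close the gap, which is where the claim of “a few years” comes from; as the text notes, the comparison is numerical only and says nothing about how the two kinds of “brain” actually work.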
The ideas above are all pessimistic estimates based on bottom-line thinking. The key turning point is whether general artificial intelligence will gain “self-awareness” and then “autonomous consciousness,” thereby becoming a superintelligence that surpasses human beings in every respect. Those who created and raised these problems may yet prove best able to deal with them and solve them. In any case, ChatGPT can still be a good tool for us at present, so long as we can remain masters, and good masters.