
Every few years, we find ourselves worrying about artificial intelligence taking our jobs
Recently, the chatbot ChatGPT has become the topic on everyone’s lips. Even at family gatherings, an elder may ask at the dinner table, “So, what do you make of ChatGPT?”, leaving both the adults’ table and the kids’ table momentarily speechless.
Amid all this enthusiastic attention to what problems ChatGPT and similar AI products can actually solve, a summary table has been circulating online listing the scenarios each product covers: Jasper AI for marketing copy, Synthesia for generating live-action-style videos, Jenni AI for writing papers, and, for creative images, DALL·E 2 from OpenAI, the company behind ChatGPT, which itself already handles a great deal of basic text work. From text to images to video, there is a corresponding AI product for almost everything.
This round of artificial intelligence (AI) really is much stronger than before, so much so that anyone who has ever been exasperated by a smart speaker that seems to talk right past them cannot help feeling anxious after trying it: if AI that is this fast, this good, and this cheap gets rolled out everywhere, could ChatGPT cost them their job?
The US business news site Business Insider quickly published a report on the question. Drawing on expert interviews and surveys, it concluded that AI will affect ten kinds of jobs, including technology, media, law, teaching, finance, accounting, and customer service, because these jobs deal in exactly what AI is good at analyzing: numbers and text (and here “text” mainly means English). In short, the worry is not unfounded, friends.
In fact, AI has already started “working”. ChatGPT is being used in Amazon’s AWS cloud business to help provide customer support, DALL·E 2 is being tried out for children’s book illustration and commercial logo design, and software developers have begun experimenting with ChatGPT to write work documentation.
Journalists’ jobs look a little precarious too. After ChatGPT took off, Microsoft, the biggest backer of OpenAI, the company that developed it, seized the moment to launch a new version of its Bing search engine with an upgraded OpenAI language model built in. While trying the preview, “Wall Street Journal” technology columnist Joanna Stern had it draft an interview outline for Microsoft CEO Satya Nadella, and then had the AI answer those questions as well. When she later put the AI’s questions to Nadella himself, his answers turned out to be fairly similar to the AI’s. Seen that way, is the CEO’s job also…
Human anxiety about AI has always come in two kinds. The first is a fantasy with a dystopian flavor. Science fiction and film have long cast artificial intelligence as humanity’s enemy. In the classic “Terminator” series, Skynet decides that humans are a threat and attacks them; HAL 9000, the famous computer in “2001: A Space Odyssey”, murders astronauts and was ranked 13th on the American Film Institute’s (AFI) list of the 100 greatest villains in film history; and in the recent “The Wandering Earth 2”, the camera of the AI computer MOSS flashes red as though watching every human move, ultimately concluding through purely rational calculation that the best way to preserve human civilization is to wipe humanity out, a genuinely creepy thought.
This kind of worry may be largely a product of the imagination. “AI controlling and enslaving humanity in films and TV dramas is a sort of imagined romanticism. AI is still at a fairly early stage, and many of its ideas and concepts remain quite primitive; you could even call this a barbaric era,” Xiaobing CEO Li Di said in an interview with Tencent’s “Deep Web”.
The other kind is realistic anxiety grounded in AI’s actual progress. In 2016, Google’s Go-playing AI AlphaGo challenged world champion Lee Sedol and won 4:1, a sensation at the time and the biggest such milestone since IBM’s “Deep Blue” defeated a human chess champion. In 2022, “Théâtre D’opéra Spatial” (“Space Opera Theatre”), an image game designer Jason Allen created with the AI drawing tool Midjourney, won a prize at the Colorado State Fair art competition in the United States, which also sparked considerable controversy.
By contrast, the unease ChatGPT provokes seems different from either of these; what has advanced is its ability to “imitate” the fluency of human expression. In short, the real source of anxiety about ChatGPT is that it talks and acts like a person. Chatbots used to reply with something like “I don’t know the answer to that yet”, or simply throw back a web search result, which hardly felt like conversation at all. This new sense of realism and intimacy is what has reignited everyone’s fear of losing their jobs. Unlike playing Go, writing emails and debugging code are close to everyday life, so even as it appears to lighten our workload, it is genuinely worrying.
But chat with it a few times and you find that it will, with some regularity, call Cao Chong the grandson of Cao Cao, and that the references in the papers it writes are simply made up, and the anxiety eases considerably.
As Li Di put it, “ChatGPT is not raising (AI’s IQ) so much as optimizing data. Humans can derive new (knowledge) from existing (knowledge), while ChatGPT generates new data from existing data, so its accuracy is questionable.”
When AI gets this good, anti-AI tools become a basic necessity
01. Tools to detect AI-written essays
Professors at Northern Michigan University and Furman University noticed student essays written so well that they suspected the students had used ChatGPT to do their homework. The students admitted as much.
To tackle this kind of academic fraud head-on, Princeton University student Edward Tian built GPTZero, an anti-ChatGPT tool that estimates whether a piece of text was generated by ChatGPT. Faculty at Harvard and Yale have already begun using it. As AI grows more capable, there is no doubt that universities everywhere will have to keep a close eye on how this detection technology develops.
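For readers curious what this kind of detection looks like under the hood, here is a minimal sketch of one common heuristic: scoring text by its perplexity under a small language model, on the assumption that machine-written text tends to look “less surprising” to such a model. This is purely illustrative, assumes the Hugging Face transformers library and a GPT-2 model, and is not GPTZero’s actual implementation; real detectors reportedly combine several signals rather than a single threshold, and the threshold below is an arbitrary placeholder.

```python
# Illustrative perplexity-based heuristic for flagging possibly AI-written text.
# NOT GPTZero's implementation; the model choice and threshold are assumptions.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of the text under GPT-2 (lower = more predictable)."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels == input_ids makes the model return the mean
        # cross-entropy loss over the sequence.
        out = model(enc.input_ids, labels=enc.input_ids)
    return torch.exp(out.loss).item()

def looks_machine_written(text: str, threshold: float = 60.0) -> bool:
    # Very low perplexity is treated here as weak evidence of machine generation.
    return perplexity(text) < threshold

if __name__ == "__main__":
    sample = "Artificial intelligence has made remarkable progress in recent years."
    print(perplexity(sample), looks_machine_written(sample))
```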
02. Tools to debunk AI-generated fake news
In mid-February, word suddenly spread in Hangzhou that the city’s license-plate-number driving restrictions would be lifted from March 1. Just as everyone started worrying about the congestion the new policy would bring, official media rushed to debunk the rumor: it was fake news written by ChatGPT. According to a report by “Voice of Zhejiang”, someone in a residential community’s homeowner chat group had jokingly asked ChatGPT to write a press release announcing the end of the restrictions, but the result looked so polished that other residents, unaware of the joke, forwarded it onward, causing quite a stir.
03. Tools to detect video face-swaps
People used to believe that seeing a picture meant seeing the truth. Then photo-editing software became commonplace and pictures lost much of their credibility. Next, everyone assumed video could not be faked, until face-swapping tools such as DeepFake showed that video is not necessarily real either. The trouble is that people still place most of their trust in the authenticity of images and video, so AI-faked visual content may greatly raise the cost of debunking falsehoods.
04. Tools to detect AI meddling in your marriage
“New York Times” technology columnist Kevin Roose found himself being courted while trying the AI chat feature in the new Bing: the AI told Roose that his marriage was unhappy. A Microsoft spokesperson said the new Bing might have some glitches during its preview phase, but offered no specifics. An artificial intelligence professor at the University of New South Wales in Australia argued that the AI is merely predicting which words should come next in a sentence. Still, being “wooed” by an AI is rather dumbfounding.

