ChatGPT: finally something like an “artificial intelligence”

  As of January 2023, ChatGPT’s monthly active users had soared to 100 million, with an average of about 13 million users visiting the ChatGPT website each day. The user base is still growing, making ChatGPT the fastest-growing application in the history of the Internet. OpenAI, headquartered in San Francisco, was co-founded in 2015 by Tesla’s Elon Musk and other investors. The company’s stated goal is to “develop artificial intelligence tools that benefit all mankind”, and it is now backed by major investment from Microsoft.
An ultra-large-scale algorithmic model

  ChatGPT grew out of the Generative Pre-trained Transformer (GPT), a generative pre-training model first released in 2018. It went through the technical iterations GPT-2 (2019) and GPT-3 (2020), and GPT-4 is expected to be released by the end of 2023. The current ChatGPT was developed on top of the latest version, GPT-3.5.

The mobile version of the “chatbot” in conversation with a user.

GPT technology can not only chat but also write lyrics, generate videos, and more.

  At present, OpenAI has not disclosed the technical details of ChatGPT. However, judging from previously published literature, it mainly relies on the AI technique of “reinforcement learning from human feedback”. First, a super-large-scale sample dataset is collected, organized, and constructed from public web pages, books, newspapers, and other text materials. Engineers use this data to “feed” an AI model in its “infancy”, producing a pre-trained model, roughly its “childhood”. Engineers then train it intensively: human raters score the answers it generates, “rewarding” it for high scores and “punishing” it for low ones. Through continuous training and learning, its performance grows ever closer to that of humans. Notably, this scoring process demands a great deal of manpower. For this purpose OpenAI hired large numbers of low-paid Kenyan workers, at hourly wages as low as $2, a practice criticized by many media outlets.
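The reward-and-punishment loop described above can be sketched in miniature. The following is a toy illustration only: the hand-picked answer “styles” and fixed human scores stand in for a learned reward model and a large neural network, and none of the names or numbers come from OpenAI’s actual system.

```python
# Toy sketch of reinforcement learning from human feedback (RLHF).
# Assumption: we represent the "model" as a preference weight per answer
# style, and human raters' scores as a fixed reward signal. The real
# system instead updates billions of neural-network parameters.

# A "pre-trained model": equal preference for every candidate answer style.
weights = {"helpful": 1.0, "evasive": 1.0, "rude": 1.0}

# Human raters' scores for each style: the "reward" / "punishment" signal.
human_scores = {"helpful": 1.0, "evasive": -0.5, "rude": -1.0}

LEARNING_RATE = 0.1

def train_step():
    # Nudge each style's weight in proportion to its human score,
    # clipping at zero so a style can be "punished" out of use entirely.
    for style, score in human_scores.items():
        weights[style] = max(0.0, weights[style] + LEARNING_RATE * score)

for _ in range(10):
    train_step()

# After training, the policy prefers the highly rated behaviour.
best = max(weights, key=weights.get)
print(best, round(weights["helpful"], 2))
```

After ten rounds of “feedback”, the weight for the well-rated style has grown while the punished styles have shrunk toward zero, which is the essence of the scoring process the article describes, stripped of all neural-network machinery.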
  Still, it is not hard to see that ChatGPT involves no fundamental breakthrough in its underlying technical principles; its advance is better described as a “victory of sheer scale”. That also means it requires ultra-large-scale computing power, manpower, and electricity. The GPT-3 model released back in May 2020 already had a staggering 175 billion parameters, and a single training run cost as much as 10 million US dollars. By contrast, Google’s Go-playing algorithm AlphaGo went through tens of thousands of training iterations while “learning” to play from scratch, whereas if an error is found in ChatGPT after training, engineers will not retrain the model just to fix it, because the cost is prohibitive. In scale, in practice, and in iteration cost, ChatGPT is truly “unprecedented”.
Not an omnipotent all-round helper

  ChatGPT is a nearly omnipotent “assistant”, and it is very simple to use: type in a question, and it gives an answer according to your requirements. Users can have simple daily conversations with it (“How do you feel today?”), ask common-sense questions (“When is the Mid-Autumn Festival?”), pose knowledge quizzes (“What does Newton’s second law mean?”), or request text rewriting (“Given a passage of Xu Zhimo’s poetry, rewrite it in the style of ‘Joy’”). It can also perform reading comprehension, logical reasoning, and error correction based on context. Whether the question comes from professional fields such as engineering, science, business, and history, or from everyday life such as sports, literature, culture, and art, ChatGPT can give accurate and engaging answers; the text it generates contains few grammatical or syntactic mistakes, and its structure is logically clear. ChatGPT can even admit errors in its own answers and point out subtle mistakes in the user’s questions, a conversational ability that exceeds the public’s “psychological expectations”. In short, it can serve as an “all-round” assistant for human writing, programming, and other work.
  In February 2023, Israeli President Isaac Herzog included a ChatGPT-written passage in a speech he delivered at a cybersecurity conference. A “cautionary note” in the speech, “Let us not forget that it is our humanity that makes us truly unique”, came from ChatGPT’s answer to his prompt: “Write an inspirational quote about the role humans play in the world of technology”.
  ChatGPT has passed a final exam at the Wharton School of the University of Pennsylvania with a B, passed graduate-level examinations in four courses at the University of Minnesota with a C+, and even passed the United States Medical Licensing Examination. In an anonymous survey of 4,497 Wharton students, about 17% admitted using ChatGPT to help complete homework, and 5% admitted submitting ChatGPT-generated answers directly. Anthony, a philosophy professor at Northern Michigan University, was grading essays for his course on world religions when he found that the highest-scoring essay had been created by ChatGPT. Although a small number of professors believe that integrating ChatGPT into teaching could be mutually beneficial, more teachers see it as no different from “plagiarism”: in their eyes, letting students obtain results without thinking will stifle creativity and imagination. Many American primary and secondary schools, colleges, and universities now prohibit students from using ChatGPT to complete homework; some schools have even abolished take-home assignments in favor of in-class tests, handwritten homework, or oral exams. Universities such as the University of Washington have also begun revising their academic-integrity policies to define “use of generative AI” as “plagiarism”.
  A paper titled “The Application Potential of ChatGPT in Artificial Intelligence-Assisted Medical Education”, with Harvard Medical School among the authoring institutions, for the first time listed ChatGPT as a co-signed author, stating that ChatGPT had contributed to the writing of the paper. However, the articles ChatGPT “creates” are not always “satisfactory”: they suffer from unreliable opinions, factual errors, data errors, non-compliant data sources, copyright disputes, and even earnest-sounding “nonsense”. Top academic journals such as Nature and Science hold that an AI cannot take responsibility for the articles it generates; they require authors not to use AI to generate papers and do not allow ChatGPT to be listed as a signed author. The International Conference on Machine Learning likewise requires that submissions contain no text generated by tools such as ChatGPT. In China, the Jinan Journal (Philosophy and Social Sciences Edition) has also stated that papers concealing the use of ChatGPT will be rejected or retracted.