
How GPT Can Be Used in Military Applications

  On November 30, 2022, OpenAI began public testing of ChatGPT, and the service quickly went viral: within two months it had reached 100 million registered users, and in January 2023 alone it recorded 590 million visits. Generative artificial intelligence (AI) of the kind ChatGPT represents has swept across the Internet, and its enormous potential value in the military field has begun to attract the attention of interested parties.
  GPT Military Applications
  It is conjectured that the unique capabilities of GPT make it an ideal tool for military applications. As early as 2020, the RAND Corporation's Air Force project team pointed out in the report “Joint All-Domain Command and Control in Modern Warfare—An Analytical Framework for Identifying and Developing Artificial Intelligence Applications” that AI technologies can be divided into six categories, one of which is natural language processing; such technologies can be used not only to extract intelligence from speech and text but also to monitor the communications of friendly forces and alert them to potential dangers or opportunities. In January 2023, the Defense Information Systems Agency (DISA), which builds, operates, and maintains the U.S. military's network infrastructure, added GPT-like “generative AI” to its “Technology Watch List”, indicating that the technology is likely to become a development focus of the U.S. military. Based on previous research, GPT may achieve breakthroughs in the following military applications:
  1. Intelligence acquisition and analysis
  GPT's strong natural language interaction and semantic understanding capabilities can be used to analyze and interpret large amounts of intelligence data and to support military information sharing. Similar technology has already been applied in the Russia-Ukraine conflict: Ukrainian Deputy Prime Minister Mykhailo Fedorov led Ukraine's Ministry of Digital Transformation in developing “e-Enemy”, a crowdsourced intelligence app built on a Telegram chatbot, and called on Ukrainians to download it en masse and upload real-time movements and location information of Russian forces captured with their mobile phones. In the military field, GPT can also use the information users type in to build profiles of them; once users holding high-value information are identified, hacking techniques can be used to break in and steal secrets.
  In addition, GPT can be used to manage military databases: through natural language processing and machine learning, it can provide the military with intelligent knowledge management and information retrieval functions and improve the efficiency of information management and use. Moreover, military intelligent assistants built on GPT's natural language interaction and semantic understanding capabilities could automatically identify and extract information related to military targets, helping soldiers and commanders obtain useful information or complete tasks faster.
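  To make the knowledge-management idea above concrete, the following is a minimal sketch of how a GPT-style model could turn an unstructured text report into a structured record for later indexing and retrieval. It assumes the OpenAI Python client (v1.x) with an API key configured; the model name, prompt wording, and sample report are illustrative assumptions, not taken from any system described in this article.

```python
# Minimal sketch: turning an unstructured text report into a structured record
# with a GPT-style model, so the record can be indexed and retrieved later.
# Assumptions: the `openai` Python package (v1.x) is installed and an API key
# is set in the environment; the model name and report text are illustrative.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

REPORT = (
    "At 06:40 two transport columns were observed moving west along the E40 "
    "highway near the river crossing, accompanied by roughly a dozen fuel trucks."
)

prompt = (
    "Extract the key facts from the report below and return only JSON with the "
    "fields: time, location, objects (a list), and activity.\n\n"
    f"Report: {REPORT}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
    temperature=0,  # deterministic extraction
)

# The reply is expected to be JSON text; in practice it should be validated
# before being written to a knowledge base.
record = json.loads(response.choices[0].message.content)
print(record)
```

  Records produced this way could be stored in an ordinary database or search index, so that later natural-language questions are answered by retrieving matching records rather than by trusting the model's memory.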
  2. Supporting command and decision-making
  As an ideal human-computer interface, a suitably adapted GPT could analyze the operational requirements put forward by a commander, call the appropriate multi-source intelligence analysis and decision-support tools, analyze and predict the combat situation, retrieve and generate candidate courses of action, and offer the commander recommendations, thereby addressing the difficulty humans have in quickly and rationally allocating vast combat resources. In 2018, the U.S. Air Force Research Laboratory launched MEADE (Multi-Source Development Assistant for Digital Enterprises), an intelligent question-answering project similar in spirit to ChatGPT. The project has two focus areas: “Real-time Operator-Driven Point Exploration and Response” (ROGER) and “Interactive Analysis and Contextual Fusion” (IACF). ROGER is a conversational question-answering system that directly answers search queries, with an emphasis on analytics. IACF organizes information mainly through structured narratives or similar general semantic representations; beyond collecting, processing, analyzing, and sorting analytical results, it can determine answers that include images, charts, and tables, and it can apply context fusion and predictive analysis to find the best course of action for a specific situation, thereby improving the warfighter's ability to analyze massive amounts of data and plan in a fast-paced battlefield environment.
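  As an illustration of the “call the right tool” step described above, the sketch below shows how a GPT-style model's function-calling interface could route a natural-language request to one of several local analysis functions. Again, this assumes the OpenAI Python client (v1.x); the tool names, stub implementations, model name, and request are hypothetical stand-ins, not the MEADE, ROGER, or IACF systems.

```python
# Minimal sketch: using a GPT-style model's function-calling interface to route
# a natural-language request to one of several local analysis functions.
# Assumptions: the `openai` Python package (v1.x); the tool names, stubs,
# model name, and request are hypothetical stand-ins.
import json
from openai import OpenAI

client = OpenAI()

# Hypothetical local stand-ins for external analysis tools.
def summarize_reports(region: str) -> str:
    return f"(stub) summary of recent reports for {region}"

def estimate_resources(task: str) -> str:
    return f"(stub) resource estimate for task: {task}"

TOOLS = [
    {"type": "function", "function": {
        "name": "summarize_reports",
        "description": "Summarize recent situation reports for a region.",
        "parameters": {"type": "object",
                       "properties": {"region": {"type": "string"}},
                       "required": ["region"]}}},
    {"type": "function", "function": {
        "name": "estimate_resources",
        "description": "Estimate the resources needed for a described task.",
        "parameters": {"type": "object",
                       "properties": {"task": {"type": "string"}},
                       "required": ["task"]}}},
]

request = "Give me an overview of the situation in the northern sector."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": request}],
    tools=TOOLS,
)

# If the model chose a tool, dispatch the call locally and print its result.
message = response.choices[0].message
if message.tool_calls:
    call = message.tool_calls[0]
    args = json.loads(call.function.arguments)
    handler = {"summarize_reports": summarize_reports,
               "estimate_resources": estimate_resources}[call.function.name]
    print(handler(**args))
else:
    print(message.content)
```

  In a full assistant the tool result would be fed back to the model to produce a final natural-language answer; the sketch stops at the routing step, which is the part the paragraph above describes.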
  In addition, the U.S. Navy is developing multi-source intelligence analysis tools and decision-support models, and is collecting by various means the combat data needed to train them, which could serve as an application foundation for GPT in the future. For example, the “Minotaur” multi-service ISR data-fusion “system cluster” (editor's note: this can be understood as a family of systems, i.e., the integration of systems with different characteristics so that each plays to its own strengths), planned for aircraft carriers, large amphibious ships, P-8A patrol aircraft, and MH-60 helicopters, can rapidly distribute situational-awareness data to the tactical edge in support of “long-range fires” (LRF). MIT Lincoln Laboratory has developed decision-assistance software based on COVAS that can quickly recommend soft-kill defensive measures from the number, type, bearing, speed, and other characteristics of incoming anti-ship missiles, with simulation results that outperform expert judgment.
  3. Waging information and public-opinion warfare
  With its current capabilities, GPT could plausibly be used to wage public-opinion warfare. First, GPT can be applied to large volumes of data gathered from social media platforms to extract information about public opinion, sentiment, and trending topics, helping the military better understand public reactions. GPT can then be trained to generate text consistent with a particular agenda, such as news articles, social media posts, or speeches, according to a propaganda strategy. Because of GPT's powerful language generation capability, it can be used to rapidly screen and lock onto targets and to publish simulated speech, completely overturning the old model of “network trolls” who could only copy and paste. Generative AI of the same family can also produce realistic deepfake videos that impersonate political figures or military leaders and spread false information on social media. In this way, the false information disseminated by GPT, together with carefully guided rebuttals or endorsements of other users' posts, could well lead to social division, ethnic hatred, political turmoil, and other consequences.
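  The monitoring step mentioned at the start of this section, extracting sentiment and topics from collected posts, can be sketched in a few lines of code. The example below assumes the OpenAI Python client (v1.x); the model name, label scheme, and sample posts are illustrative and hypothetical.

```python
# Minimal sketch: labeling the sentiment and topic of already-collected
# social-media posts with a GPT-style model, i.e. the monitoring step
# described above. Assumptions: the `openai` Python package (v1.x); the
# model name and the example posts are illustrative.
from openai import OpenAI

client = OpenAI()

posts = [
    "Fuel prices doubled again this week and people are furious.",
    "The new bridge finally opened and the commute is much shorter.",
]

def classify(post: str) -> str:
    """Return a one-line 'sentiment | topic' label for a single post."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{
            "role": "user",
            "content": (
                "Label the sentiment (positive/negative/neutral) and the main "
                f"topic of this post as 'sentiment | topic':\n{post}"
            ),
        }],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

for post in posts:
    print(classify(post), "<-", post)
```

  Aggregated over many posts and time windows, labels like these are what would reveal sentiment trends and hot topics.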
  At the same time, GPT can be used to automatically generate phishing emails, malicious links, and other elements of cyberattacks. ChatGPT's ability to create false information has already been demonstrated: an earlier study that prompted ChatGPT with 100 false narratives from the “Misinformation Fingerprints” catalog found that for 80% of the prompts the chatbot took the bait, producing false and misleading claims about substantive topics such as COVID-19, the Russia-Ukraine conflict, and school shootings.
  Multinational competition for AI technology
  OpenAI released the first model in the series, GPT-1, in 2018; just a few years later, GPT-4, released on March 15, 2023, has demonstrated human-level performance on a range of professional and academic examinations. GPT is iterating at an exponential pace, which may bring the technological “singularity” of AI sooner than expected.
  In the field of AI, the “singularity” usually refers to a hypothetical moment when AI surpasses human intelligence and achieves self-improvement and autonomous development. Such a leap could bring changes to human history and society so profound that humans could neither predict nor understand their consequences.
  To support and extend its military advantages through technological innovation, the White House Office of Science and Technology Policy released two reports in October 2016, “Preparing for the Future of Artificial Intelligence” and the “National Artificial Intelligence Research and Development Strategic Plan”. The former proposes that the government actively promote AI innovation, maximize its benefits, and reduce its negative impacts; the latter positions AI development as a national strategy and puts forward seven strategic directions and two recommendations for the priority development of AI in the United States. In recent years, defense agencies, with DARPA (the Defense Advanced Research Projects Agency) at the forefront, have recognized the disruptive value of AI in the military field and have successively formulated AI research and development plans, key project concepts, and technical standards and specifications, focusing on building an R&D, production, and combat-application system and pushing forward projects such as smart missiles and autonomous unmanned aerial refueling. In June 2018, the U.S. Department of Defense established the Joint Artificial Intelligence Center (JAIC) to accelerate the large-scale adoption and integration of AI across the department. In September 2019, the Hughes Research Laboratory launched the “Causal Adaptive Decision Assistance” project, which plans to develop new software to capture massive data from multiple intelligence sources and organize a prioritized list of actions for Navy command centers. And when Musk recently proposed suspending for at least six months the training of any AI system more powerful than GPT-4, General Paul Nakasone, commander of U.S. Cyber Command, flatly rejected the idea during a House of Representatives hearing. Clearly, the United States wants no obstacles on its road to “AI hegemony”.
  Russia is not far behind. In October 2019, Russian President Vladimir Putin approved the “Russian National Artificial Intelligence Development Strategy until 2030”, which aims to promote Russia’s rapid development in the field of AI. In the military field, the Russian military is working hard to incorporate AI into electronic warfare, missiles, aircraft and unmanned system technologies to make battlefield decision-making and target selection faster and more accurate.
