
An In-Depth Look at the Opportunities and Challenges of Artificial Intelligence for Society

  With the continuous advancement of technology, artificial intelligence has become a hot topic in today’s society. From smart assistants to self-driving cars, artificial intelligence is gradually permeating our daily lives and work. The development of artificial intelligence technology has brought us many conveniences, but it has also triggered a series of potential social problems.
Causing unemployment

  The application of artificial intelligence in certain fields has significantly improved production efficiency, but it has also created unemployment problems. Many jobs in traditional industries may gradually disappear as automation and intelligent systems spread. For example, assembly-line workers in manufacturing may be replaced by robots, and self-service systems in banks and supermarkets may eliminate the need for tellers and cashiers. People and organizations that have mastered advanced technologies and resources will find it easier to reap huge economic benefits from artificial intelligence, while those with little exposure to the technology or weak technical skills may be gradually marginalized. Economists at Goldman Sachs predict that as many as 300 million full-time jobs could be automated thanks to the latest breakthroughs in artificial intelligence, with two-thirds of workers in Europe and the United States exposed to some degree of AI-driven automation. This could lead to massive labor displacement and exacerbate economic inequality. At the same time, artificial intelligence may accelerate monopolization by a few enterprises, concentrating wealth, deepening social divisions, and destabilizing society.
Leakage of privacy and data

  The widespread application of artificial intelligence technology has generated a large amount of personal data, which is crucial for training and improving AI systems; however, unregulated data collection and use also raises privacy and data-security issues. If this data is misused or hacked, users’ personal information may be leaked and their identities stolen. Most of us have had this experience: if you like to read a certain type of article or watch a certain type of video on your phone, the relevant application will keep pushing that type of content, which shows that the operator has captured your preferences (personal privacy). Everyone now has multiple applications installed on their smartphone, and we sometimes find that soon after discussing a certain brand with friends in a chat app, a shopping application pushes that brand’s advertisements to us, suggesting that applications can harvest users’ privacy across app boundaries. Facebook experienced a large-scale data breach in which more than 500 million users’ personal data, including sensitive information such as names, phone numbers, places of residence, and birthdays, was leaked on hacker forums. The breach attracted widespread attention and criticism, exposing the privacy-protection and data-security problems facing social media platforms.
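The “more of the same” push described above can be illustrated with a minimal sketch. This is a hypothetical recommender, not any real application’s algorithm: it simply counts which categories a user has already viewed and ranks new items by those counts, so a user’s revealed preferences directly shape what is pushed next.

```python
from collections import Counter

def recommend(history, catalog, k=2):
    """Naive engagement-driven feed: rank catalog items by how often
    the user has already viewed that category (a toy sketch of the
    'more of the same' push, not a production recommender)."""
    prefs = Counter(item["category"] for item in history)
    # Items from the user's most-viewed categories float to the top.
    ranked = sorted(catalog, key=lambda it: prefs[it["category"]], reverse=True)
    return ranked[:k]

history = [{"category": "sports"}, {"category": "sports"}, {"category": "news"}]
catalog = [
    {"id": 1, "category": "sports"},
    {"id": 2, "category": "cooking"},
    {"id": 3, "category": "sports"},
]
feed = recommend(history, catalog)
print([item["id"] for item in feed])  # → [1, 3]: the two sports items
```

Even this trivial loop shows why the feed converges on what the operator already knows you like: the ranking key is nothing but a record of your past behavior.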
Creating prejudice and discrimination

  The training data of artificial intelligence systems comes from data sets created by humans. These data may contain biased and discriminatory content, which can make some artificial intelligence systems discriminatory in their decisions and recommendations; this means we need to re-examine AI training data and algorithms. For example, in 2018, Amazon had to abandon an AI-based recruiting tool because it was biased against female candidates. The system was trained on ten years of submitted resumes, most of them from male applicants. As a result, the algorithm learned to penalize or downgrade resumes containing terms associated with women, leading to sexism in hiring. In the world of credit, some AI-powered credit scoring systems may unfairly deny loan applications from specific groups on the basis of historical lending data. For example, if historical data shows that a certain minority or socioeconomic group has higher loan default rates, the algorithm may tend to reject applications from that group, ignoring the individual applicant’s actual credit risk. AI credit assessment systems may also be biased when setting loan rates and terms, leaving certain groups with worse terms or higher interest rates than their actual credit risk warrants.
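The credit-scoring pattern just described can be sketched in a few lines. This is a toy illustration, not any real scoring system: the groups, the default records, and the 0.3 threshold are all invented, and the “model” is nothing more than a group-level default-rate lookup.

```python
# Invented historical lending records: (group, defaulted).
# The skew between groups A and B is fabricated for illustration.
historical = [
    ("A", False), ("A", False), ("A", False), ("A", True),
    ("B", True), ("B", True), ("B", False), ("B", False),
]

def default_rate(group):
    outcomes = [defaulted for g, defaulted in historical if g == group]
    return sum(outcomes) / len(outcomes)

def approve(group, income):
    # The 'model' rejects any applicant whose group's historical default
    # rate exceeds a threshold -- the applicant's own income (individual
    # risk) never enters the decision at all.
    return default_rate(group) <= 0.3

print(approve("A", income=30_000))  # True:  group A's rate is 0.25
print(approve("B", income=90_000))  # False: penalized for group history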
Conflict with human intelligence

  With the advancement of artificial intelligence technology, AI in some fields has shown the ability to rival or even surpass human intelligence. In 2016, Google’s AlphaGo defeated the world’s top Go player Lee Sedol, marking the point at which artificial intelligence could surpass top human players in complex games. The subsequent AlphaGo Zero did not even need human game records: it surpassed professional human players through self-play alone. As another example, artificial intelligence systems in natural language processing have made significant progress in semantic understanding and generation. OpenAI’s GPT-4 model has demonstrated outstanding performance in natural language understanding and generation tasks: it can write articles, answer questions, and generate dialogue at a level approaching that of humans. As artificial intelligence begins to mediate our interactions with colleagues, friends, and loved ones, is it possible that it is taking away something more important to us? Is it blurring our individuality and stealing the joy of interacting with other people? Will this auto-complete technology even change the way the human brain works?
  In this way, the prospect of artificial intelligence surpassing human intelligence challenges human self-esteem, raises the question of where the boundary between humans and machines lies, and fuels the controversy over whether artificial intelligence could possess human-like consciousness and emotions.
The weaponization problem

  In February 2023, the U.S. arms manufacturer Lockheed Martin announced that artificial intelligence software had flown a modified F-16 fighter jet for more than 17 hours in December 2022 with no human pilot on board, the first time artificial intelligence had flown a military aircraft. On January 25, 2023, the U.S. Department of Defense issued an order requiring the military to strengthen the development and use of autonomous weapons; NATO had earlier, on October 13, 2022, released a related implementation plan aimed at maintaining the alliance’s “technical advantage” in “killer robots”. This suggests that weaponized artificial intelligence may shape how future wars are fought.
  Artificial intelligence is at the heart of the increasing autonomy of some weapons systems. Capabilities of concern, such as autonomous targeting and counterattack, may soon be available to militaries. The use of artificial intelligence on the battlefield is already visible in the Russia-Ukraine conflict, in applications such as target-locking assistance and advanced loitering munitions. Such technology could replace human soldiers on the battlefield, raising questions of ethical responsibility: weapon systems may be unable to distinguish combatants from non-combatants and civilians, leading to indiscriminate attacks and wrongful killings. Ukraine is not the first place where such systems have been deployed. As early as 2019, Turkish-made “Kargu-2” suicide drones took part in operations on the border between Syria and Turkey. The Kargu-2 is the first suicide drone publicly claimed to be capable of launching a “swarm” attack: the drones can not only form a swarm but also strike different targets simultaneously, and with additional features such as facial recognition they have real battlefield utility.
  Weaponization of artificial intelligence is now seen as central to the competition between the world’s major military powers. This competition is a wake-up call for those in the international community who call for maintaining and regulating human control of weapons systems. Without early action, humanity may soon lose its best chance of regulating AI weapons.

Encountering ethical and responsibility issues

  As artificial intelligence technology continues to advance, we also need to think seriously about its ethical responsibilities. If an artificial intelligence system causes real harm because it makes a wrong decision, who bears the responsibility? There is a classic thought experiment in ethics, the trolley problem, which poses a binary ethical dilemma: a trolley’s brakes fail, and if no one intervenes it will run into 10 workers on the track ahead. If someone pulls the lever to divert the trolley onto the other track, the 10 workers will be saved, but one worker on that track will be killed. Should we save 10 people or 1? In such a situation, does an AI have the authority to decide? On what ethics would it base its behavior? For another example, in the field of art, if an artist obtains sketches from a computer, is he or she still a real artist? Conceiving a work is often the most difficult part of the creative process, and it is the core of human creativity. If it is left to artificial intelligence, the process looks more like an assembly line in which humans merely play the role of inspectors. Alternatively, artists may find themselves trapped in a parasitic cycle of technological advancement: as people keep correcting works created by AI, the AI’s creative skill improves and human involvement shrinks further. In that case, does an artist face an ethical dilemma in claiming copyright over the work?
Risk of loss of control

  As artificial intelligence technology continues to develop, some experts worry that it may slip out of control and pose a serious threat to human society. Take today’s deep learning systems, which face the “black box” problem: they process information in ways that are opaque even to experts. When AlphaGo defeated world Go champion Lee Sedol in 2016, it executed a completely unexpected move that turned the game around, demonstrating how complex networks make decisions in ways we cannot understand. Humans are still puzzling over that 37th move; in the context of vast networked super systems, the black-box problem will only become more serious. Today’s AI also uses self-improving algorithms, which scan a system for ways to improve it and ultimately upgrade it. Once the system improves, the algorithm runs again, producing another improved version, and so on indefinitely. Human observers cannot keep up with what they are dealing with, because the systems are constantly changing, growing more complex and more autonomous. “Self-improving” algorithms may therefore introduce autonomous behavior, and the end result may be that humans lose control of the super system.
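The “improve, then run the improver again” loop described above can be caricatured with a toy hill-climbing sketch. The objective function, step size, and iteration count are all invented for illustration; real self-improving systems are far more complex, but the pattern of repeated, unreviewed self-modification is the same.

```python
import random

def fitness(params):
    # Stand-in objective; in a real system this would be some measure
    # of the system's own performance. Peak is at params == [3.0, 3.0].
    return -sum((p - 3.0) ** 2 for p in params)

def improve(params, step=0.5):
    """One round of self-improvement: propose a random mutation and
    keep it only if it scores better than the current version."""
    candidate = [p + random.uniform(-step, step) for p in params]
    return candidate if fitness(candidate) > fitness(params) else params

random.seed(0)
params = [0.0, 0.0]
for _ in range(200):        # each pass starts from the improved version
    params = improve(params)

# After many self-applied rounds the system has drifted far from its
# initial state, and no single step was reviewed by a human.
print(fitness(params) > fitness([0.0, 0.0]))  # prints True
```

Even here, a human auditor who inspected version 1 learns little about version 200: each accepted mutation is small, but the compounding of hundreds of them is what makes the trajectory hard to follow.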
  We have created a new type of intelligence that humans can neither predict nor understand. Researchers don’t yet know if there are any hard limits to artificial intelligence beyond basic physical constraints on computers. If we do not understand this problem and develop effective preventive measures, it may have disastrous consequences for human society.
  The development of artificial intelligence has brought huge opportunities and challenges to society. Although it may cause a series of potential social problems, we cannot stop researching artificial intelligence technology just because we are worried about problems. Instead, we need to face these problems seriously and take proactive steps to solve them. Only in this way can we fully realize its potential in order to promote the healthy development of artificial intelligence and truly benefit human society.
