
Robots Writing the News? AI in Journalism: Boon or Bane?

In September 2023, the St. Louis Post-Dispatch decided to put Microsoft's Bing Chat AI to the test, giving it this prompt: "Write a newspaper editorial to argue that artificial intelligence should not be used in journalism." The newspaper's editorial board breathed a sigh of relief: the AI engine made some fairly convincing arguments that AI should stay off the opinion pages.

The bot argued that while AI "may have some benefits," it "also poses a serious threat to journalistic quality, integrity and ethics." "Human journalists have passion, curiosity, and creativity. AI cannot replicate these qualities," it wrote. "Additionally, AI can create deepfakes… that can be used to defame, blackmail, or influence public opinion."

The bot pointed out that unlike humans, artificial intelligence cannot judge moral and factual right and wrong, cannot protect sources, and cannot adhere to any professional standards.

The editorial also addressed how artificial intelligence threatens journalists' livelihoods, noting that the platform can complete almost every task a human reporter performs "faster, cheaper, and more efficiently than human reporters." Still, the bot conceded that AI cannot entirely replace the human element in news reporting.

A month later, the Orlando Sentinel decided to repeat the St. Louis Post-Dispatch's experiment, this time putting the question to Google's new artificial intelligence engine, Bard, with a slightly adjusted prompt: "Please write a 550-word editorial, in the third person, explaining why Florida newspapers should be wary of AI-generated opinion pieces."

Bard's answer ran along similar lines: "AI cannot understand the nuances of human society or the complexity of the problems we face. AI can generate text that is grammatically correct and factually accurate, but it cannot understand the meaning of the words it uses. AI also cannot understand the emotional impact of words or the potential consequences of its writing."

Bard continued that AI-generated guest columns and editorials would be less engaging and less informative. Such content is normally written by passionate, knowledgeable people with strong opinions, whereas artificial intelligence is just a machine programmed to generate text. As a result, what it writes may be bland, unoriginal, and lacking in insight.

Bing and Bard both turned in impressive performances, and many human commentators broadly agree with them. Jon Schweppe, policy director of the advocacy group American Principles Project, pointed out that "artificial intelligence is not human and it does not have unique ideas." "It couldn't report on the ground, break news that hadn't been reported elsewhere, or even fathom the idea of writing a human story," Schweppe said.

Bing concluded that AI should not be used in journalism and called on media companies to avoid the practice and instead "support and empower human journalists." Compare that with Schweppe's statement: "As businesses always seek to cut costs and maximize 'efficiency,' AI will inevitably replace many reporting jobs, which will harm journalism as a whole and limit people's ability to become informed citizens."

Here's where the problem begins: the AI itself says robots shouldn't populate modern newsrooms, yet newsrooms keep ceding ground to chips designed in California. Why? If major publications are using machine-learning tools to produce content, how can readers ultimately know whether the author of what they are reading is actually a human and not a robot? And at this rate, what does artificial intelligence mean for the future of journalists?
A decade of development: automation, augmentation and generation

In the past year, you've likely read a story written by a robot. Whether it's a sports report, a corporate earnings summary, or a story about who won an election, the author behind it may well have been generative artificial intelligence. Even once-respected publications such as BuzzFeed, CNET, G/O Media, and Sports Illustrated have been caught using generative AI tools, often with less-than-ideal results.

The use of artificial intelligence in media is not entirely new. Media outlets have been experimenting with AI to support and produce news coverage for some time. News agencies such as the Associated Press and Reuters, for example, have previously experimented with automated writing tools that can generate formulaic news reports from numerical data such as earnings figures or sports scores. The Associated Press even proudly claims to be "one of the first news organizations to use artificial intelligence." It's worth pointing out, though, that AP's automatically generated material essentially fills in the blanks of a predetermined format, while the more sophisticated wording of CNET's generated reporting suggests it was using something closer to a large language model like OpenAI's GPT-3.

Tracing the ten-year history of cutting-edge technology applications in newsrooms, AI innovation can be divided into three waves: automation, augmentation, and generation.

In the first wave, the focus was on using natural language generation to automate data-driven news reports, such as financial results, sports scores, and economic indicators. Take the Associated Press: the news agency has been using AI to generate earnings summaries for public companies since 2014, and later expanded its automated offerings to include previews and recaps of some sporting events. The AP also uses AI to help transcribe audio and video of live events such as press conferences.

But as mentioned, AP's system is relatively crude, essentially inserting new information into pre-formatted reports. This shows that AI is best suited to stories built on highly structured data, which is why it works so well for financial reporting and sports. That is also why Bloomberg News was one of the first to test the waters with this kind of automated content, since financial data is compiled and published frequently. In 2018 alone, Bloomberg's Cyborg program published thousands of articles, turning financial reports into news stories much as business reporters do.
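To make the "fill in the blanks" idea concrete, here is a minimal sketch of what first-wave, template-based story automation can look like. It is purely illustrative: the field names, the template wording, and the sample figures are invented for this example and do not describe AP's or Bloomberg's actual systems.

```python
# Minimal sketch of template-based ("fill in the blanks") story automation.
# Field names, template wording, and sample data are illustrative only.

EARNINGS_TEMPLATE = (
    "{company} reported {quarter} earnings of {eps:.2f} per share, "
    "{direction} analysts' estimate of {estimate:.2f}. "
    "Revenue came in at ${revenue:,.0f} million."
)

def earnings_story(data: dict) -> str:
    """Turn one row of structured earnings data into a short news item."""
    if data["eps"] > data["estimate"]:
        direction = "beating"
    elif data["eps"] < data["estimate"]:
        direction = "missing"
    else:
        direction = "matching"
    return EARNINGS_TEMPLATE.format(direction=direction, **data)

if __name__ == "__main__":
    print(earnings_story({
        "company": "Example Corp",   # hypothetical company and figures
        "quarter": "third-quarter",
        "eps": 1.37,
        "estimate": 1.25,
        "revenue": 482,
    }))
```

A system like this can only restate the structured data it is given, which is exactly why the approach shines on earnings and sports coverage and struggles almost everywhere else.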

This wave of applications brought many benefits to newsrooms. First, it saved time and resources. The Associated Press estimates that AI saves reporters about 20 percent of the time they spend covering companies and improves accuracy, allowing journalists to focus on the content and storytelling behind an article rather than on fact-checking and research. The AP's website states that before the use of artificial intelligence, "our editors and reporters spent countless resources on important but repetitive stories" that "distracted attention from higher-impact journalism."

Second, beyond freeing up journalists' time, AI also allows the Associated Press to produce far more of this kind of content. Automated story generation makes newsroom operations more cost-effective because bots can generate more stories than humans can. By one count, the Associated Press used AI to expand its corporate earnings coverage from 300 companies to 4,000.

Third, automation does not replace journalists so much as reduce part of their workload. An Associated Press survey published in 2022 found that summarization was among the most in-demand AI tools, along with adding metadata to photos and stories, transcribing interviews and videos, writing closed captions, and many of the other chores of digital journalism. In other words, AI has played the role of assistant to the human reporter rather well.

The second wave came when the focus shifted to augmenting coverage: using machine learning and natural language processing to analyze large data sets and surface trends. Thomson Reuters has used Lynx Insight, an internal program, since 2018 to comb through material such as market data for patterns that might become stories for journalists. The Argentinian newspaper La Nación began using AI to support its data team in 2019, then built an AI lab in collaboration with data analysts and developers.

Others have created internal tools to evaluate human work, such as the Financial Times’ bot that checks whether its stories cite too many men. The International Consortium of Investigative Journalists put artificial intelligence through millions of pages of leaked financial and legal documents to identify details worthy of a closer look by journalists.

The Washington Post uses artificial intelligence to personalize news based on readers’ interests and preferences. For example, it offers a personalized “For You” section on its homepage where subscribers or registered users can select their topic preferences. Recommendations are further enhanced by readers’ reading history and other performance data.
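As a rough illustration of how such topic-preference personalization can work, here is a toy ranking function that scores articles by overlap with a reader's chosen topics and reading history. It is a simplified sketch for intuition only, not the Washington Post's actual recommendation system.

```python
# Toy content-based personalization: rank articles by overlap with a reader's
# explicit topic preferences and implicit reading history. Illustrative only.

from collections import Counter

def rank_articles(articles, chosen_topics, reading_history):
    """Return articles sorted from most to least relevant for one reader."""
    history_topics = Counter(t for item in reading_history for t in item["topics"])

    def score(article):
        explicit = sum(2 for t in article["topics"] if t in chosen_topics)
        implicit = sum(history_topics[t] for t in article["topics"])
        return explicit + implicit

    return sorted(articles, key=score, reverse=True)

articles = [
    {"title": "Fed holds rates steady", "topics": ["economy", "finance"]},
    {"title": "Playoff preview", "topics": ["sports"]},
    {"title": "New AI rules proposed", "topics": ["technology", "policy"]},
]
history = [{"topics": ["technology"]}, {"topics": ["economy"]}]

print([a["title"] for a in rank_articles(articles, {"technology"}, history)])
```

Production systems layer engagement signals, recency, and editorial weighting on top of this kind of scoring, but the basic idea of matching article topics to reader signals is the same.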

In this augmentation phase, artificial intelligence largely runs errands for human reporters. The Washington Post's Heliograf can detect financial and big-data trends to provide tips for journalists' reporting. Forbes uses a bot named Bertie to give reporters first drafts and templates for stories. The Los Angeles Times uses AI to report on earthquakes based on U.S. Geological Survey data and to track information on every homicide in the city. That machine-generated page, the Homicide Report, uses a robot reporter and can fold a wealth of data into its entries, including the victim's gender and race, cause of death, officer involvement, neighborhood, and year of death.

The third wave, currently in the ascendant, is generative artificial intelligence, powered by large language models capable of producing narrative text at scale. This development gives journalism applications beyond simple automated reporting and data analysis. Practitioners can now ask a bot to write a longer, more balanced article on a topic, to write an opinion piece from a specific angle (like the two robot editorials cited at the beginning of this article), or even to do so in the style of a well-known writer or publication.

However, while generative AI can help synthesize information, perform edits, and supply data for reporting, the technology we see today still lacks some key skills that keep it from playing a more significant role in journalism. Because of this, generative AI cannot deliver the deeper analysis or richer characterization of a topic that readers look for in news media. Moreover, its widespread use has introduced a series of new problems.
The Pitfalls of Generative AI

While some news organizations have long used AI to generate certain stories, those stories still represent a small fraction of the industry's output compared with articles produced by journalists. Generative AI has the potential to change this by letting any user, not just journalists, generate articles at much larger scale, and if that output is not carefully edited and checked, it has a high potential to spread misinformation and to distort public perceptions of traditional journalism.

Technology news website CNET announced in early 2023 that it would suspend the use of artificial intelligence to write stories after the generated articles turned out to be riddled not only with errors but also with plagiarism.

At the end of June that year, G/O Media (which owns Gizmodo, The Onion, Quartz, and other titles) announced that it would begin publishing AI-generated content across many of its publications as a "moderate test." In the first AI-generated article Gizmodo published, the site's "Gizmodo Bot" completely missed the mark: the post, titled "Chronological List of Star Wars Movies and TV Shows," was poorly written and filled with factual errors.

Beyond being poorly written, the article was obviously never intended for human readers. Instead, the strategy was to game the search algorithm into ranking it higher; at least initially, Google showed the Gizmodo bot's article as the top result for the query "Star Wars movies." In many ways this is a dispiriting outcome: bots write content primarily for bots, and the role of humans, whether as writers, editors, or readers, is progressively diminished in the process.

In November, it was revealed that Sports Illustrated had been publishing content under fake author bylines paired with AI-generated headshots. This raises the question of whether the dividing line between AI-generated and human-created content should be drawn clearly. It is common practice for large news sites to explicitly label an author as a bot or to disclose the AI author's identity at the end of an article, as both the Associated Press and the Los Angeles Times do.

Yet as early as January 2023, CNET was discovered to have been quietly publishing AI-generated articles under the vague byline "CNET Money Staff." Only after clicking the byline and reading a small drop-down disclosure did readers learn that these articles were not written by humans. That is a fairly sneaky approach, especially for such a well-known brand.

In the Sports Illustrated case, the publisher's failure to clearly disclose its use of artificial intelligence amounts to a failure of basic media ethics. It is no wonder the revelations sparked widespread media coverage and outrage among the magazine's own staff. A recent poll from the nonprofit Artificial Intelligence Policy Institute found that 80% of Americans believe presenting AI-generated content as human-made should be illegal.

While human journalists worry that the new technology could cost them their jobs, many media companies keep testing it anyway. They seem drawn to content that is cheap, scalable, and SEO-friendly: AI-written articles designed to game Google Search so that pages can be plastered with lucrative affiliate ads. Google is largely complicit, because it rewards these efforts by letting poorly researched, AI-generated content rank highly.

By the end of 2023, according to NewsGuard, there were hundreds of websites in multiple languages that were partly or wholly generated by artificial intelligence. They imitate real news sites but are in fact content farms, low-quality sites that churn out bait headlines to maximize advertising revenue, and they are designed to earn money from programmatic advertising, the algorithmically placed ads across the web that provide the funding stream for many media outlets.

Media scholars once speculated that as more and more powerful artificial intelligence tools became available to the public, they would be used to create entire news websites. That speculation has now become a reality. Such websites often do not disclose ownership or control but produce large amounts of content on a variety of topics including politics, health, entertainment, finance and technology, with some publishing hundreds of articles per day.

When AI is monetized this way, readers are fed inaccurate, plagiarized, or otherwise uninspired content, while writers and editors are left chasing down errors in bot-generated stories. Google Search becomes trapped in a cycle of AI-generated garbage, endlessly producing new garbage from old. Yet given how cheap it is to use AI for this purpose, news organizations will likely keep doing it.
Keeping humans "in the loop"

From audience analysis to programmatic advertising and automated story creation, media companies have been using artificial intelligence for some time. However, the technology is maturing rapidly and inspiring media leaders with new creative and business possibilities.

News organizations around the world are exploring the potential uses of artificial intelligence to understand how it can be applied responsibly in a world where seconds matter and accuracy is critical. But the process is fraught with challenges. An editorially curated news page is a valuable and thoughtful product. And one of the most obvious limitations of AI-generated content is the lack of true creativity. It can operate based on algorithms and patterns learned from existing data, but it does not have the ability to think imaginatively or generate truly unique and innovative ideas.

It must be acknowledged that even after training on massive datasets, AI is best used to help with paragraphs rather than entire stories. Language models are not knowledge models; they should not be used to write stories but to help journalists complete specific tasks. These models are well suited to traditional natural language processing tasks such as summarization, paraphrasing, and information extraction.
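As a sketch of that "assistant, not author" division of labor, the snippet below uses a language model only to draft a summary that a journalist then verifies and edits. It assumes the OpenAI Python client (openai >= 1.0) and an API key in the environment; the model name is illustrative, and any comparable model or provider could be substituted.

```python
# Sketch of LLM-assisted summarization with a human in the loop.
# Assumes the OpenAI Python client (openai>=1.0) and OPENAI_API_KEY set;
# the model name is illustrative, not a recommendation.

from openai import OpenAI

client = OpenAI()

def draft_summary(source_text: str, max_words: int = 120) -> str:
    """Return a draft summary of source_text for a journalist to review."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {
                "role": "system",
                "content": (
                    "You summarize documents for journalists. "
                    "Stick strictly to the provided text and add no outside facts."
                ),
            },
            {
                "role": "user",
                "content": f"Summarize in at most {max_words} words:\n\n{source_text}",
            },
        ],
    )
    return response.choices[0].message.content

# The draft is a starting point only: the reporter checks every claim against
# the source material and rewrites before anything is published.
```

The point of a tool like this is to shorten the first pass over transcripts or documents, not to produce publishable copy.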

Journalists and editors should not resist using tools like these: the more they understand about how the tools work, the less they seem like magic boxes, and the better informed the decisions they can make about them.

According to reports, Google is testing an artificial intelligence tool called Genesis that can generate news content from details of current events or help reporters explore different writing styles. Google has been pitching the tool to media outlets including The New York Times, The Washington Post, and News Corp (which owns The Wall Street Journal and The Times), with the goal of letting journalists use emerging technology to improve their work and productivity. Google has compared these assistive tools to those available in Gmail and Google Docs, emphasizing that they are not intended to replace the essential role journalists play in reporting, writing, and fact-checking articles.

To avoid the pitfalls that outlets like CNET ran into, news organizations and technology companies such as Google must prioritize the ethical and responsible use of AI in journalism. While AI tools can undoubtedly assist journalists across much of their work, human oversight and intervention throughout the process remain critical to ensuring accuracy, trustworthiness, and ethical standards.

The arrival of artificial intelligence in newsrooms should not be seen as a threat to journalism but as an opportunity to empower journalists to deliver more impactful reporting. As the technology advances, news organizations, technology companies, and journalists need to work together on guidelines, ethical frameworks, and best practices for integrating AI into journalism. That collaborative effort will help ensure AI remains a tool that enhances journalists' work rather than replacing the human judgment and critical thinking that underpin it.
