Artificial intelligence makes fake news more credible

January 17, 2026

In 2017, the term “fake news” was chosen as the new word of the year by the Language Council of Norway. But what are the linguistic features of fake news, and can fake news be detected on the basis of those features? Linguist Silje Susanne Alvestad examined this question in the project “Fakespeak – the language of fake news”. She and her fellow researchers studied the language of fake news in English, Russian and Norwegian.

The project draws on, among other things, research from the University of Birmingham, which examined articles by Jayson Blair, a former New York Times journalist. He lost his job in 2003 after it was revealed that he had written fake news.

“Researchers compared his true and false articles to see if they found any differences. One interesting finding was that he wrote primarily in the present tense when he lied, and in the past tense when he wrote authentic news,” Alvestad said.

More informal style in fake articles

They also found differences in the use of pronouns. In addition, the authentic articles had a higher average word length, while the fabricated texts had a more conversational, informal style and made extensive use of so-called emphatic expressions, for example “really” and “most”.
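
To make the kind of features described above concrete, here is a minimal sketch in Python of how average word length, first-person pronoun use, and the rate of emphatic expressions could be computed for a text. The word lists are illustrative placeholders, not the lexicons actually used in the Fakespeak project.

```python
import re

# Illustrative word lists -- NOT the lexicons used in the Fakespeak project.
EMPHATICS = {"really", "very", "most", "absolutely", "extremely"}
FIRST_PERSON = {"i", "me", "my", "we", "us", "our"}

def surface_features(text: str) -> dict:
    """Compute simple surface features of the kind the studies describe."""
    words = re.findall(r"[a-zA-Z']+", text.lower())
    n = len(words) or 1  # avoid division by zero on empty input
    return {
        "avg_word_length": sum(len(w) for w in words) / n,
        "first_person_rate": sum(w in FIRST_PERSON for w in words) / n,
        "emphatic_rate": sum(w in EMPHATICS for w in words) / n,
    }

print(surface_features("We really think this is absolutely the most important story."))
```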

Alvestad and her colleagues compared Blair’s texts with similar corpora in which the same person had written both real and fake news. They found that the linguistic characteristics of fake news vary depending on the author’s motivation for deceiving readers.

“Blair says in his autobiography that his motivation was primarily money, and we found, for example, that his fabricated articles contained few metaphors. When the motivation is ideological, on the other hand, more metaphors are used, often drawn from areas such as sports and war,” Alvestad said.

More categorical in fake news

Another important finding is that fake news can have a more categorical tone. They looked at stance, that is, how the writer expresses their attitudes, perceptions, and thoughts.

“In fake news, the author often gives the impression of being absolutely certain that what is being reported is true. This is called ‘epistemic certainty’. There is an over-representation of expressions of such certainty, for example ‘obviously’, ‘actually’, etc. This tendency is stronger in the Russian texts than in the English ones,” Alvestad said.
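
As a toy illustration of how such an over-representation could be measured, the sketch below counts epistemic-certainty markers per 1,000 words and compares two short passages. The marker list is an assumption for demonstration, not the researchers’ actual inventory.

```python
import re

# Illustrative marker list -- an assumption, not the project's inventory.
CERTAINTY_MARKERS = {"obviously", "clearly", "actually", "certainly", "undoubtedly"}

def certainty_rate(text: str) -> float:
    """Epistemic-certainty markers per 1,000 words."""
    words = re.findall(r"[a-zA-Z]+", text.lower())
    if not words:
        return 0.0
    return 1000 * sum(w in CERTAINTY_MARKERS for w in words) / len(words)

fabricated = "Obviously the officials lied. Clearly this is actually a cover-up."
authentic = "Officials said the report is under review and results are pending."
print(f"fabricated: {certainty_rate(fabricated):.0f} per 1,000 words")
print(f"authentic:  {certainty_rate(authentic):.0f} per 1,000 words")
```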

“We asked ourselves whether there is a universal language of fake news. We concluded that there is not. The linguistic characteristics of fake news vary within individual languages and between languages. They depend on context and culture,” Alvestad said.

A fact-checking tool developed

RAISING AWARENESS: Linguist Silje Susanne Alvestad hopes her research will increase awareness of the risks associated with large language models. Photo: Private.

This makes it all the more difficult to develop fact-checking tools for fake news based on linguistic features. Developing such a tool, in collaboration with computer scientists from the SINTEF research institute, was one of the objectives of the project. The team nevertheless managed to build a fact-checking tool, and it can be tested on the SINTEF website.

“From a linguistic point of view, we have criticized the fact that the definition of fake news in practice encompasses too many genres. This means that one cannot really know what the differences between fake and real news are due to. High-quality, balanced datasets are needed to develop robust fact-checking tools, as well as a targeted and sophisticated linguistic approach,” Alvestad argued.

AI misinformation

As the researchers worked on the Fakespeak project, developments in artificial intelligence (AI) accelerated and changed the fake news landscape. This laid the foundation for the follow-up project NxtGenFake, which involves identifying misinformation generated by AI. Alvestad and the other researchers are using material from Fakespeak to find the linguistic characteristics of AI-generated misinformation.

“Purely fabricated information, of which there was a lot six or seven years ago, may not have a big impact. A bigger problem is fake news that mixes true and false,” Alvestad said.

In NxtGenFake, they move away from the term fake news and talk about misinformation.

“Some of the information is true, but the whole truth is not included. It is refined, often placed in the wrong context, and frequently overlaps with propaganda. This mix means it easily slips under the radar of online verification mechanisms, which makes it particularly difficult to detect.”

Less variation in AI-generated propaganda

The NxtGenFake project will continue until 2029, but researchers already have some conclusions. For example, there is less variation in the use of persuasive techniques in AI-generated propaganda than in propaganda written by humans.

“Two types of techniques stand out in AI-generated texts. One is what we call appeal to authority, which concerns references to the source of the information. We note that these references are generic, that is, they usually appear in an indefinite form. One could, for example, say ‘according to researchers’ or ‘experts think’. Large language models probably make such moves because they have no relation to the world and do not know what is true and what is not. In this way, the claims become very difficult, if not impossible, to verify.”

The other technique is that AI-generated information containing propaganda elements ends differently than propaganda produced by humans. Such texts often end with formulations that the researchers call appeal to values: the argument is that something must be done to ensure, for example, increased growth, greater fairness or greater public trust.
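
A rough heuristic for spotting these two patterns might look like the following. The regular expressions are illustrative guesses at generic “appeal to authority” attributions and “appeal to values” closings, not the classifiers used in NxtGenFake.

```python
import re

# Indefinite attributions ("according to researchers", "experts think") --
# illustrative patterns, not the NxtGenFake classifier.
GENERIC_AUTHORITY = re.compile(
    r"\baccording to (researchers|experts|analysts|officials)\b"
    r"|\b(researchers|experts|analysts|officials) (say|think|believe|warn)\b",
    re.IGNORECASE,
)

# Closing appeals to values such as growth, fairness, or public trust.
APPEAL_TO_VALUES = re.compile(
    r"\bto ensure (greater|increased|more) (growth|fairness|public trust|transparency)\b",
    re.IGNORECASE,
)

text = ("Experts think the policy has failed. According to researchers, costs rose. "
        "Something must be done to ensure greater public trust.")

for m in GENERIC_AUTHORITY.finditer(text):
    print("generic authority:", m.group(0))
m = APPEAL_TO_VALUES.search(text)
if m:
    print("appeal to values:", m.group(0))
```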

AI-generated misinformation preferred

How, then, do people respond to AI-generated misinformation compared with misinformation written by humans? The researchers ran a test in which American respondents rated AI-generated and human-written texts on three metrics: credibility, emotional appeal, and informativeness. Respondents did not know the source of the texts.

The AI-generated misinformation was found to be both more credible and more informative than the misinformation written by humans. The researchers also asked which excerpts respondents would prefer to continue reading, and considerably more people said they would continue reading the AI-generated texts.
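
As a purely hypothetical illustration of this study design (the numbers below are invented, not the project’s data), mean scores per source could be compared like this:

```python
from statistics import mean

# Invented example ratings: (source, credibility, emotional appeal, informativeness).
# These numbers are NOT the study's data; they only illustrate the comparison.
ratings = [
    ("ai", 4.2, 2.9, 4.4), ("ai", 4.0, 3.1, 4.1),
    ("human", 3.4, 3.0, 3.5), ("human", 3.6, 3.2, 3.3),
]

for source in ("ai", "human"):
    rows = [r for r in ratings if r[0] == source]
    cred, emo, info = (mean(r[i] for r in rows) for i in (1, 2, 3))
    print(f"{source}: credibility={cred:.2f}, emotion={emo:.2f}, informativeness={info:.2f}")
```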

“We weren’t surprised that respondents preferred AI-generated texts. But I was personally a little surprised that AI-generated texts didn’t score high on emotional appeal. Instead, they were perceived as both more informative and more credible than texts written by humans,” Alvestad said.

This suggests that AI-generated misinformation may be harder to detect. Large language models can package misinformation and disinformation in genres that we trust by default. Alvestad believes it is important that we are aware of this.

“I hope that the results of the project can help raise awareness of the risks associated with large language models, particularly at a time when such tools are increasingly adopted.”
