FRIDAY, March 5, 2021 (HealthDay News) – Readers pay attention when social media sites label an article “unverified” or “suspicious,” a new study suggests.
But the way an article is presented – including author credentials and writing style – does not affect readers’ opinion of its credibility.
The results show that big tech companies like Facebook and Twitter have a responsibility to fight the spread of misleading and dangerous information, according to researchers at the University of Kansas.
“Whenever we see information that has been flagged, we immediately raise our skepticism, even if we don’t agree with it. Big tech companies have a very important role to play in ensuring a healthy and clean news environment,” said study co-author Hong Tien Vu, assistant professor of journalism and mass communication.
Although the study was conducted before the emergence of COVID-19, the findings are particularly relevant today, given the dangerous role that “fake news” can play in the midst of the pandemic. Concerns that fraudulent or deceptive vaccine information could hamper efforts to curb virus transmission have led Facebook, Twitter and YouTube to team up to tackle such misinformation.
For their study, the researchers shared eight versions of a fake article with 750 participants. The article falsely claimed that a lack of vitamin B17 could be a cause of cancer.
One version carried a doctor’s byline and included a brief description of his medical credentials. Another version described the author as a mother of two with a background in creative writing, and yet another said she was a lifestyle blogger.
Some versions of the article used a journalistic style, while others used more informal language.
Reader responses varied, the researchers said.
Participants with greater knowledge of social media evaluated the article more carefully and said they would be less likely to share it.
People interested in or looking for health information were no better able to determine the accuracy of the article, but were more likely to share it, even if they weren’t sure if it was true.
The author’s credentials and how the article was written had no significant bearing on how people judged its veracity, or on whether they would follow its recommendations or share it, the study authors said.
However, any sort of flagging that the article did not contain verified information made people much less likely to believe it, follow its recommendations, or share it, the researchers found.
The results are expected to be presented at the International Communication Association’s virtual conference, May 27-31.
“The results suggest that relying on audience members to do the work to determine fake news may be a long way to go. When people have to evaluate the credibility of news, it requires mental work. When browsing the web in general, we tend to rely on big tech companies to verify information,” Vu said in a university news release.
The results underscore the need for social media companies to verify information or to flag content that contains false, unverified or dangerous information, the study authors said.
Data and conclusions presented at meetings should be considered preliminary until peer-reviewed for publication in a medical journal.
The Pew Research Center has more on social media.
SOURCE: University of Kansas, press release, March 1, 2021