Development and Analysis of a Fact-Checking Software for Controlling the Spread of Online Misinformation Regarding COVID-19

Abstract

Throughout the COVID-19 pandemic, a plethora of false information has flooded social media and online platforms, causing widespread panic and disarray among communities. Such posts contain misleading pseudo-knowledge and typically aim to coax users into dismissing the severity of the virus. As a result, many individuals have decided to ignore important safety protocols, such as wearing masks, maintaining proper social distancing, or receiving the recently developed vaccine. This paper offers a potential solution for slowing the spread of this viral misinformation: a factual-analysis computer program that accumulates reliable knowledge from previous experiences of flagging false information. The program is written in Python and identifies misinformation based on key phrases used in posts. These phrases are stored in a continuously growing library within the program and accessed whenever a post must be flagged. This will hopefully bring some order to the chaos caused by social media, which will, in turn, provide users with honest content that encourages sound public health practices.

Keywords: Misinformation, Social Media, Python, Technology, Artificial Intelligence, COVID-19, Software, Fact-Checking, Program.

1. Introduction

At the beginning of the coronavirus outbreak, much was uncertain; little reliable information was being circulated regarding the nature of the virus. Consequently, new symptoms were reported constantly, safety protocols changed frequently, and society was left in confusion. Unfortunately, in this day and age, confusion breeds mayhem, particularly across social media. People began developing conspiracy theories about the virus and posting them on the internet, where their claims could attract tens of thousands of views within just a couple of hours. What is worse, viewers could share these posts for even more people to see, gaining more and more attention until a claim became the headline of a local news station. This gossip was not only fictitious; it was also a massive threat to public health. All kinds of posts were being made; there was talk of COVID-19 being a ploy by the government to instill fear in citizens and keep them in line. As a result, some people started to ignore guidelines established by the Centers for Disease Control and Prevention (CDC), such as wearing a mask or maintaining proper social distancing. Others began to feel helpless and started stockpiling household supplies until local grocery stores were wiped clean. When a vaccine was finally developed, rumors of government tracking chips hidden in the vaccine began making the rounds as well, which caused a significant decrease in the number of vaccines administered.

The problem here is that most social platforms offer no means of controlling this misinformation. Users are subject to no form of accountability or factual verification, and as a consequence, user integrity is always in question. Clearly, there is a dire need for some form of damage control; thus, this paper aims to create a program that automatically assesses post credibility by detecting key phrases used within a post. These phrases will come from a library coded within the program, which will be accessed when a post must be flagged. A flagged post signals the use of misinformation and causes the software to post a reply stating that the content is false, then report the user for spam. Additionally, the program will continuously update its library through experience, steadily improving its fact-checking accuracy. With this reliable software in place, social media will become a safer place for people to educate themselves with honest information regarding COVID-19. Not only will this lessen ignorance, but it may bring a quicker end to the worldwide coronavirus pandemic as well.
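The key-phrase approach described above can be sketched in a few lines of Python. The phrase library and function names below are illustrative assumptions, not the paper's actual implementation; a real deployment would persist the library to storage and call a platform's moderation API to reply and report.

```python
# Minimal sketch of the key-phrase flagging approach, assuming a
# hypothetical starter library of known misinformation phrases.
MISINFO_PHRASES = {
    "tracking chip",
    "covid is a hoax",
    "masks don't work",
}


def flag_post(text, library=MISINFO_PHRASES):
    """Return True if the post contains any known misinformation phrase."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in library)


def learn_phrase(phrase, library=MISINFO_PHRASES):
    """Grow the library with a newly confirmed misinformation phrase,
    modeling the program's learning-from-experience step."""
    library.add(phrase.lower())
```

A post such as "The vaccine has a tracking chip!" would be flagged, while an ordinary post would not; calling `learn_phrase` lets the library grow as new rumors are confirmed, which is the "continuously growing library" behavior the paper describes.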

2. Related Works

This is not the first attempt at controlling viral misinformation. In 2016, the Association for Computing Machinery (ACM) attempted to train machine-learning (ML) algorithms to find rumor-correcting tweets and identify the rumors they were correcting. As their research states, the purpose of this software was to "leverage the 'self-correcting' crowd" [1]. Essentially, this means they were trying to model their software after the honest members of the Twitter community, who were quick to point out bogus tweets. This proved ineffective, however, since such members were few and far between. In addition, the program was slow and ended up lagging behind the spread of the misinformation.

The ACM also talks about formal crowdsourcing, which “distribute[s] messages to paid or volunteer
