28 ISE Magazine | www.iise.org/ISEmagazine
Tracking storms
of misinformation
spread amid disasters
Machine learning can be used to identify ‘fake news’ shared via social media
By Kyle Hunt, Puneet Agarwal and Jun Zhuang
September 2019 | ISE Magazine 29
Over the last decade, social media has been increas-
ingly employed for sharing opinions, personal up-
dates and breaking news. Platforms such as Twitter
and Facebook allow for the delivery of important
information at extreme speeds, facilitating the effi-
cient dissemination of content to millions of users
around the world.
Given these benefits, social media platforms are often used
to spread emergency communications such as evacuation
plans, shelter information and weather updates. During natu-
ral disasters, acts of terrorism, chemical threats and other crisis
situations, millions of people around the world turn to social
media for the information they need to stay safe and up to
date.
Unfortunately, due to the unmoderated nature of social
media, misinformation has plagued the networks of platforms
such as Twitter. In the last few years, “fake news” has been a
trending phrase and topic across mass media, often identified
in politics and other controversial domains. Misinformation
and fake news are also spread across social media when infor-
mation integrity is crucial to the safety of the public, such as
during natural and manmade disasters. During these events,
timely and credible information is of the utmost importance
to those affected by the disasters, and also those following the
disaster-related news.
Examples of misinformation during disasters
On April 15, 2013, the United States was struck by an act of
terrorism when two homemade pressure cooker bombs were
detonated near the finish line of the Boston Marathon, killing
three people and significantly injuring hundreds more. Dur-
ing the chaos that ensued, many false rumors were spread. One
of the most prominent stated that an 8-year-old girl was killed
in the bombings while she was running in remembrance of the
2012 Sandy Hook school shooting victims.
Another false rumor took direct advantage of Twitter. A
fake account named @_BostonMarathon was created and
posted a tweet which read “For every retweet we receive we
will donate $1.00 to the #BostonMarathon victims.” Many
users ended up retweeting the post, believing it would aid re-
covery efforts. In fact, the account was not created to donate
money. Twitter eventually suspended the fraudulent account
and warnings were spread to look out for similar accounts. Be-
tween these two cases, millions of Twitter users were exposed
to false information.
On May 22, 2017, singer Ariana Grande was performing in
the Manchester Arena in England. When the concert conclud-
ed and attendees were beginning to leave the venue, a suicide
bomber detonated explosives attached to his body. The bomb-
ing led to 23 deaths and 139 injuries, the deadliest terrorist
attack in England since the 2005 London bombings. After the
bombing, a rumor was spread on Twitter and Facebook claim-
ing that unaccompanied children were being taken to safety
at the local Holiday Inn. Soon after the rumor was spread, a
Holiday Inn representative made a statement informing the
public that the rumor was false; there were no unaccompanied
children at the hotel.
On Aug. 25, 2017, Hurricane Harvey made landfall in Tex-
as. During the storm, there was legislation due to be passed
in Texas concerning immigration policies. As some people
began to inquire about eligibility requirements at evacuation
shelters, a false rumor began to proliferate throughout social
media and Texas that shelters were going to check IDs. This
rumor proved to be dangerous as many undocumented immi-
grants were afraid to go to shelters due to the potential threat
of deportation.
On the heels of Hurricane Harvey, Hurricane Irma was
generating immense damage across the Caribbean on its path
toward Florida. On Sept. 10, 2017, Irma made landfall in Cud-
joe Key, Florida, bringing deadly storm surges and rainfall.
Before Irma's landfall, a sheriff in Florida posted on Twitter
saying that he would be checking identifications at evacuation
centers in his jurisdictional county. Although the sheriff did
not spread false information, many inferred that he was check-
ing IDs to primarily scare undocumented immigrants from
seeking safety in those shelters, and this false rumor began to
spread both online and offline. The sheriff later clarified his
tweet, reassuring the community that he was not targeting the
immigrant population. Many additional tweets were posted
by other agencies and accounts in order to help comfort the
population and deliver the correct information.
The abundance of evidence showing the spread of misinfor-
mation during disasters demonstrates that social media users
should proceed with caution when believing, posting or reposting
information on these platforms. In many cases, major govern-
mental and nongovernmental organizations choose to inter-
vene when misinformation is spread in order to provide the
public with updated and valid information.
The importance of major agencies,
verified users
In most cases, misinformation propagates throughout social
media and other online platforms at extreme speeds, reaching
millions of people around the world. Given this threat, social
media consumers need timely and valid information to cre-
ate a safer online and offline environment. Typically, the
postings that debunk misinformation are made by major gov-
ernmental organizations and, in some cases, nongovernmental
organizations.
When false rumors were spread that undocumented immi-
grants could not enter shelters during both Hurricane Harvey
and Hurricane Irma, many agencies posted to Twitter in order
to comfort the public and offer correct information. Some gov-
ernmental agencies that posted include the Federal Emergency
Management Agency (FEMA), the United States Department
of Homeland Security Customs and Border Protection (DHS
CBP), DHS Immigration and Customs Enforcement (ICE),
and the cities of Houston and Miami.
Many news organizations also posted information to let the
population know that it was safe to seek shelter, including The
Miami Herald, The Washington Post, CNN and The Hill, among
many others. Governmental officials, such as Houston Mayor
Sylvester Turner, also had to make public announcements in
order to disprove the threatening information.
Likewise, in the Manchester Arena (2017) and Boston Mar-
athon (2013) bombings, many agencies, celebrities and public
figures made announcements and postings to debunk the mis-
information that penetrated social media platforms.
Research has shown that it is the verified users (those ac-
knowledged by Twitter as accounts of public interest) who
receive the most interaction when posting misinformation de-
bunking messages. In many cases, more than 100,000 users in-
teract with these postings by retweeting, liking and comment-
ing on the content. This behavior helps to further spread the
accurate and needed information through Twitter’s network.
Likewise, many users and agencies cite information from
external sources, such as news websites and government web-
sites, in order to offer credibility in their misinformation de-
bunking posts. This information is vital to the safety of social
media platforms.
For Hurricanes Sandy (2012), Maria (2017), Harvey (2017),
Irma (2017), Michael (2018) and Florence (2018), false rumors
and misinformation were exposed on FEMA's "Rumor Con-
trol" pages (see related story). On these web pages created
during or immediately after the hurricanes, FEMA keeps a
record of hurricane-related false rumors and offers valid infor-
mation alongside the different rumors. These web pages are
important resources for the public and especially social media
users to be aware of and explore before trusting disaster-related
information.
Assisting agencies via machine learning
Due to the speed, breadth and depth of information diffusion
across social media, it is increasingly important to develop and
utilize tools that can assist in the monitoring of information
and ultimately promote a safer online environment.
Technologies such as machine learning can be used to assist
agencies in the tracking of misinformation. In many disasters,
there are multiple false rumors being spread, and agencies have
to choose which rumors to combat with their limited resources.
A machine learning framework offers organizations and agen-
cies a tool to track identified misinformation on platforms such
as Twitter and make informed decisions on whether to use re-
sources in an attempt to debunk the false information.
By collecting Twitter data from previous disasters where
misinformation spread, machine learning models can be
trained to learn which tweets are spreading true informa-
tion, which are spreading false information and which contain
opinions or comments on the subject matter (shown in Figure 1).

FEMA addresses rumors on web page
The U.S. Federal Emergency Management Agency knows how the spread of
misinformation after a disaster can lead to chaos and panic. To counter this, it has set
up rumor control pages following each catastrophic event. It also has an overview page
at link.iise.org/FEMArumors intended to address common scuttlebutt and frequently
asked questions.
With hurricane season in full force in the United States, here are common topics the
FEMA page addresses:
Do Disaster Recovery Centers distribute grants or money? (Answer: They do not.)
Does registering multiple times for aid from FEMA help speed up the process?
(Answer: It does not.)
Does the U.S. Army Corps of Engineers’ Operation Blue Roof repair service charge
for work? (Answer: It does not, and any claims to the contrary likely are a scam.)
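Concretely, such a training set pairs each historical tweet with one of the three labels just described. A minimal sketch of one possible record format follows; the field names and example tweets are illustrative assumptions, not the authors' actual data:

```python
# Hypothetical record format for labeled historical disaster tweets.
# The label set mirrors the three classes described in the article:
# "true", "false" and "other" (opinions or comments).
from dataclasses import dataclass

@dataclass
class LabeledTweet:
    text: str
    label: str  # one of "true", "false", "other"

# Made-up examples for illustration only.
training_data = [
    LabeledTweet("Shelters are checking IDs at the door", "false"),
    LabeledTweet("FEMA: no ID is required to enter a shelter", "true"),
    LabeledTweet("Hope everyone stays safe tonight", "other"),
]

print(len(training_data))  # 3 labeled examples
```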
Supervised machine learning models, from relatively simple
random forests to architecturally complex deep neural net-
works, have achieved more than 90% accuracy in distin-
guishing among these different types of tweets.
After training these models with enough
historical data, they can be deployed to pre-
dict the veracity of newly emerging tweets
(see Figure 2).
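As a rough illustration of this kind of supervised pipeline, the sketch below trains a random forest on TF-IDF text features over a handful of invented tweets. The toy data, labels and pipeline choices are assumptions for illustration, not the models or results reported above:

```python
# A minimal sketch of a supervised tweet classifier: TF-IDF features
# feeding a random forest. The six toy tweets and labels are invented.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

train_tweets = [
    "Shelters will check IDs before letting anyone in",
    "FEMA confirms shelters do not check immigration status",
    "So worried about everyone evacuating tonight",
    "Retweet and we will donate one dollar to the victims",
    "Official update: evacuation route 10 is now open",
    "Praying for the families affected by the storm",
]
train_labels = ["false", "true", "other", "false", "true", "other"]

# TF-IDF turns each tweet into a sparse word-weight vector; the forest
# then learns to separate the three classes from those vectors.
model = make_pipeline(TfidfVectorizer(), RandomForestClassifier(random_state=0))
model.fit(train_tweets, train_labels)

# Predict the class of a newly emerging tweet.
print(model.predict(["Officials say shelters will not ask for ID"]))
```

In a real deployment the training set would come from tweets labeled during past disasters, and accuracy would be estimated on held-out data rather than the training examples.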
As misinformation is detected on social
media, or even in offline social networks,
agencies can deploy these trained machine
learning models on livestream tweets. By
querying a certain misinformation topic
on Twitter and feeding these tweets to the
models, the incoming tweets will be au-
tomatically labeled as true, false or other
(consisting of opinions or comments). The
agencies can then monitor the tweets and
analyze how many users are spreading the
false information, as well as how many us-
ers are posting true tweets regarding the
false information.
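The monitoring step just described can be sketched as labeling each incoming tweet and tallying the results per class. Here `classify` is a crude keyword stand-in for a trained model's predict step; the rule and the example tweets are assumptions for illustration only:

```python
# Sketch of the monitoring loop: label each incoming tweet on the
# queried rumor topic and keep running counts per class.
from collections import Counter

def classify(tweet: str) -> str:
    # Crude keyword rule standing in for a trained model.
    text = tweet.lower()
    if "not" in text or "no id" in text:
        return "true"   # debunking / correct information
    if "check ids" in text:
        return "false"  # spreading the rumor
    return "other"      # opinion or comment

incoming_stream = [
    "Shelters will check IDs, stay away!",
    "Officials confirm shelters do NOT check IDs",
    "This storm is terrifying",
]

counts = Counter(classify(t) for t in incoming_stream)
print(counts)  # Counter({'false': 1, 'true': 1, 'other': 1})
```

The running counts give an agency the two quantities it needs: how many users are spreading the falsehood and how many are posting corrections.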
FIGURE 1
Misinformation tracking process
The decision to pursue debunking information spread on social media depends first on separating messages that are valid, false and neutral
(opinions or comments).

FIGURE 2
The process of debunking
Addressing disaster-related misinformation begins with collecting data from social
media posts by identifying key words and phrases, then using machine learning to sort
through what is true or false before deciding how to respond.

If enough users are already posting valid information and few
are continuing to spread falsehoods, the agency may choose
not to use its resources to correct them. If many users are
continuing to spread the misinformation and few have posted
the truth, the agency may choose to debunk the misinforma-
tion, clarify any confusion and counter any malicious intent.
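This intervene-or-not decision can be sketched as a simple threshold rule; the 2-to-1 ratio and the example counts below are illustrative assumptions, not a rule from this article:

```python
# Illustrative threshold rule: recommend debunking only when tweets
# spreading the falsehood outnumber corrections by more than `ratio`
# to one. The 2.0 default is an arbitrary assumption.
def should_debunk(false_count: int, true_count: int, ratio: float = 2.0) -> bool:
    if true_count == 0:
        return false_count > 0  # any uncorrected falsehood warrants action
    return false_count / true_count > ratio

print(should_debunk(false_count=500, true_count=100))  # True: rumor dominates
print(should_debunk(false_count=50, true_count=400))   # False: corrections dominate
```

In practice an agency would tune the threshold to its available resources and the severity of the rumor.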
Machine learning offers a high-speed, efficient method to
facilitate this. As long as there is a demand for credible infor-
mation, major emergency organizations and decision makers
could benefit greatly from machine learning and its broad ap-
plications.
Improvements in disaster research and disaster practices,
whether small or large, can make a significant impact on the
lives of people around the world. Having knowledge and
awareness of modern threats, such as false information spread-
ing on social media platforms, can create a safer environment
for the public.
Employing advanced technologies to improve the current
state of disaster management can offer modern and dynamic
solutions that are readily adoptable by agencies and companies
around the world.
Acknowledgement: This research was partially supported by the
National Science Foundation under Award No. 1762807. Any opin-
ions, findings, and conclusions or recommendations expressed in this
material are those of the authors and do not necessarily reflect the views
of the National Science Foundation.
Kyle Hunt is a senior undergraduate student in the Department of
Industrial and Systems Engineering at the University at Buffalo. In
spring 2020, he will begin Ph.D. studies in industrial and systems
engineering, focusing in operations research. His research has been
funded twice by National Science Foundation REU awards, and twice
from the University at Buffalo’s Center for Undergraduate Research
and Creative Activities. In April 2019, Hunt received the University
at Buffalo’s Research and Scholarship Award of Distinction, and in
May 2019 he received the School of Engineering and Applied Sciences
Dean's Achievement Award. His research interests are in emergency
management, decision-making and sports analytics.
Puneet Agarwal is a Ph.D. student specializing in operations research
in the Department of Industrial and Systems Engineering at Uni-
versity at Buffalo. His research interest lies in the field of disaster risk
management and strategic decision-making. He has had four papers
published in international journals. In 2019, he received the Geo-
hazards Research Award from the University at Buffalo's Center for
Geohazards Studies to support his research on misinformation diffusion
during disasters. He also received a Graduate Achievement Award for
his excellent performance in graduate studies.
Jun Zhuang is a professor in the Department of Industrial and Sys-
tems Engineering, School of Engineering and Applied Sciences at the
University at Buffalo. He earned his Ph.D. in industrial engineer-
ing in 2008 from the University of Wisconsin-Madison. His long-
term research goal is to integrate operations research, big data analytics,
game theory and decision analysis to improve mitigation, preparedness,
response and recovery for natural and manmade disasters. Zhuang's
research has been supported by the U.S. National Science Foundation,
the U.S. Department of Homeland Security, the U.S. Department
of Energy, the U.S. Air Force Ofce of Scientific Research and the
National Fire Protection Association. Zhuang has published 90-plus
peer-reviewed journal articles in Operations Research, IISE Trans-
actions, Risk Analysis, Decision Analysis and European Jour-
nal of Operational Research, among others.
‘Sharknado’ hoax
just won’t swim away
One persistent hoax spread through social media during recent
hurricanes is the oft-repeated claim that sharks
have come ashore in the flooded waters and are swimming
through highways and city streets.
In fact, the same doctored photo of a shark reportedly seen on
a flooded highway has been shared during different storms and
discredited each time. The rumor may owe its origin to the sci-fi
“Sharknado” movie series in which killer sharks rain down on
victims from a giant storm.
The original meme sent by a Scottish blogger after Hurricane
Harvey in 2017 was retweeted 80,000 times and addressed as
bogus by major media outlets such as The Washington Post. But
that didn’t stop the same image from showing up after storms
in subsequent years, each post claiming that oceanic predators
were prowling flooded neighborhoods.
An online meme generator called Break Your Own News
was the source of a bogus TV news headline during Hurricane
Florence claiming, “Florence Now Contains Sharks.”
“There are sharks in the water, that’s not a rumor. But, you
know, I don’t think there’s a sharknado effect or anything like
that,” said Jeffrey Byard, associate director of FEMA's Office of
Response and Recovery, as reported in the Post.
“Rumors for the sake of rumors doesn’t help things. That’s
not being a team player. That’s just clouding bandwidth. ... That’s
not needed.”