In the age of social media, the notions of truth, information, and knowledge are all changing. These notions were once amorphous and invisible – the kinds of airy topics only philosophers and a few scientists studied. But today truth, information, and knowledge are all represented, constructed, and battled over online. Page views, shares, and reactions clue individuals and companies in to what spreads from machine to machine and mind to mind. User-editable online content is negotiated and changed in real time. In this chapter we’ll look at the problems and opportunities social media affords in relation to truth and knowledge.
Knowledge is always based on multiple pieces of information, and usually involves finding coherence across them when they conflict.
“Fake news” and “post-truth”
Much has been made in recent years of “fake news.” This is a term, favored by the President of the United States among others, that circulates ubiquitously through social as well as traditional media. In 2016, Oxford Dictionaries presented “post-truth” as its “word of the year.” But what do these terms mean, and what do they have to do with social media?
To understand these terms, we have to look closely at what we expect with the word “news” and notions of truth and “fake”-ness. These conversations start with the concepts of objectivity and subjectivity.
For the election-related online public, I chose Settle for Biden. Settle for Biden is a Twitter, Instagram, and Facebook account as well as a hashtag. While I have lived in the United States for the past five years and am up to date with politics, I am not a United States citizen and cannot vote in any of the elections. Therefore, I am not a part of this public and instead am just an observer of it.
Settle for Biden is a grassroots group of former Elizabeth Warren and Bernie Sanders followers who understand Joe Biden’s flaws but believe that the United States will not last four more years with Donald Trump as president.
When exploring this public on Twitter, I found that one of the main goals of the Twitter account was to stop fake news about Biden. Fake news, as defined by Diana Daly, is “a term recently popularized by politicians to refer to stories they do not agree with.” On Twitter, the Settle for Biden page has retweeted and commented on several tweets from news stations and famous people, correcting them on their information. In an era when information spreads online so quickly, it is easy for fake news to circulate without the general public realizing it is fake.
After diving deeper into the spread of fake news, I came to the conclusion that there is a lot of misinformation and disinformation in the tweets retweeted by the Settle for Biden Twitter account. Misinformation, as defined by Diana Daly, is “inaccurate information spread without the intention to deceive,” and disinformation is “information intended to deceive those who receive it.” Many of the retweeted quotes come from famous people and news stations using their platforms to disinform others. Many United States citizens will read these tweets and instantly believe them simply because they come from a famous person or a news station; the sender’s standing in society alone can make whatever they say seem true. Alongside disinformation, Settle for Biden also retweets posts from ordinary citizens who tweet information they may believe is true but that is actually incorrect, or who make up their own assumptions about Biden and his policies, essentially misinforming society. Disinformation and misinformation are the two main reasons the Settle for Biden account retweets these posts: to prove them wrong and give society the correct information.
These behaviors and strategies on the Settle for Biden Twitter page reflect the era we live in. Every part of the election can be found online, which shows how easy it is to spread fake news and to be misinformed or disinformed. It is important to check the reliability of a source and compare it with others to see whether they report similar or different information, in order to understand what the truth really is.
About the Author
Issy Brooker was born and raised in Kent, England and moved to the United States in 2012. Issy Brooker is currently 19 years old and a first year student at the University of Arizona.
Objectivity and subjectivity
To be objective is to present a truth in a way that would also be true for anyone anywhere; so that truth exists regardless of anyone’s perspective. The popular notion of what is true is often based on this expectation of objective truth.
The expectation of objective truth makes sense in some situations – related to physics and mathematics, for example. However, humans’ presentations of both current and historic events have always been subjective – that is, one or more subjects with a point of view have presented the events as they see or remember them. When subjective accounts disagree, journalists and historians face a tricky process of figuring out why the accounts disagree, and piecing together what the evidence is beneath subjective accounts, to learn what is true.
Multiple truths = knowledge production
In US society, we have not historically thought about knowledge as being a negotiation among multiple truths. Even at the beginning of the 21st century, the production of knowledge was considered the domain of those privileged with the highest education – usually from the most powerful sectors of society. For example, when I was growing up, the Encyclopedia Britannica was the authority I looked to for general information about everything. I did not know who the authors were, but I trusted they were experts.
Enter Wikipedia, the online encyclopedia, and everything changed.
A YouTube element has been excluded from this version of the text. You can view it online here: https://opentextbooks.library.arizona.edu/hrsm/?p=77
The first version of Wikipedia was founded on a more similar model to the Encyclopedia Britannica than it is now. It was called Nupedia, and only experts were invited to contribute. But then one of the co-founders, Jimmy Wales, decided to try a new model of knowledge production based on the concept of collective intelligence, written about by Pierre Lévy. The belief underpinning collective intelligence, and Wikipedia, is that no one knows everything, but everyone knows something. Everyone was invited to contribute to Wikipedia. And everyone still is.
When many different perspectives are involved, there can be multiple and even conflicting truths around the same topic. And there can be intense competition to put forth some preferred version of events. But the more perspectives you see, the more knowledge you have about the topic in general. And the results of negotiation between multiple truths can be surprisingly accurate when compared with known truths. A 2005 study in the prominent journal Nature comparing the accuracy of the Encyclopedia Britannica and Wikipedia found they had around the same numbers of errors and levels of accuracy.
What are truths?
A YouTube element has been excluded from this version of the text. You can view it online here: https://opentextbooks.library.arizona.edu/hrsm/?p=77
And the third ingredient of a truth? That is you, the human reader. As an interpreter, and sometimes sharer/spreader of online information and “news”, you must keep an active mind. You are catching up with that truth in real-time. Is it true, based on evidence available to you from your perspective? Even if it once seemed true, has evidence recently emerged that reveals it to not be true? Many truths are not true forever; as we learn more, what once seemed true is often revealed to not be true.
Truths are not always profitable, so they compete with a lot of other types of content online. As a steward of the world of online information, you have to work to keep truths in circulation.
Infographic by Diana Daly based on the article by Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146-1151.
Why people spread “fake news” and bad information
“Fake news” has multiple meanings in our culture today. When politicians and online discussants refer to stories as fake news, they are often referring to news that does not match their perspective. But there are news stories generated today that are better described as “fake” – based on no evidence.
So why is “fake news” more of an issue today than it was at some points in the past?
Well, historically “news” has long been the presentation of information on current events in our world. In past eras of traditional media, a much smaller number of people published news content. There were codes of ethics associated with journalism, such as the Journalist’s Creed written by Walter Williams in 1914. Not all journalists followed this or any other code of ethics, but in the past, those who behaved unethically were often called out by their colleagues and unemployable with trusted news organizations.
Today, thanks to Web 2.0 and social media sites, nearly anyone can create and widely circulate stories branded as news; the case study of a story by Eric Tucker in this New York Times lesson plan is a good example. And the huge mass of “news” stories that results involves stories created based on a variety of motivations. This is why Oxford Dictionaries made the term post-truth their word of the year for 2016.
People or agencies may spread stories as news online to:
- spread truth
- influence others
- generate profit
Multiple motivations may drive someone to create or spread a story not based on evidence. But when spreading truth is not one of the story creators’ concerns, you could justifiably call that story “fake news.” I try not to use that term these days though; it’s too loaded with politics. I prefer to call “news” unconcerned with truth by its more scientific name…
Bullshit!
Bullshit is a scientific term for information spread without concern for truth.
Think I’m bullshitting you when I say bullshit is the scientific name for fake news? Well, I’m not. There are information scientists and philosophers who study different types of bad information, and here are basic overviews of their classifications for bad information:
- misinformation = inaccurate information; often spread without intention to deceive
- disinformation = information intended to deceive
- bullshit = information spread without concern for whether or not it’s true
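These three categories can be distinguished along two dimensions: whether the spreader cares about truth at all, and whether they intend to deceive. As a rough illustration only – this is a simplification of the definitions above, not an established taxonomy from information science, and the function and its labels are hypothetical – the decision logic might be sketched like this:

```python
def classify(accurate: bool, intends_to_deceive: bool, cares_about_truth: bool) -> str:
    """Illustrative classifier for the three kinds of bad information above."""
    if not cares_about_truth:
        return "bullshit"          # spread with no concern for truth either way
    if intends_to_deceive:
        return "disinformation"    # deliberately deceptive
    if not accurate:
        return "misinformation"    # wrong, but not meant to deceive
    return "information"           # accurate and honestly shared

# A profit-driven clickbait "news" factory doesn't care whether its story is true:
print(classify(accurate=False, intends_to_deceive=False, cares_about_truth=False))
```

Note that the check for concern-with-truth comes first: by these definitions, a bullshitter may even spread something accurate, and it is still bullshit, because truth was never the point.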
Professors Kay Mathiesen and Don Fallis at the University of Arizona have written that much of the “fake news” generated in the recent election season was bullshit, because producers were concerned with winning influence or profit or both, but were unconcerned with whether it was true.
A YouTube element has been excluded from this version of the text. You can view it online here: https://opentextbooks.library.arizona.edu/hrsm/?p=77
It is not always possible to know the motivation(s) behind a story’s creation. Indeed, it can be difficult to determine the source of information on social media. But there have been some cases where identified sources were clearly trying to deceive, or were bullshitting – creating content that would spread fast without caring whether it was true.
Cases of bad information spread reveal different intentions, including destabilization of the US government, and profit. There have been multiple cases of “news” story “factories,” in which people work together informally or are even employed to create news stories. The New York Times investigated one factory in Russia, a nation whose government’s interference in the US election was the subject of a federal investigation. And Wired Magazine reported on a factory in Macedonia in which teens created election-related news stories for profit.
There is evidence that the systematic creation of election-related stories had a considerable effect on the 2016 US Presidential election. Donald Trump’s victory was considered a victory by self-proclaimed “trolls” (see Chapter 3 for a longer discussion of this phenomenon) and others who collaborated in publishing online content to defeat Hillary Clinton. Some of these content creators celebrated their campaign, including its disregard for truths, in an event they called the Deplora-Ball.
Mark Zuckerberg initially denied responsibility for Facebook’s spread of deceptive stories. Now Facebook moderators are beginning to flag “disputed news.” But it is likely “news” factories will continue to produce stories not based in truth as long as there are readers who continue to spread them.
A YouTube element has been excluded from this version of the text. You can view it online here: https://opentextbooks.library.arizona.edu/hrsm/?p=77
The Alt-Right: From fake news to domestic terrorism
2016 saw the fast growth online of a right-leaning political aggregate in the US known as the Alt-Right (first mentioned in Chapter 5). The Alt-Right and related “white nationalist” groups have framed themselves in response to movements based on identity politics – groups that rally or identify around a race, ethnicity, upbringing, or religion rather than a political party. But many dispute the notion that these groups are formed around identity, particularly when white supremacy – which centers on oppressing other races – has been so closely associated with Alt-Right media and demonstrations.
What seems to have brought the Alt-Right together more than identity politics is their approach to news – which they often discount as biased – and to truth or “reality” – which their culture treats as acceptable to manufacture for political use. Karl Rove of the second Bush administration was an early purveyor of this philosophy, insisting that people in power create their own reality (and therefore their own truths). The Alt-Right movement has followed suit, recruiting followers through memes that imagine situations fitting their politics. One Alt-Right blogger, profiled by the New Yorker Magazine, professed clear political intentions behind disinformation he spread – disinformation which circulated widely prior to the 2016 election.
We’re an empire now, and when we act, we create our own reality. And while you’re studying that reality—judiciously, as you will—we’ll act again, creating other new realities, which you can study too, and that’s how things will sort out. We’re history’s actors … and you, all of you, will be left to just study what we do. ~ Karl Rove to a NYTimes reporter in 2002
Bullshit that really took off
According to PolitiFact, some big headlines from 2016 of stories not based in truth included these:
- Hillary Clinton is running a child sex ring out of a pizza shop.
- Democrats want to impose Islamic law in Florida.
- Thousands of people at a Donald Trump rally in Manhattan chanted, “We hate Muslims, we hate blacks, we want our great country back.”
Buzzfeed tracked the rates at which election stories spread on Facebook in 2016, and found that these false stories outperformed true election stories:
- “Pope Francis Shocks World, Endorses Donald Trump for President”
- “WikiLeaks CONFIRMS Hillary Sold Weapons to ISIS”
- “IT’S OVER: Hillary’s ISIS Email Just Leaked and It’s Worse Than Anyone Could Have Imagined”
None of the listed stories was based in truth, but readers spread them wildly across their social networks and other online spaces. And many readers believed them. Take “pizzagate”: In response to the pizza shop story, one man showed up with a gun at the pizza shop at the center of the story and fired shots, attempting to break up what he believed was a massive pedophilia operation.
Which leads to a new question. We now understand some of the reasons bullshit and other bad information spreads online. But why are readers and social media users so ready to believe it?
Bugs in the human belief system
Fake news and bad information are more likely to be believed when they confirm what we already believe.
We believe bullshit, fake news, and other types of deceptive information based on numerous interconnected human behaviors. Forbes recently published an article, Why Your Brain May Be Wired To Believe Fake News, which broke down a few of these with the help of the neuroscientist Daniel Levitin. Levitin cited two well-researched human tendencies that draw us to swallow certain types of information while ignoring others.
- One tendency is belief perseverance: You want to keep believing what you already believe, treasuring a preexisting belief like Gollum treasures the ring in Tolkien’s Lord of the Rings series.
- The other tendency is confirmation bias: the brain runs through the text of something to select the pieces of it that confirm what you think is already true, while knocking away and ignoring the pieces that don’t confirm what you believe.
These tendencies to believe what we want to hear and see are exacerbated by social network-enabled filter bubbles (described in Chapter 4 of this book.) When we get our news through social media, we are less likely to see opposing points of view, which social networking sites filter out, and which we are unlikely to see on our own.
There is concern that youth and students are particularly vulnerable to believing deceptive online content. But I believe that with some training, youth are going to be better at “reading” than those older than them. Youth are accustomed to online content layered with pictures, links, and insider conversations and connections. The trick to “reading” in the age of social media is to read all of these layers, not just the text.
Dr. Daly’s steps to “reading” social media news stories in 2020:
Reading today means ingesting multiple levels of a source simultaneously.
- Put aside your biases. Recognize and put aside your belief perseverance and your confirmation bias. You may want a story to be true or untrue, but you probably don’t want to be fooled by it.
- Read the story’s words AND its pictures. What are they saying? What are they NOT saying?
- Read the story’s history AND its sources. Who / where is this coming from? What else has come from there and from them?
- Read the story’s audience AND its conversations. Who is this source speaking to, and who is sharing and speaking back? How might they be doing so in coded ways? (Here’s an example to make you think about images and audience, whether or not you agree with Filipovic’s interpretation.)
- Before you share, consider fact-checking. Reliable fact-checking sites at the time of this writing include:
- politifact.com
- snopes.com
- factcheck.org
That said, no single fact-checking site is perfect; neither is any one news site. All are subjective and liable to be taken over by partisan interests or trolls.
fake news
a term recently popularized by politicians to refer to stories they do not agree with
misinformation
inaccurate information spread without the intention to deceive
disinformation
information intended to deceive those who receive it
bullshit
information spread without concern for whether or not it’s true
knowledge construction
the negotiation of multiple truths as a way of understanding or “knowing” something
confirmation bias
the human tendency for the brain to run through the text of something to select the pieces of it that confirm what you think is already true, while knocking away and ignoring the pieces that don’t confirm what you believe
belief perseverance
the human tendency to want to continue believing what you already believe
An interactive H5P element has been excluded from this version of the text. You can view it online here:
https://opentextbooks.library.arizona.edu/hrsm/?p=77#h5p-32
An interactive H5P element has been excluded from this version of the text. You can view it online here:
https://opentextbooks.library.arizona.edu/hrsm/?p=77#h5p-33
You are a key player in efforts to curb misinformation online.
John Fedele/The Image Bank via Getty Images
Kolina Koltai, University of Washington
In the runup to the U.S. presidential election there has been an unprecedented amount of misinformation about the voting process and mail-in ballots. It’s almost certain that misinformation and disinformation will increase, including, importantly, in the aftermath of the election. Misinformation is incorrect or misleading information, and disinformation is misinformation that is knowingly and deliberately propagated.
While every presidential election is critical, the stakes feel particularly high given the challenges of 2020.
I study misinformation online, and I can caution you about the kind of misinformation you may see on Tuesday and the days after, and I can offer you advice about what you can do to help prevent its spread. A fast-moving 24/7 news cycle and social media make it incredibly easy to share content. Here are steps you can take to be a good digital citizen and avoid inadvertently contributing to the problem.
Election misinformation
Recent reports by disinformation researchers highlight the potential for an enormous amount of misleading information and disinformation to spread rapidly on Election Day and the days following. People spreading disinformation may be trying to sway the election one way or the other or simply undermine confidence in the election and American democracy in general.
U.S. intelligence services have reported that the Russian government is orchestrating disinformation campaigns aimed at the U.S. elections and pandemic response. This report by the Election Integrity Partnership (EIP) details narratives meant to delegitimize the election and shows how uncertainty creates opportunities for misinformation to flourish.
In particular, you may end up seeing misleading information shared about voting in person, mail-in ballots, the day-of voting experience and the results of the election. You may see stories online circulating about coronavirus outbreaks or infections at polling locations, violence or threats of intimidation at polling locations, misinformation about when, where and how to vote, and stories of voting suppression through long lines at polling stations and people being turned away.
We likely won’t know the results on Election Day, and this delay is both anticipated and legitimate. There may be misinformation about the winner of the presidential election and the final counting of ballots, especially with the increase in mail-in ballots in response to the coronavirus pandemic. It will be important to know that not every state finalizes its official ballot count on Nov. 3, and there may be narratives that threaten the legitimacy of the election results, like people claiming their vote did not get counted or saying they found discarded completed ballots.
What if the source of misinformation is … you?
There is a lot you can do to help reduce the spread of election misinformation online. Misinformation spreads both accidentally and intentionally, and both foreign and domestic actors create disinformation campaigns. But ultimately, you have the power not to share content.
Sharing mis/disinformation gives it power. Regardless of your demographic, you can be susceptible to misinformation, and sometimes specifically targeted by disinformation. One of the biggest steps you can take to be a good digital citizen this election season is not to contribute to the sharing of misinformation. This can be surprisingly difficult, even with the best of intentions.
One type of misinformation that has been popular leading up to the election – and is likely to remain popular – is “friend of a friend” claims. These claims are often unverified stories without attribution that spread quickly as people copy and paste the same story across their networks.
You may see these claims as social media statuses like a Facebook post or an Instagram Story, or even as a bit of text forwarded to you in a group chat. They are often text-based, with no name attached to the story, but instead forwarded along by a “friend of a friend.”
This type of misinformation is popular to share because the stories can center around the good intentions of wanting to inform others, and they often provide a social context, for example my friend’s doctor or my brother’s co-worker, that can make the stories seem legitimate. However, these often provide no actual evidence or proof of the claim and should not be shared, even if you believe the information is useful. It could be misleading.
How to avoid spreading misinformation
Many useful resources are available about how to identify misinformation, which can guide you on what to share and not to share. You can improve your ability to spot misinformation and learn to avoid being duped by disinformation campaigns.
A key approach is the Stop, Investigate, Find and Trace (SIFT) technique, a fact-checking process developed by digital literacy expert Mike Caulfield of Washington State University Vancouver.
Following this technique, when you encounter something you want to share online, you can stop and check to see if you know the website or source of the information. Then investigate the source and find out where the story is coming from. Then find trusted coverage to see if there is a consensus among media sources about the claim. Finally, trace claims, quotes and media back to their original contexts to see if things were taken out of context or manipulated.
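The SIFT steps above amount to a repeatable checklist you can run through before sharing anything. As a purely illustrative sketch – the questions are paraphrased from this chapter, and the names here are made up for the example, not part of any real fact-checking tool – it might look like this:

```python
# Caulfield's four SIFT moves, each paired with a paraphrased prompt.
SIFT_STEPS = [
    ("Stop", "Do I recognize this website or source before I share?"),
    ("Investigate the source", "Who is behind this story, and what else have they produced?"),
    ("Find trusted coverage", "Do other reliable outlets report the same claim?"),
    ("Trace to the original", "Are the quotes and media shown in their original context?"),
]

def sift_checklist(claim: str) -> list[str]:
    """Return the SIFT prompts to work through before sharing a claim."""
    lines = [f"Before sharing: {claim!r}"]
    for step, question in SIFT_STEPS:
        lines.append(f"- {step}: {question}")
    return lines

for line in sift_checklist("WikiLeaks CONFIRMS..."):
    print(line)
```

The point of writing it down this way is that the process is the same for every claim: you only share after all four prompts check out, not after the first one that feels reassuring.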
Finally, you may want to share your own experience with voting this year on social media. Following the recommendation of the Election Integrity Partnership, it is a good idea to share positive experiences about voting. Go ahead and share your “I voted” sticker selfie. Sharing stories about how people socially distanced and wore masks at polling locations can highlight the positive experiences of voting in person.
However, EIP cautions against posting about negative experiences. While negative experiences warrant attention, a heavy focus on them can stoke feelings of disenfranchisement, which could suppress voter turnout. Further, once you post something on social media, it can be taken out of context and used to advance narratives that you may not support.
Most people care about the upcoming election and informing people in their networks. It is only natural to want to share important and critical information about the election. However, I urge you to practice caution in these next few weeks when sharing information online. While it’s probably not possible to stop all disinformation at its source, we the people can do our part to stop its spread.
Kolina Koltai, Postdoctoral Researcher of Information Studies, University of Washington
This article is republished from The Conversation under a Creative Commons license. Read the original article.
I am creating this page on April 14th, 2020. Today it has been about one month since the coronavirus and responses to it began truly transforming life where I live in Tucson, Arizona; and weeks or months longer since life was transformed in earlier hotspots including China. The pandemic of coronavirus is causing depression and destruction on a global scale, and it is also giving us a distinct view of information as matter that depends on time and space, as well as geopolitics and culture.
In this chapter, I will reflect on the last month from the context of information about this virus, providing good information and challenging rumors, misinformation, disinformation, bullshit, and confusion.
Enter Coronavirus 2019 (COVID-19)
COVID-19 is caused by a new coronavirus. Coronaviruses are a large family of viruses that are common in people and many different species of animals, including camels, cattle, cats, and bats. Rarely, animal coronaviruses can infect people and then spread between people, as with MERS-CoV, SARS-CoV, and now with this new virus (named SARS-CoV-2). The SARS-CoV-2 virus is a betacoronavirus, like MERS-CoV and SARS-CoV. All three of these viruses have their origins in bats. The sequences from U.S. patients are similar to the one that China initially posted, suggesting a likely single, recent emergence of this virus from an animal reservoir.
~ From the Centers for Disease Control
The Landscape of Information that Coronavirus Encountered
The coronavirus entered our world and the US in particular in the midst of some very polarized online conversations. The camps and tribalism in the US that were established before the virus were already connected and disconnected in numerous ways that are now being built upon. Any further fissures may also be exploited, similar to Russian hacking in the past.
For those of us who study information, the urgent necessity was and is clear: be very cautious of what we believe in this landscape.
To begin arming yourselves against bad information on coronavirus, check the fact-checking site Infotagion.
How do we wade through all the information on coronavirus?
1. Trust good science and excellent journalism as superior to personal experience.
This is a global phenomenon in which human sharing of good information is our best line of defense. This virus is a bit like climate change in that many of us were asked to change our behavior before we could see the effects of not changing it. My trust in the CDC, the New York Times, and other high-quality sources of information was what convinced me to social distance and to have that hard conversation with my kids about no play dates, and seeing hard lessons learned elsewhere continues to make me vigilant.
2. Recognize those moments of weakness when we want to believe something untrue, and face the emotions behind that.
One very important factor we’ve come to understand about bad information online is that emotion is what fuels its spread. There’s a lot of fear, and sadness, and loneliness right now that’s associated with the truth. The bugs in our human belief systems make information more believable to us when we believed it already, or when we really want to believe it.
In one of my social media networks recently I saw numerous posts sharing information on a local herbalist and insisting that steaming creosote would protect people from the virus, although this idea had never been tested.
I saw it for what it was: an attempt to feel empowered. I believe herbal medicine can enhance immunity, but you have to be very careful how you present such a tentative solution. If it makes people less careful about protecting themselves, or drives them to deplete resources in a rush for a solution, it could be disastrous. On a national scale, I see the same thing happening with people’s belief in a malaria drug as a solution.
Until the science shows it conclusively works, protect yourself and those you connect with – from the virus and premature information on solutions.
One Thing We Know About the Future: Global Cooperation Holds Great Promise.
Yes, the bad information machine is still chugging, but the virus has also bridged countless divides. It has enriched how deeply many of us communicate online. Have respect for people experiencing this in different ways, but learn the whole story. Social media-shared perspectives are often not in-depth, and can come across as – or actually be – bad information.
When you realize people are scared, it can be easier to understand where they’re coming from. The value in this understanding is not to decide you believe a false claim, but to understand the emotions that led people to believe it and that are leading them to share it. If you can address that underlying reasoning calmly, you can change minds.