Misinformation as Process

Misinformation may appear rather simple to define. As Wikipedia says, “Misinformation is false or inaccurate information.”

But in turn we use the term information in a variety of ways. Michael Buckland has written that these boil down into three main senses:

  1. as knowledge, or mental content
  2. as a thing, or a physical object
  3. as a process

Algorithm-generated cat picture by @normalcatpics on Twitter. Misinformation?

Buckland points out that information systems can only deal with information in the “thing” sense. Consonantly, efforts to design automated systems for misinformation detection rely on a conception of information-as-thing. The logic goes: if we can just determine which utterances are false and suppress them, we’ll have defeated fake news.

But I’d like to shine a spotlight here on information-as-process, and consonantly on misinformation-as-process, to show why defeating misinformation-as-thing isn’t the end of the story.

Construed as a process, information is a person’s becoming informed or informing. Just as we might talk about someone’s formation as their training or background, we can talk about their information as how they came to know or understand certain things. Granted, this usage is archaic; but conceptually, of course, it is still relevant to the human condition.

Looking at misinformation as a process, and not just a thing, invites us to explore the ways in which people become misinformed. Immediately we can see that it’s not enough to claim that we have true information on one hand and misinformation on the other. Sille Obelitz Søe makes a similar point when she concludes that “misleadingness—and not falsity as such—is the vehicle of misinformation and disinformation.” The language of “leading” here is clearly process-oriented.

Indeed, there are many cases where people are misinformed by true information. And yet, it doesn’t quite seem right to say that the information “itself” is misleading. Rather, there’s something about the person–object system which is being misled. Some examples:

  • misconstruing some scientific evidence as support for a certain position—or, more broadly, cherry-picking evidence
  • taking a parody to be true or earnest
  • circulating in one context a photo taken in a different context
  • receiving news too late, or circulating it too late

These cases present a problem with fact checking as a solution to the proliferation of misinformation. Saying a news story is “true” or “partly true” does very little. (Granted, it does do something.) As Søe writes:

The dominant focus on truth and falsity disregards the communicative aspects of online sharing, posting, liking, etc. For communication to work, context, intention, belief, and meaning make all the difference. Whether some post is misleading (intentionally or by mistake) is dependent upon its meaning—and the meaning … is determined by the context in which it is posted and the intentions with which it is posted.

Sille Obelitz Søe, “Algorithmic detection of misinformation and disinformation: Gricean perspectives”

Taking a process approach to misinformation also helps us see the importance of time and temporality in assessing (mis)information. I am writing amidst the unfolding 2019–20 novel coronavirus pandemic. As our understanding of the situation develops, our sense of what counts as information and what counts as misinformation changes. For example, Vox tweeted on January 31, “Is this going to be a deadly pandemic? No.” On March 24, the outlet announced, “We have deleted a tweet from Jan 31 that no longer reflects the current reality of the coronavirus story.” Yet two weeks before that deletion, Vox had posted a story criticizing President Donald Trump for holding the same position the outlet itself evidently held at the time, without referencing its own tweet. It would seem that there is no issue of misinformation-as-false-thing here. (It is not even clear how to think about truth and falsity when the news is reporting on others’ speech, and particularly others’ predictions.) Yet when we understand misinformation as the process of misleading or being misled, the story becomes much more interesting—and complicated.

What is misleading, it seems to me, is at least in some cases relative, just like time and motion. I don’t mean “relative” in the sense of “there is no answer,” but rather that it depends on the context of all entities involved. That is, what a Trump supporter might call “misleading” may be considered simply “leading” by a Vox aficionado.

As Antonio Badia writes in The Information Manifold, automated systems cannot, in principle, solve this problem. Algorithms deal only with information at the syntactic level, but identifying misinformation requires semantic and pragmatic capabilities. Algorithms can approach those levels only if they are given syntactic surrogates for semantic and pragmatic concepts. A perfect surrogate would allow the detection of, say, relevant articles with no false positives and no false negatives. It is clear that today’s surrogates are far from perfect—a recent example is automatic content moderation flagging many false positives. (A toy illustration of such a surrogate, and how it fails, follows the quotation below.) And it’s not just that our algorithms need further improvement. It may simply be that no perfect (or even near-perfect) surrogates exist for certain abstract concepts. As Badia writes:

We have to be prepared that no such surrogates exist. … The lack of perfect surrogates entails that there will always be “backdoors” [to the algorithms we have in place] that a determined, smart human (living naturally on the semantic and pragmatic level) can exploit.

Antonio Badia, The Information Manifold
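
To make the idea of a syntactic surrogate concrete, here is a minimal sketch of my own (not from Badia): a keyword-based flagger in Python that stands in for the semantic-and-pragmatic concept “misleading health claim” using purely syntactic pattern matching. The patterns and example texts are hypothetical, chosen only to show both failure modes.

```python
import re

# A purely syntactic "surrogate" for a semantic/pragmatic concept:
# flag any text matching these keyword patterns. (Illustrative only;
# patterns and examples are hypothetical.)
SURROGATE_PATTERNS = [
    r"\bmiracle cure\b",
    r"\bdoctors hate\b",
    r"\b100% effective\b",
]

def flag_as_misinformation(text: str) -> bool:
    """Return True if the text matches any surrogate pattern."""
    return any(re.search(p, text, flags=re.IGNORECASE) for p in SURROGATE_PATTERNS)

examples = {
    # False positive: a debunking article that merely quotes the claim.
    "Fact check: no, this 'miracle cure' is not 100% effective.": True,
    # False positive: a parody, which misleads only if read as earnest.
    "BREAKING: doctors hate this one weird trick (satire).": True,
    # False negative: misleading in context, but phrased outside the patterns.
    "This remedy worked for my aunt, so it will work for you too.": False,
}

for text, flagged in examples.items():
    assert flag_as_misinformation(text) == flagged
    print(f"flagged={flagged!s:<5} | {text}")
```

The flags track surface patterns, not meaning: the debunking article and the parody get caught, while the genuinely misleading testimonial slips through. That gap between what can be matched syntactically and what is actually misleading is the gap Badia describes.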

Misinformation is rampant, but it’s not the only problem. We humans simply aren’t the best reasoners. Many people are pointing out that this pandemic shows we are not good at understanding exponential growth. But it’s also becoming clear that we are not good at understanding anything said by people who disagree with us politically, that we have amazingly short memories, that we entertain all manner of superstition, and on and on. So while efforts to curb misinformation are noble, I fear they are misguided. At best, they attack only a small part of a much bigger issue; at worst, they use the wrong tools for the job. Perhaps, as Badia writes, we shouldn’t focus so much on detecting misinformation automatically. Rather:

The real question is, what caused the misinformation to be produced in the first place? This should be our concern, whether the information is produced by an algorithm or a person.

Antonio Badia, The Information Manifold

That, it would seem, is a different kind of enterprise, and its solution is more about education and dialogue than automatic filtering. Automated systems certainly have a place in human–computer systems, but we should not make the mistake of thinking they can work alone.