Misinformation as Process

Misinformation may appear rather simple to define. As Wikipedia says, “Misinformation is false or inaccurate information.”

But the term information, in turn, is used in a variety of ways. Michael Buckland has written that these boil down to three main senses:

  1. as knowledge, or mental content
  2. as a thing, or a physical object
  3. as a process

Algorithm-generated cat picture by @normalcatpics on Twitter. Misinformation?

Buckland points out that information systems can only deal with information in the “thing” sense. Consonantly, efforts to design automated systems for misinformation detection rely on a conception of information-as-thing. The logic goes: if we can just determine which utterances are false and suppress them, we’ll have defeated fake news.

But I’d like to shine a spotlight here on information-as-process, and consonantly on misinformation-as-process, to show why defeating misinformation-as-thing isn’t the end of the story.

Construed as a process, information is a person’s becoming informed or informing. Just as we might talk about someone’s formation as their training or background, we can talk about their information as how they came to know or understand certain things. Granted, this usage is archaic; but conceptually, of course, it is still relevant to the human condition.

Looking at misinformation as a process, and not just a thing, invites us to explore the ways in which people become misinformed. Immediately we can see that it’s not enough to claim that we have true information on one hand and misinformation on the other. Sille Obelitz Søe makes a similar point when she concludes that “misleadingness—and not falsity as such—is the vehicle of misinformation and disinformation.” The language of “leading” here is clearly process-oriented.

Indeed, there are many cases where people are misinformed by true information. And yet, it doesn’t quite seem right to say that the information “itself” is misleading. Rather, there’s something about the person–object system which is being misled. Some examples:

  • misconstruing some scientific evidence as support for a certain position—or, more broadly, cherry-picking evidence
  • taking a parody to be true or earnest
  • circulating in one context a photo taken in a different context
  • when news comes too late, or when it is circulated too late

These cases pose a problem for fact-checking as a solution to the proliferation of misinformation. Labeling a news story “true” or “partly true” does very little. (Granted, it does do something.) As Søe writes:

The dominant focus on truth and falsity disregards the communicative aspects of online sharing, posting, liking, etc. For communication to work, context, intention, belief, and meaning make all the difference. Whether some post is misleading (intentionally or by mistake) is dependent upon its meaning—and the meaning … is determined by the context in which it is posted and the intentions with which it is posted.

Sille Obelitz Søe, “Algorithmic detection of misinformation and disinformation: Gricean perspectives”

Taking a process approach to misinformation also helps us see the importance of time and temporality in assessing (mis)information. I am writing amidst the unfolding 2019–20 novel coronavirus pandemic. As our understanding of the situation develops, our sense of what is information and misinformation changes. For example, Vox tweeted on January 31, “Is this going to be a deadly pandemic? No.” On March 24, the outlet announced, “We have deleted a tweet from Jan 31 that no longer reflects the current reality of the coronavirus story.” Yet two weeks before that, Vox posted a story criticizing President Donald Trump for holding the same position they evidently held at the time, but without reference to their own tweet. It would seem that there is no issue of misinformation-as-false-thing here. (It is not even clear how to think about truth and falsity when the news is reporting on others’ speech, and particularly others’ predictions.) Yet when we understand misinformation as the process of misleading or being misled, the story becomes much more interesting—and complicated.

What is misleading, it seems to me, is at least in some cases relative, just like time and motion. I don’t mean “relative” in the sense of “there is no answer,” but rather that it depends on the context of all entities involved. That is, what a Trump supporter might call “misleading” may be considered simply “leading” by a Vox aficionado.

As Antonio Badia writes in The Information Manifold, automated systems in principle cannot solve this problem. Algorithms can deal only with information on the syntactic level, but identifying misinformation requires semantic and pragmatic capabilities. Algorithms can approach these levels if they are given syntactic surrogates for semantic/pragmatic concepts. A perfect surrogate would allow the detection of, say, relevant articles with no false positives and no false negatives. It is clear that today’s surrogates are far from perfect; a recent example is automatic content moderation flagging many false positives. And it’s not just that our algorithms need further improvement. It may simply be that no perfect (or even near-perfect) surrogates exist for certain abstract concepts. As Badia writes:

We have to be prepared that no such surrogates exist. … The lack of perfect surrogates entails that there will always be “backdoors” [to the algorithms we have in place] that a determined, smart human (living naturally on the semantic and pragmatic level) can exploit.

Antonio Badia, The Information Manifold
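To make the idea of a syntactic surrogate concrete, here is a toy sketch of my own (not from Badia): a “detector” that flags posts using only surface features, with a hypothetical keyword list standing in for the semantic concept of a misleading claim. The phrases and example posts are invented purely for illustration.

```python
# Toy illustration of a "syntactic surrogate" (hypothetical example, not from Badia):
# a keyword list stands in for the semantic/pragmatic concept "misleading claim."

SUSPECT_PHRASES = ["miracle cure", "hoax", "they don't want you to know"]

def flag_as_misinformation(post: str) -> bool:
    """Flag a post based purely on surface features (syntax only)."""
    text = post.lower()
    return any(phrase in text for phrase in SUSPECT_PHRASES)

posts = [
    "Garlic is a miracle cure for the virus!",                     # flagged: arguably correct
    "Fact check: no, garlic is not a miracle cure.",               # flagged: false positive
    "Photo from a 2015 flood, shared as if it were taken today.",  # not flagged: false negative
]

for post in posts:
    print(flag_as_misinformation(post), "-", post)
```

Even in this toy, the debunking gets flagged (a false positive, much like the content moderation example above), while the out-of-context photo passes untouched (a false negative), because nothing at the syntactic level captures how the post misleads in context.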

Misinformation is rampant, but it’s not our only problem. We humans simply aren’t the best reasoners. Many people have pointed out that this pandemic shows we are not good at understanding exponential growth. But it’s also becoming clear that we are not good at understanding anything said by people who disagree with us politically, that we have amazingly short memories, that we entertain all manner of superstition, and on and on. So while efforts to curb misinformation are noble, I fear that they are misguided. At best, they are attacking only a small part of a much bigger issue; at worst, they are using the wrong tools for the job. Perhaps, as Badia writes, we shouldn’t focus so much on detecting misinformation automatically. Rather:

The real question is, what caused the misinformation to be produced in the first place? This should be our concern, whether the information is produced by an algorithm or a person.

Antonio Badia, The Information Manifold

That, it would seem, is a different kind of enterprise, and its solution is more about education and dialogue than automatic filtering. Automated systems certainly have a place in human–computer systems, but we should not make the mistake of thinking they can work alone.

Our Unnamable Present

Socrates: An image like the ones encountered in ancient mythology, such as Chimera, with parts from a goat, a lion, and a snake; Scylla, a woman, dog, and serpent; and Cerberus, a three-headed dog and snakes, and many others whose bodies contain several images in one.
Glaucon: Yes. I’ve heard that many such creatures have existed.

—The Republic, Book IX

We are living through a very strange time. In the future, I wonder what they will call this era. A sickness is spreading throughout the world, and no one knows quite how bad it is or will be. The schools are closed indefinitely, as are many businesses. We “nonessential” citizens have been instructed to stay home. All gatherings, travel and so on have been cancelled until further notice. Some 3.3 million Americans have applied for unemployment benefits. Sentences like “We’re two weeks behind Italy” redound. Any news, much of which is prediction rather than reportage, is almost immediately outdated. Amidst all this, our political polarization doesn’t seem to have gotten any better. Emotions are running high. Our age contains “several images in one.”

Earlier this year, before all this started, I read Roberto Calasso’s The Unnamable Present, and some of its ideas have been reverberating in my mind. Mostly, I think, I like the title.

Naming is a form of organization, of theorizing. To organize or theorize, we need to have an understanding of how something fits into the bigger picture. You can’t do this from within a storm. As Orwell wrote, during World War II, “Already history has in a sense ceased to exist, ie. there is no such thing as a history of our own times which could be universally accepted. … Hitler can say that the Jews started the war, and if he survives that will become the official history.”

In the present, our classifications are always in flux. They may seem monstrous, like Chimera or Scylla. Could it be solace to us that things will become clearer eventually? Or could it be that the proverbial Hitler will have been the one to survive, to write the official history?

The Chimera of Arezzo, c. 400 BC, found in Arezzo, an ancient Etruscan and Roman city in Tuscany; Museo Archeologico Nazionale, Florence. The sculpture dates from around the same time Plato wrote The Republic.

The journey of a research paper: rejections, re-visions and revisions

In academia, we are used to reading published, polished papers. This masks the messiness of the publication process—to say nothing of the hair-pulling, self-doubt and frustrations. Particularly for new researchers, it can be instructive to see how a paper changed along its road to publication. To that end, I thought I’d share a bit about one of my own papers.

I recently published “The Self and the Ontic Trust: Toward Technologies of Care and Meaning” in the Journal of Information, Communication & Ethics in Society. The paper provides a philosophical discussion of the self, information ethics and technology design, using the distinction between selfies and self-portraits as an illustration. This paper changed a lot between its initial submission in August 2017 and its final acceptance in January 2019.

Originally, the paper was more directly focused on the selfie phenomenon and was titled “The Self in the Selfie: An Ethical Discussion of Digital Self-Representation.” It was submitted to an information science conference, where it was rejected. Then it was expanded slightly and submitted to a philosophy of technology journal, where it was also rejected. Here’s the abstract:

Humans use technology to extend, explore and establish the self. Digital tools offer many novel ways to achieve these ends, opening up urgent questions of information ethics, particularly regarding the role of selfhood in today’s world. To approach these questions, this paper explores the ethical directive of self-care through the pervasive phenomenon of the selfie. This paper considers the selfie as a documentary form, i.e. through the framework of document theory, discussing how the selfie performs reference, provides evidence and manifests meaning. This analysis is based on a hermeneutic review of the scholarly literature on the selfie. Framing the selfie as a document offers insight into how the self is constructed and understood through the selfie. This leads to an ethical discussion, considering how selfies can contribute to and detract from human flourishing. This discussion focuses on the ethical directive of self-care, describing how lessons from ancient technological practices can be applied today.

One of the reviewers alerted me to a thorough literature review on the selfie, which had been published after I had conducted my initial literature review, and this meant much of my submission was now duplicative. Unfortunate, but it happens. I shelved this piece for a while, unsure of where its contribution now lay.

Later I became interested in the question of whether we have a duty to be informed. For the past several years, of course, we’ve heard much about the “right to be forgotten.” If we have such a right, then others must have an obligation to ignore. If we don’t have such a right, then it ought to be because we have a right to know, and then others must have an obligation to inform. Most likely we should say that in certain situations all of these rights and duties arise. But could it also be that in some cases we have a duty to know? I wrote a paper reflecting on this, and I submitted it to a philosophy journal. Here is the abstract:

We have a duty to be informed. On an informational ontology, all things are informed to some extent, but those entities that are more informed are better entities. This is particularly the case, and of particular significance, with humans, as we are the only known entities that are informed about our being informed, and consequently we can direct the procession of our being informed. As a self, a person has a duty to be the best self possible. Growing as a self involves valuing, loving and caring. It is a matter of discovering and cultivating values, coming to love particular people and things, and caring about these values, people and things. This is the depth of what it must mean to “be informed.” In short, a person ought to be informed about those things they act on or intend to act on, i.e., their concerns. Under this frame, we can reinterpret so-called duties such as not to lie as duties not to inhibit others’ being informed. We can also reinterpret the so-called duty to ignore as rather a duty to cultivate healthy concerns.

This paper was also rejected. I set it aside for a while, and then I realized that there was an interesting conversation to be had at the intersection of this paper and my also-rejected selfie paper. This led me to combine the two papers and submit the result as “The Self and the Ontic Trust,” which was accepted with some revisions. Here is the abstract:

Purpose – Contemporary technology has been implicated in the rise of perfectionism, a personality trait that is associated with depression, suicide and other ills. This paper explores how technology can be developed to promote an alternative to perfectionism, which is a self-constructionist ethic.
Design/methodology/approach – This paper takes the form of a philosophical discussion. A conceptual framework is developed by connecting the literature on perfectionism and personal meaning with discussions in information ethics on the self, the ontic trust and technologies of the self. To illustrate these themes, the example of selfies and self-portraits is discussed.
Findings – The self today must be understood as both individualistic and relational, i.e., hybrid; the trouble is balance. To realize balance, the self should be recognized as part of the ontic trust to which all information organisms and objects belong. Thus technologically-mediated self-care takes on a deeper urgency. The selfie is one example of a technology for self-care that has gone astray (i.e., lost some of its care-conducive aspects), but this can be remedied if selfie-making technology incorporates relevant aspects of self-portraiture. This example provides a path for developing self-constructionist and meaningful technologies more generally.
Practical implications – Technology development should proceed with self-care and meaning in mind. The comparison of selfies and self-portraits, situated historically and theoretically, provides some guidance in this regard. Some specific avenues for development are presented.
Originality/value – The question of the self has not been much discussed in information ethics. This paper links the self to the ontic trust: the self can be fruitfully understood as an agent within the ontic trust to which we all belong.