
Misinformation as Process

Misinformation may appear rather simple to define. As Wikipedia says, “Misinformation is false or inaccurate information.”

But in turn we use the term information in a variety of ways. Michael Buckland has written that these boil down into three main senses:

  1. as knowledge, or mental content
  2. as a thing, or a physical object
  3. as a process
Algorithm generated cat picture by @normalcatpics on Twitter. Misinformation?

Buckland points out that information systems can only deal with information in the “thing” sense. Consonantly, efforts to design automated systems for misinformation detection rely on a conception of information-as-thing. The logic goes: if we can just determine which utterances are false and suppress them, we’ll have defeated fake news.

But I’d like to shine a spotlight here on information-as-process, and consonantly on misinformation-as-process, to show why defeating misinformation-as-thing isn’t the end of the story.

Construed as a process, information is a person’s becoming informed or informing. Just as we might talk about someone’s formation as their training or background, we can talk about their information as how they came to know or understand certain things. Granted, this usage is archaic; but conceptually, of course, it is still relevant to the human condition.

Looking at misinformation as a process, and not just a thing, invites us to explore the ways in which people become misinformed. Immediately we can see that it’s not enough to claim that we have true information on one hand and misinformation on the other. Sille Obelitz Søe makes a similar point when she concludes that “misleadingness—and not falsity as such—is the vehicle of misinformation and disinformation.” The language of “leading” here is clearly process-oriented.

Indeed, there are many cases where people are misinformed by true information. And yet, it doesn’t quite seem right to say that the information “itself” is misleading. Rather, there’s something about the person–object system which is being misled. Some examples:

  • misconstruing some scientific evidence as support for a certain position—or, more broadly, cherry-picking evidence
  • taking a parody to be true or earnest
  • circulating in one context a photo taken in a different context
  • when news comes too late, or when it is circulated too late

These cases present a problem with fact checking as a solution to the proliferation of misinformation. Saying a news story is “true” or “partly true” does very little. (Granted, it does do something.) As Søe writes:

The dominant focus on truth and falsity disregards the communicative aspects of online sharing, posting, liking, etc. For communication to work, context, intention, belief, and meaning make all the difference. Whether some post is misleading (intentionally or by mistake) is dependent upon its meaning—and the meaning … is determined by the context in which it is posted and the intentions with which it is posted.

Sille Obelitz Søe, “Algorithmic detection of misinformation and disinformation: Gricean perspectives”

Taking a process approach to misinformation also helps us see the importance of time and temporality in assessing (mis)information. I am writing amidst the unfolding 2019–20 novel coronavirus pandemic. As our understanding of the situation develops, our sense of what is information and misinformation changes. For example, Vox tweeted on January 31, “Is this going to be a deadly pandemic? No.” On March 24, the outlet announced, “We have deleted a tweet from Jan 31 that no longer reflects the current reality of the coronavirus story.” Yet two weeks before that, Vox had posted a story criticizing President Donald Trump for holding the same position the outlet evidently held at the time, without reference to its own tweet. It would seem that there is no issue of misinformation-as-false-thing here. (It is not even clear how to think about truth and falsity when the news is reporting on others’ speech, and particularly others’ predictions.) Yet when we understand misinformation as the process of misleading or being misled, the story becomes much more interesting—and complicated.

What is misleading, it seems to me, is at least in some cases relative, just like time and motion. I don’t mean “relative” in the sense of “there is no answer,” but rather that it depends on the context of all entities involved. That is, what a Trump supporter might call “misleading” may be considered simply “leading” by a Vox aficionado.

As Antonio Badia writes in The Information Manifold, automated systems in principle cannot solve this problem. Algorithms can deal only with information on the syntactic level, but identifying misinformation requires semantic and pragmatic capabilities. Algorithms can approach these levels if they are given syntactic surrogates for semantic/pragmatic concepts. A perfect surrogate would allow the detection of, say, relevant articles with no false positives and no false negatives. It is clear that today’s surrogates are far from perfect—a recent example is automatic content moderation flagging many false positives. And it’s not just that we have more improvements to make to our algorithms. It may simply be that no perfect (or even near-perfect) surrogates exist for certain abstract concepts. As Badia writes:

We have to be prepared that no such surrogates exist. … The lack of perfect surrogates entails that there will always be “backdoors” [to the algorithms we have in place] that a determined, smart human (living naturally on the semantic and pragmatic level) can exploit.

Antonio Badia, The Information Manifold
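The gap between a syntactic surrogate and the semantic concept it stands in for can be sketched with a toy example. The code below is a hypothetical illustration (not from Badia): a “misinformation detector” that works purely syntactically, by substring matching, and so can neither see that a debunking article merely quotes a false claim, nor that the same claim can be rephrased.

```python
# Toy syntactic surrogate for the semantic concept "misleading claim".
# It operates only on the surface of the text: substring matching,
# with no access to meaning, context, or intention.
BLACKLIST = {"miracle cure"}

def flag_misinformation(text: str) -> bool:
    """Flag any text containing a blacklisted phrase (syntax only)."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in BLACKLIST)

# False positive: a fact-check quoting the phrase it refutes is flagged.
print(flag_misinformation("Fact check: no, there is no 'miracle cure'"))  # True

# False negative: the same claim, rephrased, slips through.
print(flag_misinformation("This one weird remedy heals everything"))  # False
```

The surrogate fails in both directions precisely because misleadingness lives on the semantic and pragmatic levels that the substring test cannot reach, which is the “backdoor” Badia describes.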

Misinformation is rampant, but it’s not the only problem. We humans simply aren’t the best reasoners. Many people are pointing out that this pandemic is showing that we are not good at understanding exponential growth. But it’s also becoming clear that we are not good at understanding anything said by people who disagree with us politically, that we have amazingly short memories, that we entertain all manner of superstition, and on and on. So while efforts to curb misinformation are noble, I fear that they are misguided. At best, they are attacking only a small part of a much bigger issue; at worst, they are using the wrong tools for the job. Perhaps, as Badia writes, we shouldn’t focus so much on detecting misinformation automatically. Rather:

The real question is, what caused the misinformation to be produced in the first place? This should be our concern, whether the information is produced by an algorithm or a person.

Antonio Badia, The Information Manifold

That, it would seem, is a different kind of enterprise, and its solution is more about education and dialogue than automatic filtering. Automated systems certainly have a place in human–computer systems, but we should not make the mistake of thinking they can work alone.

Documents and Moral Knowledge

I’ve finally finished my PhD work, and I can rededicate myself to blogging (more) regularly and taking ScratchTap in new directions. This fall, I’ll be presenting a paper on documents and moral knowledge at the Annual Meeting of the Document Academy. Here’s a snippet of what I’m thinking about.

Documents have traditionally been conceptualized as representations of reality. As such, we know a lot about how they show and afford facts about the world. Recently, scholars have been exploring how documents can also construct reality. With this view, we can begin to think about how documents show and afford moral knowledge, or knowledge about what people ought to value in the world and how people ought to act. In this realm, much of the discussion centers around texts, such as works of fiction. Reading Crime and Punishment, for example, is a great way to build your moral imagination.

But what about visual art? I’d like to consider two works of art depicting Yellowstone National Park, one from the 19th century and another from the 21st. By analyzing these works as documents, we can see how art played and continues to play a decisive role in how Americans conceptualize and value the wilderness—perhaps even more than scientific documents.

The first document is an 1871 painting by Thomas Moran depicting the Grand Canyon of the Yellowstone, made while Moran was a guest artist on a geological survey. The painting conveyed the beauty and scale of the Yellowstone region more effectively than prior descriptions had, and it proved decisive in the passage of the 1872 act establishing Yellowstone National Park, setting the stage for other regions in the United States—and other countries—to be preserved as national parks.

The second document is a 2014 photograph by Michael Nichols, depicting three bison at Yellowstone National Park being photographed by a group of people near their automobiles. Nichols’ work was part of a National Geographic project documenting Yellowstone National Park which sought to expose the tension between the park’s existence as a wildlife preserve and a site for human enjoyment.

Both of these works respond to a dualism in the human relationship to the wilderness, dating back at least to the European colonization of America. On one hand, (1) we see the wilderness as a store of commodities to be profited from; and on the other, (2) we see the wilderness as a dangerous, chaotic blur that defies comprehension. Thus the U.S. National Parks are at once “for the benefit and enjoyment of the people,” and also a preserve of nature and wildlife for its own sake.

In their artworks, both Moran and Nichols seem to reject (1), but they do so in different ways: Moran does so by depicting (2), while Nichols does so by holding up a mirror to (1).

If we think of the purpose of these documents as providing moral knowledge, we can ask which approach is more effective. Moran’s work had the almost immediate effect of the creation of the U.S. National Parks. Nichols’ work cannot yet boast any such effects. Of course, many other factors complicate this picture: today’s media climate, the saturation of images, the nature of internet communication…

Still, the question should give us pause. The wilderness is disappearing, if it has not already gone. Indeed, the world itself is in grave danger, as climate change unfurls. In discourse around these topics, we have tended to appeal to scientific documents. But if artistic documents can provision the sort of moral knowledge necessary to heal our relationship to the world, then perhaps we can also appeal to art. If that is the case, then it is worth thinking about what sort of art will serve best.

Poetics of electronic writing

An executable code poem by GreyLau

One of the questions that motivated me while I was working on my master’s degree concerned the differences between handwriting, printing, and digital writing. Dennis Tenen’s new book, Plain Text: The Poetics of Computation, contributes to that discussion.

Tenen points out that the major change between electronic writing and previous forms is that in electronic writing there is a separation between the act of writing and the support (i.e., what the writing is written on).

This becomes evident when we ask ourselves, while looking at a screen, “Where is the text?” Of course on one hand the text is on the screen; but on the other, it exists in electromagnetic storage somewhere we cannot directly see. In some sense, the writing is in both places. Tenen writes, “One must be translated, transformed into the other.”
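This double existence can be sketched in a few lines of Python: the same text lives as encoded bytes in storage and as decoded characters for display, and one form must be translated into the other.

```python
# A minimal sketch of the "two places" a digital text lives:
# as bytes in storage, and as decoded characters on screen.
stored = "Où est le texte ?".encode("utf-8")   # the storage form (bytes on disk)
displayed = stored.decode("utf-8")             # the screen form (what we read)

print(stored)     # b'O\xc3\xb9 est le texte ?'
print(displayed)  # Où est le texte ?
```

Neither form is more “real” than the other; the text, as Tenen suggests, exists in the translation between them.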

This transformation occurs in what Tenen calls the formatting layer of electronic texts, which is where we may find censorship, DRM, ads and even spyware. Thus what we see on the screen is only the tip of the iceberg. Tenen:

At the maximally blunt limit of its capabilities, format governs access. Commands render some words and sentences visible on-screen while suppressing others. … The formatting layer specifies the affordances of electronic text. More than passive conduits of meaning, electronic texts thus carry within them rules for engagement between authors, readers, and devices. … Whatever literary-theoretic framework the reader brings to the process of interpretation must therefore meet the affordances encoded into the electronic text itself.
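The “suppressing” Tenen describes can be illustrated with a toy formatting layer (a hypothetical sketch, not Tenen’s code): the stored text is complete, but a rendering rule blanks out certain words before they reach the screen.

```python
# Hypothetical "formatting layer": the stored text is intact,
# but the rendering step suppresses certain words before display.
SUPPRESSED = {"censored-topic"}

def render(stored_text: str) -> str:
    """Return the on-screen version, with suppressed words blanked out."""
    return " ".join(
        "████" if word in SUPPRESSED else word
        for word in stored_text.split()
    )

stored = "discussion of censored-topic continues"
print(render(stored))  # discussion of ████ continues
```

The reader sees only the rendered output; the gap between `stored` and `render(stored)` is exactly where censorship, DRM, and ads can operate unseen.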

Tenen focuses on developing theoretical acuity for interpreting digital texts. This is vital, because if we do not develop such thinking, we’ll quickly be strung along by forces beyond our understanding. We’re already at the point where some algorithm-generated texts are indistinguishable from human-generated ones, for instance.

And when it comes to social media (how we spend more and more of our time), if we do not learn to critically analyze the texts around us, we will miss out on what’s going on. John Lanchester writes poignantly on this in the London Review of Books:

For all the talk about connecting people, building community, and believing in people, Facebook is an advertising company. … [But] even more than it is in the advertising business, Facebook is in the surveillance business. Facebook, in fact, is the biggest surveillance-based enterprise in the history of mankind. It knows far, far more about you than the most intrusive government has ever known about its citizens. It’s amazing that people haven’t really understood this about the company. … I’m not sure there has ever been a more complete disconnect between what a company says it does – ‘connect’, ‘build communities’ – and the commercial reality. Note that the company’s knowledge about its users isn’t used merely to target ads but to shape the flow of news to them. Since there is so much content posted on the site, the algorithms used to filter and direct that content are the thing that determines what you see: people think their news feed is largely to do with their friends and interests, and it sort of is, with the crucial proviso that it is their friends and interests as mediated by the commercial interests of Facebook. Your eyes are directed towards the place where they are most valuable for Facebook.

Separation between the act of writing and its support, indeed.