
Thinking, Good and Bad

Most thought-provoking in our thought-provoking time is that we are still not thinking.

Martin Heidegger, What Is Called Thinking?, 1954

In my research on information experience, I continue to be drawn to the intersection of mind and information. Many philosophers would define thinking as a series of electrical impulses in the brain that could proceed as well in an armchair as in a vat, while information scientists tend to assume that informing someone of something is simply a matter of putting the information in front of their nose. Both views miss quite a bit. (Okay, yes, I’m being a little uncharitable here.)

Ryan McGinness, Mindscape 31, 2019

Lately, I’ve been thinking about thinking—about what it means to think well and poorly, and how we use information and documents to help us think (or not), and how we might do so differently. Today, in the coronavirus era, this is as needful as ever. But it’s not a new question.

On the eve of World War II, the philosopher Susan Stebbing wrote:

There is an urgent need to-day for the citizens of a democracy to think well. It is not enough to have freedom of the Press and parliamentary institutions. Our difficulties are due partly to our own stupidity, partly to the exploitation of that stupidity, and partly to our own prejudices and personal desires.

Susan Stebbing, Thinking to Some Purpose, 1939

This appeared on the dust jacket of Thinking to Some Purpose, a how-to manual for spotting logical fallacies on the way to thinking more effectively. It was well received, and it helped pioneer a whole genre of writing on clear thinking.

Such approaches to improving one’s thinking fall within the rubric of virtue epistemology, which looks at people’s mental traits, attitudes and thinking styles, and how these serve (or don’t) the pursuit of knowledge and truth. Virtue epistemologists discuss and disentangle the virtues and vices of thinking. Lists of both are myriad and diffuse:

  • The intellectual virtues may include: attentiveness, autonomy, benevolence, carefulness, cognitive empathy, confidence, conscientiousness, courage, creativity, curiosity, discernment, fair-mindedness, honesty, humility, imagination, integrity, love, objectivity, open-mindedness, parsimony, perseverance, responsibility, studiousness, thoroughness, understanding, warranty, and wisdom—among others.
  • The intellectual vices may include: carelessness, closed-mindedness, conformity, cowardice, curiosity, dishonesty, dogmatism, folly, foolishness, gullibility, idleness, indifference to truth, insensitivity to detail, lack of thoroughness, negligence, obtuseness, prejudice, pride, rigidity, self-deception, superficiality, superstition, twisted thinking, willful naivety, and wishful thinking—among others.

The fact that curiosity appears on both lists may invite us to wonder how the virtues and vices are related. In some cases, virtues and vices may be opposites (e.g., open-mindedness and closed-mindedness). In other cases, a virtue may be a midpoint between two vices; for instance, intellectual humility lies at the sweet spot between intellectual arrogance and intellectual diffidence, both of which can be considered vices. What’s more, we might wonder whether these all represent distinct virtues and vices, whether some should be lumped together or split apart, and whether still others remain to be named. All in all, it seems to me that we need more work that brings these visions of intellectual virtues and vices into a coherent picture.

Moreover, it may be that we need to update our notion of intellectual virtues and vices for the digital age. While the medieval Seven Deadly Sins—lust, gluttony and all the rest—are as relevant today as ever for personal comportment writ large, it seems to me that our information environment has changed so dramatically that intellectual virtues tuned to a medieval scriptorium are no longer sufficient.

To speak of virtues for the digital age, I might suggest tinkering, championed for instance in Nassim Taleb’s book Antifragile. More and more we learn by trying things out and making subtle manipulations and interpretations in a way that does not separate thinking from doing. Additionally, I’ve recently been thinking that silence may be a worthy virtue for our age, and I made a short video describing silence as a virtue.

And what about vices? To give one example, Quassim Cassam, in his 2019 book Vices of the Mind, shines a spotlight on what he terms epistemic insouciance, which is a “lack of concern with respect to whether their claims are grounded in reality or the evidence.” It’s an indifference to truth, and sometimes a dismissive coping mechanism for dealing with a hopelessly complex world. This is the vice of the bullshitter.

We can recognize epistemic insouciance at play in our post-truth era. Writes Cassam, “Being subjected to a relentless barrage of misleading pronouncements about a given subject can deprive one of one’s prior knowledge of that subject by muddying the waters and making one mistrust one’s own judgement.” And when we mistrust our own judgment, we become prone to fall in line with whatever we sense the prevailing social consensus to be—I’m thinking of the Asch conformity experiments of the 1950s, which showed the power of conformity in social reasoning.

And while Cassam does not explicitly bring up disinformation in his discussion, this is a particularly chilling concern today. In her book Deepfakes: The Coming Infocalypse, journalist Nina Schick describes the rise of disinformation and the coming “infocalypse,” discussing in particular deepfakes—synthetic media (such as videos) intended to deceive—which are now trivial to create with free software and a little time and expertise. Already, for instance, a computer can generate a convincing “photograph” of a person who does not exist. Predictably, it gets worse—like the deepfake porn bot that automatically removes the clothing from images of women. And that’s just the start. What happens when nation-state actors and other interest groups begin to create and circulate synthetic evidence—disinformation—depicting brutal or incriminating events that never happened? It doesn’t take much imagination to see some dark possibilities. Hence the term infocalypse—though unlike the Biblical apocalypse, the infocalypse does not foretell the second coming of any savior or prophet.

The proliferation of disinformation—already upon us, and only slated to get worse—brings us back to Stebbing’s diagnosis, quoted above, that our difficulties in thinking are due partly to the exploitation of our stupidity. Here we may be tempted to throw up our hands and declare the situation hopeless. Of course, on Cassam’s account, that would be a demonstration of the utmost epistemic insouciance—and yet what other choice is there? Could we make ourselves less exploitable?

I am hoping that we can find a satisfactory answer to that question in the coming years. It may have something to do with delineating intellectual virtues and vices that are up to the task of the digital age and the infocalypse—and with finding ways to instill the virtues and root out the vices in the public and in our students.

To be sure, our personal traits and behaviors are only part of a very complex picture; our mental and political outcomes are due to a mix of network-level social effects and sub-personal cognitive biases. But there are things we can do as individuals, and in any case we ought to tend to the things we can.

Origin Stories and Being Thrown

Humans are storytelling creatures, and some stories have been with us for as long as we’ve been human. Among these are, fittingly, stories of our own origins. It’s futile to try to pin down when any such story originated—and what’s more, the attempt mostly misses the point.

Detail, 17th century Rajasthani manuscript of Ganesha recording the Mahabharata (Wikipedia)

I’m reading the Mahabharata, in Carol Satyamurti’s retelling, and the opening pages of the epic give a fascinating example of what I mean. The story begins with the story of its own origin, told in crystal-clear detail. Yet by the story’s own telling, it’s hard to say exactly how it came about. We are told that Vyasa, a seer, composed the poem and then dictated it to Ganesha, who wrote it down. Vyasa then taught the poem to a number of disciples, who recited it to a king during a sacrifice. Ugrashravas, a poet, was present at that event, and he later told the story to a group of ascetics in a forest. The Mahabharata tells us that the version within its pages is the one told by Ugrashravas—as if Vyasa and Ganesha had known from the start that Ugrashravas would carry the story to the forest.

So what should be made of this? Why not simplify matters by just saying that we’re reading what was written down by Ganesha? Or why not have Vyasa write it down himself? It’s all as if to suggest that if we are looking for origins we’ll wind up going around in circles.

It may be useful to think about this through the concept of thrownness from phenomenology. The idea is that we humans are “thrown” into our human situation. Part of what this means is that we don’t choose our starting conditions: we don’t pick our family, location and so on. As James Baldwin wrote memorably in Giovanni’s Room, “people can’t, unhappily, invent their mooring posts, their lovers and their friends, anymore than they can invent their parents.”

More deeply, being thrown means that there’s no objective starting point for our lives. We don’t remember our first moments. Of course we must have started at some point, and certainly we have some impressionistic early memories, but our conscious lives seem to start up already underway, rather like how newborn giraffes are up and walking within an hour of birth.

The phenomenological concept describes human conscious experience as thrown, but it also seems to describe humanity more broadly. Our species evolved from some common ancestor with other primates, indeed with everything else alive today, shaped within and as part of our world… and so there never was a first human. Nor was there ever really a first story. Our stories about our origins seem to have emerged in the same way. Today we are thrown into them.

Misinformation as Process

Misinformation may appear rather simple to define. As Wikipedia says, “Misinformation is false or inaccurate information.”

But in turn we use the term information in a variety of ways. Michael Buckland has written that these boil down to three main senses:

  1. as knowledge, or mental content
  2. as a thing, or a physical object
  3. as a process
Algorithm-generated cat picture by @normalcatpics on Twitter. Misinformation?

Buckland points out that information systems can only deal with information in the “thing” sense. Accordingly, efforts to design automated systems for misinformation detection rely on a conception of information-as-thing. The logic goes: if we can just determine which utterances are false and suppress them, we’ll have defeated fake news.
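
To make that logic concrete, here’s a minimal sketch in Python. Everything in it is hypothetical: the fact_check function is a stand-in for whatever classifier or claim database a real system would use.

```python
# A sketch of the "information-as-thing" logic: treat each utterance as a
# freestanding object, score it as true or false in isolation, and
# suppress the false ones. fact_check() is a hypothetical stand-in.

def fact_check(utterance: str) -> float:
    """Return an estimated probability that the utterance is true,
    judged with no reference to speaker, audience, timing, or intent."""
    known_falsehoods = {"the earth is flat"}
    return 0.0 if utterance.strip(" !.").lower() in known_falsehoods else 0.9

def filter_feed(utterances: list[str], threshold: float = 0.5) -> list[str]:
    """Suppress any utterance whose truth score falls below the threshold."""
    return [u for u in utterances if fact_check(u) >= threshold]

feed = ["The earth is flat!", "Case counts are rising in some regions."]
print(filter_feed(feed))  # ['Case counts are rising in some regions.']
```

Notice what this sketch has no way to represent: who is speaking, to whom, when, and why. Those are exactly the contextual facts on which, as we’ll see, misleadingness depends.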

But I’d like to shine a spotlight here on information-as-process, and consonantly on misinformation-as-process, to show why defeating misinformation-as-thing isn’t the end of the story.

Construed as a process, information is a person’s becoming informed or informing. Just as we might talk about someone’s formation as their training or background, we can talk about their information as how they came to know or understand certain things. Granted, this usage is archaic; but conceptually, of course, it is still relevant to the human condition.

Looking at misinformation as a process, and not just a thing, invites us to explore the ways in which people become misinformed. Immediately we can see that it’s not enough to claim that we have true information on one hand and misinformation on the other. Sille Obelitz Søe makes a similar point when she concludes that “misleadingness—and not falsity as such—is the vehicle of misinformation and disinformation.” The language of “leading” here is clearly process-oriented.

Indeed, there are many cases where people are misinformed by true information. And yet it doesn’t quite seem right to say that the information “itself” is misleading. Rather, the misleading happens somewhere in the person–object system. Some examples:

  • misconstruing some scientific evidence as support for a certain position—or, more broadly, cherry-picking evidence
  • taking a parody to be true or earnest
  • circulating in one context a photo taken in a different context
  • when news comes too late, or when it is circulated too late

These cases pose a problem for fact-checking as a solution to the proliferation of misinformation. Labeling a news story “true” or “partly true” does very little. (Granted, it does do something.) As Søe writes:

The dominant focus on truth and falsity disregards the communicative aspects of online sharing, posting, liking, etc. For communication to work, context, intention, belief, and meaning make all the difference. Whether some post is misleading (intentionally or by mistake) is dependent upon its meaning—and the meaning … is determined by the context in which it is posted and the intentions with which it is posted.

Sille Obelitz Søe, “Algorithmic detection of misinformation and disinformation: Gricean perspectives”

Taking a process approach to misinformation also helps us see the importance of time and temporality in assessing (mis)information. I am writing amidst the unfolding 2019–20 novel coronavirus pandemic. As our understanding of the situation develops, our sense of what is information and what is misinformation changes. For example, Vox tweeted on January 31, “Is this going to be a deadly pandemic? No.” On March 24, the outlet announced, “We have deleted a tweet from Jan 31 that no longer reflects the current reality of the coronavirus story.” Yet two weeks before that deletion, Vox had posted a story criticizing President Donald Trump for holding the same position the outlet evidently held at the time, without reference to its own tweet. It would seem that there is no issue of misinformation-as-false-thing here. (It is not even clear how to think about truth and falsity when the news is reporting on others’ speech, and particularly others’ predictions.) Yet when we understand misinformation as the process of misleading or being misled, the story becomes much more interesting—and complicated.

What is misleading, it seems to me, is at least in some cases relative, just like time and motion. I don’t mean “relative” in the sense of “there is no answer,” but rather that it depends on the context of all entities involved. That is, what a Trump supporter might call “misleading” may be considered simply “leading” by a Vox aficionado.

As Antonio Badia writes in The Information Manifold, automated systems cannot solve this problem even in principle. Algorithms deal only with information on the syntactic level, but identifying misinformation requires semantic and pragmatic capabilities. Algorithms can approach these levels if they are given syntactic surrogates for semantic or pragmatic concepts. A perfect surrogate would allow the detection of, say, relevant articles with no false positives and no false negatives. It is clear that today’s surrogates are far from perfect—a recent example is automated content moderation flagging many false positives. And it’s not just that we have more improvements to make to our algorithms: it may simply be that no perfect (or even near-perfect) surrogates exist for certain abstract concepts. As Badia writes:

We have to be prepared that no such surrogates exist. … The lack of perfect surrogates entails that there will always be “backdoors” [to the algorithms we have in place] that a determined, smart human (living naturally on the semantic and pragmatic level) can exploit.

Antonio Badia, The Information Manifold
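
To see what a syntactic surrogate looks like, and how it leaks, here’s a toy sketch under my own assumptions: a keyword pattern standing in for the semantic concept “claims a cure for the virus.” The pattern and examples are hypothetical, not drawn from any real moderation system.

```python
# Toy syntactic surrogate: a keyword pattern standing in for the semantic
# concept "claims a cure for the virus". Hypothetical, for illustration only.
import re

CURE_PATTERN = re.compile(r"\bcure[sd]?\b", re.IGNORECASE)

def flag_cure_claim(text: str) -> bool:
    """Flag any text containing a form of the token 'cure'.
    Purely syntactic: no grasp of negation, irony, or paraphrase."""
    return bool(CURE_PATTERN.search(text))

print(flag_cure_claim("Garlic cures the virus!"))              # True: a hit
print(flag_cure_claim("No, garlic does not cure the virus."))  # True: false positive (a debunking)
print(flag_cure_claim("Eat garlic and the virus vanishes."))   # False: false negative (same claim, reworded)
```

The third example is Badia’s “backdoor” in miniature: a determined human, living on the semantic and pragmatic level, simply rewords the claim and walks right past the surrogate.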

Misinformation is rampant, but it’s not our only problem. We humans simply aren’t the best reasoners. Many have pointed out that this pandemic is showing we are not good at understanding exponential growth. But it’s also becoming clear that we are not good at understanding anything said by people who disagree with us politically, that we have amazingly short memories, that we entertain all manner of superstition, and on and on. So while efforts to curb misinformation are noble, I fear they are misguided. At best, they attack only a small part of a much bigger issue; at worst, they use the wrong tools for the job. Perhaps, as Badia writes, we shouldn’t focus so much on detecting misinformation automatically. Rather:

The real question is, what caused the misinformation to be produced in the first place? This should be our concern, whether the information is produced by an algorithm or a person.

Antonio Badia, The Information Manifold

That, it would seem, is a different kind of enterprise, and its solution has more to do with education and dialogue than with automatic filtering. Automated tools certainly have a place in human–computer systems, but we should not make the mistake of thinking they can work alone.