
Finding Hope in a World of Bad News


It’s easy to feel hopeless given the state of the world. Not only are there tons of bad things happening out there, but we have to keep hearing about them. For those of us who want to make the world a better place, this is a tragic situation—what Luciano Floridi calls the Tragedy of the Good Will in his 2013 book The Ethics of Information.

The Tragedy of the Good Will arises from an imbalance between information (what we know) and power (what we can do about it). On one hand, our information technologies and global connections increase our possibilities for doing good in the world. For instance, today I can easily check out GiveWell and find that Malaria Consortium is among the world’s most effective charities and donate to their cause. I can look up information about which political candidates to support, how to improve my teaching, and on and on. But on the other hand, these same technologies also inform me of injustices and misfortunes so numerous that there’s no hope for me to do anything about most of them. Cruelly, just knowing about such problems seems to exert pressure on me to do something… but genocide? corruption? state-sponsored disinformation? rampant conspiracy theories? What can I do about any of these things?

Lately the tragedy seems to be worsening. If so, this could be because things are really getting worse. And when we think of the climate crisis, political polarization and similar issues, it can certainly seem that way. But scholars such as Steven Pinker argue that, broadly speaking, we have never been better off.

If that is the case, then an alternative explanation for our deepening tragedy is that our information–power ratio is getting less balanced. And this is not hard to fathom, given that our news cycle is approaching the attention span of a goldfish, and that we live amidst an always-on internet whose endless streams of social media content are continually replenished.

How can we cope in the face of such a tipping imbalance?

Less Information

One way, which seems to come naturally to us, is to get less information. We can call it the ostrich approach*: “If I don’t see it, it’s not happening.” Perhaps, sometimes, that’s all we can do. It was part of my own approach, admittedly, when I quit Facebook in mid-2020. But avoidance is hardly a long-term solution, and as Floridi points out, it only fuels the echo chamber/filter bubble phenomenon. We may not like to confront bad news, but on some level we ought to.

* But to be fair to the ostriches among us, evidently they don’t really bury their heads in the sand to avoid information.

Better Information

So if less information is not a good solution, then what about better information? This is what Rutger Bregman counsels in his book Humankind: “Steer clear of television news and push notifications and instead read a more nuanced Sunday paper and in-depth feature writing, whether online or off. Disengage from your screen and meet real people in the flesh. Think as carefully about what information you feed your mind as you do about the food you feed your body” (p. 392). This approach resonates with the Slow Information movement, encapsulated for instance in the research paper “Informational Balance” by Liz Poirier and Lyn Robinson (Journal of Documentation, 2014). The Slow principles (underlying Slow food, Slow travel, Slow art, etc.) entail finding joy in the activities themselves, making deliberate choices, and establishing mindful balance. When it comes to information, faster is not necessarily better, and neither is free or short.

If we are suggesting that a weekly newspaper can provide better quality than a journalist’s Twitter feed, what exactly do we mean? In his discussion of information quality in connection with the Tragedy of the Good Will, Floridi names several things that high-quality information provides:

  • guidance, helping us see what actions are possible and how to do them
  • feedback, helping us understand how our actions are affecting the world
  • transparency, showing us what other people and organizations are up to as a constraint on their behavior
  • forecasting, possibly preventing the worst from occurring
  • engineering, helping us build our capabilities

Empowerment

Next, the information–power gap could also be closed by increasing our power—something that seems to me to be sitting between the lines of the “better information” solution. Indeed, consider again the Malaria Consortium example above. It is precisely the “better information” provided by GiveWell, which identifies the charities that save or improve the most lives per dollar, that allows me to use my charity dollars in a more powerful way than I could otherwise. This is an example from philanthropy, but what if similarly empowering organizations could sprout up across all domains of human interest?
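To put toy numbers on that, here is a back-of-the-envelope sketch in Python. The dollar figures are invented for illustration and are not GiveWell’s actual estimates; the point is only the shape of the “lives per dollar” reasoning:

```python
# Back-of-the-envelope sketch of "lives per dollar" reasoning.
# The cost figures below are invented for illustration; they are
# not GiveWell's actual estimates.

donation = 1_000.0  # dollars to give

cost_per_life_saved = {  # hypothetical charities
    "highly effective charity": 5_000.0,
    "typical charity": 50_000.0,
}

for charity, cost in cost_per_life_saved.items():
    print(f"{charity}: ~{donation / cost:.2f} expected lives saved")

# Output:
# highly effective charity: ~0.20 expected lives saved
# typical charity: ~0.02 expected lives saved
```

On these made-up numbers, the same donation goes an order of magnitude further; it is better information, not more money, that makes the difference.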

We could also become empowered as a human race, not just as individuals. Floridi writes that humanity as a whole could be empowered by new information technologies so that we could do things together that we could never accomplish alone. Here Floridi has in mind sociotechnical institutions that function as “supra-individual, global, artificial agents that are hybrids of other artificial agents… and individual people.” Designing such solutions, while desirable, would entail solving a massive coordination problem. How to do that without already having massive coordination in place is the troubling question.

Design

And finally, writes Floridi, in response to the Tragedy of the Good Will, we must realize that we are not just “users” of the world, but also continual creators of it. When we act in the world, we are not moving about in a static space, but we are contributing to the future state of the world. We are designing the world as we go, however badly. This solution recognizes the possibility of not just rebalancing the information–power equation, but of changing the future states of the underlying world, which ultimately requires us to develop an ecological (or “e-cological,” as Floridi puts it, nodding to the centrality of information technology in the picture) ethics of design.

Myth for the Digital Age

My thinking and teaching are centered on the relationship of humans to information technologies—from paper documents to smartphone apps and AI systems. And of course, a deeper theme here is the age-old question of the relationship between humankind and nature.

One way to understand what we humans are doing when we make technology is that we’re constantly encapsulating ourselves from nature, continually adding layers of separation between ourselves and the world around us. (This is the view of Luciano Floridi, for example.) It may have started with clothing and shelter, but now we separate ourselves from the world with social media, online shopping, digital entertainment and on and on. And so what does that mean for us? What does it mean for humanity—past, present and future?

One way to explore such questions is through myth. The old stories help us ask the big questions: What do we want? Is it good for us? What will it cost? What are we willing to pay? Is it worth it?

So to help us ask these questions, I want to share a story that I learned from the storyteller Martin Shaw.

It was late in the lonely afternoon, and a hunter trudged through the forest. Belly stuck to spine, an unsuccessful day. When he came near to his hut, he saw something that terrified him: smoke coming out of the chimney. Someone was inside. Carefully he crept closer, and he discovered that whoever it was, they were gone now. But someone certainly had been there. Inside he found a warm fire and a hot meal. His clothes had been mended and cleaned. He felt something then that he couldn’t quite name. No one had ever cared for him in this way before.

Day after day, the same thing. At the end of the week, the hunter decided to come home early to see who it was who cared for him. He peered through the door, and there he saw a woman with her back to him, cooking at the stove. And he looked at her with his hunter’s eye, and he knew, like all hunters know, that she was not just a woman. She was part woman, part fox, and part spirit. And she knew, like all women know, that she was being watched. She turned around and said to him with authority: “I will be the woman of this hut!”

The hunter knew a good thing when he saw it, and he nodded and said, “Yes.”

“There’s just one thing,” she said. “Being part fox, I have my pelt, and I need to hang it on the inside of the door. Is that going to be all right with you?”

Again he nodded and said, “Yes.”

They had a wonderful night. He told stories and she told jokes and they both sang songs. The hunter’s life changed then—no longer was it so cold and solitary.

But over time, the pelt began to give off a strong, wild smell. You might think it was a small price. But the smell grew more and more pungent as time went on, and the hunter started to complain. “Do you have to keep the damn thing in the house?!” he said. As the months passed, the hunter could smell the wild scent on his pillow, in his clothes, on his own skin—even in his mind. His complaints grew more severe until one day he burst. “I told you before!” he shouted. “Get—rid—of—the—pelt!”

The woman simply nodded. And in the morning, the woman was gone, and the pelt was gone, and the scent was gone. And the man stood in the doorframe and looked out into the lonely wood, and he felt something that he’d never felt before. And they say that he still stands there, lonely in his whole body, for the scent of the fox woman.

This is what Shaw has to say about the tale:

I would suggest that we are that hunter, societally and most likely personally. The smell of the pelt is the price of real relationship to wild nature; its sharp, regal, undomesticated scent. While that scent is in our hut there can be no Hadrian’s wall between us and the living world.

Somewhere back down the line, the West woke up to the fox woman gone. And when she left she took many stories with her. And, when the day is dimming, and our great successes have been bragged to exhaustion, the West sits, lonely in its whole body for her. Stories that are more than just a dagger between our teeth. More than just a bellow of conquest. As I say, we have lost a lot of housemaking skills for how to welcome such stories. We turned our face away from the pelt. Underneath our wealth, the West is a lonely hunter.

Martin Shaw, “Turning Our Head from the Pelt”

Here Shaw sees the fox woman as wild nature. But the power of myth is in its thickness. We can also see the fox woman as our digital technologies—they do have a certain spirit to them. We have invited them into our lives without considering the costs… and once we are infused with their scent, they cannot be gotten rid of so easily, lest we stand, lonely in our whole body.

Myth, I think, for better or worse, can help us see our questions, and see them differently, but it will be up to us to forge the answers. So—where to from here?

Misinformation as Process

Misinformation may appear rather simple to define. As Wikipedia says, “Misinformation is false or inaccurate information.”

But in turn we use the term information in a variety of ways. Michael Buckland has written that these boil down to three main senses:

  1. as knowledge, or mental content
  2. as a thing, or a physical object
  3. as a process

Algorithm-generated cat picture by @normalcatpics on Twitter. Misinformation?

Buckland points out that information systems can only deal with information in the “thing” sense. Accordingly, efforts to design automated systems for misinformation detection rely on a conception of information-as-thing. The logic goes: if we can just determine which utterances are false and suppress them, we’ll have defeated fake news.
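To make that logic concrete, here is a minimal sketch in Python. The classifier stand-in, the function names, and the example posts are all hypothetical; the point is only that the pipeline treats each utterance as a freestanding object, with no model of who reads it, when, or in what context:

```python
# A minimal sketch (hypothetical names, toy data) of the
# "information-as-thing" logic: treat each utterance as a freestanding
# object, judge it true or false, and suppress the false ones.

def is_false(utterance: str) -> bool:
    """Stand-in for a hypothetical trained claim classifier."""
    known_falsehoods = {"The moon is made of cheese."}
    return utterance in known_falsehoods

def moderate(feed: list[str]) -> list[str]:
    # Keep only the posts the classifier does not judge false.
    return [post for post in feed if not is_false(post)]

feed = [
    "The moon is made of cheese.",
    "It rained in Copenhagen today.",
]
print(moderate(feed))  # ['It rained in Copenhagen today.']
```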

But I’d like to shine a spotlight here on information-as-process, and consonantly on misinformation-as-process, to show why defeating misinformation-as-thing isn’t the end of the story.

Construed as a process, information is a person’s becoming informed or informing. Just as we might talk about someone’s formation as their training or background, we can talk about their information as how they came to know or understand certain things. Granted, this usage is archaic; but conceptually, of course, it is still relevant to the human condition.

Looking at misinformation as a process, and not just a thing, invites us to explore the ways in which people become misinformed. Immediately we can see that it’s not enough to claim that we have true information on one hand and misinformation on the other. Sille Obelitz Søe makes a similar point when she concludes that “misleadingness—and not falsity as such—is the vehicle of misinformation and disinformation.” The language of “leading” here is clearly process-oriented.

Indeed, there are many cases where people are misinformed by true information. And yet, it doesn’t quite seem right to say that the information “itself” is misleading. Rather, it is the person–object system as a whole that is being misled. Some examples:

  • misconstruing some scientific evidence as support for a certain position—or, more broadly, cherry-picking evidence
  • taking a parody to be true or earnest
  • circulating in one context a photo taken in a different context
  • when news comes too late, or when it is circulated too late

These cases pose a problem for fact-checking as a solution to the proliferation of misinformation. Saying a news story is “true” or “partly true” does very little. (Granted, it does do something.) As Søe writes:

The dominant focus on truth and falsity disregards the communicative aspects of online sharing, posting, liking, etc. For communication to work, context, intention, belief, and meaning make all the difference. Whether some post is misleading (intentionally or by mistake) is dependent upon its meaning—and the meaning … is determined by the context in which it is posted and the intentions with which it is posted.

Sille Obelitz Søe, “Algorithmic detection of misinformation and disinformation: Gricean perspectives”

Taking a process approach to misinformation also helps us see the importance of time and temporality in assessing (mis)information. I am writing amidst the unfolding 2019–20 novel coronavirus pandemic. As our understanding of the situation develops, our sense of what is information and misinformation changes. For example, Vox tweeted on January 31, “Is this going to be a deadly pandemic? No.” On March 24, the outlet announced, “We have deleted a tweet from Jan 31 that no longer reflects the current reality of the coronavirus story.” Yet two weeks before that, Vox posted a story criticizing President Donald Trump for holding the same position they evidently held at the time, but without reference to their own tweet. It would seem that there is no issue of misinformation-as-false-thing here. (It is not even clear how to think about truth and falsity when the news is reporting on others’ speech, and particularly others’ predictions.) Yet when we understand misinformation as the process of misleading or being misled, the story becomes much more interesting—and complicated.

What is misleading, it seems to me, is at least in some cases relative, just like time and motion. I don’t mean “relative” in the sense of “there is no answer,” but rather that it depends on the context of all entities involved. That is, what a Trump supporter might call “misleading” may be considered simply “leading” by a Vox aficionado.

As Antonio Badia writes in The Information Manifold, automated systems in principle cannot solve this problem. Algorithms can deal only with information on the syntactic level, but identifying misinformation requires semantic and pragmatic capabilities. Algorithms can approach these levels if they are given syntactic surrogates for semantic and pragmatic concepts. A perfect surrogate would allow a system to detect, say, relevant articles with no false positives and no false negatives. It is clear that today’s surrogates are far from perfect—a recent example being automated content moderation flagging many false positives. And it’s not just that our algorithms need further improvement. It may simply be that no perfect (or even near-perfect) surrogates exist for certain abstract concepts. As Badia writes:

We have to be prepared that no such surrogates exist. … The lack of perfect surrogates entails that there will always be “backdoors” [to the algorithms we have in place] that a determined, smart human (living naturally on the semantic and pragmatic level) can exploit.

Antonio Badia, The Information Manifold
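To see how an imperfect surrogate fails in both directions, consider a toy sketch in Python. The keyword list and the examples are contrived for illustration; no real moderation system is this simple, but the failure modes are the same in kind:

```python
# A toy illustration of Badia's point, not a real moderation system.
# The keyword list is a *syntactic surrogate* for a semantic concept
# ("makes a bogus cure claim"); both failure modes follow directly.

SURROGATE_KEYWORDS = {"miracle cure", "big pharma hoax"}

def flag(text: str) -> bool:
    """Flag text whose surface form matches the surrogate."""
    lower = text.lower()
    return any(kw in lower for kw in SURROGATE_KEYWORDS)

# False positive: a debunking article matches the surrogate's syntax.
print(flag("Fact check: no, this 'miracle cure' does not work."))  # True

# False negative (a "backdoor"): the same claim, rephrased by a
# determined human, sails past the surrogate untouched.
print(flag("Doctors won't tell you about this one amazing remedy."))  # False
```

The debunking article is flagged because the surrogate sees only surface form; the rephrased claim slips through because the surrogate has no grip on meaning.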

Misinformation is rampant, but it’s not the only thing. We humans simply aren’t the best reasoners. Many people are pointing out that this pandemic is showing that we are not good at understanding exponential growth. But it’s also becoming clear that we are not good at understanding anything said by people who disagree with us politically, that we have amazingly short memories, that we entertain all manner of superstition, and on and on. So while efforts to curb misinformation are noble, I fear that they are misguided. At best, they are attacking only a small part of a much bigger issue; at worst they are using the wrong tools for the job. Perhaps, as Badia writes, we shouldn’t focus so much on detecting misinformation automatically. Rather:

The real question is, what caused the misinformation to be produced in the first place? This should be our concern, whether the information is produced by an algorithm or a person.

Antonio Badia, The Information Manifold

That, it would seem, is a different kind of enterprise, and its solution is more about education and dialogue than automatic filtering. Automated systems certainly have a place in human–computer systems, but we should not make the mistake of thinking they can work alone.