All posts by Tim Gorichanaz

Writing, memory and freedom

Is this memory?

One of the reasons we write things down is to help us remember. That seems clear; I don’t want to forget your phone number or what I was supposed to get from the grocery store, so I write it down.

Some ancient philosophers worried that writing would wipe away our capacity for organically remembering things. Even today, writers such as Nicholas Carr, author of The Shallows: What the Internet Is Doing to Our Brains, make analogous arguments.

But those worries seem secondary to the problem that writing (and information systems generally) creates the sense that all memories have been recorded and are retrievable.

First of all, this is an illusion. It is impossible to record everything, and gaps are inevitable. Frighteningly, we may not notice when things are missing, and we fill in the gaps through inference. We know this in everyday life as “jumping to conclusions.” The idea that we can record everything seems to come from the 20th-century enchantment with computing, whose metaphors have permeated many aspects of life.

Psychologist Arthur Glenberg argues against this position in his 1997 paper What Memory Is For. The idea of total recall—that our brains record everything we ever experience but lock it away, where it can be coaxed out through psychotherapy—is a myth, Glenberg argues. Rather, he advances a view of memory as a facilitator of action in our environments, and as something that is not totally accurate (and shouldn’t be!). More recently, Julia Shaw makes similar points in her book The Memory Illusion, in which she surveys the many ways our memories can betray us. The pernicious thing is not when we forget—it’s when we misremember things and think we’re correct. Shaw has also released an animated abstract for The Memory Illusion.

Second, remembering may not always be the best thing. Also in 1997, Geof Bowker published a paper on the importance of organizational forgetting. Organizations need to manage a lot of information and knowledge. Normally we think of this only in terms of remembering, but organizations also need to think about forgetting—especially of information that no longer serves them.

In the everyday lives of individuals, too, memories can sometimes be straitjackets. Sure, it’s easy to argue that forgetting an unpleasant memory could be problematic, but the ability to forget aspects of one’s past is also important for moving forward.

Modern technology has made this so much more complicated. Consider, for example, that you once had an abusive lover. You’ve since broken up, but remembering that person feels more painful than helpful. In the days of yore, you could simply burn all the photographs of the two of you and that would be that. But today, traces of your past relationship may be strewn about Facebook forever. The work of Oliver Haimson on the intersection of gender transition and social media presents another fascinating case.

The point is, the (im)possibility of forgetting becomes a crucial ethical issue today. Philosopher Luciano Floridi writes in The Ethics of Information:

Recorded memories tend to freeze the nature of their subject. The more memories we accumulate and externalize, the more narrative constraints we provide for the construction and development of personal identities. Increasing our memories also means decreasing the degree of freedom we might enjoy in defining ourselves. Forgetting is also a self-poietic [creative] art… Capturing, editing, saving, conserving, and managing one’s own memories for personal and public consumption will become increasingly important not just in terms of protection of informational privacy… but also in terms of a morally healthy construction of one’s personal identity.

We should remember that memory slips are not all bad—both mental forgetting and missing written information. And we should also be more humble about what we do “remember,” because it may very well be wrong.

Thinking about the relationship between writing and speech

An example of quipu, a knot-based information system used by the Incas. This could be considered a form of proto-writing.

What is writing, exactly? That question began to fascinate me as I worked on my master’s degree (in Spanish linguistics). There are a few posts on this blog on the topic (for example), but it’s a question that I haven’t yet grown tired of.

On the surface, we think of writing as a simple representation of speech. But that may not be quite right. That view would imply that the whole history of writing, from pictographs and other early information systems (flags, knots, etc.), was teleological from the start, just searching for the “correct” way to represent a spoken language that was already totally understood.

A 1996 paper by David R. Olson, “Towards a Psychology of Literacy,” challenges the assumption of writing-as-representation and introduces instead a view of writing as a model for language.

Olson traces the evolution of writing in this light, discussing the movement from representing tokens to representing words (towards the abstract). This movement occurred as syntax (what you might loosely call “grammar”) was introduced and grew more complex. With syntax, writing came to shed light on the structures of speech. The next movement was from thinking of whole words to thinking of units of sound, and from there the development of writing continued to facilitate the analysis of language in many ways—on the level of sentence meaning, for instance.

The cognitive changes that go along with the development of writing and literacy (discussed, for instance, by Walter Ong) are due to this mode of analysis. Some of the changes Olson discusses are a reduction in the felt magic of symbols, and the transformation of words and sentences into objects of contemplation and philosophy.

In summation, Olson writes:

In this way, writing systems, rather than transcribing a known, provide concepts and categories for thinking about the structure of spoken language. The development of a functional way of communicating with visible marks was, simultaneously, a discovery of the representable structures of speech.

On this view, learning to read is not simply a matter of learning the symbols that correspond to already-understood units of language, as is often assumed in pedagogy. Rather, it is a metalinguistic activity—it involves the discovery of those very units. Learning to read is essentially learning to tune in to your language and analyze it.

Thus writing is, by its very nature, a reflective activity. And so we have further grounds for why the practice of writing is conducive to self-reflection (what can be called the hermeneutic capacity of writing). As historian Lynn Hunt says, writing leads to thinking.

Writing in hyperhistory

There’s a well-known distinction between prehistory and history. History, we say, is everything after the invention of writing. This first happened around 5,000 years ago.

But we shouldn’t think of history as referring to a period in the earth’s development, but rather as a descriptor of how people live. That is, even after writing was first invented, many societies still lived prehistorically—without writing. Indeed, writing was invented many times, independently, in different parts of the world (China, Sumer, Egypt, Mesoamerica…). Even today, there are some remote tribes that live prehistorically.

But is the prehistory/history distinction enough? The philosopher Luciano Floridi posits that many of us today have moved into a different way of life, which he calls hyperhistory. Hyperhistory describes “societies or environments where ICTs [information and communication technologies] and their data processing capabilities are the necessary condition for the maintenance and any further development of societal welfare, personal well-being, as well as intellectual flourishing.”

We can summarize the distinction in this way:

  • Prehistory – No ICTs
  • History – Society is enriched by ICTs that store and transmit information
  • Hyperhistory – ICTs have overtaken other technologies and now society depends on ICTs to function

Floridi very briefly discusses the concept of hyperhistory and what it means for warfare in a Computerphile video.

Across his many works on the topic, Floridi discusses how hyperhistory plays out in economics, politics, warfare, information quality and other areas. The bottom line is that hyperhistory is new, and we are only beginning to grapple with its immense challenges. Floridi writes:

Processing power will increase, while becoming cheaper. The amount of data will reach unthinkable quantities. And the value of our network will grow almost vertically. However, our storage capacity (space) and the speed of our communications (time) are lagging behind. Hyperhistory is a new era in human development, but it does not transcend the spatio-temporal constraints that have always regulated our life on this planet.

And elsewhere:

It may take a long while before we shall come to understand in full such transformations, but it is time to start working on it.

When it comes to writing, what does hyperhistory mean? That’s a big question, but we can sketch some initial thoughts:

  • When there’s more information, there’s less time to devote to any given piece of it. Not only is there more to read, but we have to spend more time organizing it and searching for it. Commensurately, we’re seeing writing take different forms: shorter sentences, more lists, etc.
  • Non-text forms of information are proliferating. Big data visualizations, videos, etc.
  • Things change fast, and so writing on many topics quickly obsolesces.
  • Text is being processed more by other technologies than by humans. Machines cannot understand in the same way humans can—their “reasoning faculties” are quite different from ours. What does it mean to think of writing for the audience of both humans and machines?

Ah, things to think about…