Reversing the English-only trend in science

“Heart of Gold,” a colorful God’s Eye by Jay Mohler. Jay sells his sculptures on Etsy.

Often we think of science as uncovering a God’s-eye-view of the universe—dare I use the word objective? Sure, this may be the ultimate goal of some branches of science, but even in these cases, the road to God’s Eye is anything but monochromatic.

Our language colors the way we think. Words and phrases may reveal connections that would be invisible to speakers of other languages. (For a riveting exploration of human analogy-making, check out Doug Hofstadter and Emmanuel Sander’s 2013 book Surfaces and Essences.)

In science, then, scholars who speak and write in different languages may take vastly different approaches to solving problems. They may identify different problems, to begin with, but even in exploring the same problems as scholars in other languages, they may proceed differently. This is another reason why I am a proponent of linguistic diversity: These different approaches serve to enrich the human scientific enterprise.

A recent BBC article by Matt Pickles brings attention to the trend toward English dominance in science, and in academia in general. Higher education is becoming ever more Anglophone, as is scientific communication. We write in language, of course, and the way we write also interfaces with the way we think. From this series of perhaps-obvious observations, we can appreciate that language, writing and thought are intertwined. Because science advances through writing, the linguistic whitewashing of scientific communication also serves to whitewash science itself. For instance, because international journals are unlikely to accept non-English quotations, authors who want to publish in these journals (often used as a measure of their success as researchers) may be coerced into subscribing to Anglophone theories and methods, as “nonstandard” approaches may not be deemed publishable.

The move toward all-English has an interesting historical parallel, drawn out in the article linked above. Centuries ago, science was written in Latin. A German campaign for scientific linguistic diversity reminds us that Galileo, Newton and Lagrange abandoned Latin in order to write in their vernacular. (We see the same in the literary world: Dante, for instance.) Professor Ralph Mocikat, a German molecular immunologist who chairs this campaign, says that the vernacular “is science’s prime resource, and the reintroduction of a linguistic monoculture will throw global science back to the dark ages.”

What can be done to foster linguistic diversity in science? Because of all the machinery involved, it will surely be a slow process. But it has to start somewhere. Here are a few ideas that come to mind:

  • For academic institutions:
    • Require second-language proficiency of all PhD students.
    • Find ways to facilitate searching the literature in other languages.
  • For journals:
    • Allow space for translations of papers, perhaps one article per issue, or perhaps in an annual special issue of translations.
    • Publish abstracts in multiple languages, even if the content itself is only in one language.
    • Provide translation services to facilitate access to academic work in other languages.
    • Broaden your base of peer reviewers to include researchers with other native languages.
  • For researchers:
    • Participate in international conferences, particularly smaller ones. Talk to researchers in your field whose native language is not English.
    • If you don’t speak another language, start learning one. It’s easier than you think. If you do, search the literature in that language the next time you write a paper.

What else?

Update: This post spawned an interesting conversation on Facebook with a few of my friends. When assessing this trend, we should also consider the needs and values of specific fields. Though I stand by the above discussion for the kind of research I do (humanities and “soft” sciences), a linguistic monoculture could indeed be valuable for certain work in the natural sciences. Clarity means safety, as a friend who works with dangerous chemicals said. Moreover, using one standardized term for a phenomenon rather than a panoply of regionalisms has benefits, such as making a literature search easier.

Thanks to Dr. Deborah Turner for bringing the BBC article to my attention.

How should emphasis be marked in new writing systems?

Writing in the Roman alphabet, as we do in English, is a major privilege when it comes to communicating online. If your language doesn’t use it, you will face some serious challenges, to say the least.

That’s why I was so delighted to hear about Phoreus Cherokee, a new and quite complete typeface for the Cherokee language.

Recently the Cherokee recognized that, in order for their language to survive, it needed to get online. The Cherokee lobbied Apple to include support for their language, and they were successful. But there was still a visual issue: because it was historically underprivileged, the language never got to flourish typographically as other languages did. It had only two typefaces, and they were limited. “Many of the glyphs weren’t accurate or were completely wrong,” according to Roy Boney, a Cherokee language services manager (quoted in the article linked above).

Enter type designer Mark Jamra, who tasked himself with creating a complete typeface of the Cherokee syllabary. Watch the video below for more.

One fascinating part about this story is that Jamra doesn’t speak Cherokee, and he only became familiar with the writing system through this project. He studied historical documents in order to uncover the “essence” of each of the letters. In this way, as he says in the video above, “I wouldn’t just be channeling the one-off quirks of some writer in, say, 1853, but rather I’d be basing them on forms that anyone could read who could read Cherokee.” Brilliant.

The story and video mostly focus on Jamra’s innovation in creating an italic version of the script, as well as lowercase glyphs. Now, this is quite interesting. At first, the creation of these glyphs seems unequivocally good: After all, more = better.

But not all languages have upper and lower case, and not all use italics (and bolding). These are conventions of the Roman alphabet as it’s evolved over time, and I can’t help but think it’s colonial, presumptuous, etc., to assume that all languages ought to have them.

Take Japanese, for instance. There’s no upper/lowercase distinction in Japanese, and there’s no italic or bold. A number of conventions have developed instead. To give a quote or title, the Japanese would use half-brackets, 「like this」. For emphasis, they would either alternate syllabaries (the language has two syllabaries that are used in concert), or use “emphasis points” (called bōten or wakiten) alongside the characters to be emphasized. This can be seen in the image below, a photograph from my copy of 星の王子さま (a.k.a. The Little Prince), where the characters けもの (beast) are being emphasized.

Page from The Little Prince - Japanese
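The syllabary-alternation trick, incidentally, is mechanical enough to sketch in code: in Unicode, the hiragana block (U+3041–U+3096) and the katakana block (U+30A1–U+30F6) run in parallel, offset by 0x60, so “emphasizing” a hiragana word by rewriting it in katakana is a fixed code-point shift. A minimal sketch (the function name is my own):

```python
# Sketch of "emphasis" by syllabary alternation: rewrite hiragana as katakana.
# The two syllabaries occupy parallel Unicode blocks offset by 0x60
# (hiragana U+3041-U+3096, katakana U+30A1-U+30F6).

def to_katakana(text: str) -> str:
    """Map each hiragana character to its katakana counterpart;
    leave everything else untouched."""
    return "".join(
        chr(ord(c) + 0x60) if "\u3041" <= c <= "\u3096" else c
        for c in text
    )

print(to_katakana("けもの"))  # → ケモノ
```

(The emphasis dots themselves, by contrast, are a typesetting matter rather than a character-level one.)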

All this is to say that different languages have different needs and have developed different ways to show paralinguistic distinctions. How we do things in our alphabet is not the only way! In a world where more and more languages are adopting the Roman alphabet, we should celebrate diversity where we still have it, shouldn’t we?

(Or maybe the real lesson here is that I have a bone to pick with everything!)

Emoticon as word

Two years ago I reviewed some of the literature on emoticon use, concluding that emoticons are words. This was, perhaps, a mere academic curiosity. After all, emoticons don’t share all the characteristics that we typically associate with words—pronounceability, for instance. This shouldn’t deter us, though: emoticons are artifacts of visual communication, and it doesn’t make sense to limit them by definitions drawn from aural communication. But who really cares/cared?

This year, Oxford Dictionaries announced that their word of the year is the Face with Tears of Joy emoji. This is delightful. Apparently 2015 was a big year for emoji, and particularly 😂. And yet there’s a lot of resistance to this… presumably because emoji aren’t perceived as words. Indeed, even the Oxford Dictionaries announcement says:

😂 was chosen as the ‘word’ that best reflected the ethos, mood, and preoccupations of 2015.

Their putting “word” in quotes implies that they aren’t so sure it’s actually a word, even though they seem to have a hunch (in my opinion, correctly) that it is. Other news sources, predictably, are a little more inflammatory. Newsweek proclaims: “Oxford Dictionaries Word of the Year Is Not a Word.” Ditto for Fox News (Oxford Dictionaries’ ‘word’ of the year is not a word).
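For what it’s worth, “Face with Tears of Joy” isn’t just Oxford’s label: it is the emoji’s official Unicode character name, which can be checked with Python’s standard `unicodedata` module. A quick sketch:

```python
import unicodedata

emoji = "\U0001F602"  # the 2015 "word" of the year

# Look up the official Unicode name for the character.
print(unicodedata.name(emoji))  # → FACE WITH TEARS OF JOY

# It is a single code point, just like any other character.
print(len(emoji))  # → 1
```

So at least at the level of the character encoding, 😂 is treated exactly like any other named symbol we write with.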

I suspect all linguistics students and teachers are tapping the tips of their fingers together in a Mr. Burnsian fashion at all this. Because now the consumers of popular media are (well, hopefully) confronting the question of what a word actually is—which is the subject of considerable grappling in any Linguistics 101 course.