
On Reading Heidegger

One of my favorite philosophers is Martin Heidegger (1889–1976). His work has influenced not only my scholarship, but also my worldview. The notion that human being is inextricable from the world, the concept of the “they-self” who we are when we follow the stream of society, the vision that our death gives meaning to our life, the idea that modern technology coerces us into seeing the world in terms of resources to be exploited, the understanding that thinking is a form of thanking… His insights really are numberless.

Heidegger is in a very strange position nowadays. On one hand, he is hailed as one of the most influential philosophers of the twentieth century, but on the other there is a palpable cone of silence around his name. There’s a mostly-unspoken notion that perhaps one shouldn’t engage with his ideas—or, God forbid, cite him. (Though I have to think he’s not too worried about his h-index.) The elephant in the room, to put it sensationally, is that Heidegger was a Nazi.

I saw a post on Twitter earlier this year that read, “Given that Heidegger was a Nazi, who should I cite for the concept of breakdown?” First and foremost, such a question strikes me as a form of academic dishonesty, though I do understand the impulse comes from a good-hearted place—not wanting to support a hateful regime. It reminds me of when I read Robert Sokolowski’s Phenomenology of the Human Person, which comes to the same substantive conclusions as Heidegger in Being and Time and yet only mentions Heidegger glancingly and tangentially. In my opinion, we should give credit where it’s due, whether we like where it’s due or not.

Anyway, I read and cite Heidegger quite a lot in my own work, and I thought it would be worthwhile to reflect on my doing so. Particularly now, as a certain way of thinking becomes more prevalent: a tendency to take people as one thing only—you’re either with us or against us—without recognizing that, as Alexander Solzhenitsyn put it, “the line dividing good and evil cuts through the heart of every human being.” This is the impulse that leads people to topple statues commemorating historical figures who, notwithstanding any of their virtues, committed the sin of holding slaves. Amartya Sen argued that such a “solitarist” approach to identity is fallacious and harmful—and yet it is apparently so enticing to us that we slip into it again and again.

Saying “Heidegger was a Nazi” gives quite a particular impression, and the first question we should ask is whether that impression is true. This has been widely discussed, and I don’t want to rehash all that discussion here, but some broad strokes are warranted. In the end, my own conclusion is that Heidegger was more a coward than anything.

Heidegger was a professor at the University of Freiburg and joined the Nazi party in 1933, becoming rector of the university (i.e., president). In this role, he was responsible for removing Jewish faculty from the university, enforcing quotas on Jewish students, implementing race-science lectures, and so on. Adam Knowles, my colleague at Drexel, has written that Heidegger carried out these duties with efficiency. A year later, Heidegger resigned from the rectorship and stopped participating in Nazi activities, though he never formally left the party. Alongside all this, we have to contend with the fact that, as a philosopher, Heidegger worked with and inspired a slew of Jewish philosophers with whom he had friendly relationships, such as Hannah Arendt and Hans Jonas, not to mention his mentor Edmund Husserl.

Then there’s the issue of Heidegger’s infamous notebooks. Throughout his life, Heidegger kept philosophical notebooks in which he recorded ideas and observations. The publication of these notebooks caused a renewed (and sensationalized) interest in the question of Heidegger’s Nazism, as they contain certain anti-Semitic snippets. But these comments should not be mistaken for reflective philosophical views; for a philosopher, a notebook is a tool for thinking, for playing with ideas, for exploring. We should be careful about inferring a person’s considered beliefs from what they jot down in philosophical notebooks.

But even if we do suppose that Heidegger was a dyed-in-the-wool Nazi, should that matter? A thorny question if there ever was one. Nazi scientists made a number of scientific discoveries, sometimes through unethical research on human beings. Whether and how such findings should be used is difficult to say. The antimalarial drug chloroquine, for instance, was originally developed using human subjects in concentration camps. Should we do away with it? And how do we account for the advances that have built on such findings since World War II? The trouble with science is that it is cumulative.

Knowledge gained through coercive and unsafe research is one thing, but what about desk research funded by the Nazi regime? For instance, Fanta was developed in Nazi Germany. Should we stop drinking it? When it comes to philosophy, perhaps, the relevant question is whether the philosophy is somehow sympathetic with Nazism. On that question, with regard to Heidegger, a regiment of philosophers can be marshaled both for and against. It is anything but clear. And so how to rule in such a case? To me, it begins to look much more like a typical ad hominem fallacy. In my view, philosophical arguments should be assessed on their own merits, not on the basis of who said them. Heidegger himself seems to have held the same view; when tasked with recounting the life of Aristotle, he gave one sentence: “He was born at a certain time, he worked, and he died”—because that’s all that matters, when the subject is Aristotle’s philosophy. But I do wonder if, as time goes on, we are finding ad hominem arguments more and more persuasive. It seems to me that, societally, we’re having quite a difficult time separating people from ideas.

Another consideration: Should we expect our philosophers to be morally good? (To be sure, I think we should expect everyone to be morally good, but I guess the question is whether we should expect more of philosophers than of others.) In ancient times, when philosophy was more clearly understood as guidance for how to live a good life, perhaps the answer was yes. But in modern philosophy, there seems to be a disconnect. Arthur Schopenhauer (1788–1860) was a famous misogynist and once pushed a woman down a flight of stairs, and Ludwig Wittgenstein (1889–1951) was a tyrannical and abusive schoolteacher. John Searle (b. 1932) allegedly sexually harassed, assaulted, and retaliated against a student while a professor—not to mention that, earlier, as a landlord, he played a major role in large rent increases for students at his university. A 2019 study found that philosophy professors by and large do not behave any more morally than other academics. (There’s an exception, evidently, when it comes to vegetarianism.)

Is it fair, then, to malign Heidegger and give all these other philosophers a pass? We might recall the Christian maxim here: “Let the one who is without sin cast the first stone” (John 8:7).

The journey of a research paper: rejections, re-visions and revisions

In academia, we are used to reading published, polished papers. This masks the messiness of the publication process—to say nothing of the hair-pulling, self-doubt and frustrations. Particularly for new researchers, it can be instructive to see how a paper changed along its road to publication. To that end, I thought I’d share a bit about one of my own papers.

I recently published “The Self and the Ontic Trust: Toward Technologies of Care and Meaning” in Journal of Information, Communication & Ethics in Society. The paper provides a philosophical discussion of the self, information ethics and technology design, using the distinction between selfies and self-portraits as an illustration. This paper changed a lot between its initial submission in August 2017 and its final acceptance in January 2019.

Originally, the paper focused more directly on the selfie phenomenon and was titled “The Self in the Selfie: An Ethical Discussion of Digital Self-Representation.” It was submitted to an information science conference, where it was rejected. Then it was expanded slightly and submitted to a philosophy of technology journal, where it was also rejected. Here’s the abstract:

Humans use technology to extend, explore and establish the self. Digital tools offer many novel ways to achieve these ends, opening up urgent questions of information ethics, particularly regarding the role of selfhood in today’s world. To approach these questions, this paper explores the ethical directive of self-care through the pervasive phenomenon of the selfie. This paper considers the selfie as a documentary form, i.e. through the framework of document theory, discussing how the selfie performs reference, provides evidence and manifests meaning. This analysis is based on a hermeneutic review of the scholarly literature on the selfie. Framing the selfie as a document offers insight into how the self is constructed and understood through the selfie. This leads to an ethical discussion, considering how selfies can contribute to and detract from human flourishing. This discussion focuses on the ethical directive of self-care, describing how lessons from ancient technological practices can be applied today.

One of the reviewers alerted me to a thorough literature review on the selfie, which had been published after I had conducted my initial literature review, and this meant much of my submission was now duplicative. Unfortunate, but it happens. I shelved this piece for a while, unsure of where its contribution now lay.

Later I became interested in the question of whether we have a duty to be informed. For the past several years, of course, we’ve heard much about the “right to be forgotten.” If we have such a right, then others must have an obligation to ignore. If we don’t have such a right, then it ought to be because we have a right to know, and then others must have an obligation to inform. Most likely we should say that in certain situations all of these rights and duties arise. But could it also be that in some cases we have a duty to know? I wrote a paper reflecting on this, and I submitted it to a philosophy journal. Here is the abstract:

We have a duty to be informed. On an informational ontology, all things are informed to some extent, but those entities that are more informed are better entities. This is particularly the case, and of particular significance, with humans, as we are the only known entities that are informed about our being informed, and consequently we can direct the procession of our being informed. As a self, a person has a duty to be the best self possible. Growing as a self involves valuing, loving and caring. It is a matter of discovering and cultivating values, coming to love particular people and things, and caring about these values, people and things. This is the depth of what it must mean to “be informed.” In short, a person ought to be informed about those things they act on or intend to act on, i.e., their concerns. Under this frame, we can reinterpret so-called duties such as not to lie as duties not to inhibit others’ being informed. We can also reinterpret the so-called duty to ignore as rather a duty to cultivate healthy concerns.

This paper was rejected. I set it aside for a while, and then I realized that there was an interesting conversation to be had at the intersection of this paper and my also-rejected selfie paper. This led me to combine the two papers into “The Self and the Ontic Trust,” which I submitted to the Journal of Information, Communication & Ethics in Society, where it was accepted with some revisions. Here is the abstract:

Purpose – Contemporary technology has been implicated in the rise of perfectionism, a personality trait that is associated with depression, suicide and other ills. This paper explores how technology can be developed to promote an alternative to perfectionism, which is a self-constructionist ethic.
Design/methodology/approach – This paper takes the form of a philosophical discussion. A conceptual framework is developed by connecting the literature on perfectionism and personal meaning with discussions in information ethics on the self, the ontic trust and technologies of the self. To illustrate these themes, the example of selfies and self-portraits is discussed.
Findings – The self today must be understood as both individualistic and relational, i.e., hybrid; the trouble is balance. To realize balance, the self should be recognized as part of the ontic trust to which all information organisms and objects belong. Thus technologically-mediated self-care takes on a deeper urgency. The selfie is one example of a technology for self-care that has gone astray (i.e., lost some of its care-conducive aspects), but this can be remedied if selfie-making technology incorporates relevant aspects of self-portraiture. This example provides a path for developing self-constructionist and meaningful technologies more generally.
Practical implications – Technology development should proceed with self-care and meaning in mind. The comparison of selfies and self-portraits, situated historically and theoretically, provides some guidance in this regard. Some specific avenues for development are presented.
Originality/value – The question of the self has not been much discussed in information ethics. This paper links the self to the ontic trust: the self can be fruitfully understood as an agent within the ontic trust to which we all belong.

Reversing the English-only trend in science

“Heart of Gold,” a colorful God’s Eye by Jay Mohler. Jay sells his sculptures on Etsy.

Often we think of science as uncovering a God’s-eye-view of the universe—dare I use the word objective? Sure, this may be the ultimate goal of some branches of science, but even in these cases, the road to God’s Eye is anything but monochromatic.

Our language colors the way we think. Words and phrases may reveal connections that would be invisible to speakers of other languages. (For a riveting exploration of human analogy-making, check out Doug Hofstadter and Emmanuel Sander’s 2013 book Surfaces and Essences.)

In science, then, scholars who speak and write in different languages may take vastly different approaches to solving problems. They may identify different problems, to begin with, but even in exploring the same problems as scholars in other languages, they may proceed differently. This is another reason why I am a proponent of linguistic diversity: These different approaches serve to enrich the human scientific enterprise.

A recent BBC article by Matt Pickles brings attention to the trend toward English dominance in science, and in academia in general. Higher education is becoming ever more Anglophone, as is scientific communication. We write in language, of course, and the way we write also interfaces with the way we think. From this series of perhaps-obvious observations, we can appreciate that language, writing and thought are intertwined. Because science advances through writing, the linguistic whitewashing of scientific communication also serves to whitewash science itself. For instance, because international journals are unlikely to accept non-English quotations, authors who want to publish in these journals (often used as a measure of their success as researchers) may be coerced into subscribing to Anglophone theories and methods, as “nonstandard” approaches may not be deemed publishable.

The move toward all-English has an interesting historical parallel, drawn out in the article linked above. Centuries ago, science was written in Latin. A German campaign for scientific linguistic diversity reminds us that Galileo, Newton and Lagrange abandoned Latin in order to write in their vernacular. (We see the same in the literary world: Dante, for instance.) Professor Ralph Mocikat, a German molecular immunologist who chairs this campaign, says that the vernacular “is science’s prime resource, and the reintroduction of a linguistic monoculture will throw global science back to the dark ages.”

What can be done to foster linguistic diversity in science? Because of all the machinery involved, it will surely be a slow process. But it has to start somewhere. Here are a few ideas that come to mind:

  • For academic institutions:
    • Require second-language proficiency of all PhD students.
    • Find ways to facilitate searching the literature in other languages.
  • For journals:
    • Allow space for translations of papers, perhaps one article per issue, or perhaps in an annual special issue of translations.
    • Publish abstracts in multiple languages, even if the content itself is only in one language.
    • Provide translation services to facilitate access to academic work in other languages.
    • Broaden your base of peer reviewers to include researchers with other native languages.
  • For researchers:
    • Participate in international conferences, particularly smaller ones. Talk to researchers in your field whose native language is not English.
    • If you don’t speak another language, start learning one. It’s easier than you think. If you do, search the literature in that language the next time you write a paper.

What else?

Update: This post spawned an interesting conversation on Facebook with a few of my friends. When assessing this trend, we should also consider the needs and values of specific fields. Though I stand by the above discussion for the kind of research I do (humanities and “soft” sciences), a linguistic monoculture could indeed be valuable for certain work in the natural sciences. Clarity means safety, as a friend who works with dangerous chemicals said. Moreover, using one standardized term for a phenomenon rather than a panoply of regionalisms has benefits, such as making a literature search easier.

Thanks to Dr. Deborah Turner for bringing the BBC article to my attention.