
Private Belief, Public Scholarship

It’s time to talk openly about religious commitment among academics.

By Mark U. Edwards, Jr.

Faculty members at institutions of higher learning in the United States generally recognize that religion plays a crucial role in national and world politics, economics, and social relations. We hardly need to be convinced that a well-educated citizen needs to understand religious variety in light of the increasing presence in America of adherents to all the great world religions. We realize that from abortion to civil rights, religious conviction impels American citizens to organize, protest, and engage in often fiercely partisan politics. We hear daily about major religious traditions—Christian, Islamic, Jewish, Hindu, Buddhist—clashing with one another or with modern culture and the emerging global economic system. We know that world literatures, philosophies, and languages reflect religious conviction and offer views of the human, the world, and the sacred that give meaning, purpose, and value to life. As a result, religious issues and traditions are regularly addressed in courses from political science and sociology to literature and philosophy. Many colleges and universities also have a religious studies department that is dedicated to the academic study of religious traditions.

But many of us seem reluctant to come to grips with religion’s role as a powerful influence on how faculty and students may see and understand the world. Reticence about one’s religious or spiritual convictions is the default mode today for most scholars in most colleges and universities. Religion in the academy is a conversation-stopper, to borrow Richard Rorty’s memorable characterization. Faculty do not generally mention their personal convictions in their writings or their teaching.

Some of us may wish that this reticence were more widespread and better observed. Others may feel constrained or even stifled by the expectation that we remain silent about matters that matter so much. Many may not care deeply one way or the other, but have to live and work with those who do. Here, I want to address primarily those who feel stifled by the default mode—by religion as conversation-stopper—as well as those in the middle who have to live with what may seem like warring camps. In so doing, I hope to encourage those who approve of the default mode to consider more deeply the trade-offs that their preference entails.

 

To explain this reticence historically, it helps to recall that many academic disciplines arose out of, and often as substitutes for, religious ways of thinking. Within several new disciplines, the discipline’s construal of self and world replaced a traditional, religious alternative. Put simplistically, but not inaccurately, many of today’s academic disciplines started as alternatives to religiously informed ways of knowing.

During much of the nineteenth century, natural science’s status within American colleges and universities depended on its close association with Christian theology. Above all, science provided evidence of design. Because God was assumed to have created the human mind in the image and likeness of the divine rationality, it was further assumed that the study of science disclosed the rationality with which the creator had endowed the creation. These natural, easy assumptions about God as author both of scripture and of nature, and hence about their harmony, gradually allowed scientists to explore nature apart from what scripture had to say about it, laying the groundwork for an eventual separation of the two.

Between 1830 and 1870 scientists came increasingly to limit their discussion to natural phenomena, favoring causal explanations that rested on “secondary causes” in nature rather than on intervention from beyond nature. With time, the appeals to supernatural explanations diminished and finally disappeared altogether. In effect, what constituted an explanation changed. If scientists were unable to account for a natural phenomenon, their proper response was not to invoke God, but rather to pursue further scientific inquiry. After 1870, most scientists assumed that all natural phenomena were amenable to naturalistic description and explanation. This assumption was consistent with the theological assumptions regarding the harmony of scripture and nature that had preceded it and had made it an option. Many scientists remained personally religious, but as disciplinary professionals they were loath to bring such considerations into either scholarship or teaching.

About the same time that the naturalistic assumption swept the field in the natural sciences, scholars of what might be termed the “human sciences” broke away from the traditional college course in moral philosophy. During much of the nineteenth century, the moral philosophy course was the capstone of a collegian’s education. It sought to pull together all that the student had learned in the set curriculum of his day and arrange it into a coherent Christian system of knowledge and duties. Moral philosophy had as its purview human motives and obligations, the social relations of human beings, the harmony of nature and scripture, the agreement of natural science with human morality, and the unity of the true, the good, and the beautiful. As an indication of its crucial integrative function, it was a yearlong course offered to seniors and commonly taught by the college’s president, who was usually an ordained minister.

These nascent disciplines—history, psychology, political science, economics, sociology, and anthropology—allied themselves with the natural sciences, both to acquire some of their allies’ prestige and out of a widely shared conviction that the scientific method provided the surest avenue to truth. The timing was significant for the character of these new disciplines. In contrast to the moral philosophy courses out of which they emerged, where providence and divine intervention featured prominently, the new human sciences employed a rhetoric and methodology that were rigorously naturalistic. By the turn of the century, psychologists, sociologists, and anthropologists were employing their scientific methodology to understand religion itself in naturalistic terms.

Concentrating on the discovery of causal relationships and agents, social scientists became resolutely empirical even as they sought to discover “laws” of social behavior. During the Progressive Era, they often conjoined their zeal for scientific advance with a conviction that scientific progress would drive social improvement. The prediction and control promised by the scientific method would be put to service for social engineering. By the 1920s the younger social scientists had embraced the “value-free” model of objective science. While they continued to see social utility arising from their research, they professed that a value-free approach was crucial to making their results socially useful.

Finally, the humanities grew out of the classical education offered in antebellum America. Drawing on a long history of liberal education and its (often conflicting) functions, the predecessor courses to today’s humanities courses sought to promote a broadly “Christian” morality and shape character by requiring students to read the classics, which were seen as “edifying” texts. In the early decades of the twentieth century, this function underwent revision as humanists came to champion a different route to character formation. They now sought to form character by exposing undergraduates to the art, literature, and thought of Western civilization. Recent engagements in the “culture wars” and “the battle over the canon” draw, in an often confused and ironic way, from these earlier developments.

We faculty are also reluctant to bring religious perspectives into our scholarship and teaching for moral or political reasons—and this has a history as well. Consider the history of religious discrimination. Many elite Eastern universities and colleges maintained quotas on the admission of Jews well into the twentieth century. By one tally, Jewish students made up nearly 10 percent of enrollment at over 100 institutions in 1918–19, while constituting only 3.5 percent of the American population. The percentages would have been much higher, however, had criteria of intellectual merit alone prevailed. Roman Catholics and sectarian Protestants were also underrepresented, although for different reasons.

This discrimination was given intellectual justification. When the social scientists took on for a time the task of moral formation, they questioned whether Jews or Catholics were capable of conveying what amounted to Christian moral precepts (shorn, to be sure, of their explicitly Christian marks). When humanities faculties took up the burden of “character formation” after the social scientists had laid it down, they inherited the suspicion that “outsiders” were less likely than “insiders” to appropriately pass on the tradition the West had inherited. By the 1930s professors in the humanities, as the self-described bearers of “Western culture” (which was often taken to be synonymous with Protestant Christianity minus theological particulars), were often the most resistant to the entrance of religious and racial minorities into their ranks.

Catholics were still suspect at mid-century because they were thought to owe allegiance to what many liberal Protestants and secularists saw as a dogmatic, authoritarian, and “un-American” faith. Catholic “authoritarianism,” it was charged, was antithetical to the ideals of democracy and free inquiry.

Even after explicit quotas used against Jewish and Catholic students were lifted, bars to professorial positions lingered for a time (and may still exist at some institutions). For example, Dan Oren, in his Joining the Club: A History of Jews and Yale, points to a single Jewish professor in all of Yale College in 1950 and only a scattering of Jews within the professional schools. This “establishment” hegemony unraveled over the next several decades, and its undoing benefited first Jews and then Catholics, women, African Americans, and others. Scholars from this transitional generation and their students have ample reason to look skeptically on overt religious considerations in higher education, for such considerations kept them on the outside for so many years.

 

So, given this history of conflict and discrimination, and given the alternatives to once-religious ways of knowing established by our various academic disciplines, why would we want once more to allow religious discourse into scholarship and teaching? What has changed? Let me suggest three major factors.

First, in the last half century, the WASP establishment hegemony has come undone. Both faculties and student bodies have become more diverse—religiously, ethnically, and socially. With increased diversity has come increased secularity. And with increased secularity has come increased security for minorities who always have to cope with majoritarian expectations. Ironically, perhaps, for those who expect secularity and diversity always to reinforce each other, the achievement of sufficient diversity and secularity has opened up safe space for individuals to go public with religious convictions that they would formerly have been reluctant to mention, either for fear of majority reactions or out of concern for the effect on minority sensibilities and rights. In a pluralist and individualistic campus setting, it has become safer and more acceptable to do one’s own thing, including one’s own religious or spiritual thing. At my institution, Harvard, we now have Jewish and Catholic faculty explicitly drawing on their religious traditions to inform their own scholarship—something you would not have experienced or expected even a few decades ago.


Second, the heady confidence with which the twentieth century began—namely, that reason and the scientific method were the sovereign means for understanding the natural and social worlds—has given way to more provisional claims and a greater awareness of the limits to human knowledge, now and in the future.

Recall the history of how the intellectual content of religious belief and practice was held up to the “scientific” standards of the day and judged deficient. A willingness to advance moral claims next fell to scientific scruples, and scholars in the sciences and then the social sciences came to advocate “value neutrality.” Pluralism confronted exclusive claims to (dogmatic) truth and yielded uncertainty and doubt. This growing uncertainty and the related retreat from traditional religious beliefs and practices reinforced scholarly conviction that religion was properly a private matter.

But the unraveling of easy certainties was not confined to the religious sphere. The critical tools that undermined dogmatic religious claims came to be turned on reason and the scientific method themselves. In many disciplines, confidence in the scientific method and detached reason, with the related notions of objectivity and value neutrality, also waned or became more qualified as the promises of rational means and scientific method fell short of expectations in the natural sciences and even more so in the social sciences. Romantic and postmodern reactions complicated knowledge claims and opened up space for a dizzying range of perspectives, from “intellectually fulfilling atheism” (biologist Richard Dawkins) to “warranted Christian belief” (analytic philosopher Alvin Plantinga). Although the default assumption—at least on the coasts—remains that true intellectuals and scholars are not religious, the actual state of affairs is far more complex. Even on the coasts and in research universities, religious or spiritual academics are coming out of the closet.

Finally, there’s the larger world, where religious conviction and practice are shaping political, economic, and social life in ways impossible to overlook or ignore even in our ivory tower. Students bring religious conviction and practice to campus and into classrooms. Outside groups like Students for Academic Freedom push for legislation mandating what they term “intellectual diversity” in faculty hires, syllabi, and class presentations. Calvary Chapel Christian School in Murrieta, California, sues the University of California system for refusing to certify for use in admissions three of the school’s courses—in history, literature, and social studies—that approach their subject matter from what the school describes as a “Christian perspective.” The battle between Israel and the Palestinians—sometimes cast as a battle between Jews and Muslims, with evangelical Christians of a dispensational bent allying with religious Zionists—fuels boycotts and student demonstrations. Religious conviction bursts through attempts to confine it to chapel services, dorm-room discussions, and topical courses.

In these changing circumstances, the default mode will no longer do. After all, reticence is not the same thing as absence. Not mentioning religious convictions does not make them go away. They are still present. They still work their influence, but perhaps without appropriate examination, discussion, and compensating adjustment. In other words, in acceding to the proposition that religion should be a conversation-stopper, there may be a loss of critical self-awareness.

Such restraint may also be doing our students a pedagogical disservice. If we fail to discuss or even mention the role that deep personal convictions may play in career choices and scholarly interpretations, we may be tacitly encouraging our students to conclude that they don’t have to worry about such things. We then forgo a splendid opportunity to illustrate the hard work that scholars undertake to identify their subjective inclinations and compensate for their improper effects.

We may also be fooling ourselves or misleading our colleagues. Sometimes, and for some faculty, the best reason we can honestly give for what we believe on a particular issue, and why we believe it, is, frankly, religious. In such cases, why not simply say so? It may be disingenuous, and unhelpful to the task of advancing understanding and knowledge, if we pretend otherwise.

 

Let me suggest that there are at least three areas where explicitly religious or spiritual warrant may be appropriate and, at least for some faculty, even required. Religious traditions (and most spiritual worldviews) have a stake in how a person 1) thinks about morality, 2) understands human nature, and 3) construes the cosmos. These three broad topic areas do not exhaust the sort of convictions that characterize religious communities, but they do capture crucial areas where religious or spiritual conviction bears on interpretations advanced within the modern American academy.

Let’s start with moral and ethical claims. Scholars or teachers often make moral or ethical judgments about what should be studied or about the use to which scholarship should, or should not, be put. When they do so, they may want to alert their faculty colleagues or students in those cases where their moral judgment comes out of commitment to a specific religious tradition.

For example, various traditions may oppose embryonic human stem-cell research on moral grounds because the stem cells are collected from (very early stage) human embryos produced in fertility clinics or developed for the explicit purpose of embryonic stem-cell research. If a biologist belongs to such a tradition and shares its views, I see no compelling reason why, in stating his moral reservations, he should not refer to the religious or spiritual source of his objection.

Of course, as with any moral or ethical claim, the person advancing it will need to be prepared to argue for its validity and be willing to consider its broader implications. Moral arguments may not be easily settled, but, in academe at least, they deserve more than mere assertion and counter-assertion. In fact, given the academy’s commitment to the pursuit of understanding, moral disagreements that remain unresolved may nonetheless deepen the participating parties’ understanding of each other, of the contending positions, and of their own views and their implications. This is no small gain.

Scholars in the natural sciences and social sciences occasionally confuse science with metaphysics (or at least that branch of metaphysics that concerns itself with a “maximally comprehensive view of reality”). In short, they draw conclusions that exceed science’s grasp. For example, a biologist may confuse methodological naturalism, which assumes, as a matter of method, that scientifically adequate explanations for a natural biological phenomenon should be supplied by causes and factors that do not refer to the divine, with metaphysical naturalism, which, as The Encyclopedia of Philosophy puts it, denies “that there exists or could exist any entities or events which lie, in principle, beyond the scope of scientific explanation.” In weighing this metaphysical (and nonempirical) claim, the scholar who is religious or spiritual may wish to point out that metaphysical naturalism is an assertion of philosophic opinion rather than a statement of fact, scientific or otherwise, and is not subject to scientific proof or disproof. To make this limited point, the scholar can simply draw attention to the unwarranted move from methodology to ontology. He or she may, however, want to go further and offer an alternative metaphysical view, one derived from religious or spiritual commitments.

Once again, the sharing of religious or spiritual commitment may be appropriate so long as the critic recognizes that the religiously or spiritually based metaphysical view has no more standing as a scientifically valid argument than does metaphysical naturalism. And as with moral arguments, disputes regarding one’s metaphysical commitments may not be resolvable by argument, but in academe a good, spirited argument should nonetheless be expected.

More complicated, perhaps, are claims about human being. Various social sciences, such as psychology and economics, may base their theoretical edifices on assumptions about “human nature” and “human flourishing” that are contested by various religious and spiritual traditions. These assumptions—for example, the model of the rational, self-interest-maximizing human being that underlies many economic models, or the assumption underlying some theories of psychology that psychological health consists largely in individual self-development and self-expression—may be “givens” within their respective fields (or subfields). Even so, such assumptions are rarely empirical generalizations or tested propositions. Rather, they reflect a particular set of values that may not be shared by religious or spiritual traditions.

To argue about morality or metaphysics takes the scholar outside his or her disciplinary community into new encompassing communities with their own standards and practices germane to such arguments. To advance fundamental claims about human nature may also entail engagement with moral or metaphysical questions. But much that is asserted about human being may give rise to legitimate inferences that can be tested empirically or subjected to rational scrutiny or both. When generalizations lead to testable hypotheses or rational inquiry, I see no reason why the appropriate disciplinary standards and practices should not apply, whether the generalizations arise from religious or spiritual claims or from, say, interpretive schemes that arose out of the European and American Enlightenments.

 

There is much more that could be said about explicitly religious claims in each of these three domains. Let me illustrate how I would take the analysis further with some additional remarks about moral or ethical claims within a disciplinary context.

Moral claims draw on moral intuitions, maxims, and practices of moral reasoning shared within the larger society of which the academic discipline is but a part. At issue, then, is not whether religious or spiritual claims are appropriate in terms of the standards and practices of the disciplinary community, but rather whether religious or spiritual claims meet the standards appropriate to the broadly encompassing community of moral inquirers in which disciplinary professionals have no particularly privileged position.

Not that moral questions have nothing to do with disciplinary standards or practices. When the moral reasoning concerns disciplinary practices (e.g., the morality of certain forms of research), the disciplinary domain will provide the context and grounds on which the moral reasoning proceeds. Consider again, for example, debates over the morality of embryonic stem-cell research. A microbiologist can specify when an embryo is likely to first experience sensations, but cannot, on the basis of his or her specialized knowledge, specify that an embryo is (or is not) a human being with full moral status when it has reached this stage. Others have as much right to argue this point as the biologist, and whatever view on this issue is advanced, it must be argued for, keeping in mind the (to be sure, contested) good, standards, and practices of the larger community (or communities) of moral inquirers.

A critic may, of course, attempt to limit a discipline’s membership in this encompassing community of moral inquirers. He may assert, for example, that the generally accepted disciplinary standards distinguish between facts and values and bar certain questions of value (that is, moral questions) from properly formed disciplinary arguments. Much could be said about this, starting with the question whether the fact-value distinction is even cogent given what we now know about situated human reasoning. Be that as it may, moral disagreements tend to arise in a discipline when it considers its own practices or when a scholar expresses a judgment about the behavior or moral beliefs of those being studied (say, in history or anthropology). When making such judgments, the scholar will be expected by colleagues to be sensitive to the considerations and associated literature within each discipline regarding cross-cultural understanding and judgments—a set of standards and considerations that developed over the years, in no small part, through engagement with the larger community of moral inquirers. And in any case, at this point we’re discussing the cogency, soundness, and appropriateness of particular moral judgments, not whether religious or spiritual claims have any role to play. Arguments are resolved on the merits.

Some critics will also insist that in our liberal pluralistic society, which includes higher education, religious or spiritual moral claims have no place unless they can be reframed in secular terms. Many scholars have written advocating or attacking this “preclusion” in the public political arena. It is beyond my scope to rehearse all the considerations raised in this debate. It may, however, be worth asking whether a (highly contested) ban against explicitly religious or spiritual claims in public political debate should be applied to the academy.

With students, however, other considerations come into play. We faculty have an unequal relationship with our students. As long as we do not blatantly violate professional proprieties, we enjoy broad authority in the classroom over what is taken to be sound opinion within our discipline. We are, of course, ultimately answerable to our disciplinary peers for our scholarly opinions. But unlike published scholarship that usually undergoes peer review, teaching after the probationary years undergoes little scrutiny so long as we don’t “go around the bend.”

The unequal power balance and the occasional, but sometimes dramatic, intrusion of irrelevant religious or political issues into classroom teaching can understandably spark protest and lend color to calls for outside regulation. So, as a matter of prudence as well as of professional ethics, faculty need to be circumspect and thoughtful when introducing religious considerations that our discipline may consider beyond its purview, or even a transgression of its standards and practices.

In several seminars around the country, I have asked faculty whether any of them mention their religious convictions in class and, if so, why. It turns out that many religious faculty do self-disclose for pedagogical reasons. Here’s one example from a recent seminar. A political science professor explained to her colleagues that when introducing a discussion of public policy regarding welfare, she always starts by confessing to her students that attending Catholic Mass is her secret vice, and so, not surprisingly, her Catholicism informs her views on appropriate policy.

Asked why she does this, she explained: First, it lets her students know where she is coming from, and she thinks that is crucial information when dealing with welfare policy. Second, by disclosing her own religiously informed perspective, she invites students to reflect on their own commitments and how these commitments influence their views. She went on to add that self-disclosure of this sort requires a deft hand, perhaps a bit of disarming humor, and, most important of all, a willingness to entertain and encourage alternative perspectives.

In the subsequent conversation, colleagues from several different disciplines helped the group think through the strategy and its pitfalls. Faculty in the natural sciences, for example, were less sure that the strategy would be useful in their classes. But in one seminar I attended, a chemist shared how she introduces the concept of “green chemistry” in her physical chemistry classes. Her pedagogical goal is to help her students understand how advances in chemistry help chemists (and the prospective chemists in her class) promote innovative chemical technologies that reduce or eliminate the use or generation of hazardous substances in the design, manufacture, and use of chemical products. She would disclose some of her personal religious commitment to the environment to get the students to engage with the question of the morally responsible use of chemical knowledge.

A faculty member at another school, a physicist, said that he let his students know in passing that he was an evangelical Christian. He didn’t normally elaborate on this, although he was willing to discuss it with students outside of class. But he wanted all his students to know that at least one evangelical did not find his deep commitment to science and to the scientific method in conflict with his equally deep religious convictions. This generated a great deal of discussion in the seminar about whether the two actually were compatible. I suspect that not many minds were changed, but by the end most of the faculty had a more nuanced understanding of the issues.

Of course, self-disclosure can sometimes inhibit pedagogical goals rather than advance them. In these seminars, it doesn’t take long for stories to surface about students who felt uncomfortable or even coerced by this approach. The chemist who mentioned “green chemistry” shared that she had had a few students who complained in their course evaluations that she had brought what they termed “liberal politics” into a science class. Others took issue with her specific religious perspective.

Teachers who employ self-disclosure also commonly share incidents where it is the students who offend other students once the ban on religious discourse is broached. I have heard stories of students attempting to proselytize other students who said in class discussion that they weren’t religious, of students who witnessed in class to their faith, and of students who made disparaging remarks about certain religious traditions or practices, not realizing, or perhaps not caring, that other students in the classroom adhered to those traditions.

 

The subtitle of this article includes the words “time to talk.” I have suggested why I think it is now time to talk about this issue. I want to conclude with the suggestion that the best way to come to grips with the appropriateness of, and limits on, the expression of belief in scholarship and teaching is to take time to talk. For a topic that cannot be easily resolved but needs to be better understood, I recommend that faculty take time to converse with one another.

I recommend conversation, because a proper conversation aims at communication and understanding, not (or at least not necessarily) at agreement or resolution. This is conversation in a sense analogous to the one intended by the philosopher Michael Oakeshott. Participants in a conversation, Oakeshott suggests, are engaged in activity that may include inquiry, argument, and debate, but that ultimately aims at something else. In a conversation, “there is no ‘truth’ to be discovered, no proposition to be proved, no conclusion sought.”


Participants “are not concerned to inform, to persuade, or to refute one another, and therefore the cogency of their utterances does not depend upon their all speaking in the same idiom; they may differ without disagreeing. Of course, a conversation may have passages of argument and a speaker is not forbidden to be demonstrative; but reasoning is neither sovereign nor alone, and the conversation itself does not compose an argument.” Arguments may arise in conversations, but the goal of conversation is not to win (or avoid losing) an argument, but rather better to understand (and perhaps even empathize with) the other’s position even as one explains (and perhaps even hopes for empathy regarding) one’s own position.

Oakeshott’s distinction between a conversation and an argument is important. As the historian Martin Marty points out, in arguments contenders claim to know answers. They debate with the intent to convince or overthrow those of other opinion. In a conversation the interlocutors have questions. They converse with the intent of deepening empathy and broadening understanding. Again in Oakeshott’s words, conversation is “not a contest where a winner gets a prize.” Rather, it is “an unrehearsed intellectual adventure.” For these “unrehearsed intellectual adventures” to succeed, conversation partners need to understand the conversational rules that distinguish conversations from debates or arguments.

First, in conversations everyone is more or less equal. Each person in a conversation has the right to his or her say. Each has a right to call for reflection, to pose questions, to try to steer the conversation in this direction or that. In a conversation there should be little hierarchy and no arbiter of who’s right and who’s wrong.

Second, conversations are situated. They involve specific individuals in distinct contexts seeking to understand each other. The situated, contingent nature of conversation—both in who converses and what they converse about—often takes the form of telling stories or relating anecdotes. We offer the reasons that incline others or ourselves this way or that, and explain why. We tell what was intended, what actually happened, and why. We relate specific events and see meaning (or its lack) in them. Fragments of contingent narrative are the common stuff of our everyday conversations, whether at work or at play, in the academy or at home, in the privacy of our own minds or in public chitchat. Narrative plays a crucial role in self-understanding and identity, including how some of us understand ourselves as religious and how all of us—in varying ways—understand ourselves as disciplinary professionals.

Third, in conversations it is appropriate to bring up feelings as well as ideas, to share that which is subjective as well as objective—assuming that such a distinction can be easily drawn. Recall that the goal is to deepen understanding of the other as well as of one’s self. So, in true conversations, expressions of passion, aversion, or indifference have as much a role to play as claims of fact or narratives of experience. We may attempt to bracket our feelings when doing our scholarship or when teaching our students. But we need not exercise such restraint when conversing about what convicts and convinces us—or what convicts and convinces others. In fact, if we fail to include the emotional with the notional, we are likely to shortchange the understanding and social engagement that good conversation requires.

Fourth, conversationalists regularly feed back what they are hearing (or think they are hearing) from their conversational partners. This practice of interactive feedback allows listeners to test their understanding of what their conversation partner is saying, even as it gives the conversation partner feedback on whether he or she is getting through. It encourages listeners to empathize with the speaker, to use their imagination to “see” what is being said, to draw on their own experience for analogues and differences.

Fifth, conversations are richest (but also, perhaps, scariest) when a diversity of perspectives is present. For his part, Oakeshott insists that conversation is, properly speaking, “impossible in the absence of a diversity of voices: in it different universes of discourse meet, acknowledge each other and enjoy an oblique relationship which neither requires nor forecasts their being assimilated to one another.” If Oakeshott is right about this—and I think he is—then conversations on religious commitment within the academy should include faculty who come from different disciplines and who bring varied perspectives and experiences regarding religious belief and practice and the life of the mind.

In closing let me re-emphasize that conversation aims at mutual empathy and understanding, not agreement or resolution of differences. We faculty need to recognize that disciplinary and religious communities have formed us profoundly, establishing in us fundamental dispositions, background beliefs, and habits of mind that often influence us without our awareness. This is especially true in matters regarding morality, human nature, and “maximally comprehensive views of reality” (that is, metaphysics). Disciplinary and religious perspectives commonly differ in these three domains, with attendant difficulties for those of us who are both religious (or spiritual) and disciplinary professionals. The differences can also cause problems for colleagues who seek to understand us and our scholarship and teaching. The disagreements on these points are not easily settled, and in many cases the best we may hope for is greater empathy and better mutual understanding. With a topic as potentially divisive and irresolvable as religion, scholarship, and teaching, this is no mean achievement.

Mark U. Edwards, Jr., is associate dean of academic administration at HDS, having been Professor of the History of Christianity at HDS from 1987 to 1994 and president of St. Olaf College from 1994 to 2000. The essay is adapted from his book Religion on Our Campuses: A Professor’s Guide to Communities, Conflicts, and Promising Conversations, which has just been published by Palgrave Macmillan and was written under a grant from the Lilly Endowment.
