Winter 2009 issue cover


Ways of ‘Knowing’ Cancer

How can we reason about illness?

Cover illustration by Erik Sandberg. Cover design by Point Five Design.

By Mark U. Edwards, Jr.

I am living with cancer, multiple myeloma to be precise. Myeloma is said by medical authorities to be an incurable malignancy with a median survival of three years. I was diagnosed in 1996 and am, much to my amazement, still going strong.

Knowing that one has an incurable fatal malignancy with a three-year median survival focuses the attention. Why me? How long do I have? What do I think about death and dying, not in the abstract, but as a perhaps rapidly approaching personal likelihood? How does it change how I understand what I have done up to now? How might it direct what I attempt to do in whatever time I have remaining?

I mention these personal questions because they point to the existential nature of cancer. When I turn to cancer “from the inside out,” I am able to shift to speaking of my illness rather than my disease.1 This shift to the subjective and existential raises the question of how we can reasonably discuss illness as well as disease at an academic level. After all, academics are also human beings who will, someday, become fatally ill.

As human beings, we wrestle with questions regarding the uniqueness of our own life, our vulnerabilities to life’s contingencies, our propensities to good or evil, our ultimate destiny. Different fields within the academy also deal with existential experience and questions. Literature and other fine arts, for example, tackle such concerns, often with deep insight. But the natural sciences and some social sciences, with their commitment to distance or even detachment, arguably cannot satisfactorily address the dimension of existential depth such experience and questions entail. At the very least, people may feel profoundly discomforted even if we accept as “true” or “scientifically convincing” the (often reductive) explanations some disciplines offer.


The philosopher Stephen Toulmin has spent his career arguing with those scholars who contend that the most valid claims to true knowledge must be abstract, universal, axiomatic, formally deductive, and logically certain. In other words, he has challenged the assumption that the methods of Euclidean geometry should set the gold standard for reasoning and judging, interpreting and explaining. “A more balanced view,” he insists, “will allow any field of investigation to devise methods to match its problems, so that historical, clinical, and participatory disciplines are all free to go their own ways.”2 Religious traditions have a stake in defending Toulmin’s “more balanced view.”

Religious explanations are sometimes stigmatized as unscientific because they entail evaluative and purposive arguments. So religious traditions have an interest in encouraging a nuanced and sophisticated understanding of how scientific and medical models actually are constructed and evaluated. Specifically, religious traditions benefit when students and laypeople generally recognize that scientific models also have purposive and evaluative elements either built into the models themselves or underlying their creation or both. To be sure, the presence of purposive and evaluative elements should not be used to disparage the awesome accomplishments of scientific method.3 On the contrary, scientific modeling remains one of the most powerful and convincing approaches to knowing and understanding, explaining and interpreting, employed by human beings. But scientific modeling is a human activity, and reflects human values and purposes. That which is modeled—nature in the raw, as it were—may lack purpose or value or even meaning, but our scientific models do not. At the very least, then, the presence of purpose and value in scientific as well as in religious accounts should not, ipso facto, be used to disparage either one.

Religious traditions have a profound stake in defending the legitimacy of practical reasoning. From a religious perspective—and, for that matter, from a wide range of other, often humanistic perspectives already at home in the academy—case-based reasoning cannot be deemed inferior to theoretical or abstract, deductive reasoning. In the appropriate context, with the right issue or problem, case-based reasoning may be the best approach available, while in other contexts, with other issues or problems, a formally deductive approach may be more appropriate.

Religious traditions also have a profound stake in insisting that subjective experience discloses aspects of reality that require a depth and breadth of explanation and interpretation that exceeds the grasp of a scientific approach (or perhaps even discursive expression). This does not mean that science cannot explain aspects of experienced reality, only that the explanation will be incomplete.


Cancer is the (normally destructive) territorial expansion of a mutant clone cell. There are many different cancers, and each cancer has its own characteristics that distinguish it as cancer but also distinguish it from other cancers. A model for cancer, then, should encompass the models of different types of cancer. A model for cancer in general may be termed a “model of models.”4

As such, it will be abstract and general. Here’s a simplified version of such a model of models:5

Cancer can be understood as a byproduct of the evolution of complex animals (which include human beings). Cancer is one of the negative consequences, first, of complex animals’ need to regenerate their tissues, and, second, of the evolutionarily beneficial ability they have to reshuffle their genes.

Complex animals need to constantly regenerate tissue. To accomplish this necessary regeneration, animals have various stem cells that can, through various steps, produce the specialized cells that need to be replaced. Animals also have the pathways—for example, the lymphatic and vascular channels—that allow the stem cells or their offspring to get to where they are needed. Cancer is a stem cell or one of its products running amok and spreading beyond its place of origin.

Without the ability to recombine genes, there would be no complex animals, no human beings. We would not have evolved if genes were fixed and could not mutate. To be sure, our cells are remarkably good at copying our DNA accurately and at repairing or eliminating copy errors. But occasionally an error gets through. Over many millennia these occasional mutations, filtered through the sieve of natural selection, have led to the variety of forms of life we find around us. And occasionally an error—or, more likely, a succession of errors accumulating in a clone cell, its sub-clone, its sub-clone’s sub-clone, and so on—is filtered through the sieve of natural selection within the microenvironment of the body and produces a malignant cancer.

Cancer is understood as a natural process. It is part of our evolutionary legacy and operates through successive mutations and selection within the microenvironment of the body. As a natural process, cancer develops by chance, follows no plan, and cannot be characterized as good or bad, right or wrong, purposive in the teleological sense, or unnatural.


For many academics, questions of good or bad, right or wrong, teleology or purpose are observer relative.6 That is, they are human constructs and are projected by human beings onto natural entities or processes. By these lights, cancer as a natural process cannot be understood as good or bad; it is good or bad only in relation to our interests and interpretations.

Some religious traditions challenge this initial naturalistic assumption. They insist that such properties as goodness or purpose are intrinsic to things in a created cosmos. Or alternatively, that such properties may be considered observer relative if one admits that God is the observer whose constructs and projections are actions that bring about what they say.

We may debate whether nature can be described as purposive or whether questions of right or wrong, good or bad, or issues of teleology are appropriate or meaningful when applied to nature. But the study of nature is purposive. What is more, the study of nature reflects often strongly held value judgments (if only the strongly held value that values ideally have no place in scientific research!).

To illustrate how value judgments may be present but hidden, consider why some cancers are studied more than others, or why research into treatments is generally favored over research into eliminating primary causes of many cancers. Until relatively recently, cancers that plagued males received far more attention than cancers exclusively suffered by females. Did research choices reflect society’s normative assumptions about men, women, and generic humanity? We know that smoking and many modern chemicals cause cancer, but it is only recently and often reluctantly that our society has invested much in active prevention. Might it be politically and economically easier to develop treatments, when going after causes would pit researchers against lucrative commercial activities within our society—say, those who manufacture cigarettes or modern chemicals? Clearly, cancer research of certain sorts is strongly valued: governments pay enormous sums to “wage war on cancer” and pharmaceutical companies invest heavily in research in some areas rather than others, and charge suffering patients enormous sums for the fruits of successful (and even not so successful) discoveries.7 But other areas and approaches to the elimination or treatment of cancer, while still studied, tend to receive far fewer resources and far less attention.

It is therefore important to distinguish between cancer as part of nature and cancer as a field studied by human beings. Nature may be neutral; science certainly is not. Nature may have no purpose; but scientists inescapably do. Scientists may be objective in that their results can be granted a high degree of reliability and “fit” with nature, but in their choice of what to study, why to study it, and for what ends, scientists are not value-neutral, nor are the sources from which scientists receive their funding. Religious ways of knowing may differ in significant ways from academic ways of knowing, but the similarities are greater than some admit. And when it comes to issues of purpose and value, the difference is expressed at most in degree, not in kind. Faculty, students, and laypeople who are cognizant of such nuances are better able to evaluate both scientific and religious arguments, and better able to gain worthwhile insight from each for their studies and for their lives.


Even as science allows us to better understand the mechanisms underlying, say, a disease such as multiple myeloma, it does not (and it may never) allow us to know with certainty how the disease is operating and will play out in a specific myeloma case. If a bone marrow aspiration shows myeloma cells in the patient’s marrow, she has myeloma by definition. But this certain conclusion tells us little about the peculiarities of her particular myeloma and even less about the course it will take in her body over time. As the disease model becomes more sophisticated, clinicians may be better able to specify the likelihoods or probabilities of the course her disease will take. But as the paleontologist and cancer sufferer Stephen Jay Gould reminds us in a famous article, the median is not the message.8 “We now come to the crux of practice,” Gould writes,

I am not a measure of central tendency, either mean or median. I am one single human being with mesothelioma, and I want a best assessment of my own chances—for I have personal decisions to make, and my business cannot be dictated by abstract averages. I need to place myself in the most probable region of the variation based upon particulars of my own case; I must not simply assume that my personal fate will correspond to some measure of central tendency.9

Practical reasoning in specific cases must take larger circumstances into account.

In practical reasoning, practitioners must also evaluate the circumstances that surround the case. The answer to the traditional questions of “who, what, when, where, why, how, and by what means” can seriously alter the analysis of the case. To begin with, consideration of circumstances can affect the choice of paradigms and the evaluation of presumptions. The phrase “exceptional circumstances” suggests how circumstances of a certain sort—fulfilling the “unless” or “except in the case of” qualifications that accompany many presumptions or rules of thumb—can force the overturning of otherwise applicable presumptions. To take an extreme example, the paradigm of murder in the first degree, employed in the practical domain of jurisprudence, may be inapplicable given the circumstances of war.

In moral reasoning, circumstances may have a decisive bearing on how a case is handled in at least five ways.10 First, circumstances may alter the judgment of how serious a moral action might be. Second, circumstances may suggest that one paradigm be substituted for another. Third, circumstances in the form of cultural forces may inform or shape the practitioner’s judgment, and a master practitioner needs to be sensitive to such influences and willing to compensate for any inappropriate effects. Fourth, circumstances in the sense of anticipated consequences of a moral decision can affect moral reasoning; the slippery slope argument comes immediately to mind. Finally, circumstances may themselves be changed by repeated moral choice that becomes established as custom.

Analogous criteria may be formulated in each domain of practical reasoning, whether law, politics, applied economics, or medical diagnosis and treatment.

Good diagnosticians attend to the larger circumstances in which their patients find themselves. It is for this reason that they take their patients’ case history—another example of how narrative can play a helpful role in knowing and judging. An acute diagnostician—a master practitioner, in other words—will pay considerable attention to the wider circumstances since they may alter diagnosis or suggest different treatment regimens. A history of kidney problems might, for example, alter the significance the physician attributes to the amount of protein in the urine.

In practical reasoning in law, medical diagnosis, moral reasoning, and so on, authorities are often sought and cited. In law there is the authority of precedent; in medical diagnosis, the authority of scientific and clinical studies of the disease, its genesis and typical progression; and in moral reasoning, the prior opinion of experts in moral reasoning who have dealt with similar cases. Master practitioners in each of these endeavors will know these sources of expert opinion and will devote time to ongoing study intended to keep them abreast of such expert opinion.

Physicians are expected to know the medical literature in their field and to draw on it when diagnosing and treating particular cases. When it comes to moral and ethical reasoning, the demand to keep abreast of the literature and field should be no less imperative. Given the extensive past and continuing experience with practical reasoning in the various major religious traditions, the professional who deals regularly with ethical and moral quandaries can learn from the religious traditions as well.

To reason well and render an appropriate judgment, practitioners require knowledge of the relevant paradigms and models. They should have experience in construing analogies that allow them to associate, through similarities and differences, exemplary paradigms with novel particular instances. They need a keen eye for relevant detail and an appreciation for the possibly confounding influence of larger circumstances. They also, and importantly, require experienced judgment, a judgment that may be partially captured in discursive rules—but not always and rarely fully.

The judgment of master practitioners frequently includes insight that cannot be captured discursively and which relies on tacit understanding.11 This judgment draws on trained imagination and the human ability to integrate seemingly disparate parts into an intuitive grasp of the whole. It is often dialectical in its approach, integrating particulars into a provisional whole which, in turn, is analyzed into expected parts which, if found in the particular case, may confirm or disconfirm the integrative judgment. This process moves back and forth until the judgment is (relatively but still fallibly) complete.

Some scholars insist that human thought—including the reasoning and judgment displayed in medical diagnosis and moral casuistry—must in principle be reducible to discursive rules. The alternative, they suggest, is a surrender to irrationality or, perhaps worse, mysticism. Yet there is considerable empirical evidence for the contention that some knowing cannot be captured in discursive rules. Some decades ago the philosopher Michael Polanyi, who developed the concept of tacit understanding, pointed out how our senses, especially our visual senses, are able to discriminate wholes out of shifting particulars (e.g., our ability to recognize faces in changing light, angle, grooming, and makeup) without being able to specify all of the particulars on which our judgment relies. In an analogous way, Polanyi contended, we are able to achieve insight into wholes through an integrative grasp of particulars without being able to specify all the particulars or give discursive principles for how we intuitively relate them one to another.

More recently the quest to develop artificial intelligence (AI) in computers has by some of its failures suggested that humans may rely more heavily on tacit and in principle nondiscursive understanding than proponents of discursive rationality may be comfortable with. For example, after nearly 50 years of research, AI researchers have been singularly unable to get supremely logical computers to simulate even simple intelligent behavior that human beings matter-of-factly display. More specifically, it has proven extraordinarily difficult if not impossible to represent human “common sense” as even a very large set of facts and associated logical rules.

Human behavior may seem “commonsensical,” coherent, and logically consistent to other humans. But the experience of AI suggests that the consistency and coherence that human beings experience depend to a large extent on a rich understanding of content, intention, and meaning—matters not easily captured by rules or slotted nicely into a compact worldview. If these contentions are true, they have implications for religious ways of knowing. On the one hand, they may pose problems for hyperrational views of human being and doing. On the other hand, they may also pose problems for religious traditions that attempt to insist on believers adopting a logically consistent religious worldview.


Decisions about treatment are purposive, and they are neither ethically neutral nor free of significant value judgments. The academic who is also a clinician must go beyond abstraction and generalization of models to diagnose the ailments of specific human beings and beyond fallible diagnosis to make fallible judgments about what treatment regimen is best for a particular patient, not patients in general.

Decisions about treatment have an ameliorative purpose: their reasoning presupposes that treatment should make the patient’s “condition” better than it would be without treatment. To put the matter in classical terms, in Epidemics, book 1, section XI, Hippocrates writes, “As to diseases, make a habit of two things—to help, or at least to do no harm.” A similar sentiment is found in the Hippocratic Oath. So, from the outset, the academic who is also a clinician is guided by purpose and immersed in value judgments, starting with, “Will this treatment help this particular patient or at least do no harm?”

But what constitutes help? And how does one trade off between help and harm? For example, most chemotherapy has harmful side effects. How does one weigh the possible benefit from chemotherapy against the certain harm that it will cause? Practical reasoning deals with just such questions.

The patient is also immersed in purposive and value judgments. Take the example of my multiple myeloma. Although at the time I write, my physician has not recommended that I have a stem cell transplant using my own stem cells (autologous transplant), he might well do so some time in the future. If he does, I must decide whether I am willing to undergo a difficult, expensive, dangerous, and often—at least for several months to a year—debilitating procedure in order to extend my lifespan, on average an uncertain and perhaps negligible amount. Had I a close match with the stem cells of one of my relatives—unfortunately, something that I do not have—I could risk an allogeneic stem cell transplant. My stem cells would then be replaced with those of a healthy donor among my relatives, essentially exchanging my faulty immune system for their functioning one. Such transplants hold out a greater upside, if they “take,” but also have a considerably larger downside—the risk, including death, that the transplant will be rejected by my body. So how do I trade length of life for quality of life? What are my goals in the choice of treatment, and how do my values guide me?

The reasoning and trade-offs can become yet more complicated. What if the clinician and I disagree on what treatment is “best,” given my particular case, his clinical judgment, and my thoughts about quality and quantity of life? One plausible answer is that one person’s interests—in this case, probably the patient’s—should be seen as morally overriding. After all, isn’t it my life, my health, my sense of trade-off between quantity and quality of life that is at issue here?

Naturally, a patient can say “no” to a treatment, and except for cases where the patient’s judgment is thought to be crucially impaired and guardians or the courts step in to represent the patient’s interests, that “no” is final. But refusing treatment differs significantly from demanding that a certain treatment be provided.

The patient often cannot and sometimes should not be the one who decides. The patient’s wishes cannot, for example, trump a clinician’s judgment when the patient wants a treatment that the clinician thinks is inappropriate, dangerous, or even morally questionable (i.e., entails unacceptable risk to the patient or to others). Further, just because a patient wants a particular treatment does not mean that her insurance company will pay for it, even if the patient’s clinician also favors the particular treatment. There may also be limits on other resources: for example, not everyone who wants a transplant gets one, because there is a limit to available organs. And so it goes.

I now return to the distinction the medical sociologist and writer Arthur Frank makes between disease and illness. Disease talk invites the patient to think of the body as a “site” “out there.” In disease talk the patient adopts the perspective of the physician or scientist. Disease talk lives and breathes academic objectivity. Illness is the experience of living through the disease from “inside out.” It attempts to capture the lived experience of one who has the disease. It lives and breathes existential subjectivity.

Disease talk is appropriate in many contexts and can be helpful to the patient, but it has limitations. I may accept rationally that multiple myeloma is an unfortunate but natural by-product of evolutionary processes. I may find it comforting (at least at one level) that my disease developed by chance, without plan or purpose. I may wrestle, with all the practical wisdom I can command, with decisions about diagnosis and treatment. But in wrestling not with my disease but rather my illness, I personally may need more than what the academy currently offers in all its sciences, natural and social.

How, for example, do I understand and explain to myself my existence as a singular, situated, thinking being who is living with an incurable, fatal disease? How do I understand my mortality and vulnerability? How do I understand the fear that this raises? How do I live with as much good humor as I can muster with the pain, the disorientation, the fears, and the other unhappy by-products of a nasty disease and the associated nasty ways of treating it? How do I deal with uncertainty, despair, and hope? How do I work all this into the story of my life? Where does it feature in my understanding of my destiny? How do I deal with all this from the inside of lived experience, rather than from the outside of detached analysis or prescription? And how do the various approaches to reasoning and judging in higher education help or hinder me in tackling such questions?

These are questions that are deeply and ineluctably raised by every human being fortunate to live long enough to gain a certain experience and maturity and who, then, has to deal with a life-threatening disease. Traditionally, they are questions addressed by religion (and by great literature and art). I do not claim that religious traditions always address such questions well. Quite the contrary. Religious traditions can also treat cancer as a disease rather than as an illness, from the outside in rather than the inside out.

In her Illness as Metaphor, Susan Sontag calls attention both to the perverse ways religion has treated illness “from the outside in” and to the even less satisfying psychological approaches to illness from the “inside out” that have sprung up as religious answers have become less persuasive, or at least less available.12

Plague was once thought to afflict whole communities as punishment for sin. The historical anthropologist William Christian has studied such attributions in detail.13 More recently, religious traditions in the West may have put greater emphasis on individual failings, and hence put the onus for disease and its suffering on the sins of the individual. While it may be symbolically satisfying to see resonances between moral and physical failing—a linkage encouraged by religious traditions throughout the ages—our modern understanding of disease and disease process seems to me preferable—if that Hobson’s choice is all that is available.

But even with the fading of the religious worldview, the punitive view of disease has lingered in a psychological form that may be at least as perverse as its religious forebear. For a time there was much discussion of the “cancerous personality”—characterized as unemotional, inhibited, and repressed—that somehow contributed to a person’s contracting cancer.14 This can hardly be considered an advance over punitive religious views. “Ceasing to consider disease as a punishment which fits the objective moral character of the individual, making it an expression of the inner self,” as Sontag perceptively observes, “might seem less moralistic. But this view turns out to be just as, or even more, moralistic and punitive.”15 With time, and perhaps with a better understanding of, and hence less mystery surrounding, the biology of cancer, the “cancerous personality” has largely fallen out of use as a causal explanation.16 But in an ironic way, the Dr. Jekyll counterpart of the Mr. Hyde cancerous personality is pressed on cancer sufferers by the most well-meaning of people, and often nervously but heartily adopted by the cancer patients themselves, namely, that the “right attitude” can help “beat” cancer. Again Sontag captures its peculiarity well:

Moreover, there is a peculiarly modern predilection for psychological explanations of disease, as of everything else. Psychologizing seems to provide control over the experiences and events (like grave illnesses) over which people have in fact little or no control. Psychological understanding undermines the “reality” of a disease. That reality has to be explained. (It really means; or is a symbol of; or must be interpreted so.) For those who live neither with religious consolations about death nor with a sense of death (or of anything else) as natural, death is the obscene mystery, the ultimate affront, the thing that cannot be controlled. It can only be denied. A large part of the popularity and persuasiveness of psychology comes from its being a sublimated spiritualism: a secular, ostensibly scientific way of affirming the primacy of “spirit” over matter. That ineluctably material reality, disease, can be given a psychological explanation. Death itself can be considered, ultimately, a psychological phenomenon.17

Here’s Sontag getting to the heart of the problem:

Illness is interpreted as, basically, a psychological event, and people are encouraged to believe that they get sick because they (unconsciously) want to, and that they can cure themselves by the mobilization of will; that they can choose not to die of the disease. These two hypotheses are complementary. As the first seems to relieve guilt, the second reinstates it. Psychological theories of illness are a powerful means of placing the blame on the ill. Patients who are instructed that they have, unwittingly, caused their disease are also being made to feel that they have deserved it.18

It is worth pondering whether calls for “a positive attitude” and a “determination to lick this thing” become an anemic psychological substitute both for a more profound understanding of the evolutionary and biological nature of cancer and for a religious or philosophical view that more deeply addresses, even if it may not satisfactorily answer, the painful questions that fatal illness poses. The persistence and vigor of a psychological human response to disease (or even of a “sublimated spiritualism,” to employ Sontag’s term) underlines how the “rational” and “reasonable” ways of knowing that higher education offers may leave important questions not only unanswered but unaddressed.

We may outgrow or reject the religious answers of our youth. We may find implausible or even revolting the answers given by the religious traditions found in our society or elsewhere in the world, for reasons including those we just treated in the last few paragraphs. As experiencing human beings, we can ignore or bracket these questions for a time, but we can never really escape them. They arise out of life itself. They are questions of deep meaning, value, and purpose.


Seen from the “inside out,” another striking feature of discussions around cancer is how often the metaphors are martial. “I’ve been attacked by cancer.” “I’m fighting back.” “My doctors and I think I can lick this thing.” Again, Sontag captures the images well:

The controlling metaphors in descriptions of cancer are, in fact, drawn not from economics but from the language of warfare: every physician and every attentive patient is familiar with, if perhaps inured to, this military terminology. Thus, cancer cells do not simply multiply; they are “invasive.” (“Malignant tumors invade even when they grow very slowly,” as one textbook puts it.) Cancer cells “colonize” from the original tumor to far sites in the body, first setting up tiny outposts (“micrometastases”) whose presence is assumed, though they cannot be detected. Rarely are the body’s “defenses” vigorous enough to obliterate a tumor that has established its own blood supply and consists of billions of destructive cells. However “radical” the surgical intervention, however many “scans” are taken of the body landscape, most remissions are temporary; the prospects are that “tumor invasion” will continue, or that rogue cells will eventually regroup and mount a new assault on the organism.19

Cancer is not an outside invader. It is a part of ourselves, unhappily gone astray to be sure—and a part of us that we hope to change. But it is still part of us. Frank, in his At the Will of the Body, points to the peculiarity of this way of expressing things, speaking of his own wrestling with cancer:

The tumors may have been a painful part of me, they may have threatened my life, but they were still me. They were part of a body that would not function much longer unless it changed, but that body was still who I was. I could never split my body into two warring camps: the bad guy tumors opposed to the naturally healthy me. There was only one me, one body, tumors and all.20

And he continues:

Thinking of tumors as enemies and the body as a battlefield is not a gentle attitude toward oneself, and ill persons have only enough energy for gentleness. Aggression is misplaced energy. You may feel anger because of the way you are treated, but that is different from fighting yourself.21

Much the same is true for pain. “When we feel ourselves being taken over by something we do not understand, the human response is to create a mythology of what threatens us.” Frank suggests,

We turn pain into “it,” a god, an enemy to be fought. We think pain is victimizing us, either because “it” is malevolent or because we have done something to deserve its wrath. We curse it and pray for mercy from it. But pain has no face because it is not alien. It is from myself. Pain is my body signaling that something is wrong. It is the body talking to itself, not the rumblings of an external god. Dealing with pain is not war with something outside the body; it is the body coming back to itself.22

Here, it would seem, a better understanding of science might help reasoning about illness from the inside out.

Unfortunately, many health providers act and speak in ways that make it difficult for cancer patients to recognize “the body coming back to itself.” The objectifying, distancing stance that turns the body into a “site” where there is “cancer” that “must be treated” can quickly lose sight of the person who is that “site” and is experiencing all that it means to be “treated”—including the worries, the fears, and the sense of loss and change that accompany a person’s illness. At its worst in this regard, modern medicine can descend into a war to the death between doctors wielding the miraculous tools of modern science and the exquisitely evolved defenses of cells run amok. The battlefield becomes the patient, whose illness, with all its experienced meaning, is overlooked, ignored, or denied.

In what ways are we in the academy able to reason about such conflicting views and experiences without limiting our answers to a narrow range of what constitutes reasoning and rationality, objectivity and distance? Can insights from case-based practical reasoning help in this regard as well? What about insights gleaned from our religious traditions?

Society leans hard on the ill person to adopt the role of model patient, forcing the patient to strike a deal between external expectations and internal needs.

It goes without saying that religious and philosophical traditions are not the only communities to care about the distinction between disease and illness. Psychologists, nurses, doctors, experts in hospice care, and others have wrestled with how to understand and help the patient deal with not only her disease but also her illness. The issues are not easy, and proposed resolutions are frequently unstable and shifting.

Let me illustrate the complicated, shifting balancing act some forms of practical reasoning entail by examining Frank’s thesis on “the cost of appearances.” Frank identifies two types of emotional work involved in being ill. First, the patient must deal emotionally with fears, frustrations, loss, and the search for “some coherence about what it means to be ill.” But second, the patient must work emotionally to keep up the appearances expected of him or her by a society of healthy friends, co-workers, and medical staff, and by the patient’s own internalized self-identity. Frank sees this second type of emotional work as inherently problematic. “When I tried to sustain a cheerful and tidy image,” he explains,

it cost me energy, which was scarce. It also cost me opportunities to express what was happening in my life with cancer and to understand that life. Finally, my attempts at a positive image diminished my relationships with others by preventing them from sharing my experience.

“But,” Frank concludes with some asperity, “this image is all that many of those around an ill person are willing to see.”23

Frank sees society leaning rather hard on the ill person to adopt the role of model patient. And the patient, in his or her dependence, may be forced to strike a deal between external expectations and internal needs. “To be ill is to be dependent on medical staff, family, and friends,” Frank explains:

Since all these people value cheerfulness, the ill must summon up their energies to be cheerful. Denial may not be what they want or need, but it is what they perceive those around them wanting and needing. This is not the ill person’s own denial, but rather his accommodation to the denial of others. When others around you are denying what is happening to you, denying it yourself can seem like your best deal. To live among others is to make deals. We have to decide what support we need and what we must give others to get that support. Then we make our “best deal” of behavior to get what we need.24

I conclude from my own experience with severe illness that there is much truth in Frank’s description of the deal-making that we ill folks go through to satisfy the expectations of others and to secure as much care and support from providers as possible.

But in his zeal to recover what is commonly overlooked—namely, the ill person’s self-denying and enervating accommodations to the expectations of family, friends, and the medical system—Frank may have minimized analogous costs borne by family and caregivers when ill persons insist on their right to fully express fears, angers, loss, and incoherence. The patient’s unbridled self-expression may be healthy—a reaching out to ask others to understand what the patient is experiencing—but it imposes a cost on family and friends, who are also dealing with fear, anger, loss, and incoherence because of their loved one’s illness. It also puts additional demands on frequently overworked caregivers, who have multiple needy patients to deal with.

So who rightly gets to demand what of whom? How is the balance struck? How does one reason and make judgments regarding such conflicting claims on the emotions, expectations, needs, and behavior of others? How does one balance the gains here against the losses there? Again, there are no certain answers to such questions, only reasonable, shifting approximations.

Is it unreasonable to struggle with questions for which we know at the outset we’ll secure no agreed-upon answers—or, at least, no answers that will find agreement much beyond the confines of one’s community of practice? This is not just a question to be posed to religious communities. Disciplinary communities are as capable, I would submit, of asking pertinent questions for which they find no consensus on reasonable, much less “rational” answers.

More far out: Is it possible (and if so, is it appropriate) that modern scholarship limits the questions it asks in exchange for a greater likelihood of depth or precision or “certainty” secured by those limits? That may be a worthwhile trade-off, but one must then ask whether by asking questions of a certain answerable sort, modern scholarship may be shortchanging the range of true human reasonableness. What is being lost in questions bypassed because they fall short of imposed standards of rationality or even reasonableness, certainty or even reasonable certitude?

I cannot help but wonder whether self-imposed limitations may put some of us in the position of the man looking for his car keys under the lamppost because it offers the most illumination. Modern scholarship may have achieved great power and purchase by limiting what it takes to be knowledge and reason. But this can verge into dogma: there are no keys unless under the lamppost. Or the keys not found under the lamppost are not true keys. Or they are keys by courtesy designation only. Are we shortchanging ourselves?


Reasoning and judging in much of life may realistically aspire only to some sliding scale of reasonableness and varying degree of certitude.25 With few exceptions—mathematics being the exemplary case—rational certainty is an impractical, perhaps unobtainable goal. This, I submit, is borne out by any honest reflection on our experience as academics and human beings, however it may be deplored by those who wistfully aspire to deductive certainty in fields other than mathematics, formal logic, and, perhaps, theoretical physics. Claims to absolute certainty and universal method—whether in the natural or social sciences or, to be fair, in various religious traditions—may help a particular community of practice reassure and retain its members, but in the pluralistic, disjunctive world of the academy, more modest, even fallibilistic expectations may better serve the intellectual enterprise.

There is an aspect to knowing and judging in complex, contingent, and situated cases that cannot be captured in discursive rules. At least with expert practitioners, knowing and judging rely to varying degrees on trained imagination and experienced intuition. Such knowing and judging rely on tacit understanding, to employ the philosopher Michael Polanyi’s term.26 A significant component of religious understanding and commitment also arises out of the human ability to intuit integrated wholes from specific, contingent details. This ability is at play when religious inquirers interpret narrative texts; when they reason through specific cases of conscience and reach probable (but rarely certain) judgments regarding the appropriate thing to do; and when they address questions of meaning, value, and purpose from the inside out. So, religious traditions have a stake in acknowledging and defending tacit understanding. So, too, do even the “hardest” scientific disciplines, which, in practice, also have depended on intuition and nondiscursive insight to surface some of their most fruitful discoveries.

Finally, to claim that religious traditions have a stake in narrative, case-based, subjective or existential, and tacit ways of knowing does not mean that the use to which religious traditions put these various ways of knowing necessarily leads to true understanding or right judgment. To apply legitimate methods to appropriate problems is crucial to sound reasoning, but it does not guarantee truth. However, once these methods are recognized as legitimate when applied to appropriate issues or problems, their use by religious traditions can no longer be employed as ipso facto grounds to disqualify religious claims as meaningful knowledge in general. That awareness may allow academics to see their own disciplines better and make them more willing to entertain different, and perhaps even religiously inflected, ways of knowing.


  1. This point is an important one, and I owe the distinction to the medical sociologist and writer Arthur W. Frank. See his At the Will of the Body: Reflections on Illness (Houghton Mifflin, 1991).
  2. Stephen Toulmin, Return to Reason (Harvard University Press, 2001), 83.
  3. On the science side of this debate, see James Robert Brown, Who Rules in Science: An Opinionated Guide to the Wars (Harvard University Press, 2001). For an analogous argument from the side of liberal Christianity, see James M. Gustafson, An Examined Faith: The Grace of Self-Doubt (Fortress Press, 2004).
  4. In the following illustration I am finessing all sorts of arguments and distinctions in debate in the philosophy of science. For those interested in what I’m trampling on, see, among others, Ronald N. Giere, Science Without Laws (University of Chicago Press, 1999).
  5. The following draws heavily from two popular surveys: Mel Greaves, Cancer: The Evolutionary Legacy (Oxford University Press, 2001); and Robert A. Weinberg, One Renegade Cell: How Cancer Begins (Basic Books, 1998).
  6. John R. Searle, The Construction of Social Reality (Free Press, 1995); John R. Searle, Mind, Language, and Society: Philosophy in the Real World (Basic Books, 1998).
  7. For a polemic on this, see Robert N. Proctor, Cancer Wars: How Politics Shapes What We Know and Don’t Know About Cancer (Basic Books, 1995).
  8. Stephen Jay Gould, “The Median Isn’t the Message,” Discover, June 1985.
  9. Stephen Jay Gould, Full House: The Spread of Excellence From Plato to Darwin (Harmony Books, 1996), 49. Gould survived mesothelioma, labeled an “inevitably fatal form of cancer,” only to succumb to a different cancer 20 years later.
  10. For an elaboration on these five points, see Richard B. Miller, Casuistry and Modern Ethics: A Poetics of Practical Reasoning (University of Chicago Press, 1996), 22–25.
  11. Jerry H. Gill, The Tacit Mode: Michael Polanyi’s Postmodern Philosophy (State University of New York Press, 2000); Michael Polanyi, Knowing and Being (University of Chicago Press, 1962) and Personal Knowledge: Towards a Post-Critical Philosophy (University of Chicago Press, 1962); Harry Prosch, Michael Polanyi: A Critical Exposition (State University of New York Press, 1986).
  12. Susan Sontag, Illness as Metaphor and AIDS and Its Metaphors (Picador, 1977, 1989).
  13. William A. Christian, Local Religion in Sixteenth-Century Spain (Princeton University Press, 1981).
  14. Sontag, Illness as Metaphor and AIDS and Its Metaphors.
  15. Ibid., 46.
  16. “Theories that diseases are caused by mental states and can be cured by will power are always an index of how much is not understood about the physical terrain of a disease,” Ibid., 55.
  17. Ibid., 55–56.
  18. Ibid., 57.
  19. Ibid., 64–65.
  20. Frank, At the Will of the Body, 84.
  21. Ibid., 85.
  22. Ibid., 31.
  23. Ibid., 67.
  24. Ibid., 67–68.
  25. See, especially, Toulmin, Return to Reason.
  26. Polanyi, Knowing and Being, and Polanyi, Personal Knowledge.

Mark U. Edwards, Jr., former president of St. Olaf College, is senior adviser to the Dean at Harvard Divinity School. He is author of Religion on Our Campuses: A Professor’s Guide to Communities, Conflicts, and Promising Conversations (Palgrave Macmillan, 2006). This essay is from “Having a Stake, Making a Contribution: Religious Perspectives in American Higher Education,” a project he is working on under a grant from the Lilly Foundation.
