First, Do No Harm!
Medical Ethics and Moral Education
Howard B. Radest
1
In the past three years, I have been working intensively with a group of doctors, nurses, social workers, hospital administrators, clergy and lawyers in developing a program on medical ethics at our two local hospitals. In reflecting on what went on, I have come to realize not only that my life-long interest in moral education has been enriched by exploring what it means to do medical ethics, but that my assumptions about both teaching ethics and doing medical ethics have changed. Once upon a time, I believed that medical ethics was simply an “applied” field where ethical theories and principles would be used for analyzing, judging, and deciding the moral questions raised in clinical practice and in shaping health care policy. For example, in designing a year-long “orientation” seminar as a preliminary to establishing the hospital medical ethics committee, I began with ethical theory and, moreover, the participants seemed to expect me to. Theory and principle came first. Application was merely problem-solving.
Pretty soon, I realized that there was an expectation of concreteness among the participants that had not appeared in the same way in my other experiences in the classroom. Of course, just about all students want to know the “cash value” of what they are learning. But what I was finding was not simply an echo of the cry for “relevance” that we used to hear in the ’60s. My new colleagues were polite enough, to be sure. But in a deeply felt way they were asking me to demonstrate how moral principle connects realistically in the examining room and at the hospital bed. More generally, I learned that they perceived through a lens of tangibility and thought with ideas embedded in the specific situation. Typically, this was illustrated by the way our discussions inevitably developed toward the questions, “But what shall I do?” and “How will that be different as a result of paying attention to ethics?” A monthly workshop for hospital staff members moved in a similar direction. At first, I put this down to the natural impatience of very busy and over-worked people. But, in looking back, I realize that more was going on than a disharmony of styles, a gulf between my impatience with this urgency to get to the “bottom line” in the way of practical guidelines and the participants’ impatience with a philosopher’s habit of always finding another question to be explored. Behind the question, “What shall I do?” was a well-grounded passion for the reality of the event, a respect for the unique quality of person and situation and a realization of what was at stake in the medical situation. The notion of the derivation of “practice” from “theory” simply didn’t catch what was going on.
In short, I no longer think medical ethics is simply an applied field even if much of the clinical literature talks as if that were the case. As it were, this view of the matter rests on a cultural presumption, a “self-evident” division of labor where ethics specialists, like other specialists, are designated by role and training to bring their “expertise” to bear on the medical situation. Of course, the opposite view, that anyone can do ethics, is also a cultural presumption and defending against it may help account for the insistence on ethics as a specialty. Thus, in the university class on medical ethics that I teach, I used to begin with a “quick and dirty” review of Western ethical inquiry and ethical inquirers, e.g. Aristotle, St. Thomas, Kant, Mill, Moore, Ross, and Rawls. Of course, I made sure to note that these were all “middle-class white males” and that there was much to be learned from feminism, from recent ethicists like Carol Gilligan and Nel Noddings, as well as from Eastern traditions like Buddhism. I used to try to get students to apply ethical theories like “deontology,” “utilitarianism,” and “intuitionism” to the cases we were studying. Similarly, I would try to encourage identifying the particular moral principles that were exhibited in our responses to the issues posed by cases, e.g. “non-maleficence,” “benevolence,” “utility,” and “justice.” The cases we explored surely presented instances where the ideas we talk about in ethical theory were at work. Bringing these ideas to the surface, then, permitted critical analysis of our assumptions and exposed differences between actual and pseudo-conflicts. Certainly this retrospective activity helped to unblock many a dead-ended argument.
But, more and more often, the “facts” of the case would take over and reference to theories and principles came to feel a bit too precious. To be sure, theory and principle are latent in every medical ethics situation, e.g. the conflict between “justice” and “utility” in coming to grips with triage and rationing or with setting national health priorities, the puzzles of non-maleficence when “harm,” as in death and dying, becomes an ambivalent term. But, I am convinced that the cases serve as much to develop ethical theory and principle as the latter serve to illuminate the moral choices we have to make. The process is at least reciprocal, revealing what John Rawls calls “reflective equilibrium.”
More interesting, however, is the fact that medical ethics forces attention to the narrative character of doing ethics as well as to the passionate responses that serious entry into the moral domain evokes. In a sense, we haven’t seen too much of that way of doing ethics since Plato embedded it in the Socratic or dramatic dialogues. This, in turn, raises issues of objectivity and perspective in a new way and points to an inescapable dialectic between ego and event in the effort to arrive at moral judgment and decision. It asks that we take another look at what it really means to be a moral “subject,” a moral “actor,” a moral “agent,” particularly in situations that are inherently ambiguous and yet that do not permit us to evade closure. Finally, it poses questions about the source, meaning, status and ultimate locus of moral principle. So it is that I want to explore how medical ethics can help us respond to the Socratic question: Can ethics be taught?
Questions of medical ethics live within actual cases. These questions are both urgent and tangible and, above all, come at us charged with emotion. The passions of the patient are shaped by pain and anxiety. The passions of the “care-givers” are masked by the demeanor of professionalism. At the same time, it is worth pondering the effects of this double life on those forced by role expectations to live it. Illustrating that puzzlement of role, one of the students in my medical ethics class, a nurse, wrote of her experience as a newly graduated nurse in the ICU (intensive care unit). She described Mike, a 17-year-old “beautiful, blond, blue-eyed kid” who was injured in a diving accident. Mike was unable to move or to breathe without a ventilator. The nurse wrote, “Poor Mike–he couldn’t even kill himself.” Another young patient, Kenny, “only lived a month or so. I remember thinking, he was lucky.” A third, Steve, gets a different response, “He is mean. No one likes Steve. Time passes. Steve dies after a year or so. That’s a blessing, I think to myself.” Why, I asked, did she have to “think it to herself,” and was it permissible to think it aloud with others? Of course, the hospital cafeteria and the nurses’ station allow for an exchange of attitudes and feelings but the exchange suffers from the inevitable limitations of hierarchy, of gossip, and of anecdotalism. Because there’s always so much to do and interruptions are inevitable, the fragmentation of stolen moments is inevitable too. As became evident in class discussion, thinking such thoughts to oneself was not only what was expected of a “professional” but served, too, as a way of avoiding self-revelation and as a means of defense and self-protection. The more experienced nurses modeled the role; the novices quickly caught on. All too soon and for all too many, however, the “role” becomes the reality. The move from professional stance to troubled indifference is easily made. The outcome is a certain superficial hardness and, eventually for many, “burn-out.”
In the presence of the suffering and dying patient, the saddened family, the conflicted doctor and nurse cannot avoid judgment and decision and yet find it hard to admit that they live through their cases in “fear and trembling.” I recall, for example, a nurse reporting at an ethics committee discussion on an ALS patient (Amyotrophic Lateral Sclerosis, “Lou Gehrig’s disease”). There is no cure, although a patient can survive for months, all the while deteriorating and suffering before finally dying. The doctors and nurses continued their efforts to postpone death in response to family wishes, to legal imperatives and to the mores of our medical culture. At the same time, everyone was aware of the futility of what was being done and regretted that “extraordinary” emergency measures had been taken at the outset that turned out to be medically pointless and very costly. In particular, I was struck by her description of the frustrations and angers of the nurses in the situation and most of all by her own tone of commitment and despair. As she spoke, she would catch herself. I could almost see past lessons about “professionalism” flash through her mind as she tried to escape herself through the use of sanitized language and clinical description. But it didn’t work. The case would not let her go. I considered my own responses to her story and the responses of those around the table. Yes, it was possible to see the situation through Kantian eyes and so to the puzzles of “dignity” and “rationality” under these conditions. And it was just as possible to see the situation through utilitarian eyes and so to the effort to assess the balance of costs and benefits for the patient, the family, the care-givers, the hospital and the community.
But, much more was going on morally both in the situation and among us than could be accounted for or understood using principles like “do no harm” and “autonomy” and “beneficence.” I’m not yet sure what this “much more” contains and I’m not ready to dismiss moral principles as mere word games. But I kept asking myself, how does it all fit together–or does it?
Of course, no case stands by itself. We build on the record–consult case books, for example–and we try to establish enough emotional distance to permit judgment. But, at the same time, we are sensitive to the unique features of each instance. In fact, it really isn’t an “instance” at all to those living through it. Each case is rich with its own peculiar connections. Religion, politics, law, science and technology inevitably enter its domain but so do family history, attitudes toward money and problems of authority. A congeries of professional and interpersonal relationships shapes the scene as do extended and sometimes hidden ones outside the hospital room or the doctor’s office. These connections remind us over and over again of the existential presentness of each case, of the themes of personhood and choice, of being born and reared, of living and of dying, of what we really value and do not value. The “case” brings together the biography of the patient, the knowledge and art of the practitioner and the needs and puzzles of the human condition. The case tells a story and is accessible because it reminds us of the “human condition” and inaccessible because it evokes our inwardness, our subjectivity. In short, doing medical ethics keeps “moral space open” in situations where anxiety leads to reductive strategies, i.e. to seeing the case only as an instance.
Because we are always tempted by the alibi of time, we need to struggle against ourselves for the chance to really listen to the story and to reflect on it. A careful listener describes one doctor’s responses, “With all the personal and institutional pressures of medical practice…he suggested, it was important to have a place to go (e.g. an ethics committee) for that kind of thinking…it allowed him to feel more confident or more responsible about the decisions taken.” At the same time and because of the clinical situation, doing medical ethics avoids the abstractness and the moralism that all too often characterize moral education. To get a grip on this double opportunity, however, I have to take a brief side trip, as it were, in order to look at what the teaching and learning of ethics ordinarily means these days.
2
It’s easy to teach and learn to talk moral language; it’s not so easy to teach and learn what it means; it’s even tougher to teach and learn how to make moral judgments; and, finally, it’s hardest of all to teach and learn how to do things morally. Given that “talk is cheap,” as we say, and action costly, it’s not surprising that we are likely to talk one way and act another. It’s not that we’re “hypocrites” but that we are often puzzled about how to make connections between idea and action. “Forgive us,” as the prayer goes, “for doing that which we know we ought not to do and for not doing that which we know we ought to do.” We’re confused by the complexity of what we find and frightened by the consequences of the moral choices we do make. Given what is at stake in the moral situation, it’s also not surprising that we try to reduce moral ideas to principles, moral judgments to rules, and moral actions to commandments, to generalize and simplify.
Unfortunately, principles, rules and commandments are built on the assumption that they don’t conflict with each other and that moral ideas can be made “clear and distinct.” Sadly, however, moral “geometry” fails us. Just when we most need to be unconflicted, things turn out to be ambiguous. So, we twist and turn to get principles, rules and commandments to fit the situations we meet or we blind ourselves to the thickness of experience in order to get principles, rules and commandments to work out. We design “dilemmas” and “case studies” and other textbook exercises, in order to simplify what cannot really be simplified. We miss the clue of the parable, the fable, the story. It’s no wonder then that, in teaching and learning morals, failure is more likely than success and frustration more likely than satisfaction.
If the “realist” in us–and we are all realists sooner or later–dismisses moral education and settles for what he or she thinks he or she can get, the outcome will merely be moral training or at best moral instruction. So we turn to a primitive pragmatism like “honesty is the best policy,” to comforting echoes of moral talk and recitation–simple or refined–of moral rules, principles and commandments. It’s tempting, too, to find someone or something to blame for our messy predicament and so we look for the demons who’ve gotten us into it. If we are worldly, we look to history or biology. If we have a theological bent, we conclude that human beings are naturally sinful. Of course, since we can’t avoid messy moral situations, we also look for gods and heroes to get us out of them.
Luckily, people are often better than their talk and do learn a usable morality, sometimes in spite of themselves. For example, most of us, most of the time and most everywhere, tell the truth and keep promises. Most of us, most of the time, try to be helpful and not hurtful and to care for each other. Of course, there are horror stories, too many of them, and there are lots of times when it isn’t clear at all what we ought to do even when we want to do the “right” thing. And there are times when being moral is just too costly–the things we value morally compete with the things we value socially or politically or economically or aesthetically–and so we find ourselves both wanting and not wanting to be moral. But however things turn out, we know that the moral situation is unavoidable. That’s why we work so hard to find excuses for doing what we know we ought not to do and why we feel shame and why we care that our children and theirs should somehow or other get to know, really know, what morals are all about. Maybe, as some of the newer biologists claim, we are as it were “hard wired” by natural selection to be “moral animals.” Or maybe, in a way, the Scottish moralists of the 18th Century were correct when they spoke of a “moral sense.” Or just maybe, the human situation as such just won’t let us avoid dealing with matters of better and worse, good and evil, right and wrong.
As we know, however, depending on moral common sense or on moral intuitions can be very risky and often disastrous. Of course, we sometimes face conflicts between good and evil but we are much more likely to be confronted by choices between one good and another or between one evil and another. Common sense is trapped by nuance. Nor can we be sure that what looks like common sense does in fact represent a moral intuition. For example, not so long ago–and still in lots of places around the world–the common sense of the matter held that blacks or women were somehow less entitled to be treated as human beings than whites or men. Moral intuition was only a name given to our biases. And teaching and learning, therefore, really meant getting others to acquire those biases. “Might makes right” has then at least a double meaning. All too often, morality is simply the exercise of power.
Since we have by no means escaped bias–of ego, place, culture, society or what have you–and never will, we cannot be sure that any present moral intuition is genuinely moral, i.e., we may merely be exhibiting moral opportunism. As a result, the moral situation is clouded by doubt. As a result, too, we tend to suppress our discomfort by denying that the situation presents a moral question at all: e.g. we reduce it to technique or economics, or psychology, etc.; or, in a typically American move, we sublimate it by converting the moral into the legal. More constructively, we also try to find our way behind our moral intuitions to reliable criteria for distinguishing the false from the genuine. Principles, rules and commandments are, then, markers along our pathways through ethics as inquiry. Taught as reductions of messy moral situations to a delusional clarity, they are misleading. Taught as moral history, they are genuinely instructive.
In general, the effort to get behind our moral intuitions takes us to two variations on the themes of “is” and “ought.” Briefly, the first of these begins with the notion that there are a finite number of moral ideas and that, once clarified, we can apply them to situations that present us with moral questions. That, in fact, has been the way we’ve tended to describe medical ethics, seeing it as one of many such “applications” like legal ethics, military ethics, and business ethics. Among the assumptions of this variation is the notion that the situation by itself is simply what it is, an event that has certain natural characteristics like location and sequence and connection. Using ideas like cause and effect, diagnosis and prognosis, events can be studied scientifically and dealt with as problems of engineering. Human events are, on this view, no different in kind from the movement of the planets or the fall of a stone. Only as we feel a discomfort, take an interest in, or are expected to make a choice do we transform the “blooming buzzing confusion” of experience and the particular order given to it by one or another scientific inquiry from what it simply is into a moral situation. It is an idea that turns the inchoate happening, happening-to-me, into an event, a happening that is organized with shape and meaning. Then, resorting to moral ideas derived by reason alone or received from community or from God, we make judgments and engage in acts we now are authorized to call moral acts, acts that eventuate in the morally better or minimize the morally worse. We move, as it were, from “ought” to “is.” The world is enhanced or completed by the attachment or application of moral intention, judgment and choice to action.
On this view, moral ideas–or more accurately, my thinking about moral ideas–may be stimulated by the situation but are never determined by it. Thus, I may learn to count on my fingers but, sooner or later, I come to the notions of number, addition and subtraction and these have a life of their own, independent of any act of counting. I am able to manipulate numbers without reference to experience at all. Following this cue, a radical distinction is made between “is” and “ought,” although what is can be the occasion for turning to what ought to be. For example, an auto accident that kills a pedestrian is simply an instance of cause and effect. We describe it in a report, incorporate it in statistical tables and study it in a laboratory. When we assign it a legal and economic weight we attach values to the event, we evaluate. The judgment that it was a “hit and run” and that a “hit and run” is morally bad presumes that the driver has made a moral choice and that we have made a moral judgment. So it is no longer only a happening or only an accident.
In principle, given this radical separation of “is and ought,” it should be possible to arrive at moral ideas without any direct reference to actual situations at all, much as a theoretical physicist may use mathematics to speculate about possible worlds without reference to experimental data and certainly without reference to biography and history. In a sense, we play a Kantian game, asking, “What must the world be like for moral judgment to be possible?” From this point of view, it is simply an unfortunate complication that the human beings who do the moral thinking are psychological and social animals. That only confuses things and leads us to interpret moral ideas psychologically or sociologically, to mix up preferences with judgments. For example, I “like” to eat junk food but I “judge” it to be a “bad” thing to do. The first part of the sentence refers me to tastes and cultural habits, to facts; the second part refers me to context, meaning and purpose, to values. Our language, unfortunately, invites this confusion. We say something is “good to eat” when we refer to how it tastes and we also say something is “good to eat” when we refer to something that is “good for you.”
On this view of the moral situation, the validity of moral ideas arises simply from their coherence. Validity is worked out much as a Euclidean theorem is worked out by deduction from axioms using rules of procedure. Or else moral ideas are authoritative because of their source in a particular cultural narrative or from some claimed trans-cultural or trans-natural revelation. While arbitrary in principle, i.e. we can try out just about any set of axioms and rules or listen to the voice of god or satan, the fact is that some ideas stay with us because they have–or seem to have–broader relevance and usefulness; others vanish because they do not. The moral process is one of the application of theory to practice and of the coming to be and passing away of theories.
On this first view, a moral principle that didn’t work out would be a surprise in the same way that a Euclidean theorem that didn’t work out would be a surprise. We would tend to look for our mistakes or, less neutrally, for our moral failures. Typically, when we encounter what we call an immoral act, we tend to look not at the validity of the moral idea but at the conduct of the moral perpetrator. The second variation on “is and ought,” however, sees rules, principles and commandments as useful summaries of past experience, “funded experience” as John Dewey put it. Through trial and error we have arrived at rules of thumb that over the long run have relieved our moral discomfort and so have earned the name of rule or principle or commandment. We expect to try the rules again and again in future instances. While we might be disappointed, we wouldn’t be surprised if they didn’t always work out. Situations, after all, are unique and things are always changing. When encrusted by authority and time, however, we lose sight of the status of rules, principles and commandments as historical ideas and, instead, see them as eternal and unchangeable.
The second variation thus may be said to approach ethics as an instance of evolution, biological or historical, at work or as a criticism of practices. The move is made from “is” to “ought” and “ought” emerges out of successfully meeting the problems posed by experience. Success, here, only means the relief of moral discomfort–nothing more, but nothing less either. While moral heroism is possible–the “is” is complicated enough so that determinism is not required–the idea of making moral demands that simply cannot be met makes no sense at all, e.g. the command, “be ye perfect.” “Ought,” as we say, “implies can” and “can” implies that the world permits some “oughts” and not others and never permits an absolute “ought.” Moral ideas then are responses to the questions posed by what happens to us. Our attention is called to events by our discomforts and paying attention becomes the initial stance of the moral process. Typically, then, moral virtues and moral character are interpreted as moral “habits.” In order to avoid moral anecdotalism–i.e. jumping from one moral moment to another without any connection between them–this empiricist view of the moral situation relies on the psychological and social fact of memory located not just in biography and patterns of conduct but in biology, history and institutions. Thus, familiar Western ethical notions like “utility,” “duty,” “autonomy,” “justice,” etc. are, as it were, mnemonic cultural devices that serve functional purposes. We may give the virtues a special force by sacralizing them in some way in religious or political guise but we should not be deluded about their mundane location.
Moral education, whichever pathway we choose, can be undertaken in several ways. For example, we can “inculcate” rules, principles and commandments not merely as moral “talk” but as moral “virtues.” Using an appropriate mix of reward and punishment, we can, over time, develop moral “character” in ourselves and in others so that we consistently enact the virtues in the situations we meet. We come to identify people by their reputation as truthful, as having integrity, as trustworthy, as courageous, etc. by which we mean that, by and large, we can expect them to respond to the moral situation in certain ways and not in others. This works well under stable conditions where moral situations look enough like each other to allow for the effective use of moral habits. Nor should the usefulness of moral habits be minimized.
On the other hand, the situation does not stay stable and, indeed, under modern conditions the more likely fact of our lives is that instability increases rather than decreases. At that point moral habits don’t work out and we find ourselves at a loss as to how to deal with the situation. For example, what counts as a “bribe” in some places counts as a “tip” in others and as a legitimate source of “income” in still others. In other words, as we come to live in a global rather than in a local situation, standard judgments don’t apply as we think they should. Similarly, as scientific knowledge develops and technological skill increases, what counts as “living” and “dying” and therefore what counts as a moral response to these poses moral dilemmas, as in current debates over abortion and euthanasia and “life-prolonging” treatments.
Although the contemporary situation is more volatile than ever, moral instability has always been a fact of life and so moral education has always had to confront moral change. For this reason, ethics has been a story of differing efforts to equip people not simply with moral “wisdom” like virtues and moral codes like the “Ten Commandments,” but also with moral competence. Meeting instability, people need to have the capacity to respond by analyzing and re-formulating the moral approach. Typically, then, a Socratic dialogue does not begin with a list of “virtues” but with the question, “What does courage or justice or piety, etc. mean?” Sometimes, in a division of labor, certain people were designated to adapt, interpret, and adjust wisdom to events. We referred–and deferred–to authorities like priests, leaders and seers. At other times, like our own, we resort to schooling in the belief that all people are supposed to be capable of arriving at their own adaptations, interpretations and adjustments. Both, of course, have their dangers. Authority can easily deteriorate into authoritarianism and has; schooling can easily deteriorate into moral indifference and anarchy and has.
3
While there are obvious and important differences in these two approaches to “is and ought,” they have much more in common than you would think if you only listened to the arguments of philosophers or the debates of partisans. Having moved in a democratic direction, both views assign priority to the person. He or she must consult his or her conscience, exercise moral judgment, perform moral acts and accept moral responsibility. Both attend to the “formation” of conscience although adopting differing teaching strategies, and both arrive at a similar end-point in naming the moral values like justice and love that are to be esteemed. Both assign particular moral weight to personal integrity and both struggle with the moral problems of subjectivity and sincerity. Medical ethics, in so far as it exhibits this larger democratic trend, thus addresses issues of autonomy, informed consent, truth-telling, and the like. The transformation of authority appears in debates over “paternalism,” in the changing prerogatives of physicians, and in the relationships of health care professionals to each other, to patients and their families and to social and cultural institutions. As such, medical ethics is illustrative of what is happening just about everywhere in the development of ethical thought and practice. And if this were all there was to the story, then it would be sufficient to recognize that medical ethics is indeed an instance of the application of theory to practice.
Of course, a special argument might still be made for the salience of medical ethics; i.e. it touches upon everyone’s experience at some time or other, intersects everyone’s needs, transcends cultural and social boundaries, and above all does not permit the evasion of personal judgment and decision. With the evolution of base notions like autonomy and informed consent, this evasion is no longer, as it once was, a legitimately available option. Of course, some patients still adopt a passive mode and some doctors still act as if “they know best.” Yet, both of these stances, when confessed, appear under a moral cloud. They are no longer taken as the common sense of the medical situation.
A similar argument cannot be as convincingly made for business or legal or political or economic ethics. Though no doubt important, each of these can legitimately be seen as boundaried and each can be routinely undertaken by socially designated others. The active personal participation of the patient in the decision is not paralleled in politics or economics or law even in democratic societies. While medical ethics does talk of “surrogates,” the idea carries more moral and emotional weight than do ideas like “representative” or “attorney” or “agent” or even “advocate.” It is more than likely that the “surrogate” will have to make a judgment in the absence of clearly stated intentions or an unambiguous record of what the patient “really” wants done. And the patient’s intentions are far more weighty than the citizen’s. Further, even the statement of intentions at an earlier moment, as in a “living will,” while surely helpful, does not necessarily determine intention in a present moment or in actual circumstances. A “surrogate” is thus not a representative but is, quite literally, expected to be present as the other when the other cannot be present. He or she is caught in an existential dilemma for no one can really be another or even “as another” and yet he or she is expected to be. In medical situations like clinical depression or coma, however, the other is by definition not at hand to be consulted and–as with a “persistent vegetative state” or severe organic brain damage–never again will be.
In law, politics and economics, a distinction between the public and the private can be defended. In part, this arises paradoxically from the systemic and global character of these domains, which calls for agents peculiarly adapted to move beyond subjective interest and local characteristic. We notice, too, that where medicine intersects these domains–as in the debate over health care policy–its special features are muted and it starts to look very much like politics or economics and all the rest. Thus, it is understandable, as a practical matter, for someone to claim that he or she has no capacity for, or intention of, participating in the political life of the country or engaging in business or using the law. Similarly, debates over health care policy sound less like questions of medicine and more like questions of politics and economics although, even here, the life and death urgencies of medicine do not permit of an entire transformation of the personal into the systemic.
The decision not to participate, of course, does not mean that the individual is not affected by what happens in business or politics or law or economics but that he or she can choose–both practically and morally–not to be a moral actor in those domains and not even to designate a proxy. And while we might challenge the wisdom of making such a choice, we can agree that it is a feasible option and, more significantly, that under some conditions it is even a morally defensible one. It was Socrates, after all, who claimed the privilege of being a private person–deliberately choosing to be outside of and apart from–and, indeed, he insisted that only as such could he really do the moral job, be the “gadfly,” that his vocation demanded. In a similar way, Thoreau could justify turning his back on the society of persons, listening to a “different drummer” and living alone at Walden Pond. By contrast, while choosing passivity may be an available medical option, abstention is not.
At the same time, medical ethics clearly exemplifies that other feature of modern culture, the interaction of science and technology with human experience. It is fair, I think, to conclude that the transformation of medical ethics from a relatively narrow instance of professional ethics, as exemplified in the Hippocratic Oath, into a nearly boundaryless field of interactions and choices is, in no small measure, a consequence of scientific discovery and the development of technique. Choosings about life and death once inconceivable are now commonplace and, with the development of organ transplants, genetic knowledge and associated technologies, choosings already extend to areas once thought utterly untouchable by human interventions. But again, while a special argument could be made for the salience of medical ethics, similar things about science and technology could be claimed for politics, business, economics, the law, etc.
More interesting for our purposes is the fact that both historic approaches to ethics and moral education begin with what is taken as an initiating event although they disagree on its moral and sometimes even its ontological status. For the rationalist, the initiating event motivates us to go through a doorway into some other reality like the world of reason or the world of spirit. For the empiricist, the initiating event is a stimulus with the moral idea as the transitional moment in a natural continuum. The moral act then announces closure but only for the moment. Of greater interest to the empiricist, however, is some kind of inductive generalization where the initiating event is transcended and transformed into a datum. Closure and its consequences are thus instrumental in teaching us how to respond to future initiating events.
Unfortunately, the moral clue contained in the notion of an initiating event has tended to be ignored. The real interest of rationalist and empiricist alike is in arriving at some kind of generalization which, whatever the epistemological stance, serves as the major premise in a future deductive procedure. This moves us very quickly away from what is happening and to whom it is happening in the present. The event pretty much keeps its status only as an “initiating” event. Attention is really focused on past and future, i.e., on retrospectively giving meaning to what was not understood at the time it was happening or prospectively giving meaning to what is yet to come. Hence, we move to interpretations and predictions. Certainly this is a useful and adventurous move, giving us entry to modern mathematics and the natural sciences. Nor is the role of interpretation and prediction to be minimized in doing ethics. After all, the notion of moral character may be understood as both an interpretation of biography and as a prediction of moral conduct.
Yet, it is the event that not only raises the questions, but that forecasts the development of what is to follow, and prescribes the field within which answers, if any, will be found. In other words, the event taken seriously is much more than a starting point, a motivation, or a doorway. For ethics it remains a presence that cannot be ignored, that cannot be transformed into a datum and that cannot be put to one side. Yet, this is precisely what we tend to do in moral education. The moral situation is always exigent but we act as if it were not. In the name of method, we are deliberately forgetful.
This leads pretty directly to the special contribution that medical ethics can make to moral education. The most striking fact of medical ethics is that it comes to us in the form of “cases,” actual situations where something real is at stake, where time available is never indefinitely prolonged and where the participants are genuinely unsure of what to do. But even as we proceed to identify the issues, analyze their meaning and consequences, compare and contrast them with what has gone before and arrive at preferred outcomes, the case has an imperious present. It stays with us. At the bedside, the patient does not let us forget his or her suffering and need. The family’s voice is never really stilled and where there is no family we, quite literally, feel its absence. The doctor and nurse are impelled to a continuing focus on the person with whom they are engaged. I might even suggest that calling what is going on the “case” misnames the medical situation in an attempt to distance it, to emphasize the “science” and not the “art” of healing, to have it represent and not be.
4
The medical situation is and is felt as an existential drama. Of course, it exhibits theory and principle and benefits from empirical generalization. We can talk about these usefully and helpfully. From a moral point of view, however, the medical situation presents us with a third way of doing ethics in which “is and ought” are inextricably interwoven in and through a “lived experience.” To be sure, “is and ought” serve us as analytic tools–ways of telling us what is going on under certain conditions and for certain purposes–but fail us when transformed into epistemological and ontological categories. In fact, just as we think to apply an “ought” to an “is,” the scene intrudes with its tangibility. And just as we think to elicit an “ought” from an “is,” the scene throws us back into the situation. Even this is too abstract and passive a way of talking to catch what is going on. The scene is “had”–and not simply observed, described, and understood.
Above all, the drama is enacted by people with “proper names.” While using language that seems to describe and prescribe in the third person–e.g., the physician, the diagnosis, the prognosis–the members of the scene are in fact acting on their own behalf. To be sure, responsibility may be shared, as in the development of medical “teams” or in consultations with institutional review boards, utilization review boards and ethics committees, but it cannot be successfully transferred to some non-personal entity like the “medical community” or the “hospital.” Of course, we try to do so in self-defense. But the joys of cure and the pains of suffering are located in actual subjects at the bedside. Even standard treatment practices are ultimately focused through the choices, judgment and acts and also through the feelings, hunches and responses of named individuals. For example, I think that it is because of this existential quality of the “case,” and not simply because of institutional stupidity, economic advantage or violated tradition, that so-called “third party payers”–tax based or private–represent a felt intrusion and evoke passionate discomfort.
In an existential drama, past and future are subordinated to the expressed and expressive present. Time, in other words, is had both as duration and as simultaneity. It’s all there and yet it passes before you. The scene is framed not simply by setting and stage but by having, as Aristotle put it with seductive simplicity, a beginning, middle and end. Of course, the suspense of the ending, the dramatic surprise, must be awaited. Yet, retrospectively, it was forecast by the beginning, present in the beginning. Waiting, thinking back, living through all happen at once. So, for example, the drama does its work, “evokes pity and terror,” just because it is possible to say, as we can of Oedipus or Hamlet, “character is destiny.” So, too, we attend to the performance of a drama over and over again: we know how it will all come out, and yet still feel the surprise, still respond with appreciation, still hesitate. “Will it really turn out as it did before?” Indeed, neither we nor the drama are ever the “same.” This interplay of time, of non-predictability and surprise, of repeated non-repeatability characterizes the medical situation. As I suggested earlier, no “case” is an “instance” except to the stranger. But in medical ethics, probably in ethics generally, there are, finally, no strangers who remain only strangers. We are drawn in, caught by the fact that medical ethics has its roots in the existential predicament of persons as such. Where, as in medical research, the role of the stranger is institutionalized, we still find the existential drama tracing its course, and this becomes obvious where urgency generates conflict between therapy and inquiry–say in dealing with epidemic or with diseases like AIDS.
The usual features of drama are present: a story line, conflict, tension, resolution. The players are caught up in the situation sometimes against their will. Passions are at work and part of the tension arises just from the fact that the rules call for passion to be suppressed in the name of professionalism and objectivity. At that point we encounter the drama of deception and of unmasking. The roles we play are both real and distanced, i.e., we are not playing roles in some scripted drama but becoming the roles in an existential drama. If the gap between playing and being grows too wide, doing ethics becomes an abstract exercise and, to that extent, unconvincing and morally irrelevant. If the gap entirely disappears, doing ethics becomes impossible because it is no longer possible to use idea and language to cross the boundaries from one event to another, to enter the scene as a stranger and leave as an actor.
As with any drama, it is not only the players in the scene whose passions are expressed and whose lives are engaged. In one sense, to be sure, the rest of us are off to one side. We read the case report, talk with doctor and nurse, listen to commentary. We see ourselves as the audience at a performance; we see the situation played out before us. Yet, we are a strange kind of audience, more like a community than like a gathering of observers. In fact, we too are transformed and come to look like the chorus in classical drama. We reflect, respond, comment. The medical situation invites us not simply to sympathize, to feel pain for the pain of another, and not simply to empathize, to experience the pain of another as if I were living through the situation when, in fact, I am not.
Ultimately, my ego is engaged but, at the same time, my ego yields itself up. I am drawn, willingly or unwillingly, to feel another’s pain as my own, to become the other. It is for this reason that there are, ultimately, no “strangers” in the medical situation. I recall, for example, reading to my class Timothy Quill’s essay on the death of his patient, Diane. Opening with the diagnosis of leukemia and the prognosis of inevitable death, Quill talks about his actions, feelings and doubts as he provides her with enough barbiturates for suicide and with instruction on how to use them successfully. He has no illusions and yet, like his patient, he hopelessly hopes. But there “are no miracles.” He then pictures her last days, her last moments.
Diane’s immediate future held what she feared the most–increasing discomfort, dependence and hard choices between pain and sedation. She called up her closest friends and asked them to come over to say goodbye….As we had agreed, she let me know as well….Two days later her husband called to say that Diane had died. She had said her final goodbyes to her husband and son that morning and asked them to leave her alone for an hour. After an hour they found her on the couch, lying very still and covered by her favorite shawl. There was no sign of struggle. She seemed to be at peace….
I must have read Quill’s piece a hundred times before. Yet, in the classroom, I once again felt myself transported to the bedside, saw the shawl spread over her body, felt Diane at peace, Quill’s anxiousness and relief, the silent sadness of husband and son and friends. For a moment, I became these. My voice broke and I was moved to tears. My students were very quiet, were caught each in his or her own way and yet all of us were together, disparate and one. For some reason, at that moment, I thought of the Buddha emptying himself of ego, yet renouncing nirvana and willingly taking on as his own the suffering of the other. For that moment, I understood. Afterward, the members of the class were very still and then, as if shaking themselves free, began slowly to describe their feelings and responses, their own moves away from being the stranger. Only after some while did the stranger re-appear, did the issues and questions, the ordinary discourse of the classroom, re-appear.
Inevitably, the existential drama includes its ghosts. As a witness I am reminded that I will suffer and die, and as a participant, I am drawn into the scene. At the same time, much as I become the suffering other, I cannot help but be my fortunate self, too. Right now, I know that it is not me that is dying. I feel relief that it is not me and yet not a little guilt at having the feeling. I do not envy the friend, the family member, the doctor or nurse who will turn from the hospital bed and go home in sadness and pain. I am neither the patient nor the lover. These realizations are ringed with judgment on myself, even shame at my escape.
As in any drama, I find myself, then, many persons all at once, both inside and distanced from the scene all at once. I know that unbroken remembering would make life unlivable. But I know, too, that in daily living with its busyness and distraction, I forget too much too easily. So, I realize that the moral problem is not only one of good and evil, not only one of choices between goods or between evils, but one of a blindness that arises as a defense against the painfulness of judgment and choosing. I realize, too, that blindness takes many forms: moral abstraction on one side–the escape to theory–and moral sentimentalism on the other–the refusal to judge. And I realize, finally, that the medical situation, just because of its nature, does not permit us to be blind. Every time that I try to run away to blindness, to re-deceive myself, the situation calls me back, forces me back.
Perspective now arises as a dramatic fact and becomes a reflective activity within the scene. I am not allowed to remain apart in order to know clearly, to be uninvolved in order to judge fairly. I have learned in an unavoidable way that in the moral situation, remaining aside is a form of sin. But, because the drama invites me to be, alternatively, many players and the chorus too, I find objectivity without losing the subject. I am enabled to judge and decide from within the committed community and as the particular person that I am.
The “initiating situation” that is the common entry into the moral discourse of our typical pathways to the doing and learning of ethics is really misnamed. More accurately, it is the permanently presented situation. To act as if it can go away, sometimes in the name of clarity to expect it to go away, is to turn doing ethics into an empty exercise. In no small measure, that is why the gulf between talk and act is hardly ever bridged, why so much of what we do under the guise of moral education does not take. Medical ethics, finally, reveals that doing ethics is a form of personal criticism, perhaps even aesthetic criticism, rather than an application of theory to practice or a way of gathering data for empirical generalization and statistical prediction.
For example, as a member of an ethics committee reviewing cases, it is necessary to look for fully dimensioned persons and for the story that those persons tell. But it is not enough to catch the story of the other. I, too, have my story, and it is woven into the story of the other. As I listen, then, I look for threads of development and connection not simply between those ordinarily playing out the medical roles of doctor and patient but between them and myself as co-inhabitants of the scene. Unspoken, but no less real, is a sense of fittingness and balance. For example, I hear not only the words but the emphasis, the appropriateness, the accent and stress, the shift from lead to support and back again. I am, in short, attentive and not simply diagnostic or analytic, present to and not distant from. I conclude: the teaching and learning of ethics have indeed found their model.
© 1997 by the North American Committee for Humanism (NACH) All rights reserved, including the right to reproduce this book, or portions thereof in any form, including electronic media, except for the inclusion of brief quotations in a review.
ISSN 1058-5966