Moral Mistakes, Lapses in Judgment, and Fuck-ups: A Taxonomy of Ethical Errors
Steven Gimbel
Gettysburg College
sgimbel@gettysburg.edu
Stephen Stern
Gettysburg College
sstern@gettysburg.edu
Abstract: Most approaches to metaethics have focused on how to make the right moral decisions, that is, how we should live and what we should do given that we live lives connected to others who are affected by our decisions. But there is value in taking the converse approach, that is, looking at the range of ways in which human actions can go wrong. Humans commit all sorts of moral mistakes, so it would be helpful to possess a taxonomy of moral errors establishing coherent families, phyla, and species of infelicities, allowing the bases of the miscues to be made clear.
Keywords: metaethics, ethics, morality, logic, reasoning
Introduction
A colleague proudly declared that he was boycotting the fast-food chain Chick-Fil-A because of their opposition to LGBTQ+ rights. When reminded of a recent visit he had admitted to making, his response was that in patronizing the franchise, ‘I made a mistake.’ This claim seemed problematic. The colleague did not turn into the wrong driveway believing he was going to Burger King next door. No, he had chosen where to go, what to order, and paid for it, thereby successfully engaging in a complex cognitive task intending to procure a specific chicken sandwich that he did, in fact, get and eat. Further, his moral beliefs concerning the act remained unrevised before, during, and after the meal, all based on the same arguments. While it was surely an ethical lapse, describing it as a ‘mistake’ was a category error.
The desire to correctly label the infelicity led to reflection on the need for a moral version of the project Giora Hon (1989) undertook, categorizing sources of experimental error. The goal here is to construct a sort of Linnaean classification system for morally wrong acts. According to this account, there are two kingdoms of ethical misdeeds: one deals with cases in which the agent did not intend to violate morality, and the other comprises those cases in which the agent knowingly did. Within these two kingdoms there are seven genera of ethical errors giving us a wide range of species and sub-species that can be illustrated with examples. In some cases, there are epistemic mistakes at play, in other cases errors of judgment, and in yet others non-moral elements that may supersede, or at least be taken to supersede, ethical responsibilities. By distinguishing these clearly, we can develop an understanding of the span of moral wrong-doing.
This is more than a mere academic exercise in categorization. Philosophers going back to David Hume (1739) have noted that there is a difference between statements involving fact and those involving value. The first group describes how the world is and the second sets out how the world ought to be. Explorations of the scientific method discuss how we should develop rational beliefs concerning that which exists (Gimbel 2011), but given that the world is not always the way it should be, what are the truth conditions for normative claims? What makes some moral claims true? Some have moved to extra-natural or supernatural justifications for moral truth. Immanuel Kant (1797) places the ultimate justification for ethical truth in the realm of metaphysics, while others like Thomas Aquinas utilize the Divine Will as the additional content needed to bridge the gap between is and ought. But by seeing the range of ways in which we can go wrong morally, the need for such a bridge can be obviated, returning moral discussions to the humanistic world.
1. I Thought I Was Doing the Right Thing, But I Wasn’t
The first kingdom of moral error comprises those acts in which the agent chose and executed an action that was morally problematic, but did so wrongly believing that the act was, in fact, permissible. There are three distinct genera within this group: moral mistakes, getting carried away, and moral ignorance.
Moral Mistakes
The term ‘mistake’ is generally reserved for cases in which there is a false belief that plays an operative role in determining the action of an agent, that is, cases in which someone makes a conscious choice to act in a particular way for a particular reason and the flaw that renders the act problematic is contained in the reasoning that led to it. In the case of ethical action, we can differentiate between two sorts of premises involved in the reasoning that leads to the rational decision to undertake the specific act—facts of the world and moral principles.
This approach presumes a clean fact/value distinction, and there is, of course, a long line of strong arguments against maintaining such a distinction; see, for example, Foot (1958), Williams (1985), and Putnam (2002). For the sake of this scheme, the cases considered allow us to naively appeal to the intuition underlying the traditional distinction found, e.g., in Hume (1739). In the interesting cases where concepts get thick, we can simply move them into the subsequent category, even if no absolute line can be drawn. Both of these sorts of premises are defeasible, leading to two different species of moral mistakes: errors of fact and errors of judgment.
Let’s start with errors of fact. Acts require beliefs about the world. Sometimes it is beliefs about how the world is. Sometimes it is beliefs about what the act will accomplish. These beliefs are sometimes false. The first species of ethical errors are those that result from mistaken factual beliefs, so that while the error may be moral, the origin is, in fact, epistemic.
We can consider a morally wrong act to be an ‘error of fact’ if and only if the agent can truthfully claim that “I committed act X because I believed proposition P was true and if P had been true, then X would be morally acceptable, but it turned out P was false.” We can divide these into four distinct sub-species based upon the reason for the belief in P.
The first sub-species of error of fact is the sort in which the agent had good reason to believe that P was true, but turned out not to be. We rarely have complete information about the contexts in which we act, so we must rely on inferences. As a result, there will be cases in which proposition P, a belief about the situation, is the conclusion of a cogent inductive inference, but that the case at hand turns out to be one of the unexpected situations in which the likely outcome is not the actual outcome.
Suppose, for example, that you attend a social function with the President of another academic institution and you see him drinking far too much alcohol and overhear him say that he was going to drive his car to another location after the party. When someone tries to convince him not to drive, he insists on doing so. You leave the party and see a car parked in a spot marked “Reserved for the President.” In order to make sure that the President does not drive drunk and thereby cause harm to himself, someone else, or the reputation of the institution so graciously hosting you, you let the air out of the car’s tires, making it incapable of being driven. Unbeknownst to you, that is not the President’s car, but that of the caterer whom the President told to use his space to facilitate delivery of the hors d’oeuvres.
You would be responsible for the wrongful act of tampering with the caterer’s car. In citing your perfectly reasonable belief in the statement P, in this case, that the car you disabled belonged to a person who expressed an intention of driving drunk, you are not excusing yourself from the misdeed, but giving an explanation why your error was factual in origin and does not display a deeper character flaw. It was an attempt to do the right thing, even if it failed and even if that failure results in your having to take responsibility for the problematic act. You did make a moral mistake, but the origin of that mistake is a factual error that results from an otherwise reasonable inference.
A second sub-species is the case of unintended consequences wherein an inference is made based on facts of the world to instantiate a legitimate moral principle and in doing so, the desired act is successful, but its success triggers a secondary effect that causes harm which had not been expected. Action A is taken because it is inferred correctly to cause B, where B is morally desirable, but B then goes on to cause C, which is not. The satisfaction of the proximate moral goal B may be laudable, but the agent remains culpable for the secondary effect.
A classic example of this comes from Thomas Nagel’s The View from Nowhere, in which he describes a spider living in a urinal in the men’s room.
“When the urinal wasn’t in use, he would perch on the metal drain at its base, and when it was, he would try to scramble out of the way, sometimes managing to climb an inch or two up the porcelain wall at a point that wasn’t too wet. But sometimes he was caught, tumbled and drenched by the flushing torrent. He didn’t seem to like it, and always got out of the way if he could. But it was a floor-length urinal with a sunken base and a smooth overhanging lip: he was below floor level and couldn’t get out…The urinal must have been used more than a hundred times a day, and always it was the same desperate scramble to get out of the way. His life seemed miserable and exhausting.” (Nagel 1986, 208)
So, Nagel did the right thing, fishing the spider out of the urinal with a paper towel and placing him on the men’s room floor…where he found him dead the next day, starved from being deprived of his food source—the small bugs that were attracted by the lingering urine. Nagel had acted with good intentions to improve the life of a fellow being and was successful in alleviating the spider’s difficulty in avoiding being swept away by urine or flushed water, but he is also responsible for the spider’s unnecessary death. The initial inference was spot on, but led to an unexpected secondary state of affairs which came with moral culpability.
A third sub-species of errors of fact are the cases in which the agent should have known that P was true in this specific instance. These are often cases in which there is some peculiarity which the agent should be aware of, but neglects in determining how to act. Like the first case, it is an error in inference, but where in the first species the inference is inductively cogent but false, this case involves an enthymeme in which the missing premise changes the outcome of the argument. Where Nagel could not have been expected to have a sense of the food source of the spider, there are some propositions that one can be expected to know.
Consider, for example, a longtime and dear friend who is suffering from dehydration who sends you to a convenience store for a bottle of sports drink needing both the water and electrolytes it contains. You bring it back and the friend guzzles the bottle wrongly assuming you purchased the sugar-free version, your having known since childhood that the friend has type-1 diabetes. You did not intend to compound the health issues of your old buddy, but you did by acting in a way that you wrongly inferred would be in their best interest.
You are morally culpable in this case because you should have known to get the sugar-free version without explicitly being told so in this specific instance. It is different from the first case because it turns on a different sort of error in the argument that led to the belief in the proposition P on which the moral decision was based. The first case includes a cogent inductive inference that unexpectedly turns out to be false, whereas in this case there is an inference to be drawn that requires an additional fact that the agent should have known, but neglected in making the inference.
The fourth sub-species of error of fact is one in which the agent believed that they were acting in accord with proposition P, but actually was not as a result of negligent oversight. In the first three cases, belief in P was the result of an inference that happened to fail. Here, on the other hand, there is no inference, but rather garden-variety getting something wrong despite an intention to get it right because of preventable negligence on the part of the agent.
Consider the case in which you are cat-sitting for a friend who has gone to a conference and you must administer medication to the pet for a minor illness. Reading the handwritten instructions that you were left, you were told to give a teaspoon of the drug to Mittens, but reading quickly while also checking a text on your phone, you mistakenly read the lowercase ‘t’ as uppercase and wrongly give her a tablespoon of medication. By tripling the dose, you cause a reaction that requires an emergency trip to the veterinarian.
Like the first three sub-species, you acted in such a way that you believed at the time to be morally proper. But unlike them, this was not a reasonable inference gone wrong, but rather an error that resulted from the sloppiness arising from your choosing to be distracted. You rightly feel terrible for subjecting Mittens to the unnecessary suffering as a result of your negligence, but the factual error is not explainable in terms of a rational inference, rather it is completely on you for having formed a false belief.
In all four of these sub-species, the agent deserves moral condemnation despite the intention to do the right thing. The moral flaw derives from an error in moral deliberation, but the error is not the result of problematic moral reasoning, but rather from a factual error informing the act that was intended to be the morally right thing.
The second type of moral mistake involves the moral components of the inferences. In deciding what you think is the right thing to do, you could have all of the facts about the world correct, but apply the wrong principle thereby acting improperly. There are three sub-species of this sort of error: misapplication of a valid principle, application of an invalid principle, and unreflective application of a principle.
The first sub-species is the case in which the agent applies the wrong principle to a given situation. In these instances, the agent wants to do the right thing and the reasoning by which the act is decided involves a principle that is not itself flawed, but rather is misapplied. These are often the sorts of cases in which one has conflicting duties, or prima facie duties, as W.D. Ross terms them, but in which it is clear which ought to take moral precedence, yet the wrong one is selected.
A well-worn example of this sub-species comes from Immanuel Kant. It is certainly the case that one ought to tell the truth, all else being equal. Kant, famously, considers the case of the would-be murderer searching for the hiding victim. If a bystander knows where the victim is hiding, knows the intention of the would-be murderer, and is asked directly where the victim is, Kant contends that the bystander must tell the truth to act in a moral fashion because the imperative ‘lying is wrong’ is a universal truth.
Like all legitimate moral principles, it is true ceteris paribus, viz., all else being equal. But in the complexity of real-life contexts, things aren’t always equal. By denying the existence of the ceteris paribus rider, Kant is committing an error of judgment here by applying a valid principle where it does not belong. One certainly ought not lie in most situations. However, one ought also to do what one can to save innocent lives. When these two principles conflict, it is clear which ought to be the one obeyed, and Kant’s contention that it is telling the truth is certainly incorrect, despite the fact that we do accept the principle as a generally correct rule.
The second sub-species is the application of a flawed principle. One of the dangers of philosophy is that it leads to allegiance to abstract beliefs that may lead one to formulate principles that are consistent with those beliefs, but flawed as bases for moral action. As a result, a true believer could buy into a false principle, apply it correctly, and thereby be led to believe they were acting rightly, but in fact, be acting wrongly. In these cases, there is an inference to the principle, a reason that the agent believes justifies it, but the reasoning is fallacious.
Consider an absurdly radical form of libertarianism in which one starts from the reasonable position that autonomy is valuable and then in positing it as the primary or only value, formulates the principle that one ought never intervene in the life of another person, that even the most seemingly helpful acts are, in fact, harmful, by virtue of robbing the person of their autonomy (Rand 1995, 82). As a result, one may formulate and even believe the proposed moral principle, ‘never help anyone.’ Using it as a basis for neglect of those whom one could easily save from unnecessary tragedy would be an instance of this second sub-species.
The third sub-species is where one applies a principle unreflectively. Unlike the first case in which the principle is extended beyond its boundaries and the second in which a flawed principle is derived, the third is the case in which one simply fails to consider whether the principle one uses as a basis for one’s attempt to act rightly is actually a true moral principle.
We are acculturated into society and rules become what Émile Durkheim (1895) terms “social facts,” i.e., “manners of acting, thinking and feeling external to the individual, which are invested with a coercive power by virtue of which they exercise control over him” (Durkheim 1895, 21). Sometimes these social facts are good, other times they are not. But many people simply obey them because that is how they were raised, and because, as Durkheim points out, there are enforcement mechanisms to reinforce them.
One sees this sort of error when dealing with members of older generations for whom social progress violates established ways of doing things. Consider the grandparent who believes they are being righteous in kicking the queer grandchild out of the family. It is easy to confuse social custom or longstanding social beliefs with moral necessity. When someone schooled in the old ways deems those who violate them to have done something wrong and deserving of punishment, the error is of this sub-species.
Getting Carried Away
The first genus involved mistaken judgments deriving from flawed moral deliberation. But moral lapses may result not from fallacious reasoning, but from psychological effects that override reasoning altogether. We may desire to be moral individuals, hope to choose our actions in accord with morality, and yet sometimes that intention gets outmatched by other cognitive goings-on.
Emotionality is the primary culprit here. Consider the myth of Heracles who, having flown into a blind rage, murders his own children. Neurologically, strong emotions like anger or resentment can cause changes in the way the brain works that diminish our ability to employ the decision-making process rationally. This is what Daniel Goleman terms an ‘amygdala hijack’ (Goleman 1995). As a result, we will be influenced by strong emotions to do things that are cognitively complex and therefore require intentionality, but which we later regret, realizing they were the wrong choice once we calm down and can assess the action with cool rational detachment.
A second species of this genus does not require that we step away to know that we are getting carried away when acting. Sometimes, we are fully aware of it at the time and yet remain unable to assert the dominance of the sort of rationality necessary for good moral decision-making.
Consider the distinction from Tamar Gendler between beliefs and what she terms “aliefs.” One may believe a proposition P because one has strong evidence supporting an argument in favor of P, and yet, one may still be unable to act in accord with that belief. We may, for example, know that the bottle that formerly held bleach has been thoroughly cleaned and sterilized before being filled with potable water, but despite that belief be unable to drink from it because we see the label ‘bleach’ on the front of the bottle. The alief that the bottle contains a substance that is poisonous overrides the belief that it does not.
One can easily see how situations could arise wherein acting on the belief is morally necessary, but made difficult due to an overriding alief, that is, where one knows what one has to do yet cannot bring oneself to do it. Consider the plot of the film Airplane! in which a commercial jet’s flight crew has been rendered too ill to work midflight and the only person with the skills to save the lives on board is a former pilot with a lingering case of post-traumatic stress disorder. The passenger has the justified belief that he has the ability to land the plane safely, but the alief that he does not. Initially, he refuses to take control of the flight based on the alief he knows is false. Surely, he fully recognizes this dereliction of moral duty as ethically problematic.
A third species involves the inability to reason clearly as a result of intellectual incapacitation. If you make poor moral decisions because you are, say, drunk, then the source of the misdeed you would never commit while sober is understandable. But the explanation is not an excuse if you were the one who chose to drink as much as you did. If you are responsible for creating the state in which you are less than able to reason effectively and that handicap is causally connected to the bad choice, then by an ethical version of the transitive property, you are blameworthy in this way (Aristotle 322, 38).
Acting in Ignorance
The third genus of morally problematic actions in the kingdom of acts in which the agent tried, but failed to act morally is when the immorality arises from ignorance. As Aristotle points out in the Nicomachean Ethics, there are cases in which ignorance is a legitimate excuse that removes culpability and cases in which it does not (Aristotle 322, 32). Those in which it is not—cases in which the agent did not know, but should have known—are the content of this category.
The first species of misdeed is where the agent was completely unaware of the action that they were undertaking while committing it, would not have taken the action if they had known, but are responsible for not knowing. If, for example, you got in your car, started the engine, put the car in reverse, and pulled back without checking the rearview mirror, backing over your neighbor who was kneeling down tying his shoe, you would be to blame for his injuries. You did not know he was back there because you failed to do the thing you should have done that would have given you access to the relevant fact. It was not intentional, but you are to blame for the lack of the relevant knowledge.
While both this species and that associated with false beliefs arising from negligence result from morally relevant sloppiness, in the over-medicating Mittens type of cases, there was an act designed to acquire a belief that resulted in the wrong belief being formed. In this sort of case, there was a lack of a belief where a relevant fact should have been inserted. By not checking the rearview mirror before backing up, one did not operate with a mistaken assumption that there was no one behind the car, rather one operated without even considering whether there might be. It is the failure to actively attempt to form the requisite belief that generates the infelicity.
The other species of this genus is where the agent thought they were doing something other than what they were, in fact, doing such that the act they believed they were undertaking is morally acceptable, but the act they were actually engaged in is not. A trivial example of this sort is reaching out to console an injured friend with a reassuring pat, but ending up hitting the site of the injury so that instead of emotional comfort, you accidentally cause physical pain. You are responsible for the additional hurt to your friend, despite your intention being the opposite. The motivation was care rather than malice, but the act was problematic.
2. I Knew It Was Wrong, But I Did It Anyway
The second kingdom in the classification system contains morally problematic acts committed by agents who knew that the act was problematic. This kingdom will have four genera: exceptionalist excuses, the irrelevance of morality, acts of immoral justice, and the fuck-up. In each, the agent is fully aware of the immorality of the choice but either does not care or contends that a non-moral element overtakes the ethical in deciding how one ought to act.
Exceptionalist Excuses
Moral judgment is only one factor in deciding whether to undertake an act or not. There are contexts in which we know that the needle on the metaphorical morality meter points against the act, but there may be other grounds that make the act expedient or desirable. There are at least three different species of this category: trivial immorality, utilitarian override due to competing obligations, and ethical anomalies.
Trivial Immorality
There are cases in which we know that an act is wrong, but the degree of immorality or the consequences of committing the immoral act are seen as so slight that the threat of immorality ceases to have its usual force. Examples include the ‘little white lie’ such as overstating your need to hang up when a telemarketer phones during dinner. It is a lie that your pot is boiling over on the stove in the other room, but you were not going to purchase the extended warranty for your automobile anyway. The lie makes no difference because it does not have a negative effect on the telemarketer since they were not going to be successful. Further, since there is no lasting relationship between you and the telemarketer as you will (hopefully and likely) never speak again, any Kantian concerns about trust between people are not germane. As a result, the act may be wrong, but not wrong enough to make any real difference. The triviality of the wrong makes any concern mere nitpicking. There are those like Bok (1978) who contend that the triviality renders the little white lie morally acceptable. For the sake of completeness of the taxonomy, such acts will be considered immoral.
Utilitarian Override Due to Competing Obligations
Perhaps the most common case of this sort is where someone knows that their act is wrong, but decides that something other than morality is more important. It may be some pragmatic aspect of the context like ‘I had to do that immoral thing or else I would lose my job,’ it may be loyalty to a friend or group of which one is a member, but most often it is the naked self-interest of ‘yes, I know it is wrong, but it is so much fun.’ We often choose to do the wrong thing because it holds the promise of making things better for us.
This includes the self-serving lie that every teacher has had to endure. The student has not done what the student needed to do. The student wants special treatment they know they do not deserve and likely will not receive. But if they could make you believe that a tragedy has occurred—a personal illness, a death in the family, a computer malfunction—then your sympathy might get them what they want, but do not deserve. In these cases, the student deems not failing the class more important than their integrity.
This is not utilitarian reasoning in the moral sense. These are not the standard sorts of cases in which acts that are generally morally problematic become acceptable due to a peculiar result from the hedonic calculus. Rather, the utility here is amoral. The agent has determined that knowingly doing the wrong thing will have helpful consequences and deemed those consequences more important than acting in an ethically appropriate fashion.
Ethical Anomalies
The final species is often connected to the informal fallacy of special pleading wherein we make an exception for ourselves that we generally do not make for others. We contend that the situation in our case is different and therefore the ethical expectations are different. In this way, we will attempt to justify a clearly immoral act by asserting that the situation in which the act is embedded is an anomaly, that is, so peculiar that the standard means of moral deliberation are no longer legitimate. We fully acknowledge that in the usual case, one should not do what we are doing, but claim that this is not the usual case and in this special circumstance, the ability to violate ethical norms is allowable.
People who are fully able-bodied should not park in spaces reserved for those with a handicap or specific challenge (consider parking spots reserved for expectant mothers). You might fully agree that those spaces ought not be used by those not in the specified categories. But on this day, there were no open spots and you only needed a moment to run the package into the store to drop it at the counter to be shipped back to the manufacturer. It would take only a second. “I wasn’t really parking there,” you contend, “just popping in and out.”
This is different from the case of the triviality because you do not consider selfishly creating further disadvantage for those who are already disadvantaged to be trivial. Had you been driving around and seen someone else without the requisite license plate or hangtag do what you did, you would have taken extreme umbrage at such thoughtlessness despite the fact that you excused yourself for the act.
Irrelevance of Morality
The second genus of this kingdom is radical in asserting the irrelevance of morality itself. There are three species of this category.
The first is global nihilism. If life is meaningless then moral expectations are meaningless as well. One might have the moral obligation to pull a drowning child out of a swimming pool, but one might choose against executing this required act, instead glancing up from one’s lounge chair over the top of one’s open copy of The Stranger to see the flailing child starting to sink and simply shrugging and shaking one’s head at the absurdity of existence.
The second species is local nihilism. One might full well believe that life is meaningful or can be endowed with meaning through one’s acts and that morality has its usual force in general, yet choose in some specific circumstance to act in bad faith. A wealthy person might know that stealing is wrong, and yet, just for the thrill, shoplift a moderately expensive item from a large corporate store that they easily could have afforded. Their social position generally allows them to escape being held responsible for this sort of thing, as Donald Trump claimed in the infamous “Access Hollywood” tape: “When you are a star, they let you do it.” There is morality for thee, but not for me.
The third species is pragmatic amoral justification. If all that matters from an act is its ‘cash value’ as William James put it (James 1907, 97), then moral proclamations are meaningless metaphysical twaddle. Ethicists, on this view, are proto-economists and now that we have real economists, ethicists are welcome to fade away into the intellectual past alongside phrenologists and alchemists. If one held a radical sort of evolutionary psychological understanding of morality, then one might ignore the plight of the homeless, rationalizing any guilt one might feel at one’s callousness by holding that the poor person simply needed to be taught to work for what he gets. As B.F. Skinner argued in Walden Two, “What is love…except another name for the use of positive reinforcement? Or vice versa” (Skinner 1948, 300).
Moral Protest
A third genus is one in which the immorality of an act is embraced for the sake of a larger extra-personal justification, say, putting political or social causes above the moral. You may decide to do a bad thing in service to a greater good, knowing all the while that your act was morally problematic. Three species of this sort of act present themselves: service to the source of morality, conscientious objection, and pursuit of justice.
Service to the Source of Morality
If there is an entity that forms the basis of morality, then serving its interest immorally may be necessary, and since the entity is the foundation on which morality is based, its interests could put the act outside of the realm of ethics. Consider Søren Kierkegaard’s treatment of the story of the binding of Isaac in Fear and Trembling. Abraham’s foiled attempt to murder his son may have been righteous because he was following the command of God, but it was not moral. In Kierkegaard’s interpretation of the biblical tale, Abraham was faced with a choice between the ethical and the faithful and should be celebrated for preferring the latter over the former. Because it was a Divine command that violated morality, his duty was extra-moral from a Christian theological standpoint, but immoral from an ethical standpoint. We see similar reasoning from a wide range of contemporary religious fanatics who commit heinous acts in the name of a deity who forbids killing.
Conscientious Objection
In his ‘Letter from Birmingham Jail,’ Martin Luther King, Jr. argues that the goal of civil disobedience is to create a situation where violating the law creates additional leverage for protesters in negotiating with power. Sometimes, the reasoning goes, you have to do something contrary to the law to fix the law. The same, some protesters argue, goes for morality.
Consider the animal rights protesters who threw tomato soup on Vincent van Gogh’s masterpiece “Sunflowers” in order to draw attention to their cause. Their argument is that rational discourse has generated insufficient progress concerning the abuse of animals in society, so the only way to get their issues into the collective conversation is to force them in by making noise, something that following the dictates of morality could never accomplish. Immoral acts like attacking a cultural touchstone yield attention, and the hope is that the attention will create sympathetic outrage in their direction, not against them. The argument is not that the act is morally acceptable because it was done in the name of a greater good. Rather, it was accepted by the soup lobbers as a sort of necessary evil, an immoral act done for what was seen as a moral cause.
Pursuit of Justice
The conscientious objectors in the above case use immorality indirectly, hoping the result of the immoral act—immense media coverage—will then translate into something morally good, social reflection that leads to better treatment of animals. The next species of wrong employs the immorality more directly. The immoral act is chosen in order to cause harm, but the perpetrator believes that the victim deserves the harm. The outcome of the immoral act, it is argued, is just deserts.
When we seek revenge on an enemy, we may know full well that we are doing something wrong. Consider, for example, cases of disproportionate reactive violence. Where an eye for an eye, as the Biblical claim of retribution goes, allows two equivalent wrongs to seem to make a right—even when we admit that each one, on its own, is a wrong—there are cases where one eye leads to the destruction of a whole lot more. Especially in the case of repressive governments, the response to an attack may be designed to amplify the violence where the perpetrators of the vengeful reaction possess greater capacity to do harm. ‘If they kill one of ours, we kill ten of theirs.’ In this case, the escalating immorality is designed to exact revenge and establish control.
In yet other cases, the vengeance may be less violent, but symbolic. Suppose John, an art enthusiast, is married to Jane, who has a more lucrative occupation. They purchase John an expensive minimalist work that is the apple of his eye, or at least one such apple, Jane’s best friend being another. When Jane finds out about John’s infidelity, she takes a pen and makes a small dot in the corner of the canvas. The addition would be unnoticeable to anyone else, but it destroys the symmetry and openness of the piece for John. He will never be able to see it as anything other than ruined henceforth. Jane’s act is immoral, even if it is clever.
In all of these cases, the immorality of the retributive act is acknowledged, but justified by the intended pursuit of justice. The victim is considered to have deserved the harm that befell them, but that does not mean that the act was not immoral.
The Fuck-Up
The remaining category of moral failure takes us back to the example with which we began, the case of the colleague who violated his own boycott to eat at Chick-fil-A. This is a special sort of moral misdeed, the “fuck-up.” While all of the errors in this kingdom come with the understanding of the wrongness of the act committed, the fuck-up is different in that the person who errs not only does so knowing the act is problematic, but does so from a pre-existing stance explicitly formulated against that specific type of act. Opposing that sort of act is a part of who that person thinks they are as a moral agent; it is a part of their self-identity. As such, there is a special sort of hypocrisy involved. An agent fucks up when they not only knew that X was wrong when they committed X and could have chosen not to do X, but also explicitly and publicly held that not performing X is a part of the agent’s sense of self.
These cases are different because they not only involve doing something that the agent fully understands they should not have done, but because the violation of identity has deeper ramifications. Our identity not only affects us, but often serves as the basis of our relationships. When one fucks up, there is not only a violation of trust, but a deeper desecration of the very foundation of the relationship.
If the leader of any civic organization is arrested for driving under the influence of alcohol, the result is a sense of the person being morally unfit to lead. But if the person arrested was the head of a local chapter of Mothers Against Drunk Driving, then there is not only a sense of unfitness, but outright betrayal. Is the person really who they say they are? They not only undermined the relationship with the other members of their group, but destroyed the basis on which it rested.
That thoroughgoing destruction is what justifies the crudeness of the label. The term “fucked up” employs profanity for emphasis to stress how foundational the ramifications of the act are. It would be one thing for the husband to lie to his wife about going fishing instead of going to work. The love of angling can coincide with the love of one’s spouse. The harm to the trust between them will require attention to reel the deceived spouse back into a successful relationship.
But if, after flowers, cards, and other regular shows of affection, the husband is discovered committing adultery, then the foundation is broken in a different, more radical sense. This is not a normal sort of violation of trust, but a deprivation of identity. The wife knew that the husband loved fishing. She is angry about the deception, but still knows who the husband is. The cheated-upon spouse, on the other hand, not only loses trust, but also loses an understanding of the identity of the spouse qua lover or soul mate. As a result, the marriage is not only damaged, it is thoroughly undermined, that is, it is fucked up. Hence the act which created it thereby deserves the same sort of label.
In conclusion
Is this taxonomy complete? Does it account for every possible morally wrong act? Likely not. Such a failure, however, would not be a moral failure because of the good faith attempt this work contains. But understanding our limitations, the hope is that this list comprises a valuable starting point for a complete categorization of immoral acts. This is designed to launch that project with an initial structure because if, as ethicists, we strive to understand how we ought to act, it may be helpful to be able to enumerate the ways in which we should not. But even in its incompleteness it does show us that when we analyze the success or failure of our actions from a moral perspective, that deliberation is one that partakes of the purely natural.
References
Aquinas, Thomas. 1274. Summa Theologiae. South Bend: University of Notre Dame Press, 1993.
Aristotle. 322. Nicomachean Ethics. Indianapolis: Hackett, 1999.
Bok, Sissela. 1978. Lying: Moral Choice in Public and Private Life. New York: Pantheon.
Durkheim, Émile. 1895. The Rules of Sociological Method. Chicago: University of Chicago Press, 1983.
Foot, Philippa. 1958. “Moral Arguments,” Mind 67: 502–513.
Gendler, Tamar. 2008. “Alief and Belief,” Journal of Philosophy 105(10): 634–663.
Goleman, Daniel. 1995. Emotional Intelligence: Why It Is More Important than IQ. New York: Random House.
Hon, Giora. 1989. “Toward a Typology of Experimental Errors: An Epistemological View,” Studies in History and Philosophy of Science, Part A 20(4): 469–504.
Hume, David. 1739. A Treatise of Human Nature. New York: Penguin, 1986.
James, William. 1907. Pragmatism: A New Name for Some Old Ways of Thinking. Cambridge: Harvard University Press, 1975.
Kant, Immanuel. 1797. Groundwork for the Metaphysics of Morals with On a Supposed Right to Lie because of Philanthropic Concerns. Indianapolis: Hackett, 1993.
Kierkegaard, Søren. 1843. Fear and Trembling. Cambridge: Cambridge University Press, 2006.
King, Martin Luther. 1963. “Letter from Birmingham Jail,” in Why We Can’t Wait. New York: New American Library.
Nagel, Thomas. 1986. The View from Nowhere. Oxford: Oxford University Press.
Putnam, Hilary. 2002. The Collapse of the Fact/Value Dichotomy and Other Essays. Cambridge: Harvard University Press.
Rand, Ayn. 1995. The Letters of Ayn Rand. ed. Michael Berliner. New York: Dutton.
Ross, W.D. 1930. The Right and the Good. Oxford: Oxford University Press.
Skinner, B.F. 1948. Walden Two. New York: Macmillan.
Williams, Bernard. 1985. Ethics and the Limits of Philosophy. Cambridge: Harvard University Press.