Saturday, October 27, 2012

The Sharp Samaritan: Self Interest and Rescue


In a famous article in 1972, the Australian philosopher Peter Singer put forward an elegant argument for strong duties to contribute to charities. The argument begins by considering the ‘Pond’ situation. In this case, you are walking past a pond and see a drowning child. You can wade into the pond at no risk to yourself, and easily pull out the child. Should you do so? As Singer observes, most people think that you have a powerful duty to do exactly that. (And so, of course, do I.)
Simple yet powerful: Singer's 'pond' argument is a classic of modern ethical philosophy.

But now add the extra factor that you are wearing expensive clothes, and that dry-cleaning them will cost you a couple of hundred dollars. Is there any change in the nature or importance of the duty in this case? Surely not. Everyone agrees that it is still your moral duty to wade into the pond and rescue the child.

What do we conclude from this? Singer suggests a few different formulations, but for our purposes we’ll just consider the principle that, when the cost to you is fairly small, and another’s life is at stake, then you morally ought to pay that cost and save their life.

At first glance, this principle can seem innocuous enough. But as Singer points out, it has surprisingly radical implications. For, in a way, the ‘pond’ situation is one that confronts us every day. Overseas, people caught in famines and conflicts do not have access to food, water and basic medical care. If we donated enough money to charities like Oxfam and the Red Cross, then we could save those imperilled lives. And the monetary cost to us is comparable to our losses in the pond case. Taking into account all the difficulties that foreign aid has in getting resources to those who need them, Singer estimates that we can probably save one life for an investment of about $1000.

Now if it is wrong of a passer-by to decide not to wade in to help the child if they are wearing an expensive suit, then – surely – isn’t it equally wrong for each of us every day to decide not to give similar amounts to charity? (Of course, if we continue to follow this line of thought, eventually we will start giving away so much money we might undermine our own means of existence and capacity to work productively. At that point everyone (including Singer) agrees we should look after ourselves.) Aren’t we being inconsistent in expecting the rescuer to help the child, but not ourselves taking comparable actions to save others?

There is an enormous philosophical literature that has arisen in response to Singer’s argument, and there are lots of different ways it can be critiqued (and, in response, defended). But here I just want to focus on one problem I have with it: The apparent cost to the self-interest of the duty-holder in this case is offset by powerful self-interested gains. First, I’ll argue there are gains. Second, I’ll argue they are important in assessing the pond situation.

The gains

Singer’s presentation of the pond situation describes the potential costs and gains in very material terms – that is, in terms of the dollar-cost to you of dry-cleaning or replacing your clothes and shoes. But there is much more on offer than this.

First, there is the capacity to feel the power we have in the world. As Nietzsche argued at length, in many different ways a fundamental driver of human behaviour is the will to exert power in the world, and to see the changes we have wrought in the world. The pond situation offers a profound opportunity in this respect. As you wade out of the pond, you are holding in your arms a life that would have ended were it not for you. You can apprehend in no uncertain terms the profound effect of your action. And for the rest of the rescued child’s life, everything they do will only happen because of what you did. Now giving money to international charities simply does not provide this feeling of power in the same direct way as rescuing the child. Our charitable giving is mediated through the actions of countless other people – like the humanitarian actors themselves. We might be unsure whether our money really had the desired effect in this case or not. And we don’t know exactly who we have saved. It is only in an abstract, indirect sense that we can feel the significance of what we have done, not in the immediate, determinate way we create change in the world by rescuing the child. Rescue shows us in the most visceral, overt way the power we have in the world. And every human being, for good or for ill, likes to feel that power. We want to know we matter.

Second, there is heroism. Most cultures tell stories about heroes. They are objects of our admiration. By rescuing the child, we fit ourselves into these stories, casting ourselves into the role of hero. From the first moment a child picks up a comic book or ‘Famous Five’ story, they will fantasize about being the hero in tales such as these. Rescuing the child offers the priceless opportunity to make those fantasies come true. Giving to charity does not. Indeed, sometimes when I teach Singer’s argument to undergraduates, students come up to me afterward saying how it would be a dream come true for them to be able to rescue a child in such a pond-like situation. It wouldn't just make their day; it would make their year!

Third, there is excitement. Rescuing the child is exciting stuff. Will you get there in time? Will you have to do CPR, and will you remember how to do it? And there is a story here. How did the child get into this position? Where are its parents? People spend thousands of dollars and travel the world in the hopes of having exciting adventures, and having great tales to tell of their adventures. It may sound grim to say it, but the fact is that charity is just not exciting in the same way rescue is.

Fourth, and building on all of the former points, there is glory. As Adam Smith observed in his work on the moral sentiments, many people want to get the acclaim and admiration of others for doing the right thing. That doesn’t mean they want to ‘fake it’. They don’t want only the admiration without the reality (this is merely a love of fame, rather than a desire for true glory, as Smith puts it). Rather, they want to actually do the right thing, and to be known and admired for doing so. The pond situation offers a fantastic opportunity for this. Because it is an interesting, exciting story, people will want to hear it – and because you are the hero in the story, you will be the centre of attention and the object of admiration. Newspapers often carry reports of good Samaritans who saved others, and in so doing provide a motivation to future Samaritans. The pond situation offers the opportunity for glory in a way that charity simply does not.

Fifth, as well as social appreciation, there is the appreciation of specific people. In the pond case, this is the appreciation of the parent or parents of the child. Imagine you yourself are the parent in question. You have lost your child for a moment. You look around in sudden concern, realising the dangers surrounding you. You are near a road, near a pond, there are strangers around. Where is your child? Are they okay? Suddenly you see a complete stranger wading out of the pond, carrying your just-breathing child. What is your reaction? 

My overwhelming feeling, I think, would be one of profound gratitude. How can I ever repay them? This isn’t to say I would open up my wallet to them, as I would worry that could cheapen the importance and nature of what they have done. But I certainly would seriously consider whether I could somehow show my immense appreciation for their action. Being an object of such gratitude is a wonderful social experience – even if there are no further social and material consequences that might flow from it.

Sixth, building on all the prior factors, in cases where costs have been incurred, there may be real opportunities for others to deal with them. As a parent of a rescued child, I at least would insist on paying for the dry-cleaning of the rescuer’s suit. And imagine that you missed or were late to an important meeting or interview because of the rescue. Is there any better excuse imaginable than that you stopped to save a drowning child’s life? What manner of person would not go out of their way to reschedule the meeting or interview? And if it was a job interview, how could the interviewers fail to be impressed by the quality of the applicant?

Now, of course, it could be argued that all of these personal, social and potentially material benefits (or at least, reductions of costs) are in some sense irrational, and that in all consistency we really should personally and socially apply them to charitable giving as well. But even if this is so (and I doubt it), the fact remains that rescue has all these advantages, and charity does not. Saying it is irrational or inconsistent does not alter the social reality that rescue has these benefits and that charity does not.

So what?

Suppose we grant that acts of rescue do have all of these real and potential benefits. Why does it matter?

My point is not that these benefits are so great that, in any given case, it will always be in one’s self-interest to rescue the child. This is not an ‘egoist’ argument for being a Good Samaritan. Rather, my point is that if we are to take Singer’s argument seriously, then we need to accurately tally the costs and benefits in each case. Because of all of these sorts of benefits, the real, sum-total cost (the decrease in our ‘expected utility’) of rescuing the child is much smaller than Singer estimates. Because it is smaller, the sacrifice expected of the person is not as great. As such, when we apply the same moral arithmetic to cases of international charity, we will get very different answers from the ones Singer puts forward.
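
To make the bookkeeping concrete, here is a minimal sketch of the kind of tally I have in mind. Every number is an invented placeholder rather than an estimate of anything; the point is only the shape of the calculation.

```python
# A toy tally of the net expected cost of rescuing the child, once the
# offsetting benefits discussed above are priced in. Every figure here
# is an invented placeholder, purely for illustration -- not an estimate.

def net_expected_cost(material_cost, offsetting_benefits):
    """Material cost minus the summed value of expected offsetting benefits."""
    return material_cost - sum(offsetting_benefits.values())

rescue = net_expected_cost(
    material_cost=200.0,  # say, dry-cleaning the expensive suit
    offsetting_benefits={
        "felt power / knowing we matter": 50.0,
        "heroism and excitement": 50.0,
        "glory and gratitude": 60.0,
        "costs reimbursed by others": 100.0,  # e.g. the parent pays the bill
    },
)

charity = net_expected_cost(
    material_cost=200.0,   # a donation of comparable size
    offsetting_benefits={},  # few of the above benefits attach to charity
)

print(f"net expected cost of rescue:  {rescue:+.2f}")   # -60.00: a net gain
print(f"net expected cost of charity: {charity:+.2f}")  # +200.00
```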

But perhaps this misses the point. So consider the following objection.

When we are standing on the bank of the pond, it is hardly as if any of us will really calculate all of these benefits. Rather, we simply see the drowning child and realise we can help. With no further consideration, we wade in and save the child. The benefits may subsequently occur but – it may rightly be objected – it is unlikely they formed any part of our reasoning at the moment of action.

I agree this point is a valid one. But I still think the benefits matter. They just matter in a more indirect way. The benefits mean that whenever we hear stories of rescues, we get used to them ending happily – indeed, we think they should end happily, and we act to make this happen. We shake the hand of the person who rescued the child and buy them a beer. We reschedule the appointment they missed. We listen in admiration to their story. And because of all this there is never any caution or qualification applied to the norm of rescuing. We don’t hear innumerable stories about how the rescuer lived to regret their action, or failed to live up to their other responsibilities because of the costs they incurred. For most people, I agree it is true that the above-noted benefits are not their primary reason for rescuing the child. But the benefits still play a role in freeing the person from any need for second thoughts in such a situation. The benefits don’t motivate the principle, but they remove obstacles that might otherwise weaken our responsiveness to it.

If this is right, then one of the reasons we are horrified by the person who walks by without saving the child is simply because there are no countervailing considerations that could justify their doing so. Our society has created, as it might be put, a well-functioning norm of rescue, with myriad rewards and cost-mitigation factors set up to ensure its consistent functioning. And this is important in the context of rescue, because we cannot afford for people to have to weigh up costs and benefits in such a case. We need them to be willing to jump into the pond, and to trust that society has got their back. A well-functioning norm achieves this. If a person cannot be relied on to act on a norm in a situation like the pond one, then it is hard to believe they are capable of acting on a norm in any situation. Their willingness to respect and care for others must be close to zero.

And this means that if we are interested in improving the lot of those suffering from famine and the ravages of conflict, then we are better served trying to create social and personal rewards that can flow to the people who help them. The more we can make real the benefits of doing such important moral actions, the more we smooth the way for such action, and allow it to feed into a life well lived.

Friday, October 19, 2012

"I knew it!" Confirmation Bias and Explanation

Source of all evil or defender of all freedom? How the same event can seemingly justify directly opposing beliefs.

“Confirmation bias” refers to the well-known human foible of favouring one’s existing beliefs and commitments, even in the face of conflicting evidence. For instance, if you believe that some person – Annie, say – is shifty, you will have a tendency to hold to that belief over time, and even come to believe it more strongly. Psychology experiments suggest that confirmation bias works in a variety of ways – including biasing the search for evidence (we search for data that supports our belief, rather than data refuting it), the interpretation of evidence (we look for weaknesses and ambiguities in evidence that questions our belief) and the memory of prior evidence one has been exposed to (we have faster and more thorough recall of confirming rather than disconfirming evidence). Through these three mechanisms, human beings show a decided tendency to cement their initial beliefs rather than revising them. In some cases confirmation bias can be very powerful. If people are exposed to evidence that Annie is shifty, say, and then shown beyond all dispute that this initial evidence was fabricated, they will still tend to harbour suspicions about Annie’s character that arose on the basis of that evidence, even though they will explicitly acknowledge and believe that the evidence was false!

Of course, confirmation bias is not insurmountable. People can and do decide they were wrong about something; and can recognize and choose to use tests and lines of enquiry that will expose their mistaken beliefs.

Confirmation bias and philosophy

Does confirmation bias affect philosophers too? There are good reasons to believe it does – consider the old saying that “being a philosopher means never having to admit you’re wrong”. The worry expressed here is that philosophers will use the sizable intellectual tools at their disposal to critique and interrogate opposing theories and evidence, and search out subtle cases of confirming evidence. In so doing they will corroborate their initial position, rather than using those intellectual tools to honestly enquire into it. And certainly wholesale changes of philosophical theory by established philosophers tend to be pretty rare. To be sure, philosophers develop their positions over time, revising and responding to new evidence and argument, but direct moves into the opposing camp don’t happen a lot, in my experience at least. (And I have no particularly special virtues in this regard either, of course.)

And it is probably fair to say such biases arise even more strongly in the emotionally charged milieu of political philosophy. Those who think that capitalism is a pretty good idea sometimes seem to be able to find confirming evidence for this belief every day and in every way. And the same is true for those who think capitalism is the fundamental source of every misery in the world. No possible horror can beset humanity without an explanation leading back to capitalism.

Explanations of confirmation bias

So why do we all do it? Well, there are lots of different reasons that have been put forward for this human tendency to cement our beliefs over time. For instance, having to reject a belief is cognitive hard work. The type of thinking that would reject the belief can require effort. Furthermore, the type of thinking that follows from revising the belief takes still more effort – do other beliefs now have to shift because that first one has been rejected? And since human beings are largely prone to avoiding effort, we are motivated to avoid these situations of mental heavy-lifting. Easier to stick with what we know.

Also, the more we understand our world, the better we are doing, and the more secure we feel. Beliefs underwrite our actions and our projects in the world. If our beliefs can be relied upon, then that enhances our ability to predict future events and attain our goals. Finding out a prized belief is wrong threatens that happy security.

And there is a social factor. The more we change our mind and revise what we have said, the less others can feel confident in relying on us as a valuable source of information and insight. Since most of us like our views to be taken seriously, a habit of admitting defeat carries social costs.

There’s a lot to be said for these sorts of explanations of confirmation bias – and the countless others out there in the psychology literature. In all likelihood, confirmation bias is over-determined – there are lots of reasons we do it.

However, I want to reflect on the possibility that confirmation bias arises from largely rational, sensible ways of thinking – in particular the search for explanations for events.

Confirmation bias and seeking explanations

Imagine something strange happens in the world – something you can’t explain. But it is (just suppose) important for you to be able to understand it. So you seek an explanation for it.

What does that involve?

Well, one of the main things it will involve will be aligning this new event with your current beliefs. If you can work out how this new thing happened, given what you already believe, then you will have explained that event. If you can work out how the facts as you currently understand them would have caused (or at least allowed the possibility of) this new event, then you will have explained it. That, pretty much, is just what it means to have an explanation.

So when you start searching for information to explain the event, you search for what we might call linking facts: facts that would allow you to move from your beliefs as they stand to the occurrence of the event.

Current beliefs + Linking facts = Explanation of event (or phenomenon)

This, I hope, is pretty straightforward. If we want to understand something, there’s no point aligning it with other people’s beliefs (how would that help?), and no point trying to explain it from first principles all the way up (couple of years to spare?). The new phenomenon is understood and explained only when it makes sense, given what we already accept. This doesn’t mean that the linking facts can’t replace or force revision of existing beliefs, but it does mean the existing beliefs fundamentally frame what counts as an explanation.

In fact, this search for linking facts has at least two results.

First, the type of linking facts you are looking for will vary depending on what your current beliefs are. If you believe that the United States is ultimately the root of all international problems, for example, then you will search out the type of linking facts that will connect the current phenomenon – the situation in Syria in 2012, say – to the US. You will look for the involvement of the CIA, the pressure the US exerts on the global media, its historical influence on and action in the Middle East, its current oil interests in that region, and so on and on. On the other hand, if you believe that most of the world’s problems arise from extreme ideologies and fundamentalist religious beliefs, then you will search for very different sorts of linking facts.

Second, the search for linking facts will determine when your investigation stops. Once you have located the required linking facts, then the event is understood and explained. You can stop searching. So once you have found – to return to the example – that the US has CIA agents at work on Syria’s border with Turkey, that it has a history of animosity with the Syrian regime, and that Syria is an ally of Russia, then you have explained the Syrian crisis and the way it is presented in the mainstream media.

Job done. Move on.

And, of course, if you had started out with concerns about religious extremism, then in all likelihood a different set of linking facts would have been discovered, and the search stopped at that point. In other words, you will not continue to search in such a way that you might (a) find subsequent facts that disprove your linking facts or undermine their capacity to explain the event, or (b) find subsequent facts that would better account for the event using an entirely different explanation.
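
To see the mechanics, here is a toy sketch of this search-and-stop process. The beliefs, facts and matching test are all invented labels for illustration; the crucial feature is the early return, which halts the search as soon as a congenial linking fact is found.

```python
# A toy model of the 'linking facts' search: check candidate facts and
# stop as soon as one connects the event to the prior belief. All the
# beliefs and facts below are invented labels, purely for illustration.

def explain(event, prior_belief, candidate_facts):
    """Return the first linking fact consistent with the prior belief."""
    for fact in candidate_facts:
        if prior_belief in fact["supports"]:
            # Job done, move on -- no search for disconfirming facts,
            # and no search for rival explanations.
            return f"{event}, because {fact['claim']}"
    return f"{event}: unexplained -- only now might the prior be questioned"

facts = [
    {"claim": "CIA agents operate on the Turkish border",
     "supports": {"the US drives world events"}},
    {"claim": "militias recruit along fundamentalist religious lines",
     "supports": {"extremist ideology drives world events"}},
]

for belief in ("the US drives world events",
               "extremist ideology drives world events"):
    print(explain("the Syrian crisis", belief, facts))
```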

Now this search for explanation is not in any sense irrational. But it can clearly contribute to confirmation bias. As well as constraining the nature and end-point of the search, it cements the initial belief even further.

Why?

Because now that initial belief (about the role of the US in world affairs, say) helps explain this new event. The fact that it can explain this means you now have one more reason for believing it. If someone else later challenges this belief of yours, you are entitled to think: “But wait, clearly the US is playing this role, because I found evidence of its presence in the Syrian crisis.” You did not set out to test this belief, but you nevertheless ultimately collected evidence that helped justify your continued belief in it. In this way, when two people with different starting beliefs consider the same event – even with access to the same information – each will come away with added justification for their own initial belief.
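
For readers who like the point in Bayesian dress, here is a toy illustration with invented numbers. Because each observer’s own linking facts make the event seem likely given their own hypothesis (and unlikely otherwise), the very same event pushes both opposed priors upward.

```python
# A toy Bayesian rendering of the point above. All numbers are invented:
# each observer judges, via their own linking facts, that the event is
# likelier if their hypothesis is true than if it is false.

def posterior(prior, p_event_if_true, p_event_if_false):
    """Bayes' rule: P(hypothesis | event)."""
    p_event = prior * p_event_if_true + (1 - prior) * p_event_if_false
    return prior * p_event_if_true / p_event

observers = {
    # belief: (prior, P(event | belief true), P(event | belief false))
    "the US drives world events": (0.7, 0.9, 0.3),
    "extremist ideology drives world events": (0.7, 0.85, 0.35),
}

for belief, (prior, p_true, p_false) in observers.items():
    print(f"{belief}: prior {prior:.2f} -> "
          f"posterior {posterior(prior, p_true, p_false):.2f}")
# Both posteriors rise above 0.70 -- the same event 'confirms' both.
```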

And that’s a problem, because it means that confirmation bias arises from what are otherwise quite sensible and effective methods for understanding and explaining events.

Wednesday, October 10, 2012

On Julia’s 'He needs a mirror' speech


One of my favourite books when I was younger was The Penguin Book of Twentieth Century Speeches. It has all the greats, as you would expect, and some little-known treasures. I used to pore over the best of them again and again, and you could almost hear the voices springing out from the page when you read them to yourself – the signature sounds of Martin Luther King, JFK, Winston Churchill.

In one of my past lives before I grew up to be a political philosopher, I spent some time in stage acting, so great speeches appealed to me on that level too. They were not only political theory – though the best of them were that as well – but they were theatre, strategy, rhetoric, persuasion and poetry.

Like any such list of the greats of years gone by, whether in music, film or philosophy, it is apt to make the idle reader despair of current offerings. What would it have been like, you can’t help but wonder, to live in a world where people like Martin Luther King actually stood up and spoke that way? What would it have been like to actually be in the crowd, hearing him make history in front of you? To be a part of that history?

Julia Gillard’s 'He needs a mirror' was not the stuff of the ‘I have a dream’ legends. No question about that. But when the dust settles on the end of this century, and the great speeches are collected so the next generations can despair over their current crop of political leaders, I hope at least her name comes up, and that some consideration is made as to whether her speech of October 10, 2012, might belong in the sequel to my dog-eared Penguin.

To be sure, the speech was not set in a context as grandiose as many of last century’s greats – it was not the culmination of a huge social movement that had gathered tens of thousands of people together to march on their capital, it was not ringing from the halls of a nation newly freed from apartheid, or sounding out as the Berlin Wall fell. There were no epochal events unfolding – no terrible war confronting a nation, no fledgling state being formed, no national emergency to rally around.

But that, in a way, was precisely its power.

It is not only great and terrible acts that define a people, a country, a generation. It is the petty and demeaning ones as well. It is all the tiny, needling, fleeting, half-audible, offhand, half-joking words and acts that we almost all are guilty of using – sometimes thoughtlessly, sometimes not – that bring people down a peg, that put them in their place, that remind them how things really stand and where they are in the pecking order. It is those acts, as much as the great acts, which define who we are and where we are going. It is – as Leonard Cohen once put it – the homicidal bitching that goes down in every kitchen, that determines who will serve and who will eat.

Individually, maybe, each particular snippet isn’t a big deal. But that’s the thing with racism and sexism. It’s not just arbitrary meanness. It’s organized meanness. It’s the little things that are said often enough, and by enough people, and to the same people, that finally start to weigh on them.

So it takes a reckoning. It takes someone being able to fix down one target, and add up the sorts of things they say and do – and the sorts of things they let their friends and allies say and do – and package it all together, and see what that says about the person’s character.

And that’s what Julia did.

And she did it beautifully.

It was theatre.

At the end of the performance I looked in disbelief at the timestamp on my iPad. Had I really just listened, captivated, to a speech in the Australian parliament fifteen minutes long? And had my ten-year-old daughter really just sat next to me on the couch and (excepting a short distraction when the complexities of the Slipper case lost her) watched it with me? Even she – largely a stranger to the evening news – could see the drama unfold as Tony Abbott’s confident smirk began to dissolve, and the realisation sink in for him.

Julia had him.

She knew it.

He knew it.

My ten-year-old daughter knew it.

And the more he sank down in his chair like a cheeky child finally getting the dressing down he knows he deserves, the more my daughter giggled uproariously.

If this was politics, she wanted in.

How long had Julia Gillard been saving up all of those snippets, each perhaps almost-excusable on its own, but able to be drawn together for devastating effect, racked up together when the opportunity presented itself, and when the opponent has the decency to blunder headlong into a political trap that might have been years in the making?

And if the list had been years in the creation, who can blame her? The speech works because we the audience can all put ourselves in her place – doubtless women can do this much more effortlessly than men, of course, because they have in fact been in that place – because we too have wanted to make that list, and have dreamt of one day being able to set it out, publicly, point by point, with the victim trapped with nowhere to turn.

But in the end, if her speech really does merit candidacy in the great speeches of the twenty-first century, it won’t be because we the audience put ourselves in her place, vicariously living through the thrill of the kill, exhilarating though it was.

It’ll be because we put ourselves in Tony Abbott’s place, and we wondered what that speech would sound like when it was delivered to us. And because we decided we didn’t think we’d altogether like how it might sound.

Tony Abbott is not a terrible, evil man. He’s not so far from a regular bloke.

And that’s the point.

Saturday, October 6, 2012

Ooops. Ethics and the Psychology of Error


Can we learn about moral decision-making from the psychological literature on human error – that is, the study of how and in what ways human beings are prone to mistakes, slips and lapses? In this blogpost I offer some brief – but I hope enticing – speculations on this possibility.

At the outset it is worth emphasizing that there are some real difficulties with linking error psychology with moral decision-making. One obvious point of dis-analogy between ethics and error is that one cannot deliberately make mistakes. Not real mistakes. One can fake it, of course, but the very practice of faking it implies that – from the point of view of the actor – what is happening is not at all a mistake, but something deliberately chosen. But it seems to be a part of everyday experience that one can deliberately choose to do the wrong thing.

However, the most significant dis-analogy between moral psychology and error psychology is that in most studies of the psychology of error, there is no question whatever about what counts as an error. In laboratory studies the test-subjects are asked questions where there are plainly right and wrong answers – or rational and irrational responses. Equally, in studies of major accidents, the presence of errors is pretty much unequivocal – if there is a meltdown at a nuclear reactor, or if the ferry sinks after crashing – then it is clear that something has gone wrong somewhere.

In ethical decision-making, on the other hand, whether a judgement or an action is ‘in error’ – if this is supposed to mean ‘morally wrongful’ – is often very much in dispute. So someone who judges that euthanasia is wrong, say, cannot be subject to the same error analysis as someone who contributes to a nuclear disaster. At least, not without begging some very serious questions.

The way I aim to proceed is to think about those cases where the person themselves comes to believe they made a moral mistake. The rough idea is that a person can behave in a particular way, perhaps thoughtlessly or perhaps after much consideration, but later decide that they got it wrong. Maybe this later judgement occurs when they are lying in bed at night and their conscience starts to bite. Maybe it happens when they see the fallout of their action and the harm it caused others. Maybe it happens when someone does the same act back to them, and they suddenly realise what it looks like from the receiving end. Or maybe their local society and peers react against what they have done, and the person comes to accept their society’s judgement as the better one.

One reason moral psychology might be able to learn from error psychology is in the way that error psychology draws on different modes of action and decision-making and welds them all into one unified process. Interestingly, the different modes error psychology uses parallel the distinctions made in moral psychology between virtue, rule-following (deontology) and consequentialism (utilitarianism).

GEMS

I’ll use here the account relayed in James Reason’s excellent 1990 book, Human Error. Drawing on almost a century of psychological study on the subject, Reason puts forward what he calls the ‘Generic error-modelling system’ or GEMS. GEMS is divided into three modes of human action, in which different sorts of errors can arise. (What follows is my understanding of GEMS, perhaps infected by some of my own thoughts – keep in mind I am surveying a theory that is rather outside my realm of expertise, so I make no great claims to getting it exactly right.)

Skill-based action

The first mode of action is ‘skill-based’. This is the ordinary way human beings spend most of their time operating. It is largely run below the level of conscious thought, on ingrained and habitual processes. We decide to make a coffee, or drive to the store, but we do not make executive decisions about each and every one of the little actions we perform in doing these tasks. Rather than micro-managing every tiny action, we allow our unconscious skills to take over and do the job. This mode of action draws heavily on psychological ‘schemas’ – these are roughly speaking processes of thought or models of action that we apply to (what we take to be) a stereotypical situation in order to navigate it appropriately. The context triggers the schema in our minds, or we deliberately invoke a schema to manage some task, and then our conscious minds sit back (daydream, plan something else, think about football, etc) as we proceed through the schema on automatic pilot. Schemas are created by prior practice and habituation, and the more expert we become on a particular area, the more of it we can do without thinking about every part. To give an example: when a person is first trained in martial arts, they need to consciously keep in mind a fair few things just to throw a single punch correctly. After a while, getting the punch right is entirely automatic, and the person moves to concentrate on combinations, then katas, and so on. More and more of the actions become rapid and instinctual, leaving the conscious mind to focus on more sophisticated things – gaps in an opponent’s defences, their errors of footwork, and so on.

Errors usually occur at this skill-based level when (a) the situation is not a stereotypical one, and faithfully following the schema does not create the desired result, or (b) we need to depart from the schema at some point (‘make sure you turn off the highway to the library, and don’t continue driving to work as on a normal day’) but fail to do so.

Rule-based thinking

The second mode of action is ‘rule-based’. This arises when the schema and the automatic pilot have come unstuck. When operating at the ‘skill-based’ level described above we were unaware of any major problem; the context was ‘business as usual’. Rule-based action emerges when something has come unglued and a response is required to rectify a situation or deflect a looming problem. In such cases, GEMS holds, we do not immediately proceed to reason from first-principles. Rather, we employ very basic rules that have served us well in the past; mainly ‘if-then’ rules like: ‘If the car doesn’t start, check the battery terminals are on tight’; ‘if the computer isn’t working, try turning it off and on again’.

Mistakes occur at this level if we apply a bad rule (one that we erroneously think is a good rule) or we apply a good rule, but not in its proper context. We can also forget which rules we have already applied, and so replicate or omit actions as we work through the available strategies for resolving the issue.

Knowledge-based thinking

The third and final mode of action is ‘knowledge-based’. Knowledge-based thinking requires returning to first-principles and working from the ground up to find a solution. At this level an actor might have to try and calculate the rational response to the risks and rewards the situation presents, and come up with solutions ‘outside the box’. It is at this level where a person’s reasoning begins to parallel ‘rational’ thinking in decision-theoretic or economic senses. That is, it is in this mode where a person really tries to ‘maximise utility’ (or wealth).

GEMS says that human beings do not like operating at the knowledge-based level; it takes not only concentration but also real mental effort. For the most part, effort is not enjoyable, and we only move to knowledge-based decision-making when we have reluctantly accepted that rule-based action has no more to offer us – we have tried every rule which might be workable in the situation, and have failed to resolve it appropriately. We are dragged kicking and screaming to the point where we have to think it out for ourselves.

James Reason argues in his presentation of GEMS that human beings are not particularly good at this type of knowledge-based thinking. Now my first thought on reading this was: ‘not good compared to who?’ It’s not like monkeys or dolphins excel at locating Nash equilibria in game-theoretic situations. Compared to every other animal on this planet, human beings are nothing short of brilliant at knowledge-based thinking. But Reason was not comparing humans to other animals, but rather knowledge-based thinking to rule-based thinking. He argues that, comparatively, human beings are far more likely to get it right when operating on the basis of rules. Once the rules render up no viable solutions, we are in trouble. We can think the issue through for ourselves from the ground up, but we are likely to make real errors in doing so.

The opportunity for error at this stage, therefore, is widespread. Human beings can read the situation wrongly, be mistaken about the causal mechanisms at work, make poor predictions about likely consequences of actions, and be unaware of side-effects and unwanted ramifications. This isn’t to say knowledge-based decision-making is impossible, of course, just that the scope for unnoticed errors is very large.

A full-blown theory of human action

Ultimately, in aiming to give a comprehensive theory of how human beings make errors, this realm of psychology has developed a full-blown account of human action in general. Most of the time we cruise through life operating on the basis of schemas and skills. When a problem arises, we reach for a toolbox of rules we carry around – rules that have worked in previous situations. We find the rule that looks most applicable to the current context, and act on its basis. If it fails to resolve the situation, we turn to another likely-looking rule. Only after we despair of all our handy rules-of-thumb resolving the situation are we forced to do the hard thinking ourselves, and engage in knowledge-based decision-making – which is effortful and fraught with risk.
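
For the programmatically inclined, here is a rough sketch of that cascade. This is my paraphrase of GEMS rather than anything in Reason’s book, and the schemas, rules and situations are invented labels; the point is only the order in which the modes are tried.

```python
# A sketch of the skill -> rule -> knowledge cascade described above.
# My paraphrase of GEMS, not Reason's own formalism; everything below
# is an invented label for illustration.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Schema:                     # skill-based: runs on autopilot
    matches: Callable[[str], bool]
    run: Callable[[str], str]

@dataclass
class Rule:                       # rule-based: stored if-then responses
    condition: Callable[[str], bool]
    action: Callable[[str], Optional[str]]

def act(situation: str, schemas, rules, first_principles) -> str:
    for schema in schemas:        # 1. skill-based mode
        if schema.matches(situation):
            return schema.run(situation)
    for rule in rules:            # 2. rule-based mode
        if rule.condition(situation):
            outcome = rule.action(situation)
            if outcome is not None:
                return outcome
    # 3. knowledge-based mode: effortful, and per Reason most error-prone
    return first_principles(situation)

schemas = [Schema(lambda s: s == "normal morning",
                  lambda s: "drive to work on autopilot")]
rules = [Rule(lambda s: "car won't start" in s,
              lambda s: "check the battery terminals are on tight")]
think = lambda s: "reason it out from first principles"

for situation in ("normal morning", "car won't start", "total mystery"):
    print(situation, "->", act(situation, schemas, rules, think))
```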

(Now one can have worries about this picture. In particular, psychologists of error seem to me to work from a highly selective sample of contexts. Their interest focuses on cases where errors are easily recognizable, such as in artificial laboratory situations and slips of tongue, or where the errors cry out for attention, such as in piloting mishaps and nuclear meltdowns.)

What’s interesting about GEMS, from a moral theory perspective, is how it aligns with the three main theories of moral action: virtue, duty-based theories and consequentialist theories. In what follows, I’m going to insert these three theories of moral reasoning into the three categories of human action put forward by the GEMS process, and see what the result looks like.

Virtue and skill-based action

Virtue theory has deep parallels with the schemas of skill-based reasoning. Virtues are emotional dispositions like courage and truthfulness. When operating on the basis of the virtues, one isn’t focusing on particular rules, or on getting the best consequences. Instead, the point is to have the correct emotional response to each situation. These steadfast emotional dispositions – the virtues – will then guide the appropriate behaviour.

Now, to be sure, it is mistaken to view Aristotle (the first and greatest virtue-theorist) or contemporary virtue-ethicists as basing all moral behaviour on habit and habituation, especially if this is taken to imply not actively engaging one’s mind (Aristotle’s over-arching virtue was practical wisdom). But the formation of appropriate habits, and learning through practice and experience to observe and respond to the appropriate features of a particular situation, are hallmarks of this way of thinking about morality. (Indeed, it is reflected in the roots of the words themselves: our English word ‘ethics’ comes from the Greek term meaning ‘custom’ or ‘habit’ – and ‘morality’ comes from the Latin word for the same thing.)

Paralleling the psychology of error, we might say that the primary and most usual way of being moral is to be correctly habituated to have the right emotions and on their basis to do certain actions in particular contexts. We need to be exposed to those contexts, and practice doing the virtuous thing in that type of situation until we develop a schema for it – until the proper response is so ingrained as to become second nature to us. Operating in this mode, we don’t consciously think about the rules at all, much less have temptations to breach them. Most good-hearted people don’t even think about stealing from their friends, for instance. It isn’t that an opportunity for theft crosses their mind, and then they bring to bear the rule against stealing. Rather, they don’t even notice the opportunity at all.

Sometimes, though, problems arise. Even if we have been habituated and socialized to respond in a particular way, we might find a case where our emotionally fitting response doesn’t seem to provide us with what we think (perhaps in retrospect) are good answers. This may be because the habit was a wrongful one. Schemas are built around stereotypes – and it is easy to acquire views and responses based on stereotypes that wind up harming others, or having bad consequences. Equally, if we are not paying attention to the situation, we may be on emotional autopilot, and not pay heed to the ways in which we need to modify our instinctual or habitual response in light of the differences between this situation and a more stereotypical one.

Deontology and rule-based thinking

At the points where our instinctive emotional responses lead us astray (if we follow the analogy to the GEMS psychology of error), we will look to rules that have served us well in the past. In ethics these rule-based systems are called deontological theories. Deontology says that the point is not to have the right emotions, or achieve the best consequences, but to follow the proper rule in the circumstances. So when we are jerked out of habitual response by some moral challenge (unexpected harm to someone else, etc), the first thing we do is to scout around for rules to resolve the situation – rules that have worked for us before. These might be very general rules: ‘What would happen if everyone did what I am thinking of doing?’ or ‘What would everyone think of me if they knew I was doing this?’

Or the rules we appeal to might be quite specific. In the GEMS system, we have a variety of options for rules to select, and we try to gauge the most appropriate one for the situation we are in. In ordinary moral thought, this process happens all the time. In fact, there is a technical word for it (and a long history behind it): casuistry. When one reasons casuistically, one analogizes to other, closely related situations, and uses the rule from that situation. For instance, if we are unsure if first-term abortion is morally acceptable, we might first try analogizing to the murder of a child, which has a clear rule of ‘thou shalt not kill’. But as we think about it, we might decide that the dis-analogies here are very strong, and perhaps a closer analogy is one of contraception, with which (let us suppose) we accept a rule that contraception is legitimate. Or we might analogize to self-defence, especially in cases where the mother’s life is in danger. In attending to the relevant features of the situation, we select what seems to be the most appropriate rule to use. Sometimes we use multiple rules to develop highly sophisticated and qualified rules for a specific situation.
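
Casuistry of this sort can even be caricatured as a small matching procedure. The sketch below is only a toy – the features, precedents and rules are invented labels, and real casuistical judgement is far subtler – but it captures the idea of borrowing the rule of the closest analogue:

```python
# A toy sketch of casuistical rule-selection: score candidate precedents
# by shared features and borrow the rule of the closest analogue. The
# features, precedents and rules are all invented for illustration.

def closest_rule(situation_features, precedents):
    """Return (name, rule) of the precedent sharing the most features."""
    best = max(precedents,
               key=lambda p: len(situation_features & p["features"]))
    return best["name"], best["rule"]

precedents = [
    {"name": "murder of a child",
     "features": {"ends a life", "victim has a future"},
     "rule": "thou shalt not kill"},
    {"name": "contraception",
     "features": {"prevents a future person", "private choice"},
     "rule": "legitimate"},
    {"name": "self-defence",
     "features": {"ends a life", "defender's life at risk"},
     "rule": "permissible"},
]

# An invented case where the mother's life is in danger:
situation = {"ends a life", "defender's life at risk"}
print(closest_rule(situation, precedents))  # ('self-defence', 'permissible')
```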

Utilitarianism and knowledge-based thinking

But what happens when this doesn’t seem to resolve the issue, or we feel torn between two very different rules (as might have occurred in the above abortion scenario)? At this point the third mode, knowledge-based reasoning, comes online. We must return all the way to first principles. In GEMS one way this can occur is through means-end rationality, where we take into account how much we want each outcome, and what the chances of each outcome are – given a particular action of ours. We then choose the action that has the best mix of likelihood and good consequences; we ‘maximise expected utility’, as the point is put technically.

And of course there is a moral theory that requires exactly this of us, except that rather than maximizing our own personal happiness, we are directed to maximise the happiness of everybody, summed together. This is the ethical theory of utilitarianism, which requires (roughly speaking) that we create the greatest good for the greatest number of people (or sentient creatures more widely). It is here that we have really hard thinking to do about the likely costs and benefits to others of our action. We weigh them up and then act accordingly.
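
As a minimal worked example of what ‘maximise expected utility’ amounts to here – with invented probabilities, and utilities understood as summed across everyone affected:

```python
# A minimal worked example of choosing by expected utility, with the
# utilitarian twist that each utility figure stands for happiness summed
# across everyone affected. All numbers are invented for illustration.

def expected_utility(outcomes):
    """Sum of probability * total-utility over an action's outcomes."""
    return sum(p * u for p, u in outcomes)

actions = {
    # action: list of (probability, utility summed across all affected)
    "act A": [(0.8, 10.0), (0.2, -50.0)],  # usually good, rarely disastrous
    "act B": [(1.0, 4.0)],                 # a modest but certain benefit
}

for name, outcomes in actions.items():
    print(f"{name}: expected utility {expected_utility(outcomes):+.1f}")

best = max(actions, key=lambda a: expected_utility(actions[a]))
print("choose:", best)  # act B: +4.0 beats act A's -2.0
```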

Large-scale pluralist theory of moral action

Is this a plausible over-arching model of what moral thinking looks like? I think it has some merits. It is true that most of what we do, we do without a lot of conscious thought, on the basis of ingrained habits and customs. We don’t go through life forever applying rules or calculating consequences; we hope that our habits of action, thought and emotion will generally push us in the right direction more often than not. And this might be particularly true when we are interacting with friends and lovers and family, where following ‘rules’ or coolly calculating risks and rewards might seem quite inapt.

But sometimes we are confronted with ethical ‘issues’ or ‘challenges’. We encounter a situation where habit is no longer an appropriate guide. It sounds right to me that in these situations we do cast about for rules-of-thumb to resolve the problem. We think about what worked before in similar situations, and go with that.

In some cases, though, there can seem to be no ‘right’ rule: no rule that will work fine if everyone follows it, or no rule that does not clash with what looks to be an equally fitting rule. These cases will often arise in novel situations rather than everyday encounters. And if they are worth thinking about, then it may be that the stakes in them are rather high – they might be the decisions of leaders, generals, or diplomats. In such cases, really weighing up the possible pros and cons – and trying to quantify the merits and demerits – of each approach seems appropriate.

Note also that some of the major objections to each theory might be managed by this pluralist, sequential account, where we proceed from virtue, to duty, to utilitarianism. For instance, utilitarianism can be philosophically disputed because the pursuit of sum-total happiness may lead us to sacrifice the rules of justice – or even force us to give up on our deepest personal commitments. But on the approach here this wouldn’t happen. On everyday matters of personal commitment, we operate on the basis of our emotional dispositions and mental schemas and habits. When there is a clear rule of justice at stake, we accept its rule-based authority. But in cases where neither works – and only in those cases – we look to the ultimate consequences of our actions as a guide to behaviour. And even at this level we still have constraints based on our emotional habits and rules-of-thumb. The utilitarian decision-making operates within the psychological space carved out by these two prior modes of judgement.

Of course, as with any such speculations, I am sure I have raised more questions than I have answered. But it is striking how the three modes of decision-making in the psychology of error map onto the three main ways of theorizing about ethics, and that the process whereby a decision-maker moves from one mode to the next has prima facie plausibility as a description of how we actually go about making moral decisions.