Thursday, December 13, 2012

Why Intellectual Liberty?


This month has seen the release of my first scholarly book, Intellectual Liberty: Natural Rights and Intellectual Property. In it I argue that natural rights thought, in particular as it appears in the work of the seventeenth-century political theorist John Locke, has rather surprising consequences for intellectual property rights (IPR).

Intellectual property rights are the legal rights that restrain all of us from certain types of uses of others’ ideas, expressions and inventions. Two of the most common forms of intellectual property are patent – prohibiting copying others’ original inventions – and copyright – prohibiting copying others’ original expressions (and, sometimes, aspects of the ideas underneath those expressions). Copyright is my main focus in Intellectual Liberty.

In this blog entry, I thought I would reflect on the main ideas animating the work, and discuss them in a user-friendly fashion.

Since it began, copyright has been steadily expanding. When it first appeared in legislative form in the early eighteenth century, the ‘literary property’ held by authors of original works was very limited. A new writer did not get control over as many aspects of the work as they now do – for instance, they usually could not control translations of their work, and sometimes not even abridgements. And the period of their control was much shorter – fourteen years was a common term, with extension for another fourteen years upon application.

Nowadays things are quite different. Copyright applies to translations, abridgements, sequels, prequels and works in the same fictional universe. In some cases it can cover information and facts – or words that are purported to convey facts. Scientology, for instance, has used copyright to actively suppress the communication of its esoteric texts, preventing ex-disciples from warning others about the religion’s ultimately rather wacky secret beliefs. Increasingly, the exceptions carved into copyright – exceptions that allow for news-reporting, satire, critique and suchlike – are understood narrowly. On top of all this, these intellectual property rights last for much, much longer. In many countries, copyright terms run for life-plus-fifty years, and this number may well continue to rise in the future.

Are these changes justified? Have we progressed toward better and more reasonable laws, or away from them? If we have moved away from just laws, then what is it that is morally wrong with these new laws?

One answer is that a good way of evaluating laws is to determine whether, on balance, they make us all happier and more prosperous. Most laws will tend to make some people happier and wealthier, and others less well off, as compared to the alternatives. It seems sensible for us to have the rules that, when the sum total of happiness is calculated, make us all more – rather than less – happy. This way of evaluating policy is utilitarian. Copyright is often justified in this way – explicitly so in the US Constitution. By giving a monetary reward to those who create new works – works citizens want to have and are willing to pay for – we motivate writers and artists to create these works. The artist makes a buck, the audience get their entertainment, and everybody wins. However, history over the last few hundred years shows it’s possible to get a huge amount of cultural innovation and creation with pretty weak copyright rewards. So one way of critiquing the expansion of copyright is to say that it has tipped the balance too far toward creators, and has unjustifiably limited the uses available to the work’s audience – including all those who would have been able to enjoy the work were it not for the constraints of copyright.

Many critics of contemporary copyright make this sort of argument, and it is an important one. Some people might object that it is hard to make these kinds of determinations about which policies are more likely to make people happy, and others might even be sceptical about whether we can add together each person’s happiness to create a meaningful overall total. But these are not my worries. Indeed, when I first started the PhD work that ultimately led to Intellectual Liberty, I was myself a card-carrying utilitarian. But as readers of this blog will know, I changed my mind.

My main concern with the use of this utilitarian argument is that many who use it in the context of intellectual property seem to suppose it is quite uncontroversial – as if it is just obvious and sensible that we should do whatever will maximise happiness or the ‘public interest’. But this way of thinking is very controversial. Utilitarianism can require us to sacrifice the one for the many, and can make extraordinary demands of individual people.

The reason utilitarianism encounters these problems is that it is concerned with achieving a specific goal (maximizing happiness) rather than ensuring the proper treatment of each and every individual. It is built around good consequences, rather than specific right acts. Moral theories that focus on the proper treatment of individuals are called deontological, and natural rights theories fall under this banner.

Why might a deontological theory – one concerned with proper treatment of others – worry about the extension of intellectual property rights? Broadly, there are two reasons.

First, there is the question of what we might call property over-reach. Natural rights theories – especially those that hearken back to John Locke – allow for individuals to get property rights over specific objects or pieces of land. Property rights allow people to take control over their lives, to reap what they sow, and to have a degree of independence from others – at least in the sense of not being beholden to those others for one’s very survival. To pay proper respect to a person requires respecting the sphere of the world that is their own. And this makes sense. After all, human beings are not incorporeal. They exist in the real, physical world. They need food to eat and a place to sleep. Almost all of their long-term activities and projects involve interacting with the world around them in some way. Even the most mental activities of a person (reflecting, learning, meditating, praying) require being in a space where such actions are possible. To only have rights over one’s own physical body – with no concern for the environment around that body – makes no sense.

Even as they require private property rights, however, natural rights theories constrain those rights. Why? Well, the answer is pretty obvious – and probably occurred to many readers as they read the last paragraph. If property rights really have all those desirable moral characteristics, and they really do tap into vital aspects of our humanity – having control over our lives, bearing the consequences of our choices and labours – then it is crucial for every human being, as a right-holder, to be able to acquire property rights. This doesn’t mean property has to be distributed equally, but it does mean that each person needs the opportunity to acquire property and build on it.

Locke expressed this point through the use of his ‘proviso’. There is a substantial debate about exactly what the proviso requires, but despite all the controversy the central thrust of it is quite straightforward. Locke argued that when people acquire new property, their acquisition is only legitimate if one way or another they leave ‘enough and as good’ for others. That is, if I want you to treat me justly by respecting the bounds of my property, I have to make sure that when I stake out my property, I am treating you properly by respecting your equal need to acquire property yourself. Once upon a time, this meant making sure there was land for you to farm or otherwise work productively. Nowadays in the developed world, the same concern is likely to be filled through ensuring people get a good education and have opportunities for gainful employment.

So the first way that a deontological system can critique property rights is by demonstrating that the people acquiring those rights are over-reaching – that is, they are taking so much that there is not ‘enough and as good’ left for others. Applied to intellectual property, the thought is that those who acquire new intellectual property rights (for instance by writing an original book) need to make sure that those coming after them will have the same opportunities to write their own books (and copyright them). The expansion in the scope, strength and duration of copyright is worrying, from this perspective. It is worrying because writers and artists have been able to draw upon prior culture and others’ original creations in creating their work, but now seek to grasp entitlements that would undermine the capacity of future writers and artists to engage with and be inspired by their work in the same way.

In other words, a justifiable natural intellectual property rights regime has to be sustainable. It needs to be justifiable to budding artists and writers in future generations as much as to those in ours. Even as it gives people control of the cultural objects they have created, the intellectual property regime must have mechanisms allowing those cultural objects to play their part in the future creations of others. In the first part of Intellectual Liberty I explore several different ways in which contemporary intellectual property regimes can fail to achieve this sustainability.

The second reason a natural rights theory will have for critiquing property arrangements is if those arrangements transgress on other rights that people have. For example, suppose we thought people should have a right to travel – to be able to move about from one place to another. Such a right would not prevent others from owning property, but it might impact on how that property is arranged and understood. For instance, it might be required that there are roads between blocks of land, so that people can travel between them as they journey from one locale to another.

Are there natural rights that might be interfered with by expanding intellectual property rights? One answer is that intellectual property can interfere with free speech rights. Copyright, after all, prevents certain types of expression – namely, those expressions that involve repeating or sharing the original works created by an artist or writer. Indeed, the Copyright Clause in the US Constitution explicitly acknowledges this point. It carves out a space for copyright by allowing Congress, in this context of original works, to impose limitations on people’s speech.

Rights to free speech are, I think, an important part of what is at stake here. But I do not think they are the whole story. For there is another right with which intellectual property can interfere: the right to learn about the world and to have that learning inform one’s choices and actions. This is what I call the right to intellectual liberty. It comprises the rights to apprehend, to investigate, to learn and to use what one has learned in governing one’s decisions and actions.

Why think that this right is important? The answer is that the human capacity for self-directed learning is one of humanity’s quintessential capabilities, and one of the most profound ways that human beings can take control of their lives and empower themselves. Indeed, an enormous number of rights-theorists throughout history, spanning almost every part of the political spectrum, have lauded this extraordinary human capacity for learning, whether it is learning by oneself or learning in concert with others. And with good reason! In the second part of Intellectual Liberty I discuss no fewer than seven ways of understanding human freedom, including the ideas that liberty is (a) leaving someone alone to do whatever they want to do, with whoever they want to do it with, (b) allowing people to pursue activities that are natural to the human being – perhaps including their pursuit of happiness or self-preservation, (c) allowing the person to govern the direction of their life, to have control over who they are and where they are going, and (d) allowing the person to protect and develop their own unique individuality.

The capacity for self-directed individual learning, I argue, is a crucial part of all of these different yet inter-locking conceptions of human freedom. Yet despite these many ways it can be justified as important, and despite the many times we can see echoes of it in the writings of rights-theorists, intellectual liberty is not entrenched in law and policy in the same way as other rights. In a way, there is a good reason for this. If a wide gamut of other rights is protected, intellectual liberty will usually be adequately secured. Imagine for a moment a person on their own in a virgin forest. If you give them basic bodily rights – that is, you ensure that no-one can physically interfere with the person as they move around – then this will usually suffice to ensure their intellectual liberty. They can search, look, listen, remember, experiment and so on as much as they like. They get intellectual liberty ‘for free’, so to speak.

But the expansion of intellectual property rights alters this happy picture. Increasingly, the world around us is not made up of untouched trees and rivers, but rather it is populated with the prior intellectual creations of others; books, laptops, cars, speeches, religious texts, songs and inventions galore. The more that property-based obligations (like those of copyright) constrain our access to and investigations of these things, and our subsequent use of them, the more that our prior capacity to learn about our world is being constrained.

Now there is no reason to be silly about this. It’s not like we have a vital need to know and enquire into absolutely anything and everything. There are some things that are very rightly none of our business. Equally there are many times intellectual property constrains us in ways that have little to do with human learning. I accept both those points. But there are aspects of the world that it is important to be able to learn about because those aspects impact upon choices we need to make regarding how we are to live, for example, or how we are to vote or what we are to believe. And even though not everyone will become curious about every part of the world around them, people do tend to become fascinated with various elements of it. We are curious and we like to explore. That is what it is to be human. As intellectual property expands, our intellectual liberty contracts. And that is a cause for concern.

This way of picturing things is different to the other ways that ‘user’s rights’ are usually defended. (‘User’s rights’ are the rights of those who aim to use works, as distinct from the ‘creator’s rights’ of those who crafted them in the first place. The idea is not new. John Locke in the seventeenth century castigated the monopolies held by book publishers of his day (including their exclusive rights to print ancient books, or all books of a certain type – like legal treatises) on the basis of the rights of ‘book buyers’.) When commentators talk about the importance of user’s rights, the rights they have in mind are usually rights to have access to a resource of a certain quality. For instance, it might be argued that we all have a right to a vibrant and diverse cultural milieu, from which we can be entertained, informed and inspired. This would give the state a positive role in encouraging the creation of new works (including through copyright), but also ensure that those works were available to everyone as much as possible.

I acknowledge there is something to be said for this line of thought, but I think the justification for the right to intellectual liberty is much stronger than the justifications that may be given for these sorts of user’s rights. Why? Recall when I began this discussion I spoke of how deontological ethics has the desirable feature of focusing on the way that one person treats another. Rights to have a certain status or to have a resource of a certain standard move us away from this classic deontological picture. It becomes unclear who has the duties to provide this resource. And even if we settle this question, declaring that Cho has duties to Chitra to contribute to her cultural environment, it seems quite implausible that Cho is treating Chitra in one way or another if he fails to do so. After all, he may not even know Chitra, Chitra may not even know him, and there may be no interaction whatever between them. And because Cho isn’t treating Chitra in any particular way when he fails to augment her intellectual environment, it becomes implausible to say that the impediments to Chitra’s learning are caused by Cho. Political liberty, as Isaiah Berlin famously observed, is not about what you cannot do in general, but is about what you cannot do because other people have prevented you from doing so. Intellectual liberty is a part of our political liberty, because it is a right ensuring that others do not prevent us from doing what we could otherwise do – intellectual liberty prevents others from filling the world around us with artefacts, and then demanding that those artefacts cannot be the subject of investigation, learning, report, discussion and warning. Unlike rights to have a certain status or to have a resource of a certain quality, intellectual liberty brings to the foreground the way that one person can worsen another’s situation by creating a new work, having it play a role in the other person’s cultural and informational world, but prohibiting their investigation of it.

Like any summary of a larger argument, I expect what I have said here will open as many questions as it answers! For those who would like to pursue it further, the book awaits…

Monday, November 12, 2012

Ethical conduct: What’s philosophy got to do with it?


In the most recent issue of Australian Ethics (the newsletter of the Australian Association for Professional and Applied Ethics), Peter Bowden challenged the relevance of ethical philosophy to applied and professional ethics, pointing out that many of the valuable practices that dominate the pages of the recent AAPAE book Applied Ethics: Strengthening Ethical Practices have little to do with ethical theorizing. Indeed, he goes so far as to argue that moral philosophy might even be pernicious: because it ignores well-accepted empirical findings and encourages endless disputation, learning moral philosophy is nothing short of an ‘intellectual handicap’ for ethical decision-making in the 21st century.

Here I take up the mantle of defending (albeit in a qualified form) moral philosophy’s relevance to applied ethics – in particular with an eye to the increasing practice of having philosophers involved in the teaching of ethics to professionals and budding professionals.

What I am not arguing, however, is that moral philosophers should have the sole role in teaching and developing applied ethics. Bowden is undoubtedly correct when he lists the many vital ways professions can themselves develop codes, roles and integrity systems, and how we can learn empirically about which measures, legislation, and practices work and which do not. While philosophers have engaged with some of these issues, many others have been ignored entirely – whistle-blowing is perhaps Bowden’s most important example here.

As such, I accept that if philosophers alone are left to theorize, develop and teach professional and applied ethics, they can be expected to do a very limited job. Often, they will be unaware of key modes of strengthening ethical behaviour, and ignorant of the empirical research on these. They may be unfamiliar with the ethical issues that actually confront professionals, and with the difficult circumstances within which professionals negotiate solutions to them. Worse, they may know little of the actually existing social and institutional practices in a given field that are working to promote integrity – and which the philosopher’s top-down policies might weaken or sideline.

That much admitted, is there anything left that ethicists can offer?

I think there is.

First of all, philosophy can excel at describing clearly the sorts of features of actions and situations that call for moral concern. Local practices, spontaneous arrangements and shared identities are crucial in creating ethical behaviour – but they equally can be threats to it. Institutions can display group-think mentalities and they can promote their narrow self-interest, or even just the self-interest of the institution’s leaders. For this reason, moral philosophy can be important precisely because of the external perspective it brings – forcing practitioners to face up not only to the views of their peers, but also to universal principles of proper conduct.

Second, moral philosophy is important because it can clear away some popular but potentially problematic philosophical viewpoints that some practitioners and students may already hold. Here I (controversially, no doubt) name three viewpoints I tend to encounter:
1. cultural relativism: the view that morality is just whatever the local culture says it is;
2. psychological egoism: the idea that people only do whatever they think will make them happy; and
3. religious necessity: the view that the only reason people can genuinely be moral is if they believe in God.
Now I’m not saying that all these views are ultimately incorrect – I acknowledge there is much that may be said in favour of versions of each of them. But in my experience these views can be held in a very naïve and unreflective form. In this form, they can create problems for those trying to teach and develop applied ethics. Teachers, in particular, need to be able to provide the basic arguments that may be given to a student who challenges the course material by saying, ‘It’s all relative really, so why should we care what you say?’ or ‘This is naïve. People only ever do what makes them happy anyway.’ There are powerful philosophical arguments against these crude views – but they are views that often arise as soon as people start thinking and talking about ethics.

Third, learning moral philosophy can help motivate – or at least energize interest in – moral behaviour. This is not to say that the first-principles arguments of Aristotle, Kant or Mill are better fillips to moral action than institutional structures or entrenched cultural practices. I don’t think that is true. But such theories can play an important supplementary role – consider, for example, the many people who become committed vegetarians after reading Peter Singer’s books. Ethical argument can change behaviour.

As a more general matter, though, I have found students can be quite excited when they are first exposed to a moral theory that seems to make sense of their previously unexamined moral intuitions. They find that a theory such as utilitarianism explains something about them, and who they are, and this then plays a role in forming and making concrete a moral identity for them. Thenceforth, they see themselves as utilitarians, and try to act accordingly.

Fourth, Bowden draws an unwarranted distinction between argument and empirical evidence. Empirical evidence plays a role in argument (that is pretty much what it means to call something evidence). To be sure, Plato at the dawn of western thought instilled a proud tradition of armchair philosophy, working top-down from abstract, other-worldly principles to applied ethical conclusions with precious little engagement with history, anthropology and (what we would now call) the social sciences. But it did not take long for his student Aristotle to develop the alternative tradition, where serious ethical thought is infused with meticulously gathered evidence about cultures, practices and institutions, and about what works and what does not. If we think that empirical evidence is a vital way of ensuring our ethics is in touch with lived reality, then this does not mean we should avoid philosophy. Rather, it changes the type of moral philosophy we should be engaging with. There is a substantial amount of sophisticated moral philosophy that is informed by genuine understanding of actual human institutions and how they have operated historically. Far from being contrasted with the workings of actual social institutions, philosophy can itself study and improve our knowledge of these. (Members of the AAPAE doubtless will be able to think of many instances of this – Professor Daniel Wueste’s illuminating paper at our recent 2012 conference, discussing the construction of professional roles and responsibilities according to the purpose the institution in question serves, and the significance of factors such as trust and knowledge in this process, is an excellent example.)
This is such a breathtaking image that it's worth reflecting on. Is it a striking picture of human liberty? But the girl's seeming movement is so unnatural. If we actually saw a person in this pose, she would not be flying. She would be falling. Maybe there's a message in that? Human liberty as beautiful falling.


The link between empirical evidence and philosophical argument arises in vivid form in what is currently referred to as ‘non-ideal theory’ in political philosophy.  Ideal theory, as exemplified by John Rawls, involves working out what regimes are just – in abstraction from deep questions about real-world disagreement, compliance, ignorance and competence. As Schmidtz and Brennan argue in their stimulating and highly recommended A Brief History of Liberty, this is “like designing cars on the assumption that they’ll never encounter slippery pavement, or will never be driven by bad drivers.” Non-ideal theory, on the other hand, from the outset asks the question about what institutions have a solid history of achieving (say) peace, rising standards of living and mutual respect.

For these four reasons, I submit, moral philosophy has much to offer the teaching and development of professional and applied ethics.

Before concluding, though, I must respond to the important point Bowden makes about philosophical disputations. These disputations can occur across multiple dimensions. Philosophy might spark division because it raises the questions ‘Why be moral?’ and ‘What are the fundamental principles of morality?’ And it is altogether possible that people who can agree on the proper response to a moral problem might nevertheless disagree sharply on these deeper questions. For this reason, philosophy might distract attention away from solving what we all acknowledge are real, important ethical problems by implying that we need to get agreement on first principles. To the contrary, however, if we needed agreement on first principles before we could start creating practices and institutions that treat people decently, we would all have killed one another long ago.


Another way philosophy focuses attention on disputation arises because, in teaching and thinking about different ethical theories, philosophers need to differentiate those theories from one another, and an important mode of accomplishing this task is by considering cases where the theories give rise to different moral prescriptions. So, for instance, we are invited to speculate on fantastic cases that allegedly show stark differences between utilitarianism and deontology. (And I, of course, am no stranger to such arguments.) And in general philosophers spend much more time pondering the ‘hard cases’ about which there can be much fascinating and revealing disagreement, rather than emphasizing how much agreement there is on the overwhelming number of ordinary issues people confront every day.

These are important points, but I think awareness of them can generate sensitive responses. These contentious matters rightly receive emphasis in philosophical theory for the plain reason that philosophers do not need to debate matters where there is little serious disagreement. But this narrow emphasis becomes less helpful when we turn to teaching and developing ethical practices. There the focus should centre on the enormous number of issues upon which there is wide consensus, and direct attention to the project of motivating and empowering individuals and institutions to do the right thing.

Finally, it is worth remembering that argument does not necessarily mean endless, confrontational disputation. Argument can also mean rational discussion aimed at persuading another person of the merits of your view, and being open to the merits of theirs. There are other ways of responding to moral differences, after all, that are not as civilised. In a world where consensus is rare, the ability to solve problems by giving and listening to another person’s reasons is a precious one.

Philosophical discussion, in this way, can be an applied moral practice in itself.

Saturday, October 27, 2012

The Sharp Samaritan: Self-Interest and Rescue


In a famous article in 1972, the Australian philosopher Peter Singer put forward an elegant argument for strong duties to contribute to charities. The argument begins by considering the ‘Pond’ situation. In this case, you are walking past a pond and see a drowning child. You can wade into the pond at no risk to yourself, and easily pull out the child. Should you do so? As Singer observes, most people think that you have a powerful duty to do exactly that. (And so, of course, do I.)
Simple yet powerful: Singer's 'pond' argument is a classic of modern ethical philosophy.

But now add the extra factor that you are wearing expensive clothes, and that dry-cleaning them will cost you a couple of hundred dollars. Is there any change in the nature or importance of the duty in this case? Surely not. Everyone agrees that it is still your moral duty to wade into the pond and rescue the child.

What do we conclude from this? Singer suggests a few different results, but for our purposes we’ll just consider the principle that, when the cost to you is pretty small, and another’s life is at stake, then you morally ought to pay that cost and save their life.

At first glance, this principle can seem innocuous enough. But as Singer points out, it has surprisingly radical implications. For, in a way, the ‘pond’ situation is one that confronts us every day. Overseas, people caught in famines and conflicts do not have access to food, water and basic medical care. If we donated enough money to charities like Oxfam and the Red Cross, then we could save those imperilled lives. And the monetary cost to us is comparable to our losses in the pond case. Taking into account all the difficulties that foreign aid has in getting resources to those who need them, Singer estimates that we can probably save one life for an investment of about $1000.

Now if it is wrong of a passer-by to decide not to wade in to help the child if they are wearing an expensive suit, then – surely – isn’t it equally wrong for each of us every day to decide not to give like amounts to charity? (Of course if we continue to follow this line of thought, eventually we will start giving away so much money we might undermine our own means of existence and capacity to work productively. At that point everyone (including Singer) agrees we should look after ourselves.) Aren’t we being inconsistent in expecting the rescuer to help the child, but not ourselves taking comparable actions to save others?

There is an enormous philosophical literature that has arisen in response to Singer’s argument, and there are lots of different ways it can be critiqued (and, in response, defended). But here I just want to focus on one problem I have with it: The apparent cost to the self-interest of the duty-holder in this case is offset by powerful self-interested gains. First, I’ll argue there are gains. Second, I’ll argue they are important in assessing the pond situation.

The gains

Singer’s description of the pond situation describes the potential costs and gains in very material terms – that is, in terms of the dollar-cost to you of dry-cleaning or replacing your clothes and shoes. But there is much more on offer than this.

First, there is the capacity to feel the power we have in the world. As Nietzsche argued at length and in many different ways, a fundamental driver of human behaviour is the will to exert power in the world, and to see the changes we have wrought to it. The pond situation offers a profound opportunity in this respect. As you wade out of the pond, you are holding in your arms a life that would have ended were it not for you. You can apprehend in no uncertain terms the profound effect of your action. And for the rest of the rescued child’s life, everything they do will only happen because of what you did. Now giving money to international charities simply does not provide this feeling of power in the same, direct way as rescuing the child. Our charitable giving is mediated through the actions of countless other people – like the humanitarian actors themselves. We might be unsure whether our money really had the desired effect in this case or not. And we don’t know exactly who we have saved. It is only in an abstract, indirect sense that we can feel the significance of what we have done, not in the immediate, determinate way we create change in the world by rescuing the child. Rescue shows us in the most visceral, overt way the power we have in the world. And every human being, for good or for ill, likes to feel that power. We want to know we matter.

Second, there is heroism. Most cultures tell stories about heroes. They are objects of our admiration. By rescuing the child, we fit ourselves into these stories, casting ourselves into the role of hero. From the first moment a child picks up a comic book or ‘Famous Five’ story, they will fantasize about being the hero in tales such as these. Rescuing the child offers the priceless opportunity to make those fantasies come true. Giving to charity does not. Indeed, sometimes when I teach Singer’s argument to undergraduates, students come up to me afterward saying how it would be a dream come true for them to be able to rescue a child in such a pond-like situation. It wouldn't just make their day; it would make their year!

Third, there is excitement. Rescuing the child is exciting stuff. Will you get there in time? Will you have to do CPR, and will you remember how to do it? And there is a story here. How did the child get into this position? Where are its parents? People spend thousands of dollars and travel the world in the hopes of having exciting adventures, and having great tales to tell of their adventures. It may sound grim to say it, but the fact is that charity is just not exciting in the same way rescue is.

Fourth, and building on all of the former points, there is glory. As Adam Smith observed in his work on the moral sentiments, many people want to get the acclaim and admiration of others for doing the right thing. That doesn’t mean they want to ‘fake it’. They don’t want only the admiration without the reality (this is merely a love of fame, rather than a desire for true glory, as Smith puts it). Rather, they want to actually do the right thing, and to be known and admired for doing so. The pond situation offers a fantastic opportunity for this. Because it is an interesting, exciting story, people will want to hear it – and because you are the hero in the story, you will be the centre of attention and the object of admiration. Newspapers often carry reports of good Samaritans who saved others, and in so doing provide a motivation to future Samaritans. The pond situation offers the opportunity for glory in a way that charity simply does not.

Fifth, as well as social appreciation, there is the appreciation of specific people. In the pond case, this is the appreciation of the parent or parents of the child. Imagine you yourself are the parent in question. You have lost your child for a moment. You look around in sudden concern, realising the dangers surrounding you. You are near a road, near a pond, there are strangers around. Where is your child? Are they okay? Suddenly you see a complete stranger wading out of the pond, carrying your just-breathing child. What is your reaction? 

My overwhelming feeling, I think, would be one of profound gratitude. How can I ever repay them? This isn’t to say I would open up my wallet to them, as I would worry that could cheapen the importance and nature of what they have done. But I certainly would seriously consider whether I can somehow show my immense appreciation for their action. Being an object of such gratitude is a wonderful social experience – even if there are no further social and material consequences that might flow from it.

Sixth, building on all the prior factors, in cases where costs have been incurred, there may be real opportunities for others to deal with them. As a parent of a rescued child, I at least would insist on paying for the dry-cleaning of the rescuer’s suit. And imagine that you missed or were late to an important meeting or interview because of the rescuing. Is there any better excuse imaginable than that you stopped to save a drowning child’s life? What manner of person would not go out of their way to reschedule the meeting or interview? And if it was a job interview, how could the interviewers fail to be impressed by the quality of the potential applicant?

Now, of course, it could be argued that all of these personal, social and potentially material benefits (or at least, reductions of costs) are in some sense irrational, and that in all consistency we really should personally and socially apply them to charitable giving as well. But even if this is so (and I doubt it), the fact remains that rescue has all these advantages, and charity does not. Saying it is irrational or inconsistent does not alter the social reality that rescue has these benefits and that charity does not.

So what?

Suppose we grant that acts of rescue do have all of these real and potential benefits. Why does it matter?

My point is not that these benefits are so great that, in any given case, it will always be in one’s self-interest to rescue the child. This is not an ‘egoist’ argument for being a Good Samaritan. Rather, my point is that if we are to take Singer’s argument seriously, then we need to accurately tally the costs and benefits in each case. Because of all of these sorts of benefits, the real, sum-total cost (the decrease in our ‘expected utility’) of rescuing the child is much smaller than Singer estimates. Because it is smaller, the sacrifice expected of the person is not as great. As such, when we apply the same moral arithmetic to cases of international charity, we will get very different answers to the ones Singer puts forward.

But perhaps this misses the point. So consider the following objection.

When we are standing on the bank of the pond, it is hardly as if any of us will really calculate all of these benefits. Rather, we simply see the drowning child and realise we can help. With no further consideration, we wade in and save the child. The benefits may subsequently occur but – it may rightly be objected – it is unlikely they formed any part of our reasoning at the moment of action.

I agree this point is a valid one. But I still think the benefits matter. They just matter in a more indirect way. The benefits mean that whenever we hear stories of rescues, we get used to them ending happily – indeed, we think they should end happily, and we act to make this happen. We shake the hand of the person who rescued the child and buy them a beer. We reschedule the appointment they missed. We listen in admiration to their story. And because of all this there is never any caution or qualification applied to the norm of rescuing. We don’t hear innumerable stories about how the rescuer lived to regret their action, or failed to live up to their other responsibilities because of the costs they incurred. For most people, I agree it is true that the above-noted benefits are not their primary reason for rescuing the child. But the benefits still play a role in empowering the person not to need to have any second-thoughts about this situation. The benefits don’t motivate the principle, but they remove obstacles that might otherwise weaken our responsiveness to it.

If this is right, then one of the reasons we are horrified by the person who walks by without saving the child is simply that there are no countervailing considerations that could justify their doing so. Our society has created, as it might be put, a well-functioning norm of rescue, with myriad rewards and cost-mitigation factors set up to ensure its consistent functioning. And this is important in the context of rescue, because we cannot afford for people to have to weigh up costs and benefits in such a case. We need them to be willing to jump into the pond, and to trust that society has got their back. A well-functioning norm achieves this. If a person cannot be relied on to act on a norm in a situation like the pond one, then it is hard to believe they are capable of acting on a norm in any situation. The extent to which they are willing to respect and care for others is almost zero.

And this means that if we are interested in improving the lot of those suffering from famine and the ravages of conflict, then we are better served trying to create social and personal rewards that can flow to the people who help them. The more we can make real the benefits for doing such important moral actions, the more we smooth the way for such action, and allow it to feed into a life well lived.

Friday, October 19, 2012

"I knew it!" Confirmation Bias and Explanation

Source of all evil or defender of all freedom? How the same event can seemingly justify directly opposing beliefs.

“Confirmation bias” refers to the well-known human foible of favouring one’s existing beliefs and commitments, even in the face of conflicting evidence. For instance, if you believe that some person – Annie, say – is a shifty person, you will have a tendency to hold to that belief over time, and even come to believe it more strongly. Psychology experiments suggest that confirmation bias works in a variety of ways – including biasing the search for evidence (we search for data that supports our belief, rather than data refuting it), the interpretation of evidence (we look for weaknesses and ambiguities in evidence that questions our belief) and the memory of prior evidence one has been exposed to (we have faster and more thorough recall of confirming rather than disconfirming evidence). Through these three methods, human beings show a decided tendency to cement their initial beliefs rather than revising them. In some cases confirmation bias can be very powerful. If people are exposed to evidence that Annie is shifty, say, and then shown beyond all dispute that this initial evidence was fabricated, they will still tend to harbour suspicions about Annie’s character that arose on the basis of that evidence, even though they will explicitly acknowledge and believe that the evidence was false!

Of course, confirmation bias is not insurmountable. People can and do decide they were wrong about something; and can recognize and choose to use tests and lines of enquiry that will expose their mistaken beliefs.

Confirmation bias and philosophy

Does confirmation bias affect philosophers too? There are good reasons to believe it does – consider the old saying that “being a philosopher means never having to admit you’re wrong”. The worry expressed here is that philosophers will use the sizable intellectual tools at their disposal to critique and interrogate opposing theories and evidence, and search out subtle cases of confirming evidence. In so doing they will corroborate their initial position, rather than using those intellectual tools to honestly enquire into it. And certainly wholesale changes of philosophical theory by established philosophers tend to be pretty rare. To be sure, philosophers develop their positions over time, revising and responding to new evidence and argument, but direct moves into the opposing camp don’t happen a lot, in my experience at least. (And I have no particularly special virtues in this regard either, of course.)

And it is probably fair to say such biases arise even more strongly in the emotionally charged milieu of political philosophy. Those who think that capitalism is a pretty good idea sometimes seem to be able to find confirming evidence for this belief every day and in every way. And the same is true for those who think capitalism is the fundamental source of every misery in the world. No possible horror can beset humanity without an explanation leading back to capitalism.

Explanations of confirmation bias

So why do we all do it? Well, there are lots of different reasons that have been put forward for this human tendency to cement our beliefs over time. For instance, having to reject a belief is cognitive hard work. The type of thinking that would reject the belief can require effort. Furthermore, the type of thinking that follows from revising the belief takes still more effort – do other beliefs now have to shift because that first one has been rejected? And since human beings are largely prone to avoiding effort, we are motivated to avoid these situations of mental heavy-lifting. Easier to stick with what we know.

Also, the more we understand our world, the better we are doing, and the more secure we feel. Beliefs underwrite our actions and our projects in the world. If our beliefs can be relied upon, then that enhances our ability to predict future events and attain our goals. Finding out a prized belief is wrong threatens that happy security.

And there is a social factor. The more we change our mind and revise what we have said, the less others can feel confident in relying on us as a valuable source of information and insight. Since most of us like our views to be taken seriously, a habit of admitting defeat carries social costs.

There’s a lot to be said for these sorts of explanations of confirmation bias – and the countless others out there in the psychology literature. In all likelihood, confirmation bias is over-determined – there are lots of reasons we do it.

However, I want to reflect on the possibility that confirmation bias arises from largely rational, sensible ways of thinking – in particular the search for explanations for events.

Confirmation bias and seeking explanations

Imagine something strange happens in the world – something you can’t explain. But it is (just suppose) important for you to be able to understand it. So you seek an explanation for it.

What does that involve?

Well, one of the main things it will involve will be aligning this new event with your current beliefs. If you can work out how this new thing happened, given what you already believe, then you will have explained that event. If you can work out how the facts as you currently understand them would have caused (or at least allowed the possibility of) this new event, then you will have explained it. That, pretty much, is just what it means to have an explanation.

So when you start searching for information to explain the event, you search for what we might call linking facts: facts that would allow you to move from your beliefs as they stand to the occurrence of the event.

Current beliefs + Linking facts = Explanation of event (or phenomena)

This, I hope, is pretty straightforward. If we want to understand something, there’s no point aligning it with other people’s beliefs (how would that help?), and no point trying to explain it from first principles all the way up (couple of years to spare?). The new phenomenon is understood and explained only when it makes sense, given what we already accept. This doesn’t mean that the linking facts can’t replace or force revision of existing beliefs, but it does mean the existing beliefs fundamentally frame what counts as an explanation.

In fact, this search for linking facts has at least two results.

First, the type of linking facts you are looking for will vary depending on what your current beliefs are. If you believe that the United States is ultimately the root of all international problems, for example, then you will search out the type of linking facts that will connect the current phenomenon – the situation in Syria in 2012, say – with the US. You will look for the involvement of the CIA, the pressure the US exerts on the global media, its historical influence on and action in the Middle East, its current oil interests in that region, and so on and on. On the other hand, if you believe that most of the world’s problems arise from extreme ideologies and fundamentalist religious beliefs, then you will search for very different sorts of linking facts.

Second, the search for linking facts will determine when your investigation stops. Once you have located the required linking facts, then the event is understood and explained. You can stop searching. So once you have found – to return to the example – that the US has CIA agents at work on Syria’s border with Turkey, that it has a history of animosity with the Syrian regime, and that Syria is an ally of Russia, then you have explained the Syrian crisis and the way it is presented in the mainstream media.

Job done. Move on.

And, of course, if you had started out with concerns about religious extremism, then in all likelihood a different set of linking facts would have been discovered, and the search stopped at that point. In other words, you will not continue to search in such a way that you might (a) find subsequent facts that disprove the existence of your linking facts or impact on their capacity to explain the event, or (b) find subsequent facts that would better account for the event, using an entirely different explanation.
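For readers who like to see a procedure spelled out, here is a minimal, purely illustrative sketch of the search just described. Nothing in it is real data – the event, the priors and the ‘facts’ are all invented for the example. Two enquirers with different starting beliefs search the same pool of facts; each stops at the first ‘linking fact’ that suits their prior, and each walks away with that prior reinforced.

    # Toy illustration only: invented event, priors and 'facts'.
    EVENT = "Syrian crisis"

    FACT_POOL = [
        "CIA agents are reported near the Syrian crisis zone",       # suits a US-centred prior
        "Fundamentalist militias are active in the Syrian crisis",   # suits an ideology-centred prior
        "Drought and food prices contributed to the Syrian crisis",  # a fact neither enquirer goes looking for
    ]

    # Crude stand-in for 'the kind of linking facts this prior directs us to look for'.
    SEARCH_CUE = {
        "The US drives world events": "CIA",
        "Extremist ideology drives world events": "Fundamentalist",
    }

    def explain(event, prior, facts):
        """Return the first fact linking the event to the prior belief, then stop searching."""
        for fact in facts:
            if SEARCH_CUE[prior] in fact and event in fact:
                return f"'{prior}' explains '{event}' via: {fact}"
        return "no explanation found yet - keep searching"

    for prior in SEARCH_CUE:
        print(explain(EVENT, prior, FACT_POOL))
    # Each enquirer halts at a different linking fact - and each now has one more
    # reason to keep believing what they believed at the start.

The point of the sketch is simply the stopping rule: the search ends as soon as a belief-consistent link is found, so the third fact in the pool is never examined by either enquirer.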

Now this search for explanation is not in any sense irrational. But it can clearly contribute to confirmation bias. As well as constraining the nature and end-point of the search, it cements the initial belief even further.

Why?

Because now that initial belief (about the role of the US in world affairs, say) helps explain this new event. The fact that it can explain this means you now have one more reason for believing it. If someone else later challenges this belief of yours, you are entitled to think: “But wait, clearly the US is playing this role, because I found evidence of its presence in the Syrian crisis.” You did not set out to test this belief, but you nevertheless ultimately collected evidence that helped justify your continued belief in it. In this way, consideration of the same event by two people with different starting beliefs, even with access to the same information, will for each of them contribute to the justification of their initial beliefs.

And that’s a problem, because it means that confirmation bias arises from what are otherwise quite sensible and effective methods for understanding and explaining events.

Wednesday, October 10, 2012

On Julia’s 'He needs a mirror' speech


One of my favourite books when I was younger was The Penguin Book of Twentieth Century Speeches. It has all the greats, as you would expect, and some little known treasures. I used to pore over the best of them again and again, and you could almost hear the voices springing out from the page when you read them to yourself – the signature sounds of Martin Luther King, JFK, Winston Churchill.

In one of my past lives before I grew up to be a political philosopher, I spent some time in stage acting, so great speeches appealed to me on that level too. They were not only political theory – though the best of them were that as well – but they were theatre, strategy, rhetoric, persuasion and poetry.

Like any such list of the greats of years gone by, whether in music, film or philosophy, this one is apt to make the idle reader despair of current offerings. What would it have been like, you can’t help but wonder, to live in a world where people like Martin Luther King actually stood up and spoke that way? What would it have been like to actually be in the crowd, hearing him make history in front of you – to be a part of that history?

Julia Gillard’s ‘He needs a mirror’ was not the stuff of the ‘I have a dream’ legends. No question about that. But when the dust settles at the end of this century, and the great speeches are collected so the next generations can despair over their current crop of political leaders, I hope at least her name comes up, and that some consideration is given to whether her speech of October 10, 2012, might belong in the sequel to my dog-eared Penguin.

To be sure, the speech was not set in a context as grandiose as many of last century’s greats – it was not the culmination of a huge social movement that had gathered tens of thousands of people together to march on their capital, it was not ringing from the halls of a nation newly freed from apartheid, or sounding out as the Berlin Wall fell. There were no epochal events unfolding – no terrible war confronting a nation, no fledgling state being formed, no national emergency to rally around.

But that, in a way, was precisely its power.

It is not only great and terrible acts that define a people, a country, a generation. It is the petty and demeaning ones as well. It is all the tiny, needling, fleeting, half-audible, offhand, half-joking words and acts that we almost all are guilty of using – sometimes thoughtlessly, sometimes not – that bring people down a peg, that put them in their place, that remind them how things really stand and where they are in the pecking order. It is those acts, as much as the great acts, which define who we are and where we are going. It is – as Leonard Cohen once put it – the homicidal bitching that goes down in every kitchen, that determines who will serve and who will eat.

Individually, maybe, each particular snippet isn’t a big deal. But that’s the thing with racism and sexism. It’s not just arbitrary meanness. It’s organized meanness. It’s the little things that are said often enough, and by enough people, and to the same people, that finally start to weigh on them.

So it takes a reckoning. It takes someone being able to fix on one target, and add up the sorts of things they say and do – and the sorts of things they let their friends and allies say and do – and package it all together, and see what that says about the person’s character.

And that’s what Julia did.

And she did it beautifully.

It was theatre.

At the end of the performance I looked in disbelief at the timestamp on my iPad. Had I really just listened, captivated, to a speech in the Australian parliament fifteen minutes long? And had my ten-year-old daughter really just sat next to me on the couch and (excepting a short distraction when the complexities of the Slipper case lost her) watched it with me? Even she – largely a stranger to the evening news – could see the drama unfold as Tony Abbott’s confident smirk began to dissolve, and the realisation sink in for him.

Julia had him.

She knew it.

He knew it.

My ten-year-old daughter knew it.

And the more he sank down in his chair like a cheeky child finally getting the dressing down he knows he deserves, the more my daughter giggled uproariously.

If this was politics, she wanted in.

How long had Julia Gillard been saving up all of those snippets, each perhaps almost-excusable on its own, but able to be drawn together for devastating effect, racked up one after another when the opportunity presented itself, and when the opponent had the decency to blunder headlong into a political trap that might have been years in the making?

And if the list had been years in the creation, who can blame her? The speech works because we the audience can all put ourselves in her place – doubtless women can do this much more effortlessly than men, of course, because they have in fact been in that place – and have wanted to make that list, and have dreamed of one day being able to set it out, publicly, point by point, with the victim trapped with nowhere to turn.

But in the end, if her speech really does merit candidacy in the great speeches of the twenty-first century, it won’t be because we the audience put ourselves in her place, vicariously living through the thrill of the kill, exhilarating though it was.

It’ll be because we put ourselves in Tony Abbott’s place, and we wondered what that speech would sound like when it was delivered to us. And because we decided we didn’t think we’d altogether like how it might sound.

Tony Abbott is not a terrible, evil man. He’s not so far from a regular bloke.

And that’s the point.

Saturday, October 6, 2012

Ooops. Ethics and the Psychology of Error


Can we learn about moral decision-making from the psychological literature on human error – that is, the study of how and in what ways human beings are prone to mistakes, slips and lapses? In this blogpost I offer some brief – but I hope enticing – speculations on this possibility.

At the outset it is worth emphasizing that there are some real difficulties with linking error psychology with moral decision-making. One obvious point of dis-analogy between ethics and error is that one cannot deliberately make mistakes. Not real mistakes. One can fake it, of course, but the very practice of faking it implies that – from the point of view of the actor – what is happening is not at all a mistake, but something deliberately chosen. But it seems to be a part of everyday experience that one can deliberately choose to do the wrong thing.

However, the most significant dis-analogy between moral psychology and error psychology is that in most studies of the psychology of error, there is no question whatever about what counts as an error. In laboratory studies the test-subjects are asked questions where there are plainly right and wrong answers – or rational and irrational responses. Equally, in studies of major accidents, the presence of errors is pretty much unequivocal: if there is a meltdown at a nuclear reactor, or if the ferry sinks after crashing, then it is clear that something has gone wrong somewhere.

In ethical decision-making, on the other hand, whether a judgement or an action is ‘in error’ – if this is supposed to mean ‘morally wrongful’ – is often very much in dispute. So someone who judges that euthanasia is wrong, say, cannot be subject to the same error analysis as someone who contributes to a nuclear disaster. At least, not without begging some very serious questions.

The way I aim to proceed is to think about those cases where the person themselves comes to believe they made a moral mistake. The rough idea is that a person can behave in a particular way, perhaps thoughtlessly or perhaps after much consideration, but later decide that they got it wrong. Maybe this later judgement occurs when they are lying in bed at night and their conscience starts to bite. Maybe it happens when they see the fallout of their action and the harm it caused others. Maybe it happens when someone does the same act back to them, and they suddenly realise what it looks like from the receiving end. Or maybe their local society and peers react against what they have done, and the person comes to accept their society’s judgement as the better one.

One reason moral psychology might be able to learn from error psychology is in the way that error psychology draws on different modes of action and decision-making and welds them all into one unified process. Interestingly, the different modes error psychology uses parallel the distinctions made in moral psychology between virtue, rule-following (deontology) and consequentialism (utilitarianism).

GEMS

I’ll use here the account relayed in James Reason’s excellent 1990 book, Human Error. Drawing on almost a century of psychological study on the subject, Reason puts forward what he calls the ‘Generic Error-Modelling System’ or GEMS. GEMS is divided into three modes of human action, in which different sorts of errors can arise. (What follows is my understanding of GEMS, perhaps infected by some of my own thoughts – keep in mind I am surveying a theory that is rather outside my realm of expertise, so I make no great claims to getting it exactly right.)

Skill-based action

The first mode of action is ‘skill-based’. This is the ordinary way human beings spend most of their time operating. It runs largely below the level of conscious thought, on ingrained and habitual processes. We decide to make a coffee, or drive to the store, but we do not make executive decisions about each and every one of the little actions we perform in doing these tasks. Rather than micro-managing every tiny action, we allow our unconscious skills to take over and do the job. This mode of action draws heavily on psychological ‘schemas’ – these are, roughly speaking, processes of thought or models of action that we apply to (what we take to be) a stereotypical situation in order to navigate it appropriately. The context triggers the schema in our minds, or we deliberately invoke a schema to manage some task, and then our conscious minds sit back (daydream, plan something else, think about football, etc.) as we proceed through the schema on automatic pilot. Schemas are created by prior practice and habituation, and the more expert we become in a particular area, the more of it we can do without thinking about every part. To give an example: when a person is first trained in martial arts, they need to consciously keep in mind a fair few things just to throw a single punch correctly. After a while, getting the punch right is entirely automatic, and the person moves on to concentrate on combinations, then katas, and so on. More and more of the actions become rapid and instinctual, leaving the conscious mind free to focus on more sophisticated things – gaps in an opponent’s defences, their errors of footwork, and so on.

Errors usually occur at this skill-based level when (a) the situation is not a stereotypical one, and faithfully following the schema does not produce the desired result, or (b) we need to depart from the schema at some point (‘make sure you turn off the highway to the library, and don’t continue driving to work like a normal day’) but fail to do so.

Rule-based thinking

The second mode of action is ‘rule-based’. This arises when the schema and the automatic pilot have come unstuck. When operating at the ‘skill-based’ level described above we were unaware of any major problem; the context was ‘business as usual’. Rule-based action emerges when something has come unglued and a response is required to rectify a situation or deflect a looming problem. In such cases, GEMS holds, we do not immediately proceed to reason from first principles. Rather, we employ very basic rules that have served us well in the past – mainly ‘if-then’ rules like: ‘If the car doesn’t start, check the battery terminals are on tight’; ‘If the computer isn’t working, try turning it off and on again’.

Mistakes occur at this level if we apply a bad rule (one that we erroneously think is a good rule), or we apply a good rule but not in its proper context. We can also forget which rules we have already applied, and so repeat or omit actions as we work through the available strategies for resolving the issue.

Knowledge-based thinking

The third and final mode of action is ‘knowledge-based’. Knowledge-based thinking requires returning to first principles and working from the ground up to find a solution. At this level an actor might have to try to calculate the rational response to the risks and rewards the situation presents, and come up with solutions ‘outside the box’. It is at this level that a person’s reasoning begins to parallel ‘rational’ thinking in decision-theoretic or economic senses. That is, it is in this mode that a person really tries to ‘maximise utility’ (or wealth).

GEMS says that human beings do not like operating at the knowledge-based level; it takes not only concentration but also real mental effort. For the most part, effort is not enjoyable, and we only move to knowledge-based decision-making when we have reluctantly accepted that rule-based action has no more to offer us – we have tried every rule which might be workable in the situation, and have failed to resolve it appropriately. We are dragged kicking and screaming to the point where we have to think it out for ourselves.

James Reason argues in his presentation of GEMS that human beings are not particularly good at this type of knowledge-based thinking. Now my first thought on reading this was: ‘not good compared to whom?’ It’s not like monkeys or dolphins excel at locating Nash equilibria in game-theoretic situations. Compared to every other animal on this planet, human beings are nothing short of brilliant at knowledge-based thinking. But Reason was not comparing humans to other animals; he was comparing knowledge-based thinking to rule-based thinking. He argues that, comparatively, human beings are far more likely to get it right when operating on the basis of rules. Once the rules render up no viable solutions, we are in trouble. We can think the issue through for ourselves from the ground up, but we are likely to make real errors in doing so.

The opportunity for error at this stage, therefore, is widespread. Human beings can read the situation wrongly, be mistaken about the causal mechanisms at work, make poor predictions about likely consequences of actions, and be unaware of side-effects and unwanted ramifications. This isn’t to say knowledge-based decision-making is impossible, of course, just that the scope for unnoticed errors is very large.

A full-blown theory of human action

Ultimately, in aiming to give a comprehensive theory of how human beings make errors, this realm of psychology has developed a full-blown account of human action in general. Most of the time we cruise through life operating on the basis of schemas and skills. When a problem arises, we reach for a toolbox of rules we carry around – rules that have worked in previous situations. We find the rule that looks most applicable to the current context, and act on its basis. If it fails to resolve the situation, we turn to another likely-looking rule. Only after we despair of all our handy rules-of-thumb resolving the situation are we forced to do the hard thinking ourselves, and engage in knowledge-based decision-making – which is effortful and fraught with risk.
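For readers who like things schematic, here is a very loose sketch of that cascade in code. It is my own illustration, not anything drawn from Reason’s book: the schemas, rules and numbers are invented for the purpose, and real skill-based action is of course nothing like a dictionary lookup.

```python
from dataclasses import dataclass
from typing import Callable

# A loose, invented illustration of the GEMS-style cascade described above.

@dataclass
class Rule:
    condition: Callable[[str], bool]   # the 'if' part: does this rule apply here?
    action: str                        # the 'then' part: what to do
    past_success: float                # how well it has served us before

def gems_decide(situation: str,
                schemas: dict[str, str],
                rules: list[Rule],
                options: dict[str, float]) -> str:
    # 1. Skill-based: a familiar situation triggers a stored schema automatically.
    if situation in schemas:
        return schemas[situation]
    # 2. Rule-based: something has come unstuck, so try stored if-then rules,
    #    the most trusted first.
    for rule in sorted(rules, key=lambda r: r.past_success, reverse=True):
        if rule.condition(situation):
            return rule.action
    # 3. Knowledge-based: no rule applies; weigh the remaining options from
    #    first principles (effortful, and the most error-prone step).
    return max(options, key=options.get)

schemas = {"make coffee": "run the usual coffee-making routine"}
rules = [Rule(lambda s: "car" in s, "check the battery terminals", 0.8),
         Rule(lambda s: "computer" in s, "turn it off and on again", 0.9)]
options = {"improvise a fix": 0.4, "call for expert help": 0.7}

print(gems_decide("make coffee", schemas, rules, options))           # skill-based
print(gems_decide("computer frozen", schemas, rules, options))       # rule-based
print(gems_decide("unfamiliar emergency", schemas, rules, options))  # knowledge-based
```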

(Now one can have worries about this picture. In particular, psychologists of error seem to me to work from a highly selective sample of contexts. Their interest focuses on cases where errors are easily recognizable, such as in artificial laboratory situations and slips of the tongue, or where the errors cry out for attention, such as in piloting mishaps and nuclear meltdowns.)

What’s interesting about GEMS, from a moral theory perspective, is how it aligns with the three main families of moral theory: virtue theories, duty-based theories and consequentialist theories. In what follows, I’m going to insert these three theories of moral reasoning into the three categories of human action put forward by the GEMS process, and see what the result looks like.

Virtue and skill-based action

Virtue theory has deep parallels with the schemas of skill-based reasoning. Virtues are emotional dispositions like courage and truthfulness. When operating on the basis of the virtues, one isn’t focusing on particular rules, or on getting the best consequences. Instead, the point is to have the correct emotional response to each situation. These steadfast emotional dispositions – the virtues – will then guide the appropriate behaviour.

Now, to be sure, it is mistaken to view Aristotle (the first and greatest virtue-theorist) or contemporary virtue-ethicists as basing all moral behaviour on habit and habituation, especially if this is taken to imply not actively engaging one’s mind (Aristotle’s over-arching virtue was practical wisdom). But the formation of appropriate habits, and learning through practice and experience to observe and respond to the appropriate features of a particular situation, is a hallmark of this way of thinking about morality. (Indeed, it is reflected in the roots of the words themselves: our English word ‘ethics’ comes from the Greek term meaning ‘custom’ or ‘habit’ – and ‘morality’ comes from the Latin word for the same thing.)

Paralleling the psychology of error, we might say that the primary and most usual way of being moral is to be correctly habituated to have the right emotions and, on their basis, to do certain actions in particular contexts. We need to be exposed to those contexts, and practise doing the virtuous thing in that type of situation until we develop a schema for it – until the proper response is so ingrained as to become second nature to us. Operating in this mode, we don’t consciously think about the rules at all, much less feel temptations to breach them. Most good-hearted people don’t even think about stealing from their friends, for instance. It isn’t that an opportunity for theft crosses their mind, and then they bring to bear the rule on not stealing. Rather, they don’t even notice the opportunity at all.

Sometimes, though, problems arise. Even if we have been habituated and socialized to respond in a particular way, we might find a case where our emotionally fitting response doesn’t seem to provide us with what we think (perhaps in retrospect) are good answers. This may be because the habit was a wrongful one. Schemas are built around stereotypes – and it is easy to acquire views and responses based on stereotypes that wind up harming others, or having bad consequences. Equally, if we are not paying attention to the situation, we may be on emotional autopilot, and fail to heed the ways in which we need to modify our instinctual or habitual response in light of the differences between this situation and a more stereotypical one.

Deontology and Rule-based thinking

At the points where our instinctive emotional responses lead us awry (if we follow the analogy to the GEMS psychology of error), we will look to rules that have served us well in the past. In ethics these rule-based systems are called deontological theories. Deontology says that the point is not to have the right emotions, or achieve the best consequences, but to follow the proper rule in the circumstances. So when we are jerked out of habitual response by some moral challenge (unexpected harm to someone else, etc.), the first thing we do is scout around for rules to resolve the situation – rules that have previously served us well. These might be very general rules: ‘What would happen if everyone did what I am thinking of doing?’ or ‘What would everyone think of me if they knew I was doing this?’

Or the rules we appeal to might be quite specific. In the GEMS system, we have a variety of options for rules to select, and we try to gauge the most appropriate one for the situation we are in. In ordinary moral thought, this process in fact happens all the time. Indeed there is a technical word for it (and a long history behind it): casuistry. When one reasons casuistically, one analogizes to other, closely related situations, and uses the rule from that situation. For instance, if we are unsure whether first-term abortion is morally acceptable, we might first try analogizing to the murder of a child, which has a clear rule of ‘thou shalt not kill’. But as we think about it, we might decide that the dis-analogies here are very strong, and perhaps a closer analogy is contraception, for which (let us suppose) we accept a rule that it is legitimate. Or we might analogize to self-defence, especially in cases where the mother’s life is in danger. In attending to the relevant features of the situation, we select what seems to be the most appropriate rule to use. Sometimes we use multiple rules to develop highly sophisticated and qualified rules for a specific situation.

Utilitarianism and knowledge-based thinking

But what happens when this doesn’t seem to resolve the issue, or we feel torn between two very different rules (as might have occurred in the abortion scenario above)? At this point the third mode, knowledge-based reasoning, comes online. We must return all the way to first principles. In GEMS one way this can occur is through means-end rationality, where we take into account how much we want each outcome, and what the chances of each outcome are, given a particular action of ours. We then choose the action that has the best mix of likelihood and good consequences; we ‘maximise expected utility’, as the point is put technically.

And of course there is a moral theory that requires exactly this of us, except that rather than maximizing our own personal happiness, we are directed to maximise the happiness of everybody, summed together. This is the ethical theory of utilitarianism, which requires (roughly speaking) that we create the greatest good for the greatest number of people (or of sentient creatures more widely). It is here that we have really hard thinking to do about the likely costs and benefits to others of our action. We weigh them up and then act accordingly.
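To put the two ideas side by side (this gloss is my own, not anything from GEMS or from any particular utilitarian text): means-end rationality asks the agent to pick the action that maximises their own expected utility, while utilitarianism keeps the same structure but swaps in the summed utility of everyone affected.

```latex
% Decision-theoretic rationality: choose the action a that maximises the
% agent's own expected utility over the possible outcomes o.
a^{*} = \arg\max_{a} \sum_{o} P(o \mid a)\, U_{\text{self}}(o)

% Utilitarianism: the same structure, but the utility being maximised is the
% sum over every person (or sentient creature) i affected.
a^{*} = \arg\max_{a} \sum_{o} P(o \mid a) \sum_{i} U_{i}(o)
```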

Large-scale pluralist theory of moral action

Is this a plausible over-arching model of what moral thinking looks like? I think it has some merits. It is true that most of what we do, we do without a lot of conscious thought, on the basis of ingrained habits and customs. We don’t go through life forever applying rules or calculating consequences; we hope that our habits of action, thought and emotion will push us in the right direction more often than not. And this might be particularly true when we are interacting with friends and lovers and family, where following ‘rules’ or coolly calculating risks and rewards might seem quite inapt.

But sometimes we are confronted with ethical ‘issues’ or ‘challenges’. We encounter a situation where habit is no longer an appropriate guide. It sounds right to me that in these situations we do cast about for rules-of-thumb to resolve the problem. We think about what worked before in similar situations, and go with that.

In some cases, though, there can seem to be no ‘right’ rule; no rule that will work fine if everyone follows it, or no rule that does not clash with what looks to be an equally fitting rule. These cases will often arise in novel situations, rather than in everyday encounters. And if they are worth thinking about, then it may be that the stakes in them are rather high – they might be the decisions of leaders, generals or diplomats. In such cases, really weighing up the possible pros and cons – and trying to quantify the merits and demerits – of each approach seems appropriate.

Note also that some of the major objections to each theory might be managed by this pluralist, sequential account, where we proceed from virtue, to duty, to utilitarianism. For instance, utilitarianism is often philosophically disputed because the pursuit of sum-total happiness may lead us to sacrifice the rules of justice – or even force us to give up on our deepest personal commitments. But on the approach here this wouldn’t happen. On everyday matters of personal commitment, we operate on the basis of our emotional dispositions and mental schemas and habits. When there is a clear rule of justice at stake, we accept its rule-based authority. But in cases where neither works – and only in those cases – do we look to the ultimate consequences of our actions as a guide to behaviour. And even at this level we still have constraints based on our emotional habits and rules-of-thumb. The utilitarian decision-making operates within the psychological space carved out by these two prior modes of judgement.
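As a final illustration of that last point – again my own sketch, with invented examples, rather than anything drawn from these theorists – the utilitarian weighing only ever runs over the options left standing once the habituated dispositions and the accepted rules have screened the rest out:

```python
# Invented example: consequence-weighing constrained by prior virtue- and
# rule-based screens, in the spirit of the sequential picture sketched above.

def permitted_by_virtue(option: str) -> bool:
    # A well-habituated agent wouldn't even register some options as live ones.
    return option != "steal from a friend"

def permitted_by_rules(option: str) -> bool:
    # Options ruled out by accepted rules of justice never reach the calculation.
    return option != "break a promise"

def choose(options_with_value: dict[str, float]) -> str:
    candidates = {o: v for o, v in options_with_value.items()
                  if permitted_by_virtue(o) and permitted_by_rules(o)}
    # Only now does the utilitarian weighing of consequences get a say.
    return max(candidates, key=candidates.get)

print(choose({"steal from a friend": 0.9,
              "break a promise": 0.8,
              "help at some personal cost": 0.6}))
# -> "help at some personal cost"
```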

Of course, like any such speculations, I am sure these raise more questions than they answer. But it is striking how the three modes of decision-making that occur in the psychology of error map onto the three main ways of theorizing about ethics, and the process whereby a decision-maker moves from one mode to another does have prima facie plausibility as a description of the way we go about making moral decisions.