Sunday, September 30, 2012

Zombie Apocalypses, Lasertag and Ethics


But does she count as a person for the purposes of
applying Kant's Categorical Imperative?
How would people behave in the world of a zombie apocalypse, and what, if anything, does it tell us about ordinary morality?

One theory is that we would all fall upon each other, tearing ourselves to pieces in a ‘Lord of the Flies’ type frenzy. Life in the zombie aftermath would be – in the words the seventeenth-century political theorist Thomas Hobbes used to describe the ‘State of Nature’ – ‘solitary, poor, nasty, brutish and short’. And this is so not because of the dangers from zombies, but rather because of the dangers presented by other humans in a lawless world.

Is this really what would happen?

A recent post on The Conversation suggests – in a light-hearted fashion – that there is some evidence that it is. The author describes the player-to-player carnage of the videogame DayZ, where players struggle to survive in a post-Zombie-apocalyptic world. The costs and payoffs to players in the game are structured so that rational self-interested players benefit greatly from killing others and stealing their stuff. And that, for the most part, is exactly what the players do. Of course, some stalwart do-gooders can be found – healers of the wasteland – but they are all the more remarkable because of the cutthroat and solitary world they inhabit.

Better hope that Z-Plan is tight.

But it seems to me that all this shows is how warped the DayZ game-mechanics must be. A game that has people avoiding teaming up for survival is unrealistic. (Just to clarify that – a zombie apocalypse is of course altogether realistic. But a world where survival is not improved by being in at least small teams, preferably with cordial relations with other nearby teams, is utterly divorced from the irremovable realities of the human situation.) Some level of decency, trustworthiness and sociability is a core part of rational survival.

Now if we were to take the less fun path, this point could be argued through the theories of the great philosophers since Epicurus in the Greek Hellenistic period, and we could use the philosophies of Rousseau and Hume to refute the grim picture relayed by Hobbes. Similarly, we could make the point in terms of contemporary game theory, arguing that the dog-eat-dog ‘prisoner’s dilemma’ model of human interaction is a misrepresentation of ordinary reality, and that the more sociable ‘stag hunt’ game is a better picture of what is going on.
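Before abandoning the less fun path entirely, the contrast can be made concrete with a pair of toy payoff matrices. (The numbers below are illustrative assumptions of my own, not anything from the game-theory literature.) In the prisoner’s dilemma, defecting is your best reply no matter what the other player does; in the stag hunt, cooperating is the best reply to cooperation:

```python
# Toy payoff matrices. Each entry maps a pair of moves to
# (row player's payoff, column player's payoff).
# C = cooperate (team up), D = defect (go it alone / betray).

prisoners_dilemma = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),   # sucker's payoff: you cooperate, they defect
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

stag_hunt = {
    ("C", "C"): (4, 4),   # hunt the stag together: best outcome for both
    ("C", "D"): (0, 3),   # you hunt the stag alone and fail; they catch a hare
    ("D", "C"): (3, 0),
    ("D", "D"): (3, 3),   # both settle for hares
}

def best_reply(game, opponent_move):
    """The move that maximises the row player's payoff
    against a fixed opponent move."""
    return max(["C", "D"], key=lambda m: game[(m, opponent_move)][0])

# Prisoner's dilemma: defection dominates...
assert best_reply(prisoners_dilemma, "C") == "D"
assert best_reply(prisoners_dilemma, "D") == "D"

# ...but in the stag hunt, cooperation is the best reply to cooperation:
assert best_reply(stag_hunt, "C") == "C"
```

If post-apocalyptic survival is really a stag hunt rather than a prisoner’s dilemma, then teaming up isn’t naive altruism; it’s the rational equilibrium, provided you trust your partners to turn up.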

Instead though, I propose to take the fun path, and to compare the DayZ situation with that which arises in zombie lasertag.

Stuck for a gift idea for the person who has it all?
Go with zombie apocalypse erotic romance!
But before we turn to violence: sex.

After all, the normal social motivations that pull people together will be just as relevant post-apocalypse. Indeed, this very day marks the book release of what I suspect is the first ever novel-length erotic romance set in a zombified world: Flesh by Kylie Scott. (No, the zombies themselves aren’t getting it on, so get that mental image out of your heads. Ew. No. It’s about a ménage forming amongst three survivors as they band together against the infected hordes.) And the idea here is a pretty plausible one. People are still going to have the same desires for companionship, love and lust in the new world. Probably more. Heck, you may as well live like there’s no tomorrow when there’s every chance there really is no tomorrow. Indeed, a considerable number of writers have written on this premise; it turns out there is in fact a website for the Romance Writers of the Apocalypse.

And there is a deeper point at issue here. There are basic survival gains in simply having a good time. This is the major problem I have with the TV series The Walking Dead. There’s a point at which grimness becomes pathological. Every character in that show seems just a nudge away from committing suicide. Lighten up, people! In order to survive, you need to want to survive. You need something to get you through the day – and that might well be whatever gets you through the night. Ultimately, one of the best survival rules offered in Zombieland is not Rule #1 ‘cardio’, or Rule #2 ‘Double-Tap’, but rather Rule #32: ‘Enjoy the little things’.

Or, in the case of Flesh, the not-so-little things. And all at the same time. Stag-hunt games indeed.

But to return from sex and zombie erotica to violence and zombie lasertag, last week I was fortunate enough to play in a test-run of the new Laserforce Zombie game.

For those who don’t know much about lasertag, the basic principle is pretty simple. You wear a vest-pack with flashing lights and sensors on it, and it attaches to a gun that shoots a laser. When your shot hits the sensors on the vest-pack of an enemy player, their suit registers that it has been hit, and it ‘goes down’ for a period of time – meaning that their lights go out and their gun won’t work. They can still be shot again, so usually they will have to run away and hide somewhere until their suit reactivates and they can start shooting people again. A central computer keeps track of everyone’s score – basically the amount of times they have shot the enemy and been shot by the enemy.
Social Science Experiments: Lasertag vs. DayZ

That’s the simple story. But most games of lasertag – such as the one I play at Laserforce league – are more complicated. A lot more complicated. In ‘Space Marines 5’ (the game that the international competition is held in) you have limited lives and shots and need to get resupplied by team-members – your ‘Ammo’ and ‘Medic’ – in order to stay in the game and keep fighting. Plus there are nukes, rapid-fire, missiles, ‘generator targets’, power-boosts and more. The goal in that game isn’t merely to outscore your opponents, but to eliminate their team altogether. Kill their medic out of the game, and then remove the other players one by one.

On the last League night, league players were used as the test subjects for Laserforce’s new Zombie game. The game is pretty much how you’d imagine it. Everyone but one player starts as a human ‘survivor’, with flashing red lights on their vest. One player starts as a zombie. The zombie (with flashing green lights) wants to infect the survivors, which it can do by shooting them with a ‘missile’. Using a missile is just like shooting them normally, but it takes a little longer. You have to ‘lock on’ to them with your gun pointed at their vest-sensors, which takes a little over a second. The zombie is also hard to kill. He has to be shot multiple times before he goes down, and it doesn’t take him long to get back up again. Once a survivor is infected, they have 30 seconds to get the ‘cure’, which pops up periodically at one of the in-game targets. If you can’t get the cure in time (and you can only ever use one cure per game), then the infection takes hold. Your lights turn green and you turn on your buddies, trying to infect them.

The game is an ‘individual’ game. That is, the score and player rankings are not built around team victory, but solely on individual performance. Your aim is to win personally, not to play a part in getting your team to victory.

So how did the survivors act in this situation, in order to maximise their own individual score?

Exactly how you would imagine. They teamed up.

Of course they teamed up. It took barely seconds for players to realise that there was safety in numbers. To be found on your own was to be infected. You needed players around you to communicate where the zombies were, to watch every direction from which the zombies might attack your position, and to create enough firepower to shoot the zombie down before he could infect anyone in your group. As well, the fewer zombies out there on the playfield, the more manageable the task was. Keeping track of and defending against one or two zombies was straightforward. But once you have five or six zombies coming at you en masse and from different directions, the end is nigh.
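A back-of-the-envelope sketch shows why numbers matter so much. (The probabilities here are made up for illustration – the 40% figure is my assumption, not anything measured on the night.) If each survivor independently has some chance of knocking the zombie down during its one-second missile lock-on, the chance the lock-on succeeds collapses quickly as the group grows:

```python
# Toy model: each survivor independently has a 40% chance (assumed figure)
# of shooting the zombie down during its ~1-second missile lock-on.
# The zombie lands its missile only if every defender fails to stop it.

def infection_chance(n_defenders, p_stop=0.4):
    """Probability the zombie completes its lock-on against a group."""
    return (1 - p_stop) ** n_defenders

for n in (1, 2, 4, 6):
    print(f"{n} defender(s): {infection_chance(n):.1%} chance someone is infected")
# 1 defender:  60.0%
# 2 defenders: 36.0%
# 4 defenders: 13.0%
# 6 defenders:  4.7%
```

A lone survivor is a meal waiting to happen; a group of six is nearly untouchable. The same multiplication, of course, runs in reverse once the zombies start teaming up too.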

In fact, so successful was the teaming up and communication amongst the dozen or so survivors that in the first test game the starting zombie wasn’t even able to infect anyone. Spreading out across the upstairs levels, players gravitated towards vantage points that allowed each group to patrol and defend one of the four routes the zombie could take to get upstairs. Equally, being upstairs allowed each group to keep track of an area of the lower playfield; with good communication every player on the field knew exactly where the zombie was at pretty much every moment of the game.

This might have happened in the second game as well, if some morons hadn’t ruined the important social science experiment by deliberately allowing themselves to be infected just so they could stagger about the playfield, infecting survivors and mumbling, ‘Brains, brains’. (I totally deny any involvement in such silliness.)

There was even enlightened self-sacrifice when it came to the cure. After some frenetic in-game arguments, some players were willing to forego getting the cure themselves, in order to let infected players use it. Better to not have the resource yourself than to have an ally turn into an enemy, and to let the zombies get a foothold in the survivor population.

The point is that a basic level of teamwork is a fundamental part of any real-world scenario. Human beings are in many respects, if not pack animals, then at least social animals. Throughout history they survive, or perish, in groups. They cannot see every direction at once. They need time to sleep, recuperate and reload. Their capacity to fight off threats increases with their numbers. And their natural specializations can lead to a useful division of labour in groups. In a word, society pays off.

Thus I conclude that the zombie lasertag social science experiment supports the view that some level of moral behaviour – trust, teamwork and so on – arises amongst self-interested rational actors.

What is the result here for moral theory? Well, some might say that morality is entirely divorced from self-interested rational behaviour. Prudence has nothing to do with genuine respect for others. Others might say morality just is enlightened self-interested rational behaviour. The truth, though, is perhaps somewhere in between, and that the sociable teamwork required by the human condition – both in ordinary life and during zombie apocalypses – sets us on a path to morality proper.

A final observation. One interesting facet of the zombie evening was the question of rules. All the players on the night were Laserforce league players – and the League requires players abide by various rules, such as not chasing fleeing opponents with their lights out and not ‘blind-shooting’ opponents (sticking your gun around corners without looking and shooting). These rules are standardly applied in non-league games as well; while there are no referees in these games, others will chastise you (that is, they will abuse the heck out of you) if you clearly breach these rules. But no-one expected the zombie to follow the rules. Of course zombies chase their prey. The rules cannot apply to them. The ease with which everyone instinctually accepted this departure from the rules that accompany every other game was fascinating.

But then the further question arose as to whether the survivors should be rule-bound to the zombies. How could it be wrong to cheat against the undead, by blindshooting them for instance? They weren’t human; they weren’t really persons anymore.

Rational self-interest may lead us on the path to morality proper. But even morality proper has its limits – and they seem to stop about the time you dig yourself out of that shallow grave and start feasting on people’s brains.

There will be morality amongst humans in the post-apocalyptic world – but it probably won’t extend to the post-human. Sorry.

Tuesday, September 25, 2012

Australia in the UN Security Council: Non-cynical reflections

Security Council meeting on Afghanistan, Sept 2012, UN Photos/Eskinder Debebe

On October 18, during the 67th session of the UN General Assembly, member states will vote on whether Australia will take up one of the non-permanent seats on the UN Security Council on offer for 2013-14. Australia is in (somewhat oddly, somewhat sensibly) the ‘Western European and Others Group’, which has two seats available in this vote. Australia will be vying against Finland and Luxembourg.

It is easy to be a little cynical about Australia’s bid, and its potential significance. The first thing I asked myself, when I heard of the bid, was whether I thought there was any major issue where Australia’s vote would realistically diverge from that of the United States (and, relatedly, that of the United Kingdom). If Australia were to be no more than a ‘yes-man’ to the US, which already holds veto power in the Council, then it was hard to understand what real contribution it would make to the Security Council decision-making. And I doubted whether Australia would in fact vote against the US position on any major matters of international security – for much the same reasons as Australia played a role in the Afghanistan and Iraq wars. At least when it comes to issues of national and international security, Australia takes its relationship to the US very seriously.

But, on reflection, I think this was the wrong question to ask. I think the important question is this: 

Would Australia be willing and able to impact on the wording of UN resolutions that are proposed by the Western bloc (US, UK, France)? 

And I think the answer to this question is ‘yes’.

The first thing to note is that the specific wording of UN Security Council resolutions is incredibly important – perhaps more so now than ever before. A matter of word-selection can make the difference between a military action being legal under international law or not (as has occurred in the context of the recent Iraq war, where arguments either way depend on the briefest sentences in two or three resolutions). Resolution minutiae can make the difference between vital action occurring or not occurring. Action by the peacekeeping force in Rwanda in 1994 was stymied, in part at least, because the operation there was directed to ‘contribute to’ the security and protection of civilians, rather than to provide security or protect civilians itself. The difference in wording can seem tiny – but the ramifications can be huge.

What would Australia have to offer, in this respect? Both because it is a ‘middle power’, and because of its actual location on the globe, Australia can have more reason to avoid striking a belligerent tone on matters of international security than the US or UK. Its middle-power status makes it more sensitive to the need to compromise, and to the concerns of other nations and peoples. It is these traits – which I think are evinced particularly in the role Australia has played in international peacekeeping for some time – that can make it a valuable contributor to the Council.

Moreover, Australia has very different neighbours to most of the other Western bloc countries. In a way, Australia’s inclusion on the Security Council effectively gets another representative for the Asia-Pacific area onto the Council. As well as helping bring to international attention various issues that are relevant to the region, it seems to me there is good reason to believe that Australia will try to nuance its behaviour on the Security Council in a manner that enhances, rather than detracts from, its relations with Indonesia, Malaysia, the Philippines and so on.

Ultimately, Australia is in an almost unique position (alongside New Zealand) in terms of its politics and positioning on the globe. While it unquestionably shares its fundamental values with the West, its dialogue and relations with its neighbours in the Asia-Pacific region make it sensitive to concerns that are less likely to be felt by Western European nations. And because of the structure of the voting in the Security Council, members that inject a spirit of compromise into resolutions can make the difference between Council paralysis and action.

The question, then, is not whether or not Australia would take a stand against the US or UK on any important proposed resolution, but whether it would impact on the text of resolutions in a productive and sensitive manner.

I think the answer to that question could be ‘yes’.

Friday, September 21, 2012

Ministry of Radical Philosophy (not Silly Walks) Sketch


(A man dressed in a suit complete with bowler hat comes into a shop. He is holding a cat with his arms outstretched, with the cat facing him.)

Minister: ‘Times’ please.

Shopkeeper: Oh yes sir, here you are.

Minister: Thank you.

Shopkeeper: Cheers.

(The Minister takes the paper with some difficulty, keeping the cat in position. He leaves the shop, and walks off. Cut to him proceeding along Whitehall, and into a building labeled ‘Ministry of Radical Philosophy’. For a moment the cat looks away, and the Minister freezes in place. Then the cat returns to looking at him, and he continues on. Inside the building he passes three other people, each behaving in indescribably strange ways. Cut to an office; a man is sitting waiting. The minister enters, still holding his cat.)

Minister: Good morning. I'm sorry to have kept you waiting, but I'm afraid my philosophy has become rather more radical recently and so it takes me rather longer to get to work. (Places cat on table facing him. Leans forward conspiratorially) Vicarious solipsism. I argue my cat, Snowball, is the only fully existing entity in the universe. The rest of us only exist when she is observing us. (Leans back in chair.) Now then, what was it again?

Mr Pudey: Well sir, I have a radical philosophy and I'd like to obtain a Government grant to help me develop it.

Minister: I see. (Keeping one eye on the cat.) May I hear your radical philosophy?

Mr Pudey: Yes, certainly, yes. Here it is. (Pudey stands) Ahem. First Premise: In our society, we believe in certain moral rules – like respecting human rights. Second Premise: In other societies, they believe in different moral rules. For instance they may not believe in human rights. The conclusion is –

Minister: (Sees the cat about to get distracted by licking itself.) Wait! (The cat licks its privates for a moment, and the Minister freezes in place. It then looks up again.) Right, continue.

Mr Pudey: Right. So – first premise, we have certain moral rules. Second premise, other cultures have different moral rules. Conclusion: (he pauses triumphantly) Other societies are wrong. (Pause. Minister waits cautiously.) Wrong in their moral beliefs.

Minister: That’s it, is it?

Mr Pudey: Yes, that’s it, yes.

Minister: It’s not particularly radical, is it? I mean, the premises aren’t radical at all, and the logic merely performs a low-level reification and a standard is-ought fallacy.

Mr Pudey: Yes, but I think that with Government backing I could make it very radical.

Minister: (rising) Mr Pudey, the very real problem is one of money. I’m afraid that the Ministry of Radical Philosophy is no longer getting the kind of support it needs. You see there's Defence, Social Security, Health, Housing, Education, Radical Philosophy ... they’re all supposed to get the same. But last year, the Government spent less on the Ministry of Radical Philosophy than it did on National Defence! Now we get $348,000,000 a year, which is supposed to be spent on all our available research projects. (he sits down) Coffee?

Mr Pudey: Yes please.

Minister: (pressing intercom) Ms Wendt, would you bring us in two coffees please?

Intercom Voice: You bastard.

Minister: ... (Smiles indulgently) Now the Japanese have a man who believes in panpsychism and psychological behaviourism at one and the same time; everything both has and has not got mental states. While the Israelis, at least, whenever Snowball is paying attention to them, the Israelis have – Ah, here is the coffee. (Enter secretary with tray with two cups on it. Puts it down on table.) Thank you for getting that. (Secretary slaps him hard. Minister smiles understandingly. She leaves. Minister leans forward to explain.) PhD Candidate. Radical ethical philosophy of language, you know. Any use of verbs counts as patriarchal marginalization. Promising stuff. (Leans back.) You’re really interested in radical philosophy, aren’t you?

Mr Pudey: Oh rather. Yes.

Minister: Well, I’ve told you about the Japanese. The Israelis have a girl who argues the Pythagorean commandment not to touch a white cock follows necessarily from his mathematical theorem. Could neuter entire mathematics departments. For their part, on the continent there’s a fellow who shows beyond dispute that computer viruses must be accorded basic moral considerability and constitutional rights. (Frowns and looks at computer.) Still haven’t received that email from him. And the Russians – well! (Leans forward and whispers.) Don’t spread this one around. Very hush-hush. But the Russians have a thinktank who – building ambitiously on Cartesian arguments for external world skepticism – are daring to question the metaphysical reality of government funded research grants!

Mr Pudey: Oh my God! (Falls off chair in shock.)

Friday, September 14, 2012

“Since you’re a philosopher you must know…” (#4 on the list of top ten things not to say to a philosopher)


So this is basically a gripe.
Read Pufendorf? Wtf? I haven't even read the contents of my own bookshelf!

There are a lot of things I like about being a philosopher. One of the things I like is that when people ask what I do I can tell them I’m a philosopher. Because of the type of places I like to hang out, this often results in them asking me what a philosopher is, which usually means we wind up talking about philosophy. This, for me, is a pretty happy result.

But sometimes it doesn’t work like that.

Sometimes, the person does know what philosophy is. In fact, they have read some, or heard of some somewhere. And finally, they have met a real, live philosopher to talk to about it.

Which is great!

Except when they say:

“Since you’re a philosopher, you must know and have read such-and-such…”

And then, when you tell them you haven’t read it, and maybe don’t even know much about it, their face falls in disappointment. How can you – a professional philosopher – not have read the one piece of philosophy that they have read? It’s like meeting a scientist who says no, she’s never actually encountered the third law of thermodynamics before, but she’s really interested in hearing about it from you. Wtf?

Often, it isn’t even laypeople who do this. In its most prevalent form, it comes from other academics and intellectuals. You stumble into a law seminar and they figure you’ve spent years studying Marx, or at least Macpherson on Locke. You visit the social sciences and they’re pleased they have someone to explain Foucault to them. You flee to the hard sciences, where it is inconceivable you haven’t read Popper and Kuhn.

The basic problem, I take it, is that non-philosophers simply have no idea how vast philosophy really is.

Let me give an example. A few years ago I had the pleasure of teaching on one of my favourite subjects: the nature of moral values, relayed through its history since the Greeks. I agonized over what to put in and what to leave out, and wound up picking a selection of the usual suspects: Plato, Aristotle, Epicurus, Epictetus and the Stoics, Aquinas, Hume, Kant, Mill, Nietzsche and G. E. Moore. No real surprises there.

But let’s canvass (as we did briefly in the first lecture) who’s not featured. The dramatis personae of those who didn’t make the cut is arresting. Most of the major political philosophers have been ignored, including those who had an enormous amount to say about moral values. The inclusions don’t even make space for the philosopher who often heads polls of the public’s view of the greatest and most influential philosopher of all time: Karl Marx. Nor does it include his immensely influential predecessor Hegel. The philosophers who most shaped the political landscape around us are missing: Hobbes, Locke and Rousseau. The philosopher who perhaps alongside Nietzsche gave us the single most sophisticated study of moral psychology doesn’t get a mention: Adam Smith (yes, he actually did write another book as well as the “Wealth of Nations”). With the exception of Mill there are no card-carrying feminists – worse still, there are no women at all (unless Harriet Taylor really wrote as much of “On Liberty” as Mill said she did); no Wollstonecraft, de Beauvoir, Irigaray… There is no-one at all from Germany or France in the last century, despite the litany of famous authors from there – Heidegger, Sartre, Derrida, Foucault… No environmental philosophers – Naess, Leopold.

And we are hardly getting started. Thomas Aquinas is the only exemplary religious philosopher included, despite the wealth of input from the Christian, Jewish and Islamic scholars on the Western tradition of ethics. Where’s St Augustine, at least? Which is another way of observing that the list of inclusions only portrays the most facile patina of historical coverage. After all, there is almost a fifteen-hundred year gap in the middle of the list (did you spot it? Between the Stoics and Aquinas). Are we really to believe that nothing worth inclusion happened throughout that vast span of time, as empires rose and fell, religious traditions clashed and laws developed and dissolved? And don’t even get started on the anglo-centricity problem. Surely the Confucian tradition, at least, warrants mention. And what about the input of non-philosophers? Ever heard of that Charles Darwin fellow? He had a bit to say on the nature of morals. And the sociologists – Weber, Durkheim? And the psychologists and psychoanalysts – Freud, Piaget, Kohlberg, Gilligan? Not worth a mention?

One could go on.

And on.

The point is this: the history and content of moral philosophy – itself just one section of philosophy – is enormous. It is possible to reasonably demand a justification for the exclusion of any one of the philosophers named above. But that’s just the point. Every one of those philosophers, and many more, warranted coverage in a course of this sort. But, it hardly needs to be said, the course only had so many lecture-slots, and harsh decisions had to be made.

Hegel? Cut.

Hobbes, Locke, Rousseau, Smith? Cut.

Anyone French? Cut.

Anyone writing in the last hundred years? Cut.

The rest of you name-brand guys are in.

Oh, except you Marx. You’re cut. No, I don’t care what that bloody poll said.

And just as there are only so many lectures in a course, so too there are only so many hours in a day, and only so many books that can be read. Most decent philosophers take months if not years of study to understand, and philosophers, like everyone else, need to specialize. So the chances are ultimately pretty high that if a layperson or an academic from another field has just bumped into a particular philosopher’s book, or just knows the philosophy that relates to their own domain, I simply haven’t read it. Sometimes I won’t even have heard of it. And I doubt the problem is confined to me; most philosophers would encounter this (though maybe they don’t worry about it as much as I do).

What’s the cause of this problem? Is it simply that those who bump into some philosophy just assume that – if they as a non-philosopher have encountered it – then surely it must be a central text within philosophy itself? Is it a type of optimism they have that what they read actually mattered, and that the time spent wading through those forty pages of ethical argument at the end of ‘Atlas Shrugged’ was worth it? (Gracious – I didn’t even mention non-academic philosophers above!)

Or is the problem with us philosophers? Is it a function of the fact that philosophy never wholly dispenses with its past? Philosophy snowballs through history, gathering accretions but only rarely shedding them, to the point now where a person conversant with the works of every name mentioned above appears more like a polymath than a recognizable philosopher. Granted, there are occasionally efforts to excise vast parts of that history, and so to make philosophy more manageable. Analytic philosophy in the early 20th Century – under the spell of logical positivism – tried something like this with all those “unscientific” forebears. So too postmodern theory is all too easily taken as an excuse for not bothering with all those superseded dead, white males from modernist days of yore. I wonder what a tremendous intellectual relief it must be to have some excuse for ignoring all these thousands of years of philosophical thought. But – in the end – we keep going back for more. One generation casts aside Kant, and the next resurrects him. The struggle between Plato and Aristotle begins anew. And the snowball ever grows.

Perhaps the problem is with the way we philosophers write our books. When a layperson or non-philosophy-academic picks up one of our books, it’s unlikely they’ll be bearing in mind that this is just the latest offering on a question millennia old, on which countless theorists have written and are still writing. On which countless wrong turns have been taken and dead-ends found. The reader has happened across just one tiny drop in the ocean of thought, just one skittering pebble in the vast avalanche that is philosophy. Hopefully, of course, the drop might turn out to be an important one, the skittering pebble might be one that ramifies into a new avalanche – but such questions might not be settled for decades or even centuries after publication. In all likelihood, the drop will remain a drop, the pebble just a pebble.

Try putting that as the cover blurb on your new book.

Goodness knows I haven’t. So perhaps ultimately the fault is – at least in part – one of my own making. Perhaps philosophers have a tendency to present to their readers a view of philosophy as so much smaller than it actually is, so as to make our own works comparatively so much larger – and therefore worth reading and thinking about.

So – even though it didn’t make it onto the back cover of my upcoming book – this humble blogpost can be my attempt to communicate to the wider world the point that philosophy is big. Really big. And that the chances are pretty high that I have never heard of that dusty old tome you happened across in that delightful used bookstore.

But I’m happy to hear about it anyway.

Tuesday, September 4, 2012

Publishing, rejection and peer review in philosophy


“Peer review is the worst form of appraising academic work, except all the others that have been tried from time to time.”

Today I want to offer just a couple of reflections on peer review and publication, in the context of philosophy and the humanities. Hopefully they will be of some help or comfort to those just setting out on the tumultuous journey that is academic publication, or – like myself – still wrestling with its slings and arrows.

The first thing to keep in mind – as my bastardization of Churchill’s line on democracy above is meant to capture – is that peer review is by no means a perfect system. To be sure, it has substantial merits. What better way to judge academic work than to send it to experts in the field? Stripped of the name of its author, the experts judge it only by the cogency of its argument, the originality of the thinking, and the plausibility of its premises. The experts make their determination – hopefully offering thoughts for improvement – and the good papers are published and the poor ones passed over (usually to be reworked and submitted elsewhere). For the most part, the system works and is resilient enough to the occasional less-than-ideal behaviour on the part of individuals within it.

Indeed, one can feel a real dignity and virtue in the system when it works well. Reviewers and editors routinely decide to publish papers with whose conclusions they profoundly disagree; but they nevertheless accept the significance and the cogency of the argument and decide it is worth publishing on that basis. So too, the anonymity of the process really can create a level playing field. Papers written by professors from Ivy League universities are passed over, and articles penned by PhD students from some backwater are selected. I may not have been to Cambridge, but I can publish (and have published) in its journal of ethics.

So that’s the good news. Of course it doesn’t always work like that. Sometimes, journal editors will reject papers immediately, before sending them out for review. And in such cases they will usually know the name(s) of the author, meaning the process is no longer blinded (processes can be put in place to fix this, of course). I hazard that in such cases editors are more likely to give the benefit of the doubt to well-known professors than to unknown students. There is no reason to believe editors are more judicious than reviewers in this respect, so it’s hard to see why they shouldn’t be blinded too.

So too, reviewers can be biased. They can evaluate the argument on the basis of whether they approve of its conclusions, or not. Now I think this is often not as large a problem as it might seem, as there are institutional factors that press against it. Effectively, philosophers like (need!) to argue with each other, and to do that requires finding intelligent people who disagree with them. So a good argument against their favourite theory is not something they want to suppress – but rather to publish and then respond to. But bias can rear its head at certain points – both the Sokal hoax and the ‘climategate’ scandal carried the charge that reviewers in those cases focused on whether they agreed with the conclusion, rather than the cogency of the evidence and argument itself.

But the major problem with peer-review is simply this: journal referees are often time-poor, and there are few institutional reward-mechanisms to ensure they carry out their task diligently and judiciously. If a referee makes a laughably wrong decision in rejecting a paper (and the world is littered with stories of what-came-to-be landmark (even Nobel-prize winning) articles that were passed over with brusque rejections) then there is usually no professional fallout from their doing so. Indeed, the institution that employs them will almost never know what happened. This means that every time a referee gives a fair and careful appraisal of a manuscript it is almost always an act of virtue. They did it because they themselves wanted to do the task well, not because there was a system of accountability by which they might be judged and found wanting.

Even if there are no issues of bias or brusqueness, it still remains a grim fact that top journals have (and must have in order to function) very high rejection rates – often well above 90%. This means that reviewers need to be pretty critical; it’s not enough for them to think the manuscript is okay. To be successful the manuscript has to convince two separate referees (and often the editor as well) that this paper simply cannot be passed over; it is brilliant and a perfect fit for the journal. Unless you are a scintillating genius, this means that if you are submitting manuscripts to top journals, you will get rejections.

If you are anything like me, you will get a lot of rejections.

So how to survive, in this world?

The first thing is, of course, not to take rejections too seriously. If you’re in this game, you will get rejections. You will get lots of them. The mere fact that you have had a long series of rejections on a particular manuscript does not mean that the paper is not worthwhile. I have met colleagues who are surprised when I tell them one of my manuscripts has been rejected seven times and I am still revising it and sending it out. (This is not merely my own personal stubbornness; one of the smartest and most well-published early-career philosophers I know told me just the other day that she had hit seven rejections with a manuscript and was still sending it out.) To be sure, in some cases it may be that the paper just isn’t good enough – if all the reviewers tend to be saying the same thing, then changes are in order, at least. But you can just be unlucky. If you believe in the paper and its significance, then you need to stick with it. All of my best publications were rejected at least once by some journal or other. My favourite paper, and the one I feel to be my best contribution to political theory – Two Concepts of Property (The Philosophical Forum, 2011) – had no less than six rejections at various venues before it was accepted. (Though, to be fair to the reviewers that had rejected it, it was heavily revised as it went along, especially in the light of some sympathetic but trenchant critiques put forward by two excellent referees from Social Theory and Practice.)

The point is more general. Just as rejections can make you doubt a paper, they can make you doubt yourself. I think it is fair to say that I have a pretty robust self-belief about my capacities as a philosopher. But when you are hit, again and again, with rejections from multiple sources, about multiple papers, it is easy to start thinking you are getting things wrong in some deep way. But the fact is that if the journals you are submitting to only publish about one in twenty submissions they receive, then even if you are producing high-quality work, multiple rejections will be the order of the day.

With this in mind, it is worth remembering that – in publishing as in so many other pursuits – when you are doing well you will tend to chalk it up to your merits, and when you are doing poorly you will put it down to luck. We attribute successes to things in our control, and failures to things outside our control. This is just a part of how human beings think, but it can be a persistent bias. If you (or one of your peers) gets a string of acceptances, then you/they are probably doing a lot right. But you/they are probably also lucky. Around 2009, during my PhD, I received three journal acceptances in a short period of time. Frankly, at that point I thought journal publication was the easiest thing in the world, and that referees were clearly able to recognize manifest genius (i.e. me) when they saw it. Sometime in 2011, I began to realise how wrong I had been. The manuscripts had been good, yes, but I had been lucky as well. And that lucky start didn’t stand me in good stead when I began to receive rejection after rejection for my next set of manuscripts. Over time it all evens out, but that can mean long periods of rejection after rejection (people are often surprised at how many times a fair coin can turn up heads, if you toss it for long enough). That can be pretty tough to handle, especially early in your career, when each publication really counts, and you need them now – not later.
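The coin-toss point can be made precise with a little arithmetic. Here is a minimal sketch, with the caveat that the figures are my own assumptions for illustration: suppose a genuinely publishable paper has a (generous) 20% chance of acceptance at any given journal, and that each submission is independent.

```python
def prob_straight_rejections(p_accept, k):
    """Chance of k consecutive rejections, assuming each submission is
    independent and accepted with probability p_accept."""
    return (1 - p_accept) ** k

# Even granting the paper a 20% chance at each journal, seven
# rejections in a row happen roughly a fifth of the time.
print(round(prob_straight_rejections(0.20, 7), 2))  # → 0.21
```

In other words, a run of seven rejections – the very run described above – is entirely consistent with the paper being good: bad luck alone produces it about one time in five under these assumptions.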

My next point is that there is never a comment from a referee that is not worth thinking seriously about. I mean that. Never ever. So what if they just skimmed your argument? So what if they clearly don’t know the literature as well as you do? So what if they hold to a shoddy interpretation of Hume or Foucault or whoever? It still matters what they thought. When your manuscript finally does get published, it will be read by people who just skim it, who don’t know the literature, and who have dubious understandings of Hume and Foucault. You need to consider those readers as well – are there some small changes you can make so as to make your main point clear to these readers? Could you make the abstract clearer – so that even the laziest scholar will not misunderstand what you are aiming to do in the paper? The point here isn’t to try and please everyone – that’s impossible. The point, rather, is never to write off a reviewer’s comments merely because you are sure they are an idiot. Often, they will still have said something that is worth thinking about seriously. It will usually have cost you between two and six months to get those few comments; value them accordingly.

(The point about the abstract is worth emphasis too. I often fall into the trap of getting the paper itself very well-polished, and then realise I need the abstract only when I am about to submit, and just whip something up quickly. This is a mistake. The abstract is crucial. Every word in it matters. It will be the first thing editors and reviewers read – and the first thing scholars generally read when your article is published. You have to get it exactly right.)

Next point: always take the revise-and-resubmit process really seriously. A ‘revise and resubmit’ result should be what you are aiming for when you submit a manuscript. Most good journals will almost never give a straightforward acceptance. And a ‘revise and resubmit’ is usually an opportunity to really improve your paper. The comments you have been given are provided by someone who is clearly sympathetic to what you are doing (otherwise it would have been rejected). This is often the single best source of constructive criticism a scholar can access. Referees and editors will usually not require that you have made every change requested. What they will require, at absolute minimum, is that you have thought very carefully and sympathetically about their comments, and how they might benefit the paper. It is not enough just to do this – you need to demonstrate unequivocally to them that you have done it. (I’ll often provide a document detailing all the changes and my reasons, even if this is not requested.)

One final point: always always always be polite when dealing with journals, even if you feel you have been hard done by. There is nothing to be gained by telling the editor how incompetent one of her referees is, or that she should at least have sent it out for review, or whatever. Editing a journal is hard and often thankless work. The editor cannot second-guess her referees, or her own judgment, or her task becomes impossible. Moreover, venting your frustrations on the editor is a waste of emotional energy that would be better channelled into thinking seriously about whether and how to rework the manuscript. 

Take the criticism to your typewriter, as the old writing adage goes.