Are human lives more important than profits? Is scientific progress more important than people’s psychological comfort? More challengingly, is stability more important than justice? Is wisdom more important than happiness?
Questions of this sort – taking two good things and asking about their relative importance – come up from time to time, and they seem like reasonable, interesting questions. Indeed, they seem like exactly the sort of questions that moral philosophy should help with, if it’s any help at all.
But here’s the thing – looked at in terms of moral philosophy, these questions are unanswerable, and in fact strictly meaningless. The simple reason is that the two values share no units: when we compare wisdom and happiness, how much wisdom and how much happiness are we comparing? ‘10 points’ of each?
We might say that one is absolutely more valuable than the other: e.g., that enormous wisdom for millions of people matters less than one second of mild pleasure for one person – or, vice versa, that one moment of insight is more valuable than years of blissful happiness for millions. But claims like that are hardly plausible.
The alternative is to say that some amount of one value can outweigh some amount of the other: it’s more important to prevent a bloody civil war than to be scrupulously fair in dividing a cake. But change the amounts and the opposite holds: it’s more important to correct a centuries-old racial injustice against a whole nation than to stop a drunk from starting a fistfight. Without shared units of stability and justice, though, this doesn’t let us say that either value is more important full stop.
But it does seem like the questions are meaningful, doesn’t it? So what’s going on? It seems to me that we’re making an implicit reference to ‘the quantities that might be traded off in real-life decisions’. So when we ask about ‘money vs. lives’, we have in mind a rough sense of the situations in which some amount of lives might conflict with some amount of money, and a rough average of the amounts typically involved. The concrete situations in which these questions come up provide a ‘conversion rate’ that makes the questions meaningful.
What I want to draw attention to is that this conversion rate is relative to a particular situation – the sorts of conflicts that arise now didn’t arise 200 years ago, and may not arise for other people with other projects. So the question can’t be one of true-vs-false, and it can’t be settled at the level of moral philosophy as a pure intellectual discipline.
In a recent post I distinguished principles, which can be true or false, and are suitable for fairly abstract consideration, from paradigms, which can be appropriate or inappropriate, and require consideration of the (fairly general) real-world situation and its historical context.
So the point could be expressed simply by saying that the sort of questions I opened with are questions about paradigms, not principles. The answer they demand is not a verdict that one value or the other ‘is’ more important – even if you’re a moral objectivist/realist – but a judgement about whether, given the way the facts are, it’s more appropriate to prioritise one or the other. A fairly banal point, but one that I spent a while in the study of ethics without really noticing.