Sunday, 18 October 2009

7. What is ‘being moral’?

Oh, how the philosophers love to argue about this one. It all started with Socrates (see Plato’s works ‘Protagoras’, ‘Meno’ and ‘The Republic’ in particular). Much later in the West, the humanistic turn of the Enlightenment needed to find an alternative to the idea that being moral was following divine commandments. Two competing theories captured attention: Kantianism and Utilitarianism.

The ‘Twitter’ version of each: Immanuel Kant thought being moral is acting rationally. Human beings are distinct from other creatures in having reason. Therefore, to be immoral is irrational and subhuman. John Stuart Mill thought what is moral is that which increases the well-being (‘happiness’) of the majority of affected parties. Immoral actions cause greater suffering to more people than moral ones.

Baldly stated, Mill’s theory does not address the question ‘why be moral?’ If you don’t care about others, immorality is in your best interests. For that reason, he tried to link collective well-being with individual well-being (‘what’s good for everybody is good for me’) – most think unsuccessfully. Kant’s theory only bites if you buy the idea that rationality is your defining characteristic and benchmark for action (a bit anachronistic these days).

Back in the 21st century, a lot of philosophers think that ‘being moral’ essentially revolves around the notion of impartiality (considering everyone equally). They also think the question ‘why be moral?’ is misconceived: you can’t justify selfless behaviour on selfish grounds. To say ‘You should think about others because it will have this benefit for yourself...’ is to misunderstand what morality is all about, they say.

Not surprisingly, I disagree with everybody. Ethical behaviour is not impartial behaviour (treating everyone equally) but fair behaviour (treating everyone fairly). It is the principled ordering of partialities: having consistent reasons for weighing some people’s interests as counting for more in some situations, but not in others. Although you might be grateful, you would think it odd (and presumably would not do the same) if I left my own children to burn in a fire in order to rescue yours for some Mr Spockian logical reason. It would be both moral and reasonable to give preference to my own children first.

This can easily be justified: a society that did not value parents’ protective preference for their own offspring would probably be less functional than one that did (digression but one of my pet subjects: compare the breakdown of family values in the West and the resultant social problems with the strong family ties and social cohesion in many Asian communities).

If by ‘moral’ we mean ‘having consistent reasons for weighing some people’s interests as counting for more in some situations, but not in others’, then we are all moral to the extent that we share and apply the same set of values. This provokes questions about moral relativity (are competing moral value systems equally ‘moral’? Can one person have consistent reasons that are out of tune with the rest of their society? Won't all criteria for disallowing some sets of consistent reasons sneak in moral prejudices somewhere along the line? – the short answer to all these is: no.), a subject I’ll return to in later posts.
