
Chapter 7: Selfishness is not a dirty word

All that you buy
beg, borrow or steal
All you create
All you destroy
All that you do
All that you say

Eclipse

Pink Floyd


OK, experience tells me that this is the chapter people will be least willing to accept – but that doesn’t mean it doesn’t hold a potentially important insight into how life works.  I actually think this might be the most important chapter in the whole collection, certainly at the “big picture” level.  You can read the rest of the chapters in this collection and happily ignore this one if you like, but I suspect that life will throw up fewer surprises and other people’s behaviour will be far more understandable if we accept something like the basic premise here.

The reason people don’t like this chapter is that I say we are all fundamentally guided in our choices by selfishness.  No one wants to hear this.  Especially at barbeques and parties and the other places where I like to talk about it given half a chance.  To make the idea more palatable I have tried other words and descriptions, but none of them really work as well and I tend to think that being more direct here is not a bad thing.

Of course I’m not the only person to have identified this concept. In Behavioural Economics it is theorised that behaviour is driven by three factors – maximising outcomes, minimising effort and avoiding negative affect (feelings).  I agree totally with this, and what I call ‘selfishness’ here is the nett sum of these three considerations.

I’m not saying that every action or person is selfish in the negative, pejorative sense the term conjures up, though personally I suspect that a fairly large proportion probably are.  What I am really saying is that in any decision (conscious or instinctive), as an organism we weigh up the choices available to us, take all the things we know of into consideration, apply our own interpretations and biases – and then do what we think is in our own best interest.  What this turns out to be can be just about anything, and not just things we traditionally think of as being “selfish” – but whether ‘good’ or ‘bad’, each of them is selfish in this literal sense.

The thing to understand about this is that “our own best interest” is a very personal, broad and dynamic concept.  Our own best interest also changes – as a very simple example, we will choose differently if we are in a situation by ourselves, with a bunch of strangers, or with a bunch of people we know.  Our judgement of what is in our best interest is affected not just by social factors, but by our awareness of options, how likely we think things are to happen, and how good or bad they would be if they did.  It’s partly rational and logical, and partly irrational and emotional.  In the same situation, different people will respond very differently – but my hypothesis is that each person will have made the choice they feel is in their own best interest given the information they had to work with.

People don’t like this idea, but I am not trying to be demeaning or critical, just to understand how behaviour really works.  People have given me many, many examples of times when someone has done something that they would claim was not in that person’s best interest.  However, my answer is that in most (and probably all) of these cases the person has still chosen what seemed to them the best available option of the choices they had at the time.

The parent who dies saving the life of a child is an example which seems to come up a lot, and people say “see, that wasn’t in their best interest was it?”.  My answer is, if the only other choice they could see was to live on after choosing to stand by and watch their child die, perhaps they might see it differently in that moment.  Just because every available choice is bad doesn’t mean the one taken wasn’t the one the person judged to be in their best interest at that moment.  The “least bad” thing is still the best thing available, and in this context the ‘selfish’ choice.  Probabilistic Determinism (Chapter 5) tells us that we don’t have all possible choices available to us.

The next part of the discussion infuriates people even more, so if you don’t like what you’ve read in this chapter so far, you might want to take a deep breath before reading on.  Hopefully you will though – as I said, just because we don’t like an idea doesn’t mean it is without value.

Altruism, people tell me, is incompatible with my theory.  They say when someone behaves altruistically they are giving something of their own up for the benefit of others, which clearly is not in their nett best interest (especially when it puts them in danger), and thus they conclude that my theory has easily been disproven.

Not so.  Altruism is completely compatible with my theory, and in fact a natural consequence of it.  For some people, helping other people makes them feel very happy, satisfied, pleased, useful, loved, respected – any number of positive emotions.  For these people, what they give up in time, money or safety to be altruistic is less valuable than the positive emotions and cognitions they get in return, and so it is completely feasible that it is in their best interest to do so.  It would be no easier to stop some people from doing nice things for other people than it would be to get those whose experience is reversed to start doing nice things.  They do it because they quite literally love it.

So, to my mind, what we call ‘altruism’ is not really people choosing something that is not in their interest – it is exactly the same principle, just resulting in a different type of choice.  What they selfishly choose to do is something that society values and respects, but my belief is that its underlying drivers are the same.  You can see why this is not a popular theory, but I mean it only as a mechanical description that we can benefit by understanding.

In fact, I am often asked at this point why I bother telling people about this theory at all, if all I’m going to do is badmouth people doing good things and say they are actually selfishly doing it for themselves.  Why even try to make the distinction if it leads to the same behaviours?  Why not let people enjoy the experience of altruism?

Well, the reason for trying to explain this theory is that I want to understand how and why behaviours (desirable and undesirable) happen.  We spend most of our time trying to influence other people to do the things we want [don’t even start!], and if we don’t understand why things really happen, then we can’t do that very effectively.  If we want to encourage altruistic behaviours, we need to do it by understanding what is really going on.  Appealing to people to make a sacrifice and be altruistic is a highly inefficient way to influence behaviour if that isn’t really how it works.  If we can understand how to make it in someone’s best interest to behave altruistically, then they will just do it of their own accord.  THIS is why it’s important not to misrepresent altruism in nice but inaccurate ways, because it just wastes resources and good intentions.

Take volunteering as an example of a behaviour we would like to be able to influence (in this case, increase).  People volunteer for a range of reasons – it makes them feel good to do their part, it gets a necessary job done, they feel a sense of obligation, they feel a sense of purpose, they feel a sense of camaraderie, all sorts of things.  Most places that rely on volunteers always need more volunteers, and it tends to be the same small group of people who repeatedly volunteer in different ways and different settings.  This small group try to attract other people by telling them all of the very valid reasons that they themselves volunteer – but it doesn’t usually work very well.  Though they attract a few like-minded souls, they can never work out why more people don’t volunteer when they personally get so much out of it.

I can tell them why.  They don’t get more volunteers because they are communicating with themselves, not with the other people.  If the other people were going to be swayed by that argument, they would already be volunteering themselves.  Actually, most of the non-volunteers understand the deal perfectly well and are doing the same trade-off of the costs and the benefits – but with different values, beliefs, constraints and personalities they just come up with a different answer.  For them, it isn’t in their nett best interest, and the emotional return just isn’t valuable enough for them to justify the costs in time, effort and whatever else they anticipate.  It’s not that they don’t get it, they just interpret it differently.  If you want to attract different volunteers, you need to offer them a different trade-off – not just try to articulate the same one better in the hope that they will suddenly see the world the same way.

This is why I think it is valuable to understand that people are looking for what is in their own best interest, even when that results in desirable outcomes.  If we rely on appealing but fundamentally false explanations, then we can’t be very effective in increasing desirable or decreasing undesirable behaviours.  Thinking about it now, maybe this is one of the reasons sociopaths and psychopaths can be so persuasive and high achieving – they don’t care about social desirability, they just coldly pull apart cause-and-effect and use it for their own benefit.

Anyway, as I said at the outset, this is not a particularly crowd-pleasing theory, but I think it has a fair degree of utility.  Since applying it both personally and professionally, I have found myself far more able to understand a lot of what I see around me, and maybe you will too.  If you know what someone really wants, there is a good chance you can predict their behaviour.  If you see how they behave, there is a good chance you can understand what they really want.  When you can figure out how to align their perceived best interests with your desired behaviour, then you can start to make progress.

I think this mechanism can explain a lot of human behaviour.  When large numbers of people do things that on the surface seem incomprehensible to us (good or bad), we can’t just write it off as an anomaly, we need to be able to understand how it has come about.  To understand their choices, we need to better understand the trade-off they made, not the one we think (or like to think) we might have made in their place.  I believe this works something like what is described by the formula below.  Obviously people don’t consciously run this formula in their heads every time they make a choice, but I think something similar to it fires off in their brains, and produces a similar effect.

S = p × V × i

Where:

p = our expected probability of something occurring

V = the nett value (good or bad) that we think we would experience if it did occur, taking into account the costs (time, effort, financial, social)

i = how immediate we think the outcome will be if it does happen

In this formula, for each of the options available to us we think about how likely they are to happen, multiply that by how good or bad it would be, run it through a filter where things that are more immediate often have a greater salience – and then choose the one with the highest score (S).
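To make the mechanism concrete, here is a minimal sketch in Python – purely my own illustration of the formula above, with invented option names and numbers, not a claim about how brains actually compute anything.  Each option gets a score S = p × V × i, and the ‘selfish’ choice is simply whichever available option scores highest.

```python
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    p: float  # expected probability of the outcome occurring (0 to 1)
    V: float  # nett value of the outcome to us, costs already subtracted (negative if bad)
    i: float  # immediacy weight - more immediate outcomes feel more salient (0 to 1)

    def score(self) -> float:
        # S = p * V * i
        return self.p * self.V * self.i

def choose(options: list[Option]) -> Option:
    # The 'selfish' choice is whichever available option scores highest.
    return max(options, key=Option.score)

# Invented numbers: weighing up how to spend a free Saturday.
options = [
    Option("volunteer at the fete", p=0.9, V=4.0, i=0.5),
    Option("stay home and relax",   p=1.0, V=3.0, i=1.0),
]
best = choose(options)
print(f"Chosen: {best.name} (S = {best.score():.2f})")  # staying home wins: 3.00 beats 1.80
```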

This model tells us that we have a few levers we can pull to try to change someone’s decision.  We can make them aware of more options to evaluate; we can change their estimates of the probability of things happening; we can try to change the value they put on different outcomes or their perception of the costs; and we can try to make things more immediate and therefore more important to them.

These are, of course, the levers people have often tried to use to influence other people’s choices.  The insight here is simply understanding what the decision-making mechanism is.  In using these levers, we need to give the other person a situation where a different choice becomes the highest-scoring option, and to do that, we need to understand their scoring system.
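Continuing the hypothetical sketch above, pulling a lever just means changing one of the inputs until a different option becomes the highest-scoring one for that person:

```python
# Pulling the 'value' lever: if volunteering is made more rewarding for this
# person (say, a role that matches their interests, or visible recognition),
# its nett value rises and it overtakes staying home (0.9 * 8.0 * 0.5 = 3.60).
options[0].V = 8.0
print(choose(options).name)  # now: volunteer at the fete
```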
