Dr. Cory Clark is a moral and political psychologist at the University of Pennsylvania. We talked about her research on political bias, tribalism and the psychology of fake news on social media.
Dr. Clark, you’re a social psychologist and an experimental philosopher. What kinds of questions do you research?
Experimental philosophy, in my case, means applying social and experimental psychology to answer certain philosophical questions. My two biggest interests are political bias and how we hold other people morally responsible. In the domain of political bias, I’m interested in how people’s ties and commitments to their political ingroup affect how they evaluate information and what they come to believe is true about the world. As for moral responsibility – the more experimental-philosophy side – I explore how people’s desire to hold others morally responsible shapes their beliefs about how much control those people have over their lives and the choices they make.
What is political bias exactly?
I’ll tell you about the two that I think are most important for how we construct our reality. The first is selective exposure, a tendency to seek out information that supports one’s group’s beliefs and to avoid information that would challenge them. People read media that is more favorable towards their ingroup, socialize with people who support their beliefs, follow social media profiles that say positive things about their ingroup and negative things about their outgroup, and so on.
The second is motivated scepticism and credulity. Whenever we are confronted with a piece of information we want to believe is true, we’re very credulous – we believe it quickly and don’t look into its sources. But when we’re confronted with information that opposes what we want to believe, we are really sceptical: we question the methods, we question the credibility of the source, and we invent all sorts of other reasons not to believe it.
One way psychologists test this is to take a scientific study, present people with the exact same methods, but change the conclusion so that it supports or opposes the participants’ political beliefs. People will say the methods are better when the study comes to a conclusion they want to believe than when it opposes their political beliefs.
So there are really two layers of bias. First, people approach information that supports their beliefs and avoid information that opposes them. Then, when people are actually exposed to information, the second layer kicks in: they treat the information more favourably if they want to believe it.
What about the moral responsibility? What methods do you use for researching this topic?
I found that when we desire to punish another person, we perceive them as having more control and autonomy over their decisions than when we don’t have that desire. So sometimes we see ourselves as less culpable or less in control of our own outcomes when we don’t want to take responsibility for something we have done, and the opposite holds as well – when we want to hold someone else responsible, we see them as more in control of their decisions.
In a recent paper with some colleagues, we looked at this in a political context – at how people blame their political outgroup for its actions. In one study we had participants read about someone from their ingroup harming someone from their outgroup – a Democrat harming a Republican, or a Republican harming a Democrat – and we found that people attribute more free will to outgroup members harming members of their ingroup, but not so for ingroup members harming the outgroup. They perceive their political opponents as more responsible for their immoral behaviour than their ingroup members, as a way of shielding their ingroup members from moral responsibility. So if it’s a politician a person likes, they’ll say it wasn’t intentional, it wasn’t on purpose, or the politician didn’t have control of the situation. But if it’s a politician they don’t like, they hold them more responsible, see them as morally worse, and want to blame and punish them more, even for the exact same kind of immoral infraction.
We can see that today with this extreme political tribalism. How do you look at political tribalism from the perspective of a social psychologist?
Human beings evolved in smaller groups, and throughout history, human groups would confront other groups, often getting into conflicts, killing one another, and acquiring each other’s land and resources. So the human groups that were most successful and passed their genes on to modern humans were the ones that were very good at coordinating and cooperating within their own group. Social psychology has shown that people are really quick to identify who is a member of the ingroup and who is a member of the outgroup, and they treat ingroup members better than outgroup members.
We see that even with children. In one study, researchers put a child in a yellow t-shirt among other children wearing either yellow or green t-shirts. They found that the child in the yellow t-shirt treated the other children in yellow t-shirts better than the ones in green. Our psychology is designed to recognize who is in our group and who is not. We cooperate better with ingroups and less with outgroups, and sometimes we even behave aggressively towards them, or hate them.
And sometimes, the hatred towards the outgroup can be the basis of ingroup identity?
It is definitely a part of it. Some people argue about which is the stronger motivating force: loyalty to your ingroup or hostility toward your outgroup. I think loyalty towards the ingroup is important, but part of it can be that the members all hate the outgroup together, and that is what bonds them.
My PhD advisor published a paper about political mavericks. These are people on your own side, in the same political party as you, who shift to the other side a little bit, perhaps on one issue. They are still on your team, but they’re sort of breaking the rules by agreeing with the other side. And people don’t like mavericks. People don’t want politicians who appear disloyal in any way. What this shows is that people are punished for being disloyal to their ingroup. Because that punishment can mean getting ostracized, people have strong desires to prove to their ingroup that they’re loyal group members. So when ingroup identity is formed around hatred of the outgroup, members need to conform and start hating the outgroup in order to fit in and not get ostracized by their ingroup. I would say this is happening in almost all modern political groups.
Does this hatred towards a political outgroup fall into the category of political bias?
It can, but it depends, because you can dislike the political outgroup for legitimate reasons. Maybe they support a policy you find genuinely horrible. But we know there can be a bias element to it, because we see that people will dislike their political outgroup for the exact same things their political ingroup members do. They hold these double standards.
Liberals and conservatives alike, right?
Yes, it’s a fundamental human tendency. If you, for example, take a politician in your group who has a marital affair, you might say it’s irrelevant to his ability to lead; it’s not related to politics; it shouldn’t affect our view of him as a leader. But if someone from your political outgroup does it, you might say it’s totally immoral; you can’t trust this person; their ethics are completely misguided. We hold ingroup and outgroup members to different standards, even when they do the exact same things.
This is one thing I looked at in my research: how people treat the exact same behaviour, the exact same idea, or the exact same policy differently, depending on whether it reflects poorly on an ingroup or an outgroup member. Both liberals and conservatives have this tendency: they’re more forgiving and accepting towards things that benefit their ingroup and take harsher stances on things that could potentially harm it.
What is the role of the internet and social media in creating this extreme form of political tribalism we see nowadays?
There are a lot of studies and analyses of what has caused such severe polarization in recent years, and there is not much consensus. Social media, and media in general, definitely contributed, but both respond to demand. I’ll give you an example. A person publishes a tweet that leans somewhat liberal, so it gets liked and retweeted by more liberals, and more liberals start following that person. Later, the same person publishes a somewhat conservative-leaning tweet, but their followers are liberal, so they won’t like it or retweet it, and the signal to the author is: post more liberal tweets and fewer conservative ones. A lot of polarizing content is driven by the demand that people create for it. They don’t necessarily want nuanced, balanced information. They want information that is going to make them more sure that they’re right, and information that they can throw in the faces of their enemies.
And this kind of information is also easier for our brains to process because, as Daniel Kahneman says, we’re lazy animals and don’t like to put a lot of effort into critically evaluating information.
Yes, information that is congruent with our mental system doesn’t violate our assumptions about the world and therefore requires less thinking.
Is this a human trait that is being exploited by the fake news on the internet?
Studies have shown that fake news travels across the internet further and faster than real news, probably in part because fake news doesn’t have to be in touch with reality at all. It can be really sensational, with extreme headlines that grab people’s attention. People are also attracted to information that is novel and sensational, has moral tones, and creates a strong emotional reaction. What is important now is that psychology has made some progress – we have a more sophisticated understanding of how human brains work – and people can capitalize on that knowledge. If, let’s say, I know that people are attracted to certain kinds of headlines, I can use that knowledge to craft exactly the kind of headline that is built for you to click on and spread across the internet. With this greater knowledge of human psychology we can spread information or misinformation really easily, make people feel a certain way, get them outraged, and increase their hate and distrust of the outgroup. And I think that is definitely happening.
Do you see social media and fake news as a threat to our individual and collective imaginations? Like blurring the line between reality and fiction?
On the internet you can put a piece of (false) information out there and it quickly spreads to millions of people, which means millions of people come to believe something that is utterly false. So it is a threat. Social media at the moment relies on the everyday person being educated enough to tell a good source from a bad one, and to recognize the markers of good and bad information. But not everybody is capable of that. Fake news spreads faster among older populations because they have less experience with information delivered this way. They’re used to reading a newspaper whose information an editor had to approve. Now they are confronted with information online, and they have not been repeatedly told that they shouldn’t believe everything they read on the internet. So social media is, to a degree, to blame for this severe polarization, which in turn creates even more demand for the polarizing content people want.
Is this bias present in traditional media as well?
That happens with the media too. All the outlets monitor their clicks; they monitor which posts are successful and which are not. So if they have a bunch of liberals following them, they’ll see that every time they post a liberally-skewed article it’s super successful, and on that basis they choose the future topics they report on and the angles they use. So you have a symbiotic relationship between consumers, with their own political biases and tribalistic tendencies, on one side, and on the other the media, which depend on those consumers clicking and reading in order to survive, succeed, and grow. I’d say the media seem to lack a certain amount of humility or uncertainty that you’d hope to get from them.
Is there a way to minimize these irrational effects on our decision making?
You can be conscious of the influence of emotions on your decision making. You can make decisions in more or less emotional states – for example, if you’re angry, you can decide to postpone a decision in order to cool off first and reflect. But our decisions are mostly based on who we are, and in this case on your ability to cool off. People with aggressive tempers might not have the ability to calm down first and then decide. It takes effort and a true desire to do it.
I would also say a person needs the desire to make more rational decisions and to avoid the cycle of tribalism. You can do that by shaping your environment – for example, by following members of both political sides on Twitter, being equally critical of both, and applying the same standards of evaluation. If you agree with everything your party or leader does, and universally disagree with everything the opposing party or leader does, that is probably a bad sign about the rationality of your decision making – you’re not fully appreciating both sides the way you should. I strongly encourage people to find and talk to smart people they disagree with but respect and know are good people. I think that is a good way to better appreciate the good that both perspectives have to offer.