So far we’ve seen that when we hold stereotypes about particular groups, those stereotypes may influence what we notice, how we interpret people’s behavior, and what we remember. That naturally leads to the next idea, which is that stereotypes also influence behavior.
One of the first ways this has been explored is with the idea of a self-fulfilling prophecy. Self-fulfilling prophecies occur when a perceiver’s expectations influence their own behavior, which can elicit expectancy-confirming behavior in a target individual, thereby “proving” the perceiver’s expectation true. If I expect someone to be unintelligent or incapable of understanding something, for example, I may be very short with them, giving incomplete or unhelpful explanations. They may respond with confusion and several questions as I roll my eyes, thinking about how dumb they are. My expectation came true not because the person was unintelligent, but because of how I treated them.
Studies show that:
(1) teachers who expect certain students to “bloom” in the coming year spend more time helping and challenging those students, so that by year’s end, those students (who were randomly assigned the label) have actually learned more.
(2) men who are shown a photograph of an attractive woman and believe they are speaking with her on the phone are nicer, and they elicit more positive social behavior from the woman they are actually speaking to (who is not, in fact, the woman in the photo).
(3) people interviewing Black applicants sit farther away, make more speech errors, and conduct shorter interviews. In turn, applicants treated with such “low immediacy,” whether Black or white, perform worse in interviews.
In all these examples, perceivers are essentially manufacturing “proof” of their initial expectations – that certain students are better than others, that more attractive people are more socially skilled, and that Black applicants are less qualified job candidates. The problem is, people usually don’t realize that their own behavior (and nothing internal to the target) led to those outcomes.
Even without our behavior as a mediator, stereotypes can sometimes ‘prove’ themselves true. Stereotype threat occurs when a person fears being judged in line with an existing stereotype, or fears fulfilling someone’s stereotypic expectancies. The mere existence of a stereotype in society can thus interfere with someone’s performance in a way that provides ‘proof’ that the stereotype is accurate. Research shows that when group memberships or stereotypes are made salient, performance suffers in stereotype-consistent ways: women underperform on math tests when gender is highlighted, for example, and Black students underperform on tests described as diagnostic of intellectual ability.
No group can really escape this (for once, general privilege doesn’t seem to matter; what does matter is the existence of a negative stereotype!). If we’re afraid we might confirm an existing stereotype, we often do. There are exceptions: when a stereotypic trait is exceptionally discrepant from one’s self-concept, we may not fear confirming the stereotype in the first place, and our performance may not be negatively impacted (in fact, it may be boosted!).
Those first two effects – self-fulfilling prophecies and stereotype threat – show how stereotypes are self-perpetuating. By influencing the behavior of target individuals, we manufacture evidence that the stereotype is true. Stereotypes are quite powerful in that way. But they may be even more problematic in the way they fill in details in decisions and quick reactions.
As we saw last week, stereotypes influence what people notice and remember. Think about how important that can be in situations such as jury trials, where jurors are given a massive amount of information, some of which is likely incriminating (the person is on trial, after all), some of which is probably exculpatory (if the defense attorney is doing his or her job), and some of which is fairly neutral or irrelevant. If we focus on information in a stereotype-confirming manner, the natural consequence is that we should be more likely to convict individuals who are members of groups stereotyped as criminal or violent.
One study told participants that they would be participating in a study on jury decision-making. Participants read several pieces of evidence about the defendant and the case, then rendered guilt-likelihood judgments and were asked to recall as much of the evidence as they could. The researchers changed only the name of the defendant across the two conditions of the study. That is, one group of participants read that the defendant was Carlos Ramirez (presumably Latino), and another group read that the defendant was Robert Johnson (presumably white). Everything else that the participants read about the case was identical.
The results showed that, even though participants read identical evidence, those who read the case against Carlos judged him more likely to be guilty, and recalled relatively more incriminating than exonerating information about him. By directing attention to particular elements of the evidence, the stereotype shaped judgments.
This kind of effect outlines why it is so detrimental to label an entire group of people as, for example, “bad hombres.” Labeling an entire group as criminals, or creating a registry to highlight one group’s criminal behavior (while ignoring the same behavior in other groups), reinforces the stereotype, which then influences the way individuals are judged (which reinforces the stereotype, and so on). All of our ‘evidence’ is a manufactured product of the stereotype, but we use it to justify or prove our expectations true.
A clear consequence of this is highlighted by recent data showing that Black men are disproportionately wrongly convicted of crimes, especially murder, sexual assault, and drug offenses. This may very well be attributable to the way Black people (and Black men in particular) are stereotyped, and thus to how juries weigh evidence against them.
And that leads to the final tragic consequence I’d like to discuss in this post, an effect known as “shooter bias” or “the police officer’s dilemma.” Horrified by the killing of Amadou Diallo, a West African immigrant to the U.S. who was shot 41 times by four plainclothes police officers who thought he resembled a rape suspect, researchers began investigating biases in people’s quick decisions.
More specifically, researchers wanted to investigate the decisions that people make in quickly identifying potentially dangerous objects, and their decisions to shoot or not shoot target people. Participants played a rudimentary video game in which they saw target people against different backgrounds, and were asked to “shoot” any armed targets and “not shoot” any unarmed targets (people holding phones, soda cans, wallets, etc.). The results were compelling. Participants were faster to shoot an armed Black target than an armed white target. They were also faster to not shoot an unarmed white target than an unarmed Black target. In other words, the speed of people’s decisions was influenced by the target’s race in stereotype-confirming fashion.
If the results ended there, they wouldn’t be too discouraging, but studies show that we also make a different pattern of errors depending on the target’s race. More specifically, people are more likely to accidentally shoot an unarmed Black target than an unarmed white target, and more likely to accidentally NOT shoot an armed white target than an armed Black target.
This kind of shooter bias seems to be predicted by people’s knowledge (not endorsement) of the cultural stereotype that “Black people are dangerous.” That is, the more we are aware of that stereotype, the more likely we are to show such bias. Shooter bias is unaffected by a person’s level of prejudice. It’s not because we dislike Black people that we make such mistakes; it’s the simple existence of the stereotype that leads to them. Of course, explicit prejudice may very well have an amplifying effect on the process, as do perceptions that the world is a dangerous place. That means that any rhetoric telling us we are in danger and need to be protected may amplify bias against people who are different from us.
Studies show that Black participants show the same reaction-time biases, though not necessarily the same pattern of errors, as white participants. Police officers also show a similar pattern of bias, although it may be somewhat reduced compared to the general public. Finally, other research has shown similar effects with different target groups. In particular, people have a bias to shoot “Muslim-looking” targets over less Muslim-looking targets, and Latino targets over white and Asian targets.
Clearly, stereotypes linking danger, criminality, or violence to particular groups can have very dangerous implications. This is again why using the label “radical Islamic terror” (rather than simply “extremist” or “terrorist”) has a negative impact on the millions of peaceful Muslims in the world. It’s why scapegoating Latinos and immigrants for crime can be dangerous for those groups. And it’s the scientific basis of the “Black Lives Matter” campaign, showing that – yes – innocent Black people are more likely to become victims of “accidental” police shootings than are innocent white people. So yes, blue lives matter, and indeed, all lives matter. The problem is, some lives are (perhaps unintentionally) targeted because of insidious stereotypes. We are all responsible for these tragedies and need to work together to change – or at the very least weaken – stereotypes that link groups to such characteristics. Until then, it’s worth repeating: #blacklivesmatter.