Too bad to be true? An antidote to PosPos (Positivist Philosophy of Science) and the Crises in Social Psychology

Consider the following scenario. You do an experiment to test hypothesis X. The opposite of your prediction emerges, and further analyses suggest some insights into what might be going on. What do you do? Do you: (a) change your hypotheses to fit the results, supported by additional analyses (it was a two-tailed test, after all!); (b) bin the study and start again, feeling guilty that you even peeked further at your data after the hypothesis was clearly rejected; or (c) something else?

Clearly, we should all know that (a) is wrong on both moral and scientific grounds, and that it undermines key assumptions of statistical inference (this is sometimes referred to as HARKing, or “hypothesizing after the results are known”; see Kerr, 1998). The concern that some scientists have done this (or keep trying until they find support, leaving the non-confirming studies in the file drawer) is the source of much justified current concern in social psychological science, and a main reason we now have the replication crisis and the accompanying drive for pre-registration and replication.

Approach (b) above reflects the hypothetico-deductive method of positivism, and more specifically the falsificationist philosophy of Karl Popper (set out in his book The Logic of Scientific Discovery), although Popper was actually not a positivist in the strict sense (read on). This seems to reflect the way most people think about scientific progress: scientific theories are rejected when the hypotheses they generate are falsified (although it is not always clear when the theory itself is falsified; see, e.g., Lakatos, 1978; Feyerabend, 1987).

In this post I want to argue that there is a third way between infamy and guilt (option (c) above: digging deeper into your data), one that reflects what many experimental scientists actually do anyway, but may feel guilty about because it apparently breaks the golden rules of the hypothetico-deductive method, not to mention upsetting the statistics and the statisticians.

 

“It seems that many high-powered scientists in general are not much better informed on their philosophy of science.”

A first point to note is that most psychologists are actually quite hazy about the philosophy of science (hereafter PoS) guiding their research practice. To illustrate, at a recent talk I put up a number of “labels” to see who would endorse them (empiricist, positivist, falsificationist, etc.), and most audience members seemed perplexed and did not know which one to choose (or whether to choose more than one). It seems that many high-powered scientists in general are not much better informed. In his excellent and highly recommended book “What is this thing called science?” (1976; now in its 4th edition, 2013), Alan Chalmers notes that many top physicists, for example (Chalmers is himself a physicist by training), were much better at actually doing science than at explaining and justifying what they were doing and why, from a PoS angle. Feyerabend (1987) noted this too: he claimed that there are actually few clearly agreed rules about science or how to do it.

I think this lack of understanding of PoS poses a problem, and ignorance here has left many researchers vulnerable to doubt and confusion, particularly in the light of the onslaught on social psychology (what is good practice, how to avoid Questionable Research Practices, and so on). The recent attack on Susan Fiske’s APS piece is only the latest furore in this debate. The PoS we endorse has fundamental implications for how we go about what we do (what is acceptable and what is not), and for the confidence we have in our research practice. In particular, I want to challenge an outmoded conception of science, based exclusively on the hypothetico-deductive approach of positivism, which arguably captures neither how scientists actually work nor how they should work. At best this positivist approach is partial (in both senses of the word), and in my view it only applies to a (hypothesis) testing mode (what Popper called the realm of justification, to be contrasted with the realm of discovery).

In this post I want to make a plea for a more realist PoS. This would open the door to more exploratory and problem-solving modes of investigation and analysis. Both are an important source of theory building, yet both have been neglected by the traditional positivist approach that many scientists, if only vaguely, think they are working under. I argue that most scientists, without realizing it, are actually working according to a realist PoS (see Chalmers for a more detailed introduction to realism in science; Greenwood, 1989, as applied to social psychology; and Bhaskar, 1975, for a more philosophical treatment). As part of this I will discuss the process of “retroduction”, sometimes also called abduction, which offers a third way to generate theory, beyond the dualism of deduction versus induction.

A realist approach to science

So what is the key difference between a traditional positivist approach and a realist approach to science? The positivist approach is empiricist insofar as hypotheses are tested against empirical evidence. Realism, on the other hand, while not rejecting empiricist methodology, does not confine itself to directly observable empirical evidence. Instead it allows us to dig beneath the surface, so to speak, and permits inferences that are more indirect (while still being subject to experimental investigation, of course). In particular, it allows for “causal powers” (Harré & Madden, 1975) and deep structures that can underlie (surface) empirical regularities and observable evidence. Indeed, much of modern science (including important aspects of physics, astronomy, etc.) routinely relies on relating empirical evidence to inferred realities or states, rather than relying exclusively on direct empirical evidence. For example, the presence of planets and other celestial bodies that are not directly observable can be inferred from how they affect the orbits of others that are directly detectable. Social scientists, and especially psychologists, make these kinds of inferences all the time, as many of our measures reference less visible properties (e.g., traits, beliefs, stereotypes) as well as more visible ones (e.g., behaviour).

Note that these examples are not designed to question falsificationism, but rather the empiricism on which the positivist PoS is based. The point is that, with so much going on beneath the surface, it often pays to dig deeper than the surface empirical reality and to invoke modes of reasoning that go beyond the observable data (of which more shortly). A strictly falsificationist mechanism for theory development misses, and arguably actively discourages, this informative mode of inquiry. We should also note that Popper considered himself a realist.

However, Popper’s approach reinforces the hypothetico-deductive model of positivism insofar as it leads us to reject theories on the basis of empirically unsupported hypotheses. Because falsificationism is tied to the hypothetico-deductive method, it does not provide an efficient mechanism (or indeed any mechanism at all) for generating new theory; I will return to this point shortly. Although most empirical and experimental researchers would consider themselves to be working, however vaguely, within the positivist tradition and the hypothetico-deductive model associated with it, I would argue that in scientific practice they are actually (philosophical) realists.

 

“We might be able to find out whether a theory is true or better still false, but how do we get our theories in the first place?”

The central question here, which I was asked to address at a recent KLI conference (Spears, 2016), is: how do we, as scientists, come up with (new) scientific theories? The problem with much of the traditional PoS is that it focuses on how we test and evaluate theoretical propositions (hypotheses derived from theories), but has little to say about where theories come from (Tavory & Timmermans, 2014). In other words, we might be able to find out whether a theory is true or, better still, false, but how do we get our theories in the first place? Once again, this is an area in which there is perhaps too little reflection. The standard approach to theory and hypothesis generation is deductive: we have theories and deduce hypotheses from them, which we then proceed to test. That is fine, if you already have your theory. This deductive approach is typically contrasted with induction, which is more empirically grounded and involves generalizing on the basis of empirical cases.

Several objections are commonly raised against inductive theory building. For example, it is sometimes pointed out that it is not really theoretical at all, being purely ad hoc (as well as intellectually lazy!), and that it has nothing to say about the explanatory mechanism or process underlying the observations (an advantage of realism over positivism, as argued above). In practice it is also unsatisfactory, because it can lead to “effect chasing”: something unexpected emerges from your research, so that becomes (part of) your “new theory”. You test it, find that something else happens next time, which then gets incorporated into a new hypothesis or theory, and so on. As Hume noted long ago, empirical regularities provide no guarantee that they will continue to occur (or that there is a causal connection), which was also the basis for Popper’s critique of inductivism.

 

“We can only test what the theory puts before us, so this approach is not generative (of new theory). Waiting for a better theory may be rather like waiting for the proverbial monkeys with typewriters to produce Shakespeare.”

So induction, or inductivism, has typically not been seen as a good way to generate theory (although, as I argue, data and findings more generally are in fact a major source of theory). However, I think objections to induction in theory development have perhaps been overstated and have led us to break an important link between the empirical realm on the one hand and theory generation on the other. Moreover, as I noted, the hypothetico-deductive method, and in particular the falsificationist PoS, is an inefficient way to generate theory, for several reasons (if it is a way at all). First, simply rejecting our hypothesis (and disconfirming our theory) seems to set us back to square one: we only know that we were wrong, but we do not know much about why, and so the “failed” experiment is largely uninformative. To paraphrase that famous philosopher, Donald Rumsfeld, we get to know what we do not know (known unknowns), but we may neglect some potentially important unknown unknowns. Even more importantly, we can only test what the theory puts before us, so this approach is not generative (of new theory). Waiting for a better theory may be rather like waiting for the proverbial monkeys with typewriters to produce Shakespeare.

 

“A third path somewhere between deduction and induction, called retroduction, involves a more agentic and problem-solving mode in response to experiments that seem to fail.”

Retroduction

I suggest (though of course it is not my idea) that there is a third way towards theory generation that is less restrictive and less inefficient than the one implied by the Popperian route of falsificationism (i.e., generating theory by weeding out false theories), and less ad hoc than the purely inductive way. This invokes a third path somewhere between deduction and induction, called retroduction, which involves a more agentic and problem-solving mode in response to experiments that seem to fail. I focus here on so-called “failed” experiments, because this is where the biggest taboos and misconceptions pertinent to the current crisis lie. I would warrant that for most of us the so-called failed experiment is a far from unique experience. I refer to this phenomenon (with tongue in cheek) as “data too bad to be true”. Most of the time, for most of us doing experimental research, things rarely pan out perfectly as we predict, and sometimes the complete opposite (or, worse still, apparently nothing) emerges – leaving us with data that are “too bad to be true.”

So what do we do then? Returning to the scenario I started with, what the textbooks teach us, and what most students learn, is that we reject the hypothesis (fine so far) and maybe the theory (OK, but we have to be careful here), and thus consider the experiment “failed.” This is where I take issue with the textbook prescriptions. It is not (necessarily) the experiment that has failed here. In my view there is no such thing as “bad data” or a “failed experiment” – at least when the experiment has been well designed and conducted. Calling experiments failures because the data are disappointing or confusing is the slippery slope to Stapelsville, in which the quality of an experiment is judged by its outcomes (the source of the file drawer problem underlying the replication crisis). Most journal editors still fall into this trap (hence the rise of pre-registration, and of journals like PLOS ONE, where decisions are not linked to the so-called “successful” outcomes of experiments).

 

“In my view there is no such thing as “bad data” or a “failed experiment” – at least when the experiment has been well designed and conducted. Calling experiments failures because the data are disappointing or confusing is the slippery slope to Stapelsville, in which the quality of an experiment is judged by its outcomes (the source of the file drawer problem underlying the replication crisis).”

The third way requires that we look closely at our data to try to understand what is going on, what went “wrong” (apparently), and why. Here, rejecting hypotheses and even theory are, of course, primary options, but again, this is the negative agenda of creating known unknowns, not the positive agenda of finding out what happened and why. Speaking to our data, even interrogating them, of course does not mean that we “torture” them, with all the QRP pitfalls that entails. The example of Rumsfeld also reminds us that torture does not lead to useful or truthful information, and at best is just a highly unethical form of self-delusion. Of course, after such data mining and exploratory analyses we should realize that we are no longer talking about “an experiment” (i.e., a test of causality), and the results should never be presented as such when we are in this detective, exploratory mode. However, and this is my point, this exercise can be very useful in helping us to understand what went on in the experiment, and can provide insights that help us to develop new theory that can subsequently be experimentally tested. Whereas this retroductive mode is de rigueur in some other approaches (it is the basis for the “grounded theory” approach), in my experience it is sometimes frowned upon by researchers, precisely because of the assumed restrictions of the hypothetico-deductive/falsificationist agenda. Typically, journals and their editors discourage this kind of exploratory detective work, or want it written out of the research process. The consequence is that, rather than theory development becoming more transparent, it is driven underground, and scientists are tempted to HARK, or are even encouraged to do so by editors who want us (à la Bem, 1987) to tell a good story rather than explain how we got there. In fact, the journey of how we got there is sometimes the most interesting part, rather than something to be back-channelled, driven underground, or pushed into the file drawer.

 

“Exploratory analysis can be very useful in helping us to understand what went on in the experiment and can provide insights that help us to develop new theory that can subsequently be experimentally tested.”

Obvious possibilities for unpredicted results are that our manipulations failed, that the study was underpowered, and so on. But more interesting possibilities can emerge from regularities we do find in the data. This process of trying to understand what happened (armed with the knowledge and insights we have as scientists) is retroduction: trying to work out what has happened, based on empirical evidence. In contrast to induction, retroduction is a more active, theoretically inspired attempt to ask the question “what may have (or sometimes even “must have”) been the case for this to happen?” To use the Kantian jargon, we can thus invoke transcendental arguments (Bhaskar’s account of realism, critical realism, has also been referred to as transcendental realism). Retroductive reasoning can save us from the nihilism (and the wasted time) of concluding that we know little, apart from the fact that our hypothesis, and possibly our entire theory, is incorrect.
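Before reaching for deeper structures, it is of course worth ruling out the mundane possibilities first, such as whether the study was simply underpowered. Here is a minimal sketch of such a sensitivity check in Python using statsmodels; the cell size and effect size are hypothetical, purely for illustration:

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power actually achieved for a medium effect (Cohen's d = 0.5) with,
# say, 25 participants per cell and a two-tailed alpha of .05.
achieved_power = analysis.solve_power(effect_size=0.5, nobs1=25,
                                      alpha=0.05, alternative='two-sided')
print(f"Power to detect d = 0.5 with n = 25 per cell: {achieved_power:.2f}")  # roughly .41

# Sample size per cell that would be needed to reach 80% power for the same effect.
n_needed = analysis.solve_power(effect_size=0.5, power=0.80,
                                alpha=0.05, alternative='two-sided')
print(f"n per cell needed for 80% power: {n_needed:.0f}")  # roughly 64
```

If a check like this shows the study had little chance of detecting a plausible effect in the first place, the “failure” tells us very little, and digging further into the data is all the more warranted.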

For a more materialist and less idealist grounding we can point to another German philosopher, Marx, who developed the concept of immanent critique to understand the internal contradictions in society. Marx did not look “outside the data” to theorize, and has thus been claimed as a critical realist who used empirical data to develop his theory (Bhaskar, 2008; Sayer, 1979).

 

“Scientists are tempted to HARK or even encouraged to do so by editors who want us to tell a good story and not how we got there.”

Retroduction, or retroductive reasoning, is, I would argue, an important source of theory building. While alien to the hypothetico-deductive approach of positivism, it is perfectly consistent with a realist PoS, with its commitment to understanding the causal powers and deep structures underlying empirical outcomes. So while it remains true that theory should stand apart from the data used to test it in any given experimental study, there is much more to be gained from studies that seem to fail or do not go according to plan. The problem is that, in the current climate, many researchers have become hyper-sensitive to such practices and think they are fudging or engaging in QRPs if they engage in this exploratory mode and in attempts at retroductive reasoning.

 

“The problem is that, in the current climate, many researchers have become hyper-sensitive to such practices and think they are fudging or engaging in QRPs if they engage in this exploratory mode and in attempts at retroductive reasoning.”

I think that the heated debates around methodology, replication and so forth have reinforced this concern. In particular, I think many journalists reporting on scientific practice have a mistaken and outmoded understanding of the PoS under which scientists operate. To put it in a nutshell: we are often unfairly judged by positivist standards, when most of us would claim to be (or de facto operate as) philosophical realists, even when we do not realize it. If we were to lose this critical and problem-solving approach to understanding our empirical work, I think we would become much less able social scientists. Of course, this should not be a licence to simply reject whatever we disagree with: the aim is always to understand what is going on, because of the data, not in spite of them.

 

“This approach also casts new light on the replication solution to the replication crisis.”

I would argue that this approach also casts new light on the replication solution to the replication crisis. Attempting to replicate previous research is only half of the story. Failure to replicate, like falsification, is only informative up to a point: it could indeed suggest that previous effects were due to sampling bias (the file drawer selectivity problem). However, replication failure can also increase uncertainty if it is not accompanied by an attempt to understand and measure the causal processes mediating the proposed effects, as well as the potentially critical moderators telling us when they will occur. A good example is the recent controversy about the failure to replicate the “facial feedback” pen studies, in which some critics argued that filming participants in the attempted replication may have raised self-awareness, thereby disrupting the effect. This takes us beyond naïve empiricism and positivism, and firmly into the realm of a realist PoS. Sensitivity to such questions will advance knowledge and allow us to distinguish false positives from genuine effects and the particular conditions that can produce them.

Two personal examples of theory that arose from retroduction

The traditional theory-hypothesis-data sequence can mislead us into thinking that theory never emerges from data, but of course this is far from being the case. Very often, data are the impetus for theory creation in the first place. For example, social identity theory (which I have worked on and developed over my career) arose partly as a way of understanding the results of the minimal group studies, which even at the time of the first paper on this topic (Tajfel, Billig, Bundy & Flament, 1971) could be described as “data in search of a theory” – a theory which only emerged later (it is actually not precisely clear when, but officially only by about 1978: Tajfel, 1978; Tajfel & Turner, 1979; see Spears & Otten, 2012 for an account of the twists and turns of this theoretical quest). Legend has it that the minimal group paradigm itself grew out of the “failed” control condition of a study by Rabbie and Horwitz (1969). I will now present a couple of examples from my own research; the second, ironically, is an attempt to understand the mindset of fraudsters such as Diederik Stapel.

 

“Of course this should not be licence to simply reject that with which we do not agree: the aim is always to understand what is going on, because of the data, not in spite of them.”

My first example comes from my research on the “nothing-to-lose” hypothesis, which (ironically, given the above example) seemed to contradict social identity theory. This emerged in research with Daan Scheepers during his PhD. Social identity theory predicts, inter alia, that low-status group members are particularly likely to show the “social competition” strategy of in-group bias when the status relation between their group and the high-status comparison group is unstable. Such conditions, especially when the status relation is also seen as illegitimate, create “insecure social comparisons”: group members see hope and scope for social change, which warrants a competitive strategy favouring the in-group in terms of reward allocations (and in ratings on other dimensions).

However, in Daan’s PhD research (Scheepers, Spears, Doosje & Manstead, 2006), besides confirming that people in low-status groups indeed show in-group favouritism under unstable conditions, we also found that people in groups with stable low status showed evidence of stronger out-group derogation – an altogether more antagonistic and aggressive form of in-group bias that was neither predicted nor explained by social identity theory. This led us to develop our “nothing to lose” hypothesis: that groups that are sufficiently disadvantaged and desperate, with little hope or scope of social change, might resort to more radical means, either to challenge the situation (because they have nothing to lose by doing so) or at least to destabilise it. We have since replicated this finding in a range of other contexts and with naturally occurring groups (e.g., Tausch et al., 2011). In other words: unexpected data that did not fit the theory led to a new line of research, and ultimately to a theoretical account for this.

 

“Unexpected data that did not fit the theory led to a new line of research, and ultimately to a theoretical account for this.” 

A strictly falsificationist approach would only have allowed us to conclude that the previous theory and prediction were wrong; it would not have let us generate a new theoretical explanation. Retroduction, however, allows us to think outside the box of the existing theory, and to generate new theory through exploratory analysis, by looking more closely at the data. Retroduction is thus the missing link that takes us from the negative to the positive agenda, from the realm of justification to the realm of discovery. Did Daan’s result disconfirm social identity theory, such that we must reject it? In my view it did not, but it did challenge the generality or scope of one of its central tenets, and it pointed to boundary conditions that required theoretical refinement.

 

“Retroduction is thus the missing link that takes us from the negative to the positive agenda, from the realm of justification to the realm of discovery.”

My second example is more prosaic, but in some ways more poetic and provocative. It concerns research conducted by my Masters student, Sara van de Par, who was intrigued by the Stapel affair and the conditions that might cause a scientist to commit such extensive fraud. So we set about designing a study that might test these conditions. Two themes emerged from Stapel’s own account (rather than from established theory), both of which we tried to model in the lab. The first was the idea that the “rat race”, or “publish or perish” culture of academia, puts so much pressure on scientists that researchers will inevitably be tempted to cheat. In other words, it wasn’t Stapel’s fault; the pressure of the situation “made him do it” (of course, social psychologists, in contrast to personality theorists, are renowned for pointing to the power of the situation). To model this aspect of a fraudster’s psyche, we developed a way of priming either a deterministic worldview or one associated with free will. The idea was that people primed to think of behaviour in deterministic terms would be more tempted to follow their self-interest than those for whom free will, and thus personal responsibility, was made salient. After all, the first group could put the blame on the situation, devoid of any personal responsibility for their behaviour.

A second intriguing aspect of Stapel’s account was his statement that he started by cutting a few corners, but then gradually descended the slippery slope to full-blown fraud, rather as we used to be told that the occasional dope-smoker might inevitably turn into a crack addict. We tried to manipulate this aspect in an experiment.

We told participants that we were interested in whether solving anagrams would help them to process words more deeply and improve their subsequent memory for these words (compared to controls) later in the experiment. In this anagram-solving stage we asked them to solve anagrams and type the solutions into the computer within a set time limit. In one condition (the “slippery slope” condition), however, we told participants they would not be asked to actually type in their answers, because of the problem of individual differences in typing speed. In both conditions, participants were afterwards presented with the solutions and asked to indicate which anagrams they had solved correctly. As predicted, those who had not had to type in their answers (slippery slope) claimed to have solved more correctly (because they could not be held to account, they were more tempted to cheat). We called this the slippery slope condition because the temptation to cheat here (as confirmed) might tempt participants to cheat again, possibly even more so, when it came to recalling a list of words later in the experiment. In this memory task, we tempted participants with a bonus reward if they reached a certain level of recall. At this stage we engineered a fake computer crash, which occurred after they had typed in all the words they could recall. The experimenter then explained that their data were lost (which, of course, they were not), and asked them to try to remember how many items they had recalled. The experimenter then added that it was a shame, because if they had recalled two more, they would have been eligible for a bonus. However, she also said that, given that there had still been some time left before the crash occurred, if they thought they would have remembered two more had they been able to continue, they would still be eligible for the bonus. Participants’ claims that they would have reached the bonus with more time were used as further evidence of dishonesty.

The data were not strong, but they pointed to an interaction such that people in the ‘determinist’ and ‘slippery slope’ conditions showed the most evidence of overstating their memory performance, in line with predictions. Given this weak evidence, we tried to replicate the finding in a further experiment in which we addressed a number of methodological weaknesses. In this second study, however, we found no evidence of greater dishonest behaviour in the predicted conditions (determinism and slippery slope).

At this point, we decided to explore the data further to try to understand what might be going on, and to see whether there were other interesting patterns in the data (a clear example of retroduction!). One strength of this new experiment was that we could now distinguish people who had actually cheated on the initial anagram task (because we had asked people to write their answers on a scrap of paper and throw it away at the end; we retrieved these scraps of paper afterwards). It turned out that this factor (actual cheating, rather than the manipulated temptation to cheat) affected the final cheating behaviour on the memory task: those who had actually cheated earlier overstated their memory performance most. Moreover, a second predictor was the actual number of words correctly recalled on the memory task: those with poor memory performance were much more likely to overstate their recall after the crash, presumably because they felt ashamed about underperforming. These two factors also interacted: initial cheats who underperformed were by far the most likely to cheat subsequently.
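To make this exploratory step concrete, here is a minimal sketch (in Python, using statsmodels) of the kind of follow-up analysis described above. The variable names, file name, and data are hypothetical illustrations, not the actual analysis code from this study:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data frame, one row per participant:
#   cheated_earlier - 1 if the retrieved scrap of paper revealed earlier cheating, else 0
#   recall_correct  - number of words actually recalled before the staged "crash"
#   overclaiming    - how far the post-crash self-report exceeded actual recall
df = pd.read_csv("cheating_study.csv")  # hypothetical file name

# Exploratory model: earlier cheating, actual recall, and their interaction
# as predictors of overclaiming on the memory task.
model = smf.ols("overclaiming ~ cheated_earlier * recall_correct", data=df).fit()
print(model.summary())
```

The point of such a model is heuristic: any pattern it reveals is a candidate for new theory, to be tested in a fresh (ideally pre-registered) study rather than reported as a confirmatory result.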

 

“At this point we decided to explore the data further to try to understand what might be going on, and to see whether there were other interesting patterns in the data (a clear example of retroduction!).”

So what is the “moral” of this research? First, and perhaps a cheap point, Stapel’s claim that the “situation made him do it” and that the slippery slope made matters worse did not stand up to scrutiny. Rather, after more careful replication, it seems that more dispositional or individual-difference factors were the more predictive ones. So perhaps Stapel should look in the mirror rather than to academia for the cause of his downfall. Second, in terms of PoS, we might consider this research a semi-failed experiment followed by a failed replication. True, our original hypotheses and theory were falsified, but does this mean the research was completely uninformative? No: further internal and exploratory analyses provided useful insights into more individual-level factors that seem to predict dishonest behaviour. I would not (in the spirit of Lakatos) want to claim that situational factors never play a role here, but it is perhaps reassuring for us that Stapel is not exonerated by these findings (notwithstanding any concerns we might have that students seem willing and able to bend the truth!).

 

“The message of this post is to reassure researchers who accept the realist philosophy that probing one’s data and trying to solve problems of understanding and interpretation is what good science is about, and we should not be ashamed or guilt-tripped when we do this.”

In short, the message of this post is to reassure researchers who accept the realist philosophy that probing one’s data and trying to solve problems of understanding and interpretation is what good science is about, and we should not be ashamed or guilt-tripped when we do this. We should just be careful about the experimental status and the statistical implications when we are in this exploratory mode.
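One concrete example of those statistical implications: when an exploratory sweep produces many p-values, they should be corrected for multiplicity before any of them are taken seriously. A minimal sketch, again in Python with statsmodels; the p-values below are hypothetical:

```python
from statsmodels.stats.multitest import multipletests

# Hypothetical p-values harvested from an exploratory sweep of the data.
exploratory_pvals = [0.003, 0.021, 0.049, 0.012, 0.31, 0.08]

# Benjamini-Hochberg false discovery rate control at q = .05.
reject, p_adjusted, _, _ = multipletests(exploratory_pvals, alpha=0.05,
                                         method="fdr_bh")
for p_raw, p_adj, survives in zip(exploratory_pvals, p_adjusted, reject):
    print(f"raw p = {p_raw:.3f} -> adjusted p = {p_adj:.3f}, survives correction: {survives}")
```

Corrections like this do not turn exploration into confirmation, but they do keep the retroductive detective work honest about how much weight any single pattern can bear.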

References

Bem, D. J. (1987). Writing the empirical journal article. In M. P. Zanna & J. M. Darley (Eds.), The compleat academic: A practical guide for the beginning social scientist (pp. 171-201). New York: Random House.

Bhaskar, R. (1975/1997). A realist theory of science. London: Verso.

Bhaskar, R., et al. (2008). The formation of critical realism: A personal perspective. London: Routledge.

Chalmers, A. F. (1976/2013). What is this thing called science? (4th ed.). Milton Keynes: Open University Press.

Feyerabend, P. K. (1975/1987). Against method. London: Verso.

Greenwood, J. D. (1989). Explanation and experiment in social psychological science: Realism and the social constitution of action. London: Springer.

Harré, R., & Madden, E. H. (1975). Causal powers: A theory of natural necessity. Oxford: Blackwell.

Kerr, N. (1998). HARKing: Hypothesizing after the results are known. Personality and Social Psychology Review, 2, 196-217.

Lakatos, I. (1978). The methodology of scientific research programmes. Cambridge: Cambridge University Press.

Popper, K. (1959). The logic of scientific discovery. New York: Basic Books.

Rabbie, J. M. & Horwitz, M. (1969) The arousal of ingroup-outgroup bias by a chance win or loss. Journal of Personality and Social Psychology, 13, 269-277.

Sayer, D. (1979/1983). Marx’s method. Sussex: Harvester Press.

Scheepers, D., Spears, R., Doosje, B., & Manstead, A.S.R. (2006). Diversity in in-group bias: Structural factors, situational features, and social functions. Journal of Personality and Social Psychology, 90, 944-960.

Spears, R. (2016). Retroduction: A neglected aspect of theory building. Invited talk, KLI Conference, Zeist, April.

Spears, R., & Otten, S. (2012). Discrimination: Revisiting Tajfel’s minimal group studies. In J. Smith & S. A. Haslam (Eds.), Social psychology: Revisiting the classic studies (pp. 160-177). London: Sage.

Tajfel, H., Billig, M. G., Bundy, R. P., & Flament, C. (1971). Social categorization and intergroup behaviour. European Journal of Social Psychology, 1, 149-177.

Tajfel, H. (Ed). (1978). Differentiation between social groups: Studies in the social psychology of intergroup relations. London: Academic Press.

Tajfel, H., & Turner, J. C. (1979). An integrative theory of intergroup conflict. In W. G. Austin & S. Worchel (Eds.), The social psychology of intergroup relations (pp. 33-48). Monterey, CA: Brooks/Cole.

Tausch, N., Becker, J. C., Spears, R., Christ, O., Saab, R., Singh, P., & Siddiqui, R. N. (2011). Explaining radical group behavior: Developing emotion and efficacy routes to normative and nonnormative collective action. Journal of Personality and Social Psychology, 101, 129-148.

Tavory, I., & Timmermans, S. (2014). Abductive analysis: Theorizing qualitative research. Chicago: The University of Chicago Press.

 

NOTE: Image by Paul Albertella, licensed under CC BY 2.0.

 

Russell Spears’ research interests are in social identity and intergroup relations, and cover social stereotyping, discrimination, distinctiveness and differentiation processes, group-based emotions, and social influence, and address socio-structural variables such as power and status. He is a past editor of the British Journal of Social Psychology and the European Journal of Social Psychology. He co-authored/co-edited The social psychology of stereotyping and group life (Blackwell, 1997) and Social identity: Context, commitment, content (Blackwell, 1999). He received the Kurt Lewin (mid-career) Award from the European Association of Social Psychology.


Key references:


Spears, R. (2010). Group rationale, collective sense: Beyond intergroup bias. Invited position paper. British Journal of Social Psychology, 49, 1-20.


Klein, O., Spears, R., & Reicher, S. (2007). Social identity performance: Extending the strategic side of the SIDE model. Personality and Social Psychology Review, 11, 28-45.


Scheepers, D., Spears, R., Doosje, B., & Manstead, A.S.R. (2006). Diversity in in-group bias: Structural factors, situational features, and social functions. Journal of Personality and Social Psychology, 90, 944-960.


Ellemers, N., Spears, R., & Doosje, B. (2002). Self and social identity. Annual Review of Psychology, 53, 161-186.


Spears, R., Doosje, B., & Ellemers, N. (1997). Self-stereotyping in the face of threats to group status and distinctiveness: The role of group identification. Personality and Social Psychology Bulletin, 23, 538-553.

