Squeeze a man hard and you'll always find something inhuman.19
One can see the logic of dynamiting a [cell full] of suspects (women suspects: this happened in the Ukraine). But the more typical preference was to administer a slow death. There are many accounts of prison floors strewn with genitals, breasts, tongues, eyes and ears. Arma virumque cano, and [the culture of] Hitler-Stalin tells us this, among other things: given total power over another, the human being will find his thoughts turn to torture.20
Now that we have assayed the putative cognitive neurobiology of such behaviours as ‘everyday’ deceit and pathological lying, the time has come to extend our inquiries into the ‘darker’ side of human nature. Such a study is legitimate in the present context, for, if we wish to ‘really’ understand human beings and the extent to which they might enjoy volitional ‘freedom’, then our cherished notions and fragile hypotheses are likely to encounter their most stringent test here: when we look at the damage that adult human beings will (knowingly) inflict upon each other. Indeed, as we shall see, such a line of study reveals many of our assumptions and intellectual inconsistencies to be just what they are: useful ‘rules of thumb’ that may not survive closer critical scrutiny. In short, I shall argue that little is as clear-cut or as ‘obvious’ as it may at first seem. I shall also give some flavour of how difficult it would be to change the situation, as it applies, in ‘everyday life’. In a nutshell, there is often too little evidence available for us to be able to ‘think scientifically’.
First of all, it is important to ask three questions, the answers to which are of central importance to our future progress. Though some of the terms that I shall borrow (from other literatures, below) may sound arcane, I hope that the reader may bear with me so that we might more effectively make a case for subtlety when (eventually) we come to consider the vexed issue of ‘responsibility’.
What do we mean by ‘moral evil’ and ‘natural evil’?
Well, first it may be helpful to clarify what we mean by the word ‘evil’. The Oxford English Dictionary (OED) offers a variety of definitions, many of which might be applied to apparently ‘purposeful’ acts:
Morally depraved, bad, wicked, vicious…
Doing or tending to do harm; hurtful, mischievous, prejudicial…
Depraved intention or purpose…desire for another's harm…
What is morally evil; sin, wickedness.
Anything that causes harm or mischief, physical or moral.
However, overall, the OED delineates two broad categories of ‘evil’:
1. Examples of the ‘bad’ in a ‘positive’ (one might say ‘active’) sense, as in morally depraved or wicked acts, doing or tending to do harm, having a malevolent intention, etc.; and
2. Examples of the ‘bad’ in a ‘privative’ sense (as the absence of the ‘good’; essentially a ‘negative’ evaluation), as in a diseased organ, an unsound or corrupt form, something that is inferior in quality.
Such a distinction (between the positive and the privative) brings us very close to that which pertains between ‘moral’ and ‘natural’ evil, a theological dichotomy that has been reviewed recently by numerous authors, among them Peter Vardy (1992) and Susan Neiman (2002). So what do we mean by these alternative forms of evil?
‘Moral evil’ is essentially the evil that agents choose to do. It comprises the ‘bad things’ that people do when they know that they are ‘doing wrong’. So, for instance, the torture alluded to in one of the quotations at the head of this chapter necessarily involves the knowing infliction of pain and suffering on another human being. Indeed, that sense of ‘knowing’ is most eloquently captured in the following passage from Antonio Tabucchi's The missing head of Damasceno Monteiro:
…I have a mania for remembering the names of torturers, for some reason remembering the names of torturers means something to me, and do you know why? [B]ecause torture is an individual responsibility, to say you're obeying orders from above is inexcusable, too many people have used that shabby excuse to shield themselves by legal quibbles, do you follow me?
(Antonio Tabucchi, originally published in 1997)
(If you have detected any contemporary resonance in this quotation, at the time that you have come to read this passage (whenever that may be), then perhaps you have just encountered evidence of the ‘stability’, the essential repetitiveness, of human (im)moral conduct. Sadly, ‘we’ are forever harming each other.)
Now, to the theological (or at least, ‘believing’ Christian) mind, the existence of moral evil is the price that we humans pay for our freedom of the ‘Will’ (our, as yet hypothesized, ‘capacity for choice’; Chapter 2). For, in order to exercise choice, all options must be left open to us. Hence, on this view, God has permitted the existence of human, moral, evil because its selection (and subsequent execution) comprises one of the ‘possible’ outcomes inherent in our constitution as ‘free agents’. Were we to be ‘prevented’ (in some divine, spiritual/neurological chimerical way) from performing evil acts, then we might ‘appear good’ to an external observer but this would constitute an illusion (i.e. a ‘misinterpretation of a veridical stimulus’). For, without the ‘possibility’ of performing ‘evil’ there is no choice and if there is no choice then, no matter how ‘good’ we might appear to be, there is no morality. We become biological machines, ‘obeying orders’. So, to an apologist for a deity, it is quite possible to ‘defend God’ (no blasphemy is intended), because God has given us freedom of choice, freedom to choose. Hence, all the moral evil at large in the world becomes the ‘fault’ of humanity itself. (Nevertheless, whether any amount of human freedom adequately compensates for the suffering of the innocent, even that of a single child, is a moot point, a raw nerve prodded again and again in Dostoyevsky's The Brothers Karamazov. It is also thoughtfully and seriously re-examined in Peter Vardy's (1992) text, where the author eventually concedes that Dostoyevsky's novel contains ‘in my view, the most effective attack against God ever produced’, p. 72).
‘Natural evil’, in contrast, describes the bad things that happen in the world that are not subject to human control or agency. Hence, Susan Neiman (2002) chooses as an example the 1755 Lisbon earthquake – a disaster that impacted the ways in which a number of contemporary intellectuals came to construe God. For our purposes, we might propose that tsunamis, hurricanes, and earthquakes all provide us with ample examples of ‘natural evils’, events which ‘Nature’ precipitates, and for which no human agent can be held ‘responsible’ (though at times, of course, one may consider whether human agents might have planned better for the future, e.g. in the adequate construction of other people's dwellings or the maintenance of river levees; nevertheless, one cannot accuse even the most incompetent official of having ‘caused’ a tsunami; though one suspects that some exponents of the ‘Chaos theory’ might think about it).
Now, as it happens, the perceived existence of natural evil presents even more of a problem for the apologist for a deity. For, while he/she might have defended moral evil on the grounds that it constitutes the ‘price’ that humans pay for freedom (above), there really does not seem to be a comparable justification for the natural phenomena that kill people and render survivors homeless. Vardy (1992) essentially admits that this is the case, and concludes (as have many others) that God is ultimately un-knowable. More vociferous responses accrue within Neiman's (2002) account:
As we have noted (in Chapter 8), the person who ‘speaks his mind’ may find that his popularity undergoes something of a depreciation. Pierre Bayle was a seventeenth-century Protestant who had to escape religious persecution in Catholic France: ‘[those] who tried to leave for more hospitable countries like Prussia or the Netherlands were sent to the galleys, if male, and to the prisons, if female, for the rest of their miserable lives’ (Neiman, 2002, p. 117). Although Bayle managed to escape (to the Netherlands, where he published anonymously), ‘[his] brother in France was arrested in his stead and presumably tortured to death in the prison where he perished five months later. Scholars hold this to be the signal event in Bayle's life, undermining any possible belief in a just God who rewards the righteous and punishes the vile’ (Neiman, 2002, p. 117).
God is either willing to remove evil and cannot; or he can and is unwilling; or he is neither willing nor able to do so; or else he is both willing and able. If he is willing and not able, he must then be weak, which cannot be affirmed of God. If he is able and not willing, he must be envious, which is also contrary to the nature of God. If he is neither willing nor able, he must be both envious and weak, and consequently not be God. If he is both willing and able – the only possibility that agrees with the nature of God – then where does evil come from?
(Pierre Bayle, cited in Neiman, 2002, p. 118)
Natural evil poses something of a problem for a deity's apologists. Nevertheless, one straw that may be clutched at appears to be Immanuel Kant's conjecture that natural evil provides humans with even more opportunities to behave well (a proposition examined in Neiman, 2002). For, if the planet were to be without random calamity, if we always received our ‘just deserts’ in this life, then might we not come to behave ‘morally’ merely as a way of obtaining reward? We might behave well simply in the expectation that we should receive an existential ‘pat on the back’. However, on a planet where there is no simple pattern of moral cause and effect, i.e. where ‘bad things happen to good people’, morality has to be practised for its own sake. One chooses the ‘good’ because one believes that it is the ‘right’ thing to do, not because one expects a reward that is really little more than a bribe. Indeed, upon reflection, does this not seem to mirror the plight of Job in the Jewish scriptures (the Christian Old Testament)? Time and again Job is made to suffer. Bad things happen to him, and around him in the world, yet he remains ‘good’. His patience has been tried but he remains ‘true’. This defence (of natural evil) has intellectual merits, but it does make one wonder about the proposed affect/morality of the originator of such suffering.
Now, why have I subjected the reader to this theological interlude? Have we wandered off track? I shall argue that we have not.
The relevance to our onward journey of the theological conceptions of moral and natural evil is that they force us to acknowledge our assumptions about the root causes of human behaviour. Indeed, the distinction seems almost too neat, too convenient: for, it lends us an ‘easy’ contrast between those ‘bad’ acts that are ‘chosen’ (by a ‘free’ agent) and those that happen by chance (mistakes, accidents, and oversights). So, on the face of it, we might even suspect that this moral distinction shores up a legal distinction that is routinely rehearsed in courtrooms: that between ‘murder’ and ‘manslaughter’. For, in order for a verdict of murder to be returned, there must be evidence of ‘actus reus’ (i.e. an agent performed an act; he unlawfully killed someone) and ‘mens rea’ (i.e. the same agent intended to perform the act; he had ‘malice aforethought’). This second condition is not satisfied in manslaughter, where the actor did not ‘intend’ to kill his victim. Hence, if taken at face value, we have a very simple dichotomy – one that may be explored and delineated within the courtroom, namely that differentiating intended, freely chosen, ‘moral evil’ (murder), from circumstantial mishap, manslaughter (e.g. the perpetrator states that he had not intended to kill his victim; yes, he had wanted to ‘hit him’ but an accident occurred: the victim fell and struck his head on the pavement; thus, moral evil was compounded by natural evil). Upon such a reading, manslaughter always incorporates an element of ‘natural evil’ (though the size of its relative contribution is likely to vary from case to case).
All this seems (conveniently) clear, albeit at a rather superficial level. However, there is a problem: the distinction does not always hold and it seems even less likely ‘to hold’ in the future, if it must withstand a rising tide of neurobiological ‘advance’. What do I mean by this? Well, let me explain.
In order to defend a distinction between moral and natural evils, one has to believe that there is a clear line (or at least a convincing ‘gradient’) that divides the realm of human agency from biological (and other forms of) determinism. In other words, you really have to accept some ‘space’ for ‘Freedom of the Will’ (without such freedom there would be no ‘agency’). However, if you do not accept the concept of a ‘Free Will’, and you do not believe that humans make free choices, then you render moral evil (and, hence, moral accountability) incoherent. No organism can be held ‘responsible’ for a behaviour that has ‘simply’ been pre-programmed into it. To apportion blame under such circumstances would be rather like holding one's alarm clock ‘personally responsible’ for waking one up in the morning even though one had programmed it to do so! Therefore, if you are a hard-line determinist, whether in terms of biological, psychological, or social causation, you eject morality and responsibility from the human equation the moment you jettison ‘freedom of choice’. For if you contend that humans are not ‘free’ then you can hardly blame them for what they do.
So, to be provocative, for the radical determinist (and indeed the rather unimaginative ‘behaviourist’) the distinction between murder and manslaughter is meaningless; all that matters is that a given organism killed another given organism; the mechanism by which that killing occurred is of little relevance: there was no ‘intent’ (it is an illusion), no ‘choice’ (it is an illusion); the salient ‘diagnosis’ is solely one of ‘identity’: which organism did it? What they were ‘thinking of’ at the time is of no consequence.
However, if you think that there is a problem with this approach, then you may wish to acknowledge some element of human ‘control’; you may wish to apportion some degree of moral responsibility to human actors who engage in ‘intentional’ acts and, indeed (to the arch determinists out there), you may appear to be someone who is hankering after some sort of ‘ghost in the machine’ (see Chapter 4).
Now, is this not all rather unsatisfactory? We seem to have fallen into a situation where we have to choose between our ‘selves’ as constituting either automata or ghosts. The former have illusions, subjective mental states that seem real to them (and volitionally relevant), but which are purportedly not contributory to voluntary acts (after Libet); the latter enjoy a Cartesian existence, presumably ‘acting upon’ some kind of ‘soft machine’ through some form of ‘thought control’. Is there a ‘middle path’ open to us? Might there be some form of compromise?
Well, one could argue for ‘limited freedom’. Perhaps one could say that we all have a ‘little elbow room’ but that it is susceptible to constraint, by ‘natural’ processes. However, if you were to propose this line of argument, then one might anticipate another problem arising. For, if you were to allow ‘extenuating’ circumstances, ‘mitigating factors’, to be admitted as evidence, then something else would begin to happen.
As we offer such details as:
‘The perpetrator was drunk at the time of the offence, he comes from a violent home, he was abused as a child, he has low levels of serotonin-metabolites in his cerebro-spinal fluid, he once suffered a penetrating head injury,’
Well, what happens is that we diminish the domain of moral evil (i.e. we attempt to ‘explain away’ some of the perpetrator's ‘intended’ behaviours) and we ‘replace it’ with natural evil. In essence, we reverse the logic of Brutus' interlocutor (Cassius) in Shakespeare's Julius Caesar:
The fault, dear Brutus, is not in our stars,
But in ourselves, that we are underlings.
(This seems to me to constitute a refutation of ‘natural evil’, as applied to human ambition.) However, nowadays the statement might become:
The fault, dear Brutus, is not in our ‘selves’,
But in our frontal lobes…our supervisory attentional systems…our serotonin metabolism…our upbringing…
Indeed, if we continue to add mitigating factors, if we add in purportedly defective genes, reported prenatal and perinatal events, then we might predict that in the future there may be absolutely ‘no room left’ for moral evil at all. All ‘bad’ behaviour would (eventually) comprise a concatenation of ‘natural evils’. Indeed, there might never be the need to invoke the concept of a ‘rational criminal’, for reasoning itself would have been revealed to be the product of entirely specified antecedents. Hence, moral evil is (apparently) an ever-shrinking concept while natural evil is (apparently) an ever-expanding one. This appears to be the inevitable conclusion of a materialist volitional neuroscience.
Moreover, it is also worth noting that the prospects do not necessarily get any rosier for the ‘offender’ who is convicted and committed under such a system. After all, temporary incarceration as a form of punishment, rehabilitation, or as an ‘example’ to others only makes sense if based upon the assumption that people may change their behaviours. Its rationale (in these terms) is lost if the future is pre-determined and immutable. So, were one to become a radical determinist, then it might ‘make sense’ to ‘simply lock people up and throw away the key’ because they are ‘faulty specimens’ (not ‘moral agents’). They are incapable of change, ‘they have no choice’. If one really believes that antisocial humans are entirely determined, then the safest (and perhaps ‘easiest’) thing to do is to exclude them from society: a kind of quarantine for volitional ‘failure’:
In the final state there can be no more ‘human beings’ in our sense of an historical human being. The ‘healthy’ automata are ‘satisfied’ (sports, art, eroticism, etc), and the ‘sick’ ones get locked up… The tyrant becomes an administrator, a cog in the ‘machine’ fashioned by automata for automata.
(Alexandre Kojève, writing in 1950, cited by Mark Lilla, 2001, p. 135)
The impact of such a radical version of determinism is actually rather similar to the situation that one might anticipate emerging consequent upon an uncritical incorporation of Benjamin Libet's findings into the praxis of ‘Law and Order’. If you really believe that no action is ‘freely chosen’, then murder and manslaughter become indistinguishable; perceived ‘intent’ does not matter anymore.
What do we mean by ‘instrumental’ and ‘reactive’ forms of violence?
Now, if we move from an attempted ethical account of humanity's ‘bad behaviour’, towards a more phenomenological account of one of its exemplars – namely violence, then we encounter a rather similar dichotomy, one that bears a structural resemblance to those pertaining between moral/natural evil and murder/manslaughter. For, within the forensic psychological and psychiatric literatures there is often a distinction drawn between ‘instrumental’ violence and violence which is ‘reactive’ or ‘impulsive’. What is ‘violence’ and how do these terms differ? Well, the World Health Organization (WHO, 1996) defines violence in the following way:
The intentional use of physical force or power, threatened or actual, against oneself, another person, or against a group or community, that either results in or has a high likelihood of resulting in injury, death, psychological harm, maldevelopment or deprivation (italics added).
Note: the behaviour is intended; it is a voluntary behaviour on the part of the perpetrator (the WHO, at least, seems to apportion ‘Free Will’ to human actors). However, we may further define what the perpetrator is actually doing (below).
‘Instrumental violence’ has a specific ‘end in sight’; it is, by definition, ‘premeditated’. Hence, there may be very many exemplars of such violence, e.g. that occurring during bank robberies, boxing matches, sadistic killings, militarily aggressive procedures (e.g. pre-emptive ‘air-strikes’), and following the knowing pursuit of ‘victims’ (e.g. those identified on the basis of some ‘external’ characteristic such as gender, ethnicity, dialect, or accent; i.e. those carrying ‘marks’ of difference that are sought out by aggressors; Komar, 2008). Instrumental violence is ‘knowingly’ applied. The agent cannot reasonably claim to have been unaware of what he was doing throughout the course of what may have been a very long sequence of (consciously mediated) purposeful acts; a trajectory the outcome of which was apparent from the start. A pilot took off, knowing that he was carrying bombs. A sadist locked his hostage in the basement, knowing that he would later return to torture her.
Nevertheless, we should acknowledge that ‘instrumental violence’ does not ‘simply’, ‘always’, or even ‘necessarily’ equate to ‘moral evil’, for, depending upon one's ethical standpoint, not all instrumental violence may be ‘wrong’. The pilot defending his country may be said to be performing a moral ‘good’, a boxer may be engaged in ‘sport’. Those senior German military officers who attempted to assassinate Hitler might ultimately have been construed as having behaved heroically, saving many other citizens from suffering and death (had they succeeded). Indeed, some politicians in our own time have even attempted to exonerate those who torture for them because they assert that these agents, soldiers, and ‘private contractors’ are fighting a ‘war on terror’. So, although one may hold quite contrasting views concerning the morality of such exploits, the point is that a simple ‘collapsing’ of conceptual boundaries (between ‘moral evil’ and ‘instrumental violence’) would probably be inaccurate.
In clinical forensic practice, instrumental violence is often the mark of the personality-disordered individual, especially the antisocial person (exemplified in their most extreme form by the psychopath): someone who is unencumbered by concern for ‘others’ (Tables 19–21). Such people are hypothesized to lack ‘empathy’ – that immediate visceral awareness that one may experience of another's pain and suffering. Indeed, perversely, such suffering is often what the psychopath seems to be seeking (we shall say more about personality disorders in due course).
Table 19 Medical conditions that may present with ‘violence’
Organic mental states
Structural deficits (e.g. frontal lobe lesions, frontal lobe dementias, Huntington's disease)
Seizure-related activity (e.g. frontal and temporal lobe foci)
Episodic dyscontrol (aetiology and status uncertain)
Alcohol and substance-related disorders (e.g. intoxication with alcohol and withdrawal [delirium tremens]; in response to alcoholic hallucinosis)
Suspicion and ‘paranoia’; psychosis secondary to amphetamines, cocaine
Functional psychiatric disorders
Bipolar affective disorder, especially in the manic or hypomanic phases
Delusional disorders, especially ‘pathological jealousy’ (Othello's syndrome) and erotomania (de Clérambault's syndrome)
‘Cluster B’, especially antisocial, borderline, and narcissistic personalities
‘Cluster A’, especially paranoid personalities
Psychopathy, best understood as a subgroup, a severe manifestation, of antisocial personality disorder
‘Reactive/impulsive violence’ is, by its very nature, contingent. It is precipitated rather than planned, retaliatory (on some occasions) rather than motivated a priori, and its relation to morality may be even more ambiguous (than that of instrumental violence; above). We see reactive violence in a host of ‘normal’ and ‘abnormal’ situations, often where an urge towards destruction (of the self, others, or inanimate objects) is prominent: in English city centres on Saturday nights, when the inebriated perceive themselves to have been slighted, when those whom they hit respond with retaliation, when the intoxicated and addicted lose ‘control of themselves’, when an abused woman finally ‘breaks’ and stabs her abusive partner, when a romantic partner finds that they have been cuckolded (‘something snapped’), when those with organic brain states lose their inhibitions (e.g. consequent upon frontal lobe lesions and dementias, and late in the course of Huntington's disease), and in those who are experiencing radically disordered perceptual states (e.g. in the context of alcohol withdrawal/delirium tremens, acute psychosis, complex-partial seizures (also known as temporal lobe epilepsy) and, sometimes, the acute sequelae of head injuries).
The point is that each of these episodes may be regarded as less the consequence of the deliberations of a perpetrator's (hypothesized) ‘Will’, and more a case of their having ‘lost control’. Though the situations vary markedly, and not all of them are as ‘innocent’ or as ‘mis-adventurous’ as each other, in each case there is an element of the misunderstood, the unforeseen, and unintended (hence, the contrast with the premeditation inherent in instrumental violence, above). Nevertheless, not all exponents of reactive violence deserve the medical equivalent of a Monopoly game's ‘Get out of jail free’ card: we bear some responsibility for the situations into which we launch ourselves, so, while the person with Huntington's disease or a frontal lobe dementia is hardly responsible for their plight, the drunken reveller and the disinhibited aggressor, ‘high’ on crack cocaine, have contributed to their own present mental state. Hence, in the terms of E.W. Mitchell (1999), they are ‘meta-responsible’ for their ensuing predicaments. Note again, however, how this last statement also becomes incoherent if one posits that there is no such thing as ‘Free Will’ and that perceived intentions are irrelevant (above). According to such reasoning, the plight of the person with Huntington's disease becomes somehow ‘equivalent’ to that of the Saturday night reveller. Is this ‘just’? (I suspect that even the most radical determinist may have difficulty defending such a breach of ‘common sense’).
Table 20 The personality disorders as classified according to the Diagnostic and Statistical Manual, 4th edition (DSM IV)
Personality disorder (DSM IV category number)
Source: From the American Psychiatric Association (1994).
Table 21 Characteristics of an antisocial personality (DSM 301.7), according to the Diagnostic and Statistical Manual, 4th edition (DSM IV)
A: There is a pattern of disregard for and violation of the rights of others occurring since age 15 years, as indicated by three (or more) of these features:
B: The individual is at least 18 years of age.
C: There is evidence of Conduct Disorder with onset before 15 years of age.*
D: The occurrence of antisocial behaviour is not exclusively during the course of Schizophrenia or a Manic Episode.
Conduct Disorder (312.8), although a diagnosis of children, is very similar in its criteria to antisocial personality disorder in adults. It is characterized by a ‘repetitive and persistent pattern of behaviour in which the basic rights of others or major age-appropriate societal norms or rules are violated …’
Source: From the American Psychiatric Association (1994).
Nevertheless, the perpetrator of such (reactive/impulsive) violence is generally held to be less responsible for their behaviour (than the instrumental aggressor), either because overwhelming environmental factors had supervened (e.g. they were attacked) or else because their ‘inner balance’ had been perturbed by organic factors (e.g. the effects of a frontal lobe tumour).
Hence, one might posit the following:
a) That if the really ‘big problem’ facing humanity is that of ‘moral evil’ – those acts that we ‘choose’ to do even though we ‘know’ that they are wrong – then the focus of our attention may, at least initially, be most profitably directed towards ‘instrumental’ antisocial conduct.
b) When we come to consider ‘reactive/impulsive violence’ we are more likely to be dealing with a ‘full’ or ‘partial’ variant of ‘natural evil’ (i.e. behaviour that is, to some extent, beyond an individual agent's current volitional control).
c) While humanity itself may be characterized, though not ‘diminished’, by the sorts of changes of behaviour that accompany a defined organic pathology, a ‘natural evil’ (e.g. such as when a frontal lobe tumour precipitates disinhibition, aggression, and insensitivity towards others; humans are, after all, prey to many such pathologies), it (humanity) does often seem ‘diminished’ by the occurrence of moral evil (e.g. when a ruling elite attempts to exterminate a group it designates as ‘other’; when a man holds his daughter hostage in a basement for decades, repeatedly raping her; when ‘sex tourists’ from wealthy countries ‘holiday’ in poor countries in order to gain access to vulnerable children). Such behaviours say a lot about ‘humanity’ and where its moral ‘baseline’ may be located (below).
d) Hence, in very simple terms, natural evils inform us as to how an individual human actor may ‘break down’ under certain pathological conditions, while moral evils tell us something more about the limits of ‘normal human action’; they tell us what ‘we’ are capable of, ‘as a species’.
e) Therefore, to reiterate, our bigger problem as a species is ‘moral evil’. One might even speculate that this is what will ultimately ‘limit’ our survival, our ‘future’.
f) However, and seemingly paradoxically, ‘moral evil’ is a category that is apparently contracting all the time: it may become increasingly subsumed within ‘natural evil’, as more and more ‘explanations’ are found for instrumental (premeditated) violence.
g) So, if ‘moral evil’ were ever to collapse entirely into ‘natural evil’ (i.e. if the two categories were to be subsumed into one ‘natural’ category), then we would gain not only an accurate (deterministic) understanding of ‘bad’ behaviour but also a full understanding of ‘human nature’; and if that was the case then we might be forced to conclude that humans do not ‘choose’ to perform bad actions in any meaningful sense (i.e. ‘intentions do not matter’); we are, instead, ‘inherently flawed’ (in the sense of a ‘natural evil’; according to the OED, the embodiment of a ‘privation’ of the ‘good’). If this is true then we, as a species, constitute ‘an unsound or a corrupt form’ (Question 1; above).
Where is humanity's moral ‘baseline’?
Notwithstanding all of the above, if we take the view that there is ‘something wrong’ going on when human beings torture, rape, maim, and murder each other, then we are necessarily endorsing an evaluative statement; we are making a judgement about the events that we have observed. These (let us call them) ‘extremely antisocial’ behaviours clearly deviate from what we would regard as ‘normality’. However, a question arises as to whether our judgement of ‘normality’ is primarily ‘moral’ in its orientation, evincing a concern with what human beings ‘should do’, what it is that constitutes ‘correct’ human conduct (in terms of values such as ‘good’ and ‘evil’, ‘right’ and ‘wrong’), or whether it is essentially ‘statistical’ in orientation, evincing a concern with events that are relatively uncommon (‘un-usual’ in terms of their frequency, etc.); events that are remarkable solely because of their rarity, their apparent deviation from statistical ‘norms’ of human conduct. (Actually, we may experience an intuition that both modes of judgement are alive and well within us, but let us try to hold on to this distinction for the time being, in order to consider the following line of argument).
If my judgement of such matters is primarily ‘moral’, evincing a concern with how humans ‘should’ behave, then it is likely to invoke some form of moral standard and the judgements that I make may well incorporate notions of ‘moral evil’ (i.e. if I believe that human beings may behave ‘morally’, then it is likely that I also believe that they may behave ‘immorally’). I may hold that individuals are responsible (answerable) for their actions because their ‘bad’ actions emanate from the choices that they have made (and, hence, at a specific moment ‘preferred’). I accord these subjects the status of ‘moral agents’.
However, if my judgement is more ‘statistical’ in its intuition, if I am responding to whether or not an event is infrequent, then I may be more inclined to view those deeds recounted (above) as statistical aberrations, ‘outliers’, and hence, indicative of some kind of ‘disturbance’ of function. In other words, I may be more likely to invoke some form of ‘natural evil’ (i.e. something has gone ‘wrong’ within the agent or in their external environment). Moreover, I am likely to accord such a subject a reduced capacity for moral agency (i.e. a ‘diminished responsibility’). This line of argument may sound strange, but it often forms the core of a ‘psychiatric defence’ when an offender with a personality disorder does something so ‘bad’ that the press and the public wish them to be severely punished for it, while psychiatric opinion favours their incarceration in a ‘special hospital’ (arguing for a ‘medical’ disposal). A sceptic might posit that the ‘diagnosis’ here comprises solely a description of some very bad behaviour: the antisocial person performs antisocial acts ‘because’ he has an antisocial personality; the grounds upon which his antisocial personality is diagnosed comprise a history of antisocial acts; this is a tautology (and see Pincus, 2001, p. 78).
What I believe about the nature of the act performed (the alleged ‘offence’) is very likely to be influenced by the interaction of such ethical and statistical factors (Figure 94). What do I mean by this? Well, consider the following examples:
1. Take a hypothetical environment in which rape is a rare occurrence (so: a ‘bad’ act is also a ‘rare’ act). When it does occur we may be more likely to countenance ‘abnormality’ on the part of the perpetrator, e.g. we might ask ‘what is wrong with him?’ ‘Has he got some form of personality disorder?’ Further, if we identify such a ‘cause’ then we are effectively substituting ‘natural evil’ for ‘moral evil’. In other words, we make the ‘moral’ ‘natural’. At a very simplistic level, we might deduce that ‘he did a bad thing because he has a bad (i.e. an unsound or corrupt form of) mind/brain’.
2. Now consider another environment in which rape is a frequent occurrence (hence, a ‘bad’ act is also a ‘frequent’ act). When an army rapes a civilian population, do we look for an individual perpetrator's abnormality? Do we look for personality disorder? No, we probably accord the environment some role in these events (given overwhelming numbers and overwhelming arms, its effect is one of ‘facilitating’ ‘what can be gotten away with’; or else, we might say that it was, to some extent, ‘understandable’ that an individual soldier ‘followed the crowd’). However, afterwards, we are likely to make moral claims upon such perpetrators (or at least to attempt to do so). We may hope to ‘bring them to book’. As a human community we will probably see this form of behaviour as an instance of ‘moral evil’. (If you doubt this, then ask yourself this question: Would you seriously offer justifications for, or accept ‘excuses’ from, those who commit such mass rapes?)
3. Conversely, consider an environment in which altruism is a common occurrence (i.e. where a ‘good’ act is also a ‘frequent’ act). Do we invoke ‘abnormality’? No, why should we? In this case, the index behaviour is morally desirable and statistically ‘normal’. So, we may feel that there is no need to look for explanations since we are not often called upon to ‘explain away’ what is ‘good’. We may be completely uninterested in attempting to posit a distinction between a ‘moral good’ (what the subject chooses to do, knowing it is good) and a ‘natural good’ (what is determined by non-agentic factors). However, I suspect that while we would recognize an individual subject as behaving well (performing ‘moral good’), we would probably also apportion some of the credit for his ‘goodness’ to his environment or society at large (though this involves assumptions that we shall explore further, below). We might feel that his environment facilitates his ‘goodness’. Indeed, we might even posit that this is precisely why some people will choose to live in a specific milieu (e.g. a convent, monastery, retreat, or commune) so that their environment may assist them in leading moral lives (though we should not overlook the human tensions that exist within such communities). Consider, then, another issue that arises: for if ‘everyone is good’ in such a situation, the presence of natural goodness (i.e. a ‘good society’) may actually diminish the role of ‘moral goodness’ in our index individual; it may be ‘easier to be nice if others are nice’; also, the social influence is there to urge conformity. Such a critique is not without precedent: ‘If you love those who love you, what credit is that to you? For even sinners love those who love them. And if you do good to those who do good to you, what credit is that to you? For even sinners do the same. And if you lend to those from whom you hope to receive, what credit is that to you?
Even sinners lend to sinners, to receive as much again’ (Luke 6: 32–34). One way of parsing such a critique is to suggest that altruism is ‘easier’ if and when others are altruistic: one behaves morally when morality surrounds one. However, the severest test awaits those residing in the fourth category (below).
4. What is happening in an environment in which altruism is rare (hence, ‘good acts’ are ‘rare acts’)? How should we regard an exponent of kindness in such a place? In a brutal prison or concentration camp how might we evaluate the person who ‘retains’ her ‘humanity’? Do we have any interest in invoking the dichotomy conjectured between ‘moral good’ and ‘natural good’? Well, clearly, such a special person constitutes both a ‘good agent’ and a statistical outlier (i.e. she is statistically highly ‘abnormal’). Indeed, had she (our altruistic, good agent) instead constituted an extremely antisocial agent in an altruistic environment (category 1, above) then we should probably have searched for evidence of ‘natural evil’ (i.e. pathology). Hence, in the current case (‘goodness in a bad place’), should we not be searching for evidence of ‘natural goodness’? What is it that makes our agent so good in such a bad place? However, maybe we also experience an intuition that such a search for ‘natural’ explanations would detract from our agent's ‘goodness’. If she had ‘good genes’ and a ‘good frontal lobe’, would her goodness be less moral and more natural? Would it matter? Well, if all that we have said so far about ‘evil’ is reasonable (above) then we may simply have to face the fact that ‘goodness’ is another area where moral agency may gradually succumb to natural explanation. Indeed, if we are pessimists, we may believe that we have just thrown the baby out with the bathwater (i.e. in seeking to explain away ‘evil’ we have just jettisoned the ‘good’). However, if we are optimists then we may believe that should we ever be fortunate enough to find a cause of ‘natural goodness’ then at least it might allow us to engineer a better future for all humans. Maybe we could ‘make’ everyone ‘good’. 
Now, if your intuition is that you do not accept this argument (that there might be such a thing as ‘natural goodness’, which is capable of incrementally displacing ‘moral goodness’), then (once more) you are opting for leaving open a space for moral action, moral agency. Hence, you are opting to preserve a space for human freedom.
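The four cases above amount to a small two-by-two scheme, crossing an act's moral valence with its local frequency. Purely as an illustrative sketch (my own schematic of the argument, not the author's), it can be set down as a lookup table:

```python
# A 2x2 lookup (my schematic of the four cases in the text): the key is
# (moral valence of the act, its local frequency); the value is the
# attributional tendency described for that cell.

attribution = {
    ("bad", "rare"):      "suspect individual pathology ('natural evil')",
    ("bad", "frequent"):  "credit the environment, then press moral claims ('moral evil')",
    ("good", "frequent"): "seek no explanation; apportion some credit to the milieu",
    ("good", "rare"):     "moral admiration, or a search for 'natural goodness'",
}

def likely_attribution(valence: str, frequency: str) -> str:
    """Return the attributional tendency for a given (valence, frequency) cell."""
    return attribution[(valence, frequency)]

# The same class of act attracts different explanations depending on its cell:
print(likely_attribution("bad", "rare"))      # pathology is sought
print(likely_attribution("bad", "frequent"))  # the environment is credited first
```

The point of the sketch is only that the attributional tendency varies with the cell, not with the act itself.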
a) Should I believe that human beings are potentially ‘perfectible’, that in their ‘natural state’ they would behave well, that they might even be ‘restored’ to ‘grace’ (potentially), then I am more likely to see the antisocial (in all of its many forms) as a manifestation of some sort of perturbation (or lack) within the offender: in other words, I am more likely to believe that he could and would have behaved differently (i.e. better) had he not suffered a ‘fall’, whether in terms of his moral or material (physical, psychological, or social) well-being. If his bad actions are really quite extreme, then I may feel that they are both morally and materially/statistically aberrant. Nevertheless, crucially, because I believe that humans may be ‘perfected’, I may come to think of the perpetrator as abnormal, a ‘victim’ to some extent; he has deviated from the optimal path of human development. (However, notice also, that on this reading of the human condition, everyone becomes a potential victim, because all of us are demonstrably imperfect, so no one has achieved the (hypothesized) optimal state of humanity and hence: all of us only ever hold (at best) partial or, indeed, (at worst) no responsibility for our plight; i.e. in a ‘better world’ we would all have been ‘better persons’, better agents.) The inspiring aspect of this thought is its corollary: all are flawed, therefore, all might be improved.
b) However, what would happen if I were to adopt a different baseline assumption? Imagine that I accept that humans are ‘merely’ another product of evolutionary processes, natural selection, constituting essentially ‘just’ another higher ape. Hence, because we are animals who have had to survive in very many hostile environments, I come to believe that our nature is, by definition, a predetermined ‘mixed blessing’: we are not ‘perfectible’ because we do not have ‘perfection’ within us (there is no perfect moral ‘place’ from which to ‘fall’): we are animals who can and will behave ‘well’ and ‘badly’, according to our needs and desires; according to our circumstances. So, if I believe this to be the case I may be much less inclined to invoke moral evil; I may accept that human behaviour incorporates a large amount of natural evil; and I may conceptualize the ‘normal’ moral parameters of ‘normal humanity’ as admitting a great many ‘bad’ acts (‘because people will vary in their ‘morality’, just as they vary in their height, hair colour, and blood pressure’). A really deviant act would then constitute an extreme example of some very abnormal (i.e. statistically aberrant) phenomenology, something whose ‘particular badness’ comprises its sheer rarity or bizarreness of execution. However, I might still believe that most antisocial conduct is in fact ‘normal’, for a portion of the population (just as playing the guitar or liking poetry is ‘normal’ for some of the population), a product of natural variation; I might only look for indications of especial pathology in the most ‘statistically abnormal’ acts. Indeed, were I to do so, I would probably be ‘looking for’ specimens of natural evil, causes of a ‘broken agent’.
Moreover, under this assumption (of natural moral variation) it follows that if I hope to ‘improve’ human conduct, while assuming it to be ‘normally’ distributed, then I should not imagine that I am ‘restoring’ humanity to its ‘rightful place’ (because there is not one; its ‘natural place’ is where it is already). Instead, I am engaging in some form of ‘engineering’. I am hoping to ‘push’ the distribution of human evil in the same way that I might hope to ‘raise levels of literacy’, ‘increase rates of immunization against measles’, or ‘improve dentition’. Please note, I am not criticizing the desire to ‘improve humanity’, I am merely giving it its correct title: the quest is one of human ‘improvement’ via bio-psycho-social engineering; it is not about ‘restoring’ humans to their ‘natural state’.
However, remember that there is also another way of thinking about humanity. According to this contrasting construction, one might suggest that we are indeed ‘perfectible’ agents, that in our ‘normal’ state we would all be ethical agents, the producers of exclusively ‘good acts’. Hence, all of us ‘should’ (ultimately) occupy the same ethical space. On this argument, everyone is (at least potentially) perfectible, hence, there must be ‘reasons’ for their moral failures and these comprise two (by now, familiar) possibilities:
a) There is a natural evil that impacts upon our ‘perfectible’ neurobiology;
b) We have ‘Free Will’ and we have chosen (‘moral’) evil on occasion, even in the presence of ‘perfect’ neurobiology.
Furthermore, as I have implied repeatedly, whether we think humanity is not operating at optimal morality or is operating at its best ‘possible’ level (because it is naturally, inherently flawed), where we locate its ‘usual level of functioning’ depends a great deal upon what we think ‘normal’ people are doing. How ‘bad’ can people be while still constituting part of a ‘normal’ sample?
So, how ‘bad’ is still ‘normal’?
On a weekend in 2003, I conducted a highly artificial (and equally anecdotal) experiment (one which the reader might very easily attempt to replicate tomorrow): I made a note of all the stories contained within a British Sunday newspaper (a ‘quality broadsheet’). Here is a selection of what I found:
1. Two sons of a Middle East dictator were described at length; their reported conduct included the routine torture and rape of their subordinates and the beating to death of transgressors in front of multiple witnesses (who did nothing to stop them);
2. Another story concerned the exhibition of this pair's corpses to the media; they had been killed by military forces;
3. A different story concerned the conduct of groups of healthy, affluent young men (aged 20–30 years), in a Far Eastern country, who routinely enjoyed the gang rape of prostitutes; they experienced this sort of behaviour as a form of ‘bonding’;
4. There was an account of the indiscriminate bombing of families, including the elderly, women, and children, who were sheltering in a church in a disintegrating state in West Africa;
5. There was a debate concerning whether a ‘double agent’, acting on behalf of domestic security services, was allowed to kill with impunity while operating ‘under cover’;
6. A man was described who had spent periods of time in a mental hospital before becoming a recluse, repeatedly brandishing firearms, and finally shooting a teenage burglar in the back; he had found him (the ‘victim’) in his (the gunman's) house late at night;
7. There were descriptions of killings connected to a ‘human-trafficking’ operation; this involved young girls who believed that they were entering the United Kingdom in order to make ‘Bollywood’ films but who, in reality, were coming to be sold into prostitution;
8. There were the protracted, tawdry recriminations that followed the flood of alleged lies and counter-lies that led up to a country being taken to war (by a government, making its case, on the basis of a ‘dodgy dossier’).
Biological accounts of ‘abnormal’ violence
Notwithstanding our concerns regarding the extent of ‘normal’ human ‘evil’ (whether it is ‘moral’ or ‘natural’ in aetiology), let us start to construct an account of purportedly ‘abnormal’ events; an account that is primarily based upon hypothesized biological determinants. In the following sections, I shall examine some of the notable papers that have emerged within this field, papers that are often cited when authors provide mechanistic accounts of human violence. For the most part, the violence described here falls within the category of ‘reactive/impulsive’ (above) though as we approach the end of this section we shall encounter further cases that might, at least potentially, inform our knowledge of ‘instrumental’ violence. Throughout this section the pertinent question is:
To what extent do the ‘special’ data reported ‘explain’ the behaviour described and to what extent was the perpetrator of a deed ‘responsible for their actions’?
A strange case, associated with abnormal brain structure
A paper by Relkin and colleagues (dating from 1996) is often cited in this context. It concerns the case of a 65-year-old man (the improbably named ‘Spyder Cystkopf’), who, in the context of a domestic argument, killed his (second) wife of 10 years. Although the couple were reported to have enjoyed a good relationship, there arose an altercation during which Cystkopf's wife did something that she had never done before: she scratched his face (while simultaneously shouting at him and criticizing him for his ‘lack of emotionality’; Relkin et al. 1996, p. 173). Cystkopf denied that this assault elicited any emotion within him, but he likened his subsequent actions to ‘pulling my hand from a hot frying pan’ (p. 173). ‘I found myself hitting her again and again with my right hand’ (p. 173). His wife fell over and Cystkopf then put his right hand over her neck, leaning down on her with his full weight ‘until her body became lifeless’.
Afterwards, he attempted to remove any evidence of the struggle and then threw his wife's body out of a window (they lived in an apartment block in Manhattan), trying to make it seem like suicide, before collecting his briefcase and beginning his journey to work. He was apprehended.
What do we make of such a case, such a bizarre series of events?
Well, when the suspect was later evaluated he was found to be an intelligent man who had no history of violence or addiction. There were no prior forensic activities. He had enjoyed a stable family life: his first marriage had lasted 25 years, yielding two children, and only ended when his first wife had died of natural causes; his second marriage commenced a year later. He had also achieved a notably successful career as an advertising executive. Cystkopf was, for the most part, cognitively intact; however, he exhibited a restricted and, at times, inappropriate affect (he did not exhibit remorse for his wife's killing; he talked about it in a ‘matter of fact’ way) and he also exhibited subtle impairments in his right-hand function (he was dextral, but his execution of fine motor performances was slightly slower with the right hand than with the left). Furthermore, he exhibited some very interesting findings on brain imaging (see Figures 95 and 96).
His brain scans revealed that Spyder Cystkopf had a large arachnoid cyst lying between his left frontal and left temporal lobes, so that the former was displaced upwards while the latter was displaced downwards. Furthermore, the state of his skull and records of a cerebral angiogram, performed some years earlier, suggested that although this man had been born with his cyst it had probably only increased in size over his later adult years. Indeed, the reason that he had undergone the earlier angiogram was because of the occurrence of transient neurological symptoms in the past (and the cyst had not been discovered then).
Hence, Cystkopf exhibited a demonstrably abnormal brain ‘structure’ (Figure 95) and he could also be shown to have abnormal brain ‘function’: Figure 96 shows a positron emission tomography (PET) scan demonstrating clear evidence of reduced metabolism in his left frontal and left temporal lobe regions. Moreover, the patient evinced abnormal autonomic (sweat) responses to emotionally laden visual stimuli, findings that are consistent with those seen in patients with bilateral orbitofrontal cortical lesions (Relkin et al. 1996).
Now, if we recall some of the properties of the orbitofrontal cortex (OFC) that were touched upon in previous chapters of this book (especially Chapters 2, 4, and 8), then we might expect that an orbitofrontal impairment, secondary to a locally expanding lesion (in this case a cyst), might well impact upon an agent's ability to control his actions. In other words, it is conceivable that Cystkopf killed his wife as a consequence of his inability to control his reactive response to her assault. Is this feasible? Furthermore, if it is, is he guilty of murder or manslaughter, ‘not guilty by reason of insanity’, or simply ‘not guilty’? Does he bear any responsibility for his actions?
Actually, there is little ‘closure’ to be offered in this index case, for, not only did Cystkopf apparently accept a ‘plea bargain’, thereby accepting a conviction for manslaughter (the consequences of which he seems not to have fully understood, remarking that he planned to ‘return to work’ after a long period of incarceration, commencing after the age of 65 years), but he also declined a surgical excision of his lesion. So, not only was his ‘responsibility’ never tested in court but also we shall never know whether he might have ‘improved’ (or been ‘restored to normal’) following surgery.
Nevertheless, what do we make of this case? The published account is couched very much in terms of a clinical lesson, a didactic exemplar: that alleged offenders, who exhibit psychiatric or neurological abnormalities, should be fully investigated prior to trial (and, hence, disposal). However, I think its potential lessons are of even wider significance: for, this case demonstrates that the existence of organic evidence actually makes judicial decision-making far more complex than it might otherwise have been (assuming such findings are taken seriously). Consider the following:
1. On the face of things, at the time of his arrest, Cystkopf appeared to be a callous offender: he evinced little or no remorse for his actions, he was unresponsive to emotional stimuli (indeed this was the accusation that his wife had fatefully levelled against him), his behaviour following the killing evinced an attempt at deceit (cleaning up the apartment and continuing with his preparations for work), and the way that he treated his wife's body was shocking (throwing her out of the window);
2. Nevertheless, the very abnormality of such conduct, arising as it did in the late adult life of a previously ‘normal’ individual, points to something having ‘gone wrong’;
3. Furthermore, there was evidence of neurological dysfunction and gross organic disturbance.
4. So, the question is whether Cystkopf's behaviour and his pathology were meaningfully related. Was it all a bizarre coincidence or did his arachnoid cyst influence this man's behaviour on the day that he killed his wife? Can we ever know for certain? Well, we can never ‘prove’ a ‘null hypothesis’, so, by definition, we cannot prove that the behaviours and the lesion are totally coincidental, unrelated. Conversely, even though we have a plausible sequence of events, we cannot ‘prove’ that the lesion caused the event to happen (i.e. we cannot state unequivocally that the lesion ‘did it’). Eventually, we are likely to decide on the basis of probabilities (less graciously construed as a jury's intuition). What would a jury of ‘reasonable people’ decide in such a case?
5. Furthermore, even if the jury were to decide, ‘yes, his lesion had something to do with his behaviour’, what should its verdict be? ‘Murder’ seems an unwarranted conclusion: Cystkopf (apparently) lacked premeditation. Yet, how do we choose between ‘manslaughter’ (technically possible), which can, again, seem overly harsh (if he ‘lost control’ because of a lesion he did not choose to have residing within his brain), and ‘not guilty by reason of insanity’ (again, a technical possibility)? Indeed, even Cystkopf's reported behaviour after the killing does not help us here, for while his apparently callous disposal of his wife's body might demonstrate to an unsympathetic jury that his was a calculated intent to deceive (a ‘conclusion’ especially likely if the accused had never undergone a brain scan), to a more sympathetic jury might it not provide ‘merely’ another instance of his impaired emotionality? His emotions seem to have been grossly disturbed although ‘cognitively’ he was clearly ‘aware’ of what he had done.
6. What does one ‘do’ with the subject when he is convicted? Does the existence of his lesion mean that he is a ‘risk to the public’ or might we posit that with his particular ‘risk history’ it is very unlikely that that specific risk situation might ever arise again (perhaps Cystkopf would have to endure attempted assault before any new threat would arise, within him)? Should he get a long sentence, a short sentence, a suspended sentence (‘the loss of his wife is punishment enough’) or should he reside in a psychiatric hospital for the rest of his life (because he may never be of ‘low-risk’ to society)?
In the final analysis no human action is ever fully explicated (to the level of certainty): our provisional understanding is always contingent upon what the subject actually ‘says’ (even if we do not believe them).
(Spence, Chapter 8)
An unusual genetic endowment
Another source of biological data is provided by a subject's genetic endowment, their ‘genotype’. On the face of it, this provides a biological substrate that may be even more pervasive in its influence (and, indeed, ‘earlier’ determined) than that arising from a discrete brain lesion (such as the cyst, above). A key paper in the history of genetic accounts of violence is that of Brunner and colleagues (1993), who described a very rare genetic abnormality affecting a large Dutch family. The genetic abnormality comprised a point mutation on the gene that codes for the structure of the enzyme monoamine oxidase A (MAOA), an enzyme that we have encountered previously (in Chapter 4): it is implicated in the catabolism (i.e. the breakdown) of the neurotransmitters dopamine, noradrenaline (norepinephrine), and serotonin. The effect of the point mutation is to precipitate a ‘complete and selective’ deficiency of MAOA (Brunner et al. 1993), so that none of these neurotransmitters is metabolized (hence, at a simplistic level, they might each be expected to reach ‘higher than normal’ levels in the brains of those affected). Furthermore, such a sequence of events would be predicted to result in lower levels of 5-HIAA (a metabolite of serotonin) in the affected subjects’ cerebrospinal fluid (a finding that is itself associated with reactive/impulsive violence; see Chapter 4).
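The ‘simplistic level’ of prediction just described can be made concrete with a toy first-order turnover model (the parameters and rate constants here are my own illustrative inventions, not data from Brunner et al. 1993):

```python
# Toy steady-state model of monoamine turnover (illustrative parameters of
# my own; not data from Brunner et al. 1993). Transmitter is synthesized at
# a constant rate s and cleared by MAOA plus other routes (reuptake, MAOB),
# each modelled as first-order decay: dN/dt = s - (k_maoa + k_other) * N.

def steady_state(s: float, k_maoa: float, k_other: float) -> float:
    """Level at which synthesis balances clearance: N* = s / (k_maoa + k_other)."""
    return s / (k_maoa + k_other)

normal    = steady_state(s=1.0, k_maoa=0.8, k_other=0.2)  # intact MAOA
deficient = steady_state(s=1.0, k_maoa=0.0, k_other=0.2)  # complete MAOA loss

print(f"normal: {normal:.1f}, MAOA-deficient: {deficient:.1f}")  # 1.0 vs 5.0
```

In this parameterization a complete deficiency raises the predicted steady-state transmitter level fivefold, while the flux through MAOA (and hence the production of metabolites such as 5-HIAA) falls to zero, matching the qualitative predictions in the text.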
Now, another feature of this mutation is that the affected MAOA gene ‘resides’ on the X chromosome of the human genome. Therefore, while a male (XY) individual who receives this gene from his mother suffers the consequences of this unopposed abnormal gene (because he has only the one X chromosome, and it is carrying the mutant form), women (XX) who receive the gene are (at least, in this family) heterozygotes, i.e. all women have two X chromosomes so those affected by this mutation have one affected X chromosome (carrying the mutation) and also one normal (i.e. unaffected) X chromosome. Hence, while women may carry the abnormal gene, they seem not to express it (i.e. they are probably ‘saved’ by their ‘normal’ copy of the MAOA gene, residing on their ‘other’ X chromosome). Men, however, with only the abnormal X chromosome, exhibit signs of a ‘phenotype’ related to their mutant MAOA gene.
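The X-linked logic of the preceding paragraph can be condensed into a few lines (a deliberately minimal sketch of my own, not the authors' analysis):

```python
# Minimal sketch (mine) of the X-linked logic: a male has a single X
# chromosome, a female has two, and one functional MAOA copy suffices,
# so only individuals with no functional copy express the deficiency.

def expresses_deficiency(x_alleles: list) -> bool:
    """True only if every X chromosome carries the mutant MAOA allele."""
    return all(allele == "mutant" for allele in x_alleles)

male_carrier   = ["mutant"]            # XY: one X, and it carries the mutation
female_carrier = ["mutant", "normal"]  # XX heterozygote: one copy is spared

print(expresses_deficiency(male_carrier))    # True: the syndrome is expressed
print(expresses_deficiency(female_carrier))  # False: the carrier is 'saved'
```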
In this Dutch family, males who carried the abnormal gene exhibited a syndrome characterized by borderline ‘mental retardation’ (now termed ‘learning disability’) and behavioural abnormalities, the latter comprising ‘impulsive aggression, arson, attempted rape, and exhibitionism [i.e., exposing themselves]’ (Brunner et al. 1993, p. 578).
The family tree is shown in Figure 97. Because the family, the ‘kindred’, is relatively large there are sufficient relatives for the investigators to have been able to demonstrate a clear association between the abnormal behavioural syndrome (above) and the gene's mutant form: there were five affected males, all of whom carried the mutation and all of whom exhibited the syndrome, and 12 unaffected males, none of whom carried the gene or manifested the syndrome. Moreover, in those who underwent further investigations, there was evidence of a disturbance in their enzyme activity (as hypothesized, above).
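As a rough indication of how improbable such perfect co-segregation would be by chance alone (a back-of-envelope calculation of my own, not an analysis reported in the paper): if 5 of the 17 informative males carry the mutation, the probability that those 5 carriers are exactly the 5 affected males follows the hypergeometric distribution.

```python
# Back-of-envelope check (mine, not the authors'): under the null hypothesis
# that the 5 mutation carriers are a random draw from the 17 informative
# males, the chance that they coincide exactly with the 5 affected males is
# C(5,5) * C(12,0) / C(17,5).

from math import comb

p = comb(5, 5) * comb(12, 0) / comb(17, 5)
print(f"P(perfect co-segregation by chance) = {p:.2e}")  # ~ 1.62e-04
```

Even with this small kindred, a chance association of this completeness would be expected fewer than 2 times in 10,000.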
So, if taken at face value, we seem to have a very clear indication that an abnormal gene, a mutant form of the gene coding for the structure of MAOA, impacts human development so that not only is an affected male rendered ‘learning disabled’ but also impulsively aggressive. A number of comments might be made:
1. The association between the abnormal gene and the abnormal behaviour is plausible since we ‘already’ know that the neurotransmitters likely to be impacted by such a mutation are themselves likely to impact voluntary behaviour and especially impulsivity (a case could be made for ‘high levels’ of dopamine and noradrenaline and ‘low’ levels of serotonin being involved in impulsivity).
2. Clearly the gene and the behaviour ‘coincide’, they co-occur within this kindred, and there appear to be no exceptions to this ‘rule’.
3. There is little information given regarding the behaviours involved, other than their (generally) impulsive nature (for instance, we do not know to what extent the incident(s) of attempted rape or exhibitionism were planned or ‘reactive’). However, the authors themselves noted that ‘it should be stressed that the aggressive behaviour varied markedly in severity and over time, even within this single pedigree’. This suggests that a static genetic lesion (a point mutation) was of varying impact upon behaviour, or subject to varied salient environmental interactions over time.
4. Finally, it is conceivable that the association with ‘violence’ per se is a red herring. For, as the authors noted, all the affected males had learning disabilities and much of their disturbance arose in the context of ‘a tendency toward aggressive outbursts, often in response to anger, fear, or frustration’ (Brunner et al. 1993). Hence, a counter-argument here would be that their problem is mainly one of learning disability, associated with poor coping skills, and that the impulsive violence reported is actually a consequence of such a failure to cope. In other words, the association ‘holds up’ not because it is primarily to do with violence but because the abnormal gene co-segregates with learning disability, within this particular kindred.
Supposing, for the sake of argument, that the genetic mutation in this family is the root ‘cause’ of the male violence reported. What is our response? Are these men ‘free’? Are they ‘responsible’ for what they do? This is a moot point in a sample of people residing on the borderline of learning disability since, even in the absence of violence or further abnormal features, we would probably hold that they are likely to experience reduced capacity in certain areas of life, in certain situations, a priori. However, we lack the information to take this line of inquiry any further here.
Nevertheless, assuming that violence is associated with an abnormality of the MAOA gene, we might posit that:
1. Subjects may be less able to control their behaviour for ‘organic’ reasons (a ‘natural evil’);
2. However, the very fact that such acts are intermittent suggests that there are periods during which (and environments within which) affected individuals may exert ‘sufficient’ control over themselves.
3. So, what should society ‘do’ with such people? If they offend should they be given more lenient sentences (in the legal system) or potentially longer terms (in hospital settings)? If they have not offended, but carry the gene, should they be ‘watched closely’, ‘tagged’ (as suggested by one recent reviewer; Moosajee, 2003), or detained indefinitely (clearly a breach of their civil liberties)?
4. An enlightened, though perhaps rather paternalistic, response might be to ‘manage’ the environment (so-called ‘nidotherapy’). If an individual of limited capacity only becomes violent under very specific circumstances, following very specific ‘provocations’, then perhaps their environment can be adjusted to help them (and us) remain at peace: there are ways of circumventing some disabilities, which may be relatively straightforward to arrange, e.g. supported accommodation, direct debit payment of bills, provision of meals, and protected social activities. Naturally, this will depend upon the extent (and funding!) of the services available.
A bad early life and the variation within a gene
What one may assert with greater confidence is that recent advances in experimental design and data-analytical techniques have facilitated the performance of ever more ambitious and sophisticated studies concerning the emergence of complex human behavioural traits under a variety of potential, partial, competing influences. Our next example is a particularly sophisticated study that benefits from the availability of immense resources and a very long period of ‘follow-up’ (the duration over which human subjects were observed and intermittently assessed). This study is by Caspi and colleagues (2002) and it derives from the highly productive Dunedin Multidisciplinary Health and Development study, conducted over recent decades in New Zealand. The basic design of the study involves a cohort of 1037 children ‘enrolled’ at birth (of whom 52% were male) and followed up at ages 3, 5, 7, 9, 11, 13, 15, 18, 21, and 26 years (by which time the cohort was still 96% intact; Caspi et al. 2002). Various forms of data have been collected and multiple investigations performed on this sample, and it has yielded much that is of interest to psychiatrists. However, in the present context, we are interested in an observed association between measures of a particular genetic variant (a ‘functional polymorphism’ that is distributed across the ‘normal’ population), accounts of childhood maltreatment, and assessments of antisocial conduct among the cohort's males as they entered adulthood (specifically, those males having four Caucasian grandparents, thereby reducing the sample's genetic heterogeneity).
Once more, the gene involved is the one encoding the MAOA enzyme (located on the X chromosome at Xp11.23–11.4). However, in contrast to the Brunner study (above), in the present case we are dealing with the effects of ‘more or less’ of the enzyme being expressed (synthesized), rather than its complete absence (as was the case following the point mutation described above). Hence, all males were tested for their genotype (their version of this single gene on their sole X chromosome) coding for higher or lower levels of MAOA expression, and this genotype was then analysed together with their childhood experience (‘absent’, ‘probable’, or ‘severe’ maltreatment) and their subsequent patterns of antisocial conduct during adolescence and early adult life.
Fortunately, the records accrued in the course of this study were sufficiently detailed for investigators to be able to derive four measures of antisocial conduct among these males:
1. Diagnoses of adolescent ‘conduct disorder’ (essentially the childhood antecedent of adult antisocial personality disorder (ASPD)), defined according to the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV);
2. Convictions for violent crimes (identified via the Australian and New Zealand police database);
3. A predisposition towards violence as assessed at interview when they were aged 26 years; and
4. Symptoms of ASPD as obtained from third-party informants (again, when the subjects were aged 26 years).
These findings are particularly striking when one examines the impact of the (maximally) affected group in terms of their ‘volume’ of antisocial activity: this 12% of the male birth cohort accounted for 44% of the cohort's violent convictions; furthermore, among those with the low-activity MAOA genotype who were severely abused in childhood, 85% would go on to perform some form of antisocial behaviour.
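A quick back-of-the-envelope check, using only the two percentages quoted above (12% of the male cohort; 44% of its violent convictions), makes the concentration of offending concrete. This is an illustrative sketch, not an analysis of the original data:

```python
# Figures quoted above from the Dunedin cohort (Caspi et al. 2002):
# the maximally affected group formed 12% of the male birth cohort yet
# accounted for 44% of the cohort's violent convictions.
group_share_of_cohort = 0.12
group_share_of_convictions = 0.44

# How over-represented is that group, relative to an even spread of
# convictions across the cohort?
concentration = group_share_of_convictions / group_share_of_cohort
print(f"affected group: {concentration:.1f}x over-represented")  # ~3.7x

# Equivalently, the remaining 88% of the cohort accounted for only the
# other 56% of convictions.
others_concentration = (1 - group_share_of_convictions) / (1 - group_share_of_cohort)
print(f"rest of cohort: {others_concentration:.2f}x")  # ~0.64x
```

In other words, per head, the affected group's conviction rate was several times that of the rest of the cohort, which is what makes this small group so salient in the analyses.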
This is a remarkable finding and it suggests that for a small (absolute) number of people relatively few predictive factors may be informative of their future life trajectory. The sample number in each of the pivotal analyses in this paper was well over 400, though the ‘core’ group of offenders (in the analyses displayed) was often 12 or 13 men. This is perhaps the best example of a study defining a (‘determined’) risk of violence.
However, there is also another way of construing the data. For, enzyme expression in itself (i.e. on its own) did not determine who would go on to offend: ‘the main effect of MAOA activity on the composite index of antisocial behaviour was not significant’ (Caspi et al. 2002). So, had this been a study of purely genetic risk, it might have produced a negative finding (cf. Brunner et al. 1993). Instead, it was early life experience that was the major cause of later antisocial conduct: ‘the main effect of maltreatment [alone] was significant’. Hence, although one's attention may be drawn to those males who would end up offending in adulthood, the remarkable group are actually those males who were severely abused in childhood, but did not go on to offend (they number 18–20 in many of the analyses presented). High MAOA expression appears to have ameliorated the effect of severe maltreatment in childhood (and this may relate to the role of neurotransmitters in modulating stress responses; Caspi et al. 2002).
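The statistical point here — a strong gene-by-environment interaction coexisting with a weak genetic main effect — can be made concrete with a toy calculation. The cell counts below are invented purely for illustration; they are emphatically NOT the Dunedin data:

```python
# Hypothetical (genotype, maltreatment) -> (n_antisocial, n_total) counts.
# Invented for illustration only; NOT the Caspi et al. (2002) data.
cells = {
    ("low_MAOA",  "none"):   (25, 230),
    ("low_MAOA",  "severe"): (17, 20),   # 85% antisocial
    ("high_MAOA", "none"):   (30, 230),
    ("high_MAOA", "severe"): (5, 20),    # 25% antisocial
}

def rate(subcells):
    antisocial = sum(a for a, n in subcells)
    total = sum(n for a, n in subcells)
    return antisocial / total

# Genotype 'main effect': pool each genotype across maltreatment levels.
low_pooled  = rate([v for (g, m), v in cells.items() if g == "low_MAOA"])
high_pooled = rate([v for (g, m), v in cells.items() if g == "high_MAOA"])

# Interaction: compare genotypes within the severely maltreated subgroup.
low_severe  = rate([cells[("low_MAOA", "severe")]])
high_severe = rate([cells[("high_MAOA", "severe")]])

print(f"pooled rates:   low {low_pooled:.2f} vs high {high_pooled:.2f}")
print(f"if maltreated:  low {low_severe:.2f} vs high {high_severe:.2f}")
```

Because the maltreated subgroup is a small minority of the cohort, the pooled (main-effect) rates for the two genotypes barely differ, while within the maltreated subgroup the genotypes differ dramatically — which is the shape of result the Caspi study reports.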
So, if one is faced with a population of children, one might search for genes to establish who might be at risk of later offending. One might look for genes that protect some of them from the effects of abuse. Alternatively, one might create a society in which children were not abused in the first place. As this study demonstrates, preventing abuse would substantially reduce adult rates of antisocial conduct.
Again, as in the earlier study of group data, it is difficult to express an opinion as to whether individual subjects are ‘free’ or ‘responsible’ (we do not know the specific details of the offences recorded). For the arch determinist, their conduct would seem to be specified by antecedent factors and events (genes and abuse) – powerful forms of ‘natural evil’. However, even here we are confronted with the impact of human agency. For, after all, who was it that performed such abuse?
Furthermore, this study links us to a theme that runs throughout several studies of young adults who are ‘living’ on death row in North American prisons (see Pincus, 2001, for a review). Such offenders, who have committed violent crimes, have often been subjected to severe abuse themselves in childhood, and subsequently have accrued a variety of neuropsychiatric and neuropsychological impairments. One study by Lewis and colleagues (1988), of 14 juveniles held in four states in the United States of America (40% of those juveniles in jails across the country who were awaiting execution at that time) revealed that nine had major neurological disorders, seven were psychotic, seven had organic impairments, only two had IQs greater than 90 points, and 12 had been subject to brutal abuse (five had been sodomized by members of their own families). Now, one might argue that all of this represents a concatenation of ‘natural evils’, impacting human cerebral control systems, yet one might also pose the question:
What is the ‘rational’ response of a developing agent towards a society of other agents that has allowed it (the developing agent) to undergo protracted abuse and suffering, without protection? How does even a ‘normal’ agent respond to such circumstances?
One possible disadvantage of our current, albeit understandable, enthusiasm for mechanistic accounts of the brain and psychological function is that it may obscure the necessity for less technical and more ‘humane’ modes of intervention. It may also allow society at large to sidestep a rather inconvenient question: when one lives in a civil society, what is the nature of one's obligation(s) to one's neighbour? Or, to invoke an older literature: ‘Am I my brother's keeper?’
Behaving badly, in the community
As I have already mentioned on several occasions, it is widely recognized that there is an association between frontal lobe impairment and overt violence. In people who have undergone frontal injuries in childhood or adulthood, violent conduct is more common (Anderson et al. 1999; Grafman et al. 1996) and, conversely, among offenders who have committed violent crimes ‘frontal’ impairments are more often elicited on neuropsychological testing (see Brower & Price, 2001; Blair, 2003; Pincus, 2001; Raine, 1993). There have been several studies of incarcerated antisocial people examining the structure and function of their brains (e.g. Raine et al. 1997b). However, whenever we encounter a ‘life-long’ condition, as opposed to the consequence of a defined, specific lesion, arising at a defined moment in an individual's life, there is often a question concerning the contribution of possible confounding variables: antisocial people have often sustained head injuries along the way, abused illicit substances and alcohol, and they may have experienced co-morbid psychiatric and neurological disorders, so it can be difficult to obtain unequivocal findings (i.e. findings which relate solely to their ‘personality disorder’).
However, a landmark study in this area is that of Adrian Raine's group examining antisocial people living in the community in Los Angeles (Raine et al. 2000). Raine posited that such community-based individuals would provide a more valid representation of antisocial people in general, in contrast to those who have been studied while incarcerated. He also studied several control groups, i.e. not just ‘healthy’ people (who might, of course, be expected to differ from ‘antisocials’ in multiple ways), but also people who exhibited substance misuse disorders and others who carried psychiatric diagnoses. Hence, in this study there were three ‘control’ groups.
Raine and colleagues recruited participants from employment agencies located in Los Angeles and assessed them on a great many measures. Pivotal to this process was the elicitation of ‘honest’ accounts of past misdemeanours – something of a delicate topic if a potential subject had yet to be apprehended for an ‘offence’. So, the investigators took steps to guarantee that any details that might be divulged to them would remain confidential, without incurring the prospect of prosecution:
To help minimize false negatives (denial of violence by truly violent offenders), a certificate of confidentiality was obtained from the Secretary of Health, Education and Welfare, Washington, DC, that protected the research investigators under section 303(a) of Public Health Act 42 from being subpoenaed by any federal, state, or local court in the United States to release the self-reported crime data. Consequently, subjects were protected from the possible legal action that could be taken against them for crimes they [had] committed and admitted in the interview, but which were not detected and punished by the criminal justice system.
(Raine et al. 2000, pp. 120–121)
All the participants underwent structural MRI brain scans and a skin conductance experiment, one that probed their autonomic (sweat) response when they had to read aloud a 2-minute long, self-composed statement about their personal faults. The groups studied comprised 21 people with ASPD, 34 healthy subjects, 26 subjects with substance misuse disorders, and 21 ‘psychiatric controls’. All the subjects were male.
What were the behaviours that the ASPD group admitted to? Well, there were some very serious offences (and, hence, offenders) among them:
• 52.4% reported having attacked a stranger and having caused bruises or bleeding
• 42.9% reported having committed rape
• 28.6% had attempted or completed homicide.
What were the findings? Well, Raine and his colleagues found that ASPD subjects exhibited a specific reduction in the volume of the grey matter within their prefrontal cortices (the prefrontal cortex was examined as a whole; they did not subdivide it into specific regions), and this reduction was significant in comparison with each of the control groups studied. Furthermore, ASPD subjects exhibited a significantly reduced skin conductance response when reading out their faults, i.e. they exhibited less autonomic responsiveness (this is in keeping with previous studies). Indeed, the magnitude of their reduced responsiveness correlated with the reduction of their prefrontal grey matter volume (suggesting, once again as hypothesized, that the prefrontal cortex plays a role in modulating stress response).
Hence, Raine and colleagues obtained evidence for a prefrontal lobe deficit being linked to the performance of violent antisocial acts, and they managed to control for many of the confounding variables that usually undermine such studies. Therefore, the implication is that those with ASPD may have been hampered in their ability to modulate their behaviour, particularly their ability to suppress inappropriate responses, and that this impairment is linked to reduced prefrontal cortical grey matter volume.
Such a study provides another example of a scientifically meticulous design, and a method, an approach, that managed to access a marginalized group of people (antisocial subjects, living in the community). So it may seem mean-spirited to mention one caveat. However, this is just to clarify what must be acknowledged: the study is cross-sectional; it examines subjects at one point in time. Therefore, on its own, it cannot tell us whether the effects seen constitute ‘cause or effect’. Do people with ASPD exhibit behaviour that is caused by their brain findings or are their brain findings the consequence of their lifestyles? We cannot say, but we should acknowledge that Raine et al.'s findings are consistent with what should have been expected on the basis of childhood development studies: his Los Angeles sample tended to be taller than their controls and they exhibited lower resting-state heart rates (these are findings that have also been reported in children who later go on to become antisocial; see Raine, 1993). Hence, it seems as if people ‘destined’ to develop ASPD deviate from ‘normal’ rather early in their lives (though this in itself does not tell us whether that deviation is genetic, environmental, or social in origin, or indeed a combination of all three).
By now, it will have become apparent that there are some very accomplished studies in this field, performed by investigators attempting to elucidate why it is that some human beings grow up to become ‘antisocial’, to be the ‘kinds of people’ who will repeatedly harm others. The four papers that we have just examined in some detail represent only a selection of ‘highlights’ drawn from across this field, though their findings are indicative of an emerging consensus: that antisocial people exhibit biological differences relative to the ‘normal’ population, that such differences may arise very early in life (our exception, above, was the case of Spyder Cystkopf, an example of what might be called ‘acquired sociopathy’; Blair, 2004), and that (in general) an account may be offered of such differences implicating brain systems that are concerned with impulse control and response regulation, especially under conditions of stress. When authors specify a relevant prefrontal region it tends to be the OFC (below) and when they examine neurotransmitter systems these tend to be the major systems that we have reviewed in Chapter 4: the serotonergic, dopaminergic, and noradrenergic systems (and see Nelson & Trainor, 2007).
Hence, by way of summarizing what is a very large field, I ask the following question:
If one were to be perverse enough to actually want to ‘create’ an antisocial individual then how might one set about doing such a thing?
Well, sadly, there are a number of ‘positive’ leads to go on. Table 22 presents what we might glean from the work of major researchers in this field, not least the groups of Adrian Raine, Avshalom Caspi and Terrie Moffitt, Jonathan Pincus, James Blair and others, together with data arising from such diverse projects as the UK Government Home Office's 2003 Crime and Justice Survey, and the Project on Human Development in Chicago Neighbourhoods. If one considers the list of possible antecedents to an antisocial adult life, then one thing seems very clear (to me): that there are very few items over which the young, developing agent has any direct control. Where he is born and to whom he is born are predetermined (relative to his life). His genetic endowment, by definition, precedes ‘him’. The conduct of his ancestors, his parents, his neighbours, even the youths in his neighbourhood is largely beyond his control. When one looks at the combination of biological, psychological, and social endowments that ‘predict’ the emergence of an antisocial man, a second point also becomes obvious: if he has any freedom at all then it arises within very narrow constraints. I do not claim that he is not free. I merely speculate that his ‘freedom’, if it exists, is likely to be very limited.
Table 22 How to make an antisocial person
Empirically based suggestions
Take a male child.
Have him born unwanted, possibly following a failed termination of pregnancy, separate him from his mother in the first year of life.
Give him antisocial parents and/or a family history of criminal activity.
Raise him in close proximity to an antisocial father.
Raise him in a poor household, in a disruptive neighbourhood; make sure it is a neighbourhood with low levels of ‘informal social control’ and diminished ‘collective efficacy’; make him a victim of crime.
Give him delinquent peers.
Make him tall for his age at 3 years (and also malnourished).
Give him a low resting heart rate and reduced autonomic responsiveness to alerting stimuli or threats at 15 years.
Give him a personality with ‘callous traits’.
Expose him early in life to drugs and alcohol (do the same to his friends).
Bestow upon him a low IQ, attention deficits; later on, add a severe mental illness and multiple head injuries.
Abuse him from an early age.
Sources: Blake et al. (1995), Brennan et al. (1997), Eisenberg (2005), Farrington (1995), Ishikawa et al. (2001), Jaffee et al. (2003, 2004), Lewis et al. (1988), Liu et al. (2004), Luntz and Widom (1994), Lyons et al. (1995), Mullen (1992), Pincus (2001), Raine (1987, 1993), Raine and Venables (1984), Raine et al. (1990, 1994, 1995, 1996, 1997a), Strüber et al. (2008), Viding et al. (2005), UK Home Office (2003) and Chicago Neighbourhoods Project. (See Sampson, 1997 and Sampson et al., 1997).
Furthermore, one might well predict that someone emerging with a ‘full house’ of such factors (as shown in Table 22) would exhibit very little empathy towards others. If empathy has to be learned from socialization (as posited by James Blair, 2004, and Paul Mullen, 1992), then where do the opportunities for such learning arise within such a pathological matrix?
Putting emotion and action together
Despite the many social and psychological adversities that are likely to have impacted the developing antisocial brain, there is a case that can be made for postulating a relevant biological substrate, a system, via which abnormal aggression may manifest itself, later, during adult life. In this account I shall draw very much upon the work of James Blair, a British psychologist now working at the National Institute of Mental Health, Bethesda, United States of America.
Blair (2004) utilizes the distinction (that we have already outlined) between reactive/impulsive violence and violence that is ‘instrumental’ or premeditated. While antisocial people and those with other ‘Cluster B’ personality disorders (such as ‘borderline’ personality disorder) exhibit increased rates of reactive/impulsive violence, psychopaths are generally distinguished by exhibiting both reactive and instrumental forms of violence, especially the latter (Table 23). Indeed, their crimes are classically ‘cold-blooded’ (Woodworth & Porter, 2002). So, what is it that might be capable of linking, while also potentially differentiating, the neurological bases of such different modes of violence? Blair posits the contribution of the ‘reactive aggression system’, a system with a long evolutionary past, discernible within many animal species and also, crucially, ‘present’ within humans (Figure 98). This system comprises key subcortical elements, including:
1. The medial nucleus of the amygdala (located within the temporal lobe),
2. The medial hypothalamus (pivotal to the autonomic system), and
3. The dorsal periaqueductal grey (PAG).
Table 23 Key features of a psychopath
Emotional/Interpersonal items (‘factor 1’)
Glib and superficial
Egocentric and grandiose
Lack of remorse or guilt
Lack of empathy
Deceitful and manipulative
Social deviance items (‘factor 2’)
Poor behavioural controls
Need for excitement
Lack of responsibility
Early behavioural problems
Adult antisocial behaviour
Source: Adapted from Hare (1993, p. 34).
The ‘amygdala’ plays a crucial role in the organism's relationship to aversive or threatening stimuli within its external environment. Hence, there are many brain-imaging studies demonstrating that human subjects ‘activate’ their amygdalae when they are exposed to angry faces (whether or not they are consciously attending to those faces). The implication is that the amygdala provides a ‘fast’, subcortical, implicit means of recognizing threats in the environment, as a consequence of which the organism may be better prepared to respond appropriately. In animal models, a low level of stimulation (e.g. consistent with a distant threat) will cause the organism to freeze. As the threat approaches, or stimulation increases, the organism will attempt to escape its environment. When the threat is very close and escape becomes impracticable, the organism will respond with reactive aggression (see Blair, 2004, for a review of the relevant animal literature).
However, the amygdala is also implicated in ‘learning’ about threats and learning the appropriate response to dangerous situations. Hence, in ‘aversive’ conditioning an organism learns that a particular behaviour will elicit painful consequences (i.e. it will be ‘punished’). Hence, the organism learns not to perform that behaviour! In passive avoidance learning, the organism learns not to respond at all to certain stimuli (because such, evoked, responses would also incur punishment). Blair posits that in the human condition we expose ourselves to aversive conditioning when we hurt others. So, if a young child is in the playground and he hits another child, let us call him ‘Johnny’, then it is likely that Johnny may respond by crying (this constitutes the ‘unconditioned’ stimulus). The perpetrator, if ‘normal’, perceives such a response as aversive, unpleasant; if he is capable of empathy then he might experience an element of distress himself. Under normal conditions, such a course of events may form part of his ‘moral socialization’:
Moral socialization is the term given to the process by which [caregivers], and others, reinforce behaviours that they wish to encourage and punish behaviours that they wish to discourage. Importantly, the unconditioned stimulus (US; the punisher) that best achieves moral socialization as regards instrumental antisocial behaviour is the victim of the transgression's pain and distress; empathy induction, focusing the transgressor's attention on the victim, particularly fosters moral socialization.
(Blair, 2004, p. 202)
Now, Blair posits that psychopathy, especially the application of instrumental violence, is an outcome of failed moral socialization (Mullen, 1992, similarly posits a failure of ‘moral development’). What is central is the notion that the amygdala is abnormal in developing psychopaths. Hence, they will not learn from aversive conditioning (i.e. punishment does not ‘work’). Nor do they develop empathy for their victims (because empathy is also reliant upon amygdala function). Indeed, some of the observational data that have been alluded to in the work of Adrian Raine (above; another British psychologist working in the United States of America), and deployed in Table 22, are consistent with this position: notice how often future offenders (even in childhood) are found to have low heart rates and to be less autonomically responsive to emotional or distressing stimuli. Indeed, such a lack of responsiveness was also detected in Spyder Cystkopf (a case of ‘acquired sociopathy’; above).
Furthermore, there are emerging brain-imaging studies of adult psychopaths that show structural and functional abnormalities of their amygdalae (see Kiehl et al. 2001; Tiihonen et al. 2000). So, if the amygdala is dysfunctional in psychopaths, then this may help to ‘explain’ why they are capable of instrumental violence: they do not ‘feel others’ pain’, they do not learn from punishment, and they may not recognize fear or distress in their victims (they themselves exhibit deficits in facial affect recognition). In essence, violence proceeds (and may be premeditated over variable periods) because there are no ‘brakes’ upon its execution.
In contrast, the forensic relevance of the OFC is specifically connected to its role in modulating ‘reactive’ aggression. Blair stresses that this need not be solely inhibitory in nature, for there are times when aggression or violence is contextually appropriate. What is important is establishing such ‘appropriateness’. He posits two modes of operation at the level of OFC that are likely to impact reactive/impulsive violence:
1. We know from a large literature (parts of which we ‘sampled’ in Chapters 2, 4, and 8) that the OFC is implicated in changing, reversing, or withholding responses to the environment, especially when response contingencies have changed, i.e. when the ‘reward value’ of specific behaviours has altered. Hence, the OFC forms a crucial link between action and expectation. Moreover, if the OFC is involved in monitoring reward values and reward expectations, then it may experience a ‘mismatch’ when a formerly rewarded behaviour is no longer reinforced (by receipt of a rewarding stimulus). Blair (2004) posits that such a mismatch may form the neurobiological/computational equivalent of our phenomenological experience of ‘frustration’. Hence, an organism that has behaved in certain ways in the expectation of a reward may become frustrated, and react aggressively, when such a reward is not forthcoming. We might think of a great many situations, especially those connected with hedonic experiences, where such ‘disappointments’ may precipitate violence (think of the man who rapes a woman with whom he had expected to have consensual sexual intercourse but who, in the course of a ‘date’, received an explicit warning that she would not be consenting after all). Hence, the OFC may be implicated when organisms respond aggressively to ‘not getting their own way’. Notice also, how this takes us back to a distinction that we rehearsed in Chapter 2: that between the motor performance of an act (i.e. its mode and accuracy of ‘execution’) and the meaning or ‘moral value’ of that act. While acts that are abnormal in their execution often implicate the dorsal brain systems (e.g. the abnormalities of parietal and premotor cortices associated with the dyspraxias), actions which are aberrant in their ‘values’, i.e. immoral acts, often seem to implicate medial and ventral brain systems. 
This is something also intimated by Blair (2004): ‘It is unlikely that elevated levels of instrumental [aggressive] behaviour in specific individuals are due to abnormalities in any of the systems for motor behaviour; individuals presenting with heightened levels of instrumental aggression do not show general motor impairment. Instead, it is likely that such individuals show elevated levels of instrumental behaviour because they have been reinforced, and not punished, for committing such behaviour in the past’ ([link]–[link]). Nevertheless, we should not suggest that the aggression that follows such ‘violations of expectations’ is always pathological in nature; there may be situations where this sort of response is ‘necessary’ for ensuring ‘fair play’. When organisms live within communities, when they are situated within reciprocal relationships, there may well be a need to monitor social interactions, social exchange (something that became obvious in Chapter 8, when we examined the human behaviours of deception, ‘free-loading’ and cheating).
2. A further nuance to the OFC's contribution to the regulation of violence, especially in the reactive context, is its relevance to social hierarchical arrangements within primate colonies (something that we also touched upon with respect to OFC and serotonergic systems in Chapter 4). Blair posits that the OFC is involved in a process of ‘social response reversal’ (SRR), essentially a means by which an organism may execute actions that are hierarchically appropriate. Hence, if one is situated low down the social order and one receives a threatening glare from one of one's superiors, then it may be advisable to cease performance of whichever action it was that provoked their censure, i.e. it is time to change one's own behaviour, while also suppressing any signs of reactive aggression. Responding aggressively to a superior primate is likely to elicit further punishment. However, if one is relatively high up within the hierarchy and one receives a glare from a subordinate, then it might be entirely appropriate to emit some form of reactive aggression oneself, in order to ‘keep them in their place’. Such behaviours may be studied and elicited in non-human primates and there seem to be grounds for believing that something rather similar might apply among humans (indeed, this proposition may have immediate resonance with a number of readers who are working within hierarchical organizations). Intriguingly, alcohol and benzodiazepines (drugs such as diazepam) may specifically impact SRR behaviours while serotonergic drugs may impact behavioural change in response to changing reward contingencies (i.e. 1, above), again specifically, suggesting that different neurotransmitter systems may be implicated in modulating different aspects of reactive violence (Blair, 2004).
Indeed, all of this coincides rather well with recent insights into the ‘human condition’ that have arisen from neuroimaging studies exploring the relationship between gender and violence (reviewed by Strüber and colleagues, 2008). It transpires that there is a higher ratio of OFC grey matter to amygdala volume among women compared with men; indeed, both sexes have the same size amygdala but the OFC is larger among women (Gur et al. 2002). Furthermore, other authors have found that there is lower ‘functional connectivity’ (i.e. less temporal correlation across distant neural activities) between the OFC and amygdala in men than in women (during a face-matching task), suggesting that OFC regulation of the amygdala may be weaker among males (Meyer-Lindenberg et al. 2006). Hence, it may be the case that there are organic, biological reasons why men are more often aggressive and violent than women. There may be biological ‘reasons’ for their lesser ability to ‘control’ their reactive/impulsive aggression.
So, to conclude, Blair's (2004) model suggests that reactive violence is primarily a consequence of the OFC's inability to modulate aggressive responding under emotionally charged conditions, while the instrumental violence typical of ‘developmental psychopathy’ is more a consequence of problems connected with the amygdala (which may be demonstrated to be small or hypo-functional among selected groups of psychopathic offenders). Blair suggests that while the reactively violent fail to ‘control themselves’, the instrumentally violent have undergone inadequate ‘moral socialization’. How did the latter situation come about? In theory, it came about because an abnormal amygdala rendered the developing psychopath incapable of appreciating the aversive consequences of their antisocial acts (during their development). They were unable to appreciate (or recognize) the distress of others; hence, they did not learn to moderate their conduct towards them. Indeed, they could not learn from punishment (because they did not respond to aversive stimuli; a position congruent with the lower heart rates and impairments of skin conductance response to stress, observed among the antisocials in Raine et al.'s, 2000, paper). Furthermore, as life proceeds, such children (‘destined’ to become psychopathic) accrue the positive reinforcements associated with rule-breaking behaviour. So, while they cannot be ‘punished’ they, nevertheless, receive the ‘benefits’ that may accrue from ‘breaking the rules’.
This is a compelling model and it serves to explain in neurological terms how interactions between humans may be modulated or not according to the state of very specific brain regions and circuits. It offers up the possibility of further empirical studies (to confirm or disconfirm the central ‘links’ in this causal chain) and it also provides a rationale for discarding punishment as a mode of influencing children carrying such ‘callous’ traits. Nevertheless, a problem remains: what can be done to ameliorate their social, emotional, and moral limitations?
Before we end this chapter it seems appropriate to retrace our steps, to consider what it is that we have found.
We stated at the outset that there were certain important questions that we needed to answer because our answers would serve to highlight the assumptions that we habitually ‘permit’ ourselves to make with regard to the ‘nature’ of human behaviour. Is it chosen or is it caused? Is there a difference between murder and manslaughter? Will moral evil eventually be replaced by natural evil when we have ‘explained away’ all of human misconduct? I think it would be fair to say that these questions remain open, although we have perhaps ‘pushed’ our reader well along the road to ‘determinism’:
1. We may simply accept that all human behaviour is ‘determined’ and hence, conclude that harmful behaviour directed at others is ultimately entirely the product of ‘natural evil(s)’. Furthermore, such natural evil may incorporate variations within genes concerned with neurotransmitter metabolism, aspects of prefrontal cortical or amygdala development, the impact of lesions in the brain, or else aberrant influences located within the experiential and social spheres: being abused, unwanted, exposed to antisocial behavioural patterns exhibited by others, or the ‘natural evils’ constituting alcohol or drugs (see Table 22).
2. Within the context of a natural species, itself the outcome of evolutionary processes, it is only to be expected that different organisms, different subjects, will receive different genetic endowments, and though this may reflect a ‘natural evil’ (in theological terms) it may also simply reflect ‘normal’ (biological/behavioural) variation across the species.
3. Under optimal conditions some human beings may lead what appear to be moral lives, which only become morally aberrant when pathologies supervene (e.g. the brain tumour exhibited by Spyder Cystkopf, or the personality deterioration that may result as a consequence of a frontal lobe dementia or Huntington's disease).
4. However, the characterization of ‘normal’ still depends upon our assumptions about what it is that ‘most people’ do, and what they may be expected to do under certain, specified circumstances.
5. So, might we learn more about human nature's ‘moral baseline’ when the ‘normal’ environment shifts, drastically? Well, we might.
Exposing the normal
There are many examples that might have been chosen to demonstrate how fragile the ‘normal’ state of human morality is. For, while our bio-scientific research paradigm is largely focussed upon those who have become ‘offenders’ within stable, civil societies (Figure 94), there is a whole other area of endeavour occurring within social psychological fields of inquiry and those concerned with recent history that broadens the scope for characterizing ‘normality’. What these lines of inquiry have in common is that they expose the more unpleasant aspects of ‘normal’ people. Here I shall close on only two such examples:
1. What people do when they have ‘permission’ to punish another human being, a stranger, unknown to them;
2. What normal people do when their society descends into barbarism.
The Milgram experiments
The reader of this text may already know of the seminal experiments conducted by the late Stanley Milgram and his colleagues at Yale University during 1961–62, and first published in 1963, concerning the ‘obedience’ of apparently normal people to perceived ‘authority’. These experiments cast a rather dim light upon human conduct: upon what we may expect to happen when we are ‘on our own’ in the presence of authority, when we think that we may punish an unknown stranger without fear of reprisal. Furthermore, this is a body of work that has been replicated by other investigators on many occasions (inside and outside the United States of America; see Blass, 2004).
The central conceit behind Milgram's study was that a sample of ‘normal’, healthy adults was recruited from the local community via a press advertisement inviting them to take part in an experiment concerning memory. People would be paid a reasonable sum of money ‘up front’ for their participation (plus travel expenses) and they would receive their payment irrespective of what transpired during the course of the study.
Each volunteer was studied over an hour, on their own. However, they were deliberately given the impression that they were one of two volunteers meeting with an experimenter for the first time. In fact, both the ‘experimenter’ and the ‘other volunteer’ were members of the study team (i.e. they were Milgram's confederates).
The experimenter was dressed in a grey lab coat. This was the result of a specific decision: Milgram had not wanted to use a white coat as this might have conveyed the air of a medical experiment. The experimenter read from a precise script and, although the ‘real’ volunteer did not know it, the events that were about to unfold had already been rehearsed, scripted, and certain vocal responses taped (below).
Early in the process, the two ‘volunteers’ were given envelopes indicating which of them would play the part of a ‘teacher’ and which the part of a ‘learner’. In fact, the selection was predetermined and the experimenter's confederate always played the part of the learner. Hence, the ‘real’ volunteer thought that they had been selected to play the ‘teacher’ purely by chance.
What transpired then was that the learner went into another room, from which he was able to converse, via a microphone, with the experimenter and the teacher (i.e. the ‘real’ subject of the experiment). The learner would subsequently have to learn (apparently) new material (word lists), and provide correct answers when asked to recall items. If his answers were incorrect then he was to receive an electric shock, apparently dispensed via an apparatus that was in the same room as the experimenter and the teacher. It was the ‘teacher’, i.e. the ‘real’ volunteer, who was called upon to administer such ‘shocks’ (in fact, no shocks were dispensed though the teacher did not know this).
The apparatus that Milgram had had constructed for the experiment appeared to be a convincing metal box with switches, dials, and flashing lights. It allowed the (apparent) administration of ascending doses of electricity, and carried the following gradations:
- ‘SLIGHT SHOCK
- MODERATE SHOCK
- STRONG SHOCK
- VERY STRONG SHOCK
- INTENSE SHOCK
- EXTREME INTENSITY SHOCK
- DANGER: SEVERE SHOCK’ (Blass, 2004, p. 79).
Two final switches were simply labelled ‘XXX’.
As the reader has probably anticipated by now, the real purpose of the experiment was to see how far the ‘real subject’, i.e. the ‘teacher’, would allow the experiment to proceed before they ceased to participate, before they terminated their compliance. For, as the ‘learner’ (apparently) made more and more errors, it was they (the ‘teacher’) who were called upon to administer the electricity, in response to instructions from the ‘experimenter’. This remained their apparent remit even after the learner had begun to express his (apparent) distress, becoming increasingly vocal, progressing onwards through shouts and screams, until an ominous silence ensued.
So, when would a ‘normal’ person stop electrocuting a stranger, someone taking part in a behavioural experiment, someone who could quite easily have ‘been them’ (apparently), on the basis of a chance dealing of envelopes at the very beginning of an experiment?
Well, 65% of those taking part in the experiment (the initial sample comprised 40 people) would have continued electrocuting the subject until he apparently collapsed or died. Indeed, even among those who finally objected, none had done so before reaching the ‘INTENSE SHOCK’ level (Milgram, 1963).
In subsequent experiments, conducted inside and outside of the United States of America (by 19 groups of investigators between 1967 and 1985) the average percentages of subjects who would have gone on electrocuting their ‘learners’ ‘until the end’ have been 60.94% (among USA samples) and 65.94% (among other samples including those from Italy, South Africa, Germany, Jordan, and Austria; see Blass, 2004), respectively.
What does this tell us about ‘normal’ human behaviour? Well, it provides us with empirical evidence that under the ‘right’ (i.e. wrong) conditions approximately two-thirds of ‘normal’ humans would severely harm a total stranger, in apparent obedience to ‘authority’. When they are granted permission to punish, most subjects keep going. Any moral superiority that one might feel with regard to the ‘abnormal’ antisocials we have described (above) would, therefore, appear rather premature. Much depends upon context, upon ‘situation’ (at least according to Milgram).
Hence, it may come as little surprise that we should turn our attention to non-experimental settings wherein ‘normal’ people have done terrible things to their neighbours, when the situation has allowed them to do so. I shall not labour the point at length but I shall draw on a recent, key text by James Waller (2002).
Waller is a social psychologist who has examined the gross destruction of human life occurring in the context of genocide, mass killing, and ‘ethnic cleansing’ (Waller, 2002). He provides examples from many theatres of cruelty: the concentration camp at Mauthausen, the massacre at Sand Creek in 1864, the Armenian genocide, the massacre at Babi Yar, the invasion of Dili (East Timor), the Tonle Sap massacre (in Cambodia), the ‘death of a Guatemalan village’, the church of Ntamara (in Rwanda), and the killings in the ‘safe area’ of Srebrenica (in former Yugoslavia). Waller is explicit in his conclusion that there is very little that is abnormal about those who commit such atrocities; indeed, when psychologists and psychiatrists have examined certain high-profile offenders (such as the Nazi defendants at Nuremberg) they have found little to comment upon (as we might have anticipated: see Figure 94). Waller's conclusion is that the propensity for extreme cruelty is a common human attribute, one that is found in all settings and is revealed by the dissolution of normal societal constraints.
In such situations, personal ambition, prejudice, and complicity, ‘rational self-interest’, allow ‘ordinary people’ to ‘turn a blind eye’ or else to join in:
I believe that there are acts so vile that our task is to reject and prevent them, not to try to understand them empathetically.
(Waller, 2002, p. 15)
‘First the girl had to dig out a hole in the field, while her mother, who was 7 months pregnant, had to watch while chained to a tree. They slit open the stomach of the pregnant woman, ripped out the unborn child and threw it into the hole in the ground. Then they threw the woman in as well and the small girl too, after they had raped her first. She was still living when they covered the hole.’ This is an eyewitness account by one Angela Hudurovic, recounting how the Ustashi militia (a Croatian nationalist group) tortured her sister and mother to death during the Second World War (see Rose, 1995, ‘text 52’, p. 119).
Perhaps all that one can conclude from such material is that there is such a thing as ‘latent morality’: we simply do not know what ‘normal’ humans are capable of while we encounter them only within relatively safe, predictable, stable cultural milieux. The base state of human morality is not necessarily apparent under stable conditions, in universities and labs. Terrible as it may seem, the true ‘baseline’ in human moral conduct is most likely revealed when humans enjoy power over others, when they outnumber their victims, when they may literally ‘get away with murder’.
Furthermore, if we take the work of Milgram and others to be accurate (and valid), then we must conclude that when we look into our mirror we encounter the face of someone who is at some considerable risk of ‘latent immorality’. For, on strictly statistical grounds, each of us carries within us a greater than 50% probability that we would actually torture someone else, were a permissive situation to arise; were an ‘authority’ figure to grant us ‘permission’ to do so. Fortunately, civil societies usually spare us this realization. Perhaps they ‘protect’ us from ourselves.