In his article "Animal Suffering" and in multiple debates on related topics, WLC has made the following argument:
In his book Nature Red in Tooth and Claw, Michael Murray explains on the basis of neurological studies that there is an ascending three-fold hierarchy of pain awareness in nature [1]:
Level 3: Awareness that one is oneself in pain
Level 2: Mental states of pain
Level 1: Aversive reaction to noxious stimuli
...
Level 3 is a higher-order awareness that one is oneself experiencing a Level 2 state. Your friend asks, "How could an animal not be aware of their suffering if they're yelping/screaming out of pain?" Brain studies supply the remarkable answer. Neurological research indicates that there are two independent neural pathways associated with the experience of pain. The one pathway is involved in producing Level 2 mental states of being in pain. But there is an independent neural pathway that is associated with being aware that one is oneself in a Level 2 state. And this second neural pathway is apparently a very late evolutionary development which only emerges in the higher primates, including man. Other animals lack the neural pathways for having the experience of Level 3 pain awareness. So even though animals like zebras and giraffes, for example, experience pain when attacked by a lion, they really aren't aware of it.

The argument essentially accepts that animals feel pain, but it goes on to make two more problematic assertions: a) that animals lack a meta-awareness of pain (which WLC seems to define as "suffering"), and b) that this meta-awareness requires a pre-frontal cortex. What this boils down to is the suggestion that animals cannot reflect on, or understand, the sensations of pain that they have, and that this capacity is a condition required for "suffering".
NEUROBOLLOCKS
To deal with the two issues mentioned above a little out of order, there is a great video online that tackles the claimed neuroscientific basis of self-awareness here:
The video interviews a number of scientists but, in my opinion, the best responses come from Bruce Hood and Lori Marino. Both expertly dismiss the possibility that something as complex as self-awareness could be localised in such a specific part of the brain, rather than requiring a more global effort from numerous brain structures. Importantly, as Marino points out, this does not mean that there are no structures intimately related to self-awareness in animals, as there are cases where damage to certain parts of the brain has impaired self-awareness abilities in humans. What it does mean, however, is that limiting self-awareness to a single area of the brain is necessarily wrong. And this doesn't even take into account the second argument raised by Marino: that convergent functions can come about utilising different, yet analogous, brain structures. As the example given in the video suggests, it is as wrong to say that animals can't be self-aware because they lack a pre-frontal cortex as it is to say that a balloon can't fly because it has no wings.
Given that no neuroscientist or behavioral researcher accepts the claim that self-awareness is limited to the pre-frontal cortex, why would well-educated and presumably intelligent men like WLC and Murray suggest such a thing? The video above attributes this to "The Seductive Allure of Neuroscience Explanations" [2], a paper which looks at whether irrelevant neuroscientific terms added to an explanation affect how satisfying a nonexpert finds that explanation. As we would expect from our own experience with advertising, adding jargon to a sales pitch (in this case, a neuroscientific gloss on an argument) increases how well that argument is received. This line of reasoning seems rather apt when applied to WLC, given that he is perhaps recognised more for his debating tactics than for the quality of his arguments.
"LEVEL 3 PAIN": META-AWARENESS
The strongest part of WLC's argument is where he questions whether other animals experience this higher-level comprehension of pain that we humans have. To be extra clear, however, whilst this is the strongest part of his argument, it is not a strong argument, and he is, once again, wrong. The problem WLC faces here is the unarguable fact that animals have passed a number of tests designed to assess self-awareness, the most popular of which is the "mirror test" [3].
The basic setup and rationale of the mirror test are simple: we attach something like a red dot or a sticker to the forehead of an animal, place the animal in front of a mirror, and watch how it reacts. In humans, the general response is to touch our foreheads, because we understand that the image we are viewing is ourselves and that the red dot on the forehead of the mirror-human is actually on our own forehead. The test is imperfect, though, and does not guarantee the identification of a self-aware animal, as it makes a number of assumptions that aren't necessarily true. For example, we assume that a self-aware animal, upon seeing a red dot on its forehead, will feel the need to touch it, and this might not be the case. But the test does give us reason to suspect that the animals that do pass it are self-aware, and so any animal that passes the test would negate WLC's claims.
So which animals pass the test? As we would expect, most of the great apes are great at this [4], as are dolphins [5], elephants [6], magpies [7] and so on. There are valid questions over how the test should be interpreted. For example, Epstein, Lanza, and Skinner [8] question the assumption that the awareness demonstrated by passing the mirror test is innate: they simply trained pigeons to use a mirror and found that, after learning to identify themselves in it, the birds could successfully find a red dot placed on their bodies. But even considering most of the criticisms leveled at the mirror test (a good discussion of some further issues can be found here), we can still be confident in concluding that some animals appear to demonstrate the self-awareness needed to experience what WLC defines as "suffering" - and some even have that "rare" pre-frontal cortex.
REFERENCES:
1. Murray, M., (2008) "Nature Red in Tooth and Claw: Theism and the Problem of Animal Suffering", Oxford: Oxford University Press.
2. Weisberg, D.S., Keil, F.C., Goodstein, J., Rawson, E., & Gray, J.R. (2008). The seductive allure of neuroscience explanations. Journal of Cognitive Neuroscience, 20, 470-477.
4. Jason, M. (2009). "Minding the Animals: Ethology and the Obsolescence of Left Humanism". American Chronicle. Retrieved 05-10-2012.
5. Marten, K. & Psarakos, S. (1995). "Evidence of self-awareness in the bottlenose dolphin (Tursiops truncatus)". In Parker, S.T., Mitchell, R. & Boccia, M.. Self-awareness in Animals and Humans: Developmental Perspectives. Cambridge University Press. pp. 361–379.
6. Plotnik, J. M., de Waal, F., & Reiss, D. (2006) Self-recognition in an Asian elephant. Proceedings of the National Academy of Sciences, 103(45):17053–17057.
7. Prior, H., Schwarz, A., Güntürkün, O., De Waal, F., (2008). "Mirror-Induced Behavior in the Magpie (Pica pica): Evidence of Self-Recognition". PLoS Biology (Public Library of Science), 6 (8): e202.
8. Epstein, R., Lanza, R., & Skinner, B. F. (1981). "'Self-awareness' in the pigeon". Science, 212 (4495): 695–696.
This is a pretty big problem, and it leads to all sorts of philosophical rabbit holes. I am familiar with the argument that suffering requires meta-awareness (a prior form of the argument, popular for decades, was merely that it required "memory"). However, I have never had anyone successfully explain why we would ethically care about inflicting "suffering", but not ethically care about "pain". Given that the two words are fairly interchangeable in plain English, it is not clear to me which technical definition we would care about when facing a particular ethical conundrum.
If you want a cool behaviorist spin on some of this, check out Nick Thompson's chapter in my book on Holt. It is Chapter 10: Interview with an Old New Realist. There is a bit about whether it is possible to design robots that feel pain, etc. Also... perhaps of relevance... a paper on a very radical behaviorist take on emotion and feelings was just released here ;-)
"However, I have never had anyone successfully explain why we would ethically care about inflicting "suffering", but not ethically care about "pain". Given that the two words are fairly interchangeable in plain English, it is not clear to me which technical definition we would care about when facing a particular ethical conundrum."
Yes, this is the major problem I see with the claims of WLC and Michael Murray. Even if we grant the truth of every other part of the argument (that a PFC is necessary for self-awareness, that animals don't have self-awareness, etc.), we're still left with this logical gulf between the premises and the proposed conclusion. Even if animals could only experience pain, this would make no functional or practical difference to the issues of animal welfare.
Thanks for the suggestion on your book chapter - I saw your latest post about its release but haven't gotten around to checking it out. And that article looks incredibly interesting too, so I'll definitely give it a read!