Monday 17 December 2012

Robert Sapolsky and B.F. Skinner Discuss Behaviorism



There is an interesting video on YouTube at the moment in which the uploader has spliced together snippets from one of Sapolsky's lectures with clips of Skinner discussing the same topics and claims. The brilliance of it is the seamless juxtaposition of contradictory claims on a single topic - behaviorism.

The video isn't too long, and I think it does a great job of highlighting some of Skinner's character as well, rather than presenting the dry, matter-of-fact scientist who is usually seen discussing only reinforcement schedules and pigeons.

Enjoy!

Saturday 8 December 2012

Debunking Evolutionary Psychology

I wanted to discuss this topic because there have recently been a few disgruntled comments about Rebecca Watson's talk at the Skepticon conference, "How Girls Evolved to Shop", in which she brings up a number of dodgy claims made by researchers in the field, as well as how those claims are presented in the media. The main focus of her discussion, from my point of view at least, was that we should be skeptical of the main assumptions of evolutionary psychology and question how science is presented in the media; she also gave a good discussion of the effect that sexist and misogynistic attitudes have on the direction of some research in the field. Some of the criticisms of Watson are just plain silly, like the idea that since she is not an evolutionary psychologist she should just remain quiet on the topic (or "Shut Up and Sing", as P.Z. Myers puts it), but some arguably carry a little more substance.

An argument which is potentially more troubling is one presented by Ed Clint here, which suggests that Watson's talk was an attack on the entire field of evolutionary psychology and thus an example of science denialism. This characterisation of her position seems unfair to me, given that she appeared to be attacking the bad science rather than the entire field, but I thought it might be a good idea to discuss why the field of evolutionary psychology is often dismissed and what distinguishes the good science from the bad.
"The latest deadweight dragging us (evolutionary biology) closer to phrenology is evolutionary psychology, or the science formerly known as sociobiology. If evolutionary biology is a soft science, then evolutionary psychology is its flabby underbelly." - Jerry Coyne1.
Given the somewhat controversial title of this essay, it is perhaps necessary for me to preface it with a few disclaimers. Firstly, I am not a creationist and, for all intents and purposes, evolution is True™. Secondly, whenever somebody voices their skepticism over the veracity of evolutionary psychology, they are often met with the retort, "Do you not believe that the brain is a product of evolution?", with the implication that since behaviors are the product of the brain, and the brain is a product of evolution, then behaviors are the product of evolution. This logic is flawed for reasons I will discuss later, but I do accept that the brain is an evolved organ, with implications for resulting behaviors. And thirdly, this is not a broad-scale attack on evolutionary psychology - instead, my focus is on the particular approach to evolutionary psychology known as the "Santa Barbara church of psychology"2.

To distinguish between the two approaches, I will follow the nomenclature used by Gray, Heaney and Fairhall3, who refer to this approach as Evolutionary Psychology (EP). This approach (used by popular authors like Steven Pinker in his "How the Mind Works") attempts to explain a wide range of human behaviors, like whether we have an evolved preference for green lawns, with an emphasis on the concept of a modular mind, and it relies on a cartoonish view of the Pleistocene - all things considered, we have to wonder whether it should be rebranded as the "Hanna-Barbera church of psychology".

Thursday 29 November 2012

Gameswithwords - Participate in Language Research


Gameswithwords is a great site that has been set up and run by a number of researchers who are studying aspects of language. In order to generate larger sample sizes, they realised that they could recruit more participants by making their experiments accessible online rather than having to put up ads on noticeboards and convince people to come into universities to partake in the study. 

The tasks don't take much time to complete, and they make the results available on their blog once the data has been collated and interpreted. They also give you the option of being emailed the results, so for those who are interested in research and want first-hand experience of what it's like to take part, here's your chance!

Check out some of the studies and feel free to post any thoughts or feedback here.

Thursday 22 November 2012

Misunderstanding Behaviorism


Despite the fact that the title of my blog alludes to misunderstandings of behaviorism in popular thought, I've put off writing an article that elucidates and corrects these misconceptions. The reasons for this delay are varied, but the main one is probably a sense of fatigue: I've engaged people in this discussion many times over the years and it rarely seems to change any opinions. However, recent instances of banging my head against a wall have reinvigorated my interest in the topic.

A BRIEF BACKGROUND

Behaviorism is the philosophy of science underpinning behavioral psychology, and it has taken on numerous forms over the space of a century - all of which appear to have been misunderstood or misrepresented to some degree. Arguably, the grandfather of behaviorism was Ivan Pavlov, and whilst his name may not be immediately recognisable to everyone, it likely rings a bell - fittingly, since his work is where that phrase comes from. As a physiologist studying the salivary reflexes of dogs, Pavlov noticed that his subjects had begun to salivate even before the food had been presented. Using pre-CSI investigative techniques, he reasoned that the sounds of the researchers' footsteps as they brought the food down the hallway had somehow become associated or paired with the food. To test this experimentally, he set up conditions where he would ring a bell immediately before feeding the dogs, initially pairing the two stimuli together, and later presented the bell without the food. He found that the bell alone was enough to produce salivation in the dogs, demonstrating the process now known as "classical conditioning"1.

A psychologist by the name of John B. Watson (who was studying animal instincts at the time) heard of the work of Pavlov and pursued it further, eventually creating what was referred to as "stimulus-response" psychology - otherwise known as methodological behaviorism. In 1913 he wrote a paper called "Psychology as the Behaviorist Views It"2, informally known as the "Behaviorist Manifesto", and it is in this article that Watson attempts to separate psychology from its philosophical roots in order to push it, willingly or not, into the realm of science. To do so, he argued that a science of psychology must be objective, with no recourse to internal states that can only be discovered through introspection, thus rejecting the approaches of people like William James before him. He suggested that the future of psychology lay in understanding our relation to the environment and how our behavior is affected by various stimulus-response relations - a view which culminated in his 1930 book, simply titled "Behaviorism"3. It is here that the misunderstanding of behaviorism began.

Sunday 11 November 2012

The Mind-Body Problem in Science

For the philosophers out there who had an aneurysm upon reading the title, just bear with me for a minute. Instead of attempting to tackle dualism using science (and thus invoking scientism to a degree that would make Sam Harris proud), I want to focus on how naive assumptions about the interaction between mind and body can give rise to fallacious reasoning - particularly in interpretations of neuroscientific research. In other words, this is mostly going to be a rehash of articles like "Your Brain on Pseudoscience" and "The Rise of Popular Neurobollocks", along with my favourite of this genre of cranky-skeptical diatribes, an article written by Massimo Pigliucci called "The Mismeasure of Neuroscience".

Massimo describes the fundamental problem quite succinctly here:
Let’s begin with what exactly follows from studies showing that X has been demonstrated to have a neural correlate (where X can be moral decision making, political leanings, sexual habits, or consciousness itself). The refrain one often hears when these studies are published is that neuroscientists have “explained” X, a conclusion that is presented more like the explaining away (philosophically, the elimination) of X. You think you are making an ethical decision? Ah!, but that’s just the orbital and medial sectors of the prefrontal cortex and the superior temporal sulcus region of your brain in action. You think you are having a spiritual experience while engaging in deep prayer or meditation? Silly you, that’s just the combined action of your right medial orbitofrontal cortex, right middle temporal cortex, right inferior and superior parietal lobules, right caudate, left medial prefrontal cortex, left anterior cingulate cortex, left inferior parietal lobule, left insula, left caudate, and left brainstem (did I leave anything out?). 
I could keep going, but I think you get the point. The fact is, of course, that anything at all which we experience, whether it does or does not have causal determinants in the outside world, has to be experienced through our brains. Which means that you will find neural correlates for literally everything that human beings do or think. Because that’s what the brain is for: to do stuff and think about stuff.
What he is describing here is a phenomenon known as the 'reverse inference fallacy', which is a specific instance of "affirming the consequent" in logic. The traditional application (or misapplication) of reverse inference is described by Poldrack1, who presents the argument as:

  1. In previous studies, when cognitive process X was assumed to be involved, brain area Z was activated
  2. In the current study, when task A was presented, brain area Z was activated
  3. Therefore, activation of brain area Z in the current study demonstrates the involvement of cognitive process X during task A.

This can also be presented as such:

  1. If P then Q
  2. Q
  3. Therefore, P.

The fallacious nature of the reasoning can be highlighted by inserting any everyday relationship, for example: "If it is raining, then I have an umbrella. I have an umbrella. Therefore, it is raining". This argument is obviously invalid, as we can think of a number of situations where (accepting the initial if-then premise) I could have an umbrella without it raining - perhaps my old one broke and I have just purchased a new one at a store, or maybe I'm on my way to a fancy dress party where I have donned my infamous Mary Poppins costume.
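For the programmatically inclined, the invalidity can be checked mechanically. Below is a minimal truth-table sketch in Python (entirely my own illustration; the function names are not from any cited source):

```python
from itertools import product

def implies(p, q):
    # Material conditional: "if p then q" is false only when p is true and q is false.
    return (not p) or q

# The form "if P then Q; Q; therefore P" is valid only if no assignment of
# truth values makes both premises true while the conclusion is false.
counterexamples = [(p, q) for p, q in product([True, False], repeat=2)
                   if implies(p, q) and q and not p]

print(counterexamples)  # [(False, True)] - an umbrella without rain, so the form is invalid
```

The single counterexample row (P false, Q true) is exactly the Mary Poppins scenario: both premises hold while the conclusion fails.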

It's important to keep in mind, however, that just because the logic is fallacious, it does not mean that the conclusion is necessarily false. So even though there are situations where I could have an umbrella without rain, it could still happen to be raining when the argument is used. Or, in Poldrack's example, even though it does not follow that the activation of brain area Z must implicate the involvement of cognitive process X, it could be true that cognitive process X actually is involved. I'm not sure if this coincidentally correct conclusion has a fancy Latin name, but I liken it to the saying that a broken clock is right twice a day; the fact that the clock is broken does not justify the claim that the time it reads is definitely false, but it does justify our skepticism over the process by which it arrives at the correct time twice a day.
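Poldrack's point can also be put probabilistically rather than logically: how much the activation of area Z should raise our confidence in process X depends on how selective Z is. Here is a toy Bayesian calculation (all numbers invented purely for illustration):

```python
def posterior(p_z_given_x, p_z_given_not_x, prior_x):
    """P(cognitive process X | activation of area Z), via Bayes' rule."""
    numerator = p_z_given_x * prior_x
    return numerator / (numerator + p_z_given_not_x * (1 - prior_x))

# All numbers are made up for the sake of the example.
# An unselective region that activates in most tasks tells us very little:
print(round(posterior(0.8, 0.6, prior_x=0.5), 2))   # 0.57
# A highly selective region makes the same observation far more informative:
print(round(posterior(0.8, 0.05, prior_x=0.5), 2))  # 0.94
```

In other words, a reverse inference isn't automatically worthless; it is weak or strong in proportion to how exclusively the brain area in question is tied to the cognitive process.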

Tuesday 6 November 2012

The Sunk Cost Effect


Ah, the Concorde; the joint development program of the British and French governments that pushed ahead even after it became clear the project would never be economically viable. It was designed to be a passenger aircraft capable of supersonic flight, but its lasting legacy resides mostly in game theory, where it has been adopted as a description of irrational behavior - the Concorde fallacy. More generally, the process behind the fallacy is known as the sunk cost effect.

As the Concorde example suggests, the problematic behavior in question is a person continuing to engage in a behavior because of their initial investment, even though the payoff is no longer available. In common parlance, this is a failure to "know when to cut your losses"; or, as a famous philosopher once remarked, "You got to know when to hold 'em, know when to fold 'em, know when to walk away and know when to run". It was either Descartes or Kenny Rogers, I can never remember.

It is mostly of interest to researchers because these behaviors violate our optimality predictions: instead of engaging in behaviors which maximise returns, there seems to be a consistent deviation towards sub-optimal responding. Initially it was believed to be an irrational approach unique to humans (and perhaps even limited to adult humans), which led to the hypothesis that the phenomenon was a product of higher-order thinking - specifically, the overgeneralisation of a rule like "Don't waste"1. Recent research, however, suggests that this might not be true2, 3.
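To make the optimality prediction concrete, here is a toy sketch (my own illustration, with invented numbers) of why a sunk investment should drop out of a rational decision:

```python
# Toy illustration of why sunk costs should not enter a rational choice.
sunk_investment = 100.0  # already spent; unrecoverable on either branch

def net_future_value(future_payoff, future_cost):
    # A rational agent compares only the payoffs and costs still to come.
    return future_payoff - future_cost

options = {
    "continue": net_future_value(future_payoff=40.0, future_cost=60.0),  # -20
    "abandon":  net_future_value(future_payoff=0.0, future_cost=0.0),    #   0
}

# The sunk 100 is lost either way, so it cancels out of the comparison and
# abandoning wins; the sunk cost effect is choosing "continue" anyway.
print(max(options, key=options.get))  # abandon
```

The deviation researchers observe is that humans (and, as the study below shows, other animals) often behave as though the 100 already spent belongs in the calculation.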

For example, Kacelnik and Marsh4 looked at the preferences of starlings in a two-phase task where they initially had to respond on two possible schedules - a high effort schedule (flying 16 times over a 1m distance) and a low effort schedule (flying 4 times over a 1m distance) - that were signaled by different colours. In the second phase, the two alternatives had the same effort requirement, but the researchers found that the subjects would consistently prefer the alternative that had the same colour as the high effort schedule. The results were interpreted in terms of the sunk cost fallacy: the level of investment involved with the high effort schedule produced a greater perceived value of that alternative.

Saturday 27 October 2012

Even bees suffer from the Monday blues

Some research questions and experimental designs just leave us in awe of the incredible minds that thought them up, like the creation of CERN to see what happens when tiny particles are fired at each other at incredible speeds. Others leave us wondering if scientists aren't just little boys that never grew up, who, instead of pulling the wings off flies or burning ants with a magnifying glass, keep honeybees awake so that they become cranky and have difficulty learning how to navigate a maze. Interestingly enough, this entirely random preamble helps me segue into an interesting study I read today: Honeybees consolidate navigation memory during sleep:
ABSTRACT: Sleep is known to support memory consolidation in animals, including humans. Here we ask whether consolidation of novel navigation memory in honeybees depends on sleep. Foragers were exposed to a forced navigation task in which they learned to home more efficiently from an unexpected release site by acquiring navigational memory during the successful homing flight. This task was quantified using harmonic radar tracking and applied to bees that were equipped with a radio frequency identification device (RFID). The RFID was used to record their outbound and inbound flights and continuously monitor their behavior inside the colony, including their rest during the day and sleep at night. Bees marked with the RFID behaved normally inside and outside the hive. Bees slept longer during the night following forced navigation tasks, but foraging flights of different lengths did not lead to different rest times during the day or total sleep time during the night. Sleep deprivation before the forced navigation task did not alter learning and memory acquired during the task. However, sleep deprivation during the night after forced navigation learning reduced the probability of returning successfully to the hive from the same release site. It is concluded that consolidation of novel navigation memory is facilitated by night sleep in bees.
They use fancy technical words to try to distract us from their obviously evil intentions to drive honeybees crazy through sleep-deprivation but, in their defence, instead of blasting Bruce Springsteen's "Born in the USA" for hours and hours on repeat, they simply placed them on a machine that would shake the colony on a regular basis to prevent a restful sleep. So they weren't absolute monsters.

Sunday 21 October 2012

The Unpredictability of Humans

There is a common belief among the general public that humans are unpredictable. This seems to stem from the intuitive understanding that, at any point, we could simply choose to behave in a completely different way - so how could such a thing possibly be predicted? In contrast to this, I like to think of humans as meaty, irregular-shaped billiard balls.

This extends the billiard ball construct that is often used to characterise and demonstrate principles of physics, but adds the component of irregularity (the "meaty" part is just for artistic effect). The importance of this distinction is that it captures the illusion of unpredictability: a regular billiard ball is said to be predictable as it travels in a way that is consistent with the direction of the initial force acting upon it, whereas an irregular-shaped billiard ball will appear to almost have "a mind of its own" as multiple forces and impacts drive it in various directions. To illustrate this, I will appeal to my dog's favourite toy:
(For any hardcore behaviorists, I recommend you turn away now as I am about to engage in some mild anthropomorphism).

My dog loves this ball (currently sans a number of nodules that have been gnawed off) and I believe it is because, unlike a tennis ball, it bounces in an unpredictable way when thrown. Sometimes it just bounces forwards like a tennis ball, but often it will swerve wildly to the left or right, and then bounce off in some other direction after making contact with the floor again. I can't say for sure, but it seems to me that this captivates my dog because it almost mimics the unpredictability of living creatures (which, incidentally, seem to be the only other things that can hold his attention for any extended period of time).

HUMAN NODULES

My above analogy was a rather roundabout way of distinguishing between true unpredictability and pseudo-unpredictability. The former refers to aspects of, or agents in, the world which cannot be predicted due to some inherent stochastic component, whereas the latter refers to things which only appear to be unpredictable due to our ignorance of the details of the situation. In the case of the irregular-shaped billiard ball, the "unpredictability" comes about because we do not have direct access to the variables affecting its behavior at any point in time; that is, we don't know which nodule is being acted upon, at what angle, or with what force, and this prevents us from making simple predictions like "if I hit the ball at this angle, it will travel in this direction".
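To see how hidden variables can masquerade as randomness, consider a toy deterministic model (entirely invented; it has nothing to do with real ball physics):

```python
import math

def bounce_direction(strike_angle_deg, hidden_nodule_deg):
    """Toy deterministic model: the rebound depends on an unseen nodule angle."""
    deflection = 45.0 * math.sin(math.radians(7.0 * hidden_nodule_deg))
    return (strike_angle_deg + deflection) % 360.0

# To an observer who sees only the strike angle, identical throws look erratic...
for hidden in [12.0, 97.0, 203.0, 318.0]:
    print(round(bounce_direction(30.0, hidden), 1))

# ...yet nothing here is stochastic: given the hidden variable, every bounce
# is perfectly reproducible.
print(bounce_direction(30.0, 97.0) == bounce_direction(30.0, 97.0))  # True
```

The "randomness" lives entirely in our ignorance of the hidden nodule variable, which is the sense in which the ball (and, I will argue, the human) is only pseudo-unpredictable.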

So what are these 'nodules' on humans? As you might have expected, humans have slightly more nodules than the ball my dog loves, and these nodules are composed of far more intricate substances than rubber. They are the complex interactions between our genetics and environment, our histories and current situations, our perceptions and reality, and so on. When we are pushed by a force, we are not simply thrust in one direction; instead we are pushed into another force (e.g. our genetics) which deflects us in a new direction, crashing into yet another force (e.g. our reinforcement history) which deflects us again, ad infinitum.

Tuesday 16 October 2012

Are science and naturalism compatible?

This may seem like an odd question to ask, especially given that the usual argument of compatibility is between science and religion, but it has recently been posed by Christian apologist Alvin Plantinga in his book, "Where the Conflict Really Lies: Science, Religion, and Naturalism", and in more detail in this lecture:



Religious apologists are known for their ridiculous arguments, especially when venturing into discussions on science, but it is also wise for us to consider the fact that people like Plantinga and Craig are not stupid men; they are well-educated and often have impressive philosophical and logical skills. It is for this reason that when I read this quote:
“There is indeed a science/religion conflict, all right, but it is not between science and theistic religion: it is between science and naturalism. That’s where the conflict really lies”
I decided to try to deduce what possible rational line of thinking could lead to such a conclusion. For those who are not sure why this would be a particularly strange claim, it might help to look at one of the main assumptions of science: methodological naturalism.

NATURALISM AND SCIENCE

This combination of terms is sometimes rejected as mere "navel-gazing" by people who enjoy the more practical benefits of science rather than analysing its philosophical foundations, but when we look at what the concept actually means, we find that it isn't very controversial at all. The "naturalism" part refers to the type of things we study; that is, we study things which are observable, measurable, repeatable, and so on. The "methodological" part contrasts it with a metaphysical position: since metaphysics is the study of what is "real", a methodological position is one that simply assumes naturalism is true for pragmatic reasons, rather than claiming it is absolutely true. To put it most simply: methodological naturalism is the claim that no matter what is "real" or "true", science should just assume the world is observable and measurable, and ignore anything that doesn't fall within that category, because that is what gives us meaningful results.

This is where my first possible explanation for Plantinga's claim came from: maybe Plantinga was conflating methodological naturalism with metaphysical naturalism. It would be a valid argument to claim that science is incompatible with metaphysical naturalism, as metaphysical naturalism makes claims beyond what science can demonstrate or support. For example, no scientific experiment could be devised to support the claim that the world is naturalistic rather than dualistic (the idea that reality is composed of two distinct substances, mind and matter); instead, we have to rely on logical arguments to dismiss ideas such as the brain being an antenna rather than an organ that produces thoughts. Reading through his arguments, though, this is not the argument he is making.

This led me to consider a second possible explanation for his claims: maybe Plantinga was conflating the natural/supernatural distinction used in science and philosophy with the distinction that is often used in common language. Unlike the first possibility, this would not constitute a strong argument; however, it would be a reasonable mistake to make, given that there is still a fair amount of debate and confusion over what the terms 'natural' and 'supernatural' refer to. As I mention above, the 'natural' is generally agreed to be that which is observable, measurable, and repeatable, and the 'supernatural' is thus its opposite (the unobservable, immeasurable, and unrepeatable). This is not how the terms are treated in common usage, though, where 'supernatural' has come to take on the meaning of 'wacky' or 'magical'. What this means is that the judgement of what is or is not supernatural is sometimes made before considering how the concept is formulated, so things like ghosts, psychic abilities, gods, and so on are simply declared to be supernatural. This isn't necessarily the case, though, as psychic phenomena of the kind that Daryl Bem searches for1 are most certainly "natural". So this would be a reasonable, yet incorrect, argument - but again, it is not the argument he is making.

Saturday 13 October 2012

Let's agree to disagree....


The phrase "let's agree to disagree" often occurs in everyday conversations as a way of communicating the notion that the discussion has reached an impasse; a point where the two debaters have proposed two incommensurable ideas that no amount of further discussion could overcome. But often, especially in discussions on science between opponents and detractors, this phrase is used in an attempt to conflate opinion with fact and philosopher Michael Stokes has written a good article on the topic: "No, you're not entitled to your opinion". He starts the entry with a speech he gives to his first year philosophy students:
“I’m sure you’ve heard the expression ‘everyone is entitled to their opinion.’ Perhaps you’ve even said it yourself, maybe to head off an argument or bring one to a close. Well, as soon as you walk into this room, it’s no longer true. You are not entitled to your opinion. You are only entitled to what you can argue for.”
Stokes goes on to highlight the relevance of Plato's distinction between "opinion" (or common belief) and "knowledge" to this popular equivocation, with the former expressing uncertain claims and the latter representing claims which are certain. For example, subjective beliefs like, "Red is a prettier colour than blue!" or "Nirvana are way better than the Foo Fighters!" are uncertain and are essentially just a matter of taste or preference. They are not claims which could really be proved one way or the other. However, claims like, "All unmarried men are bachelors" and "There are no square circles" are certain and are not claims that can be reasonably questioned or attributed to subjective preference.


Anyone who has argued with people on "controversial" topics like evolution, climate change, or vaccines causing autism will recognise the tactic whereby presenting the scientific consensus on a position is dismissed as just "your opinion", accompanied by the demand that you accept their opinion as equally valid. This is, more or less, the entire basis of the creationists' "Teach the Controversy" movement.

What this all means is that when people try to tell you that, for example, the fact of evolution is just your "opinion", or they try to weasel out of a discussion by suggesting that you "agree to disagree" (as if the subject you're debating is something you simply choose to 'agree' with), don't be fooled into accepting it on the basis of misplaced etiquette and politeness. Scientific conclusions are not claims about the world that we have tastes or preferences for; we don't choose to accept that vaccinations work in the same way we accept that blue is pretty. We accept scientific conclusions based on the evidence for those positions, and the evidence either supports them or it doesn't. So if someone wants to "disagree", make sure they understand that it is not a matter of opinion: disagreement either implies knowledge of evidence which refutes the scientific conclusion, or it represents a rejection of reality itself.

Thursday 11 October 2012

Bridging the Gap: Stereotype Threat

A common objection to the cause championed by feminists, and social justice advocates in general, is that the fight is already over. The idea is that because the moustached villains in black top hats and capes of days gone by are mostly extinct, no longer able to oppress their victims through overt laws banning them from voting or by relegating them to the "Mad Men"-esque secretarial pools of the past, there are no longer any "real" problems that need to be addressed or solved.

Unfortunately, there are still obvious problems of inequality in society: black students generally performing worse than other groups on academic tests, women being paid less than men for doing the same jobs, all types of minorities having difficulty getting hired for various jobs, and so on. Some people argue that these differences are caused by innate or natural differences between these groups, and this is perhaps a possible explanation for some of the differences mentioned; however, it is important to ensure that our conclusions are based on evidence and not just our assumptions about what might be true. In order to figure out what could cause these differences we must consider all possibilities, and some of the best evidence comes from research looking at social and cultural influences - that is, at how large-scale differences can emerge from very subtle behaviors, beliefs, and norms accepted by society. One of these contributing factors is a process known as "stereotype threat".

WHAT IS IT?

'Stereotype threat', as defined in the seminal paper by Steele and Aronson1, describes the phenomenon whereby awareness of negative stereotypes about the stigmatised group you belong to can put you at risk of confirming those stereotypes. To put it more simply, if a negative belief about your group is drilled into you, you can start to believe it and it will affect your performance. For example, the original study by Steele and Aronson looked at the racial gap in academic achievement to see if it could be explained in terms of stereotype threat. To do this, they presented a series of tests with varied instructions: sometimes participants were told that the test measured intellectual ability (which should activate the stereotype threat) and sometimes they were told that the test did not measure intellectual ability at all (which should neutralise the stereotype threat). What they found were the results presented in the graph below:


So this study suggested that a key factor in the disparity in academic achievement is how students perceive their abilities in relation to their race. Steele2 argued that these threats are "in the air", and that by clearing the air these group differences will be diminished.

Friday 5 October 2012

William Lane Craig on Animal Suffering

For those who aren't aware of William Lane Craig: he is technically a philosopher of religion, but it is probably more accurate to refer to him as a Christian apologist. He has made a number of ethically dubious claims over the years, like suggesting that the genocide of a people is morally right if God commands it, but recently he has been pulled back into the spotlight for arguing that animals lack the capacity to suffer.

In his article "Animal Suffering" and in multiple debates on related topics, WLC has made the following argument:
In his book Nature Red in Tooth and Claw, Michael Murray explains on the basis of neurological studies that there is an ascending three-fold hierarchy of pain awareness in nature1:
  • Level 3: Awareness that one is oneself in pain
  • Level 2: Mental states of pain
  • Level 1: Aversive reaction to noxious stimuli
...
Level 3 is a higher-order awareness that one is oneself experiencing a Level 2 state. Your friend asks, “How could an animal not be aware of their suffering if they're yelping/screaming out of pain?" Brain studies supply the remarkable answer. Neurological research indicates that there are two independent neural pathways associated with the experience of pain. The one pathway is involved in producing Level 2 mental states of being in pain. But there is an independent neural pathway that is associated with being aware that one is oneself in a Level 2 state. And this second neural pathway is apparently a very late evolutionary development which only emerges in the higher primates, including man. Other animals lack the neural pathways for having the experience of Level 3 pain awareness. So even though animals like zebras and giraffes, for example, experience pain when attacked by a lion, they really aren’t aware of it.
The argument essentially accepts that animals feel pain, but it goes on to make two more problematic assertions: a) that animals lack a meta-awareness of pain (which WLC seems to define as "suffering"), and b) that this meta-awareness necessitates a pre-frontal cortex. What this boils down to is the suggestion that animals cannot reflect on, or understand, the sensations of pain that they have, and that this reflection is a condition required for "suffering".

Wednesday 3 October 2012

Priming Denialism

The concept of priming in psychology refers to the unconscious effect that a stimulus can have on future behavior. For example, one study looked at the effect that briefly holding a hot or cold beverage had on a person's impression of another person1. The stimulus in this situation is the hot or cold beverage, and what the researchers found was that the temperature translated almost directly into our metaphorical way of assessing the people we meet; that is, after holding a cold drink, people were more likely to interpret the other person's behavior as cold and unwelcoming, whereas after holding a hot drink they were more likely to interpret it as warm and welcoming. As the researchers describe it, this is like "holding warm feelings towards someone" or "giving someone the cold shoulder".

Recently, a classic priming experiment by Bargh, Chen, and Burrows2 was called into question by Doyen, Klein, and Pichon in their paper "Behavioral Priming: It's All in the Mind, but Whose Mind?"3. The original study looked at the effect that including "old" words in a language task had on the speed at which subjects left the lab after the experiment; the expectation was that if the words in the task included terms like "old", "grey", or "bingo" (among others), then the participants would walk more slowly as they left the room. Doyen, however, suspected that subtle behaviors of the experimenters may have affected the behavior of the subjects, and so they attempted to replicate the study with a stricter methodology to rule out a number of possible confounds.

To do this, Doyen gave a set script to all 10 'experimenters', which they were to repeat to the subjects taking part in the study. The interesting twist in this study was that Doyen told half of the 'experimenters' to expect their subjects to walk more slowly, and the other half to expect their subjects to walk more quickly. The 'experimenters' were given stopwatches to time the participants (as was done in the Bargh experiment), but there were also infra-red sensors that gave a more objective and more accurate measure of walking speed.

Friday 28 September 2012

Why Addicts Overdose: Learned Tolerance

At first glance, the question of why addicts overdose seems absurd given the apparent straightforwardness of the situation: an addict enjoys taking a drug, takes more of it over time, and eventually takes more than his body can handle, resulting in an overdose. However, things are not quite so simple. Since the victims of overdose are typically long-term users rather than novices, we can expect that they would have an extensive history with the substance and, as a consequence, a significant tolerance to the drug; looking specifically at heroin, this means that the user would need high levels of the drug to induce a fatal respiratory depression. Yet when we compare heroin addicts who died from an "overdose" with those who died through homicide, we find that the majority of victims in the "overdose" group had no higher levels of morphine in their blood than the comparison group1. The conclusion of this study was that, for the majority of overdose victims, the death could not be attributed to a toxic quantity of morphine in the blood. Even 30 years ago, the problems with the standard "overdose" story were succinctly summarised by Brecher2, who said:
  1. the deaths cannot be due to overdose,
  2. there has never been any evidence that they are due to overdose,
  3. there has long been a plethora of evidence demonstrating they are not due to overdose.

TOLERANCE

To understand why people have claimed that it is a misnomer to attribute these deaths to the traditional understanding of "overdose", we have to look at the factors that influence the development of drug tolerance and ask why the usual processes of tolerance failed. Tolerance is usually defined as the decreasing effect of a drug across repeated administrations, but even as far back as the 1960s researchers were arguing that a complete explanation of tolerance requires an element of learning. This was argued on the basis of findings that could only be explained from a learning perspective; for example, the observation that tolerance to the analgesic effect of morphine can persist in rats even after a number of drug-free months3.

Because of the way we normally conceive of 'tolerance', and our reliance on a purely physiological model, the idea that learning affects our biological tolerance to drugs can be quite a difficult concept to get our heads around. However, having seen how classical conditioning can affect our response to placebos (and even the functioning of our immune system), we can look at how it might play a role in drug tolerance. Recall how classical conditioning was proposed to work by Pavlov: a previously neutral stimulus (e.g. a bell) is paired with an unconditioned stimulus (e.g. food) until the neutral stimulus takes on the value of the unconditioned stimulus and produces the same effects (the sound of the bell becoming capable of making a dog salivate in the same way food does). With this in mind, we can begin to understand how classical conditioning could affect drug tolerance: the cues that reliably accompany drug-taking can themselves come to evoke learned responses.
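For readers who like their conditioning formalised, here is a toy simulation in the spirit of the classic Rescorla-Wagner learning model (my own illustrative sketch, not the model used in the studies cited above; all parameter values are invented). The idea, associated with Siegel's work on conditioned tolerance, is that the drug-taking context acquires an association that opposes the drug, so the net effect shrinks in the familiar context but remains at full strength in a novel one:

```python
# Toy Rescorla-Wagner-style simulation of cue-dependent ("learned") tolerance.
ALPHA_BETA = 0.2  # learning rate (invented value)
LAMBDA = 1.0      # asymptote of the cue-triggered compensatory response (invented)

def train_context(trials, v=0.0):
    """Associative strength of the drug-taking context after repeated pairings."""
    for _ in range(trials):
        v += ALPHA_BETA * (LAMBDA - v)  # Rescorla-Wagner update rule
    return v

drug_effect = 1.0
v_familiar = train_context(trials=30)  # a well-learned drug-taking context

# Net effect = raw drug effect minus the compensatory response evoked by cues.
print(round(drug_effect - v_familiar, 3))  # ~0.001: heavily blunted (tolerance)
print(drug_effect - 0.0)                   # 1.0: full-strength effect where no cues exist
```

On this account, taking a usual dose in an unfamiliar setting strips away the learned, cue-triggered protection, which is one proposed route to "overdose" without any increase in dose.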

Saturday 22 September 2012

Free Animal Behavior Course - University of Melbourne

Coursera, the website dedicated to providing free university-level courses to the public, has announced that Raoul Mulder and Mark Elgar will be presenting a course on animal behavior. The course is planned to be six weeks long but does not yet have a specified start date. You can find more details here:
Many of us derive inspiration from watching natural history documentaries and their astounding catalogue of wild animal behaviours.  In this course, we will explore how scientists study animal behaviour, and in particular how behaviour is shaped by the evolutionary forces of natural and sexual selection. Topics include resource acquisition; avoiding enemies; mate choice and sexual conflict; cues, signals and communication; parental care and social behaviour; and the role of genes, hormones and learning in regulating behavioural diversity.  We draw on examples from across the animal kingdom to illustrate the complex mechanisms underlying adaptations, and complement these with natural history videos that highlight key concepts. We evaluate the scientific rigour of studies used to test theory, and highlight the often ingenious methods adopted by researchers to understand animal behaviour. 
It sounds like it could be quite interesting, so sign up before all the spaces are filled! 

Cawsal Reasoning

A few years ago, the psychologists Saxe, Tzelnic, and Carey looked at how children as young as 7 months old react when a bean bag is thrown from behind a screen, with the screen then being raised to reveal either a human hand or puppet, or an inert object like a toy train1. What they found was that, even in children this young, there was evidence of causal reasoning: the children showed signs of surprise when the screen was removed to reveal an inert object. That is, the children were utilising an abstract understanding of how causal agents (the hand or puppet) can affect the environment - an understanding which holds even when the causal agent cannot be seen.

At this point you might be asking: "What's with the horrible pun in the title?". The answer is found when we compare the novel aspect of Saxe's work with the novel aspect of the recent work of Taylor, Miller and Gray. The former is interesting for extending evidence of causal reasoning to very young children, and the latter is interesting for extending evidence of causal reasoning to crows.

In their latest paper, "New Caledonian crows reason about hidden causal agents"2, Taylor and colleagues set up a situation analogous to that used by Saxe, obviously adapted for crows (or perhaps they just could not find any puppets and toy trains at short notice). Their design is best characterised in the figure below:

In the first condition (on the left), a crow would observe two people enter the aviary; one person would go behind the "hide" (a screen preventing the crow from seeing the person) whilst the other remained motionless within the room. This was termed the "Hidden Causal Agent" (HCA) condition, as the hidden person would move a stick in and out of the baited hole where the crow would forage for food - a crow capable of causal reasoning should be able to infer that the movement of the stick was being caused by the human in the hide. After moving the stick in and out fifteen times, the person would leave the hide and then the aviary completely (all seen by the crow). The second condition (on the right) was the "Unknown Causal Agent" (UCA) condition, where only one person would enter the aviary and remain motionless in plain view of the crow while the stick moved in and out fifteen times with no apparent cause (the experimenters were manipulating it with a hidden string).

The logic behind the experiment is that the crow should hesitate when attempting to retrieve food from the baited tube, as movement of the stick could result in it being poked in the side of the head (importantly, as the authors take pains to note, this was in actuality impossible, as the stick was only moved when setting up the condition, before the crow began to forage for food). In the HCA condition, if the crow accurately infers the association between the movement of the stick and the human, then there should be little-to-no hesitation when foraging, because the agent causing the movement of the stick leaves before the crow approaches the baited tube; in other words, there is no reason to fear being poked in the side of the head. In the UCA condition, however, the crow has no clues about what could be causing the movement of the stick, so it has no information on whether the stick will move again (thus presenting a risk). The results are shown below:



The pretty lines and dots tell us that in the HCA condition, the crows spent significantly less time investigating the stick and its surroundings before foraging for food. Since the only difference between the two conditions was the observation of a second experimenter entering and then leaving the hide, our explanation must appeal to this variable. The authors argue (convincingly, in my opinion) that the best explanation for this behavior is one consistent with the Saxe research - that the crows are demonstrating causal reasoning, as they have less reason to fear being poked in the head once the agent believed to be causing the movement of the stick is no longer present.

It is pretty amazing research: even though some observational reports had alluded to these possible abilities in animals (for example, Darwin's example of a dog barking at a parasol being blown across a garden by the wind), this is the first time the ability has been demonstrated under experimental conditions. The authors rightly speculate that the methodology they used could be applied to other animals to assess the causal reasoning abilities of various species, and this could give us some information about the possible selective pressures producing this ability.

For anyone interested, the lead author Alex Taylor did an "AMA" (Ask Me Anything) on Reddit yesterday and answered a number of questions put to him on the study. You can find it here: Caws and Effect – IAM Alex Taylor, Evolutionary Psychologist and lead researcher on the recent paper, "New Caledonian crows reason about hidden causal agents". AMA.

REFERENCES:

1. Saxe, R., Tzelnic, T., Carey, S. (2007) Knowing who dunnit: Infants identify the causal agent in an unseen causal interaction. Developmental Psychology, 43:149–158.

2. Taylor, A.H., Miller, R., Gray, R.D. (2012) New Caledonian crows reason about hidden causal agents. Proceedings of the National Academy of Sciences, Published Online First: 17 September.

Wednesday 19 September 2012

Do Insects Feel Pain?

As you get older, do you sometimes find yourself standing in the kitchen wondering what it was that you were looking for? Do you find yourself struggling to remember the name of that movie, the one with that guy in it, where something happened at the end? When constructing your web to catch your food, do you find you are making more errors in terms of the length of the capture spiral, the number of anomalies per cm, and in four parameters of web regularity?

Okay, maybe the last example applies more to spiders than humans, but researchers have recently looked at the behavioral effects of ageing in spiders and found patterns of cognitive decline similar to those often observed in humans and other "higher" animals.

Ageing alters spider orb-web construction (M. Anotaux, J. Marchal, N. Châline, L. Desquilbet, R. Leborgne, C. Gilbert, A. Pasquet):
ABSTRACT: Ageing is known to induce profound effects on physiological functions but only a few studies have focused on its behavioural alterations. Orb-webs of spiders, however, provide an easily analysable structure, the result of complex sequences of stereotypical behaviours that are particularly relevant to the study of ageing processes. We chose the orb spider Zygiella x-notata as an invertebrate organism to investigate alterations in web geometry caused by ageing. Parameters taken into account to compare webs built by spiders at different ages were: the length of the capture spiral (CTL), the number of anomalies per cm, and four parameters of web regularity (the angle between radii, the number of spiral thread units connecting two successive radii, the parallelism and the coefficient of variation of the distances between silk threads of two adjacent spiral turns). All web parameters were related to ageing. Two groups of spiders emerged: short- and long-lived spiders (with a higher body mass), with an average life span of 150 and 236 days, respectively. In both short- and long-lived spiders' webs, the CTL and the silk thread parallelism decreased, while the variation of the distances between silk threads increased. However, the number of anomalies per cm and the angle between radii increased in short-lived spiders only. These modifications can be explained by ageing alterations in silk investment and cognitive and/or locomotor functions. Orb-web spiders would therefore provide a unique invertebrate model to study ageing and its processes in the alterations of behavioural and cognitive functions.
The details of the study can be found in the link above, but one of the more interesting aspects of the article to me was the discussion of the current scarcity of invertebrate animal models of ageing. I think this is (at least partially) a result of a pervasive belief that humans are distinct from animals: even though over the last century we have chipped away at the dividing line between man and brute with a greater understanding of how humans and related species share commonalities, there still seems to be some resistance to extending this understanding to insects. To suggest that insects are not just automata that respond to their world entirely on instinct can elicit surprise and incredulity, despite the fact that much of our knowledge of how learning works in the brain comes directly from insects1. Once we begin to investigate this topic, finding similarities in cognitive function and learning processes between insect and man, we inevitably find ourselves posing a scientifically and ethically difficult question: do insects feel pain?

Sunday 16 September 2012

How is a Cricket Like a Rat?

The question posed is not intended to be a provoking thought experiment steeped in metaphor, nor is it some lesser-known Buddhist kōan. It is, in fact, a literal question which a group of researchers recently attempted to answer.

How is a cricket like a rat? Insights from the application of cybernetics to evasive food protective behaviour (Heather C. Bell, Kevin A. Judge, Erik A. Johnson, William H. Cade, Sergio M. Pellis):
ABSTRACT: Robbing and dodging is a well-documented food protective behaviour in rats. Recently, we demonstrated that a simple cybernetic rule, gaining and maintaining a preferred interanimal distance, can account for much of the variability in dodging by rats. In this paper, the field cricket, Teleogryllus oceanicus, was used to test whether or not the same or similar cybernetic rules are used by animals of different lineages and body plans. Pairs of female crickets were tested in a circular arena with a clear glass surface. A small food pellet was given to one of the crickets and the attempts to rob the food by the other were videotaped from beneath. The results show that, although crickets, unlike rats, use a variety of defensive strategies, all of the cases in which they use evasion to protect a portable food item conform to the same cybernetic rules used by rats.
There are essentially two interesting aspects to this article. The first, and the more complex of the two, is how the authors use an elaborate methodological design to test which variables were controlling the behavior of their crickets when stealing food. What the authors refer to as a 'cybernetic rule' is the idea that the "robber's" behavior is the result of a basic rule: gain and maintain a preferred distance from other animals. The importance of this suggestion is that it proposes a rule that does not depend on an automatic stimulus-response algorithm, which would predict a fixed and constant reactive response from the "robber" as a competing animal moves towards it. The authors found this was not the case; instead, the crickets relied on the same rule that had previously been observed in rats, with their behavior constantly modified through experience and a changing environment - the cybernetic rule. The authors summarise their results as such:
CONCLUSION: Like rats, crickets are able to protect food from being stolen by other crickets by using evasive strategies. The two types of evasion used by crickets (running and dodging) both adhere to the cybernetic ‘gain and maintain the preferred interanimal distance’ rule that is used by rats, despite the large differences in their body morphology and their mechanics of locomotion. Not only does this show that cybernetic rules can be applied to two different organisms, but also to organisms from vastly different evolutionary lineages, supporting the idea that cybernetic rules may be widely, if not universally applicable (Powers 1973). This possibility has wideranging implications, both for understanding the behaviour of organisms and for the development of artificial systems (e.g. robotics).
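As a rough illustration of what such a cybernetic rule looks like, here is a minimal negative-feedback sketch in the spirit of Powers' control theory (all values are invented; this is not the authors' actual model):

```python
# Minimal sketch of the "gain and maintain preferred distance" rule.
PREFERRED_DISTANCE = 10.0
GAIN = 0.5  # how strongly the robber corrects the error on each step

def robber_step(robber_pos, rival_pos):
    """Move so as to shrink the error between actual and preferred distance."""
    error = PREFERRED_DISTANCE - abs(robber_pos - rival_pos)
    away = 1.0 if robber_pos >= rival_pos else -1.0  # direction pointing away from the rival
    return robber_pos + GAIN * error * away  # retreat when too close, drift back when too far

robber, rival = 0.0, 5.0
for _ in range(8):                    # rival stays put; robber settles at the goal distance
    robber = robber_step(robber, rival)
print(round(abs(robber - rival), 2))  # ~9.98

rival = 12.0                          # the rival lunges closer...
for _ in range(8):
    robber = robber_step(robber, rival)
print(round(abs(robber - rival), 2))  # ~10.03: ...and the robber re-establishes the distance
```

The point of the contrast is that nothing in this rule specifies a fixed response to a fixed stimulus; the same simple goal generates different movements depending on what the rival does.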
The second interesting component of this article is how sometimes it is possible to judge a book by its cover; or, in this case, a scientific article by its title.

The Placebo Effect


The placebo effect is a well-recognised concept in popular culture. On the surface it seems like a relatively easy construct to understand, but there are still questions in science over exactly how it works, and even over how it is best defined. One extreme perspective, from Rupert Sheldrake (in "The Science Delusion"1), suggests that the placebo effect is a demonstration of the mind's ability to manipulate reality, and that the very concept of a placebo contradicts the materialist assumptions of science, thus demonstrating the inadequacy of science to explain our universe.

I, on the other hand, will either demonstrate why Sheldrake's understanding of the placebo effect is flawed or I will deconstruct and solve some of the deepest issues surrounding the mind-body problem and question the fundamental nature of our metaphysical reality. I assume the former is more likely.

DEFINING PLACEBO

In common parlance, the term "placebo" is often used as a way of describing or explaining away the claimed benefits of some pseudoscientific treatment; for example, a skeptic, when presented with the anecdote of someone's mother being cured of a cold after taking some homeopathic concoction, may assert that it was "just the placebo effect". In a more formal sense, medical research uses placebo conditions to judge the comparative effectiveness of a drug, with the implication that the drug has no specific effect if it performs no better than placebo. In both of these cases (and others) there appear to be a few common elements, namely that:
  1. there is a benefit (perceived or actual) that needs explaining
  2. there is an illusory or confounding influence distinct from the substance used as a placebo
  3. there are conceptual (and ethical) issues in viewing the placebo as a "real" treatment.
These elements have been combined and restructured in numerous ways over the years, to suit different agendas, applications, and interpretations, and this has left us with a broad range of formal definitions. Kirsch, for example, defined it as: “substances, given in the guise of active medication, but which in fact have no pharmacological effect on the condition being treated”2, which emphasises the inert nature of the placebo. This definition, however, is complicated by the existence of 'active placebos', where active compounds are used to mimic the side effects of the drug being tested without having any beneficial effect on the condition being treated (like using lorazepam to produce the known side effects of sleepiness and drowsiness associated with the use of the pain killers morphine and gabapentin, without having any pain killing effects of its own3). To complicate things further, other researchers (like Shapiro and Morris) have suggested that it is important to include the possibility of placebo therapies, like sham surgeries4.

Stewart-Williams and Podd looked at some of these definitions and argued that the placebo effect should be defined as: "a genuine psychological or physiological effect, in a human or another animal, which is attributable to receiving a substance or undergoing a procedure, but is not due to the inherent powers of that substance or procedure"5. The advantage of this approach is that it accounts for: a) perceived and actual improvements, b) human and animal results, c) active substances which are inert in relation to the condition being treated, and d) the different forms that placebos can take (e.g. pills and surgeries).