Thursday, 29 November 2012

Gameswithwords - Participate in Language Research

Gameswithwords is a great site set up and run by a number of researchers studying aspects of language. To generate larger sample sizes, they realised they could recruit more participants by making their experiments accessible online, rather than having to put up ads on noticeboards and convince people to come into a university to take part in a study. 

The tasks don't take much time to complete, and the results are made available on their blog once the data has been collated and interpreted. They also give you the option of being emailed the results, so for those who are interested in research and want first-hand experience of what it's like to take part, here's your chance! 

Check out some of the studies and feel free to post any thoughts or feedback here.

Thursday, 22 November 2012

Misunderstanding Behaviorism

Despite the fact that the title of my blog alludes to misunderstandings of behaviorism in popular thought, I've put off writing an article that elucidates and corrects these misconceptions. The reasons for this delay are varied, but the main one is probably a sense of fatigue with the issue: I've engaged people in this discussion many times over the years and rarely does it seem to change any opinions. However, recent instances of banging my head against a wall have reinvigorated my interest in the topic.


Behaviorism is the philosophy of science underpinning behavioral psychology, and it has taken on numerous forms over the space of a century - all of which appear to have been misunderstood or misrepresented to some degree. Arguably, the grandfather of behaviorism was Ivan Pavlov, and whilst his name may not be immediately recognisable to everyone, it is likely that it rings a bell - pun very much intended. As a physiologist studying the salivary reflexes of dogs, Pavlov noticed that his subjects had begun to salivate even before the food had been presented – using pre-CSI investigative techniques, he reasoned that the sounds of the researchers’ footsteps as they brought the food down the hallway had somehow become associated or paired with the food. To test this experimentally, he set up conditions where he would ring a bell immediately before feeding the dogs, initially pairing the two stimuli together, and then later test the bell alone without presenting the food. He found that the bell by itself was enough to produce salivation in the dogs - a process that has come to be known as “classical conditioning”1.

A psychologist by the name of John B. Watson (who was studying instincts in animals at the time) heard of Pavlov's work and pursued it further, eventually creating what was referred to as “stimulus-response” psychology – otherwise known as methodological behaviorism. In 1913 he wrote a paper called “Psychology as the Behaviorist Views It”2, informally known as the “Behaviorist Manifesto”, and it is in this article that Watson attempts to separate psychology from its philosophical roots in order to push it, willingly or not, into the realm of science. To do so, he argued that a science of psychology must be objective, with no recourse to internal states that can only be discovered through introspection, thus rejecting the approaches of people like William James before him. He suggested that the future of psychology lay in understanding our relation to the environment and how our behavior is affected by various stimulus-response relations. The culmination of this view he described in the 1930 edition of his book, simply titled “Behaviorism”3. It is here that the misunderstanding of behaviorism began.

Sunday, 11 November 2012

The Mind-Body Problem in Science

For the philosophers out there who had an aneurysm upon reading the title, just bear with me for a minute. Instead of attempting to tackle dualism using science (and thus invoking scientism to a degree that would make Sam Harris proud), I want to focus more on how naive assumptions of the interaction between mind and body can give rise to fallacious reasoning - particularly in interpretations of neuroscientific research. In other words, this is mostly going to be a rehash of articles like "Your Brain on Pseudoscience" and "The Rise of Popular Neurobollocks"; and my favourite of this genre of cranky-skeptical diatribes, an article written by Massimo Pigliucci called: "The Mismeasure of Neuroscience". 

Massimo describes the fundamental problem quite succinctly here:
Let’s begin with what exactly follows from studies showing that X has been demonstrated to have a neural correlate (where X can be moral decision making, political leanings, sexual habits, or consciousness itself). The refrain one often hears when these studies are published is that neuroscientists have “explained” X, a conclusion that is presented more like the explaining away (philosophically, the elimination) of X. You think you are making an ethical decision? Ah!, but that’s just the orbital and medial sectors of the prefrontal cortex and the superior temporal sulcus region of your brain in action. You think you are having a spiritual experience while engaging in deep prayer or meditation? Silly you, that’s just the combined action of your right medial orbitofrontal cortex, right middle temporal cortex, right inferior and superior parietal lobules, right caudate, left medial prefrontal cortex, left anterior cingulate cortex, left inferior parietal lobule, left insula, left caudate, and left brainstem (did I leave anything out?). 
I could keep going, but I think you get the point. The fact is, of course, that anything at all which we experience, whether it does or does not have causal determinants in the outside world, has to be experienced through our brains. Which means that you will find neural correlates for literally everything that human beings do or think. Because that’s what the brain is for: to do stuff and think about stuff.
What he is describing here is a phenomenon known as the 'reverse inference fallacy', which is a specific instance of "affirming the consequent" in logic. The traditional application (or misapplication) of reverse inference is described by Poldrack1, who presents the argument as:

  1. In previous studies, when cognitive process X was assumed to be involved, brain area Z was activated
  2. In the current study, when task A was presented, brain area Z was activated
  3. Therefore, activation of brain area Z in the current study demonstrates the involvement of cognitive process X during task A.

This can also be presented as such:

  1. If P then Q
  2. Q
  3. Therefore, P.

The fallacious nature of the reasoning can be highlighted by inserting any everyday relationship, for example: "If it is raining, then I have an umbrella. I have an umbrella. Therefore, it is raining". This argument is obviously invalid: even accepting the initial if-then premise, we can think of a number of situations where I could have an umbrella without it raining, like if my old one had broken and I had just purchased a replacement at a store, or maybe I'm on my way to a fancy dress party where I have donned my infamous Mary Poppins costume. 
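The umbrella example can be checked mechanically. The sketch below (illustrative only, using Python's standard library) enumerates every truth assignment for P ("it is raining") and Q ("I have an umbrella") and looks for rows where both premises hold but the conclusion fails - the existence of even one such row is what makes the argument form invalid:

```python
from itertools import product

def implies(p, q):
    # Material conditional: "if p then q" is false only when p is true and q is false
    return (not p) or q

# Check every truth assignment for P ("it is raining") and Q ("I have an umbrella")
counterexamples = []
for p, q in product([False, True], repeat=2):
    premise_1 = implies(p, q)  # If P then Q
    premise_2 = q              # Q
    conclusion = p             # Therefore, P
    if premise_1 and premise_2 and not conclusion:
        counterexamples.append((p, q))

# A valid argument form has no row where all premises are true but the conclusion is false
print(counterexamples)  # [(False, True)] - not raining, yet I have an umbrella
```

The single counterexample row is exactly the Mary Poppins scenario: umbrella in hand, not a cloud in the sky.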

It's important to keep in mind, however, that just because the logic is fallacious, it does not mean that the conclusion is necessarily false. So even though there are exceptions to the link between me having an umbrella and it raining, it could happen to be raining when the argument is used. Or, in Poldrack's example, even though it does not follow that the activation of brain area Z must implicate the involvement of cognitive process X, it could still be true that cognitive process X is involved. I'm not sure if this coincidentally correct conclusion has a fancy Latin name, but I liken it to the saying that a broken clock is right twice a day; that is, the fact that the clock is broken does not justify the claim that the time it reads is definitely false, but it does justify our skepticism about the process by which it arrived at that time. 

Tuesday, 6 November 2012

The Sunk Cost Effect

Ah, the Concorde; the joint development program of the British and French governments that pushed ahead even after it became clear the project was no longer economically viable. It was designed to be a passenger aircraft capable of supersonic flight, but its lasting legacy resides mostly in game theory, where it has been adopted as a description of irrational behavior - the Concorde fallacy. More generally, the process behind the fallacy is known as the sunk cost effect.

As the Concorde example suggests, the problematic behavior in question is when a person continues to engage in a behavior because of their initial investment, even though the payoff is no longer available. In common parlance, this could be described as not knowing when to cut your losses; or, as a famous philosopher once remarked, "You got to know when to hold 'em, know when to fold 'em, know when to walk away and know when to run". It was either Descartes or Kenny Rogers, I can never remember. 
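The logic of cutting your losses can be made concrete with some toy numbers (these are purely illustrative, not actual Concorde figures): a rational decision compares only the costs and payoffs that still lie ahead, because the money already spent is gone no matter what you choose.

```python
# Illustrative numbers only - not from any real accounting
sunk_cost = 1000       # already spent; unrecoverable whichever option is chosen
remaining_cost = 300   # additional spend needed to finish the project
future_payoff = 200    # what the finished project would return

# Rational comparison: only future costs and payoffs matter
continue_value = future_payoff - remaining_cost   # 200 - 300 = -100
abandon_value = 0                                 # walk away, spend nothing more

decision = "continue" if continue_value > abandon_value else "abandon"
print(decision)  # abandon

# The sunk cost fallacy smuggles the unrecoverable 1000 into the decision
# ("we'll have wasted 1000 if we stop now"), even though that loss is
# identical under both options and so cannot distinguish between them.
```

Notice that adding the sunk cost to both sides of the comparison changes nothing about which option comes out ahead, which is precisely why it should be ignored.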

These behaviors are mostly of interest to researchers because they violate optimality predictions: instead of engaging in behaviors which maximise returns, there seems to be a consistent deviation towards sub-optimal responding. Initially the effect was believed to be an irrational approach unique to humans (and perhaps even limited to adult humans), which led to the hypothesis that the phenomenon was a product of higher-order thinking - specifically, the overgeneralisation of a rule like "Don't waste"1. Recent research, however, suggests that this might not be true2, 3.

For example, Kacelnik and Marsh4 looked at the preferences of starlings in a two-phase task where they initially had to respond differently on two possible schedules - a high-effort schedule (flying 16 times over a 1m distance) and a low-effort schedule (flying 4 times over a 1m distance) - each signaled by a different colour. In the second phase, the two alternatives had the same effort requirement, yet the subjects consistently preferred the alternative with the same colour as the high-effort schedule. The results were interpreted in terms of the sunk cost fallacy: the greater investment involved in the high-effort schedule produced a greater perceived value of its associated alternative.