Friday 28 September 2012

Why Addicts Overdose: Learned Tolerance

At first glance, the question of why addicts overdose seems absurd given the apparent straightforwardness of the situation: an addict enjoys taking a drug, over time takes more of it, and eventually takes more than his body can handle, resulting in an overdose. However, things are not quite so simple. Since the victims of overdose are typically long-term users rather than novices, we can expect that they would have an extensive history with the substance and, as a consequence, a significant tolerance to the drug. Looking specifically at heroin, this means the user would need high levels of the drug to induce a fatal respiratory depression. Yet when we compare heroin addicts who died from an "overdose" with those who died through homicide, we find that the majority of victims in the "overdose" group had no higher levels of morphine in their blood than the comparison group1. The conclusion of this study was that, for the majority of overdose victims, the death could not be attributed to a toxic quantity of morphine in the blood. Even 30 years ago, the problems with the standard "overdose" story were succinctly summarised by Brecher2, who said:
  1. the deaths cannot be due to overdose,
  2. there has never been any evidence that they are due to overdose,
  3. there has long been a plethora of evidence demonstrating they are not due to overdose.
TOLERANCE

To understand why people have claimed that it is a misnomer to attribute these deaths to the traditional understanding of "overdose", we have to look at the factors that influence the development of drug tolerance and why the usual processes of tolerance failed. Tolerance is usually defined as the diminishing effect of a drug over repeated administrations, but even as far back as the 1960s researchers were arguing that a complete explanation of tolerance requires an element of learning. This argument rested on findings that could only be explained from a learning perspective; for example, the observation that tolerance to the analgesic effect of morphine can persist in rats even after a number of drug-free months3.

Because we normally conceive of 'tolerance' in purely physiological terms, the idea that learning affects our biological tolerance to drugs can be quite a difficult concept to get our heads around. However, after looking at how classical conditioning can affect our response to a placebo (and the functioning of our immune system), we can look at how it could play a role in drug tolerance. Classical conditioning, as Pavlov described it, involves a previously neutral stimulus (e.g. a bell) being paired with an unconditioned stimulus (e.g. food) until the neutral stimulus alone produces the same response (the sound of the bell becoming capable of making a dog salivate in the same way food does). With this in mind, we can begin to understand how classical conditioning could affect drug tolerance.
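To make the conditioning account concrete, here is a minimal sketch (my own illustration, not taken from the literature) of how a learned compensatory response could produce tolerance. The assumption is a simple Rescorla–Wagner-style learning rule: cues paired with drug administration acquire a compensatory response that counteracts the drug, so the net effect shrinks in a familiar context but not in a novel one. The learning rate and saturation value are arbitrary.

```python
# Illustrative toy model of conditioned drug tolerance (hypothetical
# parameters). Repeated pairings of context cues with the drug build a
# compensatory response that subtracts from the drug's net effect.

def conditioned_tolerance(doses_in_context, learning_rate=0.3, max_compensation=0.8):
    """Compensatory response strength after repeated administrations in
    the same cue context (Rescorla-Wagner-style incremental learning)."""
    compensation = 0.0
    for _ in range(doses_in_context):
        # Learning is proportional to the remaining 'surprise'.
        compensation += learning_rate * (max_compensation - compensation)
    return compensation

dose = 1.0
# A long-term user in their usual setting: strong learned compensation.
familiar_effect = dose - conditioned_tolerance(20)
# The same dose in a novel setting: no cues, no compensation.
novel_effect = dose - conditioned_tolerance(0)

print(f"net effect in familiar context: {familiar_effect:.2f}")
print(f"net effect in novel context:    {novel_effect:.2f}")
```

On this sketch, the same dose hits much harder in an unfamiliar environment, which is one proposed explanation for "overdoses" at ordinary doses.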

Saturday 22 September 2012

Free Animal Behavior Course - University of Melbourne

Coursera, the website dedicated to providing free university-level courses to the public, has announced that Raoul Mulder and Mark Elgar will be presenting a course on animal behavior. The course is planned to run for six weeks but does not yet have a specified start date. You can find more details here:
Many of us derive inspiration from watching natural history documentaries and their astounding catalogue of wild animal behaviours.  In this course, we will explore how scientists study animal behaviour, and in particular how behaviour is shaped by the evolutionary forces of natural and sexual selection. Topics include resource acquisition; avoiding enemies; mate choice and sexual conflict; cues, signals and communication; parental care and social behaviour; and the role of genes, hormones and learning in regulating behavioural diversity.  We draw on examples from across the animal kingdom to illustrate the complex mechanisms underlying adaptations, and complement these with natural history videos that highlight key concepts. We evaluate the scientific rigour of studies used to test theory, and highlight the often ingenious methods adopted by researchers to understand animal behaviour. 
It sounds like it could be quite interesting, so sign up before all the spaces are filled! 

Cawsal Reasoning

A few years ago, the psychologists Saxe, Tzelnic, and Carey looked at how children as young as 7 months old would react when a bean bag was thrown from behind a screen, with the screen then being raised to reveal either a human hand or puppet, or an inert object like a toy train1. What they found was that, even in these young children, there was evidence of causal reasoning: the children showed signs of surprise when the screen was removed to reveal an inert object. That is, the children were utilising an abstract understanding of how a causal agent (the hand or puppet) can affect its environment, an understanding that holds even when the causal agent cannot be seen.

At this point you might be asking: "What's with the horrible pun in the title?". The answer is found when we compare the novel aspect of Saxe's work with the novel aspect of the recent work of Taylor, Miller and Gray. The former is interesting for extending evidence of causal reasoning to very young children, and the latter is interesting for extending evidence of causal reasoning to crows.

In their latest paper, "New Caledonian crows reason about hidden causal agents"2, Taylor and colleagues set up an analogue of the situation used by Saxe, obviously adapted for crows (or perhaps they just could not find any puppets and toy trains at short notice). Their design is best characterised in the figure below:

For the first condition (on the left), a crow would observe two people enter the aviary; one person would go behind the "hide" (a screen preventing the crow from seeing the person) whilst the other remained motionless within the room. This condition was termed the "Hidden Causal Agent" (HCA) condition, as the hidden person would move a stick in the baited hole where the crow would forage for food, and a crow capable of causal reasoning should be able to infer that the movement of the stick was being caused by the human in the hide. After moving the stick in and out fifteen times, the person would leave the hide and the aviary completely (all seen by the crow). The second condition (on the right) was the "Unknown Causal Agent" (UCA) condition, where only one person would enter the aviary and remain motionless in plain view of the crow, while the stick moved in and out fifteen times with no apparent cause (the experimenters were manipulating it with a hidden string).

The logic behind the experiment is that the crow should hesitate when attempting to retrieve food from the baited tube, as movement of the stick could result in it being poked in the side of the head (importantly, as the authors take care to note, this was in actuality impossible, as they only moved the stick when setting up the condition, before the crow began to forage for food). In the HCA condition, if the crow accurately infers the association between the movement of the stick and the human, then there should be little-to-no hesitation when foraging for food, because the agent causing the movement of the stick leaves before the crow enters the baited tube. In other words, there is no reason to fear being poked in the side of the head. In the UCA condition, however, the crow has no clues about what could be causing the movement of the stick, so it has no information on whether the stick will move again (thus presenting a risk). The results are shown below:



The pretty lines and dots tell us that in the HCA condition, the crows spent significantly less time investigating the stick and its surroundings before foraging for food. Since the only difference between the two conditions was the observation of a second experimenter entering and then leaving the hide, our explanation must utilise this variable. The authors argue (successfully, in my opinion) that the best explanation for this behavior is one consistent with the Saxe research: the crows are demonstrating causal reasoning, as they have less reason to fear being poked in the head when the agent believed to be causing the movement of the stick is no longer present.

It is pretty amazing research: even though some observational reports had alluded to these possible abilities in animals (for example, Darwin's example of dogs barking at a parasol being blown across a garden by the wind), this is the first time it has been demonstrated under experimental conditions. The authors rightly speculate that their methodology could be applied to other animals to assess the causal reasoning abilities of various species, which could give us some information about the possible selective pressures producing this ability.

For anyone interested, the lead author Alex Taylor did an "AMA" (Ask Me Anything) on Reddit yesterday and answered a number of questions put to him on the study. You can find it here: Caws and Effect – IAM Alex Taylor, Evolutionary Psychologist and lead researcher on the recent paper, "New Caledonian crows reason about hidden causal agents". AMA.

REFERENCES:

1. Saxe, R., Tzelnic, T., & Carey, S. (2007). Knowing who dunnit: Infants identify the causal agent in an unseen causal interaction. Developmental Psychology, 43, 149–158.

2. Taylor, A.H., Miller, R., & Gray, R.D. (2012). New Caledonian crows reason about hidden causal agents. Proceedings of the National Academy of Sciences, Published Online First: 17 September.

Wednesday 19 September 2012

Do Insects Feel Pain?

As you get older, do you sometimes find yourself standing in the kitchen wondering what it was that you were looking for? Do you find yourself struggling to remember the name of that movie, the one with that guy in it, where something happened at the end? When constructing your web to catch your food, do you find you are making more errors in terms of the length of the capture spiral, the number of anomalies per cm, and in four parameters of web regularity?

Okay, maybe the last example applies more to spiders than humans but researchers have recently looked at the behavioral effects of ageing on spiders and found similar patterns of cognitive decline that are often observed in humans and other "higher" animals.

Ageing alters spider orb-web construction (M. Anotaux, J. Marchal, N. Châline, L. Desquilbet, R. Leborgne, C. Gilbert, A. Pasquet):
ABSTRACT: Ageing is known to induce profound effects on physiological functions but only a few studies have focused on its behavioural alterations. Orb-webs of spiders, however, provide an easily analysable structure, the result of complex sequences of stereotypical behaviours that are particularly relevant to the study of ageing processes. We chose the orb spider Zygiella x-notata as an invertebrate organism to investigate alterations in web geometry caused by ageing. Parameters taken into account to compare webs built by spiders at different ages were: the length of the capture spiral (CTL), the number of anomalies per cm, and four parameters of web regularity (the angle between radii, the number of spiral thread units connecting two successive radii, the parallelism and the coefficient of variation of the distances between silk threads of two adjacent spiral turns). All web parameters were related to ageing. Two groups of spiders emerged: short- and long-lived spiders (with a higher body mass), with an average life span of 150 and 236 days, respectively. In both short- and long-lived spiders' webs, the CTL and the silk thread parallelism decreased, while the variation of the distances between silk threads increased. However, the number of anomalies per cm and the angle between radii increased in short-lived spiders only. These modifications can be explained by ageing alterations in silk investment and cognitive and/or locomotor functions. Orb-web spiders would therefore provide a unique invertebrate model to study ageing and its processes in the alterations of behavioural and cognitive functions.
The details of the study can be found in the link above, but one of the more interesting aspects of the article to me was the discussion of the current scarcity of invertebrate animal models of ageing. I think this is (at least partially) a result of a pervasive belief that humans are distinct from animals. Even though over the last century we have chipped away at the dividing line between man and brute, with a greater understanding of how humans and related species share commonalities, there still seems to be some resistance to extending this understanding to insects. To suggest that insects are not just automata responding to their world entirely on instinct can elicit surprise and incredulity, despite the fact that much of our knowledge of how learning works in the brain comes directly from insects1. Once we begin to investigate this topic, finding the similarities in cognitive functions and learning processes between insect and man, we inevitably find ourselves posing a scientifically and ethically difficult question: do insects feel pain?

Sunday 16 September 2012

How is a Cricket Like a Rat?

The question posed is not intended to be a provoking thought experiment steeped in metaphor, nor is it some lesser-known Buddhist kōan. It is, in fact, a literal question that a group of researchers recently attempted to answer.

How is a cricket like a rat? Insights from the application of cybernetics to evasive food protective behaviour (Heather C. Bell, Kevin A. Judge, Erik A. Johnson, William H. Cade, Sergio M):
ABSTRACT: Robbing and dodging is a well-documented food protective behaviour in rats. Recently, we demonstrated that a simple cybernetic rule, gaining and maintaining a preferred interanimal distance, can account for much of the variability in dodging by rats. In this paper, the field cricket, Teleogryllus oceanicus, was used to test whether or not the same or similar cybernetic rules are used by animals of different lineages and body plans. Pairs of female crickets were tested in a circular arena with a clear glass surface. A small food pellet was given to one of the crickets and the attempts to rob the food by the other were videotaped from beneath. The results show that, although crickets, unlike rats, use a variety of defensive strategies, all of the cases in which they use evasion to protect a portable food item conform to the same cybernetic rules used by rats.
There are essentially two interesting aspects to this article. The first, and the more complex of the two, is how the authors use an elaborate methodological design to test which variables were controlling the behavior of their crickets when stealing food. What the authors refer to as a 'cybernetic rule' is the idea that the "robber's" behavior is a result of a basic rule: gain and maintain a preferred distance from other animals. The importance of this suggestion is that it proposes a rule that does not depend on an automatic stimulus-response algorithm, which would predict a fixed and constant reactive response from the "robber" as a competing animal moves towards it. The authors found this was not the case; instead, the crickets relied on the same rule previously observed in rats, with their behavior constantly modified through experience and a changing environment - the cybernetic rule. The authors summarise their results as follows:
CONCLUSION: Like rats, crickets are able to protect food from being stolen by other crickets by using evasive strategies. The two types of evasion used by crickets (running and dodging) both adhere to the cybernetic ‘gain and maintain the preferred interanimal distance’ rule that is used by rats, despite the large differences in their body morphology and their mechanics of locomotion. Not only does this show that cybernetic rules can be applied to two different organisms, but also to organisms from vastly different evolutionary lineages, supporting the idea that cybernetic rules may be widely, if not universally applicable (Powers 1973). This possibility has wide-ranging implications, both for understanding the behaviour of organisms and for the development of artificial systems (e.g. robotics).
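The distinction between a fixed stimulus-response algorithm and a cybernetic rule can be sketched as a simple control loop. This is my own one-dimensional illustration (not the authors' model): the robber does not emit a fixed response to the competitor's approach; it continually adjusts its position to reduce the error between the current interanimal distance and a preferred distance. The gain and distance values are invented for illustration.

```python
# Illustrative sketch of the 'gain and maintain the preferred
# interanimal distance' rule as proportional control in one dimension.
# All numbers are hypothetical.

PREFERRED_DISTANCE = 10.0  # separation the robber tries to maintain
GAIN = 0.5                 # fraction of the distance error corrected per step

def step(robber_pos, competitor_pos):
    """Move the robber to shrink the error between the current and
    preferred interanimal distance (not a fixed reflex response)."""
    error = PREFERRED_DISTANCE - abs(robber_pos - competitor_pos)
    direction = 1.0 if robber_pos >= competitor_pos else -1.0
    return robber_pos + direction * GAIN * error

robber, competitor = 5.0, 0.0
for _ in range(20):
    robber = step(robber, competitor)  # robber corrects toward the set point
    competitor += 0.4                  # competitor keeps approaching

print(f"final separation: {abs(robber - competitor):.2f}")
```

Because the correction is computed from the current error each step, the robber compensates for the competitor's ongoing approach and settles near the preferred separation, which is the kind of experience-sensitive adjustment the stimulus-response account cannot produce.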
The second interesting component of this article is how sometimes it is possible to judge a book by its cover; or, in this case, a scientific article by its title.

The Placebo Effect


The placebo effect is a well-recognised concept in popular culture. On the surface it seems like a relatively easy construct to understand but there are still questions in science over exactly how it works and even questions over how it is to be best defined. One extreme perspective from Rupert Sheldrake (in "The Science Delusion"1) suggests that the placebo effect is a demonstration of the mind's ability to manipulate reality and that the very concept of a placebo contradicts the materialist assumptions of science, thus demonstrating the inadequacy of science to explain our universe.

I, on the other hand, will either demonstrate why Sheldrake's understanding of the placebo effect is flawed or I will deconstruct and solve some of the deepest issues surrounding the mind-body problem and question the fundamental nature of our metaphysical reality. I assume the former is more likely.

DEFINING PLACEBO

In common parlance, the term "placebo" is often used as a way of describing or explaining the claimed benefits of some pseudoscientific treatment; for example, a skeptic, when presented with the anecdote of their mother being cured of a cold after taking some homeopathic concoction, may assert that it was "just the placebo effect". In a more formal sense, medical research uses placebo conditions to judge the comparative effectiveness of a drug, with the implication that the drug has no effect if it is no better than placebo. In both of these cases (and among others) there appear to be a few common elements, namely that:
  1. there is a benefit (perceived or actual) that needs explaining
  2. there is an illusory or confounding influence distinct from the substance used as a placebo
  3. there are conceptual (and ethical) issues in viewing the placebo as a "real" treatment.
These elements have been combined and restructured in numerous ways over the years, to suit different agendas, applications, and interpretations, and this has left us with a broad range of formal definitions. Kirsch, for example, defined it as: “substances, given in the guise of active medication, but which in fact have no pharmacological effect on the condition being treated”2, which emphasises the inert nature of the placebo. This definition, however, is complicated by the existence of 'active placebos', where active compounds are used to mimic the side effects of the drug being tested without having any beneficial effect on the condition being treated (like using lorazepam to produce the known side effects of sleepiness and drowsiness associated with the use of the pain killers morphine and gabapentin, without having any pain killing effects of its own3). To complicate things further, other researchers (like Shapiro and Morris) have suggested that it is important to include the possibility of placebo therapies, like sham surgeries4.

Stewart-Williams and Podd looked at some of these definitions and argued that the placebo effect should be defined as: "a genuine psychological or physiological effect, in a human or another animal, which is attributable to receiving a substance or undergoing a procedure, but is not due to the inherent powers of that substance or procedure"5. The advantage of this approach is that it accounts for: a) perceived and actual improvements, b) human and animal results, c) active substances which are inert in relation to the condition being treated, and d) different forms that placebos can take (e.g. pills and surgeries).