A few entries ago I uploaded a fragment from a study that discusses an intriguing experiment with three chimpanzees (Pan troglodytes) that were trained to tap regularly on a piano keyboard... Read more »
Hattori, Y., Tomonaga, M., & Matsuzawa, T. (2013) Spontaneous synchronized tapping to an auditory rhythm in a chimpanzee. Scientific Reports. DOI: 10.1038/srep01566
Hasegawa, A., Okanoya, K., Hasegawa, T., & Seki, Y. (2011) Rhythmic synchronization tapping to an audio–visual metronome in budgerigars. Scientific Reports. DOI: 10.1038/srep00120
Honing, H., Merchant, H., Háden, G., Prado, L., & Bartolo, R. (2012) Rhesus Monkeys (Macaca mulatta) Detect Rhythmic Groups in Music, but Not the Beat. PLoS ONE, 7(12). DOI: 10.1371/journal.pone.0051369
In days of old, a good bit of learning was done by rote memorization. The lesson is given. Recite and repeat over and over until you’ve got it down. Rote learning still exists. It gets used in some places and for some topics. A radically different approach is discovery learning. With discovery learning, you work [...]... Read more »
Mayer, R. (2004) Should There Be a Three-Strikes Rule Against Pure Discovery Learning?. American Psychologist, 59(1), 14-19. DOI: 10.1037/0003-066X.59.1.14
A recent study found that people high in agreeableness and ego-resiliency, and low in neuroticism, have a stronger response to placebo pain relief. The placebo effect may be related to a person's capacity for self-control. ... Read more »
Peciña M, Azhar H, Love TM, Lu T, Fredrickson BL, Stohler CS, & Zubieta JK. (2013) Personality trait predictors of placebo analgesia and neurobiological correlates. Neuropsychopharmacology : official publication of the American College of Neuropsychopharmacology, 38(4), 639-46. PMID: 23187726
I told you so.

I'm talking about the paper by Pu and colleagues* who meta-analysed the currently available literature looking at two SNPs in everyone's favourite Scrabble classic gene, MTHFR, in relation to autism spectrum disorders (ASDs). Said gene controls production of methylenetetrahydrofolate reductase (MTHFR), which fits very snugly into the whole one-carbon metabolism cycle (see here).

Regular readers might know that I have a bit of a thing for MTHFR with autism in mind, and how MTHFR serves an important purpose in reducing the compound 5,10-methylenetetrahydrofolate to 5-methyltetrahydrofolate, with onward links to homocysteine (see here) and methionine (see here) and all that methylation palaver. For a good summary (well, at least I think so) you might also want to have a look at this older post detailing the process, complete with hand-drawn diagram by yours truly.

In essence, Pu et al reiterated the important role that the MTHFR C677T SNP might have in some cases of autism; in particular how "the C677T polymorphism was found to be associated with ASD only in children from countries without [folic acid] food fortification", denoting the potentially important link with the vitamin of the hour, folate (folic acid, vitamin B9) (see here).

There's little more for me to add to this post that hasn't already been said. MTHFR is probably not going to be an issue for everyone with autism, and indeed might also be potentially important to other conditions outside of the autism spectrum (see here for a discussion of that recent schizophrenia paper). Mmm... perhaps another part of that common ground and potential RDoC variable? The nutrition link is perhaps something which adds to the view that environment might be a modifier of risk of some ASDs, bearing in mind also the overlap with things like vitamin B12 (see here). That being said, I'm also going to draw your attention back to all that folate receptor autoantibody stuff too, just to bear in mind.

I told you so.

----------

* Pu D. et al. Association between MTHFR gene polymorphisms and the risk of autism spectrum disorders: a meta-analysis. Autism Res. May 2013.

----------

Pu D, Shen Y, & Wu J (2013). Association between MTHFR Gene Polymorphisms and the Risk of Autism Spectrum Disorders: A Meta-Analysis. Autism research : official journal of the International Society for Autism Research PMID: 23653228... Read more »
Pu D, Shen Y, & Wu J. (2013) Association between MTHFR Gene Polymorphisms and the Risk of Autism Spectrum Disorders: A Meta-Analysis. Autism research : official journal of the International Society for Autism Research. PMID: 23653228
For the penultimate round of the TV show The Apprentice, the competing entrepreneurs must face a series of interviews with a crack team of hardened executives. The implicit, believable message is that these veterans have seen all the interview tricks in the book and will spot any blaggers a mile off. However, a new study provides the reality TV show with a reality check. A team led by Marc-André Reinhard report that experienced job interviewers are in fact no better than novice interviewers at spotting when a candidate is lying.
The researchers filmed 14 volunteers telling the truth about a job they'd really had in the past and then spinning a yarn about time in a job they'd never really had. The volunteers were offered a small monetary reward to boost their motivation. These clips were then played online to 46 highly experienced interviewers (they'd conducted between 21 and 1000 real-life job interviews), 92 interviewers with some experience (they'd interviewed at least once), and 214 students who'd never before acted as a job interviewer. The participants' task was to identify the clips in which the interviewee was speaking truthfully about their work experience, and the clips in which the interviewee was fabricating.
Overall the participants achieved an accuracy rate of 52 per cent - barely above chance performance, which is consistent with a huge literature showing how poor most of us are at spotting deception. But the headline finding is that the more experienced interviewers were no better than the novice interviewers at spotting lying job candidates - the first time that this topic has been researched. Greater work seniority, having more work experience and having more subordinates at work were also unrelated to the ability to spot lying job candidates.
There was a glimmer of hope that interview lie-detection skills could be taught. Participants who reported more correct beliefs about non-verbal cues to lying (e.g. liars don't in fact fidget more) were slightly more successful at recognising which job candidates were lying (each correct belief about a non-verbal cue added 1.2 per cent more accuracy on average). Experienced and novice interviewers in the current study didn't differ in their knowledge about lying cues, which helps explain why the veterans were no better at the task. The more experienced interviewers were however more skeptical overall, tending to rate more of the clips as featuring lying.
"Our results provide the first evidence that employment interviewers may not be better at detecting deception in job interviews than lay persons," the researchers said, "although it is a judgmental context that they are very experienced with."
Although the main gist of the results is consistent with related research in other contexts - for example, studies have found police detectives are no better at spotting lies, despite their interrogation experience - this study has some serious limitations, which undermine the applicability of the findings to the real world. Above all, the study did not involve real interviews, which meant the participants were unable to interact with the interviewees in a dynamic manner.
Reinhard, M., Scharmach, M., & Müller, P. (2013). It's not what you are, it's what you know: experience, beliefs, and the detection of deception in employment interviews. Journal of Applied Social Psychology, 43(3), 467-479. DOI: 10.1111/j.1559-1816.2013.01011.x
Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.
... Read more »
Reinhard, M., Scharmach, M., & Müller, P. (2013) It's not what you are, it's what you know: experience, beliefs, and the detection of deception in employment interviews. Journal of Applied Social Psychology, 43(3), 467-479. DOI: 10.1111/j.1559-1816.2013.01011.x
It often seems as though policy-making has devolved into nothing more than a contest where the goal is to blame as many people as possible (but not yourself) for the country’s problems. Fossil fuel companies blame environmental regulations for economic stagnation and high energy prices. Neocons blame civil libertarians for national security weaknesses. And of [...]... Read more »
Rothschild, Z., Landau, M., Molina, L., Branscombe, N., & Sullivan, D. (2013) Displacing Blame over the Ingroup’s Harming of a Disadvantaged Group can Fuel Moral Outrage at a Third-Party Scapegoat. Journal of Experimental Social Psychology. DOI: 10.1016/j.jesp.2013.05.005
The month of May is a violent thing
In the city their hearts start to sing
Well, some people sing, it sounds like they're screaming
I used to doubt it, but now I believe it

Month Of May ------ The Arcade Fire

Today is Mental Health Month Blog Day, sponsored by the American Psychological Association (APA). It's designed to:

...educate the public about mental health, decrease stigma about mental illness, and discuss strategies for making lasting lifestyle and behavior changes that promote overall health and wellness.

If the public has been following the recent hullabaloo about how to diagnose mental illnesses, they might be confused about the current and future direction of the field. How did we get here?

As most of you know, the American Psychiatric Association (the other APA) is about to release its updated Diagnostic and Statistical Manual of Mental Disorders, the much maligned DSM-5. Weeks before the big launch, however, the National Institute of Mental Health (NIMH) stole the show by announcing that it will be re-orienting its research away from DSM categories:

...While DSM has been described as a “Bible” for the field, it is, at best, a dictionary, creating a set of labels and defining each. The strength of each of the editions of DSM has been “reliability” – each edition has ensured that clinicians use the same terms in the same ways. The weakness is its lack of validity. Unlike our definitions of ischemic heart disease, lymphoma, or AIDS, the DSM diagnoses are based on a consensus about clusters of clinical symptoms, not any objective laboratory measure.

Instead, the Research Domain Criteria (RDoC) framework would become the preferred method for organizing biologically-based research on mental illnesses, with the ultimate goal of constructing a new classification scheme. This caused quite a commotion, leading many to comment on NIMH's shocking repudiation of DSM-5. However, to long-time observers of RDoC's development, this was not a surprise. And the initial lack of clarity on the distinction between the RDoC Dimensional Approach for Research vs. DSM-5 for Diagnosis didn't help matters, nor did the uncertainty about whether NIMH would fund DSM-based research at all.1

NIMH issued a press release on May 13 to clarify its position:

DSM-5 and RDoC: Shared Interests
Thomas R. Insel, M.D., director, NIMH
Jeffrey A. Lieberman, M.D., president-elect, APA

NIMH and APA have a shared interest in ensuring that patients and health providers have the best available tools and information today to identify and treat mental health issues, while we continue to invest in improving and advancing mental disorder diagnostics for the future. Today, the APA's Diagnostic and Statistical Manual of Mental Disorders (DSM), along with the International Classification of Diseases (ICD), represents the best information currently available for clinical diagnosis of mental disorders. Patients, families, and insurers can be confident that effective treatments are available and that the DSM is the key resource for delivering the best available care. The NIMH has not changed its position on DSM-5. As NIMH’s Research Domain Criteria (RDoC) project website states, “The diagnostic categories represented in the DSM-IV and the International Classification of Diseases-10 (ICD-10, containing virtually identical disorder codes) remain the contemporary consensus standard for how mental disorders are diagnosed and treated.”

Yet, what may be realistically feasible today for practitioners is no longer sufficient for researchers. Looking forward, laying the groundwork for a future diagnostic system that more directly reflects modern brain science will require openness to rethinking traditional categories. It is increasingly evident that mental illness will be best understood as disorders of brain structure and function that implicate specific domains of cognition, emotion, and behavior. This is the focus of the NIMH’s Research Domain Criteria (RDoC) project. RDoC is an attempt to create a new kind of taxonomy for mental disorders by bringing the power of modern research approaches in genetics, neuroscience, and behavioral science to the problem of mental illness.

So what is RDoC, and how might it be applied to new research projects? From the DSM perspective of categorical disorders (e.g., schizophrenia, major depression, and obsessive compulsive disorder), RDoC embraces diagnostic messiness. Patients previously excluded from a study due to comorbidities, or because they don't meet full criteria? Misfits from the "Not Otherwise Specified" (NOS) category? Now they're in. Specifically, the instructions for RFA-MH-14-050 state:

Priority will be given to applications that have a well-justified plan to include patients from multiple diagnostic groups (including Not Otherwise Specified and forme fruste diagnoses) as appropriate for explicating the dimensions and constructs of interest in the study design. Studies that include patients from a single diagnostic group may also be considered if there is a particularly strong justification for examining constructs of interest within one diagnostic category. A defensible approach might be to study all patients presenting themselves at a specialty clinic, e.g., mood disorders clinic, anxiety clinic, or psychotic disorders clinic, regardless of whether they meet criteria for a particular DSM diagnosis.

One potential pitfall of this approach is the money required to enroll huge numbers of patients. If commonalities in cognitive function or brain circuitry or especially genetic risk factors are to emerge from studying all patients with mood disorder-like symptoms, then sample sizes must be very large to overcome potential noise in the system(s).

The applicant would propose to study one or more of the five different domains, or constructs, that have been fleshed out at NIMH Workshops:

Negative Valence Systems
Positive Valence Systems
Cognitive Systems
Systems for Social Processes
Arousal/Regulatory Systems

The possible units of analysis run the gamut from genes to circuits to behavior, and the studies should use specific tasks (paradigms) and self-report measures, as shown in the Negative Valence Systems matrix below.

[Figure: Draft Research Domain Criteria Matrix]

... Read more »
Vaidyanathan, U., Nelson, L., & Patrick, C. (2011) Clarifying domains of internalizing psychopathology using neurophysiology. Psychological Medicine, 42(03), 447-459. DOI: 10.1017/S0033291711001528
Dichter, G., Damiano, C., & Allen, J. (2012) Reward circuitry dysfunction in psychiatric and neurodevelopmental disorders and genetic syndromes: animal models and clinical findings. Journal of Neurodevelopmental Disorders, 4(1), 19. DOI: 10.1186/1866-1955-4-19
The public will never tire of the nature versus nurture debate but here’s a hint: the answer in biology is always both. But if you’ve ever known any twins, you know they can have quite different personalities which, you would think, are attributable to differences in nurture of one sort or another. To understand this better, some scientists […]... Read more »
Freund, J., Brandmaier, A., Lewejohann, L., Kirste, I., Kritzler, M., Kruger, A., Sachser, N., Lindenberger, U., & Kempermann, G. (2013) Emergence of Individuality in Genetically Identical Mice. Science, 340(6133), 756-759. DOI: 10.1126/science.1235294
Guest post by Patrick Rabbitt, commenting on an article that claimed that simple reaction time is slower now than in the Victorian era. Mundane differences in equipment sensitivity may be responsible... Read more »
Woodley, M. A., te Nijenhuis, J., & Murphy, R. (2013) Were the Victorians cleverer than us? The decline in general intelligence estimated from a meta-analysis of the slowing of simple reaction time. Intelligence. DOI: 10.1016/j.intell.2013.04.006
I am thrilled to announce that this month I am joining a new top-notch science blogging team at Scitable, Nature Education’s award-winning science education website! (But don’t worry, friends. I will continue to post here about animal physiology and behavior every Wednesday). Next week, Scitable will be launching eleven new blogs covering topics like neuroscience, genetics, oceanography, physics and more. I will be co-authoring an evolution blog called Accumulating Glitches together with Sedeer el-Showk (the author of the fantastic nature blog Inspiring Science). To celebrate the launch of these new science blogs, many of us are writing guest posts at Student Voices, another Scitable blog. What follows is the start of my guest post:

[Photo: A female western black widow contemplates the tastiness of her suitor. Photo by Davefoc at Wikimedia Commons.]

Sexual reproduction is a costly affair, but the costs are not usually equal for males and females. Among animals, females generally produce larger gametes (eggs are way bigger than sperm), spend more energy gestating or incubating the young before they are born, and spend more effort caring for the young after they are born. It’s no wonder then that across animal species, females are typically more choosy of who they mate with than males are. But what if the tables are turned and sex is more costly for males than it is for females? Such is often the case for black widow spiders, named for the females' infamous reputation for making a post-coital snack of their mates. In such a situation where every sexual encounter is potentially the last, who would blame males for being more choosy of their mating partners? But are they? To find out, read the rest of the post here! And to find out more, check this out:

Johnson, J., Trubl, P., Blackmore, V., & Miles, L. (2011). Male black widows court well-fed females more than starved females: silken cues indicate sexual cannibalism risk. Animal Behaviour, 82(2), 383-390. DOI: 10.1016/j.anbehav.2011.05.018 ... Read more »
Johnson, J., Trubl, P., Blackmore, V., & Miles, L. (2011) Male black widows court well-fed females more than starved females: silken cues indicate sexual cannibalism risk. Animal Behaviour, 82(2), 383-390. DOI: 10.1016/j.anbehav.2011.05.018
Five to seven million companion animals arrive at animal shelters in the US each year, and about half of these are animals being surrendered by their owners. Why do people surrender their pets? To find out, a new study by Jennifer Kwan and Melissa Bain compared dogs being relinquished at three Sacramento animal shelters to dogs that were there simply to receive their vaccinations.

The experimenter spent time at the shelters during the hours when relinquishments could take place and when vaccination clinics were available. She approached people to ask them to complete the questionnaires, which were available in English or Spanish. A total of 129 people took part: 80 relinquishing owners and 49 continuing owners. Some people were not approached to take part because their dogs seemed to be aggressive, and the experimenter would have had to hold them while the owner completed the questionnaire. In addition, if relinquishing owners seemed particularly upset or arrived requesting euthanasia of the dog, they were not asked to take part, so as not to exacerbate their distress. It is possible this had an effect on the results.

The questionnaire asked about demographic information, attachment to the pet, behavioural problems and, in the case of relinquished dogs, the reasons why. Participants could rate potential reasons for relinquishment as ‘not a reason’, ‘somewhat of a reason’ or ‘strong reason’, so it was possible for multiple reasons to be given. The results from the three shelters were combined for analysis.

Relinquished dogs and ‘continuing’ dogs were equally likely to have attended training classes. The relinquished dogs were significantly more likely to live as outside dogs all of the time, and were significantly older; amongst the male dogs, they were significantly more likely to be intact. Relinquishing and continuing owners were equally likely to have used punishment-based techniques in training their dogs.
There was a correlation between the use of prong and choke collars and problems in loose-leash walking. However, it is not possible to know if these were only employed because of difficulties training loose-leash walking, or if they contributed to the problems, for example through misuse or through owners assuming they didn’t need to train if using them.

Dogs in the relinquished group were significantly more likely to have problem behaviours than those that were being kept. Sixty-five per cent of relinquishing owners said that a behavioural problem was a contributing factor, and about half said it was a relatively strong influence. Aggression was the most common behavioural problem given as a strong reason for relinquishment.

Attachment to pets is a construct that includes knowledge about the pet’s needs, feelings of closeness to the pet, and time spent with them. Attachment scores were significantly lower for relinquishing owners compared to continuing owners. Although not surprising, this is the first time it has been shown using a standard measure of attachment. It would be interesting to know how attachment changes and develops over the duration of an owner’s relationship with their pet. About a third of owners said they were ‘very satisfied’ with their dog’s behaviour. Those who were not so satisfied also had significantly lower scores for attachment, suggesting a link between behaviour and attachment to dogs.

Although moving house was a common reason for animal relinquishment, many people had other pets that weren’t being relinquished. This doesn’t mean they gave incorrect information; many rental properties have rules about the number, height or breed of pets. This is also a potential reason for the numbers of pit bulls in the relinquished group, because they are often listed as one of the restricted breeds.
While it is surprising to learn that people might relinquish some pets and choose to keep others, it is useful to know, as future studies can make a point of learning about kept animals as well as relinquished ones.

The most interesting finding of this study is the frequency of behavioural problems as a reason for relinquishment. This is not surprising, but it underlines the need to help owners find better ways of preventing problems in the first place and managing them if they arise. Surprisingly little is known about people's information-seeking regarding behaviour and training issues, and unfortunately there is a lot of misinformation.

What are your favourite books or other resources for dog owners? (N.B. Please avoid posting active links because urls usually end up in the spam folder. Thank you!)

Reference
Kwan, J., & Bain, M. (2013). Owner Attachment and Problem Behaviors Related to Relinquishment and Training Techniques of Dogs. Journal of Applied Animal Welfare Science, 16(2), 168-183. DOI: 10.1080/10888705.2013.768923... Read more »
Kwan, J., & Bain, M. (2013) Owner Attachment and Problem Behaviors Related to Relinquishment and Training Techniques of Dogs. Journal of Applied Animal Welfare Science, 16(2), 168-183. DOI: 10.1080/10888705.2013.768923
by Rita Handrich in The Jury Room
We have likely all heard the saying “Don’t shoot the messenger”. According to new research, we are more likely to shoot that unlucky messenger when they are an outgroup rather than an ingroup member. While that makes sense (sort of), it’s an intriguing article. And likely a depressing article for those who would like to promote [...]
... Read more »
Esposo SR, Hornsey MJ, & Spoor JR. (2013) Shooting the messenger: Outsiders critical of your group are rejected regardless of argument quality. The British Journal of Social Psychology. PMID: 23316747
The Irish poet Brendan Behan is, I think, credited with the phrase: "There's no bad publicity except an obituary". One wonders how appropriate this phrase might be to the 'diagnostic Bible' (except that it isn't) which is DSM-V which is poised to make its entrance into the World in the coming days.The real Homer @ Wikipedia Indeed, the story of DSM-V even before it hits the diagnostic shelves of all good psychiatric bookshops, has the makings of an epic piece of poetry or literature, or at least a Storify tale. Drama, intrigue and divisions reminiscent of Good and Evil (I'll let you decide who has taken which role) are all included.The various debates on the details of the psychiatric diagnoses contained in DSM-5 have seemingly unearthed smouldering questions about the way mental health is classified, and whether such classifications are helpful for those at the receiving end of such diagnoses, the social-medical world and indeed the wider research universe.Two papers recently published under the heading of 'Current controversies in psychiatry' (understatement of the year) by the BioMedCentral journal series add fuel to the diagnostic debate fire. Ian Hickie and colleagues* (open-access) provide an interesting commentary on clinical classifications in mental health, and how reverse translation "that is, working back from the clinic to the laboratory" might be a direction to think about. Bruce Cuthbert and Tom Insel** (open-access) bring forward the concept album that is RDoC (Research Domain Criteria) and its potential "to transform the approach to the nosology of mental disorders". Their notion of the seven pillars of RDoC harks back to the writings of one T.E. 
Lawrence.Both opinion papers acknowledge that the psychiatric labelling systems we have at the moment are not perfect and reflect the feeling of common ground across various diagnostic labels.I've followed a fair bit of the DSM-V development discussions with autism, sorry the autisms, in mind and how it has morphed into the larger question of how useful labels and tick-box criteria are to the real world. Speaking within the confines of the proposed categorisation of autism spectrum disorder (ASD) it strikes me that much of the debate boils down to the lack of progress made in isolating the biological factors which define conditions like autism. Yes, heterogeneity and maturation have played their part in cloaking autism from biological definition, but despite the seemingly very close relationship between one or two of the gold-standard autism assessment instruments and the new revisions proposed to DSM, one doesn't get the sense that autism will be revealing its definitive biological footprint anytime soon.Although not a novel idea, I have often wondered whether some simple changes to the way that research is carried out in autism circles might yet yield some knowledge gains. So for example, moving away from autism as a diagnosis as being the primary variable; instead focusing on those all important endophenotypes and their discriminating factors. I've talked about work from the MIND Institute as one example of this direction, but there are others too (yep, branched chain amino acids). Intervention, or rather response to intervention is another possible discriminating factor. Y'know best responders vs. non-responders vs. worst responders to the myriad of interventions out there for conditions like autism. Obviously the question then is: how do you categorise responder status?Anyhow, I can't see anything happening too quickly despite all this talk about rethinking nosology given that DSM-IV was with us for 19 years. 
That's not, however, to say that changes might not already be afoot...
----------
* Hickie IB. et al. Clinical classification in mental health at the cross-roads: which direction next? BMC Medicine 2013; 11: 125.
** Cuthbert BN. & Insel T. Toward the future of psychiatric diagnosis: the seven pillars of RDoC. BMC Medicine 2013; 11: 126.
----------
Hickie, I. B., Scott, J., Hermens, D. F., Scott, E. M., Naismith, S. L., Guastella, A. J., Glozier, N., & McGorry, P. D. (2013). Clinical classification in mental health at the cross-roads: which direction next? BMC Medicine, 11: 125.... Read more »
Researchers have found that the size of the frontal lobes of the brain is not the crucial factor in human intelligence it is often assumed to be. The study appears in Proceedings of the National Academy of Sciences (PNAS).

The frontal lobes, as the name suggests, sit at the front of each cerebral hemisphere, one of the two symmetrical halves of the brain.
Professor Robert Barton, lead author, from the Department of Anthropology at Durham University, said in a statement, "Probably the most widespread assumption about how the human brain evolved is that size increase was concentrated in the frontal lobes.
“Although absolute and proportional frontal region size increased rapidly in humans, this change was tightly correlated with corresponding size increases in other areas and whole brain size, and with decreases in frontal neuron densities,” the researchers wrote.
"It has been thought that frontal lobe expansion was particularly crucial to the development of modern human behaviour, thought and language, and that it is our bulging frontal lobes that truly make us human. We show that this is untrue: human frontal lobes are exactly the size expected for a non-human brain scaled up to human size,” Robert Barton added.
The researchers further noted that the cerebellum and other evolutionarily 'primitive' areas of the brain, along with the more extensive networks linking different brain regions, also played an essential part in the expansion of the human brain. These areas are important in human cognition and in related disorders such as dyslexia and autism.
Barton, R., & Venditti, C. (2013). Human frontal lobes are not relatively large. Proceedings of the National Academy of Sciences. DOI: 10.1073/pnas.1215723110... Read more »
People who ostracize (ignore or exclude) others incur psychological costs. Researchers who recently explored whether people suffer psychological costs when they comply with social directives to ignore or exclude others reached that conclusion. The pressure to ignore or exclude someone has become an "all too common" experience, and the authors noted [...]The post Ostracism Hurts: The Psychological Costs of Ignoring or Excluding Others appeared first on Psycholawlogy.... Read more »
Legate N, Dehaan CR, Weinstein N, & Ryan RM. (2013) Hurting you hurts me too: the psychological costs of complying with ostracism. Psychological science, 24(4), 583-8. PMID: 23447557
Can fluent presenters make learning feel too easy?
Eloquent and engaging scientific communicators in the mould of physicist Brian Cox make learning seem fun and easy. So much so that a new study says they risk breeding overconfidence. When a presenter is seen to handle complicated information effortlessly, students sense wrongly that they too have acquired a firm grasp of the material.
Shana Carpenter and her colleagues showed 42 students a one-minute video of a science lecture about calico cats. Half of them saw a version in which the female lecturer was confident, eloquent, made eye-contact and gestured with her hands. The other students saw a version in which the same lecturer communicated the same facts, but did so in a fumbling style, frequently checking her notes, making little eye contact and few gestures.
After watching the video, the students rated how well they thought they'd do on a test of its content ten minutes later. The students who'd seen the smooth lecturer thought they would do much better than did the students who saw the awkward lecturer, consistent with the idea that a fluent speaker breeds confidence. In fact, both groups of students fared equally well in the test. In the case of the students in the fluent lecturer condition, this wasn't as good as they'd predicted. Their greater confidence was misplaced.
A second study was similar - 70 students watched either a fluent or fumbling lecturer, but this time the students had a chance afterwards to spend as long as they wanted reviewing the script. On average, both groups of students devoted the same amount of time (perhaps out of habit). But only among the students who'd watched the fumbling lecturer was there a link between time spent on the script and subsequent performance on the test. This suggests only they used the time with the script to fill in blanks in their knowledge.
"Learning from someone else - whether it is a teacher, a peer, a tutor, or a parent - may create a kind of 'social metacognition'," the researchers said, "in which judgments are made based on the fluency with which someone else seems to be processing information. The question students should ask themselves is not whether it seemed clear when someone else explained it. The question is, 'can I explain it clearly?'".
An obvious limitation of the study is the brevity of the science lecture. It remains to be seen whether this result would replicate in a more realistic situation after a longer lecture. Also, in real life, there may be costs to a fumbling lecture style that weren't picked up in this study, such as students mind wandering and skipping class.
Carpenter, S., Wilford, M., Kornell, N., and Mullaney, K. (2013). Appearances can be deceiving: instructor fluency increases perceptions of learning without increasing actual learning. Psychonomic Bulletin and Review DOI: 10.3758/s13423-013-0442-z
Co-author on this study, Nate Kornell, wrote a guest Digest post in 2008 with study tips for students.
How fluency affects judgement, choice and processing style
Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.
... Read more »
Carpenter, S., Wilford, M., Kornell, N., & Mullaney, K. (2013) Appearances can be deceiving: instructor fluency increases perceptions of learning without increasing actual learning. Psychonomic Bulletin & Review. DOI: 10.3758/s13423-013-0442-z
In their 1968 book Pygmalion in the Classroom, Robert Rosenthal and Lenore Jacobson presented their groundbreaking research that showed teacher expectations are self-fulfilling prophecies. If two students start the school year at the same achievement level, the student the teacher is told is a high achiever will make more gains than the student the teacher believes is [...]... Read more »
Sorhagen, N. (2013) Early teacher expectations disproportionately affect poor children's high school performance. Journal of Educational Psychology, 105(2), 465-477. DOI: 10.1037/a0031754
All it takes is an antenna on a headband. If you've got a breathless video report on the dangers of wireless internet connections, that will help your case. It doesn't take much, though, to turn an ominous hint into a real headache.
Some people consider themselves sensitive to electromagnetic fields. They report symptoms such as burning skin, tingling, nausea, dizziness, or chest pain, and they blame their malaise on nearby power lines, cell phones, or WiFi networks. A recent Slate article described such people moving to a remote West Virginia town where radio-frequency signals are banned. (The town is within the U.S. National Radio Quiet Zone, an area that's enforced to keep signals from interfering with radio telescopes there—telescopes that work because they receive the radio-frequency signals constantly hitting our planet from space.)
There's no known scientific reason why a wireless signal might cause physical harm. And studies have found that even people who claim to be sensitive to electromagnetic fields can't actually sense them. Their symptoms are more likely due to nocebo, the evil twin of the placebo effect. The power of our expectation can cause real physical illness. In clinical drug trials, for example, subjects who take sugar pills report side effects ranging from an upset stomach to sexual dysfunction.
Psychologists Michael Witthöft and G. James Rubin of King's College London explored whether frightening TV reports can encourage a nocebo effect. They recruited a group of subjects and showed half of them a clip from a BBC documentary about the potential dangers of wireless internet. (The BBC later acknowledged that the 2007 program was "misleading.") The remaining subjects watched a video about the security of data transmissions over mobile phones.
After watching the videos, subjects put on headband-mounted antennas. They were told that the researchers were testing a "new kind of WiFi," and that once the signal started they should carefully monitor any symptoms in their bodies. Then the researchers left the room. For 15 minutes, the subjects watched a WiFi symbol flash on a laptop screen.
In reality, there was no WiFi switched on during the experiment, and the headband antenna was a sham. Yet 82 of the 147 subjects—more than half—reported symptoms. Two even asked for the experiment to be stopped early because the effects were too severe to stand.
Witthöft says he expected to see a greater effect in people who had watched the frightening documentary. This wasn't the case overall. Instead, the movie mainly increased symptoms in subjects who described themselves beforehand as more anxious.
"It suggests that sensational media reports especially in combination with personality factors (in this case anxiety) increase the likelihood for symptom reports," Witthöft says.
Plenty of symptoms were reported without the sensationalist TV show, though. The antenna on the head, the researchers' allusion to a "new kind of WiFi," and the instructions to monitor their bodies closely were enough to trigger symptoms in many people who watched the other video.
Witthöft points out that his study would have been stronger if there were a third group of subjects who didn't wear the "WiFi" headband at all, but were simply told to pay attention to their bodies for 15 minutes. This kind of attentiveness might trigger symptoms on its own.
Still, Witthöft says, "I think the high percentage of symptom reports nicely shows how powerful nocebo effects are."
Though the researchers set out to show how irresponsible reports in the media can trigger a nocebo effect, they ended up showing how easy it is to make a person feel sick with just a prop and a few choice words. Even a National Radio Quiet Zone can't protect against that.
Witthöft, M., & Rubin, G. (2013). Are media warnings about the adverse health effects of modern life self-fulfilling? An experimental study on idiopathic environmental intolerance attributed to electromagnetic fields (IEI-EMF). Journal of Psychosomatic Research, 74(3), 206-212. DOI: 10.1016/j.jpsychores.2012.12.002
Image: Scott Beale/Laughing Squid (via Flickr)
... Read more »
Witthöft, M., & Rubin, G. (2013) Are media warnings about the adverse health effects of modern life self-fulfilling? An experimental study on idiopathic environmental intolerance attributed to electromagnetic fields (IEI-EMF). Journal of Psychosomatic Research, 74(3), 206-212. DOI: 10.1016/j.jpsychores.2012.12.002
by Liz in Science of Eating Disorders
I have been fascinated and perplexed by reports of the seemingly invigorating and anxiety-reducing effects of bingeing and purging (purging by self-induced vomiting). Personally, I cringe at the idea of self-induced vomiting and have always wanted to avoid vomiting at all costs, including during food poisoning. The insight from recent blog entries and the subsequent comments has made an impact on me. I see that the motivation to engage in bingeing/purging (b/p-ing) behavior can be intense, and that it can provide an effective way to increase positive affect and reduce stress. The ameliorating effects of b/p-ing remind me of drug addiction, with b/p-ing behavior as the “drug.” This made me wonder: what happens in the brain to impart such “addiction-like” reinforcement?
I know there are reports of opiate and endorphin release following purging, but to me, this seemed like an effect meant to counter the intense aversion (and discomfort?) of the act of purging itself. Correct me if I’m wrong, but it seems like the feeling of being “empty” should be reinforcing as well. As someone who used to restrict quite a bit, I certainly found that feeling …
You May Also Like:
What’s The Point of Bingeing and Purging? And Why Can’t You Just Stop?
Bingeing and Purging Marathons: Repeated Binge/Purge Cycles in Bulimia Nervosa
Binge Eating: When Should We Call It An “Addiction”?
... Read more »
Avena, N., Rada, P., Moise, N., & Hoebel, B. (2006) Sucrose sham feeding on a binge schedule releases accumbens dopamine repeatedly and eliminates the acetylcholine satiety response. Neuroscience, 139(3), 813-820. DOI: 10.1016/j.neuroscience.2005.12.037