The Fairytale of a Static Rate of Autism Part III – Prevalence Hookups or What if They Threw An Autism Epidemic And Nobody Cared?
Posted May 30, 2011
Hello friends –
The osmotic pressure of cool people and pop culture tells me that what we used to call one night stands are now called ‘hookups’, casual sexual encounters of convenience that don’t necessarily mean people are dating, but some release can be found, and everyone moves on with their lives until the next time. This reminds me a lot of how people that ought to know better have been treating autism prevalence studies lately. The results are useful in cementing an already-reached conclusion, but ultimately, the findings are only used as isolated ejaculations of the same ideological tweets. Last week’s hookup doesn’t mean anything come this Saturday night, and there is absolutely no reason, no reason, anyone should be troubled to compare this week’s findings, used to trumpet a static rate of autism, with last week’s findings. What we are witnessing is the equivalent of a scientific one night stand, and anyone who doesn’t think the scientific method should be bent for the sake of expediency ought to be furious.
These posts can oftentimes take me a long while to complete, so dating my start point a bit, about two weeks ago, the NHS study from England came out that described a near 1% prevalence of ‘autism’ in adults. The ‘findings’ from this study actually came to light and received attention in the autism community over a year ago, but the real publication happened in May 2011, so there you are.
About a week ago, the Korea ‘study’ on autism came out; it hit the web with a large footprint, and amazingly, described an atmospheric autism ‘prevalence’ of near 2.5%, with 1 in 38 (!!!!) Korean children ‘estimated’ to be on the autism spectrum. If it has not happened already, this study and its ‘conclusions’ will soon become part of the autism lexicon; an uber-Kevlar argument, impervious to any concerns involving the possibility of an actual increase in the number of children with autism.
Both of these studies share very similar methodologies; essentially, a lot of people were screened through a questionnaire, and a subset of people with ‘high’ scores on the questionnaire were subsequently retested with standard tools for assessing autism. Based on how well the questionnaire did at predicting an autism spectrum diagnosis, an extrapolation, with various ‘corrections’, was made as to how many people in the general population are on the spectrum. In both studies, the overwhelming majority of people ‘estimated’ to have autism were previously undiagnosed and were not receiving any services.
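For readers who want the mechanics, the screen-then-confirm design described above can be sketched in a few lines. All the numbers below are hypothetical, chosen only so the arithmetic lands on a round 1%; they are not figures from the NHS or Korea papers, and real studies apply additional weighting and non-response ‘corrections’ that this sketch omits.

```python
# A minimal sketch of the two-stage prevalence design: screen widely,
# fully assess a subset of high scorers, then extrapolate backwards.
# All inputs are made-up illustrative numbers, not values from either study.

def extrapolated_prevalence(population, screened_positive, assessed, confirmed):
    """Assume the confirmation rate seen in the assessed subsample
    holds for everyone who screened positive, and scale up."""
    # estimated true cases among ALL high scorers, not just those assessed
    estimated_cases = screened_positive * confirmed / assessed
    return estimated_cases / population

# Hypothetical run: 10,000 screened, 500 score 'high', 100 of those get a
# full diagnostic assessment, and 20 are confirmed on the spectrum.
print(extrapolated_prevalence(10_000, 500, 100, 20))  # 0.01, i.e. 1%
```

The fragility the post complains about lives in that single assumption: nudge the confirmation rate, the screening cutoff, or the correction factors, and the extrapolated prevalence moves with it.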
Here’s the thing that is driving me up the wall: crazy, apeshit mystified, and enraged. Nobody cared. Let’s look again at what these studies found and see if we can detect anything of potential interest when their conclusions are compared with one another.
Nobody, and I mean nobody, took these two studies as evidence of an autism epidemic, despite the fact that here we have two supposedly (?) well designed studies that found entire spectrum sized differences in the number of children and adults with autism! You could literally drive the old spectrum through the hole in the new spectrum! If both of these two studies are meaningful, if both have accurately captured autism in their respective target populations, we have no choice but to admit that the epidemic is real, and we have proof that children have an autism spectrum disorder two and a half times more frequently than adults. There is an epidemic of autism in our children; or at least, in Korean children!
Did anyone see those headlines that I somehow missed? Did the online skeptical community acknowledge that we now finally have some solid evidence that indeed, autism rates are higher in children than adults, and somehow I failed to see those conversations?
Here’s what really confuses me. Some of the same people, the same ‘skeptics’, and the same news organizations breathlessly reported both of these findings without, apparently, understanding their implications alongside one another. For example, in 2009, here’s a post from Steven Novella at Science-Based Medicine that touched on the England study and includes this nugget:
They found a consistent prevalence of 1% in all age groups they surveyed. This is remarkable for two reasons – first, they found the exact same 1% figure as the CDC US survey (assuming the CDC data is more accurate than the phone survey published in Pediatrics). This supports the conclusion that the 1% figure may be close to the true prevalence of ASD in the population.
Second, the NHS study found that the prevalence of autism was the same in all age groups, strongly suggesting that true ASD incidence has not been increasing over recent decades and supporting the increased surveillance and definition hypothesis.
Check out how ‘remarkable’ Mr. Novella thinks the 1% matchup between English adults and American children is in terms of making the case for a static rate of autism. This is a guy whose posts outside the autism realm I tend to enjoy in many instances; he is clearly a superior intellect, and he applies a very skeptical eye in his non-autism posts. My presumption is that he was well aware that the NHS study actually diagnosed a grand total of 19 adults, and had good reasons, which he declined to illuminate in that post, for why this relatively low number of results was immune to significant confounding problems, which is why it provided such ‘remarkable’ evidence ‘strongly suggesting that true ASD incidence has not been increasing’.
Then, in May 2011, Mr. Novella posted Autism Prevalence Higher than Thought, concerning the Korea study. Here is a snippet from the conclusions:
This study adds an interesting data point to the whole picture of ASD. If correct, then the theoretically upper limit of ASD prevalence is about 2.6% of the population, more than twice the previous estimate. It also indicates that when you undergo a program of thorough searching, you will find more diagnoses.
What is going on here? The England study, which found a prevalence of 1%, the study that was previously held up as remarkable evidence of a static rate of autism, was exactly the same type of study: wide-scale screening for likely candidates within the general population, followed by targeted autism assessment of people with high scores, and backwards extrapolation. Does anyone think that the Korea study was that much more thorough than the England study? If a study came out tomorrow that reported 5%, or 10% prevalence, would we simply attribute this to an even more strenuously executed methodology? Is there any evidence that we might use to suspect a 5% prevalence reported next week in Colombia is faulty that could not also be applied against Korea?
For what reason should we, now, believe that the England study of adults was so fatally flawed that it missed more than one autistic adult for every one it found? Surely a study capable of missing more than half of the autistic adults had some type of warning signs back in 2009 that might indicate that the evidence might be less than remarkable, maybe questionable, or that, in fact, it might be a Fairytale?
Am I cynical to suggest that what really made the England study such remarkably ‘strong evidence’ of a static rate of autism was that, at the time, it had findings within the statistical range of existing CDC numbers in children? Was the online and media love affair with the England NHS study little more than a prevalence hookup? Have I reached the theoretical limit of jadedness?
There really isn’t a way to reconcile these two findings without either accepting a two and a half times increase in autism in children versus adults, a sort of epidemic-lite, or accepting that one or both of the studies suffer from serious flaws. But if we start accepting that the studies might have serious problems, we shouldn’t be saying they are ‘strong evidence’ of anything, except, perhaps, the difficult to overstate problems of autism prevalence studies. Of course, it is a different ballgame if you are relieved of the intellectual responsibility of actually trying to reconcile the two findings; if you allow yourself the prevalence doublethink that England has meaningful data, and so does Korea, and that the rate of autism isn’t increasing, then, no harm, no foul Big Brother.
One prevalence study that didn’t get the booty call was Brief Report: Prevalence of Pervasive Developmental Disorder in Brazil: A Pilot Study, which came out in February 2011, just three months before Korea. Methodology-wise, this study is a kissing cousin to Korea and England: a screening was performed in the general population, assessments of high scorers followed, and statistical extrapolations were made to reach a prevalence rate. Let’s see what these values look like up against each other, and see if we can detect a pattern.
Can anyone see a pattern here?
Now the skeptic might tell you that the Brazil study was a lot smaller, which is true; the initial screening contained a little less than 1,500 children. But it hardly matters; just to get to the level of English adults ‘found’, they would have had to miss two children for every child they found, and to approach Korea values, they would have needed to miss almost nine children for every child actually diagnosed. Does anyone think this is reality? Why would prospective screening and backwards extrapolation be so accurate in one population, and so wildly inaccurate in another? The Brazil and England studies used versions of the same screening questionnaire!
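The miss-ratio arithmetic above is easy to check for yourself. Taking the Brazil pilot’s point estimate as roughly 0.27% (an approximate figure; consult the paper for the exact value), England at about 1%, and Korea at about 2.64%, a quick sketch:

```python
# Back-of-the-envelope check of the 'missed per found' ratios.
# Prevalence figures are approximate round-offs used for illustration.

def missed_per_found(observed_pct, target_pct):
    """If the true prevalence were target_pct, how many cases did the
    study miss for every case it actually found?"""
    return target_pct / observed_pct - 1

print(missed_per_found(0.27, 1.0))   # roughly 2.7 missed per child found
print(missed_per_found(0.27, 2.64))  # roughly 8.8 missed per child found
```

In other words, for Brazil and Korea to both be ‘right’, the Brazilian screeners had to walk past nearly nine undetected autistic children for every one they diagnosed, using a version of the same questionnaire.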
I understand that being partially funded by Autism Speaks, and having a ‘cultural anthropologist’ with a book on the subject of autism carries some weight in the press conference area; so that might explain why one study got press, and another didn’t. Forgetting the press issue, where are the calls that we should try throwing four thousand Brazilian genomes at a sequencer to see what in their genetic makeup appears to be protecting them from autism so effectively? Why aren’t these studies meaningful evidence of some environmental force acting to create wildly different rates of autism in these different populations?
I would note that the press releases, media regurgitations, and skeptical viewpoints nearly all contained the boilerplate note that more studies are needed. Consider, however, what it means if our need for ‘more study’ is so extensive, and our confidence in our methodologies so thin, that papers published within months of each other, with nearly identical study methods, can find literally nine times higher rates of autism in one population without this counting as a warning sign of a real difference in incidence. What this ought to be telling us is that all of our prevalence data are crapshoots, at best. We shouldn’t get to pick and choose which studies we think are meaningful because they happen to meet comforting quotas, or discard those that fail to support those palliative notions.
It is tempting to look at the Brazil study and evaluate it for design or implementation problems that could cause such startlingly low rates of autism; the authors go into some discussion of the reasons their findings might seem so low. Complicating matters along this line, however, is that the Brazil and Korea studies shared a researcher: the relatively well known psychiatrist with a large PubMed autism prevalence footprint, Eric Fombonne. It occurred to me that it might be a fun experiment to see how reliable Mr. Fombonne has been regarding autism prevalence.
[Click on the image to get a bigger view / stupid wordpress template] Note that I have omitted review papers, or papers that had no abstracts, but it doesn’t really help. (How could it?)
All of these findings were wholly or partially authored by the same person. Is there anything more damning for the state of autism prevalence research than the fact that this person continues to be considered a source of reliable information?
I used to live with a fun dude in college; he went to engineering school and went on to work at a manufacturing facility near our town. One of the funniest things he told me about engineering was this quote:
Dilution is the solution to pollution!
In other words, if you have a hundred pounds of diethyl-pthylate-poisonate to dispose of, ship in a hundred thousand gallons of water and start pumping; if you have two hundred pounds to eject, ship in two hundred thousand gallons of water. This is what is happening to the definition of autism: the quirky element, the ‘broad autistic phenotype’, is seeping into these studies. After dozens, or hundreds, of prevalence studies, we are ultimately left with as many portraits of different entities as there are researchers and widths of spectrum du jour. The upshot of this, however, is that it makes no sense to try to compare these studies.
In the meantime, we are told time and time again that our common sense, our memories of childhood, and the repeated lamentations of every person who has worked with children for the last few decades, all of which are warning us that something is different, are supposedly subject to an array of biases so strong that we cannot trust them to reach any conclusions. Only through carefully planned, objective analysis can we reach any conclusions on autism incidence. The results of this choreographed investigation look like this:
Does anyone really think there aren’t some pretty serious biases operating here? If we cannot use common sense to try to reconcile the picture above, what can we use? If trusting common sense is dangerous to valid conclusions, so is trusting this.
If anyone really thought that Korea and Brazil were measuring the same condition, a condition that until very, very recently has been considered lifelong and severely debilitating, the two wildly different findings would be cause for alarm, undeniable evidence of a massive environmental force influencing the development of autism in some populations. But no one thinks this, no one cares, and that is because no one really believes these studies are measuring the same thing. But admitting this is dangerous to too many; it is the implicit acknowledgement of just how little we understand, how much our policies and research prioritizations are guided by the softest of science and scientists, and ultimately, how frequently we’ve been sold a narrative with the scientifically defensible value of a set of monetized South Florida mortgages.
Such is the way of the prevalence hookup, transiently entertaining, but without meaning from week to week. Until we can find a way past this, past reliance on the shifting sands of behavioral assessments that can vary from researcher to researcher (or by the same researcher!), we can perform all of the ‘thorough investigations’ that we can afford and repeat the ‘findings’ that support our meme until we are blue in the face. None of it will mean a goddamned thing, though we may lose a generation of children while we bounce from one set of findings to another, feeling pleased with the ones that make doom seem unlikely, and discarding the ones that should be cause for great alarm.