
Speeding up science

Tuesday, February 25, 2014
By Joe Rojas-Burke | OB Blogger

The medical research enterprise wastes tens of billions of dollars a year on irrelevant studies. It’s time to fix it.

A few years ago I wrote about a five-year-old Portland girl named Katie who was born with Smith-Lemli-Opitz syndrome, a rare disorder that causes severe developmental delays and a spectrum of birth defects.  I keep thinking about what her mother Kathy told me about her situation as a parent:  

"What's been most difficult," she said, "is having no proven, real treatment out there for her, and having it all be in studies. But also it's not knowing what to expect for her future, not having the answers you would have for a more common disease -- whether she could be independent, what she may or may not be able to do as she grows into an adult."

Although the syndrome's root cause has been known since 1993 – and several major medical centers including Oregon Health & Science University formed a consortium in 2010 to expedite studies – researchers still have not published a randomized, controlled clinical trial of any treatment.  Only six small, nonrandomized studies appear in the National Library of Medicine’s PubMed database, and they don’t provide much in the way of answers.

The “race for a cure” is in many ways a fantasy. If only. Medical research is at best a precarious, plodding mountain climb, with many contestants taking dead-end routes. Some of the most acclaimed drug discoveries took a median of 24 years from conception to becoming established as clinically useful treatments, according to an analysis reported a few years ago in the journal Science. The authors zeroed in on 101 studies published between 1979 and 1983 that were highly regarded and made strong claims about their potential to cure illness. A decade later, only five had led to a practical treatment, and only one of those was in wide use. And 40% of the discoveries celebrated as “breakthroughs” later proved partially or completely ineffective. Among them: hormone replacement for preventing stroke and heart disease, and vitamin E for preventing heart disease.

At its worst, medical research is an enterprise that wastes tens of billions of dollars a year on studies that are repetitive, irreparably biased, kept hidden by industry sponsors, or designed with no regard for the desires of people who have to live with serious illness. I’m disgusted but no longer surprised by the estimate, reported by The Lancet in 2009, that 85% of the money poured into medical research is wasted. The esteemed British journal has now published a major series of papers on the problem of waste in medical research, and I hope it proves a watershed.

The series is freely accessible online (with registration) and worth reading in its entirety. Here are some of the highlights:

Flawed decisions about what research to pursue. From basic science to clinical trials, scientists frequently fail to consider the needs of patients and medical caregivers, and ignore much of what is already known from earlier studies.

• A review of existing knowledge could have spared six healthy volunteers from life-threatening complications in a 2006 safety study on the monoclonal antibody TGN1412. And a healthy volunteer recruited for a 2001 study on the effects of inhaled hexamethonium would not have died from the toxic effects of the drug on her lungs. At least 16 published papers had warned of lung complications from the drug.

• If researchers had assessed animal studies showing no protective effects, they would not have pointlessly tested the drug nimodipine on more than 7,000 stroke patients in the 1990s.  

• Timely review of the evidence on risk factors for sudden infant death syndrome could have identified the danger of putting babies to sleep on their fronts at least a decade sooner, and tens of thousands of infant deaths could have been avoided, a 2005 look-back concluded.

Flawed research designs, methods, and analysis. Scientists routinely plan and conduct studies without seriously considering the value or usefulness of the information that they will produce.

•  When researchers at Oregon Health & Science University – much to their credit – looked back at their record of cancer clinical trials, they found that a third “resulted in little scientific benefit” because they failed to enroll enough subjects.

• Single-mindedly aiming for adequate statistical power may help a scientist bring in grant money and get published in big-time journals. But it tends to pervert the science by tempting researchers to use treatment outcome measures that don’t matter to patients. Clinical trials of potential Alzheimer's disease treatments, for example, have used cognitive function scales that allow detection of small “improvements” that are essentially meaningless for patients.
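To make the power-chasing point concrete, here is a back-of-envelope sample-size calculation using the standard normal-approximation formula for a two-arm trial. This is my own illustration, not a calculation from the Lancet series: it simply shows that with enough patients, even a vanishingly small standardized effect becomes statistically "detectable."

```python
from statistics import NormalDist

def n_per_arm(effect_size, alpha=0.05, power=0.80):
    """Approximate patients needed per arm to detect a standardized
    effect of the given size (normal approximation, two-sided test)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, two-sided
    z_beta = NormalDist().inv_cdf(power)           # quantile for desired power
    return 2 * ((z_alpha + z_beta) / effect_size) ** 2

# A moderate, arguably meaningful effect (d = 0.5) needs ~63 patients per arm:
print(round(n_per_arm(0.5)))   # 63
# A tiny effect (d = 0.1) is still "detectable" -- just enroll ~1,570 per arm:
print(round(n_per_arm(0.1)))   # 1570
```

In other words, a large enough trial can declare almost any difference "statistically significant," which is exactly how outcome scales sensitive to tiny, clinically meaningless changes earn their keep.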

• Current incentives reward scientists for getting newsworthy results published in prestigious journals more directly than they reward them for producing well-designed studies that actually help people who are sick. My favorite example is a wonderfully candid self-assessment by researchers in the Netherlands who conducted a multicenter study of the role of the stress hormone cortisol in mental illness.  The scientists involved produced a stream of publications in highly cited journals, presentations at conferences, and successful PhD dissertations. But the conclusions of many of the investigators contradicted each other, producing an “incoherent” overall result and no scientific progress.  

• Scientists face few negative consequences for making exaggerated claims or producing flawed or incorrect results, and even the most egregious offenders may go undetected for years.  I’m reminded of Marc Hauser, the Harvard professor who got away with fabricating data for ten years before investigators outed the scientific misconduct, and the prestigious Dutch psychologist who made up or manipulated data in dozens of papers before being caught.

• Even when incorrect results are overturned by more definitive research, they can continue to steer scientists in the wrong direction.  Hundreds of scientific papers, for instance,  repeated the discredited claim that vitamin E protects against heart disease after clinical trials ruled out that benefit more than ten years ago. Likewise, researchers continued to quote discredited claims for beta-carotene for cancer and estrogen for Alzheimer disease.

Inaccessible research information.  About half of all health-related studies never go public. Even when researchers publish clinical trial results, they or their industry sponsors very often keep the patient-level data secret from other scientists who otherwise could use it to speed further discoveries.

• The flu drug oseltamivir (Tamiflu) gained FDA approval in 1999, but 60% of the patient data—including the largest known trial—was kept secret by Roche, the drug’s manufacturer, for more than a decade. Some independent researchers say they still can’t be sure how safe and effective the drug is.

• The studies that get published paint an overly rosy picture of new drugs. Dr. Erick Turner, a former drug reviewer for the federal Food and Drug Administration who is now at the Portland Veterans Affairs Medical Center, has carefully documented the problem. In 2008, he and colleagues found that nearly a third of the clinical trials of antidepressants by drug companies produced questionable or negative results that were not published. Trials with positive outcomes were about five times more likely to be published than those without, researchers at the University of California, San Francisco found in 2008 when they examined all the new drugs approved by the FDA over a two-year period.

• Deprived of study results, patients end up using ineffective or harmful treatments. When researchers dug up unreported trial data on the antidepressant reboxetine, their analysis showed it was more harmful and no more effective than placebo for treating  major depression—in contrast to the positive findings of published clinical trials. Pfizer, which sells the drug in Europe, had not published 74% of the patient data.

• • •

I’ll close with the words of Alessandro Liberati, an Italian doctor and medical professor, who was diagnosed with multiple myeloma and knew well the anguish of not having sound research to make treatment decisions. Liberati died in 2012 at the age of 57. Here he is in a 2010 interview:

We need to move forward on several fronts. We need to increase awareness of the misalignment between the research that is done and what needs to be done. Few people understand how much waste there is in research – research on questions that have already been answered, research on irrelevant questions, and so on. Those who use research – health practitioners and patients – need to be involved in setting priorities and designing research.

When I had to decide whether to have a second bone-marrow transplant, I found there were four trials that might have answered my questions, but I was forced to make my decision without knowing the results because, although the trials had been completed some time before, they had not been properly published!

This should not happen. I believe that research results must be seen as a public good that belongs to the community – especially patients. Several practical changes are needed: more public funding and so more public control of research, more integration of research into clinical practice, and routine use of all sound research results in everyday practice. Every clinical encounter should be an occasion for contributing, in some way, to new knowledge.

 

Joe Rojas-Burke blogs on health care and science for Oregon Business.

 

Comments   

 
#1 RE: Speeding up science (Guest, 2014-02-26 17:49:51)
Most doctors have no clue that the body is actually connected. Most endo doctors have no clue what the last 10 years of research has shown.

#2 CaliRN (Guest, 2014-02-26 17:56:55)
Sounds to me as though IRBs should be focusing as much on usefulness and stewardship as on bioethics... because, at the end of the day, how ethical is it to waste a precious resource (be it money or mental effort) on meaningless studies?

Additionally, in this day and age, to not scour the literature (Google Scholar, anyone?!) to ensure that a study will be safe is plainly irresponsible.

#3 If you're not part of the solution... (Guest, 2014-02-26 20:03:31)
My apologies, but I'm going to be blunt, to avoid wasting too many additional billions of medical research dollars:

Your basic thesis is correct - there's quite a lot of waste in medical research. However, quite a lot of this is unavoidable, and what is avoidable isn't going to be helped or corrected by poorly informed rabble-rousing such as you've published here.

You're simultaneously arguing multiple contradictory viewpoints, and it is exactly because it's so easy to sway the general public with emotionally appealing, but logically flawed arguments like these, that much medical research is as wasteful and as redundant as it is.

If you have something valuable to contribute, say it. If the best you can do is use half of one breath to suggest that somehow repeating studies is wasteful, while using the other to point out that initial studies are frequently refuted by later evidence, then all you're doing is contributing to the general societal misunderstanding of how research works, and to the red tape and BS that makes it impossible to conduct most medical research efficiently.

Stop being part of the problem.

#4 JZ, MD (Guest, 2014-02-26 21:27:48)
I have to agree with the 3rd comment about being part of the problem. Of course $$ appear to be wasted, but part of that problem lies with the requirements of the NIH and FDA when performing studies. If you want to get studies published, then have every medical organization possible agree to mandate that whatever data is presented at their meetings (national or local) must be published or submitted for publication within a year post-presentation. If this has not been done, then none of the authors or pharma company sponsors (including the NIH) would be allowed to present at future meetings of any kind worldwide until that requirement is met.

#5 No simple solutions (Guest, 2014-02-27 01:04:54)
Unfortunately it's not nearly as easy as saying "force them to publish" either. The vast majority of the data simply can't be published. To publish requires being able to reach some useful conclusion from the data. There's no venue in which to publish "this didn't really tell us anything", or a million other non-stories that simply don't make for a publishable paper. Without a "result" to publish, tons of information is simply invisible, not because anyone wants to hide it, but because it's just a pile of numbers, and without someone telling a story around them, they're effectively useless.

"Well, then there needs to be a place for all data, even non-publishable data, to be stored" isn't an answer either. A lot of modern research techniques generate Terabytes of data from a single experiment. Fundamentally, more new data is generated across the research world every day than could possibly be collected, compiled and analyzed by any researcher in their entire lifetime. As a result, the bulk of that data will never be useful for anything, unless someone summarizes and synthesizes it by writing it into a story and publishing it.

At the same time, if everything was written up and published, we'd be just as swamped with the publications as with the raw data - it's already impossible to keep up with the rate of publications, even with the limited selection of the data that is being published.

As much as I don't want to be disrespectful of the general public, the real solution lies somewhere in the direction of reducing the extent to which people who don't understand how research works, attempt to mandate processes and targets for scientists. It's not really the public's fault that they get misled in this, because inflammatory pieces like what was presented here, completely misconstrue the realities and issues of science and how the process works.

[more to come]

#6 No simple solutions, part 2 (Guest, 2014-02-27 01:08:31)
Researchers hate to waste funds. There are always exceptions, but, fundamentally researchers want to squeeze every last piece of information out of the resources they have. Anything else doesn't produce as many publications, or any of the other outcomes of research that they're hoping for. When they choose to replicate research, or appear to ignore information, in the vast majority of cases, there's a very good reason. They end up being forced to waste funds, when people who don't understand what the sciences are doing, start prescribing how they should be carrying out their work, or telling them what their research targets are, and obliging them to replicate work that they really don't need to, or preventing them from taking the most economical path to results.

From this, we get suggestions like "research is focusing on publishable units that aren't meaningful to the actual victims of disease". Say this to the general public, and we get movements, people calling politicians, etc., and eventually we get mandates suggesting that if the outcome isn't "meaningful to a disease sufferer, it shouldn't be funded or published". Sounds good, easy to get behind, and disastrous to research and progress. Someone has to publish those "not meaningful, but statistically relevant" bits that don't seem important to the disease sufferers, but that lay the groundwork for someone coming along later to understand the disease and make progress that is relevant. Interfere with that process, even with the best of intentions, and science gets even more inefficient, progress is lost, and the exact opposite of the desired outcome is produced - no meaningful results are ever produced, because the baby steps that are necessary first can't be taken.

I'm not suggesting that science should be left to an ivory tower, or not monitored and engaged, but, when something as complex as research, publication and defining the future directions of productive research is concerned, it's quite difficult for someone not actively engaged, to make realistically valuable contributions.

#7 Joe Rojas-Burke (Guest, 2014-02-28 19:02:29)
re: "If the best you can do use half of one breath to suggest that somehow repeating studies is wasteful..."

Of course replication is important, but replication is not the same as pointlessly repeating a study because you didn't systematically review the existing research. In the examples I gave, such review could have:

1) Prevented the death of a healthy volunteer (at least 16 published papers had warned of lung complications from the drug).

2) Spared six healthy volunteers from life-threatening complications.

And by no means are those the only cases in which research was wastefully repetitive.

#8 Joe Rojas-Burke (Guest, 2014-02-28 19:35:25)
re: "it's quite difficult for someone not actively engaged, to make realistically valuable contributions."

On the contrary, engaging patients is likely to be one of the keys to making research more relevant and less wasteful. There are already examples of such efforts positively changing how scientists approach problems, speeding up research, identifying shared priorities, and flagging opportunities to abort unpromising efforts early.

As Liberati, the physician and multiple myeloma patient, put it:
If we want more relevant information to become available, a new research governance strategy is needed. Left to themselves, researchers cannot be expected to address the current mismatch. Researchers are trapped by their own internal competing interests—professional and academic—which lead them to compete for pharmaceutical industry funding for early-phase trials instead of becoming champions of strategic, head-to-head, phase 3 studies.

#9 www.alltrials.net (Guest, 2014-02-28 20:02:14)
Anyone interested in making all research available should look into this initiative:

from www.alltrials.net

It's time all clinical trial results are reported.
Patients, researchers, pharmacists, doctors and regulators everywhere will benefit from publication of clinical trial results. Wherever you are in the world please sign the petition:
Thousands of clinical trials have not reported their results; some have not even been registered.
Information on what was done and what was found in these trials could be lost forever to doctors and researchers, leading to bad treatment decisions, missed opportunities for good medicine, and trials being repeated.
All trials past and present should be registered, and the full methods and the results reported.
We call on governments, regulators and research bodies to implement measures to achieve this.

#10 On replication (Guest, 2014-02-28 22:57:47)
Quoting Guest:
re: "If the best you can do use half of one breath to suggest that somehow repeating studies is wasteful..."

Of course replication is important, but replication is not the same as pointlessly repeating a study because you didn't systematically review the existing research. In the examples I gave, such review could have:

1) Prevented the death of a healthy volunteer (At least 16 published papers had warned of lung complications from the drug).

2) Spared six healthy volunteers from life-threatening complications.

And by no means are those the only cases in which research was wastefully repetitive.


Unfortunately, you have no idea whether that research was wastefully repetitive, and neither do I.

While those outcomes are tragic, and I'd like to believe that they could have been prevented by more careful literature review, the reality is that it's simultaneously utterly impossible to do a "complete" literature review, and quite often impossible to determine, without replication, which pieces of the literature to actually rely on. (As a complete aside, I'll note that your suggestion that everything ought to be published would only make this even more difficult.)

You're viewing the situation through the clarifying lens of hindsight. It's quite easy to pick out the right 16 papers that foresaw the problems after you know what the problems are, but beforehand, it's not so simple.

Pick any scientific advance that you consider worthwhile. I'll show you 20 prior papers saying it was impossible. If the people who finally made the advance had followed your recommended "waste preventing" literature review, that advance would never have occurred.

I'm not, by any means, saying that there's not waste in the research endeavor. What I am saying is that minimizing that waste requires a very finely tuned focus on the specifics of the particular question and the dynamics of the problem domain, and that the people most able to optimize the economy are almost universally the scientists engaged in it. Picking the salient prior literature on which to focus, out of the immense mountain that could be applicable but that would ultimately lead in unproductive directions and additional waste, requires a combination of intimate knowledge of the field, well-developed networks of critical peers, and a significant dose of just plain luck. Clearly, in the situations you've cited, that process went wrong. On the other hand, it goes right millions of times every day. A prescriptive attempt to "fix" that process, without more knowledge than the people currently engaged in it have, will convert many of those millions of "right" results into unfortunately bad outcomes, and probably not convert many of the "wrong" ones into results that are any better.

#11 Engaging patients (Guest, 2014-02-28 23:39:08)
Quoting Guest:
re: "it's quite difficult for someone not actively engaged, to make realistically valuable contributions."

On the contrary, engaging patients is likely to be one of the keys to making research more relevant and less wasteful. Already there are examples of such efforts positively changed how scientist approach problems, have sped-up research, called out shared priorities, and identified opportunities to abort unpromising efforts early.


I love how the buzzword "relevant" has suddenly become the cachet du jour…

That silliness aside, I'm sorry, but while that makes a great sound bite, sounding good doesn't make it true.

Make no mistake - scientists love to be relevant - do you seriously think that any of them are sitting around saying "gee, let's screw my funders by doing something pointless today"?

That being said, the public opinion filter of "relevance" has repeatedly proven to be disastrous to biomedical research. One need look no further than the billions upon billions of dollars wasted, decades of delay in research toward actual treatments, and hundreds of thousands of lives that haven't been improved, that are a direct result of uninformed tripe and innuendo swaying public opinion about the relationship between vaccines and autism.

Without meaning to disparage poor Liberati, it's not even clear that he understood how phase-3 trials come to be. The bar that must be passed to get from phase 1 to phase 3 is a significantly high, uninformed-public-imposed bar, specifically requiring an adequately large effect size for the potential treatment. Telling the public that an even higher bar should be set, resulting in aborting even more phase-1 treatments, or pre-phase-1 research, before these investigations have the chance to lead to the foundational understanding that can finally enable a solid and successful phase-3 treatment trial, is not a recipe to produce more actually relevant treatments.

Again, I'm not suggesting that there is not waste, or that the system can't be improved. Scientists, however, are desperate to improve their economy themselves - their lives and livelihoods depend on it. I'd never claim that they can't use some outside perspective or insights, but to suggest that by and large researchers, most of whom spend every waking minute trying to squeeze more productivity out of their resources, are fundamentally engaged in some kind of misappropriation of funds, is disingenuity at its finest.

Rabble-rousing of this variety is exactly how we end up spending 20 years of humanity's lifetime chasing utterly irrelevant non-causes of disease at the hands of pundits who are more concerned about their publicity and byline, than about the community they pretend to serve.

You can choose to be part of the problem, or you can choose to be part of the solution. The solution doesn't lie in the direction of misleading an uninformed public into believing that they have critical insights into optimizing solutions for a problem that is far more complex than even its most deeply invested practitioners fully understand.

#12 Joe Rojas-Burke (Guest, 2014-03-01 16:29:58)
Quoting Guest:
Unfortunately, you have no idea whether that research was wastefully repetitive, and neither do I...


In the case of the unexpected lung complications at Hopkins, the investigators wanted to study the effect of hexamethonium on bronchodilatation in healthy volunteers. They did not systematically review the published literature, and so were not prepared to respond to the pulmonary complications, and the young woman died. A simple search of MEDLINE and the Cochrane Controlled Trials Register would have sufficed. Five published reports referred to pulmonary complications of hexamethonium in the title!

#13 Joe Rojas-Burke (Guest, 2014-03-01 16:57:32)
Quoting Guest:
Unfortunately, you have no idea whether that research was wastefully repetitive, and neither do I...


And in the case of the six healthy volunteers harmed in the Phase I trial of TGN1412, the problem was suppressed knowledge.

The dangerous reactions to the antibody suffered by the six could have been avoided if the results of a previous unpublished trial of a similar antibody had been known.

More: http://webarchive.nationalarchives.gov.uk/20130107105354/http://www.dh.gov.uk/en/Publicationsandstatistics/Publications/PublicationsPolicyAndGuidance/DH_063117

#14 RE: Speeding up science (Guest, 2014-03-01 21:55:28)
Thank you for this wonderful summary of the issues plaguing medical research. I work for a rare disease patient advocacy org and participate in the same network you referenced that is doing research on Smith-Lemli-Opitz syndrome (totally different disease and research consortium, but under the same NIH Rare Diseases Clinical Research Network (RDCRN) umbrella). There are more than 80 patient group representatives in this network–all rare diseases–and your comments mirror almost exactly the ongoing concerns and frustrations we have.

The RDCRN was established, in part, to provide a mechanism for patients to have more say in research planning and execution. A major focus of the program was to strengthen the working relationship between patient groups and researchers. It has been quite successful at this. However, aligning the goals of researchers and patients is of little value when grant reviewers are still stuck in the past and fear any innovative or ‘out of the box’ approach to research. The NIH leadership has been great, the individual institutes and their reviewers--a very mixed bag, with risk aversion for anything outside of standard practice being the overriding consideration. This is especially challenging for rare diseases where the existing research models may not be ideal or even feasible.

The upshot of this disconnect is a focus on what our PI calls ‘grantsmanship.’ If you want any funding for your disorder at all, it becomes more important to write the application that reviewers want than to write the one you actually need. This is, of course, extremely frustrating when each research day and each research dollar is a limited and precious commodity and actual lives are at stake. It is a topic of conversation with nearly every patient advocacy leader in the rare disease arena that I’ve met (lots of them), many of whom are frustrated at repeated studies of already well-documented phenomena because these studies are 'fundable' when other studies are not. (This is, I assume, what you are talking about, and not the replication of clinical study findings for validation, which we all understand is an important tenet of the scientific process.) Ask any leader of a rare disease research advocacy group devoted to neurological disorders how they feel about more money being spent on cognitive impairment study number 27 (or 28 or 29) and you will get a good feel for how this is perceived in the real world!

I am sharing your article with my board and with other colleagues. Thanks again for laying this all out so nicely!

#15 re: Suppressed Knowledge (Guest, 2014-03-02 01:40:31)
Quoting Guest:
Quoting Guest:
Unfortunately, you have no idea whether that research was wastefully repetitive, and neither do I...


And in the case of the six healthy volunteers harmed in the Phase I trial of TGN1412, the problem was suppressed knowledge.

The dangerous reactions to the antibody suffered by the six could have been avoided if the results of a previous unpublished trial of a similar antibody had been known.

More: http://webarchive.nationalarchives.gov.uk/20130107105354/http://www.dh.gov.uk/en/Publicationsandstatistics/Publications/PublicationsPolicyAndGuidance/DH_063117


"Suppressed Knowledge" -- more inflammatory, pejorative, rabble-rousing language I see.

Have you even read these things you're referencing? The "suppressed knowledge" is an anecdotal reference, to a single-subject reaction, versus an antibody that is "similar" at a level just barely more significant than aspirin and viagra (similar, because both are pills…)

Several of the post-TGN1412 investigations make the point that TGN1412 had a completely novel mechanism of action, and that there was no prior evidence to suggest any type of problems. These conclusions were reached even with testimony from the person responsible for that "suppressed" 1994 study. This is probably because the thing that's most "similar" to the 1994 antibody is the adverse reactions.

The likelihood that anyone would have guessed that a monoclonal T-cell agonist, would produce adverse reactions similar to a tri-specific antibody against CD3/CD2/CD28 is miniscule. The only reason it's possible to draw some reasonable parallels today, is in hindsight.

The Lancet's attribution of this tragedy to the unavailability of the singular 1994 Hamblin result comes from an (I assume highly peer-reviewed and known for its medical research integrity) blog post in the UK Financial Times. Please…

I won't suggest that Hamblin's results shouldn't have been published (though publishing results from an N of 1, really isn't valuable), but if they had been, it wouldn't have made one whit of difference in the outcome. If the millions and millions of other N=1 observations that occur across the medical establishment daily, were also published, as I assume you'd suggest should be done, we'd be so swamped in irrelevant data, that other, actually preventable tragedies would be the norm, rather than the exception.
Guest
0 #16 re: Repeating Research — Guest 2014-03-02 03:10:43
Quoting Guest:
...aligning the goals of researchers and patients is of little value when grant reviewers are still stuck in the past and fear any innovative or ‘out of the box’ approach to research. The NIH leadership has been great, the individual institutes and their reviewers--a very mixed bag, with risk aversion for anything outside of standard practice being the overriding consideration.
...
many of whom are frustrated at repeated studies of already well-documented phenomena because these studies are 'fundable' when other studies are not. (This is, I assume, what you are talking about and not the replication of clinical study findings for validation, which we all understand is an important tenet of the scientific process). Ask any leader of a rare disease research advocacy group devoted to neurological disorders how they feel about more money being spent on cognitive impairment study number 27


While I have some different opinions on whether the NIH leadership is "great", you're certainly right that the reviewers and funding mechanisms appear to be a large part of the problem. Given that a lot of what they reject is utter garbage, the system is at least partly "working", and it's not entirely clear how things would change if the system were perturbed, but it's hard to imagine that it can't be optimized somehow.

With respect to repeating studies: while I understand, at least to some extent, the frustration of slow progress and seemingly repetitious work (I realize that fully comprehending it from the patient's perspective is impossible), please realize that a lot of what looks like repetition really is not.

Safe, careful progress that minimizes risk to the potential treatment population requires testing of many variables, often many more than are understood. No one doing the research wants to waste time or resources any more than you want them wasted, but they also don't want to accidentally overlook some unforeseen complication that could end up costing lives. Most often it's quite difficult, from the science side, to adequately explain why some additional study testing another combination of variables isn't "just repeating research". Explaining the nuanced details of experimental protocols well enough that even other scientists working on the same problem completely understand them is quite challenging. Conveying them to the non-scientist sometimes seems impossible, but that doesn't make them any less important to determining the effectiveness of a treatment or protecting the safety of the subjects.

Attempts at complete repetition frequently reveal that science is far from able to fully comprehend the complexity of disease processes: what should have been a literal repetition produces dramatically different results. Given that, it's hard to be sure that repetition is ever really needless. While there are undoubtedly cases where it could be reduced, it's usually only possible to say in hindsight that an experiment was unneeded.
Guest
-1 #17 re: Repeating Research — Guest 2014-03-02 04:56:19
Quoting Guest:
Quoting Guest:
Unfortunately, you have no idea whether that research was wastefully repetitive, and neither do I...


In the case of the unexpected lung complications at Hopkins, the investigators wanted to study the effect of hexamethonium on bronchodilatation in healthy volunteers. They did not systematically review the published literature, and so were not prepared to respond to the pulmonary complications and the young woman died. A simple search of MEDLINE and the Cochrane Controlled Trials Register would have sufficed. Five published reports referred to pulmonary complications of hexamethonium in the title!


Seriously - do you read beyond the titles of the things you're quoting? While I generally respect the Lancet, they clearly have an agenda here and aren't providing even-handed coverage.

In the Hopkins case, the researchers weren't trying to study the effects of hexamethonium; they were trying to induce bronchoconstriction and study how healthy subjects compensated. Bronchoconstriction is one of the side effects of hexamethonium.

It's hardly clever to claim that the death would have been prevented if they'd bothered to look up the drug's side effects, when the study was about one of the side-effects…

It's doubly disingenuous to suggest that being aware of the other side effects would have prevented the death, as the post-mortem investigation appears to have concluded that the subject didn't die from any of the known side-effects of hexamethonium (I don't have time to review all of the relevant literature here, so a deeper read may reveal other evidence).

In spite of these issues, I find it hard to believe that this death could not have been prevented.

Unfortunately, the cause here may be darker and more sinister than simple waste and sloppiness. It's unconscionable that an agent was administered specifically to induce a negative side effect without an overwhelming body of evidence that the long-term consequences of that side effect were known, and without an iron-clad protocol in place to limit the damage if something went wrong. While I am too far from the data to claim malfeasance, the number of places where ethics, responsibility, and assorted checks-and-balances appear to have been either circumvented or ignored in this case seems much greater than likely to have happened by chance.
Guest
0 #18 Joe Rojas-Burke — Guest 2014-03-02 09:47:48
Quoting Guest:
It's hardly clever to claim that the death would have been prevented if they'd bothered to look up the drug's side effects, when the study was about one of the side-effects…


The study wasn't about "one of the side effects" of hexamethonium on the lungs. The researchers, in fact, had incorrectly concluded that inhalation would be non-toxic to the lungs of healthy volunteers. The study was attempting to shed light on the nerve signaling that protects the airways from obstruction, and how a failure of that signaling might be involved in asthma. The researchers chose hexamethonium because it blocks neurotransmission by the nerves thought to be involved. It was given by inhalation to the volunteer before she was given another inhaled drug that causes airway constriction. She developed acute respiratory distress syndrome, an adverse reaction to hexamethonium that had been identified in earlier studies.

After the death, an internal review committee at Johns Hopkins criticized the IRB for approving the study without requiring “more safety evidence for a non-FDA approved drug no longer in clinical use, and administered by a non-standard route.” The committee criticized the study leader for not searching “more comprehensively” for previous reports that hexamethonium has pulmonary toxicity. (Report of internal investigation into the death of a volunteer research subject. Johns Hopkins Medicine, July 16, 2001)

The Office for Human Research Protections concluded in its review of the events that led to the volunteer's death that the study investigators and the Johns Hopkins institutional review board "failed to obtain published literature about the known association between hexamethonium and lung toxicity. Such data was readily available via routine MEDLINE and Internet database searches, as well as recent textbooks on pathology of the lung." (OHRP July 19, 2001)

Dr Frederick Wolff, a professor emeritus at the George Washington School of Medicine, told the Baltimore Sun it was “foolish” and “lazy” that the investigator and the Hopkins review board failed to peruse the pre-1960s medical literature warning of lung damage caused by inhaling hexamethonium. (J Med Ethics, 2002, J Savulescu, M Spriggs)

The OHRP and an external review panel convened by Johns Hopkins found many other lapses and the latter found signs of an arrogant disregard for safety: "Our interviews suggest that many people at Hopkins believe that oversight and regulatory processes are a barrier to research and are to be reduced to the minimum rather than their serving as an important safeguard.” (Report of Johns Hopkins University External Review Committee, August 8, 2001.)

Hopkins subsequently doubled spending on IRB activities and undertook other initiatives to improve protection of research subjects and change the culture among its biomedical research scientists.
Guest
0 #19 Joe Rojas-Burke — Guest 2014-03-02 22:03:39
Quoting Guest:
The likelihood that anyone would have guessed that a monoclonal T-cell agonist would produce adverse reactions similar to a tri-specific antibody against CD3/CD2/CD28 is minuscule...


The Expert Scientific Group reviewing what went wrong in the TGN1412 trial concluded that access to the unpublished clinical trial data might indeed have made a difference. Among the expert group's recommendations:

"Developers of medicines, research funding bodies and regulatory authorities should expedite the collection of information from unpublished pre-clinical studies relevant to the safety of humans."

The researchers handling the TGN1412 trial had surmised that the risk of a dangerous “cytokine storm” mediated by IL-2 signalling was relatively low, based on the results of testing in nonhuman primates.

Terry Hamblin had supplied the expert group with the unpublished details of a 1994 first-in-human trial of a tri-specific antibody that caused an unexpected and dangerous IL-2 mediated adverse reaction – and at a dose estimated to be one-sixth of the dose by weight used in the TGN1412 trial.

“Members of the Expert Scientific Group considered that if information regarding this kind of unpublished trial had been in the public domain TeGenero [the biotech firm sponsoring the trial] may have been able to learn from it and potentially avoid the reaction seen on administration of TGN 1412. This reinforced the position of the Expert Scientific Group that a database should be created where experiences from first-in-man trials could be collated." (Expert Scientific Group on Phase One Clinical Trials Final Report, Nov 30, 2006)

At the time of the expert group’s review in 2006, the sponsor of the 1994 study still had not yet publicly disclosed the full data.

Had it been accessible, it’s reasonable to conclude that the researchers would have at least designed a safer trial, for instance, using lower starting doses and allowing more than 10 minutes to check for adverse events before giving successive doses to other volunteers.

“Ten minutes is simply too short an interval between dosing to observe for infusion-related adverse events, and a longer period of observation for each subject would have saved the other five volunteers in the TGN1412 trial from suffering the same fate as the first subject who received the drug,” a 2008 scholarly review by E. William St. Clair concluded. (J Clin Invest. Apr 1, 2008; 118(4): 1344–1347)

“Without a doubt, TGN1412 should have been considered a high-risk drug, mandating a much lower starting dose than was selected for the first-in-human trial,” St. Clair wrote.