
For other authors named Richard Harris, see the disambiguation page.

1 work, 99 members, 2 reviews

About the author

Richard Harris is one of the nation's most celebrated science journalists, covering science, medicine, and the environment for more than thirty years for NPR, and the winner of more than a dozen national awards. He lives in Washington, DC.

Works by Richard Harris


Common Knowledge

Other names: Harris, Richard F.
Gender: male


Reviews

Ulp. About the crisis of reproducibility in the medical field, which appears at least as bad as the crisis of reproducibility in behavioral psychology. I learned about “Eroom’s Law,” the opposite of Moore’s Law, which holds that there has been an exponential slowing in the rate of drug development since 1950; if it holds, we’ll be done in 2040. Lack of rigor in biomedical research is an important culprit.
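
To put a rough number on that slowdown, here is a minimal back-of-the-envelope sketch (my own illustration, not from the book), assuming the commonly cited formulation of Eroom's Law: the number of new drugs approved per billion dollars of R&D spending has halved roughly every nine years since 1950.

```python
# Rough sketch of Eroom's Law as the reviewer describes it, assuming the commonly
# cited halving time of about nine years (an assumed parameter, not from the book).
def drugs_per_billion_rnd(year, base_year=1950, base_rate=100.0, halving_years=9.0):
    """New drug approvals per billion R&D dollars, indexed to an arbitrary 1950 value of 100."""
    return base_rate * 0.5 ** ((year - base_year) / halving_years)

for year in (1950, 1980, 2010, 2040):
    print(year, round(drugs_per_billion_rnd(year), 2))
# 1950 100.0, 1980 9.92, 2010 0.98, 2040 0.1 -- by 2040 productivity is down to
# roughly a thousandth of its 1950 value, which is the sense of "we'll be done in 2040."
```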

Even if mice are good models (which they often aren’t), it turns out that cage position can affect the outcome of an experiment, given mice’s distaste for bright lights and open spaces. Harris quotes a scientist: “As you move from the bottom of the rack to the top of the rack, the animals are more anxious, more stressed-out, and more immune suppressed.” Also, “Mice are so afraid of [human] males that it actually induces analgesia,” numbing pain and screwing up studies. So mouse study results can vary hugely from lab to lab. But the bigger problem may be testing in mice at all, or testing only in one strain of animal. If you tested a new drug on white women aged 35 who all lived in one town with identical homes, husbands, diets, thermostats, and grandfathers, “that would instantly be recognized as a terrible experiment, ‘but that’s exactly how we do mouse work.’”

Harris is only moderately optimistic about small-molecule innovations. He quotes a scientist who argues that “evolution has created so many redundant systems that targeting a single pathway in a complex network will rarely work…. ‘We have evolved seventeen different biological mechanisms to avoid starving to death. Drugging one of those mechanisms isn’t going to do anything!’”

Cell experiments are troubling too, even when they’re properly identified. “The very act of propagating cells in the laboratory changes them profoundly,” and atmospheric oxygen in particular is really important because a lot of regulatory factors that affect tumor growth are oxygen regulated. “In fact, cell lines derived from all sorts of cancers end up looking much more like one another than they do the original tumors from which they came… ‘Some people say that HeLa is a new species,’ [a scientist] told me. ‘… The chromosomes are all rearranged… [I]t has made all these changes to adapt’ to the environment where it now makes its home.” Precision medicine can’t be developed until we deal with the fact that even molecules in a living body change when surgeons cut off the blood supply to the tissue they’re going to remove.

Here are a couple of statistical twists I hadn’t thought about, too. If you set your p-value threshold for significance at 0.05 and your result just clears it, then there’s almost a 50% chance that running the experiment again would give you a higher p-value, and therefore a result deemed insignificant, and almost a 50% chance that you’d get a lower one. To have a 95% chance that an experiment run a second time would still be statistically significant, a p-value threshold of 0.005 would be required. This can often be met, if the phenomenon at issue is real, by increasing the sample size by 60%; that’s expensive, but Harris argues pretty persuasively that it would be worth the costs. Another point: scientists too often confuse exploratory research with confirmatory research. Statistical tests used to confirm or disconfirm a hypothesis don’t work if you don’t have a hypothesis and are just fishing around for anything interesting or unexpected in the data. “It’s fine to report those findings as unexpected and exciting, but it’s just plain wrong to recast your results as a new hypothesis backed by evidence.”
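
Those two claims are easy to sanity-check. Here is a minimal sketch (my own, not from the book), assuming a plain two-sided z-test and, for the sample-size question, 90% power; both are assumptions on my part, since the review doesn't specify a test or power level.

```python
# Back-of-the-envelope check of the review's two p-value claims, using a two-sided
# z-test approximation and 90% power (both my assumptions; the book isn't specific).
from scipy.stats import norm

# 1) An original study lands right at p = 0.05. If the true effect equals the
#    observed effect, how often does an identical replication reach p < 0.05?
z_crit_05 = norm.ppf(1 - 0.05 / 2)                      # ~1.96
replication_prob = 1 - norm.cdf(z_crit_05 - z_crit_05)  # = P(Z > 0) = 0.50
print(f"chance the replication is also significant: {replication_prob:.2f}")

# 2) How much bigger a sample is needed to keep the same power when the
#    significance threshold is tightened from 0.05 to 0.005?
z_power = norm.ppf(0.90)
z_crit_005 = norm.ppf(1 - 0.005 / 2)                    # ~2.81
inflation = ((z_crit_005 + z_power) / (z_crit_05 + z_power)) ** 2
print(f"required sample-size factor: {inflation:.2f}")  # ~1.6, i.e. about 60% more subjects
```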

All is not lost. A federal law requiring scientists doing drug studies to declare endpoints in advance seems to have had significant effects: of 30 big studies done before the law, 57% showed a benefit. But after, only 8% confirmed the preannounced hypothesis.

Reproducibility is the key. Some responses to the crisis point out that failed attempts to reproduce certain results may be because the original lab did important things differently, but that’s part of the point: “if any tiny detail can derail an experiment, just how robust is the result? Nobody cares about an experiment that … requires conditions so exquisite that only the lab where it originated can repeat it.” Harris advocates (1) blinding (amazingly, not universal); (2) repeating basic experiments; (3) presenting all results rather than cherry-picking; (4) using positive and negative controls (running experiments that should succeed and fail, respectively, if the hypothesis is correct); (5) careful validation of the ingredients (which turns out to be a much bigger problem than I knew; for example, did you know that lots of cell lines labeled otherwise are actually HeLa, which is very good at taking over, and between 18% and 36% of cell experiments used misidentified cell lines?); and (6) using the right statistical tests.
rivkat | 1 other review | Jun 22, 2017
It seems like every other week a new study hits the news: Red wine cures cancer, coffee is terrible for you, taking vitamins is crucial for good health, red wine might actually cause cancer, caffeine in small amounts is good for you, vitamins are worthless. With this whirlpool of conflicting information coming rapid-fire into the public sphere, one could certainly forgive the average person if they stopped paying attention, or even started to doubt everything they hear from a scientific source.

In Rigor Mortis, former NPR science journalist Richard F. Harris seeks to illuminate the systemic problems that underlie this phenomenon. Especially in this political environment, such an undertaking is a double-edged sword. It would be too easy for someone to take the basic premise that there are structural problems within the field of medical research and leap wildly to the conclusion that science itself is deeply flawed. However, the current situation within the scientific community needs to be addressed. Improvement can only be achieved with honest admissions of fault, greater transparency, and dedication to change. In this regard, Harris’ book does the field more good than harm.

The current crisis has been labeled one of reproducibility. Flawed research, lack of standardized methods, and inadequate analysis, combined with the chaos of working within living systems, make it nigh impossible for one lab to successfully reproduce the results of another. The causes are multifaceted: lack of training in laboratory and statistical methods, the dog-eat-dog nature of research funding, and the push by universities to “publish, publish, publish” with more regard for quantity of work than quality. Right now, it pays far better to be first than to be right.

Harris’ book isn’t just a condemnation of the state of the field; he provides concrete adjustments and changes that can be made to improve the quality of research being done, and shares the stories of those within the field who are working towards those ends. The emphasis here is that we should not throw the baby out with the bathwater. As more and more researchers begin to deal honestly with the flaws of their research and seek solutions, the benefits for medical research, and for doctors and patients, will be profound.

A copy of this book was provided by the publisher in exchange for an honest review.
irregularreader | 1 other review | Apr 12, 2017


Associated authors

Nick Hurran (Director)

Statistics

Works: 1
Members: 99
Popularity: #191,538
Rating: 4.1
Reviews: 2
ISBNs: 256
Languages: 8
