Reproducibility: Don’t cry wolf

July 16, 2021

Major discoveries in particle astrophysics and cosmology have been announced in the past few years. The list includes neutrinos that travel faster than light; dark-matter particles revealed through γ-rays; X-rays scattered from nuclei in underground detectors; and even evidence in the cosmic microwave background for gravitational waves caused by the rapid inflation of the early Universe. Most of these turned out to be false alarms, and in my view that is the likely fate of the rest.

There are consequences to disseminating exceptional results to peers and the public before they have been reviewed, or in the knowledge that better data are yet to come. Colleagues who were once excited now shrug at ‘another dark-matter candidate’. The field has cried wolf too many times and has lost credibility. A colleague told me that grant-funding panels are growing wary of funding searches for astrophysical signals of dark-matter particles.

I am also concerned that false discoveries are undermining public confidence in science. As cosmic claims come and go – not to mention endless speculation about hypothetical concepts such as parallel and holographic universes – why should anyone believe any scientific result?

Several trends have brought us to this point. Intense competition, the increased use of public data sets, and the online posting of draft papers without proper refereeing have eroded the traditional standards for making extraordinary claims.

Particle physics and astrophysics pioneered the open release of data and publications more than two decades ago; other disciplines are following their lead. The scientific community must now address these habits to ensure that seductive reports of false discoveries do not drown out more sober accounts of real scientific breakthroughs.

Changing practices

Three changes in the way science is done and reported are fuelling this surge of false discoveries.

First, there is a decline in statistical standards. Extraordinary claims demand extraordinary evidence. In particle physics, the usual threshold for a discovery is ‘5 sigma’: a signal five standard deviations (sigma) above the expected background, which translates to roughly a 1-in-3.5-million chance that the result is a random fluctuation.
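The arithmetic behind the 5-sigma convention is a one-line calculation. A minimal sketch (not from the article itself) that converts a significance level in sigma into the corresponding one-sided tail probability of a Gaussian distribution:

```python
import math

def sigma_to_p(n_sigma: float) -> float:
    """One-sided probability that pure Gaussian noise fluctuates above n_sigma."""
    # Tail probability of a standard normal variable, via the
    # complementary error function: p = erfc(n / sqrt(2)) / 2
    return 0.5 * math.erfc(n_sigma / math.sqrt(2.0))

p5 = sigma_to_p(5.0)
print(f"5-sigma p-value: {p5:.3e}")   # ~2.9e-07, i.e. about 1 in 3.5 million
print(f"3-sigma p-value: {sigma_to_p(3.0):.3e}")
```

Running this shows why the distinction between a 3–4-sigma ‘hint’ and a 5-sigma discovery matters: a 3-sigma fluctuation occurs by chance about once in a thousand trials, a 5-sigma one about once in 3.5 million.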

But 5-sigma claims are becoming rarer as scientists rush to announce exciting but tentative results. The official announcement of the discovery of the Higgs boson at the Large Hadron Collider at CERN, Europe’s particle-physics laboratory near Geneva, Switzerland, in July 2012 was preceded by press releases about weak but suggestive signals, even though there was no competition.

Scientists may soften the wording in their papers from ‘discovery’ to ‘evidence’ or ‘hint’, but this has little effect on how the results are received. Take the latest dark-matter claim. On 8 March, astronomers posted a preprint of a paper on the arXiv repository, and their university issued a press release announcing that the authors had found a ‘tantalizing’ signal of γ-rays coming from a recently discovered dwarf-galaxy companion of the Milky Way, which is reportedly full of dark matter (ref. 2).

The γ-ray signal, detected in images from the Large Area Telescope (LAT) on the Fermi γ-ray satellite, appears consistent with the high-energy radiation produced when dark-matter particles annihilate. But the signal was only 3–4 times the noise level – inconclusive, as the authors acknowledged.

Another paper, posted on arXiv the same day, contradicted this finding. A more thorough reanalysis of the same data by the Fermi-LAT instrument team (ref. 3) – using updated software that is 30–40% more sensitive – found no signal above the noise. The authors of the first paper knew that the software upgrade was imminent and would confirm or refute their claim, but they did not wait for it.

Detecting a noise fluctuation is nothing new, but the possibility that the ‘detection’ could be dark matter meant that it was widely reported in the media. Even balanced reporting plants the claim in the public mind: the non-detection was mentioned in the account in The New York Times (ref. 4), but the hint of a discovery is what propelled the story.

Second, the greater use of public data sets increases the risk that some researchers will make spurious detections at the edge of an instrument’s sensitivity. Putting more minds to work mining the data is welcome. But analysis is difficult without the insider knowledge of the people who built and calibrated the instrument.

Such was the case with the Fermi-LAT dark-matter claim. The released Fermi-LAT data – public since 2009 – are the product of complex algorithms and calibrations that turn the detectors’ electronic signals into quantities that any physicist can, in principle, analyse. The instrument’s builders, however, have extra insight into noise sources near the detection threshold.

The risk that someone will misuse the data also increases as more people gain access to them.
