Science Swindle
28 September 2023
“It doesn’t matter how beautiful your theory is, it doesn’t matter how smart you are. If it doesn’t agree with experiment, it’s wrong.” – Richard P. Feynman, recipient of the Nobel Prize in Physics in 1965
Flawed Peer Review
Peer review is thought of as an essential part of proper scientific inquiry. But what is it, and is it reliable? Peer review, as the name suggests, is the process whereby scientists in the same field as the author of a paper (peers) review that paper, scrutinizing the methodology, presentation, data, conclusions, and so on of the research it describes. It is of the utmost importance to get the basics right, so asking “What is a peer?” and “What is a review?” are fundamental questions. The process and its outcomes change depending on how these questions are answered. If a peer is a scientist doing similar research, then that peer is also a direct competitor: very well suited to understand the research, but the incentive to be fair may be lost. Similarly, if the peer merely works in the same field, competition may still be an issue. A review is also subject to wide interpretation. Where some reviewers are satisfied with a general “This looks fine to me,” others will pore over all the data, methodology, references, and so forth. The classic peer review system goes something like this: an editor looks at a paper, then sends it to two reviewers who the editor thinks know something about the subject, and if the reviewers deem the paper worthy, it gets published. If the reviewers disagree, a third reviewer is enlisted to cast the deciding vote.
So what is all this work supposed to accomplish? If it is to select the best papers for publication, it turns out that an experienced editor alone and the full peer review process show little difference in the papers they select. If it is to improve the quality of papers that get funding, there is also little evidence to support this. The process might be useful for detecting errors or fraud, and yet it picks up fraud only by chance; there is no reliable method to detect it. The process is based on trust, which is peculiar, even ironic: science resting on belief.
The process has several defects. It is slow and expensive, even though reviewers are often not paid; they still have to spend time reviewing, time that could go to other productive work, such as original research. The process is not consistent either. As the contents and subject of a paper can be complex, different people take different perspectives on its importance, quality, advantages, and limitations. The evidence suggests that reviewers agree on whether to publish a paper only slightly more often than would be expected by chance. The process is also open to abuse: one can steal ideas and pretend they are one's own, or write unfair reviews to slow down or block a competitor. Bias is another issue. There is evidence of bias against authors from less prestigious institutions. There is even a strong bias against studies showing that an intervention doesn't work, so-called negative studies. Bias can also be introduced by a variety of mechanisms, for example publishing many positive trials while not publishing negative trials, reinterpreting data before it is sent to regulatory agencies, incongruence between the results and the conclusions, ghostwriting, using so-called seeding trials, or predatory publishing.
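To make “agreement beyond chance” concrete, the statistic usually behind this kind of claim is Cohen's kappa, which rescales raw agreement so that 0 means chance-level and 1 means perfect agreement. Below is a minimal sketch in Python; the accept/reject decisions are hypothetical, purely for illustration:

```python
# Cohen's kappa: inter-rater agreement corrected for chance.
# kappa = (p_observed - p_chance) / (1 - p_chance)

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    # Observed agreement: fraction of papers where both reviewers agree.
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal label frequencies.
    p_chance = sum(
        (rater_a.count(label) / n) * (rater_b.count(label) / n)
        for label in labels
    )
    return (p_obs - p_chance) / (1 - p_chance)

# Hypothetical accept/reject decisions for ten submissions.
reviewer_1 = ["accept", "reject", "accept", "accept", "reject",
              "accept", "reject", "accept", "accept", "reject"]
reviewer_2 = ["accept", "accept", "accept", "reject", "reject",
              "accept", "accept", "accept", "reject", "reject"]

print(cohens_kappa(reviewer_1, reviewer_2))  # ~0.17: barely above chance
```

Here the reviewers agree on 6 of 10 papers, which sounds decent until the agreement expected by chance alone (5.2 of 10, given how often each says “accept”) is subtracted out.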
Predatory Publishing
Investigative journalists Svea, Till and Suggy investigated predatory publishing. At the DEF CON 26 convention, in 2018, they presented their findings in a talk titled “Inside the Fake Science Factory”. Predatory publishing is the practice of organizing pseudo-academic conferences and journals to give badly done or nonsense studies the appearance of proper scientific method and credibility, while bringing in millions of dollars for the companies that organize them. The process is quite simple: you make a submission to one of these organizations, they give some superficial comments (if any), you make a payment, and your submission gets accepted.
Some examples of these organizations are: WASET (World Academy of Science, Engineering and Technology); the OMICS Group; IOSR Journals (International Organization of Scientific Research); Science Publications; and SCIENCE DOMAIN International. Among WASET and OMICS publications they found papers from elite universities like Stanford and Yale, and from top institutions like the Mayo Clinic.
The reasons for getting caught up in this are varied. It can be as simple as being scammed. It can be that researchers cave under the pressure of having to publish or perish. Or researchers simply take advantage of predatory publishing to further their careers. Author piling is one way to take advantage of it: the practice of putting an unusually large number of authors on a single paper. Copy-pasting is another. Some of these papers have received grant money, so taxpayers are paying for them. And they are scientifically questionable, likely because they were written in a very short time.
Big Tobacco and Big Pharma were caught using this practice, but so were institutions responsible for critical infrastructure, such as nuclear safety. These studies then get cited elsewhere, as was found for the German Federal Institute for Risk Assessment and in patents.
You can watch their full talk, after which they also present their documentary, via the link in Further Reading.
Influencing Evidence
Industry-funded research has been found to be four times more likely to produce positive results than research funded by other means. Pharmaceutical companies are notable here, as they have been able to influence and bias clinical research at every stage of its production. Evidence that measures against this have stopped the biasing is lacking, and it is unclear whether they have even slowed it down.
One tactic is the use of inappropriate doses, dosing intervals, and comparators. Companies use a low dose of a comparator drug to make their own drug seem more effective, or a high dose of the comparator to elicit more side effects. An unusually rapid and substantial increase in the dose is another way to elicit more side effects from the comparator drug. This misuse of doses violates the principle of equipoise, which holds that patients should be enrolled in a trial only when there is genuine uncertainty about which treatment is more beneficial; a comparison rigged by dosing removes that uncertainty. It is also unethical because the inappropriate dose can expose patients to harm or suffering.
Another tactic is known as selective publication: companies do not publish unfavorable results and publish favorable results more prominently, often in multiple publications. This biases the assessment of a treatment, since adding the unpublished studies to a meta-analysis can change its result. This happened with trials of selective serotonin reuptake inhibitors (SSRIs): analysis of the published literature revealed a positive benefit-to-risk profile, but this changed when the unpublished literature was accounted for.
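To illustrate numerically how unpublished studies can shift a meta-analysis, here is a minimal fixed-effect (inverse-variance) pooling sketch. The effect sizes and standard errors are made up for illustration; they are not from the SSRI trials:

```python
# Fixed-effect (inverse-variance) meta-analysis:
# pooled effect = sum(w_i * e_i) / sum(w_i), with weights w_i = 1 / se_i^2.

def pooled_effect(effects, std_errors):
    weights = [1 / se**2 for se in std_errors]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Hypothetical published trials: all report a benefit (effect, std. error).
published = [(0.50, 0.15), (0.40, 0.20), (0.45, 0.18)]
# Hypothetical unpublished trials: near-zero or negative effects.
unpublished = [(0.05, 0.15), (-0.10, 0.20)]

pub_effects, pub_ses = zip(*published)
all_effects, all_ses = zip(*(published + unpublished))

print(f"published only: {pooled_effect(pub_effects, pub_ses):+.2f}")  # +0.46
print(f"all trials:     {pooled_effect(all_effects, all_ses):+.2f}")  # +0.27
```

Reading only the published literature, the pooled benefit looks nearly twice as large as it does once the file-drawer trials are included.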
Reinterpreting data that gets submitted to regulatory agencies also happens. In one case, 12 antidepressants were investigated, and the effect size in the published literature was greater than the effect size reported to the U.S. Food and Drug Administration (FDA). As a result, it appears to clinicians reading the literature that the drug is more effective than it likely is. In another case, looking at trials submitted to the FDA, many remained unpublished five years after FDA approval of the drug; the ones that were published were much more likely to show a positive result.
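For readers unfamiliar with the term, “effect size” in this context is typically a standardized mean difference such as Cohen's d: the difference between the treatment and control group means divided by their pooled standard deviation. A small sketch with hypothetical trial numbers:

```python
import math

# Cohen's d: standardized mean difference between treatment and control.
def cohens_d(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    # Pooled standard deviation across both groups.
    pooled_sd = math.sqrt(
        ((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2)
    )
    return (mean_t - mean_c) / pooled_sd

# Hypothetical symptom-score improvements (higher = more improvement).
d = cohens_d(mean_t=8.0, sd_t=6.0, n_t=100, mean_c=5.5, sd_c=6.0, n_c=100)
print(round(d, 2))  # 0.42
```

A published paper reporting d = 0.42 while the data filed with the regulator supports a smaller value is exactly the kind of discrepancy described above.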
Authors may distort conclusions, presenting one that is more favorable than the results warrant. Meta-analyses also appear to produce more favorable conclusions when they are industry-supported, even when the results themselves are not more favorable. Conclusions may also be more favorable when there is a conflict of interest (COI). This association has been found to persist after controlling for sample size, study design, and country of the primary authors. Methodological quality, statistical power, type of experimental intervention, type of control intervention, and medical specialty have been found not to explain the association between COI and a more favorable conclusion. With industry funding, reporting a positive result is more likely, while in the absence of industry funding there is no association between COI and conclusion.
When writers are specifically recruited to take data from trials and write a favorable article, it is called ghostwriting. A well-known academic or doctor is then recruited to masquerade as the author and is presented as such, with no mention of the ghostwriter's role. There are cases where a first draft of an article was written in advance, with the authors' names listed as ‘to be determined’. Ghostwriting is employed not only to ensure outcomes favorable to the sponsoring company but also to cast doubt on unfavorable results and research.
Doctors involved in clinical trials of a medicine are known to increase their use of that particular drug. Trials can be designed to exploit this, so-called seeding trials, whose only purpose is to get doctors used to a product so that it becomes a regular part of their prescribing.
The Physicist’s Picture Of Nature
On June 25, 2010, the magazine Scientific American republished an article by Paul Dirac from its May 1963 issue, titled “The Evolution of the Physicist's Picture of Nature”. In the article Dirac describes the development of general physical theory, first looking back at the past and later making suggestions for the future.
One point Dirac seems to emphasize is that the beauty of one's equations is more important than having them fit the experiments. He really gets this point going when he writes about the advances in quantum theory in 1925. Werner Heisenberg and Erwin Schrödinger independently advanced quantum theory, Heisenberg making his contribution first and Schrödinger soon after. Heisenberg stayed close to the experimental evidence that came from spectra, fitting the experimental information into a scheme now known as matrix mechanics. “All the experimental data of spectroscopy fitted beautifully into the scheme of matrix mechanics, and this led to quite a different picture of the atomic world,” writes Dirac. Schrödinger did not stay close to the experimental data. He extended ideas of de Broglie to formulate a “very beautiful equation”. This is in stark contrast to what Richard Feynman is known for having said: if your theory doesn't agree with experiment, it is wrong. In the end, something new had to be invented to make Schrödinger's equation fit the results of experiments, namely electron spin. This raises the question: if you have to invent something new to make your equation fit, is it still as beautiful as you claimed before? Isn't the point of science to describe the reality we live in, so as to better understand it?
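For reference, the “very beautiful equation” in question is Schrödinger's wave equation; a common modern time-dependent form for a single non-relativistic particle is:

```latex
% Time-dependent Schrödinger equation: \psi is the wave function,
% \hbar the reduced Planck constant, m the particle's mass, and
% V(\mathbf{x}, t) the potential.
i\hbar \frac{\partial \psi(\mathbf{x}, t)}{\partial t}
  = -\frac{\hbar^2}{2m} \nabla^2 \psi(\mathbf{x}, t)
  + V(\mathbf{x}, t)\, \psi(\mathbf{x}, t)
```

Nothing in this equation knows about spin; that had to be added afterwards to match the spectroscopic data.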
Further Reading
Dirac, Paul. The Evolution of the Physicist’s Picture of Nature. Scientific American, 2010. https://blogs.scientificamerican.com/guest-blog/the-evolution-of-the-physicists-picture-of-nature/
Lexchin, J. Those Who Have the Gold Make the Evidence: How the Pharmaceutical Industry Biases the Outcomes of Clinical Trials of Medications. Science and Engineering Ethics, Vol. 18, pp. 247–261, 2012. https://doi.org/10.1007/s11948-011-9265-3
Smith, Richard. Peer review: a flawed process at the heart of science and journals. Journal of the Royal Society of Medicine, Vol. 99, pp. 178–182, 2006. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1420798/
Svea; Till; Suggy. Inside the Fake Science Factory. YouTube: DEFCONConference, DEF CON 26, 2018. https://www.youtube.com/watch?v=ras_VYgA77Q