Standing on the Shoulders of Dwarfs

Is science still the giant we think it is?
Science and society
Philosophy of science

Nanos gigantum humeris insidentes. But what if the giants are shrinking?

TL;DR

The original metaphor (often attributed to Newton, but coined by Bernard of Chartres in the 12th century) was about dwarfs sitting on the shoulders of giants, seeing further not because of their own merit but because of the height beneath them. Today, that metaphor deserves to be inverted. Science is increasingly done by underfunded, career-pressured researchers producing small, fragile results - dwarfs standing on the shoulders of other dwarfs. The giants were never the scientists themselves; they were the conditions that allowed science to happen: sustained funding, institutional patience, intellectual freedom, and time. Those conditions are eroding. Meanwhile, a flood of AI-assisted, formulaic, and poorly conceived papers is polluting the scientific literature, not because AI is inherently bad, but because the incentive structures of modern science reward volume over substance. Science is under threat, not from external enemies, but from within.

The Full Story

The Original Dwarfs

Bernard of Chartres, sometime around 1120, told his students they were like dwarfs perched on the shoulders of giants. The point was not modesty. It was a statement about intellectual inheritance: we see further because we stand on what came before, not because we are taller.

When Newton borrowed the phrase in 1675, writing to Robert Hooke, he may have been sincere, or he may have been subtly mocking Hooke, who was famously short. Either way, the metaphor stuck. It became the unofficial motto of science: progress by accumulation, each generation building on the last.

But what happens when the giants beneath us start to shrink?

The Myth of the Lone Genius

Popular culture loves the story of the brilliant mind who changes everything: Newton and the apple, Einstein and the patent office, Crick and Watson and the double helix. These stories are seductive because they suggest science advances through sparks of individual genius. The reality is far less cinematic.

Most scientific progress is slow, grinding, and incremental. It happens when hundreds of researchers, working on fragments of a problem, gradually push the boundary of what is known by a few millimetres. A technique is refined. A measurement becomes more precise. A confounding variable is identified. None of these make headlines.

More importantly, what the popular narrative omits is that the great leaps, when they do occur, are almost always enabled by technology, not the other way around. The microscope came before microbiology. The telescope came before modern astronomy. X-ray crystallography came before the structure of DNA. PCR came before genomics became feasible. Sequencing technology came before the Human Genome Project. CRISPR was a biological curiosity for years before it became a tool.

Science does not drive technology; technology drives science. The relationship is not symmetrical. And this has a consequence that is rarely acknowledged: when we underfund the infrastructure (the instruments, the computing, the technicians, the institutional support) we are not trimming the edges of science. We are cutting off its legs.

The Funding Desert

Across the developed world, public investment in research as a share of GDP has been in decline for decades. Governments that once treated science as a strategic priority now treat it as a discretionary expense, competing for scraps alongside everything else in austerity-era budgets. Grant terminations, funding freezes, and hiring caps have become routine. Early-career researchers increasingly leave academia, or leave their countries altogether, in search of conditions that allow them to do their work.

The damage is not linear. Science is not a factory where you reduce input and get proportionally less output. It is an ecosystem. Lose a generation of trained researchers, close a laboratory, cancel a longitudinal study, and you do not simply pause progress - you lose the capacity to resume it.

The giants, it turns out, were never the individual scientists. The giants were the institutions, the funding, the patience, the decades-long commitment to asking questions whose answers might not arrive within a single electoral cycle. Those giants are being starved.

The Flood

At the same time that science is being drained of funding, it is being drowned in literature.

The scientific literature is being flooded with formulaic, low-quality papers at an industrial scale. A 2025 analysis published in PLOS Biology documented an explosion of near-identical biomedical papers, all drawing on public health datasets like NHANES, all following the same template: pick a health condition, pick a single variable, report a correlation, publish. Papers using NHANES data increased from about 4,900 in 2023 to nearly 7,900 in 2024.

The rise has been linked to AI-assisted workflows and, more troublingly, to paper mills: commercial operations that mass-produce manuscripts for paying clients. These enterprises harvest public databases, apply standardised analytical pipelines, generate AI-written introductions and discussions, and sell the finished product with guaranteed authorship slots. One editor at Scientific Reports described receiving one or two such submissions per day.

The papers are not always easy to spot. They use real data. They follow standard formats. They pass plagiarism checks. But they are, in the most important sense, empty. They strip complex, multifactorial health conditions down to single-factor correlations, ignore confounders, avoid correction for multiple comparisons, and present trivial or implausible findings as if they were discoveries. The result is what one research team called “science fiction masquerading as science”.

By mid-2025, nine major public health datasets had been identified as targets of systematic exploitation, with a combined publication count exceeding 23,000 papers in a single year.

This is not a side effect. It is a symptom of a system that rewards quantity over quality, publication over understanding, output over insight.

Publish or Perish, and the Perishing of Knowledge

The phrase “publish or perish” has been a running joke in academia for decades. It is no longer funny. It has become the primary engine of a dysfunctional incentive structure that selects for volume rather than rigour.

Hiring, promotion, and funding decisions in most academic systems still rely heavily on publication counts, journal impact factors, and h-indices. This means that the rational career strategy for a young scientist is not to pursue the most important question they can think of, but to pursue the most publishable one. The difference is enormous.

A publishable question is one that can be answered quickly, with available data, using established methods, producing a result that is novel enough to pass peer review but not so surprising that it invites scepticism. An important question may require years of method development, produce null results, challenge existing frameworks, or simply demand patience. In the current system, the first is rewarded and the second is punished.

The consequences are predictable. Negative results go unreported. Methods sections shrink. Data and code are not shared. Exploratory analyses are repackaged as confirmatory. Effect sizes are inflated. And the literature fills with findings that are technically “significant” in the statistical sense but trivial in the epistemic one.
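The gap between statistical and epistemic significance is easy to demonstrate. The sketch below (a hypothetical illustration, not drawn from any of the papers discussed) screens a thousand pure-noise variables against a pure-noise outcome, exactly as a template-driven single-factor analysis would, and counts how many come out "significant" at p < 0.05 with no correction for multiple comparisons:

```python
import random
import math

random.seed(42)  # fixed seed so the run is reproducible

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def p_value(r, n):
    """Two-sided p-value for r, via the Fisher z-transform
    and a normal approximation (adequate for n this large)."""
    z = 0.5 * math.log((1 + r) / (1 - r)) * math.sqrt(n - 3)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

n_subjects, n_variables, alpha = 200, 1000, 0.05

# The "health outcome" is random noise with no real predictors.
outcome = [random.gauss(0, 1) for _ in range(n_subjects)]

false_hits = 0
for _ in range(n_variables):
    # Each candidate "risk factor" is also pure noise.
    predictor = [random.gauss(0, 1) for _ in range(n_subjects)]
    r = pearson_r(predictor, outcome)
    if p_value(r, n_subjects) < alpha:
        false_hits += 1

print(f"{false_hits} of {n_variables} noise variables are "
      f"'significant' at p < {alpha}")
```

With no correction, roughly five percent of the tests (on the order of fifty out of a thousand) clear the threshold by chance alone. Each one would make a perfectly publishable-looking single-factor paper; none of them means anything.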

This is not a moral failure. It is a structural one. Scientists are responding rationally to irrational incentives. As the reproducibility crisis has shown, the problem is not that researchers are dishonest - it is that the system does not reward honesty.

AI as Accelerant, Not Cause

It is tempting to blame artificial intelligence for the flood of poor science. But AI is not the disease. It is the accelerant.

The underlying pathology (the pressure to publish, the profit-driven publishing model, the misuse of metrics, the erosion of editorial standards) existed long before large language models. What AI has done is make it cheaper and faster to produce the symptoms. A paper that once took weeks to write can now be drafted in hours. A dataset that once required domain expertise to analyse can now be processed by someone with no understanding of the underlying biology.

AI has also exposed a deeper problem: the peer review system was never designed to handle the current volume of submissions, and it was already struggling before the flood began. Reviewers are unpaid, overworked, and increasingly unable to distinguish genuine contributions from formulaic noise. Publishers, many of whom operate on a for-profit model funded by author fees, have limited incentive to reject papers that meet minimum formal requirements.

The Stockholm Declaration, drafted at the Royal Swedish Academy of Sciences in June 2025, called for systemic reform: a shift from commercial to scholar-led publishing, the end of publication-count-based evaluation, and a recommitment to evaluating research on its depth, rigour, and significance. Whether these reforms will take hold is an open question. The incentives pushing in the opposite direction are formidable.

What Society Loses

None of this stays inside the academy. Medical treatments are developed on the basis of published evidence. Policies are justified by reference to scientific findings. Textbooks are written from the literature. When any of these rest on results that cannot be reproduced, society inherits the error without knowing it.

Science earns public trust not by being always right, but by being self-correcting. That self-correction depends on transparency, replication, and critical scrutiny. All three are under strain.

When the literature becomes too large to read, too noisy to filter, and too fast to verify, self-correction breaks down. What remains is a system that produces the appearance of knowledge without the substance.

Dwarfs All the Way Down

Bernard of Chartres understood something that the popular version of the metaphor obscures. The dwarfs did not see further because they were clever. They saw further because the giants held them up. Take the giants away, and the dwarfs are just dwarfs.

The metaphor needs updating. We are no longer dwarfs on the shoulders of giants. We are dwarfs on the shoulders of dwarfs, each generation a little lower, each view a little narrower, each step a little less certain. And the irony is that we are congratulating ourselves on the number of steps we are taking, while failing to notice that we are walking downhill.

The BioLogical Footnote

Perhaps the deepest threat to science is not any one of these forces in isolation (the defunding, the publish-or-perish machinery, the AI-generated noise, the erosion of editorial gatekeeping) but their convergence. Each alone might be survivable. Together, they describe a system optimising for its own metrics rather than for understanding. And the uncomfortable question this raises is whether we would even notice if science quietly stopped working. The papers would keep appearing. The conferences would keep running. The metrics would keep climbing. Only the knowledge would stop accumulating. Science, in this scenario, would not die dramatically. It would simply become a performance of itself - all the rituals, none of the revelation.

To Explore Further

Standing on the shoulders of giants | Wikipedia

Low-quality papers are surging by exploiting public data sets and AI | Science (AAAS)

AI-Generated Scientific Papers: Crisis? What Crisis? | eNeuro (PMC)

The Reproducibility Crisis | The BioLogical Footnote
