Are scientists sabotaging intelligence research? Behind the myth lies a previously unacknowledged methodological flaw

18 Mar 2026 · News · Fresh studies · For practitioners


In recent years, claims have occasionally circulated on social media that scientists are “sabotaging” research on intelligence because it is a highly heritable trait with the potential to lead to discrimination. These claims rest, among other things, on a 10-year-old study by German scientists on a so-called “reverse publication effect” in research on the relationship between intelligence and school grades. In a recently published article in the journal Intelligence, Hynek Cígler of the Psychology Research Institute at Masaryk University’s Faculty of Social Studies demonstrated that the apparent reverse effect is in fact an artifact of a previously undescribed methodological shortcoming in certain meta-analyses (i.e., studies that synthesize the results of dozens or hundreds of individual source studies).

Whatever phenomenon we study, the observed relationship between two variables varies from study to study due to chance alone. The smaller the sample, the larger the role of chance; small studies therefore contain more noise and report effects that land further from the true effect size. For the same reason, small studies are less likely to observe a “statistically significant effect” when testing hypotheses, that is, to support the existence of the phenomenon under investigation. In other words, investigating weak effects requires large research samples, because small samples are more likely to yield a false-negative result.
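A minimal simulation makes this concrete (the code and the assumed true correlation of 0.5 are illustrative choices of ours, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(42)
TRUE_R = 0.5  # assumed true correlation; any value would make the point

def one_study(n: int) -> float:
    """Simulate one study of size n and return its observed correlation."""
    cov = [[1.0, TRUE_R], [TRUE_R, 1.0]]
    x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
    return np.corrcoef(x, y)[0, 1]

for n in (20, 100, 1000):
    rs = np.array([one_study(n) for _ in range(2000)])
    print(f"n = {n:4d}   mean r = {rs.mean():.3f}   spread (SD) of r = {rs.std():.3f}")
# The spread shrinks roughly as 1/sqrt(n): small studies routinely land
# far from the true effect, so they are also more likely to miss it.
```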

And this is precisely where so-called publication bias, the “file drawer” effect described by Rosenthal as early as 1979, comes into play. If a researcher fails to find support for their hypotheses, the research is more likely never to be published and to end up in the “drawer.” The causes are many and not particularly relevant here; what matters is the consequence: publication bias leads not only to an overestimation of the empirical evidence for the existence of the effects under study, but also to an overestimation of their magnitude. Simply put: if research “doesn’t pan out,” few people publish it, and this distorts the resulting picture of the science. The phenomenon is well documented across scientific disciplines, including psychology, chemistry, and medicine.
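A toy simulation (again our own illustration of the mechanism, with arbitrary parameters, not an analysis from the study) shows the consequence: if only statistically significant results leave the drawer, the published average overshoots the true effect.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
TRUE_R, N = 0.2, 40  # a weak true effect studied with small samples

published = []
for _ in range(5000):
    cov = [[1.0, TRUE_R], [TRUE_R, 1.0]]
    x, y = rng.multivariate_normal([0.0, 0.0], cov, size=N).T
    r, p = stats.pearsonr(x, y)
    if p < 0.05:          # the "file drawer": non-significant studies vanish
        published.append(r)

print(f"true effect: {TRUE_R}, mean published effect: {np.mean(published):.2f}")
# Only studies that overshot the true correlation by chance cross p < .05,
# so the published literature reports a systematically inflated effect.
```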

A secondary consequence of publication bias is an apparent relationship between effect size and sample size across published studies. If a research area is subject to publication bias, studies with smaller samples tend to report systematically larger effects, while large, extensive studies report smaller ones. Simply put, small samples overestimate the effect. A number of meta-analytic procedures exploit this pattern, both to detect whether publication bias is present and to correct the resulting effect-size estimates.
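One widely used procedure of this kind is Egger’s regression test for funnel-plot asymmetry. A minimal sketch (a simplified, unweighted version for illustration only):

```python
import numpy as np

def egger_intercept(effects, ses):
    """Simplified Egger-style asymmetry check.

    Regress the standardized effect (effect / SE) on precision (1 / SE).
    With no publication bias the intercept is close to zero; a clearly
    nonzero intercept means small, noisy studies report systematically
    different effects than large ones, i.e., an asymmetric funnel plot."""
    effects, ses = np.asarray(effects), np.asarray(ses)
    z = effects / ses
    X = np.column_stack([np.ones_like(ses), 1.0 / ses])
    (intercept, slope), *_ = np.linalg.lstsq(X, z, rcond=None)
    return intercept, slope
```

Real meta-analytic software adds a significance test for the intercept and correction methods such as trim-and-fill, but the underlying logic is the same.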

In the aforementioned meta-analysis on the relationship between school grades and intelligence, however, German researchers observed the exact opposite effect, known as “reverse publication bias”: that is, the smaller the sample, the weaker the correlation between grades and intelligence. Since then, claims have repeatedly surfaced on social media that psychologists are deliberately downplaying the significance of intelligence in people’s lives, thereby sabotaging research on it, in an effort, among other things, to diminish the role of intelligence testing in psychological diagnosis. The alleged cause is said to be the ideological, liberal-leftist convictions of academics, with which intelligence research is supposedly at odds. Intelligence is, in fact, largely innate, strongly linked to the ability to learn, and has far-reaching consequences for people’s lives—which may seem to contradict various inclusive and egalitarian beliefs.

Hynek Cígler of INPSY demonstrated that this conclusion is nonsensical: the finding was merely an artifact of an incorrectly applied statistical approach. The key problem is that the German authors did not examine sample size directly, but rather the so-called sampling error, that is, the degree of random noise in each study. This sounds similar, but there is a fundamental difference. For a correlation coefficient (a measure of how strongly two things are related), the sampling error depends not only on the sample size but also on how strong the correlation is. Studies that found a strong relationship between intelligence and grades therefore automatically had a lower error, without any bias or sabotage.
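The mechanics are easy to verify with the standard large-sample formulas (a back-of-the-envelope check of ours, not code from the paper): the sampling error of a correlation r is approximately (1 − r²)/√(n − 1), while the error of its Fisher transform z = atanh(r) is 1/√(n − 3), which depends on n alone.

```python
import numpy as np

n = 100  # hold the sample size fixed for every "study"
for r in (0.1, 0.3, 0.5, 0.7):
    se_r = (1 - r**2) / np.sqrt(n - 1)  # sampling error of r: depends on r itself
    se_z = 1 / np.sqrt(n - 3)           # sampling error of Fisher's z: depends on n only
    print(f"r = {r:.1f}   SE(r) = {se_r:.4f}   SE(z) = {se_z:.4f}")
# SE(r) drops from ~0.10 to ~0.05 as r rises, with n never changing.
# Plotting r against SE(r) therefore pairs noisier studies with weaker
# correlations, which mimics "reverse publication bias" mechanically.
```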

The graph shows three ways of analyzing the same data; each point represents the result of one source study. The left panel shows that the strength of the correlation between intelligence and school grades (horizontal axis) is unrelated to sample size (vertical axis). The middle panel shows a very strong relationship with the magnitude of the sampling error, the finding published by the original German authors that led to the false conclusion of reverse publication bias. The right panel illustrates the optimal procedure, using the so-called Fisher transformation.

Hynek Cígler used the source data from the original German team and recalculated the meta-analysis, checking for publication bias with three alternative methods. It turned out that the magnitude of the correlation is unrelated to sample size once the correct statistical method is applied. He then corroborated the results with a simulation study.
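The artifact is easy to reproduce. Here is a sketch in the same spirit as the paper’s simulation, though the parameter choices are our own, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulate 500 studies with NO publication bias: heterogeneous true
# correlations, and sample sizes drawn independently of the effect.
true_rs = rng.uniform(0.3, 0.7, size=500)
ns = rng.integers(30, 500, size=500)

obs_r = np.array([
    np.corrcoef(*rng.multivariate_normal([0, 0], [[1, tr], [tr, 1]], size=n).T)[0, 1]
    for tr, n in zip(true_rs, ns)
])

se_r = (1 - obs_r**2) / np.sqrt(ns - 1)  # conditional SE: depends on r itself
se_z = 1 / np.sqrt(ns - 3)               # Fisher-z SE: depends on n alone

print("corr(r, n)    =", round(float(np.corrcoef(obs_r, ns)[0, 1]), 2))    # ~0, as built in
print("corr(r, SE_r) =", round(float(np.corrcoef(obs_r, se_r)[0, 1]), 2))  # clearly negative: the artifact
print("corr(z, SE_z) =", round(float(np.corrcoef(np.arctanh(obs_r), se_z)[0, 1]), 2))  # ~0 again
```

Even with no bias in the simulated literature, regressing r on its conditional sampling error “detects” reverse publication bias; switching to the Fisher transformation makes the spurious pattern disappear.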

The described effect is relatively trivial and quite obvious to anyone with sufficient statistical knowledge. Perhaps this is precisely why it had not yet been described in the professional literature; on the contrary, some common methodological studies and meta-analysis textbooks recommend the problematic procedure as suboptimal but still acceptable. Hynek Cígler, by contrast, demonstrated that it should under no circumstances be used, which may help improve the quality of future research studies.

Is there, then, a distrust of intelligence research among psychologists, or of the construct of intelligence itself? Probably not; or rather, there is no evidence to suggest so. Intelligence plays a relatively prominent role in psychological diagnosis; it is routinely tested and is a very good predictor of, for example, academic success, the ability to study, or the ability to perform intellectually demanding jobs. In this light, the alleged biases against intelligence research do not appear to exist, nor are they supported by the personal experience of the researcher at the Psychology Research Institute.


Recommended citation:

Cígler, H. (2026). No evidence for reversed publication bias in research on intelligence and school grades: Funnel plot asymmetry as an artifact of conditional standard errors. Intelligence, 116, 102005. https://doi.org/10.1016/j.intell.2026.102005

Interested in the study? Contact its author!

Mgr. Hynek Cígler, Ph.D.
Team Measurement in Psychology
cigler@fss.muni.cz

Read the study

Many of our publications follow the principles of Open Science. We want to ensure that our studies are reproducible by other teams and are free to read.


Open Science at INPSY

