How to interpret factor analysis? Let's ditch the simple rules
Confirmatory Factor Analysis (CFA) is one of the core tools of modern psychology, widely used to evaluate how psychological measurement instruments function. Despite its importance, CFA is often applied in an overly mechanical way. A new methodological study by Petr Palíšek and Edita Chvojka (INPSY), together with Anna Literová (IRTIS), warns that blind reliance on “magic” statistical cut-offs used to assess the quality of factor models – such as RMSEA or CFI – can lead to erroneous conclusions. In a clear and accessible tutorial, the authors demonstrate how model evaluation can be approached more thoughtfully and with a deeper understanding of what the data actually convey.
Model fit does not tell us whether a model is “correct”
The aim of the publication is to provide a practical yet theoretically grounded guide to evaluating model fit (as quantified by fit indices) without relying on universal, supposedly "recommended" values that are assumed to indicate an acceptable fit. The authors explain that fit indices are not tests of model correctness, but rather indicators of the degree of misfit between the model and the data – similar in spirit to effect size measures (e.g., Cohen's d). This means that their interpretation must always be contextualized, taking into account sample size, model complexity, item quality, and the substantive research goal.
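To make the "effect size" reading concrete: an index such as RMSEA is a continuous function of a model's chi-square statistic, degrees of freedom, and sample size, not a pass/fail test. The short Python sketch below uses the standard RMSEA formula with invented numbers (not taken from the study) to show that the index quantifies a degree of misfit:

```python
import math

def rmsea(chi2: float, df: int, n: int) -> float:
    """RMSEA from chi-square, degrees of freedom, and sample size.

    Standard formula: sqrt(max(chi2 - df, 0) / (df * (n - 1))).
    """
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Hypothetical values for illustration only.
print(round(rmsea(chi2=45.0, df=20, n=500), 3))  # a small but nonzero misfit
print(rmsea(chi2=20.0, df=20, n=500))            # chi2 <= df yields RMSEA = 0.0
```

Note how nothing in the computation itself singles out .05 (or any other value) as a boundary between "good" and "bad" models; that boundary is a convention imposed afterwards, which is precisely what the tutorial argues against applying mechanically.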
Why this matters
Contemporary psychological research routinely works with complex models, while simultaneously facing a replication crisis and the accumulation of poorly specified models. The study highlights that rigid application of rules such as “RMSEA < .05 = good model” may result in accepting theoretically weak or misspecified models simply because they pass a mechanical statistical filter. This has direct consequences for scale validity, interpretation of findings, and subsequent theoretical claims.
Are there alternatives?
The authors advocate moving away from binary thinking (most commonly “the model fits / does not fit”) toward analytical judgment based on the integration of multiple sources of information. In addition to global fit indices and related statistics, they emphasize systematic work with the residual matrix, which reveals where exactly the model fails to capture the data. They also introduce so-called dynamic fit indices, which adapt interpretive thresholds to the specific model and data. Importantly, these are not meant to serve as new “better cut-offs,” but rather as tools to support informed decision-making.
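To illustrate the residual-matrix idea, here is a minimal Python sketch for a one-factor model. All numbers (loadings and observed correlations) are invented for illustration and do not come from the paper; for standardized loadings, the model-implied correlation between two items is simply the product of their loadings:

```python
import numpy as np

# Hypothetical standardized loadings for a one-factor model of four items.
loadings = np.array([0.8, 0.7, 0.6, 0.3])

# Model-implied correlation matrix: off-diagonals are lambda_i * lambda_j.
implied = np.outer(loadings, loadings)
np.fill_diagonal(implied, 1.0)

# Hypothetical observed correlations; items 3 and 4 correlate more strongly
# than a single common factor can explain (e.g., a shared wording effect).
observed = np.array([
    [1.00, 0.55, 0.49, 0.22],
    [0.55, 1.00, 0.43, 0.20],
    [0.49, 0.43, 1.00, 0.45],
    [0.22, 0.20, 0.45, 1.00],
])

# The residual matrix shows WHERE the model fails, not just how much overall.
residuals = observed - implied
print(np.round(residuals, 2))  # the item 3 / item 4 entry stands out
```

A global index would average this localized misfit away; inspecting the residuals points directly at the item pair the model cannot accommodate, which is the kind of diagnostic judgment the authors recommend over a single cut-off.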
Summary and implications for practice
The study by Palíšek, Chvojka, and Literová offers an accessible bridge between psychometric theory and everyday research practice in psychology. It reminds us that a good model is not one that merely satisfies tabulated criteria, but one that makes theoretical sense and provides an adequate representation of the data. Blind reliance on fixed cut-off values for model fit contributes to a broader “modeling crisis” in psychology – where statistically acceptable-looking models obscure deeper problems in theory or measurement, much like the historical overuse of p-values contributed to the replication crisis. For psychology students and practitioners who use CFA primarily as a tool rather than as the main focus of their work, this study represents a valuable invitation to slow down, look “under the hood,” and evaluate models using more nuanced and up-to-date approaches.
Recommended citation:
Palíšek, P., Chvojka, E., & Literová, A. (2025). Abandon all thumbs ye who model: An up-to-date tutorial on fitting CFA models. Collabra: Psychology, 11(1), Article 147248. https://doi.org/10.1525/collabra.147248
Interested in the study? Contact its author!
Mgr. et Mgr. Petr Palíšek
Measurement in Psychology
palisek@fss.muni.cz
Many of our publications follow the principles of Open Science. We want to ensure that our studies are reproducible by other teams and are free to read.
Our goal is to preregister our studies, publish our data and materials, and make them publicly available.