Please have a seat (if you haven’t already), and let me introduce myself. I am a Bayesian psychometrician and expert in Bayesian statistical modeling, working at the intersection of psychometrics and experimental psychology. Currently, I’m doing my PhD at the Cognition, Attention and Learning Lab at Universidad Autónoma de Madrid, Spain.
My research aims to bridge psychometrics and experimental psychology by developing and validating psychometric measurement models tailored to experimental paradigms. These models allow researchers to estimate the latent common factors underlying multiple experimental tasks, leading to a more robust and generalizable understanding of cognitive processes.
My first love: Psychology
My journey in psychology began at the Complutense University of Madrid, where I first aspired to become a clinical psychologist (oh, dear…). That interest naturally drew me toward psychopathology and the basic psychological processes underlying it, such as attention and memory. However, life had other plans: I ended up pursuing a PhD not in therapy or diagnosis but in how to measure basic psychological processes accurately within experimental designs.
How did I end up here? Out of skepticism and, to be honest, a bit of ignorance. The more I engaged with research, the more I realized how little I understood about the statistical machinery behind our conclusions. Reading papers without understanding their methods began to feel like an act of faith, and I wasn’t comfortable with that. To overcome that discomfort, I took what seemed like a small detour but ultimately redirected my career: I immersed myself in methodology, statistics, and psychometrics to figure out how the models we use actually work, and especially how they fail and produce statistical artifacts that researchers might mistakenly treat as substantive evidence. In short, I’m a bit of a skeptic.
My true love: Psychometrics
To me, psychometrics is the most important discipline in psychology. It safeguards the quality of our measurements by examining the sources of validity evidence and the reliability of the scores we use to capture psychological constructs. Without measurement there is no research, and with measurement every substantive conclusion ultimately depends on the strength of the validity evidence supporting the interpretation of those scores. How could I not fall in love with that?
Psychometrics starts from a simple yet powerful idea: what we observe are imperfect reflections of latent constructs that cannot be directly measured. These reflections show up in different indicators, such as questionnaire items, task performances, or behavioral measures, each capturing the construct in its own imperfect way. Psychometric models help us make sense of that imperfection by linking observed variables to latent constructs, accounting for measurement error, and allowing us to study structural relations among cognitive constructs.
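To make that idea concrete, here is a minimal sketch of a one-factor measurement model. The post itself does not name any software, so treating PyMC as the tool, the simulated data, and names such as `factor`, `loadings`, and `sigma` are all illustrative assumptions, not a description of my actual models: four observed task scores are modeled as noisy reflections of a single latent construct.

```python
import numpy as np
import pymc as pm

# Toy data: 200 people, 4 task indicators, one latent construct (all simulated)
rng = np.random.default_rng(1)
n_people, n_tasks = 200, 4
true_factor = rng.normal(size=n_people)                 # latent trait, never observed
true_loadings = np.array([0.8, 0.7, 0.6, 0.5])          # how strongly each task reflects it
y = true_factor[:, None] * true_loadings + rng.normal(scale=0.5, size=(n_people, n_tasks))

with pm.Model() as one_factor_model:
    # Latent factor scores, fixed to a standard normal scale for identification
    factor = pm.Normal("factor", mu=0.0, sigma=1.0, shape=n_people)
    # Loadings kept positive to pin down the sign of the factor
    loadings = pm.HalfNormal("loadings", sigma=1.0, shape=n_tasks)
    # Residual standard deviations: the measurement-error part of the model
    sigma = pm.HalfNormal("sigma", sigma=1.0, shape=n_tasks)
    # Each observed score is an imperfect, noisy reflection of the latent factor
    pm.Normal("y_obs", mu=factor[:, None] * loadings, sigma=sigma, observed=y)
    idata = pm.sample(1000, tune=1000, target_accept=0.9)
```

The point is the structure rather than the specific priors: the latent factor carries the shared signal across tasks, while the residual terms absorb the measurement error that would otherwise contaminate the relations we care about.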
That perspective changed the way I understood research. Measurement stopped being a technical step and became a way of thinking about how we connect data to theory. In fact, the substantive conclusions researchers care about most are precisely the structural part of psychometric models: the relations among latent constructs that give psychological theory its meaning.
My toxic love: Bayesian Statistics
Bruno de Finetti once said that “probability does not exist.” It sounds absurd at first, almost like a philosophical trick, but think about something as simple as deciding where to have dinner. You are at home, and you remember the new restaurant that just opened in your neighborhood. It is late and you have no idea if it will still be open, but you have to decide whether to go or not. If you leave, it means you believe there is more than a fifty percent chance it will be open. But the restaurant being open is not probable; it is already a fact! You just do not know it yet. And that is what de Finetti meant. Probability is not a feature of the world; it is not in the restaurant. It lives in what we know and in how we think.
De Finetti’s provocative idea planted a new seed of doubt about my own work, since every frequentist model deals with probability in one way or another. What began as a small detour into Bayesian statistics quickly became an obsession and, unexpectedly, helped me understand frequentist methods more clearly than ever before. In fact, the frequentist statistics we all know can be seen as a particular case of Bayesian statistics, much like a golden retriever is just one breed of dog; and, to be honest, it is probably the most cumbersome case of all.
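One textbook way to see the “particular case” point, at least for point estimation: with a flat prior, the posterior is just the likelihood rescaled, so the posterior mode coincides with the maximum likelihood estimate.

$$
p(\theta \mid y) \propto p(y \mid \theta)\, p(\theta), \qquad p(\theta) \propto 1 \;\Rightarrow\; \hat{\theta}_{\text{MAP}} = \arg\max_{\theta} p(y \mid \theta) = \hat{\theta}_{\text{ML}}.
$$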
Maybe that is why those who discover Bayesian statistics rarely go back. It is not just a different way of analyzing data but a more flexible way of thinking. It allows you to build and test virtually any model you can imagine, to quantify uncertainty in a natural and intuitive way, and to work without relying on asymptotic assumptions that often make little sense in practice. In my view, the Bayesian approach, especially in applied research, is far more transparent and honest about its assumptions than its frequentist counterparts.