Projects

Our group works on many projects organised around the following topic areas.

1. Research quality

What are the dimensions we should consider when evaluating research quality? How do researchers evaluate the quality of each other's work, and how do they come to those evaluations? Can we improve the evaluation of research quality by introducing rubrics, group discussions, or other structured processes?

2. Journal policies and practices

What are common journal policies and informal practices around peer review and publication? How does peer review work, and how can we improve it? How is the published literature changing over time?

3. Statistical inference

What are common errors in statistical inference, and how common are they? How can we improve statistical training and practice? What are the best tools for detecting and correcting errors in statistical inference?

4. Self-correction

What are the self-correcting mechanisms in science? How common and effective is post-publication critique? Do replications help to correct the scientific record? Who should do the work of verifying and correcting published studies, and how should it be incentivised?

5. Intellectual humility

Do research claims match the evidence presented? How common are hype and spin? How can we incentivise researchers to make calibrated claims?

6. Transparency and reproducibility

Does increased transparency lead to better-quality research, or to better detection of errors and inaccuracies in research? What kinds of transparency are most important, and how common are they? How can important forms of transparency be incentivised? To what extent, and under what conditions, does transparency improve reproducibility?

repliCATS project

The repliCATS project aims to crowdsource predictions about the reliability of 3,000 published research claims in the social and behavioural sciences. The “CATS” in repliCATS stands for Collaborative Assessment for Trustworthy Science.

We aim to estimate the replicability of 3,000 published social science research claims in business research, criminology, economics, education, political science, psychology, public administration, and sociology.

The repliCATS project is part of a research program called SCORE, funded by DARPA, that eventually aims to build automated tools that can rapidly and reliably assign confidence scores to social science research claims.

Bluesky: @replicats.bsky.social

More information
