Summary
This multi-centre reproducibility initiative assessed 150 studies using real-world clinical practice data to evaluate the reliability of findings intended to inform regulatory and coverage decisions. Original and reproduction effect sizes showed a strong positive correlation (r = 0.85), with a median relative effect magnitude of 1.0, indicating that whilst most results were closely reproduced, a subset showed meaningful divergence, largely attributable to incomplete reporting and updates to the underlying data. The authors conclude that greater methodological transparency and adherence to reporting guidance would improve reproducibility and validity assessment, supporting more robust evidence-based decision-making.
UK applicability
The findings are broadly applicable to UK healthcare systems and regulatory bodies (MHRA, NICE) that rely on real-world evidence for medical product assessment and coverage decisions. Recommendations for improved reporting and methodological transparency align with UK standards for evidence evaluation and would strengthen the credibility of studies informing NHS policy.
Key measures
Pearson correlation between original and reproduction effect sizes: r = 0.85; median relative magnitude of effect (ratio of original to reproduction hazard ratios): 1.0 (IQR 0.9–1.1; range 0.3–2.1); completeness of methodological reporting
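As a minimal sketch of how these two headline metrics could be computed from paired effect estimates, the following uses invented hazard-ratio pairs (not the study's actual data); the function names and values are illustrative assumptions only:

```python
# Hedged sketch: computing a Pearson correlation and the relative
# magnitude of effect (original HR / reproduction HR) for paired
# estimates. All data below are hypothetical, for illustration only.
from statistics import median

def pearson_r(xs, ys):
    # Standard sample Pearson correlation coefficient.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def relative_magnitude(hr_original, hr_reproduction):
    # Ratio of original to reproduction hazard ratios;
    # 1.0 indicates exact agreement between the two analyses.
    return hr_original / hr_reproduction

# Hypothetical (original, reproduction) hazard-ratio pairs.
pairs = [(0.80, 0.82), (1.25, 1.20), (0.65, 0.70),
         (1.10, 1.05), (0.90, 0.90)]
orig = [p[0] for p in pairs]
repro = [p[1] for p in pairs]

r = pearson_r(orig, repro)
ratios = [relative_magnitude(o, rp) for o, rp in pairs]
med = median(ratios)
```

Reporting the median and IQR of the per-study ratios, rather than a single pooled figure, is what lets a central value of 1.0 coexist with a wide range (0.3–2.1): most studies reproduce closely while a minority diverge substantially.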
Outcomes reported
The study reproduced results from 150 peer-reviewed studies analysing real-world evidence from digital clinical practice data and evaluated reporting completeness for 250 studies. Reproducibility was assessed by comparing original and reproduction effect sizes across healthcare databases.