1. Evaluation of clinical prediction models (part 3): calculating the sample size required for an external validation study
2. Evaluation of clinical prediction models (part 2): how to undertake an external validation study
3. Evaluation of clinical prediction models (part 1): from development to external validation
4. Open Science 2.0: towards a truly collaborative research ecosystem
5. Open science practices need substantial improvement in prognostic model studies in oncology using machine learning
6. ACCORD (ACcurate COnsensus Reporting Document): a reporting guideline for consensus methods in biomedicine developed via a modified Delphi
7. The TRIPOD-P reporting guideline for improving the integrity and transparency of predictive analytics in healthcare through study protocols
8. What's fair is… fair? Presenting JustEFAB, an ethical framework for operationalizing medical ethics and social justice in the integration of clinical machine learning
9. Protocol for the development of an artificial intelligence extension to the Consolidated Health Economic Evaluation Reporting Standards (CHEERS) 2022
10. Transparent reporting of multivariable prediction models for individual prognosis or diagnosis: checklist for systematic reviews and meta-analyses (TRIPOD-SRMA)
11. Commentary: patient perspectives on artificial intelligence; what have we learned and how should we move forward?
12. Machine learning for the prediction of toxicities from head and neck cancer treatment: a systematic review with meta-analysis
13. Systematic review finds "spin" practices and poor reporting standards in studies on machine learning-based prediction models
14. Machine learning and statistics in clinical research articles: moving past the false dichotomy
15. Gynecological cancer prognosis using machine learning techniques: a systematic review of the last three decades (1990-2022)
16. Overinterpretation of findings in machine learning prediction model studies in oncology: a systematic review
17. Machine learning models to forecast outcomes of pituitary surgery: a systematic review in quality of reporting and current evidence
18. There is no such thing as a validated prediction model
19. Transparent reporting of multivariable prediction models developed or validated using clustered data (TRIPOD-Cluster): explanation and elaboration
20. Transparent reporting of multivariable prediction models developed or validated using clustered data: TRIPOD-Cluster checklist
21. Current state and completeness of reporting clinical prediction models using machine learning in systemic lupus erythematosus: a systematic review
22. Cardiovascular complications in a diabetes prediction model using machine learning: a systematic review
23. PRISMA AI reporting guidelines for systematic reviews and meta-analyses on AI in healthcare
24. Reporting and risk of bias of prediction models based on machine learning methods in preterm birth: a systematic review
25. Systematic review identifies the design and methodological conduct of studies on machine learning-based prediction models
26. Tackling bias in AI health datasets through the STANDING Together initiative
27. Machine learning in predicting cardiac surgery-associated acute kidney injury: a systemic review and meta-analysis
28. Social determinants of health in prognostic machine learning models for orthopaedic outcomes: a systematic review
29. Risk of bias of prognostic models developed using machine learning: a systematic review in oncology
30. Reporting guideline for the early-stage clinical evaluation of decision support systems driven by artificial intelligence: DECIDE-AI
31. Methodological conduct of prognostic prediction models developed using machine learning in oncology: a systematic review
32. Guidelines and quality criteria for artificial intelligence-based prediction models in healthcare: a scoping review
33. Minimum sample size calculations for external validation of a clinical prediction model with a time-to-event outcome
34. Risk of bias in studies on prediction models developed using supervised machine learning techniques: systematic review
35. Standardized reporting of machine learning applications in urology: the STREAM-URO framework
36. Completeness of reporting of clinical prediction models developed using supervised machine learning: a systematic review
37. Protocol for development of a reporting guideline (TRIPOD-AI) and risk of bias tool (PROBAST-AI) for diagnostic and prognostic prediction model studies based on artificial intelligence
38. Reporting of prognostic clinical prediction models based on machine learning methods in oncology needs to be improved
39. Machine learning in vascular surgery: a systematic review and critical appraisal
40. Developing a reporting guideline for artificial intelligence-centred diagnostic test accuracy studies: the STARD-AI protocol
41. Minimum sample size for external validation of a clinical prediction model with a binary outcome
42. Presenting artificial intelligence, deep learning, and machine learning studies to clinicians and healthcare stakeholders: an introductory reference with a guideline and a Clinical AI Research (CAIR) checklist proposal
43. Availability and reporting quality of external validations of machine-learning prediction models with orthopedic surgical outcomes: a systematic review
44. Equity in essence: a call for operationalising fairness in machine learning for healthcare
45. Clinical prediction models: diagnosis versus prognosis
46. Reproducibility in machine learning for health research: still a ways to go
47. Leveraging open science to accelerate research
48. Machine learning prediction models in orthopedic surgery: a systematic review in transparent reporting
49. Health data poverty: an assailable barrier to equitable digital health care
50. Artificial intelligence in dental research: checklist for authors, reviewers, readers
51. External validation of clinical prediction models: simulation-based sample size calculations were more reliable than rules-of-thumb
52. Clinician checklist for assessing suitability of machine learning applications in healthcare
53. External validations of cardiovascular clinical prediction models: a large-scale review of the literature
54. Minimum sample size for external validation of a clinical prediction model with a continuous outcome
55. Using machine-learning risk prediction models to triage the acuity of undifferentiated patients entering the emergency care system: a systematic review
56. Recommendations for reporting machine learning analyses in clinical research
57. Ethical machine learning in health care
58. Guidelines for clinical trial protocols for interventions involving artificial intelligence: the SPIRIT-AI extension
59. Minimum information about clinical artificial intelligence modeling: the MI-CLAIM checklist
60. MINIMAR (MINimum Information for Medical AI Reporting): developing reporting standards for artificial intelligence in health care
61. Transparent reporting of multivariable prediction models in journal and conference abstracts: TRIPOD for Abstracts
62. Ethical limitations of algorithmic fairness solutions in health care machine learning
63. Systematic review and critical appraisal of prediction models for diagnosis and prognosis of COVID-19 infection
64. Presenting machine learning model information to clinical end users with model facts labels
65. Calculating the sample size required for developing a clinical prediction model
66. Reporting quality of studies using machine learning models for medical diagnosis: a systematic review
67. Checklist for Artificial Intelligence in Medical Imaging (CLAIM): a guide for authors and reviewers
68. A systematic review of machine learning models for predicting outcomes of stroke with structured data
69. Prognostic models for outcome prediction in patients with chronic obstructive pulmonary disease: systematic review and critical appraisal
70. Reporting of artificial intelligence prediction models
71. A systematic review shows no performance benefit of machine learning over logistic regression for clinical prediction models
72. PROBAST: a tool to assess risk of bias and applicability of prediction model studies: explanation and elaboration
73. PROBAST: a tool to assess the risk of bias and applicability of prediction model studies
74. Minimum sample size for developing a multivariable prediction model: Part II - binary and time-to-event outcomes
75. Minimum sample size for developing a multivariable prediction model: Part I - continuous outcomes
76. Sample size for binary logistic prediction models: beyond events per variable criteria
77. GRIPP2 reporting checklists: tools to improve reporting of patient and public involvement in research
78. No rationale for 1 variable per 10 events criterion for binary logistic regression analysis
79. Prediction models for cardiovascular disease risk in the general population: systematic review
80. Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD): explanation and elaboration
81. Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): the TRIPOD statement
82. External validation of multivariable prediction models: a systematic review of methodological conduct and reporting
83. Reducing waste from incomplete or unusable reports of biomedical research
84. Reporting and methods in clinical prediction research: a systematic review
85. Developing risk prediction models for type 2 diabetes: a systematic review of methodology and reporting
86. Reporting methods in studies developing prognostic models in cancer: a review
87. Guidance for developers of health research reporting guidelines
88. Predicting outcome after traumatic brain injury: development and international validation of prognostic scores based on admission characteristics
89. EQUATOR: reporting guidelines for health research
90. General cardiovascular risk profile for use in primary care: the Framingham Heart Study
91. The use of clinical risk factors enhances the performance of BMD in the prediction of hip and osteoporotic fractures in men and women
92. Projecting individualized probabilities of developing breast cancer for white females who are being examined annually
93. What is a patient representative?
Ethics approval: This study was conducted with the approval of the East London and City Health Authority Ethics Committee.
Provenance and peer review: Not commissioned; externally peer reviewed.
Data sharing: Aggregated Delphi survey responses are available on the Open Science Framework.