Artificial Intelligence and Informatics – Commentary

Reporting checklists: from a tool after publication to a tool before submission

1. Department of Imaging, Tongren Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
2. Shanghai Key Laboratory of Flexible Medical Robotics, Tongren Hospital, Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China
Received Date: 14.03.2025
Accepted Date: 24.03.2025
Online Date: 05.05.2025

Artificial intelligence (AI) methods have attracted widespread interest in the field of medical imaging, and the increasing number of AI publications in radiological journals reflects this growing attention from researchers and journals alike. As the old saying goes, “interest is the best teacher,” yet interest in AI does not automatically translate into proper use and adequate reporting of AI methods. Promising results in published articles do not necessarily ensure high methodological quality. More often than not, incomplete reporting of methodology and the lack of data and code sharing hinder study evaluation and model replication. To address this issue, the Checklist for AI in Medical Imaging (CLAIM) was developed as a guide for the complete reporting of AI studies in medical imaging.1 CLAIM has been widely adopted, with more than 800 articles citing the guideline as of March 14, 2025, and has also been used as a tool for quality assessment in systematic reviews of AI studies.2 However, these systematic reviews, which typically focus on medical imaging studies using AI, highlight the limited quality of current studies. This raises the question of whether CLAIM has been used appropriately and how the reporting and methodological quality of AI studies in medical imaging can be improved.

In this issue of Diagnostic and Interventional Radiology, the study by Koçak et al.3 not only reveals a substantial gap between the current state of reporting and the ideal reporting of AI studies in medical imaging but also identifies factors influencing adherence to CLAIM. The study finds that CLAIM adherence is associated with the journal impact factor quartile, publication year, and specific radiology subfields. Not surprisingly, CLAIM adherence improved after CLAIM’s publication, likely because authors became more familiar with standard practices. Higher adherence to CLAIM was observed in cardiovascular studies, suggesting a more mature use of AI methods in this subfield, from automated reconstruction tools for coronary computed tomography angiography to analysis software for cardiac magnetic resonance. Despite this progress, improving CLAIM adherence remains more important than identifying the sources of high adherence. High-impact journals might promote more transparent reporting practices through more rigorous peer review processes and encourage authors to follow AI guidelines and include them in submission requirements.4 As the mandatory use of reporting guidelines has been shown to improve study quality,5 the current study provides a clear and actionable recommendation to enhance the quality of AI studies: increase journal support for CLAIM use.6
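
To make the notion of “CLAIM adherence” concrete, the sketch below shows one plausible way such a score can be computed: each study receives the percentage of applicable CLAIM items that are reported, and scores are then summarized by journal impact factor quartile. The item statuses, quartile labels, and the percentage-based score are illustrative assumptions for this commentary, not the exact method used by Koçak et al.3

```python
# Minimal sketch of an adherence analysis (assumed scoring scheme, illustrative data;
# not the exact method of Koçak et al.).
from statistics import median

# Each study: CLAIM item statuses ("reported", "not_reported", "not_applicable")
# and the impact factor quartile of the publishing journal.
studies = [
    {"quartile": "Q1", "items": ["reported"] * 30 + ["not_reported"] * 5 + ["not_applicable"] * 7},
    {"quartile": "Q1", "items": ["reported"] * 25 + ["not_reported"] * 10 + ["not_applicable"] * 7},
    {"quartile": "Q3", "items": ["reported"] * 18 + ["not_reported"] * 20 + ["not_applicable"] * 4},
]

def adherence(items):
    """Percentage of applicable CLAIM items that are reported."""
    applicable = [i for i in items if i != "not_applicable"]
    return 100 * sum(i == "reported" for i in applicable) / len(applicable)

# Group adherence scores by journal quartile and report the median per group.
by_quartile = {}
for study in studies:
    by_quartile.setdefault(study["quartile"], []).append(adherence(study["items"]))

for quartile, scores in sorted(by_quartile.items()):
    print(f"{quartile}: median adherence {median(scores):.1f}% (n={len(scores)})")
```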

In this study, Koçak et al.3 conduct a two-level analysis to address common critiques of CLAIM. The study summarizes comments from systematic reviews and identifies two main critiques: concerns about the inapplicability of certain items to all study types and the subjective nature of reporting decisions. The concern regarding inapplicability has been addressed with the CLAIM 2024 update, which includes a “not applicable” option for item evaluation,7 but the issue of subjectivity remains. Several factors may contribute to the unreliability of CLAIM evaluation, including unclear item descriptions, subjective comprehension, and the complexity of AI methods.8 When researchers use CLAIM for future systematic reviews, a greater focus on reproducibility may be necessary. CLAIM still needs updates, including more explanations and elaborations with examples, so that users can apply the tool with a better understanding.9 Additionally, developing user-friendly online tools would enhance convenience.10 The introduction of automatic tools, such as large language models, may also aid in optimizing the reproducibility of CLAIM evaluation. Furthermore, translated versions of the tools, endorsed by the original authors, may increase visibility and facilitate adaptation to local contexts.
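
As a rough illustration of what a reproducibility-focused evaluation could look like, the sketch below compares hypothetical item-level decisions from two reviewers scoring the same study and computes raw agreement alongside Cohen’s kappa. The three-level item status (including the “not applicable” option introduced in CLAIM 2024) and the example ratings are assumptions made purely for illustration.

```python
# Minimal sketch of quantifying inter-rater agreement on CLAIM item decisions
# (hypothetical ratings; Cohen's kappa is one possible agreement statistic).
from collections import Counter

# Item-level decisions from two reviewers evaluating the same study.
rater_a = ["reported", "reported", "not_reported", "not_applicable", "reported", "not_reported"]
rater_b = ["reported", "not_reported", "not_reported", "not_applicable", "reported", "reported"]

def cohens_kappa(a, b):
    """Chance-corrected agreement between two raters over the same items."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    counts_a, counts_b = Counter(a), Counter(b)
    expected = sum(counts_a[c] * counts_b[c] for c in set(a) | set(b)) / (n * n)
    return (observed - expected) / (1 - expected)

raw_agreement = sum(x == y for x, y in zip(rater_a, rater_b)) / len(rater_a)
print(f"Raw agreement: {raw_agreement:.2f}")
print(f"Cohen's kappa: {cohens_kappa(rater_a, rater_b):.2f}")
```

Low agreement concentrated on specific items would point directly at the item descriptions that most need the clearer explanations and examples discussed above.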

Beyond the CLAIM tool itself, the quality of individual studies remains crucial. A previous study by Koçak et al.8 investigated the use of CLAIM in individual studies and found that only a small percentage of publications used CLAIM along with a supplementary filled-out checklist, and that many of the completed checklists contained errors. CLAIM is a useful tool for post-publication evaluation,2 but it is not currently required before submission. It remains unclear whether the endorsement of CLAIM and other AI-specific guidelines can improve reporting and methodological quality. Furthermore, it is uncertain whether and how these AI-specific guidelines are used during the editorial process, as only a limited number of journals practice open peer review or publish articles with filled-out checklists. Instead of solely critiquing the adherence of published studies to CLAIM, it would be more valuable to investigate the influence of CLAIM on scientific publication practices. The primary intention of developing a checklist is not to evaluate existing studies retrospectively with strict standards but to guide ongoing research. The checklist can also serve as guidance for peer review before publication and as a tool for study design prior to submission.

In conclusion, the work by Koçak et al.3 draws the community’s attention to shortcomings in the reporting and methodological quality of AI studies in medical imaging. Although checklists may not resolve this problem overnight, they pave the way for a future of transparent reporting and high-quality methodology. Therefore, the use of reporting checklists is recommended before submission, during evaluation, and after publication.

Conflict of interest disclosure

Dr. Jingyu Zhong acknowledges his position as a member of the Scientific Editorial Board of European Radiology, Insights into Imaging, American Journal of Roentgenology, and BMC Medical Imaging.

Funding

This study has received funding from the National Natural Science Foundation of China (82302183), the Research Fund of the Health Commission of Shanghai Municipality (20244Y0214), the Yangfan Project of the Science and Technology Commission of Shanghai Municipality (22YF1442400), the Laboratory Open Fund of Key Technology and Materials in Minimally Invasive Spine Surgery (2024JZWC-YBA07), and the Research Fund of Tongren Hospital, Shanghai Jiao Tong University School of Medicine (TRKYRC-XX202204, TR2024RC16). The funders played no role in the study design, data collection or analysis, decision to publish, or manuscript preparation.

References

1. Mongan J, Moy L, Kahn CE Jr. Checklist for Artificial Intelligence in Medical Imaging (CLAIM): a guide for authors and reviewers. Radiol Artif Intell. 2020;2(2):e200029.
2. Si L, Zhong J, Huo J, et al. Deep learning in knee imaging: a systematic review utilizing a Checklist for Artificial Intelligence in Medical Imaging (CLAIM). Eur Radiol. 2022;32(2):1353-1361.
3. Koçak B, Köse F, Keleş A, Şendur A, Meşe İ, Karagülle M. Adherence to the Checklist for Artificial Intelligence in Medical Imaging (CLAIM): an umbrella review with a comprehensive two-level analysis. Diagn Interv Radiol. Epub 2025 Feb 10.
4. Koçak B, Keleş A, Köse F. Meta-research on reporting guidelines for artificial intelligence: are authors and reviewers encouraged enough in radiology, nuclear medicine, and medical imaging journals? Diagn Interv Radiol. 2024;30(5):291-298.
5. Dewey M, Levine D, Bossuyt PM, Kressel HY. Impact and perceived value of journal reporting guidelines among radiology authors and reviewers. Eur Radiol. 2019;29(8):3986-3995.
6. Zhong J, Xing Y, Lu J, et al. The endorsement of general and artificial intelligence reporting guidelines in radiological journals: a meta-research study. BMC Med Res Methodol. 2023;23(1):292.
7. Tejani AS, Klontzas ME, Gatti AA, et al. Checklist for Artificial Intelligence in Medical Imaging (CLAIM): 2024 Update. Radiol Artif Intell. 2024;6(4):e240300.
8. Kocak B, Keles A, Akinci D’Antonoli T. Self-reporting with checklists in artificial intelligence research on medical imaging: a systematic review based on citations of CLAIM. Eur Radiol. 2024;34(4):2805-2815.
9. Kocak B, Borgheresi A, Ponsiglione A, et al. Explanation and elaboration with examples for CLEAR (CLEAR-E3): an EuSoMII radiomics auditing group initiative. Eur Radiol Exp. 2024;8(1):72.
10. Kocak B, Akinci D’Antonoli T, Mercaldo N, et al. METhodological RadiomICs Score (METRICS): a quality scoring tool for radiomics research endorsed by EuSoMII. Insights Imaging. 2024;15(1):8.