Research Fellow, Thomas Jefferson University Hospital
Introduction: Large language models (LLMs) have been used to automate tasks such as writing discharge summaries and operative reports in neurosurgery. The present study evaluates their ability to identify Current Procedural Terminology (CPT) codes from operative reports.
Methods: Three LLMs (ChatGPT 4.0, AtlasGPT, and Gemini) were evaluated on their ability to provide CPT codes for diagnostic and interventional endovascular neurosurgery procedures at a single institution. Responses were classified as correct, partially correct, or incorrect, and the percentage of correctly identified CPT codes was calculated. The chi-square test and the Kruskal-Wallis test were used to compare responses across LLMs.
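As a minimal sketch of the statistical comparison described above, the two named tests are available in scipy.stats; the counts and scores below are purely illustrative, since the per-procedure data are not reported in this abstract.

```python
from scipy.stats import chi2_contingency, kruskal

# Hypothetical contingency table: rows = LLMs, columns = response class
# (correct, partially correct, incorrect) across 17 operative notes.
counts = [
    [7, 10, 0],   # AtlasGPT (illustrative numbers only)
    [6, 10, 1],   # ChatGPT
    [2, 6, 9],    # Gemini
]
chi2, p_chi, dof, _ = chi2_contingency(counts)
print(f"chi-square: chi2={chi2:.2f}, dof={dof}, p={p_chi:.4f}")

# Hypothetical per-procedure percent-correct scores for the
# Kruskal-Wallis comparison of the three LLMs.
atlas = [50, 40, 33, 45, 42]
chatgpt = [40, 35, 30, 38, 37]
gemini = [15, 10, 20, 12, 14]
h, p_kw = kruskal(atlas, chatgpt, gemini)
print(f"Kruskal-Wallis: H={h:.2f}, p={p_kw:.4f}")
```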
Results: A total of 17 operative notes were included in the present study. AtlasGPT provided at least partially correct CPT codes for all procedures (100%), compared with 94.1% of procedures for ChatGPT and 47.1% for Gemini (P < 0.001). On average, AtlasGPT correctly identified 41.6% of CPT codes, followed by ChatGPT (37%) and Gemini (13.8%) (P < 0.001). Pairwise comparisons among the three LLMs showed that AtlasGPT and ChatGPT outperformed Gemini.
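The abstract does not state which post-hoc method underlies the pairwise comparison, so the sketch below assumes one common approach: pairwise chi-square tests with a Bonferroni-adjusted significance threshold, again on illustrative counts.

```python
from itertools import combinations
from scipy.stats import chi2_contingency

# Hypothetical per-LLM response counts (correct, partial, incorrect).
counts = {
    "AtlasGPT": [7, 10, 0],
    "ChatGPT": [6, 10, 1],
    "Gemini": [2, 6, 9],
}
pairs = list(combinations(counts, 2))
alpha = 0.05 / len(pairs)  # Bonferroni correction for 3 comparisons
for a, b in pairs:
    chi2, p, dof, _ = chi2_contingency([counts[a], counts[b]])
    verdict = "significant" if p < alpha else "not significant"
    print(f"{a} vs {b}: p={p:.4f} ({verdict} at alpha={alpha:.4f})")
```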
Conclusion: Untrained LLMs can identify partially correct CPT codes in endovascular neurosurgery. Training these models could further improve CPT code identification and reduce healthcare expenditure.