The aim of this study was to assess the accuracy and readability of answers generated by LLM-chatbots to common patient questions about low back pain (LBP).
This cross-sectional study analysed responses to 30 LBP-related questions covering self-management, risk factors, and treatment. The questions were developed by experienced clinicians and researchers and piloted with a group of consumer representatives with lived experience of LBP. The questions were entered as prompts into ChatGPT 3.5, Bing, Bard (Gemini), and ChatGPT 4.0. Responses were evaluated for accuracy, readability, and the presence of a disclaimer about health advice. Accuracy was assessed by comparing the generated recommendations against the main clinical guidelines for LBP; responses were analysed by two independent reviewers, and each recommendation was classified as accurate, inaccurate, or unclear. Readability was measured with the Flesch Reading Ease Score (FRES).
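For context, the FRES is derived from average sentence length and average syllables per word using the standard Flesch formula. The sketch below illustrates how such a score could be reproduced; the syllable counter is a simplified heuristic for illustration only, not the validated tooling used in the study, and in practice an established package (e.g., textstat's flesch_reading_ease) would be preferable.

```python
import re

def count_syllables(word: str) -> int:
    """Rough vowel-group heuristic for syllable counting (illustrative only)."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text: str) -> float:
    """Standard Flesch Reading Ease formula:
    206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words).
    Higher scores indicate easier reading; roughly 50-60 is 'fairly difficult'."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / max(1, len(sentences)))
            - 84.6 * (syllables / max(1, len(words))))

# Example: score a hypothetical chatbot response
print(round(flesch_reading_ease("Stay active. Gentle exercise usually helps low back pain."), 1))
```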
Out of 120 responses yielding 1069 recommendations, 55.8% of recommendations were accurate, 42.1% inaccurate, and 1.9% unclear. The chatbots demonstrated overall moderate accuracy in their recommendations on LBP: they delivered relatively accurate responses in the 'self-management' and 'treatment' domains but showed notable inaccuracies for 'risk factors'. Overall, LLM-chatbots provided answers that were 'fairly difficult' to read, with a mean (SD) FRES of 50.94 (3.06); this level of readability could negatively affect patient understanding and behaviour. The chatbots included a disclaimer about health advice in 70% to 100% of their responses, helping users recognise that the information provided is not a substitute for professional medical advice.
The use of LLM-chatbots as tools for patient education and counselling in LBP shows promising but variable results. These chatbots generally provide moderately accurate recommendations, but accuracy varies with the topic of each question. The readability of the answers was inadequate, potentially limiting patients' ability to comprehend the information.
This study highlights the potential and limitations of using LLM-chatbots as a patient resource for low back pain. The findings suggest that, while LLM-chatbots can provide moderately accurate information, inconsistent accuracy across domains, particularly for risk factors, and poor readability could impact patient understanding and behaviour. These findings can guide future research on improving LLM-chatbot algorithms, inform clinical practice, and shape policy decisions regarding the integration of AI into patient education and support systems worldwide.
Keywords: low back pain; patient education