Evaluating the performance of AI chatbots in responding to dental implant FAQs: A comparative study


TUZLALI M., BAKİ N., ARAL K., ARAL C. A., BAHÇE E.

BMC Oral Health, vol. 25, no. 1, 2025 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 25 Issue: 1
  • Publication Date: 2025
  • DOI: 10.1186/s12903-025-06863-w
  • Journal Name: BMC Oral Health
  • Indexed In: Science Citation Index Expanded (SCI-EXPANDED), Scopus, CINAHL, EMBASE, MEDLINE, Directory of Open Access Journals
  • Keywords: Artificial intelligence, ChatGPT, Claude, DeepSeek, Dental implant, Google Gemini Advanced, Implantology, Perplexity Pro
  • İnönü University Affiliated: Yes

Abstract

Background: This study aimed to evaluate and compare the performance of five publicly accessible large language model (LLM)-based chatbots (ChatGPT-o1, DeepSeek-R1, Google Gemini Advanced, Claude 3.5 Sonnet, and Perplexity Pro) in answering frequently asked questions (FAQs) about dental implant treatment. The primary goal was to assess the accuracy, completeness, clarity, relevance, and consistency of chatbot-generated answers.

Methods: A total of 45 FAQs commonly encountered in clinical practice and online patient forums regarding dental implants were selected and categorized into nine thematic domains. Each question was submitted to each chatbot individually using a standardized protocol. Responses were independently assessed by a panel of four dental experts and one layperson using a 5-point Likert scale. Statistical analysis was performed in Python using Google Colab.

Results: ChatGPT-o1 achieved the highest overall performance, particularly in relevance (M = 4.99), consistency (M = 4.97), and accuracy (M = 4.96). DeepSeek-R1 followed closely, with strong scores in completeness and relevance. Claude 3.5 Sonnet ranked moderately, while Gemini Advanced and Perplexity Pro showed lower performance in completeness and clarity. Significant differences were observed among chatbots across all criteria (p < 0.001). Inter-rater reliability was high (α = 0.87), confirming consistency among evaluators.

Conclusions: AI-driven chatbots demonstrated strong potential for delivering accurate and patient-friendly information about dental implant treatment. However, performance varied considerably across platforms, with ChatGPT-o1 and DeepSeek-R1 showing the highest reliability. These findings highlight the emerging role of AI chatbots as supplementary tools in dental education and patient communication, while also underscoring the need for continued validation and ethical oversight in clinical applications.
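The abstract states only that the statistical analysis was run in Python on Google Colab; the specific tests are not named here. The sketch below is a minimal illustration of how such an analysis is commonly structured, assuming a Kruskal-Wallis test for comparing Likert ratings across chatbots and Cronbach's alpha for inter-rater reliability; the data frame layout and values are hypothetical and not taken from the study.

```python
# Hedged sketch of a chatbot-rating analysis in Python (e.g., Google Colab).
# Test choices (Kruskal-Wallis, Cronbach's alpha) and the data layout are
# assumptions for illustration only.
import numpy as np
import pandas as pd
from scipy.stats import kruskal

# Hypothetical layout: one row per (chatbot, rater) with a 1-5 Likert rating
# for a single criterion (here, accuracy).
ratings = pd.DataFrame({
    "chatbot": ["ChatGPT-o1", "ChatGPT-o1", "DeepSeek-R1", "DeepSeek-R1",
                "Claude 3.5 Sonnet", "Claude 3.5 Sonnet"],
    "rater":   ["R1", "R2", "R1", "R2", "R1", "R2"],
    "accuracy": [5, 5, 5, 4, 4, 4],
})

# Non-parametric comparison of accuracy scores across chatbots
# (appropriate for ordinal Likert data).
groups = [g["accuracy"].values for _, g in ratings.groupby("chatbot")]
h_stat, p_value = kruskal(*groups)
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_value:.4f}")

def cronbach_alpha(score_matrix: np.ndarray) -> float:
    """Cronbach's alpha for an (observations x raters) score matrix."""
    scores = np.asarray(score_matrix, dtype=float)
    n_raters = scores.shape[1]
    rater_var_sum = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return n_raters / (n_raters - 1) * (1 - rater_var_sum / total_var)

# Inter-rater reliability: pivot to one column per rater, one row per response.
wide = ratings.pivot_table(index="chatbot", columns="rater", values="accuracy")
print(f"Cronbach's alpha = {cronbach_alpha(wide.values):.2f}")
```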