2026 Volume 21 Issue 1 Pages 9-19
Objective: This study aimed to assess and compare the readability, understandability, and actionability of Japanese-language radiation-related health information concerning fetuses and children, as provided by web-based sources and AI chatbot-generated content. Furthermore, this study explored the potential of AI tools to improve access to health information in rural and underserved regions.
Materials and Methods: We analyzed 40 publicly accessible Japanese webpages and 30 AI-generated texts produced by ChatGPT (paid and free versions), Copilot, and Gemini. Two prompt types were used: one at the standard reading level and the other at the 6th-grade reading level. Texts were evaluated using the Japanese version of the Patient Education Materials Assessment Tool for Printable Materials (PEMAT-P) to assess understandability and actionability, and jReadability to evaluate text complexity.
Results: At the standard reading level, 46.7% of the ChatGPT-4o texts and 78.6% of the Gemini texts achieved PEMAT-P scores ≥70. At the 6th-grade level, all AI-generated texts exceeded this threshold. The AI texts were consistently easier to read than the web-based materials, and the paid version of ChatGPT-4o generated slightly more comprehensible text than its free counterpart. However, both the AI-generated and web-based content lacked sufficient actionable elements and visual support. Among the chatbots, Gemini produced the most user-friendly content, whereas Copilot exhibited notable limitations in coherence and clarity.
Conclusion: Even free AI chatbots can generate health information that is easy to read and understand when guided by well-designed prompts. These tools have the potential to reduce health information disparities, especially in rural areas or during disasters, when access to professional medical consultation may be limited. Future studies should address the accuracy, reliability, and practical implementation of AI-generated content in real-world health communication.