With the rapid spread of generative artificial intelligence (AI) tools such as ChatGPT, adolescents have unprecedented access to instant, private health information, yet little is known about how they use these tools for physical and mental health or how they view associated risks. This study explored prevalence, motivations, and perceptions of AI use for health and mental health advice among high school students.
A cross-sectional questionnaire was completed by 146 students aged 14–18 at an international high school in Tokyo, Japan. Items assessed AI use, motivations, perceived helpfulness and accuracy, preferred sources of help for serious concerns, and qualitative reflections.
Most participants (76.6%) had used AI for health questions, and 11.6% were frequent users; key motivations included convenience (52.1%), curiosity (37.7%), and privacy (19.9%). About one third reported using AI for both physical and mental health, with fewer using it for mental health alone. Ratings of helpfulness and accuracy were mixed: roughly one third found AI helpful, one fifth unhelpful, and half were neutral; about one third rated responses as moderately accurate, and none rated them as very accurate. For serious health concerns, teens would primarily approach parents or guardians, followed by clinicians and friends, with only a small minority turning first to AI. Qualitative comments highlighted AI's accessibility and privacy but also raised concerns about misinformation and safety.
AI already functions as a common entry point for adolescent health information, yet trust in its accuracy and safety is limited. Improving accuracy, transparency, and digital health literacy will be essential before AI can be safely integrated into adolescent health support ecosystems.