Generative AI
Online ISSN: 2759-0321
Latest issue
Showing 1-8 of the 8 articles in the selected issue
  • 2025 Volume 3 p. 0-
    Publication date: 2025
    Released online: 2025/12/02
    Commentary and general information journal, Open Access
  • Addressing Epistemic Injustice and Supporting Student Well-Being
    Hiroko KANOH
    2025 Volume 3 p. 1-22
    Publication date: 2025/11/27
    Released online: 2025/12/02
    Commentary and general information journal, Open Access
    This paper argues that teaching in the age of generative AI must treat epistemic justice and student well-being as co-equal design constraints. Rather than centering tools, we examine how classroom practices allocate credibility: whose voices are believed, which interpretive resources are legible, and how policy climates affect participation. We show how unreflective AI use can narrow expression and misread competence, while prohibition often coexists with covert use that erodes trust. To address these tensions, we propose a justice-and-care framework that works under prohibition and scales to guided, declared use. Core routines make reasoning visible and gradeable without detectors: a short pre-ideation record, concise transparent model mediation (AI as mediation, not evidence), an embedded verification paragraph calibrated to claim stakes, and a brief oral micro-defense. We extend TPACK to AI-TPACK Plus, adding domains of algorithmic awareness and affect/ethics, and tie these to assessable behaviors via mode-agnostic rubrics. The chapter suite details course design for data-analysis tasks, care-centered data governance, equity-first resourcing, and implementation pathways ("one design, two routes") from prohibition to teach-to-use. Evaluation integrates learning and climate indicators—voice distribution, hermeneutical breadth, and verification behavior—alongside motivation metrics aligned with self-determination theory. The approach aligns with national guidance (e.g., MEXT) and international principles (OECD/UNESCO) while incorporating locally validated rubrics (e.g., Kano, 2025). We conclude that authenticity is best established through process evidence and dialogue, not automated detection, and that centering voice, plurality, transparency, and psychological safety provides a practical path from policy to classroom practice.
  • Irene C. Taguinod, Gazala Yusufi
    2025 Volume 3 p. 23-35
    Publication date: 2025/11/27
    Released online: 2025/12/02
    Commentary and general information journal, Open Access
    AI tools have gained immense popularity within a span of a few years, and ChatGPT is one such tool that has created a buzz among students and academicians. This descriptive study examines ChatGPT from the perspective of academicians, gathering their views on its usage and its effects on academic integrity in education. Respondents were selected through convenience sampling: lecturers from different parts of the world who have used ChatGPT answered a structured questionnaire composed of Likert-scale and open-ended questions. The questionnaire focused on users' perceptions of ChatGPT in terms of usefulness, accuracy, speed, security, and accessibility; the pros and cons of its use; its drawbacks; and suggested solutions to those drawbacks. The study recommends that academic institutions follow the UNESCO framework for Artificial Intelligence in Education and benchmark their AI policies against well-established policies from other countries. Institutions should also emphasize security in AI use, particularly the identification and mitigation of security risks; develop new software for accurately detecting AI-generated text; reform assessment methods and criteria to lower dependency on AI and strengthen learners' capabilities; conduct awareness programs for teachers and students on AI usage and dependency control; and enhance the capability of plagiarism software to identify AI-generated text.
  • Reflections on Generative AI in Teaching and Learning
    Keirah Comstock
    2025 Volume 3 p. 36-39
    Publication date: 2025/11/27
    Released online: 2025/12/02
    Commentary and general information journal, Open Access
    This paper shares how one private university in the United States created, developed, and launched a custom-designed, AI-powered coursebot to support students' learning during a regular school term. The study found that the coursebot benefits students by providing flexible, anytime-anywhere access to their area of study within specific topics and themes, serving as a learning support tool. It also identified several challenges, including issues of accuracy and equitable opportunities for students. The paper walks through the journey of the AI-driven coursebot's invention, the pros and cons of using it, and related AI ethics issues and concerns.
  • Paul Kamau
    2025 Volume 3 p. 40-52
    Publication date: 2025/11/27
    Released online: 2025/12/02
    Commentary and general information journal, Open Access
    The intersection of affective computing and mental well-being presents a significant frontier for artificial intelligence. Existing digital wellness tools often lack the capacity for nuanced, real-time personalization. I introduce Symphonic Mood Therapy (SMT), a novel framework and web-based application that leverages a multimodal large language model (LLM) to generate personalized therapeutic music experiences. The system processes user input, comprising both natural-language descriptions of the user's emotional state and optional visual data (facial expressions), to perform a holistic affective analysis. This analysis informs a two-stage generative process. First, the LLM conceptualizes a bespoke "therapeutic symphony," defining its title, mood, compositional style, and specific musicological elements grounded in music therapy principles. Second, a crucial component of this concept, a distilled primaryMoodKeyword, is used as a semantic bridge to query a large-scale music catalog (Deezer API) and retrieve a congruent audio track. This paper presents the system architecture, the formalisms behind its multimodal prompt engineering, the semantic bridging mechanism, and a hypothetical user study designed to evaluate its efficacy. The results suggest that this concept-driven approach provides a more resonant and therapeutically aligned user experience than traditional mood-based playlisting, demonstrating a promising direction for AI-powered mental health interventions.
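    A minimal sketch (Python) of the semantic bridging step described in this abstract, under stated assumptions: only primaryMoodKeyword and the Deezer catalog are named in the paper; the other concept fields, the single-result query, and the helper find_congruent_track are illustrative, not the author's implementation.

    # Semantic bridge: distilled mood keyword -> congruent track from the Deezer catalog.
    import requests

    def find_congruent_track(therapy_concept):
        """Query Deezer's public search endpoint with the concept's primaryMoodKeyword."""
        keyword = therapy_concept["primaryMoodKeyword"]
        resp = requests.get(
            "https://api.deezer.com/search",
            params={"q": keyword, "limit": 1},  # one matching track is enough for a sketch
            timeout=10,
        )
        resp.raise_for_status()
        tracks = resp.json().get("data", [])
        return tracks[0] if tracks else None

    # Hypothetical first-stage LLM output, for illustration only.
    concept = {"title": "Evening Calm", "mood": "serene",
               "style": "ambient piano", "primaryMoodKeyword": "calm piano"}
    track = find_congruent_track(concept)
    if track:
        print(track["title"], "-", track["artist"]["name"])

    Keeping the distilled keyword as the only coupling point between the LLM concept and the catalog means the retrieval backend could be swapped without changing the first-stage prompt.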
  • 石田 拓也
    2025 Volume 3 p. 53-63
    Publication date: 2025/11/27
    Released online: 2025/12/02
    Commentary and general information journal, Open Access
    In recent years, behavioral economics and psychology have been revealing that human investment behavior is subject to many psychological problems, such as biases. To prevent investment behavior from deteriorating when an investor's state of mind is disturbed, an AI provides psychological support to the investor. (It never gives advice on the value of individual securities in exchange for compensation; investment decisions are made by the investor alone.) The AI quantifies the investor's personality, thinking, and emotions from daily dialogue and biometric data. When, through these interactions, it detects a deterioration in the investor's psychological state, it conducts dialogue based on cognitive behavioral therapy to ease the investor's mind. (These methods are supervised by specialists, and if the investor's mental state is critical or does not improve, the AI arranges consultation with a counselor or physician.) The quantified psychological-state data obtained from investors, together with impressions of financial products and the overall market economy gathered through broad listening, serve as survey-like data for product distributors and as data well suited for market operators and national agencies to grasp economic conditions.
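    A minimal sketch (Python) of the escalation flow this abstract outlines, assuming a hypothetical 0-1 distress score derived from dialogue and biometric signals; the score, thresholds, and function name are illustrative only, and, as the author notes, the actual methods are supervised by specialists and no securities advice is given.

    # Hypothetical flow: quantified mental state -> CBT-based dialogue or referral.
    CRITICAL = 0.9       # assumed level requiring immediate referral
    DETERIORATED = 0.6   # assumed level at which CBT-based dialogue starts

    def support_step(distress_history):
        """Choose the next support action from recent distress scores (0.0-1.0)."""
        current = distress_history[-1]
        if current >= CRITICAL:
            return "refer the investor to a counselor or physician"
        if current >= DETERIORATED:
            recent = distress_history[-3:]
            # No improvement across the last three check-ins: hand off to a professional.
            if len(recent) == 3 and min(recent) >= DETERIORATED:
                return "refer the investor to a counselor or physician"
            return "start CBT-based supportive dialogue"
        return "continue routine check-ins (no individual investment advice is given)"

    print(support_step([0.3, 0.65, 0.7]))  # -> start CBT-based supportive dialogue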
  • Takaya Endo
    2025 Volume 3 p. 64
    Publication date: 2025/11/27
    Released online: 2025/12/02
    Commentary and general information journal, Open Access
  • Masayoshi YASUMOTO
    2025 Volume 3 p. 65-69
    Publication date: 2025/11/27
    Released online: 2025/12/02
    Commentary and general information journal, Open Access
    In today’s rapidly changing organizations, effective human resource development is as important as technology. Leveraging individual strengths is essential for leadership and followership. This study introduces a team-building approach that combines Gallup’s CliftonStrengths assessment with generative AI. Participants identified and discussed their top strengths, then used AI to receive personalized, actionable feedback on applying these strengths and enhancing team performance. The AI facilitated both self-reflection and team-level planning, supporting practical application and discussion, even for those less comfortable expressing ideas. Results suggest that integrating strengths assessment with AI enhances self- and mutual understanding, providing a sustainable tool for ongoing learning. Future research should investigate long-term effects and optimal models of human–AI collaboration.