This paper argues that teaching in the age of generative AI must treat epistemic justice and student well-being as co-equal design constraints. Rather than centering tools, we examine how classroom practices allocate credibility: whose voices are believed, which interpretive resources are legible, and how policy climates affect participation. We show how unreflective AI use can narrow expression and misread competence, while prohibition often coexists with covert use that erodes trust. To address these tensions, we propose a justice-and-care framework that works under prohibition and scales to guided, declared use. Core routines make reasoning visible and gradeable without detectors: a short pre-ideation record, a concise and transparent account of model mediation (AI as mediation, not evidence), an embedded verification paragraph calibrated to claim stakes, and a brief oral micro-defense. We extend TPACK to AI-TPACK Plus, adding domains of algorithmic awareness and affect/ethics, and tie these to assessable behaviors via mode-agnostic rubrics. The chapter suite details course design for data-analysis tasks, care-centered data governance, equity-first resourcing, and implementation pathways ("one design, two routes") from prohibition to teach-to-use. Evaluation integrates learning and climate indicators (voice distribution, hermeneutical breadth, and verification behavior) alongside motivation metrics aligned with self-determination theory. The approach aligns with national guidance (e.g., MEXT) and international principles (OECD/UNESCO) while incorporating locally validated rubrics (e.g., Kano, 2025). We conclude that authenticity is best established through process evidence and dialogue, not automated detection, and that centering voice, plurality, transparency, and psychological safety provides a practical path from policy to classroom practice.