Abstract
In short-term game development projects such as game jams, student teams often struggle to balance creativity with implementation under strict time limits and high coordination costs. This study investigates the use of large language models (LLMs) within a modular development workflow for student game projects built in Unity. We first introduce the principles of modular development and of LLMs, then categorize mainstream tools into three groups—programming assistants, AI agents, and conversational AI—analyzing the strengths and appropriate use cases of each. We then examine three core subsystems—core gameplay logic, user interface (UI), and tools and pipeline automation—evaluating each in terms of code generation, cross-module collaboration, and validation. Finally, we present practical guidelines covering context provision, iterative decomposition, tool chaining, and prompt optimization, and we discuss limitations in areas such as performance tuning and debugging of closed-source components. Our findings suggest that LLMs can substantially shorten development cycles for well-defined, template-based modules, but that performance-sensitive tasks and contexts requiring global consistency still demand human oversight. This research provides a reference framework for both educational practice and the design of future AI-driven game development platforms.