Abstract
The intersection of affective computing and mental well-being presents a significant frontier for artificial intelligence. Existing digital wellness tools often lack the capacity for nuanced, real-time personalization. I introduce Symphonic Mood Therapy (SMT), a novel framework and web-based application that leverages a multimodal large language model (LLM) to generate personalized therapeutic music experiences. The system processes user input, comprising natural language descriptions of the user's emotional state and, optionally, visual data (facial expressions), to perform a holistic affective analysis. This analysis informs a two-stage generative process. First, the LLM conceptualizes a bespoke "therapeutic symphony," defining its title, mood, compositional style, and specific musicological elements grounded in music therapy principles. Second, a crucial component of this concept, a distilled primaryMoodKeyword, is used as a semantic bridge to query a large-scale music catalog (the Deezer API) and retrieve a congruent audio track. This paper presents the system architecture, the formalisms behind its multimodal prompt engineering, the semantic bridging mechanism, and a hypothetical user study designed to evaluate the system's efficacy. The results suggest that this concept-driven approach provides a more resonant and therapeutically aligned user experience than traditional mood-based playlisting, demonstrating a promising direction for AI-powered mental health interventions.
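To make the semantic bridging mechanism concrete, the TypeScript sketch below illustrates how a distilled primaryMoodKeyword might be used to query Deezer's public search endpoint and retrieve a congruent track. The SymphonyConcept shape, all field names other than primaryMoodKeyword, and the retrieveCongruentTrack helper are illustrative assumptions, not the system's actual schema.

```typescript
// Hypothetical shape of the LLM's "therapeutic symphony" concept.
// Only primaryMoodKeyword is named in the paper; the rest is assumed.
interface SymphonyConcept {
  title: string;
  mood: string;
  style: string;
  primaryMoodKeyword: string; // distilled keyword used as the semantic bridge
}

// Subset of the fields Deezer's search API returns per track.
interface DeezerTrack {
  id: number;
  title: string;
  preview: string; // URL of a 30-second MP3 preview
  artist: { name: string };
}

// Query the Deezer catalog with the distilled keyword and return the
// first matching track, if any (a sketch; ranking and filtering omitted).
async function retrieveCongruentTrack(
  concept: SymphonyConcept
): Promise<DeezerTrack | undefined> {
  const query = encodeURIComponent(concept.primaryMoodKeyword);
  const response = await fetch(`https://api.deezer.com/search?q=${query}`);
  const { data } = (await response.json()) as { data: DeezerTrack[] };
  return data[0];
}
```

In this reading, the keyword acts as a narrow, LLM-curated interface between the open-ended generative concept and the fixed vocabulary of a catalog search, which is what distinguishes the approach from querying the catalog with the user's raw mood description directly.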