Abstract
This paper proposes TAK10, a design-oriented framework for prompt-level control of artificial personality in large language models (LLMs). Although LLMs are stateless at the parameter level, TAK10 introduces an explicit state-transition mechanism that modulates personality weights across dialogue turns without modifying the model's internal parameters. Personality is represented as a constrained weight vector over multiple modes and is updated through bounded adjustment, normalization, and periodic resynchronization, balancing consistency with controlled variability. A central component is the E_{score} mechanism, a scoring-based output-control model that evaluates responses along dimensions such as reference reliability, consistency, factual stability, validation breadth, and contextual alignment. Through threshold-based regulation, E_{score} supports structured response moderation and enhances transparency. Rather than serving as a statistical inference model, E_{score} functions as a design-level scoring framework that promotes reliability awareness in generative systems. From a data-science perspective, the personality weight vector can be interpreted as a constrained state variable on a normalized simplex, and the update rule resembles a bounded discrete-time state-transition system. TAK10 is presented as an architectural contribution; empirical validation and comparative evaluation are reserved for future work.
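The two mechanisms named above — a bounded update on a normalized simplex and threshold-based E_{score} regulation — can be sketched as follows. This is a minimal illustration, not TAK10's specification: the function names, the clipping bound `delta_max`, the equal weighting of dimensions, and the threshold value are all illustrative assumptions.

```python
def update_personality(w, delta, delta_max=0.1):
    """One bounded discrete-time update of the personality weight vector.

    w is a list of non-negative mode weights summing to 1 (a point on the
    simplex); delta is a per-mode adjustment signal derived from the dialogue
    turn. delta_max is a hypothetical per-turn bound, assumed here.
    """
    # Bounded adjustment: clip each mode's change to [-delta_max, +delta_max]
    bounded = [wi + max(-delta_max, min(delta_max, di))
               for wi, di in zip(w, delta)]
    # Keep weights non-negative, then renormalize back onto the simplex
    clipped = [max(0.0, wi) for wi in bounded]
    total = sum(clipped)
    return [wi / total for wi in clipped]

def e_score(dimension_scores, threshold=0.7):
    """Aggregate per-dimension scores (e.g. reference reliability,
    consistency, factual stability, validation breadth, contextual
    alignment) and gate the response on a threshold. Equal weighting
    and the 0.7 threshold are illustrative assumptions."""
    score = sum(dimension_scores) / len(dimension_scores)
    return score, score >= threshold

w = [0.5, 0.3, 0.2]                          # three hypothetical modes
w = update_personality(w, [0.2, -0.05, 0.0])  # stays on the simplex
score, passed = e_score([0.9, 0.8, 0.7, 0.6, 0.8])
```

The normalization step after clipping is what keeps the weight vector a valid state variable on the simplex, matching the abstract's reading of the update rule as a bounded discrete-time state-transition system.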