JSAI Type 2 SIG Technical Reports (人工知能学会第二種研究会資料)
Online ISSN: 2436-5556
Vol. 2025, No. ALIFE-010
The 10th Meeting of the SIG on Artificial Life (第10回人工生命研究会)
  • 松崎 天, 一ノ瀬 元喜
    Article type: SIG material
    2025, Vol. 2025, No. ALIFE-010, p. 02-
    Published: 2025/11/17
    Released: 2025/11/17
    Research report / Technical report, Free access

    Human societies have flourished through cooperation among individuals, yet the question of how cooperation emerges and evolves remains central to understanding social behavior. While previous studies have shown that network structures can promote cooperation, it is still unclear whether this effect persists in contexts that require sustained cooperative interactions. In this study, we used the centipede game to model such sustained cooperation and conducted evolutionary simulations on various networks. Our results show that the presence of network structure generally promotes cooperation compared to well-mixed populations. In particular, small-world networks and scale-free networks with accumulated payoffs exhibited higher levels of cooperation. This enhancement can be attributed to the formation of cooperator clusters and the diffusion of cooperative strategies.
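
The sustained-cooperation setting above can be illustrated with a minimal centipede-game payoff function. The pot growth rate, the taker's share, and the number of rounds below are illustrative assumptions, not the paper's parameters.

```python
def centipede_payoffs(stop1, stop2, rounds=4, grow=2.0):
    """Payoffs of a two-player centipede game between pure strategies.

    stop1/stop2: the first decision node (1-based) at which each player
    would take the pot; player 1 moves at odd nodes, player 2 at even.
    The pot multiplies by `grow` each time the mover continues.
    (Payoff scheme is illustrative; the paper's values are not stated.)
    """
    pot = 1.0
    for node in range(1, 2 * rounds + 1):
        mover_stop = stop1 if node % 2 == 1 else stop2
        if node >= mover_stop:  # the current mover takes the pot
            taker, other = 0.8 * pot, 0.2 * pot
            return (taker, other) if node % 2 == 1 else (other, taker)
        pot *= grow
    # both players cooperated to the end: split the final pot evenly
    return pot / 2, pot / 2
```

Mutual cooperation to the end pays far more than stopping early, which is what sustains the dilemma on networked populations.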

  • 稲垣 星那, 酒井 瑞樹, 一ノ瀬 元喜
    Article type: SIG material
    2025, Vol. 2025, No. ALIFE-010, p. 03-
    Published: 2025/11/17
    Released: 2025/11/17
    Research report / Technical report, Free access

    In society, social dilemmas frequently arise in which individual interests conflict with collective interests. Although people often cooperate at a personal cost, factors such as personality and cultural background are thought to influence their cooperative decisions. However, previous studies with human participants have reported no significant relationship between nationality and cooperative behavior. To further explore the potential influence of nationality, we investigated whether "nationality," introduced purely as a social contextual cue, affects the cooperative behavior of Large Language Models (LLMs). We conducted social dilemma games in which LLM agents were assigned either a "Japanese" or "American" nationality. Across all tested models, assigning a Japanese nationality consistently increased cooperation rates, whereas assigning an American nationality decreased them. Furthermore, in repeated public goods games, o1 with an American nationality increased cooperation when interacting with cooperative Japanese-assigned o1, whereas GPT-5 with a Japanese nationality decreased cooperation when paired with non-cooperative American-assigned GPT-5.
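
The public goods games mentioned above follow the standard payoff structure sketched below; the endowment and the multiplication factor `r` are illustrative assumptions, not the paper's settings.

```python
def pgg_payoffs(contributions, endowment=10.0, r=1.6):
    """Payoffs in a one-shot public goods game.

    Each agent contributes part of its endowment; the pooled amount is
    multiplied by r (1 < r < group size) and shared equally, so free
    riding is individually optimal while full contribution maximizes
    the group's total payoff. (Parameter values are illustrative.)
    """
    n = len(contributions)
    share = r * sum(contributions) / n
    return [endowment - c + share for c in contributions]
```

With two agents, a free rider paired with a full contributor earns more than the contributor, which is the dilemma the assigned "nationality" cue modulates in the LLM experiments.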

  • 石田 笙真, 一ノ瀬 元喜
    Article type: SIG material
    2025, Vol. 2025, No. ALIFE-010, p. 04-
    Published: 2025/11/17
    Released: 2025/11/17
    Research report / Technical report, Free access

    In recent years, large language models (LLMs) have rapidly improved their reasoning abilities, and their application to Theory of Mind (ToM)-like reasoning, which aims to infer others' beliefs and intentions, has gained much attention. Previous studies have compared the depth of ToM reasoning in LLMs, but how such differences affect actual behavior has not been sufficiently examined. In this study, we analyzed how the depth of ToM reasoning (levels 0-3) introduced into LLMs influences strategic behavior and outcomes in the incomplete-information game Leduc Hold'em. We constructed LLM agents whose ToM depth was recursively defined and controlled through prompting, and conducted games between agents of different depths. The results showed that the behaviors of each ToM-level agent were consistent with reasonable strategies expected at their respective levels of reasoning depth. At level 0, agents acted simply based on hand strength without considering the opponent's intentions. At level 1, agents tended to fold more often when they interpreted an opponent's raise as a sign of confidence. At level 2, agents exploited the level-1 reasoning of opponents and sometimes acted weak even with strong hands. At level 3, agents anticipated that the opponent might use level-2 reasoning and therefore called more often to guard against bluffs. Moreover, as the ToM level increased, agents showed a stronger tendency to aim for the second round, making greater use of available information. Consequently, the amount of information used for reasoning increased, and the average number of chips won per victory also tended to rise.
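
As a rough illustration of the recursive level-k structure described above, a hand-coded decision rule might look like the following. This is only a sketch of the level-by-level behaviors the abstract reports; the study's agents derive their actions from prompted LLM reasoning, not from fixed rules, and the threshold values here are invented.

```python
def level_k_action(level, my_strength, opponent_raised, threshold=0.5):
    """Illustrative level-k decision rule (not the paper's prompting scheme).

    level 0: act on own hand strength only.
    level 1: treat an opponent raise as a sign of strength and fold more.
    level 2: exploit level-1 folding by bluff-raising weak hands.
    level 3: anticipate level-2 bluffs and call raises to guard against them.
    """
    if level == 0:
        return "raise" if my_strength >= threshold else "call"
    if level == 1:
        if opponent_raised and my_strength < threshold + 0.2:
            return "fold"
        return level_k_action(0, my_strength, opponent_raised, threshold)
    if level == 2:
        if my_strength < threshold:
            return "raise"  # bluff to exploit a level-1 opponent's folds
        return level_k_action(1, my_strength, opponent_raised, threshold)
    # level 3: guard against level-2 bluffs by calling raises
    if opponent_raised:
        return "call"
    return level_k_action(2, my_strength, opponent_raised, threshold)
```

Each level falls back on the reasoning of the level below it when its extra inference does not apply, mirroring the recursive definition of ToM depth used in the prompts.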

  • 酒井 瑞樹, 舘石 和香葉, 一ノ瀬 元喜
    Article type: SIG material
    2025, Vol. 2025, No. ALIFE-010, p. 05-
    Published: 2025/11/17
    Released: 2025/11/17
    Research report / Technical report, Free access

    As Large Language Models (LLMs) are deployed as autonomous agents beyond their role as text generation tools, their interactions can become unpredictable or exploitative, potentially leading to unintended conflicts. As a solution to this problem, personality has been proposed as a framework to guide LLM behaviour, yet previous studies rely largely on qualitative analyses with limited quantitative work. Therefore, in this study, we investigate the causal relationship between personality traits and cooperative behaviour under quantitative and controlled conditions using the Big Five personality traits, which describe personality in five dimensions. First, we measure each model's baseline with the BFI-44. Then, we analyse behaviour in the iterated Prisoner's Dilemma under the measured baseline condition. Finally, we manipulate each Big Five personality trait individually to extreme values to verify causal effects. Results show that agreeableness most strongly promotes cooperation across models, aligning with previous studies. However, the effects of conscientiousness are model-dependent. Specifically, GPT-3.5-turbo shows behaviour consistent with previous studies, GPT-4o shows minimal or almost no effect, and GPT-5 yields counterintuitive results. These findings suggest that newer models employ strategic reasoning beyond simple personality-driven decisions.
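
The iterated Prisoner's Dilemma used in this setting has the standard structure sketched below. The payoff matrix (T=5, R=3, P=1, S=0) and the two fixed strategies are illustrative; in the study the moves come from prompted LLM agents rather than hard-coded rules.

```python
# Standard PD payoffs: (row player, column player); illustrative values
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def iterated_pd(strat1, strat2, rounds=10):
    """Play an iterated Prisoner's Dilemma between two strategy callables.

    Each strategy receives the opponent's previous move ("C"/"D",
    or None on the first round) and returns its own move.
    """
    score1 = score2 = 0
    prev1 = prev2 = None
    for _ in range(rounds):
        m1, m2 = strat1(prev2), strat2(prev1)
        p1, p2 = PAYOFF[(m1, m2)]
        score1, score2 = score1 + p1, score2 + p2
        prev1, prev2 = m1, m2
    return score1, score2

tit_for_tat = lambda prev: "C" if prev is None else prev
always_defect = lambda prev: "D"
```

Mutual tit-for-tat sustains the cooperative payoff stream, while a defector gains only a one-round advantage; trait manipulations shift which of these regimes an LLM agent settles into.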

  • 原 匠, 有田 隆也, 鈴木 麗璽
    Article type: SIG material
    2025, Vol. 2025, No. ALIFE-010, p. 06-
    Published: 2025/11/17
    Released: 2025/11/17
    Research report / Technical report, Free access

    Large Language Models (LLMs) show advanced cognitive functions, but how these abilities form remains unclear. This study investigates the adaptive evolution of LLMs to understand how task characteristics shape their capabilities. We use GENOME (Zhang et al. 2025), a genetic algorithm, to evolve LoRA adapters of LLMs from near-zero performance on two distinct tasks: MMLU (knowledge) and ToMBench (social reasoning). Our experiments reveal three key findings. First, the environments induced different fitness trajectories, implying qualitatively distinct adaptive landscapes. Second, VAE-based genotype visualization showed that evolutionary paths initially overlap and then diverge, suggesting a hierarchical acquisition of skills from general to specific. Third, cross-evaluation revealed asymmetric generalization; models adapted to MMLU partially generalized to ToMBench with trade-offs in complex reasoning abilities, while the reverse was not observed. This suggests broad knowledge may be foundational for particular social reasoning abilities. Our evolutionary approach offers new insights into how LLMs form complex capabilities.
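
The evolutionary loop underlying such adapter evolution can be sketched in miniature, with plain parameter vectors standing in for LoRA weight matrices and a toy fitness standing in for benchmark accuracy. The selection, crossover, and mutation operators below are generic illustrations, not GENOME's actual implementation.

```python
import random

def evolve(fitness, dim=16, pop_size=20, gens=30, sigma=0.1, seed=0):
    """Minimal genetic algorithm over real-valued parameter vectors.

    Truncation selection keeps the best half as parents (elitism),
    uniform crossover mixes two parents per child, and Gaussian noise
    mutates each child's parameters. (All operators are illustrative
    stand-ins for evolving LoRA adapters against a benchmark score.)
    """
    rng = random.Random(seed)
    pop = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        parents = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            children.append([x + rng.gauss(0, sigma) for x in child])
        pop = parents + children
    return max(pop, key=fitness)

# toy fitness: maximize closeness to the all-ones vector
best = evolve(lambda v: -sum((x - 1) ** 2 for x in v))
```

Running two such loops with different fitness functions and comparing their trajectories is the population-level analogue of the MMLU-versus-ToMBench comparison in the study.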

  • 三浦 凜太郎, 有田 隆也, 鈴木 麗璽
    Article type: SIG material
    2025, Vol. 2025, No. ALIFE-010, p. 07-
    Published: 2025/11/17
    Released: 2025/11/17
    Research report / Technical report, Free access

    Fitness landscapes conceptualize the distribution of fitness in genotype space as a topography and have been widely used in analyzing biological evolution and optimization problems. This study proposes a novel fitness landscape model that replaces gene representations in Kauffman's NK fitness landscape with word sequences and utilizes an LLM to evaluate syntactic and semantic structures. Evolutionary experiments using genetic algorithms revealed that syntactic and semantic structures exert qualitatively different influences on fitness landscapes. In syntactic structure-based models, increasing interactions among genes significantly restricted the solution space through the strict constraints of grammatical rules, which may capture biological evolution based on physical constraints. In semantic structure-based models, increasing interactions enhanced opportunities to generate novel value from combinations of concepts, demonstrating a mechanism that may capture aspects of the evolution of concepts in language and culture.
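
For reference, the classical NK landscape that the proposed model builds on can be sketched as follows. The paper replaces the discrete genes with word sequences and the random contribution tables with LLM-based evaluation of syntax and semantics; this sketch shows only the standard base model.

```python
import itertools
import random

def make_nk_landscape(n, k, alphabet=2, seed=0):
    """Kauffman NK fitness landscape over genomes of n loci.

    Locus i's fitness contribution depends on locus i and its k
    right-hand neighbours (cyclic), looked up in a random table;
    overall fitness is the mean contribution, so it lies in [0, 1).
    Larger k means more epistatic interaction and a more rugged landscape.
    """
    rng = random.Random(seed)
    tables = [
        {combo: rng.random()
         for combo in itertools.product(range(alphabet), repeat=k + 1)}
        for _ in range(n)
    ]
    def fitness(genome):
        total = sum(
            tables[i][tuple(genome[(i + j) % n] for j in range(k + 1))]
            for i in range(n)
        )
        return total / n
    return fitness
```

In the proposed model, the per-locus lookup tables are replaced by an LLM's judgment of how well a word fits its grammatical or conceptual context, so the interaction parameter k controls how many neighbouring words each evaluation considers.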
