Transactions of the Japanese Society for Artificial Intelligence
Online ISSN : 1346-8030
Print ISSN : 1346-0714
ISSN-L : 1346-0714
Volume 36, Issue 4
Regular Papers
Original Papers
  • 武田 惇史, 鳥海 不二夫
    Article type: Original Paper
    2021 Volume 36 Issue 4 Pages A-K64_1-13
    Published: 2021/07/01
    Released: 2021/07/01
    Journal Free Access

    Games are widely used as benchmarks in artificial intelligence research because their rules are clear and they are sufficiently complex. In recent years, the game "Werewolf" has received much attention in this field. Werewolf is an incomplete-information game played through dialogue between the players. An artificial intelligence that plays Werewolf is called an "AIWolf". A platform for AIWolf has been created, and AIWolf competitions have been held to test the performance of agents running on the platform. The AIWolf competitions invite agents from general developers, with the expectation that performance will improve through collective-intelligence-style development. In this study, we first analyzed the logs of previous competitions and found that the agents from each competition have evolved to be stronger than those created before them. Based on the strategy evolution observed in this analysis, we then propose a simulation method for 5-player Werewolf to investigate whether repeated evolution converges to a dominant strategy. The simulation results show that no dominant strategy is found and that the strategies change continuously and periodically. Our results show that Werewolf is sufficiently complex to be a subject of artificial intelligence research, and we believe they will help us understand the structure of the game and discover appropriate strategies.
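The periodic, non-convergent strategy dynamics described above can be illustrated with a minimal sketch (not the authors' simulator; the payoff values are made up): a discrete replicator-style update over three strategies with a cyclic, rock-paper-scissors payoff structure, under which no single strategy can dominate and the strategy mix keeps changing.

```python
# Illustrative sketch: fitness-proportional evolution of strategy frequencies.
# A cyclic payoff structure (each strategy beats one and loses to another)
# keeps the population oscillating instead of converging.

# Payoff of row strategy against column strategy (hypothetical values).
PAYOFF = [
    [0.0, 1.0, -1.0],   # strategy A beats B, loses to C
    [-1.0, 0.0, 1.0],   # strategy B beats C, loses to A
    [1.0, -1.0, 0.0],   # strategy C beats A, loses to B
]

def step(freqs, lr=0.1):
    """One generation: multiplicative update proportional to excess fitness."""
    fitness = [sum(PAYOFF[i][j] * freqs[j] for j in range(3)) for i in range(3)]
    avg = sum(f * w for f, w in zip(fitness, freqs))
    new = [max(1e-9, w * (1.0 + lr * (f - avg))) for w, f in zip(freqs, fitness)]
    total = sum(new)
    return [w / total for w in new]

freqs = [0.5, 0.3, 0.2]
for _ in range(200):
    freqs = step(freqs)
print(freqs)  # still a mixture: no strategy has taken over
```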

  • Kentaro Kanamori, Hiroki Arimura
    Article type: Original Paper
    2021 Volume 36 Issue 4 Pages B-L13_1-10
    Published: 2021/07/01
    Released: 2021/07/01
    Journal Free Access

    In the application of machine learning models to decision-making tasks (e.g., loan approval), fairness of their predictions has emerged as an important topic in recent years. If decision-makers detect unfairness in their models during deployment, they must modify the models to satisfy constraints on a specific discrimination criterion. However, simply retraining a model from scratch under fairness constraints may raise serious reliability issues caused by differences in prediction and interpretation between the initial model and retrained model. In this paper, we propose a post-processing framework, named Fairness-Aware Decision tree Editing (FADE), that converts a given biased decision tree into a fair decision tree without significantly changing it in terms of its prediction and interpretation. For this purpose, we introduce two dissimilarity measures between decision trees based on the prediction discrepancy and edit distance. We propose a mixed-integer linear optimization formulation for minimizing the dissimilarity measures under fairness constraints. Numerical experiments on real datasets demonstrate the effectiveness of our method in comparison with existing methods.
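The two quantities such an editing framework trades off can be sketched concretely (illustrative only; FADE's actual formulation is a mixed-integer linear program over decision trees): a group-fairness gap, here demographic parity as one common discrimination criterion, and the prediction discrepancy between the initial and edited models. All data below is made up.

```python
# Illustrative sketch: a fairness gap and a model-edit discrepancy measure.

def demographic_parity_gap(preds, groups):
    """|P(y_hat = 1 | group = 0) - P(y_hat = 1 | group = 1)|."""
    rate = {}
    for g in (0, 1):
        ys = [p for p, gg in zip(preds, groups) if gg == g]
        rate[g] = sum(ys) / len(ys)
    return abs(rate[0] - rate[1])

def discrepancy(preds_a, preds_b):
    """Fraction of inputs on which the two models disagree."""
    return sum(a != b for a, b in zip(preds_a, preds_b)) / len(preds_a)

groups   = [0, 0, 0, 0, 1, 1, 1, 1]
original = [1, 1, 1, 0, 1, 0, 0, 0]   # biased: group 0 favored
edited   = [1, 1, 0, 0, 1, 1, 0, 0]   # one flipped prediction per group

print(demographic_parity_gap(original, groups))  # 0.5
print(demographic_parity_gap(edited, groups))    # 0.0
print(discrepancy(original, edited))             # 0.25
```

An editing method like FADE can be read as minimizing the second quantity (plus an edit distance on the tree itself) subject to a constraint on the first.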

  • 内田 匠, 吉田 健一
    Article type: Original Paper
    2021 Volume 36 Issue 4 Pages C-KC4_1-11
    Published: 2021/07/01
    Released: 2021/07/01
    Journal Free Access

    Many studies have reported that combining multiple recommender systems improves their accuracy. In such a combination, it is important to properly combine algorithms with different properties. However, many existing Hybrid Recommender Systems (HRS) can only combine specific algorithms. This study proposes a new HRS, the Rescoring Hybrid Recommender System (RHRS), that integrates arbitrary recommendation lists. RHRS can merge not only collaborative filtering results but also popularity rankings and new-arrival lists into one list. It has the following features: (1) it unifies the definition of an item's recommendation score across lists by scoring each item according to its position in the list; (2) it defines the combination weight of each recommendation list as a function of the recommendation situation; and (3) it optimizes these weights according to the situation. We verified RHRS on the Netflix dataset and confirmed the following: (1) RHRS achieves higher recommendation accuracy than existing HRS; (2) RHRS achieves both the accuracy of popularity rankings and the diversity of the recommendation list; and (3) RHRS recommends items to new users based on popularity rankings and uses collaborative filtering to reflect existing users' usage histories.
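The rescoring idea in features (1)–(2) can be sketched as follows (hypothetical scoring function and weights, not the authors' exact formulation): every source list gives an item a score derived from its rank, and the hybrid score is a weighted sum whose weights could depend on the situation, e.g. a new user leaning on popularity and a known user on collaborative filtering.

```python
# Illustrative sketch: position-based rescoring and weighted list fusion.

def position_score(ranked_items):
    """Score items by list position: rank 0 -> 1.0, rank 1 -> 0.5, ..."""
    return {item: 1.0 / (rank + 1) for rank, item in enumerate(ranked_items)}

def combine(lists_with_weights):
    """Weighted sum of position scores over several recommendation lists."""
    scores = {}
    for ranked, weight in lists_with_weights:
        for item, s in position_score(ranked).items():
            scores[item] = scores.get(item, 0.0) + weight * s
    return sorted(scores, key=scores.get, reverse=True)

collaborative = ["A", "B", "C"]
popularity    = ["C", "A", "D"]
# Hypothetical weights for a new user: trust popularity more.
hybrid = combine([(collaborative, 0.3), (popularity, 0.7)])
print(hybrid)  # ['C', 'A', 'D', 'B']
```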

  • 來村 徳信, 中條 亘, 笹嶋 宗彦, 師岡 友紀, 辰巳 有紀子, 荒尾 晴惠, 溝口 理一郎
    Article type: Original Paper
    2021 Volume 36 Issue 4 Pages D-K94_1-16
    Published: 2021/07/01
    Released: 2021/07/01
    Journal Free Access

    For the appropriate execution of human actions as a service, it is important to understand the goals of those actions, which are usually left implicit in sequence-oriented process representations. CHARM (an abbreviation for Convincing Human Action Rationalized Model) has been proposed for representing such goals in a goal-oriented structure, and it has been successfully applied to training novice nurses in a real hospital. Such a real-scale, general knowledge model, however, makes it difficult for learners to understand which actions are important in a specific context, such as a patient's risk of complications. The goal of this research is to realize a context-adaptive knowledge-structuring mechanism that emphasizes actions needing special attention in a given context. As an extension of the CHARM framework, the authors have developed a general mechanism based on multi-goal action models and pathological mechanism models of abnormal phenomena. It has been implemented as a software system on tablet devices called CHARM Pad. We have also described knowledge models for the nursing domain, including pathological mechanism models of complications and their risk factors. CHARM Pad with these models was used by nursing students and evaluated through questionnaires. The results show that CHARM Pad helped them understand the goals of nursing actions and find symptoms of complications in a context-adaptive way.
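The context-adaptive emphasis described above can be sketched with a toy structure (hypothetical names and fields, not the CHARM Pad internals): each action in a goal-oriented model carries the goal it serves and the risks it relates to, and an action is emphasized when one of its risks is active in the current context, such as a patient's complication risk.

```python
# Illustrative sketch: a goal-oriented action model with context-based emphasis.
from dataclasses import dataclass, field

@dataclass
class Action:
    name: str
    goal: str                                  # the implicit goal made explicit
    related_risks: set = field(default_factory=set)
    children: list = field(default_factory=list)

def emphasized(action, active_risks):
    """Collect actions needing special attention in the given context."""
    hits = []
    if action.related_risks & active_risks:
        hits.append(action.name)
    for child in action.children:
        hits.extend(emphasized(child, active_risks))
    return hits

care = Action("post-operative care", "ensure safe recovery", children=[
    Action("check wound", "detect infection early", {"infection"}),
    Action("measure temperature", "detect infection early", {"infection"}),
    Action("encourage walking", "prevent thrombosis", {"thrombosis"}),
])

print(emphasized(care, {"infection"}))  # ['check wound', 'measure temperature']
```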

  • Seiya Kawano, Koichiro Yoshino, Satoshi Nakamura
    Article type: Original Paper
    2021 Volume 36 Issue 4 Pages E-KC9_1-14
    Published: 2021/07/01
    Released: 2021/07/01
    Journal Free Access

    Building a controllable neural conversation model (NCM) is an important task. In this paper, we focus on controlling the responses of NCMs by conditioning on the dialogue act labels of responses. We introduce a reinforcement learning framework involving adversarial learning for conditional response generation. Our proposed method has a new label-aware objective that encourages the generation of responses discriminative with respect to the given dialogue act label while maintaining the naturalness of the generated responses. We compared the proposed method with conventional methods for conditional response generation. The experimental results show that our method achieves higher controllability over dialogue acts while keeping naturalness comparable to or higher than that of the conventional models.
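The shape of such a label-aware objective can be sketched numerically (hypothetical numbers and weighting; the paper's actual objective is defined over an adversarial discriminator and a dialogue-act classifier trained jointly): the generator is rewarded both for producing natural-looking text and for producing a response the classifier assigns to the target dialogue act.

```python
# Illustrative sketch: blending a naturalness signal with a controllability signal.

def label_aware_reward(p_natural, p_act_given_response, weight=0.5):
    """Weighted blend of naturalness and dialogue-act controllability."""
    return weight * p_act_given_response + (1.0 - weight) * p_natural

# Hypothetical candidate responses for the target act "question",
# scored as (naturalness discriminator, act classifier) probabilities:
candidates = {
    "natural but off-act": (0.9, 0.2),
    "on-act but awkward":  (0.4, 0.9),
    "natural and on-act":  (0.8, 0.8),
}
rewards = {k: label_aware_reward(d, a) for k, (d, a) in candidates.items()}
best = max(rewards, key=rewards.get)
print(best)  # 'natural and on-act'
```

The blended reward favors responses that satisfy both signals rather than trading one off entirely against the other.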
