We propose a model of working memory informed by object-file theory in cognitive science. The model is expressed as a generative model using a Bayesian network. We implemented a prototype cognitive architecture equipped with this working memory and obtained promising indications that behavior rules can be described concisely and with high generality.
To enable non-experts to operate robots in human environments, we focus on instruction following, that is, operating a robot through natural language instructions. The recently released ALFRED dataset is the first large-scale dataset annotated with both high-level instructions specifying a task and low-level instructions describing the action the robot should take at each step. The robot must achieve tasks while observing photo-realistic images and interacting with objects in the environment, which requires long action sequences; however, the ALFRED baseline is not robust in such long-horizon settings. In this work, we aim to build a robot that follows natural language instructions in a realistic environment using ALFRED. We propose a method that splits a task into easier sub-tasks by utilizing the natural language instructions, and an auxiliary task that predicts abstract high-level actions to make the robot robust to long-horizon settings. Our experiments show that these methods improve the task success rate.
In the first half of this paper, I discuss concepts surrounding the term "fluid intelligence," which appears frequently in the literature on general intelligence. In the second half, I discuss tasks for testing working memory, a cognitive function considered essential for realizing fluid intelligence, and propose starting with a sample-matching task.
From the viewpoint of the Bayesian brain hypothesis, a Bayesian network model of the cerebral cortex is promising not only for the computational modeling of the brain but also as a basis for efficient, brain-like artificial intelligence. A notorious drawback of Bayesian networks, however, is that the number of parameters grows exponentially with the number of parent variables of a random variable. Restricting the model is one solution to this problem. Motivated by biological plausibility, we previously proposed using a combination of noisy-OR and noisy-AND gates, whose parameter counts grow only linearly with the number of parent variables. Although we showed that this model can exhibit translation invariance in a small-scale setting, scaling it up was difficult because of the hidden variables. In this study, we extend the previous attempt by employing a variational learning method to overcome the intractability of estimating the massive number of hidden variables, and we scale the model up to learn handwritten digit data.
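To illustrate the linear parameter growth mentioned above, here is a minimal sketch of the standard noisy-OR gate (the abstract's noisy-AND is its logical dual). This is a generic textbook formulation, not the authors' actual model: the function name, the per-parent inhibition probabilities `q_i`, and the optional leak term are illustrative assumptions. A full conditional probability table over n binary parents needs 2^n entries, whereas the noisy-OR needs only one parameter per parent.

```python
import numpy as np

def noisy_or(parent_states, inhibition, leak=0.0):
    """P(child = 1 | parents) under a noisy-OR gate.

    parent_states: 0/1 values of the n parent variables
    inhibition:    per-parent inhibition probabilities q_i -- one parameter
                   per parent, so storage grows linearly in n (vs. 2^n
                   entries for a full conditional probability table)
    leak:          probability the child fires even with all parents off
    """
    u = np.asarray(parent_states, dtype=float)
    q = np.asarray(inhibition, dtype=float)
    # The child stays off only if every active parent's influence is
    # independently inhibited and the leak cause does not fire.
    p_off = (1.0 - leak) * np.prod(q ** u)
    return 1.0 - p_off

# Example: two active parents with q = 0.2 and 0.5, one inactive parent.
p = noisy_or([1, 1, 0], [0.2, 0.5, 0.9])  # 1 - 0.2 * 0.5 = 0.9
```

With all parents off and zero leak the gate returns 0, and each additional active parent can only raise the firing probability, which matches the intuition of independent causes combining disjunctively.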
In [1] we discussed the theory of artificial general intelligence and established there a foundation of mathematics. Here, after establishing quantum Yang-Mills theory, we discuss the time paradox and the time machine. Through this theory, we may use our artificial general intelligence and develop the discussion on good data by way of what we call the role playing of bullying and the positive spiral.