Time series data can be divided into basic strings in which no component appears more than once. In a previous paper, a neural network that processes these basic strings was presented. By hierarchically connecting such neural networks, general time series data can be processed. The neural network presented in this paper has the function of dividing general time series data into basic strings, and by imposing a grammatical structure on the hierarchical connections, both learning and generation of time series data can be realized. We aim at a neural model of animals that act adaptively without advanced pattern recognition abilities.
With deep learning, artificial intelligence machines have acquired "eyes" that may allow them to understand physical concepts. Today's research on intuitive physics has approached three kinds of subjects separately: the recognition of an object's physical properties, the understanding of physical laws, and action generation based on physical inference. However, even if all of these subjects were integrated, the result would only reach the level of an infant a few months after birth, which is far from an understanding of Newton's laws. In response, we propose a virtual crane game as a platform for computational-model research on the development of physical intuition at the sensorimotor stage. By having a simple learning agent try to solve this game, we discuss future research issues for it.
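The abstract does not describe the crane game itself, but the kind of platform it proposes can be illustrated with a toy sketch. Everything below (the one-dimensional claw, the success radius, the greedy policy) is an illustrative assumption, not the authors' actual game or agent:

```python
import numpy as np

# A minimal toy stand-in for a virtual crane game: the claw moves along
# one axis, then drops, and succeeds when it drops close enough to the
# prize. Positions, step size, and reward are illustrative assumptions.
class ToyCraneGame:
    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)
        self.reset()

    def reset(self):
        self.prize = self.rng.uniform(0.0, 1.0)   # prize position
        self.claw = 0.0                           # claw starts at the left
        return self.claw, self.prize

    def step(self, action):
        # action: 0 = move right by one step, 1 = drop the claw
        if action == 0:
            self.claw = min(self.claw + 0.05, 1.0)
            return (self.claw, self.prize), 0.0, False
        success = abs(self.claw - self.prize) < 0.05
        return (self.claw, self.prize), float(success), True

# A "simple agent" stand-in at the sensorimotor level: move until the
# claw reaches the observed prize position, then drop.
def greedy_policy(claw, prize):
    return 1 if claw >= prize else 0
```

A learning agent would replace `greedy_policy` with a policy improved from the success/failure reward signal.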
Although deep learning is an effective method for machine learning, there is no definite answer on how to initialize its hyperparameters. In this paper, we propose Double DALP (D-DALP), an extension of the DALP algorithm, which automatically tunes the number of hidden units of a restricted Boltzmann machine (RBM). D-DALP tunes the number of hidden units so that it is effective for deep learning by applying DALP twice to the same data set. Experiments show that an RBM initialized with D-DALP achieves higher classification accuracy than an RBM initialized with DALP in a deep learning setting.
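The abstract does not specify DALP's internal criterion or how the two runs are combined, so the sketch below is a heavily hedged illustration only: `dalp_tune` is a placeholder that grows the hidden layer until a crude reconstruction-error proxy stops improving, and `double_dalp` simply caps the second run by the result of the first. Both choices are assumptions, not the published algorithm:

```python
import numpy as np

def dalp_tune(data, max_units=64, tol=1e-3):
    """Placeholder for DALP: grow the number of hidden units until a
    reconstruction-error proxy stops improving. (The real DALP
    criterion is not given in the abstract; this stand-in only
    illustrates the grow-until-no-improvement loop.)"""
    rng = np.random.default_rng(0)
    prev_err = np.inf
    for n_hidden in range(1, max_units + 1):
        W = rng.normal(scale=0.1, size=(data.shape[1], n_hidden))
        # One-step reconstruction through a random projection, as a
        # crude stand-in for a trained RBM's reconstruction.
        h = np.tanh(data @ W)
        recon = np.tanh(h @ W.T)
        err = np.mean((data - recon) ** 2)
        if prev_err - err < tol:        # no meaningful improvement
            return n_hidden
        prev_err = err
    return max_units

def double_dalp(data):
    """D-DALP per the abstract: apply DALP twice to the same data set.
    How the two runs interact is not specified; capping the second run
    by the first result is an illustrative assumption."""
    first = dalp_tune(data)
    return dalp_tune(data, max_units=max(first, 1))
```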
Unmanned Aerial Vehicle (UAV) missions and applications, whether manually controlled or autonomous, require the operator to be aware of what is happening around the UAV. In this paper, we introduce an approach that divides this visual situation-awareness problem into different levels. We first try to solve each level separately using different deep learning techniques, and later integrate them into a single model using multi-task learning, with the aim of deploying it on an embedded system mounted on the UAV itself and running it in real time.
Low-altitude surveys of the seafloor are important in various fields, such as biological research and resource exploration, and AUVs (Autonomous Underwater Vehicles) play an important role in them. In this paper, we propose a new terrain-following method in which a reinforcement learning agent sets an appropriate pitch reference. We expect the new method to adapt itself to the environment. We conducted a simulation-based analysis in which the AUV traveled at a high surge speed (~2 m/s) in environments with various sonar echo levels.
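The idea of an agent that selects a pitch reference to hold altitude over terrain can be sketched with tabular Q-learning. The seafloor profile, the kinematic update, the discretization, and the reward below are all illustrative assumptions, not the authors' simulator or method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setting: the AUV moves at a fixed surge speed over a
# sinusoidal seafloor, and the agent picks a pitch reference each step
# to hold a target altitude of 2 m above the bottom.
PITCH_REFS = np.radians([-20, -10, 0, 10, 20])   # candidate pitch references
TARGET_ALT, SURGE, DT = 2.0, 2.0, 0.5            # m, m/s, s

def seafloor(x):
    return 2.0 * np.sin(0.1 * x)                 # toy bottom profile

def bin_error(err, edges=(-1.0, -0.3, 0.3, 1.0)):
    return int(np.digitize(err, edges))          # discretized altitude error

Q = np.zeros((5, len(PITCH_REFS)))               # 5 error bins x 5 actions
alpha, gamma, eps = 0.2, 0.9, 0.1

for episode in range(200):
    x, z = 0.0, seafloor(0.0) + TARGET_ALT + rng.normal(0, 0.5)
    s = bin_error(z - seafloor(x) - TARGET_ALT)
    for _ in range(100):
        a = rng.integers(len(PITCH_REFS)) if rng.random() < eps else int(np.argmax(Q[s]))
        # Kinematic update: the pitch reference directly sets the climb
        # rate here (a simplification of real vehicle dynamics).
        x += SURGE * DT * np.cos(PITCH_REFS[a])
        z += SURGE * DT * np.sin(PITCH_REFS[a])
        err = z - seafloor(x) - TARGET_ALT
        r = -abs(err)                            # reward: stay near target altitude
        s2 = bin_error(err)
        Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s, a])
        s = s2
```

In the paper's setting, the state would presumably also include sonar echo information, which this sketch omits.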
For a personal robot to behave intelligently, its environmental conditions and the corresponding actions conventionally have to be prescribed by the designer in advance. As a result, appropriate actions cannot be performed in unexpected environments. Humans and other organisms, on the other hand, seem able to behave adaptively in various environments. In this paper, we propose a method to realize autonomous action generation for a personal intelligent robot based on both its sensory-motor fusion model and internal evaluation criteria.
In teamwork, communication plays an important role in understanding co-workers' future behavior. In this paper, we propose a method by which a machine learning agent explains the target of its own actions. The agent grasps the target of its own actions by predicting their results, and gives an appropriate expression for the predicted results by estimating the meanings of expressions on the basis of a shared reward.
With the success of deep learning, human-level artificial general intelligence is expected to be realized, and various roadmaps toward it have been announced in recent years. In the biologically inspired roadmap, rodent-level artificial general intelligence is an important research milestone. However, there are few proposals for applications available in the real world. In this paper, we propose practical applications that become possible once rodent-level artificial general intelligence is achieved, and introduce practical case studies.