We propose Quasi Bayesian Networks as a tool for efficient prototyping of cognitive models based on restricted Bayesian networks.
Feature extraction is an essential preprocessing step for accurate classification and recognition in machine learning tasks. Recently, deep learning methods have shown high performance in feature extraction. However, they require considerable time and effort to tune their many hyperparameters. The Basis Learner, proposed by Livni et al., is a multi-layer feature extractor with far fewer hyperparameters. In this paper, we propose an improved version of the Basis Learner, named the Covariance Maximized Basis Learner, which yields better classification accuracy with even fewer hyperparameters and lower feature dimensionality.
The Equivalence Structure (ES) extraction method is a technique for extracting a set of K-tuples of sequence IDs that can be regarded as equivalent, based on similarities found among sequences in N-dimensional synchronous sequences. ESs can be used for the analysis of deep neural networks and for the determination of corresponding markers in imitation learning. In this paper, we provide the definition of an ES and the properties of the input data required for ES extraction. In addition, the nature of the ES and the extraction process are formulated.
This article discusses the future development of artificial intelligence from the viewpoint of integrating technology into human society. I first overview the history of artificial intelligence research, highlighting its key impacts on human society. Then, I place artificial intelligence technologies within a larger picture of humankind so that we can gain a better perspective on the trend we are witnessing now. I analyze the nature of the transition we are facing, attempting to elicit the challenges and opportunities both for new artificial intelligence research and for social transformation. Finally, I propose a research agenda for building a bridge between human society and the technology represented by artificial intelligence.
When humans try to recognize ambiguous images, their perception is unambiguous yet changes over time. This phenomenon is called perceptual change. It has been considered that perceptual change is caused by top-down attention, which selectively promotes or suppresses reactivity to features depending on the expected recognition result. However, modeling research on perceptual change that takes the object recognition process into account has not progressed. Therefore, we model the perceptual change phenomenon, taking the object recognition process into account, using Convolutional Neural Networks (CNNs). We simulate the perceptual change phenomenon with this model and visualize the features that are promoted or suppressed by top-down attention. We show that these features are important for unambiguous perception.
Deep reinforcement learning has achieved great success in learning to play video games. In contrast to video games, in which the state changes discretely in space and time, robots in the real world move continuously and asynchronously, following physical laws. To apply deep reinforcement learning to robot control, we prototyped a robot simulation environment, "Re:ROS", with an asynchronous system architecture based on the Gazebo simulator and the Robot Operating System (ROS).
Facial expressions of humanlike agents do not perfectly follow those of human beings. Such imperfection sometimes elicits negative impressions (e.g., the uncanny valley). We propose a qualitative brain-function model based on the framework of "prediction error." In our model, predicted movement is processed by the internal model in the cerebellum and compared with perceived movement. Our model was validated by showing that the prediction function plays an essential role in the negative emotional response.