For an artificial brain, listening to speech means imaging a sentence as fully as possible from its words, content, and background. We introduce this concept and the means of realizing it below. ① Image the contents of a sentence in a virtual visual-field space, based on stored information about the middle- and upper-level concepts of hierarchically stored images (recognition, cognitive information, etc.) associated with words, including shared information with the other party and the situation surrounding the conversation. ② Understand the other party's intention from their words, emotions, and tone, and perform the necessary actions. ③ Acquire words and meanings for cognitive objects, acquire the concept of time, and respond to them.
In Ref. , we proposed a method to realize Artificial General Intelligence and showed that an artificial intelligence that simulates the motion of an object from a video can be generated automatically, yielding useful information. However, it was necessary to repeat the simulation 13 times to obtain a good result, and the success rate of the simulation was 42%. In this paper, we apply Monte Carlo tree search, used in games such as shogi, to the evaluation of predicted alternatives, and show that it is also effective in Artificial General Intelligence.
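The idea of evaluating predicted alternatives with Monte Carlo tree search can be sketched with the UCT selection rule at its core: rollouts are concentrated on promising alternatives instead of being spread uniformly. Everything in this sketch (the alternative names, their hidden success probabilities, and the rollout budget) is an illustrative assumption, not data from the paper.

```python
import math
import random

def uct_choose(alternatives, budget=2000, c=1.4, seed=0):
    """Allocate a rollout budget over alternatives with the UCT rule.

    `alternatives` maps a name to a (hidden) success probability; in a
    real system each rollout would run one physics simulation instead.
    """
    rng = random.Random(seed)
    names = list(alternatives)
    wins = {n: 0 for n in names}
    visits = {n: 0 for n in names}
    for t in range(1, budget + 1):
        # UCT score: exploit high win rates, keep exploring rarely tried arms.
        def score(n):
            if visits[n] == 0:
                return float("inf")
            return wins[n] / visits[n] + c * math.sqrt(math.log(t) / visits[n])
        n = max(names, key=score)
        visits[n] += 1
        if rng.random() < alternatives[n]:  # outcome of one simulated rollout
            wins[n] += 1
    # Return the most-visited alternative, the standard MCTS final choice.
    return max(names, key=lambda n: visits[n])

# Hypothetical predicted alternatives with hidden success rates.
best = uct_choose({"plan_a": 0.42, "plan_b": 0.65, "plan_c": 0.30})
```

Because UCT spends most of the budget on the empirically best arm, the alternative with the highest true success rate receives the most visits.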
How do humans symbolize the world? I introduce "Double Articulation multi-Dimensional Symbolization that are clustered and reduced into Stories as the world model" (DAmDiSS). Humans obtain a large amount of "sensor data" via their sensory organs. The "raw" data are clustered in an unsupervised manner, and I assume a double-articulation structure in all modal data. A "meaning" is a concept composed of clustered "raw" data from several modalities. I also introduce a time series of "meanings" as a "Story". A "big Story" is composed of several "Stories". Humans can compose a desirable "big Story" by selecting various kinds of "Stories" from their memories, and can modify the "big Story" after acquiring new "raw" data. This process is exactly a Bayesian inference algorithm. Consciousness is a Bayesian inference algorithm that enables us to form and modify an optimal plan by considering future value on the order of years.
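The step of modifying a "big Story" after acquiring new "raw" data can be illustrated by a single Bayesian update over candidate "Stories". The hypotheses, observations, and probabilities below are illustrative assumptions only, not part of the DAmDiSS proposal itself.

```python
def bayes_update(prior, likelihoods, observation):
    """One step of Bayesian inference: revise the belief over candidate
    "Stories" (hypotheses) after one new piece of "raw" sensor data."""
    posterior = {h: prior[h] * likelihoods[h][observation] for h in prior}
    z = sum(posterior.values())  # normalizing constant
    return {h: p / z for h, p in posterior.items()}

# Hypothetical example: two candidate Stories, observations "hot"/"cold".
prior = {"summer_story": 0.5, "winter_story": 0.5}
lik = {"summer_story": {"hot": 0.8, "cold": 0.2},
       "winter_story": {"hot": 0.1, "cold": 0.9}}
belief = bayes_update(prior, lik, "hot")
```

After observing "hot", the belief shifts sharply toward the summer Story (0.8 × 0.5 versus 0.1 × 0.5, renormalized), which is the sense in which new raw data modifies the chosen Story.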
The hippocampus is known as the core of the memory system in the brain. However, how the hippocampus works is not clear. In this paper, I estimate the function of the brain's memory system based on existing studies and cases of damage, and examine the possibility of realizing one-shot learning and the formation of episodic memory. I focus on the Papez circuit, a circuit related to the hippocampus and memory, and estimate that it contributes to the generation of time-series information. In addition, I discuss how self-learning can be realized by combining the memory system with a system expressing pleasantness and unpleasantness.
In a previous presentation, we reported on using deep generative models to improve the performance of AI with some of the functions of consciousness. However, the experiments had not been completed at that time, so I report on them here.
This article surveys engineering and neuroscientific models of planning as a cognitive function, to present them as references for realizing the planning function in brain-inspired AI. It also proposes themes for the research and development of brain-inspired AI from the viewpoint of tasks and architecture.
The brain is a large clump of neurons. Although each neuron behaves independently and discretely, the mass of neurons appears to operate on continuous values as a group. An important feature observed in it is symbolic thinking, which is specific to the human brain. Symbolic thinking is discrete, logical, and conscious, features not seen in the current understanding of the brain or in conventional neural network models, and these features are essential to human intelligence. In this talk, we overview the history of research that has investigated human brain intelligence, and introduce some studies that may lead to the study of symbolic thinking in the future.
We propose a method by which an intelligent agent acting in an environment obtains correct inference rules from experience. For each inference rule, the proposed method substitutes "efficiency in reaching the correct answer" for "correctness." The method learns the inference rules using a hierarchical reinforcement learning method called RGoal. The whole architecture is biologically plausible. We believe the proposed method can serve as a basic principle of autonomous knowledge acquisition for artificial general intelligence.
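The substitution of "efficiency to reach the correct answer" for "correctness" can be sketched as a reward that discounts slower derivations. The specific reward shape (1/steps on success, 0 on failure) and the experience logs are illustrative assumptions for this sketch, not RGoal's actual formulation.

```python
def rule_value(episodes):
    """Score an inference rule by "efficiency to reach the correct answer":
    a success in n steps earns 1/n, a failure earns 0; return the average."""
    total = 0.0
    for success, steps in episodes:
        total += (1.0 / steps) if success else 0.0
    return total / len(episodes)

# Hypothetical experience logs: (reached_correct_answer, steps_taken).
rule_a = [(True, 2), (True, 2), (False, 5)]   # fast when it works
rule_b = [(True, 8), (True, 8), (True, 8)]    # always works, but slow
scores = {"rule_a": rule_value(rule_a), "rule_b": rule_value(rule_b)}
```

Under this proxy, the sometimes-failing but fast rule_a outscores the reliable but slow rule_b, showing how efficiency, rather than strict correctness, drives which rules the agent retains.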
In this paper, we introduce our trials to extend our previous work on the life-long learning cognitive architecture Bacterium Lingualis, which collects world knowledge from textual resources using linguistic clues. We utilize the mask-prediction functionality of the BERT language model to augment simple concepts with additional knowledge such as means, goals, or merits and demerits. We present the results of preliminary tests of the additional knowledge in an automatic ethical assessment task and report our findings.
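The mask-prediction idea (score candidate words for a masked slot in a template sentence) can be imitated for illustration with a toy corpus-count model; a real system would query a pretrained BERT instead. The template, mini-corpus, and function name below are assumptions made for this sketch, not the paper's actual resources.

```python
from collections import Counter

def fill_mask(corpus, template, mask="[MASK]"):
    """Toy stand-in for BERT's fill-mask: rank candidate words for the
    masked slot by how often they occur in that context in a corpus."""
    prefix, suffix = template.split(mask)
    counts = Counter()
    for sentence in corpus:
        if sentence.startswith(prefix) and sentence.endswith(suffix):
            filler = sentence[len(prefix):len(sentence) - len(suffix)]
            if filler and " " not in filler:  # keep single-word fillers only
                counts[filler] += 1
    return [word for word, _ in counts.most_common()]

# Hypothetical mini-corpus encoding "goal" knowledge for the concept "cook".
corpus = ["people cook food to eat it",
          "people cook food to eat together",
          "people cook food to sell it"]
ranked = fill_mask(corpus, "people cook food to [MASK] it")
```

Each ranked filler ("eat", "sell", ...) is a candidate piece of goal knowledge that could then be attached to the concept, analogous to how the BERT predictions augment simple concepts in the architecture.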