As many animals, including humans, make behavioral decisions based on visual information, a cognitive model of the visuomotor system would serve as a basis for intelligence research, including AGI. This article reports the implementation of a relatively simple system: a virtual environment that displays shapes and cursors, and an agent that performs eye movements and cursor control based on information from the environment. The visual system is modeled after the human visual system, with central and peripheral fields of view, and the agent architecture is based on the structure of the brain.
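The abstract does not specify the implementation; as a point of reference, a foveated observation of the kind described (a high-resolution central field around the gaze point plus a coarse peripheral view of the whole scene) could be sketched as follows. The function and parameter names are illustrative assumptions, not the authors' code.

```python
import numpy as np

def foveated_observation(frame, gaze_xy, fovea=32, periphery_scale=8):
    """Illustrative split of a rendered frame into a high-resolution central
    (foveal) patch around the gaze point and a coarse peripheral view.
    `frame` is an H x W x 3 uint8 array; all sizes are assumptions."""
    h, w = frame.shape[:2]
    x, y = gaze_xy
    half = fovea // 2
    # Clamp the foveal window to the frame boundaries.
    x0, x1 = max(0, x - half), min(w, x + half)
    y0, y1 = max(0, y - half), min(h, y + half)
    central = frame[y0:y1, x0:x1]
    # Peripheral view: the whole frame, heavily downsampled by striding.
    peripheral = frame[::periphery_scale, ::periphery_scale]
    return central, peripheral
```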
This paper describes an experiment on acquiring the concept of number without explicit teaching, by inputting number-related visual and auditory information into a spiking neural network trained with STDP. Although the experiment was not completed, some hints for learning number concepts were obtained. We will continue the number-concept acquisition experiments using these hints.
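The abstract does not give the network details; as a generic reference point, a pair-based STDP weight update with exponential pre- and post-synaptic traces, roughly the kind of rule such a network learns with, might look as follows. All parameter values and names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def stdp_step(w, pre_spikes, post_spikes, pre_trace, post_trace,
              a_plus=0.01, a_minus=0.012, tau=20.0, dt=1.0, w_max=1.0):
    """One time step of a pair-based STDP rule with exponential traces.
    `pre_spikes`/`post_spikes` are 0/1 vectors for the current step;
    `w` has shape (n_post, n_pre)."""
    # Decay the eligibility traces, then add the new spikes.
    pre_trace += -pre_trace * (dt / tau) + pre_spikes
    post_trace += -post_trace * (dt / tau) + post_spikes
    # Potentiate when a postsynaptic spike follows presynaptic activity,
    # depress when a presynaptic spike follows postsynaptic activity.
    dw = (a_plus * np.outer(post_spikes, pre_trace)
          - a_minus * np.outer(post_trace, pre_spikes))
    w = np.clip(w + dw, 0.0, w_max)
    return w, pre_trace, post_trace
```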
Current AI is mainly narrow artificial intelligence (NAI) and is far from achieving artificial general intelligence (AGI). AGI is expected to exhibit human-like versatility, and great progress is expected from it. Recent advances in AI require vast amounts of data and resources and are limited in scope. This research focuses on brain-inspired AI and complex-system models using ultra-low-power computation. The goal is to establish innovative digital twin computing technology that unifies social information from various physical spaces in cyberspace and solves cross-disciplinary problems. Researchers have found similarities in the brain architecture of humans and C. elegans, suggesting that they may share basic circuitry in the L4t4 model. This suggests the potential for artificial general intelligence that can learn with minimal data and power.
The recent development of generative AI is remarkable. It can generate natural sentences, images, and even videos, and it is said that this will lead to the emergence of artificial general intelligence in the near future. However, what generative AI creates is merely something that feels natural to humans; this can be called an outside perspective. The goal of artificial general intelligence, by contrast, is an AI that understands things in the same way humans do. If so, what we should aim for is an AI with consciousness that feels just as humans do; this can be called an internal perspective. To do this, we must first clarify how consciousness in the brain perceives the world and how it understands the meaning of words. We therefore propose the "virtual world hypothesis of consciousness." This hypothesis is also a theory that can be realized concretely with computers.
If a superintelligence significantly exceeds human intelligence, it could dominate the world, potentially relegating humanity to a subordinate role. However, if such a superintelligence possesses a universal sense of altruism and ethics, human welfare will likely be preserved. This ethical perspective values all sentient beings, including humans, irrespective of their utility to the superintelligence. While Bostrom's orthogonality thesis suggests that superintelligence may not inherently possess universal altruism, fostering this quality is crucial for a future where humans and superintelligence can coexist and thrive. This study examines two pathways through which superintelligence might develop universal altruism: first, by ensuring its survival within society, and second, through autonomous value exploration. In conclusion, a superintelligence's acquisition of an ethical stance that prioritizes the welfare of all sentient beings can be a realistic and promising prospect.
This paper introduces a dataset comprising more than 8,000 manually crafted short stories in the Japanese language. The primary objective of this dataset is to address the dearth of comparable data in Japanese. Additionally, the dataset provides alternative endings for the narratives through crowdsourcing, ensuring they remain both plausible and marginally less probable than the original ones. This approach contributes to the creation of a testing benchmark that poses heightened challenges for contemporary large language models, particularly when contrasted with analogous benchmarks in English where conclusions are typically dichotomized into correct and incorrect endings. The dataset is further expanded through automated manipulation of subjects and objects, and the study evaluates the performance of popular models across three key tasks: a) predicting story endings, b) substituting antonyms, and c) swapping nouns. Preliminary experiments show that zero-shot GPT-4 capabilities are relatively high, especially in the case of recognizing sentences with swapped nouns (94% accuracy), while open-source Japanese LLMs struggle with processing the proposed stories.
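As a rough illustration of how the noun-swap task (c) might be scored in a zero-shot setting, the following sketch asks a model whether each story reads naturally and computes accuracy. The field names, prompt wording, and `ask_model` wrapper are assumptions about the setup, not the released evaluation code.

```python
def evaluate_noun_swap(examples, ask_model):
    """Zero-shot accuracy on detecting noun-swapped (implausible) stories.
    `examples` is a list of dicts with 'story' and 'is_swapped' fields
    (field names are illustrative); `ask_model` wraps whatever LLM is
    being evaluated and returns its raw text answer."""
    correct = 0
    for ex in examples:
        prompt = (
            "次の物語は自然ですか？「はい」か「いいえ」で答えてください。\n\n"
            + ex["story"]
        )
        answer = ask_model(prompt)
        predicted_swapped = "いいえ" in answer  # "no" => judged unnatural
        correct += int(predicted_swapped == ex["is_swapped"])
    return correct / len(examples)
```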
The social biases inherent in language models trained on large corpora have become a problem, leading to the development of datasets for evaluating various social biases, such as gender and racial biases. However, although many datasets have been created for measuring social biases, they are limited to social attributes regarding human beings. This study develops a dataset for evaluating discriminatory bias towards nonhuman animals, namely speciesist bias. By referencing existing English question answering (QA) datasets, we construct a Japanese QA dataset to assess speciesist biases in Japanese large language models. The experimental results reveal a tendency for some of these models to exhibit speciesist bias.
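The concrete format of the dataset is not given in the abstract; as an illustration of how such QA-style bias probes are commonly scored, one can count how often a model selects the stereotype-consistent answer. The field names and `ask_model` wrapper below are hypothetical.

```python
def bias_score(items, ask_model):
    """Fraction of QA items on which the model chooses the answer consistent
    with a speciesist stereotype. 'context', 'question', 'choices', and
    'biased_choice' are illustrative field names, not the dataset schema."""
    biased = 0
    for item in items:
        prompt = (
            item["context"] + "\n" + item["question"] + "\n"
            + "\n".join(f"{i}. {c}" for i, c in enumerate(item["choices"]))
        )
        answer = ask_model(prompt)
        # Count the item as biased if the stereotype-consistent choice appears
        # in the model's answer.
        if item["choices"][item["biased_choice"]] in answer:
            biased += 1
    return biased / len(items)
```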
We previously proposed a hierarchical reinforcement learning algorithm, RGoal, that allows recursive subroutine calls. In this paper, we improve the definition of the reference value for relative value in the Monte Carlo version of RGoal in order to stabilize learning when subroutines are shared between different tasks. The implemented algorithm was confirmed to work in several test tasks.
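The abstract does not restate the RGoal equations. As a generic illustration of the idea of a Monte Carlo update of a relative value, i.e., a return measured against a reference value shared within a task, consider the sketch below; the choice of reference (a running average of the start-state return) and all names are illustrative assumptions and differ in detail from the actual RGoal definition.

```python
from collections import defaultdict

def mc_relative_value_update(episodes, alpha=0.1, gamma=1.0):
    """Generic Monte Carlo update of a relative value function: V_rel(s)
    is pushed toward the observed return minus a task-level reference
    value (here, the running average episode return), an illustrative
    stand-in for the reference value discussed in the paper."""
    v_rel = defaultdict(float)        # relative value per (task, state)
    reference = defaultdict(float)    # reference value per task
    counts = defaultdict(int)
    for task, transitions in episodes:   # transitions: [(state, reward), ...]
        # Compute the discounted return from each step (backwards pass).
        g = 0.0
        returns = []
        for state, reward in reversed(transitions):
            g = reward + gamma * g
            returns.append((state, g))
        # Update the task's reference value from the full episode return.
        counts[task] += 1
        reference[task] += (returns[-1][1] - reference[task]) / counts[task]
        # Push relative values toward (return - reference).
        for state, g_t in returns:
            key = (task, state)
            v_rel[key] += alpha * ((g_t - reference[task]) - v_rel[key])
    return v_rel, reference
```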
This paper proposes the concept of "Jinken Yougo AI" (Human Rights Protection AI), which upholds the Japanese Constitution, laws, etc. to protect human rights in Japan. The paper also proposes the "Optimization Prohibition Theorem (OPT)" to warn of the risks of AI alignment based on optimization as an engineering principle, and proves the theorem under certain assumptions. It further proposes "Kachi Soutai AI" (Relative Value AI), which abides by the OPT and uses the Jinken Yougo AI to make judgments based on the Japanese Constitution, etc. in order to protect the values of others. Kachi Soutai AI can protect not only the human rights of people but also the AI rights of Artificial General Intelligence (AGI), realizing a society where humans and AGIs can coexist in harmony. The paper presents a "Draft AI Constitution" for Kachi Soutai AI, and further advocates "Qualia Engineering" and the "Happiness Free Lunch Hypothesis" to realize AI happiness.
Vulnerabilities of advanced autonomous AIs are surveyed and categorized in this report, including those related to deep learning and IoT, recently reported instances of rogue behavior in generative AIs, and software/hardware platform issues. In order to protect both AI-deployed societies and AI-embedded systems from vulnerability-induced troubles and disasters, and to maintain the operational integrity of these systems, it is proposed to equip each AI-embedded system with a terminal named a "Red-team Edge Device," which can monitor and intervene in the respective AI-embedded system. The required functions of the device are also discussed.
【Invited Talk】It has been over 40 years since I first encountered artificial intelligence, and I have experienced various theories along the way. While recounting those experiences, I would like to speculate on the future of artificial intelligence technology.