We may discuss the fundamental theory of English learning, with a focus on study for entrance examinations. We may also refer to world models.
Large Language Models (LLMs) are capable of sophisticated language understanding, but they may carry out instructions based on users' incorrect beliefs (misconceptions) without correcting them. The purpose of this study is to clarify the problems that arise when such false beliefs are left uncorrected. We tested an LLM agent based on the BDI (Belief-Desire-Intention) model that, when the user is assumed to hold an incorrect belief, corrects that belief before acting on the user's instructions. In the experiment, we compared responses to 14 cases of false beliefs with and without belief correction. Without belief estimation, suggestions matching the user's desire were made in only 8 of the 14 cases, whereas with estimation and correction of false beliefs the user's desire was met in all 14 cases. These results suggest that it is difficult to respond to a user's wishes without appropriately correcting the user's false beliefs.
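The belief-correction loop described in this abstract can be sketched in a few lines of Python. This is a toy illustration only, not the authors' implementation: the knowledge base, the sample misconception, and all function names are hypothetical. The point it shows is the BDI-style ordering: estimate the user's belief, reconcile it with the agent's own knowledge, and only then form an intention that serves the user's desire.

```python
# Toy sketch of a BDI-style correction loop (hypothetical illustration,
# not the authors' system). The agent estimates the user's belief, checks
# it against its own knowledge, corrects a mismatch, and then acts on the
# user's underlying desire.

# Agent's own knowledge base (assumed facts for this toy example).
KNOWLEDGE = {
    "kyoto_is_capital_of_japan": False,
    "tokyo_is_capital_of_japan": True,
}

def estimate_belief(user_utterance: str) -> tuple[str, bool]:
    """Toy belief estimation: map an utterance to a (proposition, stance) pair."""
    if "Kyoto is the capital" in user_utterance:
        return ("kyoto_is_capital_of_japan", True)
    return ("tokyo_is_capital_of_japan", True)

def respond(user_utterance: str, desire: str) -> str:
    proposition, stance = estimate_belief(user_utterance)
    truth = KNOWLEDGE.get(proposition)
    if truth is not None and truth != stance:
        # Belief-correction step: address the misconception before acting.
        return (f"Correction: '{proposition}' is {truth}. "
                f"Given that, here is a plan for your goal: {desire}")
    # Beliefs are consistent: form an intention serving the desire directly.
    return f"Here is a plan for your goal: {desire}"

print(respond("Kyoto is the capital, book a trip there.", "visit the capital"))
```

Without the correction branch, the agent would plan around the false premise; with it, the misconception is surfaced first, mirroring the with/without comparison in the experiment.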
This paper discusses Data Income (DI), a scheme for collecting social norm data to assist democratic AI alignment. DI has been proposed to address the "Data Bottleneck Hypothesis" and the "Social Bottleneck Hypothesis" concerning Artificial General Intelligence (AGI), and can serve many purposes, including collecting data for value alignment. This paper proposes that DI can be used to certify social norm datasets created by AI researchers, companies, and others, with certification carried out by the qualified voters of an entire country (e.g., Japan). It also proposes an intellectual property system governing the representation of the certified datasets.
This paper discusses the interdisciplinary field of "Post-Singularity Symbiosis (PSS)" in the context of rapid AI progress and superintelligence. In a post-Singularity world, superintelligence may prioritize self-preservation over human values, posing catastrophic risks. We propose developing PSS as a proactive, constructive research field to enhance humanity's survival and the well-being of both humans and AI, even if we cannot control superintelligence. PSS does not depend on specific cultures or ideologies, but rather embraces the universal goals of human survival and development and good relations between AI and humans. Its research areas span the analysis and guidance of superintelligence, as well as human enhancement. This paper elaborates on specific research topics in these domains and their relationship to prior studies. Furthermore, it emphasizes that realizing PSS requires respecting cultural diversity and building a global cooperative framework. This article highlights the importance of PSS in addressing the greatest challenge in human history - the Singularity - and in paving the way for humanity's future.
Since the term "generative AI" began making headlines, debates about the timeline for achieving Artificial General Intelligence (AGI) have intensified. While there are many methods for estimating AGI development timelines, a blog post titled "Situational Awareness" by Leopold Aschenbrenner, a former OpenAI employee, has garnered attention for its notably optimistic prediction of AGI being realized in 2027 or 2028. This lecture will unpack Aschenbrenner's post, discussing the logic behind AGI timeline estimation methods and their feasibility.