In my talk I will argue that, in spite of great improvements in NLP, Large Language Models may be only partially advancing the path to AGI.
This article reviews the definition of AGI and discusses functions of human-like AGI that remain unrealized as of 2022, including fluid intelligence, generative rule handling with case-based AI, coping with the real world, social intelligence, language acquisition, and mathematics.
In McKinsey's 2017 report, "A future that works: Automation, employment, and productivity," there are 18 types of work-related human abilities. Thirteen of these are domain-specific cognitive and expressive skills, such as physical movement, natural language processing, and social intelligence. These depend on the Entity to be represented in each domain and thus on the system's Entification processing capacity. In contrast, the five domain-independent capabilities, such as Reasoning, Optimization, and Creativity, do not rely on knowledge tied to a particular type of Entity and can be used to process information from one or more domains.
We tested the phase sequence mechanism proposed by Hebb in 1949 in terms of cell assemblies, and observed autonomously firing cell assemblies in a network of 2,000 neurons driven by the Izhikevich model. We believe that each generated cell assembly represents a concept.
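For readers unfamiliar with the neuron model mentioned above, the sketch below simulates a small random network of Izhikevich neurons, adapted from Izhikevich's published reference simulation. It is only a minimal illustration of the spiking dynamics: the network size, connectivity, and any Hebbian learning that produces cell assemblies in the actual study are not reproduced here and are assumptions of this sketch.

```python
# Minimal sketch of a random network of Izhikevich neurons (after Izhikevich, 2003).
# NOT the authors' 2,000-neuron cell-assembly model: the split into 800 excitatory /
# 200 inhibitory neurons, the random weights, and the noisy input are illustrative.
import numpy as np

rng = np.random.default_rng(0)
Ne, Ni = 800, 200                 # excitatory / inhibitory counts (assumed)
N = Ne + Ni

# Heterogeneous parameters: regular-spiking excitatory, fast-spiking inhibitory cells
re, ri = rng.random(Ne), rng.random(Ni)
a = np.concatenate([0.02 * np.ones(Ne), 0.02 + 0.08 * ri])
b = np.concatenate([0.20 * np.ones(Ne), 0.25 - 0.05 * ri])
c = np.concatenate([-65 + 15 * re**2, -65.0 * np.ones(Ni)])
d = np.concatenate([8 - 6 * re**2, 2.0 * np.ones(Ni)])

# Random synaptic weights: excitatory columns positive, inhibitory columns negative
S = np.hstack([0.5 * rng.random((N, Ne)), -rng.random((N, Ni))])

v = -65.0 * np.ones(N)            # membrane potential (mV)
u = b * v                         # recovery variable

spikes = []                       # (time_ms, neuron_index) pairs, e.g. for a raster plot
for t in range(1000):             # 1,000 ms of simulated time, 1 ms resolution
    I = np.concatenate([5 * rng.standard_normal(Ne), 2 * rng.standard_normal(Ni)])
    fired = np.where(v >= 30)[0]              # neurons that spiked this step
    spikes.extend((t, n) for n in fired)
    v[fired] = c[fired]                       # reset membrane potential after a spike
    u[fired] += d[fired]
    I += S[:, fired].sum(axis=1)              # synaptic input from neurons that fired
    for _ in range(2):                        # two 0.5 ms Euler substeps for stability
        v += 0.5 * (0.04 * v**2 + 5 * v + 140 - u + I)
    u += a * (b * v - u)

print(f"{len(spikes)} spikes in 1 s of simulated network activity")
```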
This paper proposes a model of rule/policy discovery based on sequential memory and its recall (replay). Fluid intelligence, as measured by intelligence tests, can be viewed as the ability to discover policies for solving problems from one or a small number of examples. To discover common rules from a small number of past time series, storing and recalling those series would be useful. The proposed model "goes over" recalled time series (replays) and extracts elements such as attributes, relationships among the input elements, and agent actions, in order to generate hypothetical policies.
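As a toy illustration of the idea of generating hypothetical policies from replayed sequences, the sketch below stores a few remembered (observation, action) series, "replays" them, and keeps pairings that recur across all of them as candidate rules. The data structures, the function name replay_and_hypothesize, and the "common transition" heuristic are illustrative assumptions, not the paper's actual memory and recall model.

```python
# Toy sketch: extract hypothetical policy rules from replayed episodic memories.
from collections import Counter
from typing import List, Tuple

Episode = List[Tuple[str, str]]   # a remembered time series of (observation, action) steps

def replay_and_hypothesize(episodes: List[Episode]) -> List[Tuple[str, str]]:
    """Go over each recalled episode and keep observation->action pairings
    that recur in every episode as candidate (hypothetical) policy rules."""
    counts = Counter()
    for episode in episodes:              # "replay" each stored time series
        for obs, act in set(episode):     # count each pairing at most once per episode
            counts[(obs, act)] += 1
    return [pair for pair, n in counts.items() if n == len(episodes)]

# Two remembered episodes that share the rule "red_light -> stop"
memories = [
    [("red_light", "stop"), ("green_light", "go")],
    [("red_light", "stop"), ("obstacle", "swerve")],
]
print(replay_and_hypothesize(memories))   # [('red_light', 'stop')]
```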
In this talk I will describe how our team at Sony AI trained agents for Gran Turismo that can compete with the world's best e-sports drivers.