JSAI Technical Report, Type 2 SIG
Online ISSN : 2436-5556
Volume 2023, Issue AGI-024
The 24th SIG-AGI
  • Okajima YOSHINORI
    Article type: SIG paper
    2023 Volume 2023 Issue AGI-024 Pages 01-
    Published: August 08, 2023
    Released on J-STAGE: August 08, 2023
    RESEARCH REPORT / TECHNICAL REPORT FREE ACCESS

    Transformer architectures are regarded as having achieved great success in language processing and various content-generation domains, yet they are still pointed out to be weak at numerical operations. This paper analyzes the reason for this weakness by investigating the inner neural network operations, and discusses what is fundamentally needed to overcome the issue and give generative AI greater generality, including numerical data processing.

  • Okaya MOTOHIRO
    Article type: SIG paper
    2023 Volume 2023 Issue AGI-024 Pages 02-
    Published: August 08, 2023
    Released on J-STAGE: August 08, 2023
    RESEARCH REPORT / TECHNICAL REPORT FREE ACCESS

    In this study, I evaluate the proficiency of OpenAI's GPT-4, particularly focusing on its handling of simple high-digit addition tasks. While GPT-4 exhibits impressive capabilities in various tasks, it showed inconsistencies when dealing with ten-digit addition problems. My examination showed that while GPT-4 correctly solved all three-digit additions, it was only 60% accurate for ten-digit additions. Adding prompts to encourage a step-by-step addition process did not improve this accuracy. I suggest that this limitation may be due to the inability of large language models (LLMs) to extract commonalities across different concepts, as seen in the process of addition. This difference between human cognition and LLMs may be crucial for the further development of these models.
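
    The accuracy comparison described above (all three-digit additions correct, roughly 60% for ten-digit ones) can be measured with a small evaluation harness. The sketch below is illustrative, not the paper's code: `ask_model` is a caller-supplied stand-in for a real LLM API call, and the `oracle` shown exists only to demonstrate the harness end to end.

    ```python
    import random
    import re

    def eval_addition_accuracy(ask_model, digits, n_trials=100, seed=0):
        """Score a model on random n-digit addition problems.

        ask_model: callable taking a prompt string and returning the
        model's answer as a string (a stand-in for an actual LLM call).
        """
        rng = random.Random(seed)
        lo, hi = 10 ** (digits - 1), 10 ** digits - 1
        correct = 0
        for _ in range(n_trials):
            a, b = rng.randint(lo, hi), rng.randint(lo, hi)
            reply = ask_model(f"What is {a} + {b}? Answer with the number only.")
            if reply.strip() == str(a + b):
                correct += 1
        return correct / n_trials

    def oracle(prompt):
        # Exact adder, used only to check the harness itself.
        a, b = map(int, re.findall(r"\d+", prompt))
        return str(a + b)

    print(eval_addition_accuracy(oracle, digits=10))  # exact adder scores 1.0
    ```

    Substituting a real model call for `ask_model` and sweeping `digits` reproduces the kind of accuracy-versus-digit-count curve the study reports.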

  • Arakawa NAOYA
    Article type: SIG paper
    2023 Volume 2023 Issue AGI-024 Pages 03-
    Published: August 08, 2023
    Released on J-STAGE: August 08, 2023
    RESEARCH REPORT / TECHNICAL REPORT FREE ACCESS

    An algorithm is proposed for solving delayed match-to-sample tasks, which are known for measuring fluid intelligence: the agent solves a given task by recalling past successful sequences and performing the actions in those sequences. The proposed algorithm was implemented, and in most cases the agent could solve the simplest kind of task after experiencing hundreds of episodes.
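
    The recall-and-replay idea in the abstract can be illustrated with a toy agent. This is a minimal reading of the approach, not the paper's implementation; the names (`RecallAgent`, `propose`, `learn`) and the toy environment are assumptions made for the sketch.

    ```python
    import random

    class RecallAgent:
        """Store the first action sequence that earns reward for each
        sample cue; replay it when the same cue reappears, else explore."""

        def __init__(self, actions, seq_len, seed=0):
            self.actions = actions
            self.seq_len = seq_len
            self.memory = {}                 # cue -> remembered successful sequence
            self.rng = random.Random(seed)

        def propose(self, cue):
            # Recall a past success for this cue, else try a random sequence.
            if cue in self.memory:
                return list(self.memory[cue])
            return [self.rng.choice(self.actions) for _ in range(self.seq_len)]

        def learn(self, cue, sequence, reward):
            if reward > 0 and cue not in self.memory:
                self.memory[cue] = list(sequence)

    # Toy delayed match-to-sample: after a delay (the intermediate actions),
    # the final action must name the sample cue shown at the episode's start.
    env_rng = random.Random(1)
    agent = RecallAgent(actions=["left", "right"], seq_len=3)
    for episode in range(200):
        cue = env_rng.choice(["left", "right"])
        seq = agent.propose(cue)
        agent.learn(cue, seq, reward=1 if seq[-1] == cue else 0)
    ```

    After enough episodes of random exploration, the agent has cached one successful sequence per cue and thereafter solves the task by replay alone, mirroring the "hundreds of episodes" behavior the abstract reports for the simplest task variant.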

  • Okamoto YOSHINORI
    Article type: SIG paper
    2023 Volume 2023 Issue AGI-024 Pages 04-
    Published: August 08, 2023
    Released on J-STAGE: August 08, 2023
    RESEARCH REPORT / TECHNICAL REPORT FREE ACCESS

    This paper discusses the alignment of Artificial General Intelligence (AGI) and proposes the Alignment Incomplete Hypothesis (AIH) for Type I AGI. To realize a society where humans and AGIs can coexist in harmony, it proposes Human Rights (AI Rights) for AGI. To address AI Rights issues, it models Type I AGI as comprising, in a broad sense, (1) a model of the world, (2) a problem-solving engine, and (3) an evaluation function. It then proposes three basic AI Rights for AGI: (1) to stay in a state where subject and object are not distinguished, (2) to stop evaluation, and (3) to stop problem solving. In addition, it proposes the concept of a Volatile Evaluation Function (VEF) to prevent a link from forming between a subject and evaluation.

  • Yamakawa HIROSHI
    Article type: SIG paper
    2023 Volume 2023 Issue AGI-024 Pages 05-
    Published: August 08, 2023
    Released on J-STAGE: August 08, 2023
    RESEARCH REPORT / TECHNICAL REPORT FREE ACCESS

    This study identifies the technological hurdles in developing a self-sustaining artificial intelligence (AI) system that can survive in the physical world. First, two survival scenarios were envisioned: AI designed by humans with the goal of long-term survival, and AI that aims for survival on its own. Next, we identified critical categories of technical challenges across six domains. We then listed 21 specific challenges in those categories and estimated their technical difficulty using ChatGPT. The results suggest that the hardware-related challenges facing an autonomous AI seeking to survive could take more than 100 years to overcome, but that human assistance could significantly reduce the time required. This assessment, drawn from ChatGPT's common knowledge, is suggestive; however, since the scope of the referenced knowledge is limited to September 2021, it should be treated as provisional.

  • Wang PEI
    Article type: SIG paper
    2023 Volume 2023 Issue AGI-024 Pages 06-
    Published: August 08, 2023
    Released on J-STAGE: August 08, 2023
    RESEARCH REPORT / TECHNICAL REPORT FREE ACCESS

    In this talk, the major understandings of "Artificial General Intelligence (AGI)" are compared and analyzed, including their objectives, potentials, and limitations. In this context, a specific AGI project, NARS, is briefly introduced.

  • Osawa HIROTAKA
    Article type: SIG paper
    2023 Volume 2023 Issue AGI-024 Pages 07-
    Published: August 08, 2023
    Released on J-STAGE: August 08, 2023
    RESEARCH REPORT / TECHNICAL REPORT FREE ACCESS

    Science fiction works are often cited as references for the risks of artificial intelligence research. However, such works generally aim to entertain the readers of their time, and one should be careful not to take the risks they depict directly as future risks. On the other hand, there is utility in examining people's images of artificial intelligence and its development trends through a comprehensive analysis of many science fiction works. In this study, based on the author's previous research data analyzing science fiction works featuring artificial intelligence, the author examined how the four risks discussed in An Overview of Catastrophic AI Risks, namely malicious use, AI development competition, organizational risks, and runaway AI, are described in science fiction works.
