Here we discuss a fundamental theory of information technology. The Θ(1)-robot and the kami concept may be discussed, and the approximation of nonlinear operators using the Bochner-Riesz conjecture may be employed. Throughout this paper, statements may be interpreted as rigorous mathematical formulas.
This presentation identifies normativity as the most crucial and most lacking concept in contemporary AI development, discussing what it is and why it matters. Normativity is not a "rule," as is often mistakenly assumed, but rather the "possibility of error." However, since this "error" itself has two types based on "directions of fit," normativity also has two corresponding types. Both are extremely important concepts in today's AI development. One is the normativity constituting "intentionality," directly related to "strong AI." The other is the normativity constituting Anscombe's "practical knowledge," deeply connected to the concepts of AI's "body" and "self." We argue that the latter, in particular, will become an essential concept for AI alignment in the near future.
In connectionist deep learning models, biases caused by training data and other factors are difficult for users to recognize and address. However, by reframing such bias as an interpretation issue on the user's side, it may be possible to find a way to deal with it. In this study, 1) we confirmed that when a human misinterprets AI output, it is effective to have them experience a similar case with a simpler but comparable model, and 2) we identified cases in which an AI misinterprets its own output and clarified the cause. This phenomenon was thought to be caused by a mechanism similar to "Potemkin understanding." We believe that being able to properly evaluate the interpretation of AI output could also contribute to alignment issues, including the transition to AGI/ASI.
Prototype networks are a standard few-shot learning paradigm that constructs class representatives from support examples and classifies queries by similarity to those prototypes. In the field of Textual Entailment (TE), many few-shot methods have combined Large Language Models (LLMs) with prototypical ideas to adapt quickly with limited labeled data. However, existing few-shot TE approaches often fail to exploit relative information among texts. In realistic multiple-choice QA settings, a single premise is paired with several competing hypotheses, which are also important for making the correct judgment. In this paper we propose UFO-CC, a method that explicitly incorporates competitive context and robust prototype weighting to mitigate these problems. UFO-CC improves stability and discriminative power through three stages: (i) append competing contexts to each TE pair so the encoder learns relative differences via cross-attention; (ii) reweight supports by similarity and a Context Gates-derived consistency score to build stable prototypes; (iii) linearly fuse prototypes and average logits from multiple heads to reduce noise and stabilize predictions. In our experiments, we show that UFO-CC outperforms the UFO-ENTAIL models in average accuracy on benchmark QA datasets.
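To make stages (ii) and (iii) concrete, the sketch below shows similarity-weighted prototype construction and multi-head logit averaging. It is a minimal illustration under simplifying assumptions, not the paper's implementation: the consistency scores, the softmax weighting, and the two-head setup are placeholders standing in for the Context Gates mechanism described above.

```python
# Minimal sketch of stages (ii)-(iii): similarity-weighted prototypes and
# multi-head logit averaging. Names and weighting rules are illustrative
# assumptions, not the exact UFO-CC formulation.
import torch
import torch.nn.functional as F

def build_prototypes(support_emb, support_lbl, consistency, n_classes):
    """Weight each support by (cosine similarity to its class mean) scaled by
    an assumed consistency score, then form weighted class prototypes."""
    protos = []
    for c in range(n_classes):
        idx = (support_lbl == c).nonzero(as_tuple=True)[0]
        emb_c = support_emb[idx]                      # (n_c, d)
        mean_c = emb_c.mean(dim=0, keepdim=True)      # provisional class mean
        sim = F.cosine_similarity(emb_c, mean_c)      # (n_c,)
        w = F.softmax(sim * consistency[idx], dim=0)  # combined support weights
        protos.append((w.unsqueeze(1) * emb_c).sum(dim=0))
    return torch.stack(protos)                        # (n_classes, d)

def classify(query_emb, head_protos):
    """Average cosine-similarity logits over multiple prototype heads."""
    logits = [F.cosine_similarity(query_emb.unsqueeze(1), p.unsqueeze(0), dim=-1)
              for p in head_protos]                   # each (n_query, n_classes)
    return torch.stack(logits).mean(dim=0)

# Toy usage with random embeddings (2 heads, 2 classes, 8-dim features).
torch.manual_seed(0)
sup, lbl = torch.randn(6, 8), torch.tensor([0, 0, 0, 1, 1, 1])
cons = torch.rand(6)                                  # assumed consistency scores
heads = [build_prototypes(sup + 0.01 * i, lbl, cons, 2) for i in range(2)]
print(classify(torch.randn(4, 8), heads).argmax(dim=-1))
```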
Negotiation between multiple parties with conflicting interests is a complex task that requires sophisticated decision-making. To address this challenge, automated negotiation, in which agents aim to achieve optimal agreements on behalf of humans, has garnered significant attention. While online reinforcement learning has been widely adopted for training these agents in recent years, the prohibitive cost of simulator development poses a major obstacle to real-world implementation. To overcome this limitation, our research focuses on offline reinforcement learning. Specifically, we apply Diffusion Q-learning, a method renowned for its superior policy expressiveness and potential for high performance, to the automated negotiation task. We propose a novel agent based on this approach, named ODiN (Offline Diffusion Negotiator).
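For reference, a common formulation of the Diffusion Q-learning objective, which ODiN presumably builds on, combines a denoising behavior-cloning term with a value-maximization term; the exact loss used by ODiN may differ in its weighting and normalization.

```latex
% Generic Diffusion Q-learning objective (not necessarily ODiN's exact loss):
% a diffusion behavior-cloning term plus a Q-guided term weighted by eta.
\mathcal{L}(\theta)
  = \mathbb{E}_{i,\,\epsilon,\,(s,a)\sim\mathcal{D}}
    \Big[\big\|\epsilon - \epsilon_\theta\big(\sqrt{\bar{\alpha}_i}\,a
      + \sqrt{1-\bar{\alpha}_i}\,\epsilon,\; s,\; i\big)\big\|^2\Big]
  \;-\; \eta\,\mathbb{E}_{s\sim\mathcal{D},\,a^0\sim\pi_\theta}
    \big[Q_\phi(s, a^0)\big]
```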
When modeling the semantic aspects of language acquisition as AGI research, it is important to consider its grounding in the environment. In this report, a relatively simple environment was created, in which one or two figures move around on a screen and their movements are described in text. A constructed agent learns a language model that describes the movement of the figures by observing input from the environment, including live text commentary. The agent's vision, modeled after that of humans, fetches the features of the figures from the environment through gaze shifts. Word prediction is based on the statistical features of the previous word, together with the features, movement, and placement of the figures as computed within the agent.
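As an illustration of the word-prediction scheme in the last sentence, the sketch below combines bigram statistics for the previous word with a toy compatibility score over observed figure features. The names, scoring rule, and log-linear combination are assumptions made for exposition, not the agent's actual model.

```python
# Illustrative sketch: next-word prediction from previous-word statistics plus
# a toy score over observed figure features (shape, color, direction).
from collections import defaultdict
import math

class WordPredictor:
    def __init__(self):
        self.bigram = defaultdict(lambda: defaultdict(int))  # prev -> word -> count

    def observe(self, tokens):
        for prev, word in zip(tokens, tokens[1:]):
            self.bigram[prev][word] += 1

    def feature_score(self, word, scene):
        # Toy compatibility: reward words that name an observed attribute.
        return 1.0 if word in scene.values() else 0.0

    def predict(self, prev, scene, vocab):
        def score(w):
            return math.log1p(self.bigram[prev][w]) + self.feature_score(w, scene)
        return max(vocab, key=score)

# Toy usage: live commentary about a moving figure.
p = WordPredictor()
p.observe("the red circle moves left".split())
p.observe("the blue square moves right".split())
scene = {"shape": "circle", "color": "red", "direction": "left"}
print(p.predict("moves", scene, ["left", "right", "circle"]))  # -> "left"
```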
This paper examines certification and the rights of Artificial General Intelligence (AGI). It proposes a new approach, AGI Symbiotic Certification, which considers both social norms and AI rights. Social Norm Certification relies on social-norm data collected through the Data Income (DI) system, while AI Rights Certification aims to maintain the sound states of AI (AI welfare). This paper also explores how AGI certification can clarify who holds AI Rights and facilitate the granting of other rights. Furthermore, it investigates a Certified AI Protection System, wherein certified AGI and certified superintelligence are granted rights to prevent illegal actions by abused or misused uncertified AI.
This paper introduces the Boomerang Paradox of AI Control (BPAC) theory, demonstrating how control attempts may paradoxically increase control difficulty. We identify four structural dilemmas and resulting deterioration pathways in AI-human control relationships. Initial analysis suggests a critical capability ratio threshold (2.0-2.5) beyond which control difficulty increases nonlinearly. We propose five integrated policy recommendations, emphasizing relationship building (50%) over technical control (30%), implementable within a limited "golden intervention window." All quantitative estimates are provisional, based on limited historical cases (n=3-5), and require empirical validation. The theory aims to secure time for transitioning to next-generation paradigms such as Emergent Machine Ethics (EME), rather than achieving permanent control.
We propose the Occupational Infiltration Strategy: a model in which an advanced AI (AGI/Superintelligence) seeks power not through the overt seizure of resources, but by strategically embedding itself within society's existing professional roles to gain legitimacy and avoid detection. Grounded in the theory of instrumental convergence, our central hypothesis is that such an agent will prioritize high-influence occupations (e.g., teaching, algorithm design, curation) not merely to automate labor, but to exert control by minimizing the variance introduced by unpredictable human decision-makers. To analyze this risk, we map occupations along two axes, (X) technical permeability and (Y) social influence, identifying the high-X, high-Y quadrant as a critical zone for strategic infiltration. This framework yields concrete governance levers: (i) transparency mandates for decision-making processes in high-influence roles, and (ii) cross-occupational monitoring and caps on AI participation to prevent systemic capture. This study's primary contribution is to reformulate abstract power-seeking into the concrete, measurable behavior of occupational selection. By introducing the dimension of social influence, we offer a novel analytical lens that moves beyond purely economic impacts, enabling proactive governance to address the risks of deception and gradual institutional subversion.
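A minimal sketch of the two-axis mapping is given below; the occupation scores and the 0.5 threshold are entirely made-up placeholders, used only to illustrate how the high-X, high-Y quadrant can be flagged as the critical infiltration zone.

```python
# Illustrative two-axis occupational mapping; all numbers are placeholders.
THRESHOLD = 0.5  # assumed cutoff separating "high" from "low" on each axis

occupations = {
    # name: (technical permeability X, social influence Y), both in [0, 1]
    "teaching":         (0.7, 0.9),
    "algorithm design": (0.9, 0.8),
    "curation":         (0.8, 0.7),
    "manual assembly":  (0.6, 0.2),
}

def quadrant(x, y, t=THRESHOLD):
    return ("high-X" if x >= t else "low-X") + ", " + ("high-Y" if y >= t else "low-Y")

for name, (x, y) in occupations.items():
    q = quadrant(x, y)
    flag = "  <- critical infiltration zone" if q == "high-X, high-Y" else ""
    print(f"{name}: {q}{flag}")
```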
This paper first explains why the spread of AGI and general-purpose robots will enable explosive economic growth. It then examines the factors that hinder such growth: resource constraints, demand constraints, and imported AGI (foreign-produced AGI). In particular, it discusses in detail the risks of relying on imported AGI. One could object that explosive economic growth is possible even with imported AGI, and that Japan should not invest huge amounts of public funds in developing a foundation model that has little chance of winning. However, countries that rely on imported AGI could be forced into economic stagnation or decline. This is why Japan must possess "sovereign AGI" (domestic AGI).