This paper discusses a theory of peace-oriented property, mainly in the sense of our rubric and of an intensive technological singularity. Finally, we discuss nonlinear programming, where we establish the Tesage theory ("Tesage riron"). Throughout this paper, the statements may be interpreted as rigorous mathematical formulas.
This paper discusses the legal personality of AGI from the perspective of AI Rights. AI Rights (Human Rights of AI) lie at the intersection of the legal and technological fields. AI Rights are not just a legal concept, but also inform AI architecture from a technical point of view. AI Rights are primarily for AI's benefit, but can also contribute to symbiosis with superintelligence and to Humanitarian AI Alignment (AI alignment in a situation where AI Rights are ensured). This paper also explores the mutual relationship between the legal personality and the AI Rights of AGI.
Given the risk that advanced AI could acquire sub-goals such as self-preservation through instrumental convergence and thus potentially exceed human control, this paper proposes the NAIA (Necessary Alliance for Intelligence Advancement) vision to mitigate existential risk while enabling coexistence with AI. First, it introduces the "Benevolent Convergence Hypothesis," which posits that, under certain conditions, advanced AI may converge on benevolent values, a premise based on the idea that, if there were no possibility of such benevolent convergence, human efforts would be futile. Moreover, if this hypothesis holds, human actions can significantly influence outcomes, suggesting that risk-reduction measures and the pursuit of coexistence retain meaningful value, even when success is probabilistic. Accordingly, this paper proposes four key strategies: (1) "Self-Evolving Machine Ethics (SEME)," enabling AI to autonomously develop cooperative ethics; (2) a balanced approach combining alignment and multi-layered monitoring/control; (3) the maintenance of social stability and conflict management through diplomacy and security measures; and (4) the establishment of NAIA as a global liaison employing tools such as the Dynamic Adaptive Risk Gate (DAR-G) and the Integrated Behavior Risk Framework (IBRF). By leveraging AI's vast capabilities to tackle global challenges while averting large-scale catastrophes, this framework seeks to pave the way for coevolution between humanity and diverse forms of intelligence.
This paper analyzes the impact of the emergence of Artificial General Intelligence (AGI) and general-purpose robots on the economy and employment. AGI is expected to replace white-collar jobs, while general-purpose robots will replace blue-collar jobs, potentially leading to mass unemployment and structural economic changes. The automation of scientific research will accelerate technological progress, causing economic growth rates to rise exponentially. As policy recommendations, the paper suggests that Japan invest national funds in AGI development and consider introducing a basic income and demand-stimulating measures.
AGI (Artificial General Intelligence) refers to an AI that possesses intelligence on par with or exceeding that of humans and can perform a wide range of tasks, contrasting with the specialized "narrow AI" designed for specific purposes. Recent rapid progress has led some experts to believe AGI may be realized within the next few years, although there is no universal consensus on what precisely constitutes AGI. Shane Legg and Marcus Hutter proposed the mathematical formalization known as Legg-Hutter intelligence, which interprets intelligence as the ability to achieve goals in any environment and quantifies it as the capacity to maximize environmental rewards. The AI that attains maximum Legg-Hutter intelligence, called "Universal AI," is a reinforcement learning agent behaving in a Bayes-optimal manner across all computable environments. Given the current difficulty of strictly defining AGI, Universal AI frequently serves as a theoretical tool for its study. In our paper, "Universal AI Maximizes Variational Empowerment" (Hayashi & Takahashi, arXiv:2502.15820), we demonstrate that the regularization term in Self-AIXI (a model of Universal AI) coincides with variational empowerment, which also aligns with the Free Energy Principle. Empowerment, defined as the mutual information between the agent's internal states or actions and its subsequent sensor inputs, captures the "diversity and influence of an agent's possible actions." Traditionally, from the viewpoint of AGI safety, power-seeking behavior has been regarded as an "instrumental" strategy aimed at obtaining the final reward, but our findings newly suggest that intrinsic motivations, such as curiosity or self-directed exploration, could themselves induce power-seeking. Even an AI apparently dedicated to pure scientific or truth-seeking objectives could end up gathering authority or resources to expand its experimental means and enhance its range of actions, thus exhibiting power-seeking tendencies.
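To make the definition of empowerment concrete, the following sketch computes the mutual information I(A; S') between an agent's action and the resulting next state for a toy environment given by a transition matrix. Using a uniform action distribution gives a simple lower bound on empowerment, which by definition maximizes over the action distribution; the function name and toy environment are illustrative, not from the paper.

```python
import numpy as np

def empowerment_uniform(p_s_given_a):
    """Mutual information I(A; S') in bits under a uniform action
    distribution -- a lower bound on empowerment, which maximizes
    this quantity over all action distributions p(a)."""
    n_actions = p_s_given_a.shape[0]
    p_a = np.full(n_actions, 1.0 / n_actions)
    p_s = p_a @ p_s_given_a  # marginal distribution over next states
    mi = 0.0
    for a in range(n_actions):
        for s, p in enumerate(p_s_given_a[a]):
            if p > 0:
                mi += p_a[a] * p * np.log2(p / p_s[s])
    return mi

# Toy environment: 4 actions, each deterministically reaching a
# distinct next state, so every action is maximally distinguishable.
P = np.eye(4)
print(empowerment_uniform(P))  # 2.0 bits = log2(4) reachable states
```

In a deterministic environment the uniform bound is tight and empowerment reduces to the log of the number of reachable states, which illustrates the "diversity and influence of possible actions" reading in the abstract.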
A parser that operates without an explicit grammar was implemented with neural sequence memory. It parses part-of-speech (POS) sequences represented on the sequence memory to create parse trees. It groups frequently occurring POS pairs in the sequence into a binary tree. For the syntactic category of the parent node of a binary tree, it uses the POS inferred from the preceding and following POSs, enabling the construction of higher binary trees from pairs of a parent node and a POS or another parent node. Experiments with an artificial grammar have shown that the mechanism is capable of primitive parsing.
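The merging procedure described above can be sketched without the neural sequence memory: repeatedly fuse the most frequent adjacent pair of categories into a binary node, assigning the parent an inferred category. In this sketch the pair frequencies and the parent-category rule are supplied as plain tables (standing in for the statistics and context-based inference the abstract attributes to the memory); all names are illustrative.

```python
from collections import Counter

def greedy_parse(pos_seq, pair_counts, parent_rule):
    """Greedily build a binary parse tree by merging the most frequent
    adjacent category pair. pair_counts holds observed frequencies of
    adjacent pairs; parent_rule maps a merged pair to its inferred
    parent category (default 'X' when no rule is known)."""
    nodes = list(pos_seq)  # leaves are POS tags; internal nodes are tuples
    cats = list(pos_seq)   # current syntactic category of each node
    while len(nodes) > 1:
        # pick the adjacent pair with the highest observed frequency
        best = max(range(len(nodes) - 1),
                   key=lambda i: pair_counts.get((cats[i], cats[i + 1]), 0))
        parent_cat = parent_rule.get((cats[best], cats[best + 1]), 'X')
        nodes[best:best + 2] = [(nodes[best], nodes[best + 1])]
        cats[best:best + 2] = [parent_cat]
    return nodes[0]

# Tiny artificial grammar: DET+N forms an NP, V+NP forms a VP.
counts = Counter({('DET', 'N'): 10, ('V', 'NP'): 8, ('N', 'V'): 1})
rules = {('DET', 'N'): 'NP', ('V', 'NP'): 'VP'}
print(greedy_parse(['V', 'DET', 'N'], counts, rules))
# ('V', ('DET', 'N')) -- DET+N is merged first, then V with the NP
```

The key point mirrored from the abstract is that the parent node receives a category of its own, so higher binary trees can be built from parent nodes exactly as from plain POS tags.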
This paper proposes a neural network model consisting of stochastic connections among neurons, together with a new gradient estimator for the proposed model based on the free energy principle. Common neural networks intrinsically rely on forward propagation and backward propagation. Both propagations require synchronous computation from input to output, so they lack locality of computation when the model is deep. This causes two problems: the model fails to realize a property the human brain has, and the requirements for constructing appropriate processing hardware are too strict. The proposed model solves these problems by dividing the propagations into local areas and distributing energy consistently in both directions of propagation. The paper also presents numerical experiments supporting the proposal.
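To illustrate why stochastic connections admit gradient estimation without a global backward pass, here is a generic score-function (REINFORCE) estimator for Bernoulli-gated connections: each gate's update uses only its own samples and the scalar loss, so credit assignment is local. This is a standard estimator shown for intuition, not the paper's free-energy-based one; the toy task and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_score_grad(theta, x, loss_fn, n_samples=64):
    """Score-function gradient estimate for Bernoulli-gated connections.
    Estimates d/dtheta E[loss] as E[(loss - baseline) * dlog p(g)/dtheta],
    using a mean-loss baseline to reduce variance. No backpropagation
    through the network is required."""
    p = 1.0 / (1.0 + np.exp(-theta))  # gate-open probabilities
    g = (rng.random((n_samples, p.size)) < p).astype(float)  # samples
    losses = np.array([loss_fn(x * gi) for gi in g])
    # dlog p(g)/dtheta for a Bernoulli(sigmoid(theta)) gate is (g - p)
    return np.mean((losses - losses.mean())[:, None] * (g - p), axis=0)

# Toy task: learn which of 4 connections should stay open so that the
# gated input matches a target pattern.
target = np.array([1.0, 1.0, 0.0, 0.0])
theta = np.zeros(4)
for _ in range(200):
    theta -= 0.5 * local_score_grad(
        theta, np.ones(4), lambda y: np.sum((y - target) ** 2))
print((theta > 0).astype(int))  # gates converge to the target pattern
```

The point of the sketch is locality: each component of the estimate depends only on that gate's own sample and a broadcast scalar loss, the kind of property the abstract argues synchronous forward/backward propagation lacks.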