While AGI is a potentially very useful future technology, it may set off a race between humans and machines. If, as is widely expected, machines win that race, human employment will be lost. To mitigate this risk of AGI, this paper argues that human society must be redesigned through collaboration between AGI researchers and social scientists.
Artificial General Intelligence has the potential to influence human beings' value judgments and lifestyles. It is therefore important to involve diverse actors in its research from the upstream stage. This report introduces the background of how and why interdisciplinary collaborative studies are required, and then suggests conducting risk management and risk communication from the pre-crisis phase.
I discuss how the emergence of artificial general intelligence (AGI) affects economic growth, employment, and income distribution. If AGI substitutes perfectly for human labor, an AK-type economy emerges. In such an economy, the rate of economic growth rises over the years, the employment rate and the labor share approach 0%, and the capital share approaches 100%. I propose that a basic income can contribute to the well-being of laborers who own no capital.
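The limiting case described in this abstract can be illustrated with the textbook AK growth model; the savings rate $s$ and depreciation rate $\delta$ below are standard parameters of that model, not quantities given in the abstract:

```latex
Y_t = A K_t, \qquad \dot{K}_t = s Y_t - \delta K_t
\;\Rightarrow\; \frac{\dot{K}_t}{K_t} = sA - \delta .
```

In the pure AK limit, growth settles at the constant rate $sA - \delta$; during the transition, as AGI capital progressively substitutes for labor, the growth rate rises toward it. Since output in the limit no longer depends on labor, wage income and hence the labor share vanish, which is the motivation for a basic income financed out of capital income.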
This article gives an overview of patent eligibility for artificial general intelligence and of the practicalities of filing patent applications.
Recently, attention-based neural networks have been successfully applied to various tasks such as machine translation, image captioning, video captioning, speech recognition, and image generation from captions. However, those models lack versatility. In this paper, we review various attention-based neural models and propose a generalized model of attention-based neural networks informed by knowledge of the brain and artificial general intelligence.
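The abstract does not give the formulation of its generalized model; as a minimal sketch of the core operation shared by the attention-based models it reviews, scaled dot-product attention can be written as follows (all shapes are illustrative):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # subtract max for stability
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: weight values V by query-key similarity."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)       # (n_queries, n_keys) similarity matrix
    weights = softmax(scores, axis=-1)  # each row is a distribution over keys
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))   # 2 queries, dimension 4
K = rng.normal(size=(3, 4))   # 3 keys, dimension 4
V = rng.normal(size=(3, 5))   # 3 values, dimension 5
out, w = attention(Q, K, V)
print(out.shape, w.sum(axis=-1))  # (2, 5); attention rows sum to 1
```

The same operation underlies attention over source words (translation), image regions (captioning), and audio frames (speech recognition); what varies across the reviewed models is what the keys and values encode.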
We propose a Japanese sign language recognition system combining a Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM). Existing research has had two problems. First, it has assumed that sign language can be recognized by extracting hand/arm positions and directions as features, although non-manual signals also play an important role in sign language. Second, it has segmented the temporal structure using hand velocity or hand-movement intervals; this assumption may fail to capture the complex temporal structure of sign language. In this research, we created a dataset of videos of the upper bodies of sign language signers using Kinect v2. To extract effective features that include non-manual signals, we fed the visible and depth images of the dataset into the CNN frame by frame. The extracted features were then fed into the LSTM frame by frame to capture the complex temporal structure of sign language. We trained the whole network using the backpropagation algorithm. Comparison of this CNN-LSTM model with control models suggests that it is more effective for sign language recognition.
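The per-frame-CNN-into-LSTM pipeline described above can be sketched as a minimal PyTorch module; the layer sizes, channel count (RGB plus depth stacked as 4 channels), and classifier head are illustrative assumptions, not the paper's actual architecture:

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """Per-frame CNN features fed into an LSTM over time (illustrative sizes)."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(4, 8, kernel_size=3, padding=1),  # 4 channels: RGB + depth
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                    # pool each frame to a vector
        )
        self.lstm = nn.LSTM(input_size=8, hidden_size=16, batch_first=True)
        self.fc = nn.Linear(16, n_classes)

    def forward(self, x):                # x: (batch, frames, channels, H, W)
        b, t, c, h, w = x.shape
        f = self.cnn(x.view(b * t, c, h, w)).view(b, t, -1)  # per-frame features
        out, _ = self.lstm(f)            # temporal structure across frames
        return self.fc(out[:, -1])       # classify from the final time step

clip = torch.randn(2, 5, 4, 32, 32)  # 2 clips, 5 frames each, RGB+depth, 32x32
logits = CNNLSTM()(clip)
print(logits.shape)  # torch.Size([2, 10])
```

Folding the frame dimension into the batch dimension lets one CNN extract features for every frame in a single pass before the LSTM consumes the sequence.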
Observing a smile expressed by a humanlike agent sometimes elicits negative emotional valence. Our hypotheses were as follows: (i) humans predict the facial expressions of others based on an internal model of smile expression formed by the cerebellum; (ii) the smile movements of a humanlike agent differ slightly from those of a human; and (iii) this causes the observer to detect an error when observing the agent's smile, because it deviates from the predicted movement. In this paper, we propose a brain-functional model that explains this error detection.
The Whole Brain Architecture (WBA) project aims to create an artificial general intelligence with near-human capabilities by implementing brain regions as machine learning algorithms and connecting them according to the architecture of the brain. To encourage community-based development of WBA, we are implementing BriCA (Brain-inspired Computing Architecture), which can connect and execute an arbitrary number of machine learning components. In this presentation I will introduce the requirements analysis, concepts, implementation status, and future work for the core features of BriCA.
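The abstract does not show BriCA's API; as a purely hypothetical sketch (class and method names invented here, not BriCA's real interface) of how an arbitrary number of components might be connected through ports and stepped by a scheduler:

```python
# Hypothetical port-based component execution (NOT BriCA's actual API):
# each component reads its input port, computes, and writes its output port.
class Component:
    def __init__(self, fn):
        self.fn = fn
        self.in_port = None
        self.out_port = None

    def step(self):
        self.out_port = self.fn(self.in_port)

class Agent:
    """Fires every component, then propagates outputs along the wiring."""
    def __init__(self):
        self.components, self.wires = [], []

    def add(self, comp):
        self.components.append(comp)
        return comp

    def connect(self, src, dst):
        self.wires.append((src, dst))

    def step(self):
        for c in self.components:
            c.step()
        for src, dst in self.wires:   # copy outputs after all components fire
            dst.in_port = src.out_port

agent = Agent()
sensor = agent.add(Component(lambda _: 1.0))            # emits a constant signal
doubler = agent.add(Component(lambda x: (x or 0) * 2))  # downstream learner stand-in
agent.connect(sensor, doubler)
agent.step(); agent.step()           # signal needs two steps to propagate through
print(doubler.out_port)              # 2.0
```

Deferring port propagation until all components have fired makes execution order-independent within a step, which is what allows an arbitrary number of components to be wired together.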
In recent years, deep learning, a technique that stacks neural networks into many layers, has attracted attention. The hidden layers of a deep learning network can extract the features latent in the input data, but most research addresses the goal of improving accuracy on classification and regression problems, and few studies take the approach of asking whether the extracted features themselves can be reused. This study therefore proposes CNN-AE-FV-LSTM (CAFL), a model that makes the features obtained in the hidden layers available for reuse, combining a Convolutional Neural Network-type Auto Encoder (CNN-AE) with a Long Short-Term Memory augmented with a flag vector (FV-LSTM).
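The reusable-feature idea in the CNN-AE part can be sketched as a minimal convolutional autoencoder whose encoder output is the feature handed to a downstream model; the sizes below are illustrative assumptions, and the flag-vector LSTM is omitted:

```python
import torch
import torch.nn as nn

class ConvAutoEncoder(nn.Module):
    """Convolutional autoencoder; the encoder output is the reusable feature."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 4, kernel_size=3, stride=2, padding=1),  # 28x28 -> 14x14
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(4, 1, kernel_size=4, stride=2, padding=1),  # back to 28x28
            nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)           # latent features, reusable downstream
        return self.decoder(z), z     # reconstruction trains the encoder

x = torch.rand(2, 1, 28, 28)
recon, feats = ConvAutoEncoder()(x)
print(recon.shape, feats.shape)  # (2, 1, 28, 28) and (2, 4, 14, 14)
```

Training minimizes reconstruction error between `recon` and `x`, so the bottleneck `feats` is forced to encode the input's salient structure rather than being tuned to one classification target.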
Whether or not we bring about the singularity, no researcher denies that artificial intelligence (AI) will advance considerably in the years ahead. The recent Deep Learning (DL) boom is beginning to settle down, and research goals will likely become increasingly specialized. The author is interested not in DL as a tool, but in fusing DL with findings from brain science to elucidate the principles by which humans give rise to intelligence, and in establishing a methodology for exploiting those principles in engineering. Humans acquired intelligence in order to survive: language acquisition served survival, as did the acquisition of the interaction abilities needed to form communities (sociality); yet precisely these abilities, taken for granted between humans, are what current AI is worst at. This paper therefore considers architecture design toward realizing the capabilities that an AI requiring human interaction must have if AI is to make its next leap: "reading the atmosphere," tacit mutual understanding, reactivity, adaptability, and the addition, reuse, and combination of learned skills.
Artificial general intelligence (AGI) needs machine learning technology that acquires knowledge shared within a task region from data and from generic prior knowledge. A feasible ultimate AGI will exploit various constraints of the physical world. This paper considers generic prior knowledge based on physical constraints as well as on information theory.
There are many cognitive architectures available nowadays. However, it is difficult to compare these architectures because there is no standard measure for evaluating them. In this paper, we propose a method for evaluating cognitive architectures based on the CHC model.