Time-series data subspaces that can be considered equivalent are called equivalence structures. Extracting equivalence structures is useful for various purposes, such as feature extraction, analysis of neural network behavior, and matching the dimensions of different datasets. However, the extraction requires comparing all possible subspaces of the whole space of a multidimensional time-series dataset, and the number of comparisons explodes combinatorially with the dimension of the subspaces. In this paper, we analyze the characteristics of equivalence structures toward the development of fast equivalence structure extraction.
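To make the combinatorial cost concrete, the following Python sketch enumerates every pair of k-dimensional subspaces of a multidimensional time series and compares them. The correlation-based equivalence test and the function name are illustrative assumptions, not the extraction method of the paper; the point is only that the comparison count grows as C(d, k)^2.

```python
# Illustrative sketch only: brute-force enumeration of subspace pairs.
# The correlation-based equivalence test below is an assumption for
# demonstration, not the extraction method proposed in the paper.
from itertools import combinations
import numpy as np

def naive_equivalence_pairs(data, k, threshold=0.95):
    """Compare every pair of k-dimensional subspaces of `data`
    (shape: time x dims). Cost grows as C(d, k)^2, which explodes
    combinatorially with the subspace dimension k."""
    dims = range(data.shape[1])
    subspaces = list(combinations(dims, k))   # C(d, k) subspaces
    pairs = []
    for a, b in combinations(subspaces, 2):   # ~C(d, k)^2 / 2 comparisons
        # Hypothetical test: mean absolute correlation between the
        # matched coordinates of the two subspaces.
        corr = np.mean([abs(np.corrcoef(data[:, i], data[:, j])[0, 1])
                        for i, j in zip(a, b)])
        if corr >= threshold:
            pairs.append((a, b))
    return pairs

# Example: 10 dimensions with k = 3 already yields C(10,3) = 120
# subspaces and 7,140 pairwise comparisons; the count grows rapidly
# with both k and the total dimension d.
data = np.random.randn(200, 10)
print(len(naive_equivalence_pairs(data, k=3)))
```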
This paper presents a method for handling general time-series data. First, the time series is divided into basic strings in which no component appears more than once, and a neural network that accepts these basic strings is presented. Then, a hierarchically connected neural network handles the target time series by treating a sequence of basic strings as the time series of the upper hierarchy. The behavior of the network is described using elements similar to those of electronic circuits, but it operates on a principle different from oscillation-circuit-like systems that use shift registers or recurrent networks. The behavior is confirmed by simulation in the C language, and the hierarchical composition and bidirectional communication are also demonstrated.
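A minimal sketch of the segmentation step as we read it: the series is cut into basic strings in which no component repeats. The greedy left-to-right cutting rule is our assumption, and the hierarchical network itself is not reproduced here; the resulting sequence of basic strings would then serve as the time series of the upper hierarchy.

```python
# Minimal sketch of the segmentation step as described: cut a symbol
# sequence into "basic strings" in which no component appears twice.
# The greedy left-to-right cutting rule is our assumption; the paper's
# hierarchical network is not reproduced here.
def to_basic_strings(sequence):
    segments, current, seen = [], [], set()
    for symbol in sequence:
        if symbol in seen:            # a repetition would occur: cut here
            segments.append(current)
            current, seen = [], set()
        current.append(symbol)
        seen.add(symbol)
    if current:
        segments.append(current)
    return segments

series = list("abcabdaab")
print(to_basic_strings(series))
# [['a','b','c'], ['a','b','d'], ['a'], ['a','b']]
```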
This article argues that the whole brain architecture (WBA) approach will be an advantageous way to realize human-level linguistic functions for AGI. The WBA approach aims to realize AGI by mimicking the entire architecture of the brain. The article also reviews current research related to the subject in areas such as cognitive science, artificial neural networks, and neuroscience.
With the development of deep learning, it has become more important to verify which methods are valid for the prediction of time-series data. In this study, we propose a new method of time-series prediction using multiple deep learners and a Bayesian network. Training data is divided into clusters with K-means clustering, and one deep learner is trained on each cluster. A naive Bayes classifier is then used to determine which deep learner is in charge of predicting a given time series. The proposed method is applied to financial time-series data, and predicted results for the Nikkei 225 are demonstrated.
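A compact sketch of this pipeline, with ordinary scikit-learn models standing in for the components: K-means clusters the training windows, a small regressor per cluster plays the role of a deep learner, and a Gaussian naive Bayes classifier routes each new window to the learner in charge of it. The window length, model choices, and synthetic data are illustrative assumptions, not the paper's actual configuration.

```python
# Sketch of the described pipeline with stand-ins: K-means clusters the
# training windows, one small regressor per cluster plays the role of a
# deep learner, and a naive Bayes classifier routes each new window to
# the learner in charge of it. Window length and models are assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPRegressor

def make_windows(series, width):
    X = np.array([series[i:i + width] for i in range(len(series) - width)])
    y = np.array(series[width:])
    return X, y

series = np.cumsum(np.random.randn(1000))        # synthetic price-like data
X, y = make_windows(series, width=20)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
labels = kmeans.labels_

# One learner per cluster, trained only on that cluster's windows.
learners = {c: MLPRegressor(hidden_layer_sizes=(32,), max_iter=500,
                            random_state=0).fit(X[labels == c], y[labels == c])
            for c in np.unique(labels)}

# Naive Bayes gate: decide which learner handles a new window.
gate = GaussianNB().fit(X, labels)

x_new = X[-1:]
c = gate.predict(x_new)[0]
print("cluster:", c, "prediction:", learners[c].predict(x_new)[0])
```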
This research explores the information-processing functions arising from the unique structure of the hippocampus. Specifically, two structural features are elaborated in this paper: the first is neurogenesis, which takes place in the dentate gyrus, and the second is the recursive neuronal projection seen in CA3. Here we refer to the representations of Restricted Boltzmann Machines (RBMs), one of the important forms of deep learning, and their structurally extended models. In addition to the functionalities of ordinary RBMs, the extended RBMs acquire novel features that may provide insight into the explanation of cognitive functions in the hippocampus.
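Since the discussion builds on ordinary RBMs, a minimal contrastive-divergence (CD-1) training step in NumPy is sketched below. It illustrates only the baseline model that the extensions start from, not the variants with neurogenesis or CA3-like recurrent projections.

```python
# Minimal RBM with one step of contrastive divergence (CD-1) in NumPy.
# This is the ordinary baseline model the paper starts from, not the
# extended variants with neurogenesis or recurrent CA3-like projections.
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden, lr = 6, 4, 0.1
W = rng.normal(0, 0.01, (n_visible, n_hidden))
b, c = np.zeros(n_visible), np.zeros(n_hidden)  # visible / hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0):
    """One CD-1 update for a single binary visible vector v0."""
    global W, b, c
    ph0 = sigmoid(v0 @ W + c)                  # P(h=1 | v0)
    h0 = (rng.random(n_hidden) < ph0) * 1.0    # sample hidden states
    pv1 = sigmoid(h0 @ W.T + b)                # reconstruct visibles
    ph1 = sigmoid(pv1 @ W + c)                 # hidden probs on reconstruction
    W += lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))
    b += lr * (v0 - pv1)
    c += lr * (ph0 - ph1)

v = np.array([1, 0, 1, 1, 0, 0], dtype=float)
for _ in range(100):
    cd1_step(v)
print(sigmoid(v @ W + c))  # hidden activations after training
```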