電気学会論文誌C(電子・情報・システム部門誌)
Online ISSN : 1348-8155
Print ISSN : 0385-4221
ISSN-L : 0385-4221
Volume 110, Issue 3
Showing 1-14 of the 14 articles in this issue
  • 平田 廣則
    1990 Volume 110 Issue 3 Pages 111
    Published: 1990/03/20
    Released online: 2008/12/19
    JOURNAL FREE ACCESS
  • 麻生 英樹
    1990 Volume 110 Issue 3 Pages 112-118
    Published: 1990/03/20
    Released online: 2008/12/19
    JOURNAL FREE ACCESS
  • 二木 徹
    1990 Volume 110 Issue 3 Pages 119-125
    Published: 1990/03/20
    Released online: 2008/12/19
    JOURNAL FREE ACCESS
    This paper discusses an interpretation of the processes in the hidden and output layers of a feedforward neural network trained by the backpropagation algorithm. We treat a network applied to a class discrimination task. Based on the obtained interpretation, we propose a new method for determining the initial values of the backpropagation algorithm. The basic idea is as follows: the initial weight vectors of the hidden layer are determined by linear discriminant analysis on the original input data, and those of the output layer are chosen to be equal to the coefficients of the linear discriminant function derived from the outputs of the hidden layer. The proposed method can significantly improve the rate of learning, as verified by hand-written numeral recognition experiments.
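    A minimal Python sketch (assuming scikit-learn; not the authors' code) of the initialization idea described above: hidden-layer weights are taken from a linear discriminant analysis of the raw inputs, and output-layer weights from a second discriminant analysis of the resulting hidden activations. The use of one hidden unit per discriminant function and the sigmoid nonlinearity are assumptions.
        # Sketch of LDA-based initialization for a one-hidden-layer network.
        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        def lda_initial_weights(X, y):
            """Return (W_hidden, b_hidden, W_output, b_output) for backpropagation."""
            # Hidden layer: coefficients of the linear discriminant functions on X.
            lda_in = LinearDiscriminantAnalysis().fit(X, y)
            W_h, b_h = lda_in.coef_, lda_in.intercept_
            # Hidden activations under these initial weights (sigmoid units assumed).
            H = 1.0 / (1.0 + np.exp(-(X @ W_h.T + b_h)))
            # Output layer: discriminant coefficients computed on the hidden outputs.
            lda_out = LinearDiscriminantAnalysis().fit(H, y)
            return W_h, b_h, lda_out.coef_, lda_out.intercept_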
  • 西村 和夫, 川崎 美穂, 島田 毅
    1990 Volume 110 Issue 3 Pages 126-132
    Published: 1990/03/20
    Released online: 2008/12/19
    JOURNAL FREE ACCESS
    Evaluation of system state is becoming increasingly important in supervising and controlling huge systems such as power generation and transmission systems and industrial plants.
    In this paper, the authors discuss two methods of building neural networks for this purpose. One is based upon backpropagation and the other is a new method proposed by the authors. Unlike backpropagation, which forms a completely distributed activation structure of neurons, the proposed method enables self-organization of modules that are activated selectively for an input pattern. The new method not only produces a biologically natural neural network structure but also considerably shortens the learning time by localizing the links to be updated in the learning process.
  • 石川 真澄
    1990 Volume 110 Issue 3 Pages 133-140
    Published: 1990/03/20
    Released online: 2008/12/19
    JOURNAL FREE ACCESS
    This paper deals with an experimental evaluation and an improvement of the learning performance of two connectionist models proposed as dynamic ones, i.e., buffer models and context models. Learning performance here is composed of two factors: learning speed and learning ability, the latter defined as the average squared output error after learning. Since continuous-valued sequences are closely related to control problems, the author proposes to explicitly introduce time-varying inputs to both types of models. Also proposed is the use of more than one context layer in context models to improve the learning ability.
    Simulation experiments reveal the following. In the learning of character sequences, context models with fewer units have greater learning ability than buffer models using the same order of computation. Buffer models have both higher learning speed and greater learning ability than context models in the learning of linear continuous-valued sequences. In the learning of nonlinear continuous-valued sequences, buffer models have far greater learning ability than linear buffer models as well as context models. As a final example, an actual linear continuous-valued sequence is adopted to evaluate various combinations of models and learning algorithms. Linear buffer models using the learning algorithm with forgetting previously proposed by the author turn out to be the best in terms of the average squared output errors for both training and testing data; they are even better than the traditional GMDH.
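    For orientation only, a minimal Python sketch of the two model types named above, under assumed dimensions and a sigmoid nonlinearity (not taken from the paper): a buffer model feeds a window of the last k inputs forward, while a context model feeds the current input together with the previous hidden activations held in a context layer.
        import numpy as np

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        def buffer_step(x_window, W_h, W_o):
            """Buffer model: one output from the last k inputs held in a delay line."""
            h = sigmoid(W_h @ x_window.ravel())
            return W_o @ h

        def context_step(x_t, context, W_in, W_ctx, W_o):
            """Context model: current input plus the stored previous hidden state."""
            h = sigmoid(W_in @ x_t + W_ctx @ context)
            return W_o @ h, h      # the new hidden state becomes the next context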
  • 浅川 和雄, 渡部 信雄, 川村 旭, 益岡 竜介, 棚橋 純一, 山田 博
    1990 Volume 110 Issue 3 Pages 141-147
    Published: 1990/03/20
    Released online: 2008/12/19
    JOURNAL FREE ACCESS
    The learning process of multi-layered neural networks and a fast learning algorithm for them are presented in this paper. The ability of hidden units was tested in learning high-order nonlinear functions, and the number of hidden units was also optimized in the learning.
    The pseudo impedance method is proposed as a fast learning algorithm by analogy with mechanical impedance control. In the pseudo impedance method, the learning parameters are determined by the neural network's virtual mass, damping coefficient, and stiffness. The usefulness of this method was verified by comparison with the error backpropagation method.
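    The abstract does not give the update rule itself; the following Python fragment is only one plausible reading of the mechanical analogy, treating each weight as a mass-damper-spring system driven by the negative error gradient, with the virtual mass m, damping coefficient c, and stiffness k as the assumed learning parameters.
        import numpy as np

        def impedance_update(w, v, grad, m=1.0, c=0.5, k=0.01, dt=0.1):
            """One weight update for weights w and velocities v (both arrays)."""
            a = (-grad - c * v - k * w) / m    # m*a + c*v + k*w = -dE/dw
            v = v + dt * a                     # integrate virtual velocity
            w = w + dt * v                     # integrate weight position
            return w, v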
  • 福見 稔, 細川 直史, 大松 繁
    1990 Volume 110 Issue 3 Pages 148-155
    Published: 1990/03/20
    Released online: 2008/12/19
    JOURNAL FREE ACCESS
    In pattern recognition, we often deal with classification problems in which the retinal input patterns may be subject to translation, rotation, scale change, perspective change, etc. Usual pattern recognition methods based on statistical discriminant functions are not powerful enough to classify such patterns.
    In this paper, a new approach to pattern recognition by a neural network is proposed, based on the combination of an invariance network and a descrambler network. The invariance network plays an important role in producing a set of outputs that are invariant to translation, rotation, etc., of the retinal input pattern. The descrambler network is used to classify the scrambled data into the original patterns by using the back-propagation algorithm. The structure of the proposed neural networks is similar to that of Widrow et al., but sigmoid functions are adopted as the nonlinear elements, whereas Widrow's MR II is based on signum functions; hence, the back-propagation method can be used as the learning algorithm of the descrambler network.
    Finally, some numerical results are illustrated to show the effectiveness of the present algorithm for pattern recognition.
  • 林 陽一, 坂田 正輝, 中尾 隆司, 大橋 新悟
    1990 Volume 110 Issue 3 Pages 156-165
    Published: 1990/03/20
    Released online: 2008/12/19
    JOURNAL FREE ACCESS
    Various learning algorithms for neural networks have recently appeared and have stimulated applications of neural networks in new problem areas. This paper presents new results obtained by applying the distributed method proposed by Gallant to problems in alphanumeric character recognition. This method uses a distributed neural network with the Pocket Algorithm, which is a modification of Perceptron Learning. The authors evaluate the recognition (classification) capability of the distributed method for 62 and 93 alphanumeric characters of a single font having different kinds of typeface quality and for 76 alphanumeric characters of multiple fonts (12 types of fonts) having the same typeface quality. Furthermore, we propose a useful technique to distinguish between characters that closely resemble each other by using the structural information of the characters. We also propose an activation criterion for output cells in character recognition.
    In the recognition of alphanumeric characters having different kinds of typeface quality, a markedly high recognition accuracy (99.96% maximum, 99.74% on average) for individual font qualities was attained. In the recognition of the 76 alphanumeric characters of multiple fonts, a very high recognition rate (99.64% maximum) was also achieved. The relations among the recognition rate, the number of items of training data, the number of intermediate cells, and the number of training iterations are also considered in this paper.
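    As context for the method named above, here is a minimal Python sketch of the Pocket Algorithm for a single output cell (a standard formulation, not the authors' implementation): ordinary perceptron corrections are applied, but the weight vector with the longest run of consecutive correct classifications is kept "in the pocket" and returned.
        import numpy as np

        def pocket_algorithm(X, y, iterations=10000, seed=0):
            """X: (n, d) inputs, y: (n,) labels in {-1, +1}; returns pocket weights."""
            rng = np.random.default_rng(seed)
            w = np.zeros(X.shape[1])
            pocket_w, run, best_run = w.copy(), 0, 0
            for _ in range(iterations):
                i = rng.integers(len(X))            # draw one training example
                if np.sign(X[i] @ w) == y[i]:
                    run += 1
                    if run > best_run:              # longer correct streak: keep w
                        best_run, pocket_w = run, w.copy()
                else:
                    w = w + y[i] * X[i]             # perceptron correction
                    run = 0
            return pocket_w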
  • 福岡 豊, 岡田 英史, 中条 義隆, 南谷 晴之
    1990 Volume 110 Issue 3 Pages 166-172
    Published: 1990/03/20
    Released online: 2008/12/19
    JOURNAL FREE ACCESS
    This study proposes a new signal processing system using a neural network model for laser Doppler velocimetry (LDV). The proposed system is based on spectrum analysis using the FFT and shares its basic structure with the burst spectrum analyzer (BSA). Though the BSA is one of the useful signal processors for LDV, the accuracy of velocity measurements decreases at a low S/N ratio. A neural network model was therefore incorporated into the BSA system in order to increase the measurement accuracy. The model learns the variance of the Doppler frequency related to velocity in order to predict the succeeding Doppler frequency. Thereafter, the post-processor calculates a weight function in the spectral domain and computes the Doppler frequency from the weighted spectrum.
    Actual velocity measurements on a rotating acrylic disk were carried out, and the results demonstrated the usefulness of the system: it allowed an increase in the accuracy of velocity measurement.
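    A hedged Python sketch of the post-processing step described above: the FFT spectrum of a sampled Doppler burst is weighted around the frequency predicted by the network, and the Doppler frequency is taken from the weighted spectrum. The Gaussian form and width of the weight function are assumptions; the abstract does not specify them.
        import numpy as np

        def weighted_doppler_frequency(burst, fs, f_predicted, sigma=500.0):
            """Estimate the Doppler frequency (Hz) from one burst sampled at fs."""
            spectrum = np.abs(np.fft.rfft(burst))
            freqs = np.fft.rfftfreq(len(burst), d=1.0 / fs)
            weight = np.exp(-0.5 * ((freqs - f_predicted) / sigma) ** 2)
            return freqs[np.argmax(weight * spectrum)]   # peak of weighted spectrum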
  • 武藤 昭一, 吉澤 純一, 松田 聖, 田村 滋, 真鳥 岩男, 川上 潤三, 狩野 泰信
    1990 Volume 110 Issue 3 Pages 173-181
    Published: 1990/03/20
    Released online: 2008/12/19
    JOURNAL FREE ACCESS
    Experts who know the general causal relation between input and behavior in a physical system can estimate the behavior for an unexperienced input after learning only a few samples of the relation. Applying a framework of hypothesis-based reasoning to such expert learning, this paper proposes a learning method based on that framework. Power system stability evaluation is taken as an example of such a physical system. In the proposed method, inference is made from
    (1) Facts (results of simulations),
    (2) Hypotheses created on comparison with the facts,
    (3) Causal relation about power system stability,
    (4) Conflict between the hypotheses and the causal relation.
    In addition, an accuracy factor of each hypothesis is used for hypothesis selection and conflict elimination.
    The learning system was developed, and the learning results are illustrated. The results demonstrate the usefulness of the proposed learning method.
  • 須貝 康雄, 小圷 成一, 平田 廣則
    1990 Volume 110 Issue 3 Pages 182-190
    Published: 1990/03/20
    Released online: 2008/12/19
    JOURNAL FREE ACCESS
    We propose a new method based on competitive learning for placement in VLSI layout design. In the placement problem, which is one of the combinatorial optimization problems, cells having various shapes and sizes are to be compactly placed so as to minimize the routing length of the signal nets. It is difficult both to construct a cost function and to determine the values of the parameters contained in it.
    Recently, applications of Hopfield networks to combinatorial optimization problems have been reported; in such cases, however, a cost function is requisite. Using learning schemes makes it possible to avoid the need for a cost function.
    Competitive learning is one of the learning paradigms without a teacher in the framework of neural networks. It provides a way to discover salient features that can be used to classify a set of patterns.
    By skillfully mapping net data to input patterns, it is possible to minimize wiring and area without any cost function, because they can be evaluated automatically during the learning process. Numerical experiments show that the proposed method can produce optimal placements in a relatively small amount of computational time.
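    For reference, a minimal Python sketch of the competitive-learning update itself (winner-take-all with learning rate alpha). How cells and net data are encoded as input patterns for the placement problem is the paper's contribution and is not reproduced here.
        import numpy as np

        def competitive_step(W, x, alpha=0.05):
            """W: (n_units, d) reference vectors, x: (d,) input pattern."""
            winner = np.argmin(np.linalg.norm(W - x, axis=1))   # nearest unit wins
            W[winner] += alpha * (x - W[winner])                # move winner toward x
            return winner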
  • 福見 稔, 大松 繁
    1990 Volume 110 Issue 3 Pages 191-197
    Published: 1990/03/20
    Released online: 2008/12/19
    JOURNAL FREE ACCESS
    In this paper, we propose a new neuron model, the CONE (Coupled Neuron), and a new algorithm for training networks of CONEs. The CONE possesses two functions, the “sigmoid” and the “signum” function, for mapping an input pattern into an output. Such networks can therefore be trained by the familiar back-propagation (BP) technique, since the CONE uses the differentiable sigmoid function for its trainability.
    The BP and MR II algorithms have been developed to date; however, both methods have advantages and disadvantages. The CONE proposed in this paper takes advantage of the key ideas of both, namely, the systematic treatment of networks by gradient descent and the quick convergence of the best-fitting MR II. The learning algorithm is called the CNR, or Coupled Neuron Rule. The CONE appears to be an effective processing element for image compression, speech coding, etc., because it possesses the signum function to provide 2-level outputs, which can be expanded to 4-level, 8-level, etc.
    By applying the CNR to simple networks of CONEs, it is shown that the speed of convergence of the CNR is much faster than that of the popular BP method when a variable learning rate is used. Simulation results are illustrated to show the effectiveness of the CNR algorithm.
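    A hedged Python sketch of a coupled-neuron forward pass as the abstract describes it: a single weighted sum is passed through both a sigmoid (the differentiable path usable for BP/CNR training) and a quantizer giving 2-, 4-, or 8-level outputs. The exact coupling of the two outputs inside the CONE is an assumption.
        import numpy as np

        def cone_forward(x, w, b, levels=2):
            """Return (sigmoid output, quantized output) of one coupled neuron."""
            s = w @ x + b
            soft = 1.0 / (1.0 + np.exp(-s))                      # trainable path
            hard = np.round(soft * (levels - 1)) / (levels - 1)  # 2-/4-/8-level output
            return soft, hard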
  • 林 陽一, 中井 正人
    1990 Volume 110 Issue 3 Pages 198-206
    Published: 1990/03/20
    Released online: 2008/12/19
    JOURNAL FREE ACCESS
    Neural network models and the automatic generation of expert systems based on learning processes are attracting growing interest as useful tools for mainstream tasks involving artificial intelligence. Neural networks embody the information derived from the training data (examples) and are implicitly assumed to contain the IF-THEN rules and/or knowledge base used in expert systems.
    This paper proposes a method to automatically extract fuzzy IF-THEN rules with a “linguistic relative importance” for each proposition in an antecedent by using a feed-forward neural network. The linguistic relative importance, which is defined by a fuzzy set, represents the degree of effect of each proposition on the consequence. By providing a linguistic relative importance for each proposition, each fuzzy IF-THEN rule has a more flexible expression than ordinary IF-THEN rules. Furthermore, the truthfulness of each fuzzy IF-THEN rule is given in the form of a linguistic truth value, which is also defined by a fuzzy set. The enhancement of knowledge representation capability and flexibility obtained by using fuzzy IF-THEN rules with linguistic relative importance facilitates the automated extraction of IF-THEN rules from neural networks.
    First, we give an algorithm to select the propositions in an antecedent (IF-part), that is, to extract the framework of the fuzzy IF-THEN rules. Second, we show a method to give the truthfulness of the extracted fuzzy IF-THEN rules. Furthermore, we propose a method to determine the linguistic relative importance of each proposition in an antecedent. To demonstrate the validity of the proposed method, an illustrative example is solved.
  • 山口 亨, 今崎 直樹, 春木 和仁
    1990 Volume 110 Issue 3 Pages 207-215
    Published: 1990/03/20
    Released online: 2008/12/19
    JOURNAL FREE ACCESS
    In this paper, a reasoning and learning method for fuzzy rules, which employs associative memories, is presented. Fuzzy rules are described by neural networks such that a proposition and an IF-THEN relation in the rules are stored in a layered neural network and in a bidirectionally connected neural network implemented as an associative memory, respectively. Fuzzy reasoning is performed by the dynamical changes in the associative memory. A learning function is provided to add and correct fuzzy rules. An application of this method to performance estimation for elevator group control systems is reported.