IEEJ Transactions on Electronics, Information and Systems
Online ISSN : 1348-8155
Print ISSN : 0385-4221
ISSN-L : 0385-4221
Volume 110 , Issue 3
Showing all 14 articles from the selected issue
  • Hironori Hirata
    1990 Volume 110 Issue 3 Pages 111
    Published: March 20, 1990
    Released: December 19, 2008
    JOURNALS FREE ACCESS
    Download PDF (82K)
  • Hideki Asoh
    1990 Volume 110 Issue 3 Pages 112-118
    Published: March 20, 1990
    Released: December 19, 2008
  • Toru Niki
    1990 Volume 110 Issue 3 Pages 119-125
    Published: March 20, 1990
    Released: December 19, 2008
This paper discusses an interpretation of the processes in the hidden and output layers of a feedforward neural network trained by the backpropagation algorithm. We treat a network applied to a class-discrimination task. Based on the obtained interpretation, we propose a new method for determining the initial values for the backpropagation algorithm. The basic idea is as follows: the initial weight vectors of the hidden layer are determined by linear discriminant analysis on the original input data, and those of the output layer are chosen to equal the coefficients of the linear discriminant function derived from the output of the hidden layer. The proposed method can significantly improve the rate of learning, as verified by hand-written numeral recognition experiments.
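The initialization idea in this abstract is concrete enough to sketch. The snippet below, a minimal NumPy illustration rather than the paper's implementation, seeds a hidden unit's weight vector with the Fisher linear discriminant direction computed from raw input data; the toy data and all names are invented for illustration.

```python
import numpy as np

# Toy two-class data standing in for the "original input data".
rng = np.random.default_rng(0)
X0 = rng.normal(loc=-1.0, scale=0.5, size=(50, 2))   # class 0 samples
X1 = rng.normal(loc=+1.0, scale=0.5, size=(50, 2))   # class 1 samples

def fisher_direction(X0, X1):
    """Fisher LDA direction w = Sw^{-1} (m1 - m0)."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Within-class scatter matrix (unnormalized sum of squared deviations).
    Sw = np.cov(X0, rowvar=False) * (len(X0) - 1) \
       + np.cov(X1, rowvar=False) * (len(X1) - 1)
    return np.linalg.solve(Sw, m1 - m0)

# Use the discriminant direction as a hidden unit's initial weight vector,
# so the unit starts out already separating the classes.
w_init = fisher_direction(X0, X1)
```

Starting backpropagation from such a vector means the hidden unit's projection already separates the two classes, which is the mechanism behind the reported speed-up.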
  • Kazuo Nishimura, Miho Kawasaki, Tsuyoshi Shimada
    1990 Volume 110 Issue 3 Pages 126-132
    Published: March 20, 1990
    Released: December 19, 2008
Evaluation of the system state is becoming critically important in supervising and controlling huge systems such as power generation and transmission systems and industrial plants.
In this paper, the authors discuss two methods of building neural networks for this purpose. One is based on backpropagation, and the other is a new method proposed by the authors. Unlike backpropagation, which forms a completely distributed activation structure of neurons, the proposed method enables self-organization of modules that are activated selectively for an input pattern. The new method not only yields a biologically more natural neural network structure but also considerably shortens the learning time by localizing the links to be updated in the learning process.
  • Masumi Ishikawa
    1990 Volume 110 Issue 3 Pages 133-140
    Published: March 20, 1990
    Released: December 19, 2008
This paper deals with an experimental evaluation and an improvement of the learning performance of two connectionist models proposed as dynamic ones, i.e., buffer models and context models. Learning performance here comprises two factors: learning speed and learning ability, defined as the average squared output error after learning. Since continuous-valued sequences are closely related to control problems, the author proposes explicitly introducing time-varying input to both types of models. Also proposed is the use of more than one context layer in context models to improve learning ability.
Simulation experiments reveal the following. In the learning of character sequences, context models with fewer units have greater learning ability than buffer models using the same order of computation. Buffer models have both higher learning speed and greater learning ability than context models in the learning of linear continuous-valued sequences. In the learning of nonlinear continuous-valued sequences, nonlinear buffer models have far greater learning ability than linear buffer models as well as context models. As the final example, an actual linear continuous-valued sequence is adopted to evaluate various combinations of models and learning algorithms. Linear buffer models using the learning algorithm with forgetting, previously proposed by the author, turn out to be the best in terms of average squared output error for both training and testing data, even better than the traditional GMDH.
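As a rough illustration of what a linear buffer model is, the sketch below (invented for illustration, not the paper's model) feeds a sliding window of the last k values of a continuous-valued sequence into a single linear unit fitted by least squares; its mean squared prediction error is the kind of "learning ability" measure the abstract refers to.

```python
import numpy as np

def make_windows(seq, k):
    """Buffer model input: each row holds the last k values; target is the next value."""
    X = np.stack([seq[i:i + k] for i in range(len(seq) - k)])
    y = seq[k:]
    return X, y

t = np.arange(200)
seq = np.sin(0.1 * t)                       # a linear continuous-valued sequence

X, y = make_windows(seq, k=4)
w, *_ = np.linalg.lstsq(X, y, rcond=None)   # fit the linear buffer unit
err = np.mean((X @ w - y) ** 2)             # average squared output error
```

A sinusoid satisfies a linear recurrence, so a linear buffer model fits it essentially exactly; a context (Elman-style) model would instead carry the history in a recurrent context layer rather than an explicit input buffer.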
  • Kazuo Asakawa, Nobuo Watanabe, Akira Kawamura, Ryusuke Masuoka, Jun-ic ...
    1990 Volume 110 Issue 3 Pages 141-147
    Published: March 20, 1990
    Released: December 19, 2008
This paper presents the learning process of multi-layered neural networks and a fast learning algorithm for them. The ability of hidden units was tested in the learning of high-order nonlinear functions, and the number of hidden units was also optimized during learning.
The pseudo-impedance method was proposed as a fast learning algorithm by analogy with mechanical impedance control. In the pseudo-impedance method, learning parameters are determined by the neural network's virtual mass, damping coefficient, and stiffness. The usefulness of this method was verified by comparison with the error back-propagation method.
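The paper's pseudo-impedance algorithm is not reproduced here, but the virtual mass/damping analogy it names is reminiscent of gradient descent with momentum, which can be sketched as a discretized damped second-order system. Every name and constant below is an illustrative assumption, not the authors' method.

```python
import numpy as np

def impedance_step(w, v, grad, mass=1.0, damping=0.5, lr=0.1):
    """One step of m * dv/dt = -c * v - grad(E), discretized with unit time step.
    This is only the momentum-style analogy the abstract evokes, not the paper's rule."""
    v = v + (-damping * v - grad(w)) / mass   # velocity update
    w = w + lr * v                            # weight ("position") update
    return w, v

# Minimize a toy error surface E(w) = (w - 3)^2 starting from w = 0.
grad = lambda w: 2.0 * (w - 3.0)
w, v = 0.0, 0.0
for _ in range(200):
    w, v = impedance_step(w, v, grad)
```

The damping coefficient plays the role a momentum constant plays in ordinary accelerated gradient descent: too little and the weights oscillate, too much and convergence slows.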
  • Minoru Fukumi, Masafumi Hosokawa, Sigeru Omatu
    1990 Volume 110 Issue 3 Pages 148-155
    Published: March 20, 1990
    Released: December 19, 2008
In pattern recognition, we often deal with classification problems in which the retinal input patterns may be subject to translation, rotation, scale change, perspective change, etc. Usual pattern recognition methods based on statistical discriminant functions are not powerful enough to classify such patterns.
In this paper, a new approach to pattern recognition by a neural network is proposed, based on the concept of an invariance network and a descrambler network. The invariance network plays an important role in producing a set of outputs that are invariant to translation, rotation, etc., of the retinal input pattern. The descrambler network is used to classify the scrambled data into the original patterns using the back-propagation algorithm. The structure of the proposed neural networks is similar to that of Widrow et al., but sigmoid functions are adopted as the nonlinear elements, while Widrow's MR II is based on signum functions. Hence, the back-propagation method can be used as the learning algorithm of the descrambler network.
Finally, some numerical results are presented to show the effectiveness of the present algorithm for pattern recognition.
  • Yoichi Hayashi, Masateru Sakata, Takashi Nakao, Shingo Ohhashi
    1990 Volume 110 Issue 3 Pages 156-165
    Published: March 20, 1990
    Released: December 19, 2008
Various learning algorithms for neural networks have recently appeared and have stimulated applications of neural networks in new problem areas. This paper presents new results obtained by applying the distributed method proposed by Gallant to problems in alphanumeric character recognition. This method uses a distributed neural network with the Pocket Algorithm, a modification of perceptron learning. The authors evaluate the recognition (classification) capability of the distributed method for 62 and 93 alphanumeric characters of a single font having different kinds of typeface quality, and for 76 alphanumeric characters of multiple fonts (12 types) having the same typeface quality. Furthermore, we propose a useful technique to distinguish between closely resembling characters by using structural information about the characters, and we propose an activation criterion for output cells in character recognition.
In the recognition of alphanumeric characters having different kinds of typeface quality, a markedly high recognition accuracy (99.96% maximum, 99.74% on average) for individual font quality was attained. In the 76-character recognition of multiple fonts, a very high recognition rate (99.64% maximum) was also achieved. The relations among the recognition rate, the number of items of training data, the number of intermediate cells, and the number of training iterations are also considered.
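Gallant's Pocket Algorithm itself is standard and can be sketched: run ordinary perceptron updates, but keep "in the pocket" the weight vector that has classified the most training samples correctly so far. The toy data and parameters below are invented for illustration.

```python
import numpy as np

def pocket(X, y, epochs=50, seed=0):
    """Pocket Algorithm: perceptron learning that remembers the best weights seen."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    pocket_w, pocket_score = w.copy(), 0
    for _ in range(epochs):
        i = rng.integers(len(X))
        if np.sign(X[i] @ w) != y[i]:          # perceptron update on a mistake
            w = w + y[i] * X[i]
        score = np.sum(np.sign(X @ w) == y)    # samples now classified correctly
        if score > pocket_score:               # better than the pocketed weights?
            pocket_w, pocket_score = w.copy(), score
    return pocket_w

# Linearly separable toy data; the first column is a bias input.
X = np.array([[1, 2, 2], [1, 1, 3], [1, -2, -1], [1, -1, -3]], float)
y = np.array([1, 1, -1, -1])
w = pocket(X, y)
acc = np.mean(np.sign(X @ w) == y)
```

On non-separable data, where plain perceptron learning cycles forever, the pocketed vector still converges toward the best achievable training accuracy, which is why the method suits noisy character data.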
  • Yutaka Fukuoka, Eiji Okada, Yoshitaka Nakajyo, Haruyuki Minamitani
    1990 Volume 110 Issue 3 Pages 166-172
    Published: March 20, 1990
    Released: December 19, 2008
This study proposes a new signal processing system using a neural network model for laser Doppler velocimetry (LDV). The proposed system is based on spectrum analysis using the FFT, and its basic structure follows that of the burst spectrum analyzer (BSA). Though the BSA is a useful signal processor for LDV, the accuracy of velocity measurement decreases at low S/N ratios, so a neural network model was built into the BSA system to increase measurement accuracy. The model learns the variance of the Doppler frequency related to velocity in order to predict the succeeding Doppler frequency. Thereafter, the post-processor calculates a weight function in the spectral domain and computes the Doppler frequency from the weighted spectrum.
Actual velocity measurement on a rotating acrylic disk was carried out, and the results showed the usefulness of the system: it increased the accuracy of velocity measurement.
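The FFT stage a burst spectrum analyzer performs can be sketched: the Doppler frequency is estimated as the peak of the power spectrum of a noisy burst. The sampling rate, burst frequency, and noise level below are illustrative assumptions, not values from the paper.

```python
import numpy as np

fs = 10_000.0                     # sampling rate in Hz (illustrative value)
f_doppler = 1_200.0               # true Doppler frequency of the simulated burst
t = np.arange(1024) / fs
rng = np.random.default_rng(1)
burst = np.cos(2 * np.pi * f_doppler * t) + 0.3 * rng.normal(size=t.size)

# One-sided power spectrum; the Doppler frequency sits at its peak.
spectrum = np.abs(np.fft.rfft(burst)) ** 2
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
f_est = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
```

At low S/N the spectral peak becomes ambiguous, which is the failure mode the paper's neural predictor and spectral weight function are meant to compensate for.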
  • Shouichi Mutoh, Junichi Yoshizawa, Satoshi Matsuda, Shigeru Tamura, Iw ...
    1990 Volume 110 Issue 3 Pages 173-181
    Published: March 20, 1990
    Released: December 19, 2008
Experts who know the general causal relations between input and behavior in a physical system can estimate the behavior for unexperienced inputs after learning only a few samples of the relation. Applying a framework of hypothesis-based reasoning to this expert learning, this paper proposes a learning method based on that framework. Power system stability evaluation is considered as an example of such a physical system. In the proposed method, inference is made from
    (1) Facts (results of simulations),
(2) Hypotheses created through comparison with the facts,
    (3) Causal relation about power system stability,
    (4) Conflict between the hypotheses and the causal relation.
In addition, an accuracy factor for each hypothesis is used for hypothesis selection and conflict elimination.
The learning system was developed, and the learning results are illustrated. The results demonstrate the usefulness of the proposed learning method.
  • Yasuo Sugai, Seiichi Koakutsu, Hironori Hirata
    1990 Volume 110 Issue 3 Pages 182-190
    Published: March 20, 1990
    Released: December 19, 2008
We propose a new method based on competitive learning for placement in VLSI layout design. In the placement problem, a combinatorial optimization problem, cells of various shapes and sizes are to be placed compactly so as to minimize the routing length of signal nets. It is difficult both to construct a cost function and to determine the values of the parameters contained in it.
Recently, applications of Hopfield networks to combinatorial optimization problems have been reported. In such cases, however, a cost function is requisite. Using learning schemes makes it possible to avoid the need for a cost function.
Competitive learning is an unsupervised learning paradigm in the framework of neural networks. It provides a way to discover salient features which can be used to classify a set of patterns.
By mapping net data to input patterns appropriately, it is possible to minimize wiring and area without any cost function, because they are evaluated automatically during the learning process. Numerical experiments show that the proposed method can produce an optimal placement in a relatively small amount of computational time.
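Plain winner-take-all competitive learning, the building block this abstract relies on, can be sketched as follows. The clustering toy below is generic and does not reproduce the paper's encoding of net data as input patterns.

```python
import numpy as np

rng = np.random.default_rng(2)
# Two clusters of input patterns for the units to discover.
centers = np.array([[0.0, 0.0], [5.0, 5.0]])
X = np.concatenate([c + 0.3 * rng.normal(size=(100, 2)) for c in centers])
rng.shuffle(X)

# Two competing units, initialized on randomly chosen data samples.
W = X[rng.choice(len(X), size=2, replace=False)].copy()
lr = 0.1
for _ in range(20):                                      # training epochs
    for x in X:
        win = np.argmin(np.linalg.norm(W - x, axis=1))   # competition: nearest unit
        W[win] += lr * (x - W[win])                      # move only the winner toward x
```

Initializing the units on data samples is a common way to avoid "dead" units that never win a competition; note that no cost function appears anywhere, only the local winner-update rule.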
  • Minoru Fukumi, Sigeru Omatu
    1990 Volume 110 Issue 3 Pages 191-197
    Published: March 20, 1990
    Released: December 19, 2008
In this paper we propose a new neuron model, CONE (Coupled neuron), and a new algorithm for training networks of CONEs. The CONE possesses two functions, the “sigmoid” and the “signum” function, mapping an input pattern into an output. Such networks can be trained by the familiar back-propagation (BP) technique, since the CONE uses the differentiable sigmoid function for trainability.
The BP and MR II algorithms have been developed to date, but both have advantages and disadvantages. The CONE proposed in this paper takes advantage of the key ideas of both, namely the systematic treatment of networks by gradient descent and the quick convergence of the best-fit search in MR II. The learning algorithm is called CNR, or the Coupled Neuron Rule. The CONE appears to be an effective processing element for image compression, speech coding, etc., because it possesses the signum function providing 2-level outputs and can be expanded to 4-level, 8-level, etc.
Applying the CNR to simple networks of CONEs shows that its speed of convergence is much faster than that of the popular BP method when a variable learning rate is used. Simulation results illustrate the effectiveness of the CNR algorithm.
  • Yoichi Hayashi, Masato Nakai
    1990 Volume 110 Issue 3 Pages 198-206
    Published: March 20, 1990
    Released: December 19, 2008
Neural network models and the automatic generation of expert systems based on learning processes are attracting growing interest as useful tools for mainstream tasks in artificial intelligence. Neural networks embody the information derived from the training data (examples) and are implicitly assumed to contain the IF-THEN rules and/or knowledge base used for expert systems.
This paper proposes a method to automatically extract fuzzy IF-THEN rules with the “linguistic relative importance” of each proposition in an antecedent, using a feed-forward neural network. The linguistic relative importance, defined by a fuzzy set, represents the degree of effect of each proposition on the consequent. By providing linguistic relative importance for each proposition, each fuzzy IF-THEN rule has a more flexible expression than ordinary IF-THEN rules. Furthermore, the truthfulness of each fuzzy IF-THEN rule is given in the form of a linguistic truth value, also defined by a fuzzy set. The enhanced knowledge representation capability and flexibility of fuzzy IF-THEN rules with linguistic relative importance facilitate the automated extraction of IF-THEN rules from neural networks.
First, we give an algorithm to select propositions in an antecedent (IF-part), that is, to extract the framework of the fuzzy IF-THEN rules. Second, we show a method to assign truthfulness to the extracted fuzzy IF-THEN rules. Furthermore, we propose a method to determine the linguistic relative importance of each proposition in an antecedent. To demonstrate the validity of the proposed method, an illustrative example is solved.
  • Toru Yamaguchi, Naoki Imasaki, Kazuhito Haruki
    1990 Volume 110 Issue 3 Pages 207-215
    Published: March 20, 1990
    Released: December 19, 2008
In this paper, a reasoning and learning method for fuzzy rules which employs associative memories is presented. Fuzzy rules are described by neural networks such that the propositions and the IF-THEN relations in the rules are stored in a layered neural network and in a bidirectionally connected neural network implementing an associative memory, respectively. Fuzzy reasoning is performed by the dynamical changes in the associative memory. A learning function is provided to add and correct fuzzy rules. An application of this method to performance estimation for elevator group control systems is reported.
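A bidirectionally connected associative memory of the kind mentioned here is classically realized as a bidirectional associative memory (BAM): pattern pairs are stored in an outer-product weight matrix, and recall bounces between the two layers until it settles. The sketch below is the generic textbook construction, not the paper's fuzzy-rule network.

```python
import numpy as np

# Bipolar pattern pairs to associate across the two layers.
A = np.array([[1, -1, 1, -1], [1, 1, -1, -1]])    # layer-1 patterns
B = np.array([[1, 1, -1], [-1, 1, 1]])            # paired layer-2 patterns

# Hebbian outer-product storage: one matrix holds every pair.
W = sum(np.outer(a, b) for a, b in zip(A, B))

def recall(a, steps=5):
    """Bounce activations between the layers until the pair stabilizes."""
    for _ in range(steps):
        b = np.sign(a @ W)        # layer 1 -> layer 2
        a = np.sign(W @ b)        # layer 2 -> layer 1
    return a, b

a_out, b_out = recall(np.array([1, -1, 1, -1]))
```

Presenting a stored layer-1 pattern retrieves its paired layer-2 pattern, mirroring how an antecedent activation can retrieve a consequent through the bidirectional connections.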