In this paper, we consider three-layer neural networks in which each unit is itself allowed to be a neural network whose learning has already been completed. (In the following, such networks are referred to as composite neural networks.) Existing neural networks are constructed for a single purpose, so they cannot be used for more complicated information processing without being retrained from scratch; composite neural networks instead reuse these trained networks as resources. In composite neural networks, the constituent networks are connected by new synapses, and only these weights are updated through learning. This paper presents learning algorithms for composite neural networks with and without feedback in the middle layer. Two numerical examples are given, in the fields of logic circuits and alphabet recognition. Methods of interpolation learning, which is important for composite neural networks, are also given for each example.
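The core idea, freezing the pretrained "unit" networks and updating only the new connecting synapses, can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the random frozen unit weights, the gradient-descent update, and the XOR task are assumptions chosen for the example.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Two "unit" networks with learning already completed: their weights
# are frozen and are never updated below.
W_unit1 = rng.normal(size=(2, 3))
W_unit2 = rng.normal(size=(2, 3))

def units(x):
    # Concatenated outputs of the two frozen unit networks.
    return np.concatenate([sigmoid(x @ W_unit1), sigmoid(x @ W_unit2)])

# Only the new synapses connecting the units to the output are trainable.
w = rng.normal(size=6) * 0.1

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])  # XOR target (a logic-circuit example)

def loss():
    preds = np.array([sigmoid(units(x) @ w) for x in X])
    return float(np.mean((preds - y) ** 2))

loss_before = loss()
lr = 0.5
for _ in range(2000):
    for x, t in zip(X, y):
        h = units(x)
        p = sigmoid(h @ w)
        # Gradient descent on the connecting weights only; the unit
        # networks stay fixed.
        w -= lr * (p - t) * p * (1 - p) * h
loss_after = loss()
```

Because only the six connecting weights are adjusted, training touches a far smaller parameter set than relearning the whole network from scratch.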
This paper presents a new adaptive active filter for compensating harmonic voltages in electric power systems. To improve the control speed, the compensator provides a high-precision, high-speed compensation signal for eliminating the harmonic components. A new design method is proposed and various experimental results are presented. The method is based on a new principle that uses a high-speed DSP for the detection and prediction of the harmonic voltages. The adaptive compensation and the FFT processing are performed in real time. The compensator responds well to instantaneous load variations in the power system, so it is possible to compensate harmonics such as those generated by flicker loads. The control circuit is completely digitized using a DSP, and interference is suppressed with a high degree of precision. It therefore becomes possible to implement an economical, high-precision adaptive active filter for electric power systems.
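The FFT-based detection step can be illustrated in a few lines: over a window of exactly one fundamental cycle, the harmonic bins are separated from the fundamental, and their negation forms the compensation signal. The 50 Hz fundamental, the sampling rate, and the 5th/7th harmonic amplitudes below are assumptions for the example, not values from the paper.

```python
import numpy as np

f0 = 50.0    # fundamental frequency (Hz), assumed
fs = 6400.0  # sampling rate: one fundamental cycle = 128 samples
n = 128
t = np.arange(n) / fs

# Distorted voltage: fundamental plus 5th and 7th harmonics.
v = (np.sin(2 * np.pi * f0 * t)
     + 0.20 * np.sin(2 * np.pi * 5 * f0 * t)
     + 0.14 * np.sin(2 * np.pi * 7 * f0 * t))

# FFT-based harmonic detection: zero the dc and fundamental bins so
# that only the harmonic content remains, then invert it to obtain
# the compensation signal.
V = np.fft.rfft(v)
H = V.copy()
H[0] = 0.0  # dc
H[1] = 0.0  # fundamental (bin 1, since the window is one full cycle)
harmonics = np.fft.irfft(H, n)
compensation = -harmonics

residual = v + compensation  # waveform remaining after compensation
```

With the window synchronized to the fundamental period, each harmonic falls exactly on one FFT bin, so the residual is the pure fundamental; in the real-time system this detection would run continuously on the DSP.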
For linear, stable, time-independent parabolic distributed-parameter systems, we propose a static compensation scheme within the Galerkin method to obtain reduced-order lumped models whose frequency responses are consistent with that of the original system over a wider range. We show that a scheme for boundary inputs can be generalized to one for distributed inputs in terms of static compensation, and that a unified approach can be developed which is practically applicable to various types of input. The relationship between the proposed and conventional models is clarified by using the modal structure of the approximation model in the case where the eigenfunctions are employed as the coordinate functions. A numerical study shows that the proposed scheme can be applied to the problems in a unified way and yields more efficient models for systems with distributed as well as boundary inputs.
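The flavor of static compensation in a modal Galerkin model can be sketched on a standard example. The 1-D heat equation, the particular input/output shapes, and the way the neglected modes are retained through their dc gains are assumptions for illustration; this is not necessarily the paper's exact scheme.

```python
import numpy as np

# 1-D heat equation on (0,1) with Dirichlet BCs, distributed input
# b(x) = 1 and output weight c(x) = 1 (illustrative choices).
# Modal expansion with phi_k = sqrt(2) sin(k pi x):
#   G(s) = sum_k g_k / (s + lam_k),  lam_k = (k pi)^2,
#   g_k = (c, phi_k)(b, phi_k).
def gains(kmax):
    k = np.arange(1, kmax + 1)
    lam = (k * np.pi) ** 2
    ip = np.sqrt(2) * (1 - np.cos(k * np.pi)) / (k * np.pi)  # (1, phi_k)
    return lam, ip * ip

def G(s, kmax=2000):
    # "Full" model: long but truncated series standing in for G(s).
    lam, g = gains(kmax)
    return np.sum(g / (s + lam))

N = 3  # order of the reduced lumped model

def G_trunc(s):
    # Conventional modal truncation: keep the first N modes only.
    lam, g = gains(N)
    return np.sum(g / (s + lam))

# Static compensation: retain the neglected modes through their
# static (dc) contributions, lumped into a constant feedthrough term.
lam_all, g_all = gains(2000)
D = np.sum(g_all[N:] / lam_all[N:])

def G_comp(s):
    return G_trunc(s) + D

err_trunc = abs(G(0.0) - G_trunc(0.0))
err_comp = abs(G(0.0) - G_comp(0.0))
```

The compensated model reproduces the dc gain of the full model exactly, which is why its frequency response stays consistent with the original over a wider band than plain truncation of the same order.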
When a continuous-time system is sampled with a zero-order hold, all stable poles are mapped to points inside the unit circle. However, there is no such simple relation between the zeros of a continuous-time system and those of its sampled version. In this paper, a necessary and sufficient condition for the zeros of a sampled system to be stable is presented when the continuous-time system has a strictly proper rational transfer function. The criterion derived in this paper is expressed in terms of the coefficients of the continuous-time transfer function and the sampling period. Further, this paper gives a necessary and sufficient condition ensuring that all zeros of the sampled system lie inside the unit circle for all sampling periods.
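The contrast between the pole mapping and the zero behavior is easy to see numerically. The plant and sampling period below are assumptions chosen for illustration: a relative-degree-two system with no finite continuous-time zeros acquires a "sampling zero" under zero-order-hold discretization.

```python
import numpy as np
from scipy.signal import cont2discrete, tf2zpk

dt = 0.1
# Continuous-time plant: G(s) = 1 / (s^2 + 3s + 2), poles at -1 and -2,
# no finite zeros (relative degree 2). Illustrative example.
num, den = [1.0], [1.0, 3.0, 2.0]

# Zero-order-hold discretization.
numd, dend, _ = cont2discrete((num, den), dt, method='zoh')
zd, pd, _ = tf2zpk(numd[0], dend)

# The poles obey the simple mapping z = exp(s * dt) ...
expected_poles = np.exp(np.array([-1.0, -2.0]) * dt)
# ... but the sampled system gains a zero near z = -1 that has no
# continuous-time counterpart, illustrating that the zeros follow no
# comparably simple rule.
```

For this plant and sampling period the sampling zero lies just inside the unit circle, but for other systems or sampling periods such zeros can cross it, which is exactly what the paper's coefficient-based criterion characterizes.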
This paper proposes a framework for implementing expert systems based on a connectionist model, referred to as the neuro-expert. The conventional approach in this area is connectionist symbol processing, which uses distributed and recursive network models for knowledge representation. However, those models have difficulty in devising effective ways of representing complex knowledge structures. Our approach uses structured multi-layered models to represent factual knowledge and rules. We propose a methodology for transforming rules given in disjunctive normal form: each rule is transformed into conjunctive normal form, and each transformed rule constitutes a training example. The network architecture of the neuro-expert is predetermined from the structure of the transformed training examples, and consists of several two-layer network modules. Each network module is trained by a new learning algorithm, flash learning, which requires only a single presentation of the training set. We show that the neuro-expert with structured multi-layered networks has the ability to represent complex knowledge structures with a simple inference mechanism. We also present a case study examining the capability of the neuro-expert.
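The general idea of compiling a rule into a network module in a single pass can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's flash-learning algorithm: each conjunctive rule is turned directly into a threshold unit, with weights and threshold set from the rule's literals in one presentation.

```python
# Minimal sketch (assumed encoding, not the paper's flash learning):
# each conjunctive rule becomes one threshold unit whose weights are
# set directly from the rule's literals in a single pass.

def compile_rule(literals, vocab):
    """Build (weights, threshold) for one conjunctive rule.

    literals: dict mapping symbol -> True (must hold) / False (must not).
    """
    w = [0.0] * len(vocab)
    for sym, positive in literals.items():
        w[vocab.index(sym)] = 1.0 if positive else -1.0
    # The unit fires only when every positive literal is present and
    # no negative literal is.
    theta = sum(1.0 for positive in literals.values() if positive)
    return w, theta

def fires(w, theta, facts, vocab):
    # Inference: a simple weighted-sum threshold test over the facts.
    x = [1.0 if s in facts else 0.0 for s in vocab]
    return sum(wi * xi for wi, xi in zip(w, x)) >= theta

vocab = ["bird", "penguin", "flies"]
# Example rule: bird AND NOT penguin -> flies
w, theta = compile_rule({"bird": True, "penguin": False}, vocab)
```

Because the weights are written in one pass rather than iterated to convergence, training cost scales with the number of rules, which is the appeal of a single-presentation scheme for rule-derived training examples.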