IEEJ Transactions on Electronics, Information and Systems
Online ISSN : 1348-8155
Print ISSN : 0385-4221
ISSN-L : 0385-4221
<Softcomputing, Learning>
An Efficient Block-Based Neural Network Model Modifying Calculation Procedures of Outputs
Mikiya Yoshida, Seiichi Koakutsu, Takashi Okamoto

2017 Volume 137 Issue 9 Pages 1279-1285

Abstract

In recent years, evolvable hardware (EHW), which can adapt to new and unknown environments, has attracted much attention among hardware designers. EHW is reconfigurable hardware implemented by combining reconfigurable devices such as FPGAs (Field Programmable Gate Arrays) with evolutionary computation methods such as Genetic Algorithms (GAs). As one line of EHW research, Block-Based Neural Networks (BBNNs) have been proposed. A BBNN has a simplified network structure, a two-dimensional array of basic blocks, whose weights and network structure can be optimized simultaneously using GAs. The Smart Block-based Neuron (SBbN) has also been proposed as a hardware implementation model of basic blocks, which have four different internal configurations. SBbN holds a sufficient number of weights to implement all four types of basic blocks. However, SBbN must constantly hold weights that are unnecessary for some block types, and thus consumes redundant hardware resources. In this paper, we propose a new BBNN model that uses all weights in SBbN efficiently by modifying the calculation procedures of the outputs of basic blocks, thereby eliminating the resource redundancy of SBbN. Because all weights in SBbN are used efficiently, the proposed model can reduce the required number of basic blocks in a BBNN. To evaluate the proposed model, we apply it to the XOR problem and Fisher's iris classification. The results of computational experiments indicate the validity of the proposed model.
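
For readers unfamiliar with the block model the abstract refers to, the following is a minimal Python sketch of one basic block and its output calculation. The configuration names, the fixed 3×3 weight store, and the sigmoid activation are illustrative assumptions rather than the paper's SBbN design; the sketch only shows how a single shared weight store can serve all four configuration types, with some weights left idle in the smaller configurations, which is the redundancy the proposed model removes.

import numpy as np

def sigmoid(x):
    """Logistic activation, a common choice for BBNN nodes (assumed here)."""
    return 1.0 / (1.0 + np.exp(-x))

# The four internal configuration types of a basic block. Each block has
# four terminals; the type decides which act as inputs and which as outputs.
# Names are hypothetical labels for this sketch.
CONFIGS = {
    "1in_3out":   (1, 3),
    "3in_1out":   (3, 1),
    "2in_2out_a": (2, 2),
    "2in_2out_b": (2, 2),
}

class BasicBlock:
    """One basic block of a Block-Based Neural Network (illustrative).

    The weight store is sized for the largest configuration so that, as in
    SBbN, any of the four types can be realized from the same storage; in
    the smaller configurations part of the store simply stays unused.
    """

    def __init__(self, config, rng):
        self.n_in, self.n_out = CONFIGS[config]
        self.W = rng.normal(size=(3, 3))  # full store; layout is assumed
        self.b = rng.normal(size=3)       # one bias per possible output

    def forward(self, u):
        """Block outputs: y_j = sigmoid(b_j + sum_i w_ij * u_i)."""
        z = self.b[:self.n_out] + u[:self.n_in] @ self.W[:self.n_in, :self.n_out]
        return sigmoid(z)

rng = np.random.default_rng(0)
block = BasicBlock("2in_2out_a", rng)      # uses only a 2x2 corner of W
print(block.forward(np.array([0.5, -1.0])))

In this sketch a "2in_2out" block touches only 4 of the 9 stored weights; the paper's proposal modifies the output calculation so that no part of the shared store is wasted, which in turn allows fewer blocks overall.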

© 2017 by the Institute of Electrical Engineers of Japan