IEEJ Transactions on Electronics, Information and Systems (電気学会論文誌C)
Online ISSN : 1348-8155
Print ISSN : 0385-4221
ISSN-L : 0385-4221
<Soft Computing, Learning>
Improvement of the Basic Block Implementation in Block-Based Neural Networks
吉田 樹弥, 小圷 成一, 岡本 卓

2017, Vol. 137, No. 9, pp. 1279-1285

Abstract

In recent years, evolvable hardware (EHW), which can adapt to new and unknown environments, has attracted much attention among hardware designers. EHW is reconfigurable hardware that can be implemented by combining reconfigurable devices, such as FPGAs (Field Programmable Gate Arrays), with evolutionary computation, such as genetic algorithms (GAs). As one line of EHW research, Block-Based Neural Networks (BBNNs) have been proposed. BBNNs have a simplified network structure, a two-dimensional array of basic blocks, and their weights and network structure can be optimized simultaneously using GAs. The Smart Block-based Neuron (SBbN) has also been proposed as a hardware implementation model of the basic blocks, which have four different internal configurations. SBbN preserves a sufficient number of weights to implement all four types of basic blocks. However, SBbN must constantly preserve weights that are unnecessary for some types of basic blocks, and thus consumes redundant hardware resources. In this paper, we propose a new model of BBNNs in which all weights in SBbN are used efficiently by modifying the calculation procedure for the outputs of basic blocks, in order to eliminate this resource redundancy. In the proposed model, the required number of basic blocks in a BBNN can be reduced because all weights in SBbN are used efficiently. To evaluate the proposed model, we apply it to the XOR problem and Fisher's iris classification. The results of computational experiments indicate the validity of the proposed model.
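The abstract describes basic blocks whose four terminals can be configured as inputs or outputs, with a full weight set preserved (as in SBbN) so that any configuration can be realized. The following is a minimal software sketch of that idea; the function names, weight layout, and choice of a tanh activation are illustrative assumptions, not the paper's actual hardware implementation.

```python
import math

def block_output(weights, biases, inputs, in_idx, out_idx):
    """Compute the outputs of one basic block.

    weights : 4x4 matrix -- the full weight set an SBbN-style block
              preserves so that any of the four internal
              configurations can be selected without reloading
    biases  : length-4 list, one bias per terminal
    inputs  : values applied to the input terminals
    in_idx  : indices of terminals acting as inputs
    out_idx : indices of terminals acting as outputs
    """
    outputs = []
    for o in out_idx:
        s = biases[o]
        for i, x in zip(in_idx, inputs):
            s += weights[o][i] * x
        outputs.append(math.tanh(s))  # activation (assumed here)
    return outputs

# Example: a 2-input/2-output configuration
# (terminals 0 and 1 as inputs, terminals 2 and 3 as outputs).
W = [[0.0] * 4 for _ in range(4)]
W[2][0], W[2][1] = 0.5, -0.3
W[3][0], W[3][1] = 0.2, 0.8
b = [0.0, 0.0, 0.1, -0.1]
print(block_output(W, b, [1.0, 0.5], in_idx=[0, 1], out_idx=[2, 3]))
```

Selecting a different configuration only changes `in_idx`/`out_idx`; the unused entries of `W` are the redundant resources the proposed model aims to put to use.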

© 2017 The Institute of Electrical Engineers of Japan (電気学会)