This paper is a progressive study building on our previous papers presented at IMPACT 2015 and ICEP 2018, and it evaluates the effectiveness and applicability of the MLCS (Memory Logic Conjugated System) for simple deep learning processing. NVIDIA, Google, Fujitsu, Intel-Altera, Intel-Nervana, and Renesas have recently announced that 8-bit processing can provide efficient and flexible AI computation, particularly in deep learning. This paper discusses an actual MLCS circuit implemented on a commercial FPGA for deep learning and evaluates the circuit with the perceptron method. In the MLCS architecture, deep learning computations can be performed as memory operations. The architecture achieves high I/O bandwidth and low power consumption through its dynamic reconfiguration functionality, high-speed connections between logic and memory cells, and low implementation cost.
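To make the 8-bit perceptron workload concrete, the following is a minimal sketch in C of the kind of fixed-point multiply-accumulate computation that such an architecture would map onto memory operations. It is not the paper's circuit or code; the function names, the Q1.7 fixed-point format, and the step activation are illustrative assumptions.

```c
/*
 * Sketch only: an 8-bit fixed-point perceptron forward pass, the class of
 * multiply-accumulate workload discussed for 8-bit deep learning inference.
 * Names and the Q1.7 scaling are assumptions, not taken from the paper.
 */
#include <stdint.h>
#include <stdio.h>

#define N_INPUTS 4

/* Saturate a 32-bit accumulator into the signed 8-bit range. */
static int8_t saturate8(int32_t v)
{
    if (v > INT8_MAX) return INT8_MAX;
    if (v < INT8_MIN) return INT8_MIN;
    return (int8_t)v;
}

/* 8-bit perceptron: y = step(sum(w_i * x_i) + b), all operands 8 bits. */
static int8_t perceptron8(const int8_t w[N_INPUTS],
                          const int8_t x[N_INPUTS],
                          int8_t bias)
{
    int32_t acc = (int32_t)bias << 7;   /* bias scaled to product domain */
    for (int i = 0; i < N_INPUTS; i++)
        acc += (int32_t)w[i] * x[i];    /* 8x8-bit products accumulated */
    int8_t y = saturate8(acc >> 7);     /* rescale back to 8 bits */
    return (y > 0) ? 1 : 0;             /* step activation */
}

int main(void)
{
    /* Example weights/inputs in Q1.7: 0x40 = 0.5, 0x20 = 0.25 */
    const int8_t w[N_INPUTS] = { 0x40, 0x20, -0x20, 0x10 };
    const int8_t x[N_INPUTS] = { 0x40, 0x40, 0x10, 0x00 };
    printf("perceptron output: %d\n", perceptron8(w, x, 0x08));
    return 0;
}
```

In the MLCS setting, the weight array in this sketch would reside in memory cells directly coupled to logic, so each accumulation step corresponds to a memory access rather than a separate load into an external arithmetic unit.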