Computing-in-memory (CIM) devices have attracted attention for their high operational efficiency in edge AI, which requires low-power operation. This paper proposes a digital circuit architecture that controls inference and learning for CIM devices such as the RAND chip, which exploits the non-linearity of ReRAM cells as memory elements. The RAND chip serves as the CIM device during inference and as external memory during training. In an XOR identification test, the system achieves the same convergence as a software implementation of the learning core. The proposed learning core achieved an efficiency of 7.77 GOPS/W, verifying the effectiveness of the proposed architecture for online learning on CIM devices.
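A software reference for an XOR identification test like the one above can be sketched as a tiny network trained by gradient descent. This is purely an illustrative assumption: the network size (2-8-1), activations, learning rate, and epoch count below are not taken from the paper.

```python
import numpy as np

# Toy software learning core for the XOR task (illustrative sizes only).
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0.0], [1.0], [1.0], [0.0]])  # XOR targets

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)           # hidden layer
    y = sigmoid(h @ W2 + b2)           # output layer
    losses.append(float(np.mean((y - t) ** 2)))
    # backpropagate the mean-squared error
    dy = (y - t) * y * (1 - y) / len(X)
    dh = (dy @ W2.T) * (1 - h ** 2)
    W2 -= 0.5 * h.T @ dy; b2 -= 0.5 * dy.sum(axis=0)
    W1 -= 0.5 * X.T @ dh; b1 -= 0.5 * dh.sum(axis=0)
```

Monitoring the loss trace gives a convergence curve against which a hardware learning core can be compared.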
An encoder-decoder model consists of an encoder that compresses the input into a low-dimensional latent variable and a decoder that reconstructs an output of the same dimension as the input from that latent variable. The encoder-decoder model performs representation learning, automatically extracting features of the data, but the model is a black box, and what features are extracted is not clear. We examined whether including a skip connection between the encoder and decoder increases accuracy. This skip connection is generally believed to convey high-resolution information, but its actual role remains unclear. In this study, we focus on this concatenation and experimentally clarify the role of the latent variables it conveys when the images given as input and output during training are the same or different.
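The concatenation discussed above can be sketched with toy NumPy operations: the decoder's upsampled features are concatenated channel-wise with the encoder's same-resolution features. This is a minimal shape-level illustration under assumed 4x4 single-channel inputs, not the study's actual network.

```python
import numpy as np

def encoder(x):
    # Toy "encoder": 2x2 average pooling halves each spatial dimension.
    return x.reshape(x.shape[0] // 2, 2, x.shape[1] // 2, 2).mean(axis=(1, 3))

def decoder(z, skip):
    # Toy "decoder": nearest-neighbour upsampling back to the input size.
    up = z.repeat(2, axis=0).repeat(2, axis=1)
    # Skip connection: concatenate encoder features with decoder features
    # along a channel axis, as in U-Net-style architectures.
    return np.stack([up, skip], axis=-1)

x = np.arange(16.0).reshape(4, 4)  # toy input image
z = encoder(x)                     # latent, shape (2, 2)
y = decoder(z, skip=x)             # shape (4, 4, 2): upsampled + skipped channels
```

The skipped channel carries full-resolution information that bypasses the low-dimensional latent, which is exactly the pathway whose role the study investigates.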
The Izhikevich neuron model can reproduce various types of neurons, including chaotic neurons, with appropriate parameter sets. This study analyzes the responses of a periodically forced Izhikevich neuron with chaotic parameters using three measures—the diversity index, the coefficient of variation, and the local variation—to quantify interspike intervals (ISIs). Evaluating ISIs with these three measures in combination clarifies differences in neuronal activity that no individual measure can. In addition, we analyzed the change in the stability of the equilibrium points caused by a periodic input on a phase plane. The results indicate that, for electrophysiologically feasible parameter sets, the stability of the equilibrium points plays a crucial role in determining the critical amplitude around which irregular activities transition to regular ones. Thus, the relationship between neural behavior and the period and amplitude of the input current is contingent upon the existence and stability of equilibrium points.
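Two of the three ISI measures have standard definitions: the coefficient of variation CV = std(T)/mean(T), and the local variation Lv = (3/(n-1)) Σ ((T_i - T_{i+1})/(T_i + T_{i+1}))², which is sensitive to local irregularity. A minimal NumPy sketch (the diversity index, being less standard, is omitted here):

```python
import numpy as np

def cv(isi):
    """Coefficient of variation of interspike intervals (global irregularity)."""
    isi = np.asarray(isi, dtype=float)
    return isi.std() / isi.mean()

def lv(isi):
    """Local variation of interspike intervals (local, stepwise irregularity)."""
    isi = np.asarray(isi, dtype=float)
    t1, t2 = isi[:-1], isi[1:]
    return 3.0 * np.mean(((t1 - t2) / (t1 + t2)) ** 2)

regular = np.ones(100)          # perfectly periodic spike train
print(cv(regular), lv(regular))  # both 0 for regular firing
```

Both measures vanish for perfectly periodic firing and approach 1 for Poisson-like firing, so jointly tracking them distinguishes globally irregular but locally regular spike trains from truly random ones.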
Time-series model inference can be divided into modeling and optimization. Sequential VAEs have been studied as a modeling technique. As an optimization technique, methods combining variational inference (VI) and sequential Monte Carlo (SMC) have been proposed; however, they have two drawbacks: limited particle diversity and biased gradient estimators. This paper proposes the Ensemble Kalman Variational Objective (EnKO), a VI framework based on the ensemble Kalman filter, to infer latent time-series models. Our proposed method learns time-series models efficiently because of its particle diversity and unbiased gradient estimators. We demonstrate that EnKO outperforms previous SMC-based VI methods in predictive ability on several synthetic and real-world data sets.
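The ensemble Kalman filter underlying this framework can be sketched by its stochastic analysis step: each particle is nudged toward a perturbed copy of the observation, which preserves ensemble spread (the particle-diversity property mentioned above). This is a generic textbook EnKF update in NumPy, not the paper's EnKO implementation; all variable names and sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(X, y, H, R):
    """Stochastic EnKF analysis step.
    X: (d, N) ensemble of N state particles in R^d
    y: (m,) observation; H: (m, d) observation operator
    R: (m, m) observation noise covariance."""
    d, N = X.shape
    A = X - X.mean(axis=1, keepdims=True)            # ensemble anomalies
    Pf = A @ A.T / (N - 1)                           # forecast covariance estimate
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)   # Kalman gain
    # Perturbed observations: each particle gets its own noisy copy of y,
    # so the updated ensemble keeps nonzero spread (particle diversity).
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return X + K @ (Y - H @ X)

X = rng.normal(size=(2, 100))   # prior ensemble: 100 particles in R^2
H = np.array([[1.0, 0.0]])      # observe the first coordinate only
R = np.array([[0.1]])
y = np.array([1.0])
Xa = enkf_update(X, y, H, R)    # posterior ensemble, same shape as X
```

Because every particle survives the update (no resampling step as in SMC), the ensemble avoids the particle-degeneracy problem that motivates EnKO.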