Microbiome data have become relatively easy to obtain in recent years, and various methods for analyzing them have been proposed. Latent Dirichlet allocation (LDA) models, frequently used to extract latent topics from words in documents, have also been applied to extract information on microbial communities from microbiome data. To extract microbiome topics associated with a subject's attributes, LDA models that utilize supervisory information, such as LDA with Dirichlet multinomial regression (the DMR topic model) or the supervised topic model (sLDA), can be applied. Further, Bayesian nonparametric models are often used to decide the number of latent classes in a latent variable model automatically; LDA can be extended to a Bayesian nonparametric model using the hierarchical Dirichlet process. Although a Bayesian nonparametric DMR topic model has previously been proposed, it uses a normalized gamma process to generate the topic distribution, and it is unknown whether the number of topics can be decided automatically from the data. Using a stick-breaking process to generate the topic distribution is expected to restrict the number of topics with relatively large proportions to a smaller value. We therefore propose a Bayesian nonparametric DMR topic model based on a stick-breaking process and compare it with existing models on two sets of real microbiome data. The results show that the proposed model extracts topics more strongly associated with a subject's attributes than existing methods do, and that it can automatically decide the number of topics from the data.
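The stick-breaking construction referred to above can be sketched as follows. This is a generic truncated stick-breaking (GEM) draw, not the paper's full model; the truncation level and concentration parameter used in the usage line are illustrative assumptions.

```python
import numpy as np

def stick_breaking(alpha, num_topics, rng=None):
    """Draw topic proportions via a truncated stick-breaking (GEM) process.

    Each Beta(1, alpha) draw breaks off a fraction of the remaining stick;
    smaller alpha concentrates mass on the first few topics, which is why
    the effective number of topics with large proportions stays small.
    """
    rng = np.random.default_rng(rng)
    betas = rng.beta(1.0, alpha, size=num_topics)
    # length of stick remaining before each break: 1, (1-b1), (1-b1)(1-b2), ...
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
    weights = betas * remaining
    # assign the leftover stick to the last atom so the truncation sums to 1
    weights[-1] += 1.0 - weights.sum()
    return weights

# Illustrative usage: 20-topic truncation, concentration alpha = 1.0
theta = stick_breaking(alpha=1.0, num_topics=20, rng=0)
```

With a small concentration parameter, most of the probability mass lands on the first few weights, which is the mechanism the abstract relies on to keep the number of high-proportion topics small.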
Advancements in technology have recently made it possible to obtain various types of biometric information from humans, enabling studies on estimating human conditions in medicine, automobile safety, marketing, and other areas. These studies have particularly pointed to eye movement as an effective indicator of human conditions, and research on its applications is actively being pursued. The devices now widely used for measuring eye movements are based on video-oculography (VOG), wherein the direction of gaze is estimated by processing eye images obtained through a camera. Applying convolutional neural networks (ConvNets) to eye-image processing has been shown to enable accurate and robust gaze estimation. Conventional image processing, however, is premised on execution on a personal computer, making real-time gaze estimation with a ConvNet, which involves a large number of parameters, difficult on a small arithmetic unit. Moreover, detecting eye-movement events, such as blinks and saccades, from the inferred gaze-direction sequence requires a separate algorithm. We therefore propose a new eye-image processing method that performs gaze estimation and event detection end to end in a single pass, using an independently designed lightweight ConvNet. This paper describes the structure of the proposed lightweight ConvNet and the training and evaluation methods used, and shows that the proposed method can simultaneously estimate gaze direction and detect event occurrence with less memory and lower computational complexity than conventional methods.
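The abstract does not specify how the proposed ConvNet is made lightweight; one common technique is the depthwise-separable convolution, and the parameter savings it offers can be illustrated with a quick count. The layer sizes below are arbitrary examples, not values from the paper.

```python
def conv_params(c_in, c_out, k):
    """Weights in a standard 2-D convolution: one k x k filter
    per (input channel, output channel) pair."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Weights in a depthwise-separable convolution: one k x k
    depthwise filter per input channel, then a 1x1 pointwise
    convolution that mixes channels."""
    return c_in * k * k + c_in * c_out

# Example layer: 64 input channels, 128 output channels, 3x3 kernels
standard = conv_params(64, 128, 3)                 # 73,728 weights
separable = depthwise_separable_params(64, 128, 3)  # 8,768 weights
ratio = standard / separable                        # roughly 8x fewer
```

Reductions of this order are what make real-time inference plausible on a small arithmetic unit, at some cost in representational capacity per layer.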