Word2Vec, a distributed representation method from natural language processing, is applied to purchase data to learn item vectors at low computational cost. We conduct an experiment with real POS data to examine how the window size, the vector dimension, and the format of the input purchase data affect item vector learning. The experimental results suggest that learned item vectors belonging to the same category are located close to each other in the feature space under the following conditions: the window size is as large as possible, the dimension is greater than 40, and the input data format is based on item variation.
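The "item variation" input format mentioned above can be illustrated with a minimal sketch: each receipt is converted into a sequence of distinct item names, ignoring purchase quantities, and each such sequence then serves as one "sentence" for Word2Vec training. The receipt data and function name below are hypothetical illustrations, not the authors' actual pipeline.

```python
# Minimal sketch (hypothetical data): converting POS receipts into
# "sentences" for Word2Vec using the item-variation format, where each
# distinct item appears once per receipt regardless of quantity.

def receipts_to_sentences(receipts):
    """Turn a list of receipts into Word2Vec training sentences.

    Each receipt is a list of (item_name, quantity) pairs; the
    item-variation format keeps one token per distinct item.
    """
    sentences = []
    for receipt in receipts:
        seen = []
        for item, _quantity in receipt:
            if item not in seen:  # ignore quantity; keep item variation only
                seen.append(item)
        sentences.append(seen)
    return sentences

receipts = [
    [("milk", 2), ("bread", 1), ("milk", 1)],
    [("bread", 3), ("butter", 1)],
]
print(receipts_to_sentences(receipts))
# → [['milk', 'bread'], ['bread', 'butter']]
```

These sentences could then be fed to an off-the-shelf Word2Vec implementation (for example gensim's `Word2Vec`, with `vector_size` above 40 and a window large enough that all items in a basket fall within each other's context), in line with the parameter conditions reported above.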
In recent years, education that enables students to acquire the strength to live in a rapidly changing society has been in demand. Learning motivation is indispensable for acquiring the abilities to think, decide, and solve problems. In addition, learning motivation is treated as one of the major subjects in Japanese education, according to international assessments of learning achievement such as PISA and TIMSS. In this study, a learning system with story and life functions is proposed to raise and maintain learners' motivation. The story function supplies stories as rewards for learning, and the life function restricts the opportunities to learn. The evaluation experiments indicate that these functions can be utilized to raise and maintain learning motivation.
In natural language processing, estimating the meaning of a sentence is an important issue. Various mathematical representation methods have been proposed for declarative sentences, such as syntactic information and distributed representations like word2vec. However, representation methods that take dialogue context into account have not been proposed. This paper proposes new neural networks that estimate the features of dialogue context as a distributed representation. We extend two kinds of conventional neural networks employed in word2vec to sentences, and combine them to generate the distributed representation. In the first network, a Long Short-Term Memory (LSTM), a type of recurrent neural network, is employed as an encoder and is trained to estimate the words (bag-of-words) contained in the utterances preceding and following an input utterance. In the second network, several LSTMs are employed as an encoder and are trained to estimate the words (bag-of-words) contained in the next utterance. The encoders of both networks are combined to construct a new network that generates the distributed representation for a new input utterance. We carried out two kinds of experiments to evaluate the proposed networks: dialogue breakdown detection and dialogue act prediction. In both experiments, the proposed network shows higher accuracy than conventional methods.
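The training targets described above can be pictured with a small sketch: for each utterance in a dialogue, the first encoder's target is a bag-of-words vector over the words of the previous and next utterances. The toy dialogue, vocabulary, and function names below are hypothetical; the actual model encodes the input utterance with an LSTM rather than the raw tokens shown here.

```python
# Minimal sketch (hypothetical data): constructing the bag-of-words
# targets used to train the first encoder, which predicts the words
# of the previous and next utterances of each input utterance.

def bow_vector(tokens, vocab):
    """Binary bag-of-words vector over a fixed vocabulary."""
    return [1 if word in tokens else 0 for word in vocab]

def context_bow_targets(dialogue, vocab):
    """For each utterance, the BoW of its neighbouring utterances."""
    targets = []
    for i, _utterance in enumerate(dialogue):
        context = []
        if i > 0:
            context += dialogue[i - 1]          # previous utterance
        if i < len(dialogue) - 1:
            context += dialogue[i + 1]          # next utterance
        targets.append(bow_vector(set(context), vocab))
    return targets

dialogue = [["hello"], ["hi", "there"], ["bye"]]
vocab = ["hello", "hi", "there", "bye"]
print(context_bow_targets(dialogue, vocab))
# → [[0, 1, 1, 0], [1, 0, 0, 1], [0, 1, 1, 0]]
```

An LSTM encoder trained against such targets learns to summarize an utterance in a way that is predictive of its conversational surroundings, which is what makes the resulting representation context-aware.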
If groups of visitors in public spaces and commercial facilities can be detected, information tailored to the attributes of each group can be provided, and statistics on facility usage can be supplied to the facility owners. Features such as person-to-person distance and gaze direction are useful for group detection and have been used in a number of works. However, when the scene is crowded or when people in a group act separately, these features do not work well. In this work, we focus on gestures, which indicate interaction between people, and propose a group detection method that uses gesture information. Experimental results on a dataset collected in an actual scene demonstrate that gesture information improves the accuracy of group detection, especially the recall rate.
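One way to picture how such features combine is a pairwise score: for each pair of people, proximity, gaze alignment, and observed gestures between them each contribute evidence that the two belong to the same group. The thresholds, field names, and scoring rule below are a hypothetical illustration under assumed inputs, not the authors' actual method.

```python
import math

# Hypothetical sketch: scoring whether two people belong to the same
# group from person-to-person distance, gaze direction, and whether a
# gesture (e.g. pointing or waving) is directed between them.

def same_group_score(p1, p2, gesture_between):
    """Each person is a dict with 'pos' (x, y in metres) and 'gaze' (radians)."""
    dx = p2["pos"][0] - p1["pos"][0]
    dy = p2["pos"][1] - p1["pos"][1]
    distance = math.hypot(dx, dy)
    # Gaze alignment: 1.0 when p1 looks directly toward p2.
    angle_to_p2 = math.atan2(dy, dx)
    gaze_align = math.cos(p1["gaze"] - angle_to_p2)
    score = 0.0
    score += 1.0 if distance < 1.5 else 0.0   # close proximity
    score += max(gaze_align, 0.0)             # looking toward the other
    score += 1.0 if gesture_between else 0.0  # interaction gesture observed
    return score

a = {"pos": (0.0, 0.0), "gaze": 0.0}
b = {"pos": (1.0, 0.0), "gaze": math.pi}
print(same_group_score(a, b, gesture_between=True))
# → 3.0
```

The gesture term is what keeps the score high even when people drift apart or the scene is crowded, which matches the abstract's observation that gesture information particularly improves recall.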