We experience being deeply moved in a variety of situations and frequently encounter the word "moved." Among such experiences, being moved by sound is reported to be common (Oide et al., 2006), and being moved by listening to music can therefore be considered an everyday phenomenon. Strong experiences while listening to music have been described as comprising seven categories: "General characteristics," "Physical reactions and behaviours," "Perception," "Cognition," "Feelings/Emotions," "Existential and transcendental aspects," and "Personal and social aspects" (Gabrielsson et al., 2003), but the relationships among these categories remain unclear. The present study therefore focused on two of them, emotion and physical reactions, and quantitatively examined their relationship in an experiment.
High-order human cognition involves the processing of abstract and categorically represented knowledge. Although it has conventionally been assumed that there is a single innate representation system in our mind, we view, on the basis of recent empirical and simulation studies, the representational system as a dynamic mechanism capable of selecting a representation scheme that meets situational characteristics. The present paper introduces a framework for a cognitive model that integrates robust and flexible internal representation machinery. Our modeling framework flexibly learns to adjust its internal knowledge representation scheme using a meta-heuristic optimization method. Three simulation studies were conducted. The results showed that SUPERSET, our new model, successfully exhibited cognitive behaviors consistent with three main theories of the human internal representation system. Furthermore, a simulation study on social cognitive behaviors showed that the model was capable of acquiring knowledge with high commonality, even for a category structure with numerous valid conceptualizations.
This study investigates how stimulus complexity influences human learning in a sequence learning (SL) task. A common approach to analyzing learning data is based on the learning curve (LC), which shows how the probability of a correct response at each step changes over the course of an SL task. Usually, a single logistic curve (LgC) provides a good approximation of LCs under various conditions. However, our results show that increasing stimulus complexity requires more than one LgC for an adequate approximation. Under high complexity, a single LgC no longer describes the learning dynamics, but only its general trend. We also extracted the additional functions needed to approximate the learning dynamics under high complexity. Furthermore, to approximate learning dynamics under higher complexity, the functions employed under the lower-complexity conditions must be combined with additional ones. We therefore hypothesize a hierarchical structure of the learning process related to stimulus complexity.
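The logistic-curve approximation of an LC described above can be sketched as follows. This is a minimal illustration with synthetic data, not the authors' actual analysis; the parameter names (L, k, t0) and the data values are assumptions.

```python
# Minimal sketch: fitting a single logistic curve (LgC) to a per-step
# probability-of-correct-response series, as in the LC analysis above.
# Data and parameter names are illustrative, not from the study.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, L, k, t0):
    """L: asymptotic accuracy, k: learning rate, t0: inflection step."""
    return L / (1.0 + np.exp(-k * (t - t0)))

steps = np.arange(30)
# Hypothetical learning curve: accuracy rising toward an asymptote of 0.95.
observed = logistic(steps, 0.95, 0.5, 10.0)

params, _ = curve_fit(logistic, steps, observed, p0=[1.0, 0.1, 15.0])
L_hat, k_hat, t0_hat = params
```

Under the hierarchical hypothesis above, high-complexity conditions would require a sum of several such components rather than a single `logistic` term.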
Previous studies have suggested that the left and right hands have different internal models for motor control. Thus, rather than one hand being superior to the other, each might have different specialties. If so, bimanual movements might be more accurately controlled because of the coordination of the two internal models. To investigate this possibility, we examined motor movement accuracy, reaction time, and movement time in pure right-handed subjects during a three-dimensional motor control task (visually guided reaching) under conditions of visual feedback and no visual feedback. In the no-visual-feedback condition, right-hand movement had lower accuracy and shorter reaction time than did left-hand movement, whereas bimanual movement had the longest reaction time, but the best accuracy. This suggests that the two hands have different internal models and specialties: closed-loop control for the right hand and open-loop control for the left hand. Thus, during bimanual movements, both models might be used, creating better control and planning (or prediction), but requiring more computation time compared to the use of only one hand.
We conducted an fMRI experiment using the task that Blanke et al. (2003) employed in their EEG study. To compare embodiment and disembodiment, we used fMRI to examine brain activity when the visual perspective was located inside the participant's own body (embodied condition) and when it was located outside it (disembodied condition). In the task, participants were presented with a line drawing of a human figure in which either the left or the right hand was shown in black. Participants imagined that the drawing was their own body, seen either directly (disembodied condition) or in a mirror (embodied condition), and judged whether the blackened hand corresponded to the left or right hand of their own body. In the disembodied condition, participants thus had to adopt an imagined viewpoint outside their own body. The analysis revealed that the disembodied condition elicited more pronounced activation in the extrastriate body area (EBA), which is thought to be involved in the visual processing of body images, and in the bilateral parietal lobes.
Nineteen undergraduates individually learned a list of 30 words subdivided into two distinct categories (birds and cities). Triads drawn from the 15 words within each category were visually presented against one of two background colors (BGCs), which appeared in random order. After a 30-s filled retention interval, the participants were presented with one of the BGCs shown at encoding and then orally free-recalled the words. There was no significant difference in recall between the condition in which the color presented at study and test was the same and the condition in which the colors at study and test differed. ARC scores were calculated to analyze clustering by screens and by BGCs. Clustering by colors was significant, but clustering by screens was not. The present results suggest that the semantic relationships among the words suppressed BGC context-dependent recall and enhanced clustering by colors but not by screens.
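The ARC (adjusted ratio of clustering) score mentioned above follows the standard formula of Roenker, Thompson, and Brown (1971): ARC = (R − E(R)) / (maxR − E(R)). The helper below is a minimal sketch of that formula, not the authors' analysis code; the category labels are illustrative.

```python
# Minimal sketch of the ARC score (Roenker, Thompson, & Brown, 1971):
# ARC = (R - E(R)) / (maxR - E(R)), where R is the number of observed
# same-category repetitions in the recall order, E(R) = sum(n_i^2)/N - 1
# is the chance expectation, and maxR = N - k is the maximum possible
# (N recalled items, k categories, n_i items in category i).
from collections import Counter

def arc_score(recall_categories):
    N = len(recall_categories)
    counts = Counter(recall_categories)
    k = len(counts)
    R = sum(1 for a, b in zip(recall_categories, recall_categories[1:])
            if a == b)
    expected = sum(n * n for n in counts.values()) / N - 1
    max_r = N - k
    return (R - expected) / (max_r - expected)

# A perfectly clustered recall order (all birds, then all cities)
# yields ARC = 1.0; chance-level clustering yields ARC near 0.
perfect = ["bird"] * 5 + ["city"] * 5
```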