Proceedings of the Japanese Society for Cognitive Psychology
The 12th Conference of the Japanese Society for Cognitive Psychology
Oral (English) session (Memory, Thinking & Language)
  • A Near-Infrared Spectroscopy Study
    Victor Alberto Palacios, Hirofumi Saito, Misato Oi, Shuang Meng, Ryoma ...
    Session ID: O1-1-1
    Published: 2014
    Released on J-STAGE: October 05, 2014
    CONFERENCE PROCEEDINGS FREE ACCESS
    In the present study we examined the neural correlates of incidental learning for three kinds of memory: outer speech (reading aloud), action (gesturing), and inner speech (silent reading). We hypothesized that action-based memory would leave a stronger memory trace than the speech- and silent-reading-based memories, because action appears to facilitate memory, and that the three encoding conditions would each activate distinct brain areas, such as the left middle and superior temporal regions. Participants accurately distinguished the sentences for which they had performed an action and accurately detected when a new sentence was presented. However, they did not accurately distinguish between inner and outer speech (i.e., reading aloud and reading silently). The results suggest that memory traces for action are stronger than those for the other kinds of memory; this implication remains to be confirmed by the NIRS data analysis.
    Download PDF (706K)
  • CHIHARU YAMADA, Yoshihiro Itaguchi, Kazuyoshi Fukuzawa
    Session ID: O1-1-2
    Published: 2014
    Released on J-STAGE: October 05, 2014
    CONFERENCE PROCEEDINGS FREE ACCESS
    When people write the same kana character repeatedly as fast as possible, they sometimes mistakenly write other characters that they do not intend to write. This type of error is known as a writing slip (Nihei, 1984; 1990). Previous studies have reported that certain characters induce writing slips. The present study investigated the relationship between target characters and mistakenly written characters in terms of kinematic and visual similarity. As a measure of kinematic similarity, we used similarity in trajectory and velocity; visual similarity was defined on the basis of questionnaire responses. In the experiment, participants most frequently slipped into writing 'o' when they intended to write 'a'. The visual similarity of this pair was very high, whereas its kinematic similarity was low. The result suggests that visual similarity affects the occurrence of writing slips to a much greater extent than kinematic similarity.
    Download PDF (705K)
  • Kwangoh Yi
    Session ID: O1-1-3
    Published: 2014
    Released on J-STAGE: October 05, 2014
    CONFERENCE PROCEEDINGS FREE ACCESS
    To evaluate the effectiveness of existing lexical decision tasks, including the conventional one, three different definitions of 'word' were inserted into the instructions for the experiment: participants had to judge whether a stimulus on the screen was a word, whether it had a meaning, or whether they had ever seen it. Although the typical frequency effects were significant under all three definitions, mean RT was longest for the conventional word/nonword decision: participants were faster in the meaning and familiarity decisions than in the conventional condition, in which 'word' was not further defined. The results suggest that the conventional instructions are not the fastest, at least for Korean, and that the best definition for word recognition studies may differ from language to language.
    Download PDF (458K)
  • An investigation using implicit/ explicit memory tasks
    Kozue Miyashiro, Satsuki Hirayama, Etsuko T. Harada
    Session ID: O1-1-4
    Published: 2014
    Released on J-STAGE: October 05, 2014
    CONFERENCE PROCEEDINGS FREE ACCESS
    A memory experiment, consisting of a learning phase and a test phase, manipulated the harmonization between word meaning and the voice quality of synthetic speech. An auditory word-fragment completion (WFC) task served as the implicit memory test, and a recognition test served as the explicit memory test. Three within-participant factors were manipulated: learning (with/without), harmonization between word meaning and voice quality (harmonized/non-harmonized), and voice quality across the learning and test phases (same/different = neutral). The WFC results showed that auditory perception of the neutral voice was better than that of the others; however, access to the word in the lexicon was facilitated when the voice-quality cue was the same, and only in the harmonized condition. The harmonization between word meaning and voice quality thus had different impacts on implicit and explicit memory, which may reflect different patterns of interaction between linguistic and non-linguistic information.
    Download PDF (779K)
  • A cross-cultural study.
    Hiroshi Yama
    Session ID: O1-1-5
    Published: 2014
    Released on J-STAGE: October 05, 2014
    CONFERENCE PROCEEDINGS FREE ACCESS
    According to propositional logic, the probability of the conditional 'if p then q' equals P(¬p∨q). However, many researchers have shown that people judge it to equal the conditional probability P(q|p). These data support the suppositional theory, which proposes that not-p cases are regarded as void, following de Finetti's truth table. We adopted the set-choice paradigm, in which participants are given a pair of urns containing chips and asked to choose the urn that gives the higher probability of a conditional 'if p then q' (the probability of the conditional); a conditional-bet task was set up in the same way. We gathered data from American and Japanese samples via the internet and found that both Japanese and American participants judged the probability based on the conditional probability, although some judged it based on the conjunction P(p&q). (An illustrative worked example of these three interpretations follows this session's listing.)
    Download PDF (825K)
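  The contrast among the three interpretations above is easiest to see with a concrete urn. The numbers below are purely illustrative and are not taken from the study's materials: an urn holds 10 chips, of which 4 are red squares (p ∧ q), 2 are red circles (p ∧ ¬q), and 4 are blue (¬p), and the conditional under evaluation is 'if the chip is red (p) then it is square (q)'.

    % Illustrative urn, not the study's stimuli
    \begin{align*}
    P(\neg p \vee q) &= \tfrac{4+4}{10} = 0.8            && \text{(material conditional)}\\
    P(q \mid p)      &= \tfrac{4}{4+2} \approx 0.67      && \text{(suppositional reading: $\neg p$ chips are void)}\\
    P(p \wedge q)    &= \tfrac{4}{10} = 0.4              && \text{(conjunctive response)}
    \end{align*}

  The set-choice paradigm exploits exactly this divergence: depending on which quantity participants track, different urns appear to give the 'higher probability' of the same conditional.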
Oral (Japanese) session (Social cognition)
Oral (English) session (Social cognition, Emotion & Motivation, Development, Education, & Learning)
  • Tao LIU, Hirofumi SAITO, Misato OI, Shuang MENG, Victor Alberto Palaci ...
    Session ID: O2-1-1
    Published: 2014
    Released on J-STAGE: October 05, 2014
    CONFERENCE PROCEEDINGS FREE ACCESS
    To examine how prior experience of role-playing affects an individual's inferior frontal gyrus (IFG) activation during subsequent cooperation and competition, we used near-infrared spectroscopy to simultaneously measure the bilateral IFG activation of pairs of participants while they played a computerized turn-taking game. Each participant was assigned one of two roles: a Builder, who made the initial move to copy a disk pattern on a monitor, and a Partner, who made the second move either to aid the Builder's goal (cooperative game) or to obstruct it (competitive game). The two participants exchanged the Builder and Partner roles across two consecutive sessions. The results suggest that participants could draw on the prior experience of being disturbed to disturb others more tactically in the subsequent competition: a better understanding of the Builder's position increased a participant's right IFG activation when he or she was the one meant to disturb in session 2.
    Download PDF (442K)
  • Yoshiyuki Ueda, Kie Nagoya, Sakiko Yoshikawa, Michio Nomura
    Session ID: O2-1-2
    Published: 2014
    Released on J-STAGE: October 05, 2014
    CONFERENCE PROCEEDINGS FREE ACCESS
    Previous studies suggest that observers' own facial expressions influence their emotions, leading to different perceptions. In the present study, we investigated how observers' facial expressions influence perceptions of personal traits. In the experiment, participants were asked to hold a specific facial expression with a wooden chopstick, without changing their emotion, and to judge two traits of a partner: trustworthiness and dominance. The results showed that persons with a happy expression were judged as more trustworthy than those with a disgusted expression, and the observers' own expression did not affect this judgment. In contrast, persons with a disgusted expression were judged as more dominant than those with a happy expression, and this difference was much larger when the participants' own expression was disgust-like. In social communication, some personal traits inferred from facial expressions are thus unstable and are determined by mutual interaction between the partner and the receiver.
    Download PDF (603K)
  • Kanji Tanaka, Katsumi Watanabe
    Session ID: O2-1-3
    Published: 2014
    Released on J-STAGE: October 05, 2014
    CONFERENCE PROCEEDINGS FREE ACCESS
    Learning and transfer of behavioral sequences are vital for everyday life. Our recent work has suggested that people can implicitly transfer learned sequences (Tanaka & Watanabe, 2013): even if participants are not aware of the hidden relationship between the learning and transfer sequences, they can transfer their acquired knowledge to a new sequence. In the present study, we examined whether implicit transfer depends on the number of successful and/or failed trials experienced during learning. We adopted a basic visuomotor sequence-learning paradigm and varied the number of successful trials required in the learning session. We found that the number of completed trials (i.e., successful experiences), rather than the number of error trials (i.e., failures), in the learning session led to better implicit transfer.
    Download PDF (901K)
  • Yongning Song
    Session ID: O2-1-4
    Published: 2014
    Released on J-STAGE: October 05, 2014
    CONFERENCE PROCEEDINGS FREE ACCESS
    It has been debated whether attending to a particular facial region, such as the eyes, is impaired in children with autism. The purpose of this study was to systematically test the poor-eye-gaze hypothesis, which postulates that children with High-Functioning Autism (HFA)/AS are impaired in their ability to attend to another person's eyes. A group with ASD (n = 14) and a matched non-ASD group (n = 19) completed three emotion judgment tasks requiring the perception of happiness, anger, and fear, respectively, in facial images masked by the "Bubbles" technique. Results indicated that, like non-ASD individuals, ASD individuals used information from other people's eyes to judge happiness and anger. In contrast, ASD individuals failed to use information from the eyes to judge fear, compared with participants without autism. The results challenge the conventional hypothesis that individuals with ASD cannot attend to or derive information from another person's eyes, and suggest an asymmetric eye-gaze ability, that is, a selective impairment in extracting facial information for particular emotions in ASD.
    Download PDF (659K)
  • Cross-Sectional Results from the SONIC Study
    Yoshiko ISHIOKA, Yasuyuki GONDO, Yukie MASUI, Takeshi NAKAGAWA, Meg ...
    Session ID: O2-1-5
    Published: 2014
    Released on J-STAGE: October 05, 2014
    CONFERENCE PROCEEDINGS FREE ACCESS
    We examined the relationships between false recognition and global cognitive abilities using data from 1,000 Japanese community-dwelling participants in the Septuagenarian, Octogenarian, Nonagenarian Investigation with Centenarian (SONIC) project. Multiple regression analyses were conducted with the total score and each of five subscales of global cognitive ability as the dependent variables, and the false alarm rate and hit rate as the independent variables. After adjusting for covariates (sex, education, and primary school remarks), the false alarm rate was significantly associated with all cognitive abilities. The hit rate was significantly associated with global cognition, recall memory, and orientation, marginally associated with executive functioning, and not significantly associated with verbal fluency or delayed memory. These results suggest that an increase in the false alarm rate may reflect cognitive decline, especially in explicit cognitive processes, and that false recognition may be informative for screening cognitive impairment in older adults. (A sketch of this type of analysis follows this session's listing.)
    Download PDF (473K)
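  A minimal sketch of the kind of covariate-adjusted regression described in the last abstract above, with entirely hypothetical column names and a hypothetical data file; the authors' actual variables, coding, and software are not specified in the abstract.

    # Hypothetical illustration only: one cognitive score at a time regressed on
    # false-alarm and hit rates, adjusting for covariates. Column names are invented.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("sonic_recognition.csv")  # hypothetical dataset

    outcomes = ["global_cognition", "recall_memory", "orientation",
                "executive_function", "verbal_fluency", "delayed_memory"]

    for outcome in outcomes:
        model = smf.ols(
            f"{outcome} ~ false_alarm_rate + hit_rate + C(sex) + education + school_record",
            data=df,
        ).fit()
        print(outcome)
        print(model.summary().tables[1])  # coefficient table for each predictor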
Oral (Japanese) session (Perception & Kansei 1)
Oral (Japanese) session (Attention)
Oral (Japanese) session (Memory, Thinking & Language)
Oral (Japanese) session (Development, Education, & Learning)
Oral (Japanese) session (Perception & Kansei 2)
  • Masakazu Ide, Souta Hidaka
    Session ID: O4-2-1
    Published: 2014
    Released on J-STAGE: October 05, 2014
    CONFERENCE PROCEEDINGS FREE ACCESS
    An input to a sensory modality (e.g., an airplane takeoff sound) can mask the percept of another input of the same modality (e.g., the voices of people talking nearby). This perceptual masking effect is evidence that neural responses to different inputs closely interact with each other. Although recent studies suggest that close interactions also occur across sensory modalities, a crossmodal perceptual masking effect has not yet been reported. Here, we demonstrate that tactile stimulation applied to the hand can mask the percept of visual stimuli. The effect did not occur when auditory stimuli were presented, or when body parts other than the hand were stimulated. We also demonstrated that the effect occurred primarily when the tactile and visual information were spatially and temporally consistent. These findings indicate that neural signals can interact closely and directly with each other, sufficiently to induce a perceptual masking effect, even across sensory modalities.
    Download PDF (939K)
  • A case study
    Shu Imaizumi, Tomohisa Asai, Noriaki Kanayama, Mitsuru Kawamura, Shini ...
    Session ID: O4-2-2
    Published: 2014
    Released on J-STAGE: October 05, 2014
    CONFERENCE PROCEEDINGS FREE ACCESS
    Although delayed visual feedback of an action can reduce the sense of agency, its effect on phantom motor sensation (i.e., the sense of agency over a phantom limb) has not been elucidated. We examined the effect of delayed visual feedback on phantom motor sensation in a male upper-limb amputee. In the experiment, he viewed video images of his intact right hand clasping and unclasping on a flat monitor placed in front of his left arm and rated the intensity of phantom motor sensation. There were three delay conditions (0, 250, and 500 ms). Phantom motor sensation decreased when the image was delayed by 250 and 500 ms. However, when we instructed him to adjust the phase of the phantom movement to that of the image with a 500-ms delay, phantom motor sensation increased. These results suggest that delayed visual feedback reduces phantom motor sensation and that this reduction is modulated by top-down effects.
    Download PDF (1204K)
  • Relationship between handedness and tools
    Masazumi Katayama, Yuta Kimura
    Session ID: O4-2-3
    Published: 2014
    Released on J-STAGE: October 05, 2014
    CONFERENCE PROCEEDINGS FREE ACCESS
  • KENRI KODAKA, YUKI ISHIHARA
    Session ID: O4-2-4
    Published: 2014
    Released on J-STAGE: October 05, 2014
    CONFERENCE PROCEEDINGS FREE ACCESS
  • Tomotoshi Okamura
    Session ID: O4-2-5
    Published: 2014
    Released on J-STAGE: October 05, 2014
    CONFERENCE PROCEEDINGS FREE ACCESS
    Humans recognize the world when sensory inputs are integrated in the brain. Although many studies have examined the mechanism of sensory integration, few previous studies have used three or more sources of sensory information. In the present study, the integration of visual, auditory, and haptic information was examined using visual images, sound effects, and a haptic device. In the experiment, reaction times were measured while motion stimuli were presented simultaneously in the visual, auditory, and haptic modalities. Reaction times to multiple stimuli were shorter than those to single stimuli, and reaction times were observed that were shorter than could be produced by any single stimulus. These results suggest that reaction times are shortened by sensory integration. (A hypothetical formalization of this comparison follows this session's listing.)
    Download PDF (822K)
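  The last abstract above reports multimodal reaction times faster than any single stimulus could produce. The abstract does not name its analysis; one common way to formalize such a claim is a race-model bound on the reaction-time distributions, sketched below with simulated data in place of the study's measurements.

    # Hypothetical race-model check for a redundant-signals design.
    # If P(RT_multi <= t) exceeds the sum of the unimodal CDFs at some t,
    # a race between independent channels cannot explain the speed-up.
    # All reaction times are simulated; this is not the authors' analysis.
    import numpy as np

    rng = np.random.default_rng(0)
    rt_visual = rng.normal(420, 50, 200)    # simulated unimodal RTs (ms)
    rt_auditory = rng.normal(400, 50, 200)
    rt_multi = rng.normal(340, 40, 200)     # simulated multimodal RTs (ms)

    t_grid = np.linspace(200, 600, 81)

    def ecdf(rts, t):
        """Empirical CDF of the reaction times, evaluated on a grid of times."""
        return np.mean(rts[:, None] <= t, axis=0)

    bound_exceeded = ecdf(rt_multi, t_grid) > ecdf(rt_visual, t_grid) + ecdf(rt_auditory, t_grid)
    print("bound exceeded at", int(bound_exceeded.sum()), "of", len(t_grid), "time points")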
Poster session (Perception & Kansei, Social cognition, Development, Education, & Learning, General topics)