Proceedings of the Japanese Society for Cognitive Psychology
The 11th Conference of the Japanese Society for Cognitive Psychology
Oral (English) session (perception and kansei, attention, social cognition)
  • Yasunori KINOSADA, Shinnosuke USUI
    Session ID: O1-1
    Published: 2013
    Released on J-STAGE: November 05, 2013
    CONFERENCE PROCEEDINGS FREE ACCESS
    Observers need to estimate the time-to-contact (TTC) with an approaching vehicle from its perceptual information while also predicting how the vehicle might change its motion. Seventeen people with driving experience wore liquid crystal shutter goggles and viewed a vehicle approaching from the right. After the vehicle passed photoelectronic sensors, the goggles changed from transparent to opaque. Participants imagined the vehicle continuing to approach and pressed a button when they judged that it had reached their sagittal plane. The interval between the vehicle passing the sensors and the button press, corresponding to the estimated TTC, was recorded. Participants estimated a longer TTC in the cyclist or pedestrian condition than in the driver condition. The results indicate that perceived vulnerability increases observers' expectation that drivers will avoid crash risk by decelerating.
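    Under a constant-velocity assumption, TTC is simply remaining distance divided by approach speed; a minimal sketch of how a recorded estimate compares against that baseline (all numbers hypothetical, not taken from the study):

      # Hypothetical sketch: estimated time-to-contact (TTC) in a prediction-motion task.
      # Under constant velocity, actual TTC = remaining distance / approach speed.
      def actual_ttc(distance_m: float, speed_mps: float) -> float:
          return distance_m / speed_mps

      # Example: a vehicle 40 m away approaching at 10 m/s is occluded now.
      occlusion_to_button_press_s = 4.8                # recorded response interval
      true_ttc_s = actual_ttc(40.0, 10.0)              # 4.0 s
      overestimation_s = occlusion_to_button_press_s - true_ttc_s  # +0.8 s: a longer estimated TTC
      print(true_ttc_s, overestimation_s)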
  • Ryosuke Ozawa, Keisuke Fujii, Motoki Kouzaki
    Session ID: O1-2
    Published: 2013
    Released on J-STAGE: November 05, 2013
    CONFERENCE PROCEEDINGS FREE ACCESS
    On a round trip, the return leg often seems shorter, a phenomenon called the "return trip effect." We investigated this effect psychologically and physiologically. Ten participants (the round-trip group) watched movies of a round-trip walk; a different ten (the control group) watched walking movies of two different routes. Participants evaluated time by two methods: RP3 and an 11-point scale. Heart rate (HR) and the cardiac vagal index (CVI) were calculated. Mean RP3, HR, and CVI did not differ between groups, but on the 11-point scale only the round-trip group rated the first movie as longer, and the difference in CVI correlated significantly with the 11-point ratings only in the control group. These results suggest that the return trip effect may arise postdictively, since the 11-point scale was completed afterward, and that time perception may be related to the parasympathetic nervous system, though cognitive load may have a stronger influence.
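    CVI is commonly derived from the Lorenz plot of successive RR intervals; a hedged sketch assuming the Toichi et al. (1997) definition, with the 4-SD axis lengths and the synthetic data being assumptions of this illustration:

      import numpy as np

      # Sketch of the cardiac vagal index (CVI): plot RR(n) against RR(n+1);
      # L and T are the Lorenz-plot axes along and across the line of identity,
      # and CVI = log10(L * T). RR values below are synthetic.
      def cvi(rr_ms: np.ndarray) -> float:
          x, y = rr_ms[:-1], rr_ms[1:]
          along = (x + y) / np.sqrt(2)    # component along the line of identity
          across = (y - x) / np.sqrt(2)   # component across it
          L = 4 * np.std(along)           # longitudinal axis (~4 SD, an assumption here)
          T = 4 * np.std(across)          # transverse axis
          return np.log10(L * T)

      rng = np.random.default_rng(0)
      rr = 800 + 50 * rng.standard_normal(300)  # synthetic RR intervals (ms)
      print(cvi(rr))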
  • Minseo Kim, Jeounghoon Kim
    Session ID: O1-3
    Published: 2013
    Released on J-STAGE: November 05, 2013
    CONFERENCE PROCEEDINGS FREE ACCESS
  • Shuang Meng, Hirofumi Saito, Misato Oi, Tao Liu
    Session ID: O1-4
    Published: 2013
    Released on J-STAGE: November 05, 2013
    CONFERENCE PROCEEDINGS FREE ACCESS
    Previous studies have reported that imagining a motor movement from the first-person perspective (1PP) is more embodied than imagining it from the third-person perspective (3PP). Motor imagery in 1PP is restricted by the biomechanical constraints of actual movements. The present study investigated whether biomechanical constraints also influence motor imagery in 3PP, using a hand-laterality judgment task. Drawings of hands were presented on a monitor in three orientations (0°, 90°, or 270°), and participants judged whether the presented hand was their own left or right hand (1PP group) or another person's left or right hand (3PP group). Participants in the 1PP group showed different RTs at 90° and 270° for the left versus the right hand, but those in the 3PP group did not. The results suggest that biomechanical constraints emerge in 1PP but not clearly in 3PP, owing to the difficulty of taking the third-person perspective.
  • Yoshiyuki Ueda, Sakiko Yoshikawa
    Session ID: O1-5
    Published: 2013
    Released on J-STAGE: November 05, 2013
    CONFERENCE PROCEEDINGS FREE ACCESS
    When observing a face-to-face interaction between two people, we can instantly infer from various nonverbal cues which person is more dominant, competent, or trustworthy. In the present study, we investigated how facial expressions of emotion influence the perception of dominance in a face-to-face situation. In Experiment 1, participants observed two faces presented face-to-face and judged which looked more dominant. In Experiment 2, participants rated how dominant each face looked when presented alone. In Experiment 3, participants rated how dominant each face looked in a face-to-face situation. Persons with a happy expression were judged the more dominant in face-to-face confrontation scenes, even though their personal traits were rated the least dominant. The present study indicates that the perception of dominance in face-to-face confrontation scenes is not merely a comparison of the two persons' personal traits.
  • A comparison using hierarchical letters and shapes in a selective attention task
    Polong Lin, Olivia Lin, Colin M. MacLeod, Etsuko T. Harada
    Session ID: O1-6
    Published: 2013
    Released on J-STAGE: November 05, 2013
    CONFERENCE PROCEEDINGS FREE ACCESS
    Group differences in global and local processing between Japanese and Caucasian Canadian participants were examined using the Navon global-local paradigm (Navon, 1977), a target-search task involving hierarchical figures (e.g., a large “E” made up of small “T”s). To control for dispositional biases, participants were instructed to attend only to the global level in one condition and only to the local level in another. Canadians were significantly slower than Japanese participants to respond to the global level for hierarchical letters (Exp. 1) and to the local level for hierarchical shapes (Exp. 2), whereas Japanese participants' performance did not differ with the attended level or the type of stimulus (letter or shape). One possibility is that Japanese participants are more adept at controlling how they direct their attention regardless of the kind of information at each level, whereas Canadians are more influenced by bottom-up characteristics at the global and local levels.
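    A hierarchical figure of the kind used in this paradigm can be sketched in a few lines; the 5x5 layout below is a hypothetical simplification of actual Navon stimuli:

      # Illustrative sketch: a Navon-style hierarchical figure, a large "E"
      # composed of small "T"s, on a simplified 5x5 grid.
      GLOBAL_E = ["#####",
                  "#    ",
                  "#### ",
                  "#    ",
                  "#####"]

      def hierarchical(global_shape, local_letter="T"):
          return "\n".join(
              "".join(local_letter if cell == "#" else " " for cell in row)
              for row in global_shape)

      print(hierarchical(GLOBAL_E))  # global level: E; local level: T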
  • Tao Liu, Hirofumi Saito, Misato Oi
    Session ID: O1-7
    Published: 2013
    Released on J-STAGE: November 05, 2013
    CONFERENCE PROCEEDINGS FREE ACCESS
    The present study examined the neural mechanisms underlying inter-brain processing during cooperation in a two-person computer game. We simultaneously measured pairs of participants' fronto-parietal activation during the game using near-infrared spectroscopy (NIRS). One participant, the builder, built a target pattern by placing disks on a monitor, while the other, the companion, either helped (cooperation) or disrupted (competition) the builder's progress. The NIRS data yielded two findings: (1) the builder showed higher activation in the right inferior frontal gyrus (IFG) under the cooperation condition than under the competition condition, while the companion showed the reverse pattern; (2) the right IFG activations of builder-companion pairs showed a positive inter-brain correlation under the cooperation condition. These results suggest that the leading player is actively engaged in achieving the goal regardless of task type, whereas inter-brain synchronization emerges only during cooperation, reflecting shared attention and/or empathic processing.
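    An inter-brain correlation of the kind reported can be illustrated as a Pearson correlation between the two players' right-IFG time courses; the signals below are synthetic stand-ins for preprocessed NIRS data, not the study's recordings:

      import numpy as np

      # Minimal sketch of an inter-brain correlation: Pearson r between two
      # players' right-IFG time courses. A shared component stands in for
      # whatever coupling cooperation induces.
      rng = np.random.default_rng(0)
      shared = rng.standard_normal(600)                  # putative shared component
      builder = shared + 0.8 * rng.standard_normal(600)
      companion = shared + 0.8 * rng.standard_normal(600)

      r = np.corrcoef(builder, companion)[0, 1]
      print(f"inter-brain r = {r:.2f}")  # positive r, as under the cooperation condition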
Oral (English) session (memory, thinking and language)
  • Yoko Higuchi, Yoshiyuki Ueda, Hirokazu Ogawa, Jun Saiki
    Session ID: O2-1
    Published: 2013
    Released on J-STAGE: November 05, 2013
    CONFERENCE PROCEEDINGS FREE ACCESS
    Repeated sequences are implicitly learned, and they facilitate responses to a target. We investigated whether implicit learning of a sequence's locations and identities depends on the task (identity-related vs. location-related), using the spatiotemporal contextual cueing paradigm, in which objects are presented sequentially at multiple fixed locations. Participants responded to a target among a stream of distractors presented sequentially at different locations. The target was defined as an object belonging to a different category (Experiment 1: identity-related task) or as one with a spatial offset (Experiment 2: location-related task). In Experiment 1, object identities were selectively learned, whereas in Experiment 2, object locations were selectively learned. Participants did not recognize the invariant sequences in either experiment. These results suggest that the task set plays an important role in determining which information is implicitly learned.
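    The repeated-versus-novel sequence manipulation at the heart of spatiotemporal contextual cueing can be sketched as follows; this is an illustrative simplification, not the authors' exact design:

      import random

      # Sketch: objects appear one by one at fixed locations. A "repeated"
      # sequence reuses the same location order across blocks, while a
      # "novel" sequence is freshly shuffled each time.
      locations = list(range(8))
      repeated = random.Random(0).sample(locations, len(locations))  # fixed order, reused

      def novel():
          return random.sample(locations, len(locations))            # fresh order each block

      print(repeated, novel())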
  • Masataka Nakayama, Satoru Saito
    Session ID: O2-2
    Published: 2013
    Released on J-STAGE: November 05, 2013
    CONFERENCE PROCEEDINGS FREE ACCESS
    Language systems seem to accumulate and implement at least two types of phonological knowledge for serial ordering. One is knowledge of inter-element associations (e.g., which phoneme tends to follow which); the other is knowledge of context-element associations (e.g., which phoneme tends to occupy which position). The present study examined whether and how such phonological knowledge is accumulated and implemented, using a computational model of serial order: the simple recurrent network. The model simulated the pattern of behavioral data demonstrating the contribution of phonological knowledge to a phonological short-term memory task. It replicated the interaction between the two types of knowledge, and it dissociated the effects of inter-element associations, which were evident at later within-word positions, from those of context-element associations. The only point requiring further modeling was the early effectiveness of context-element associations.
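    A minimal Elman-style simple recurrent network shows how a single hidden (context) layer can in principle carry both inter-element and context-element information; the sizes and one-hot phoneme coding below are illustrative assumptions, and training is omitted:

      import numpy as np

      # Minimal simple recurrent network (SRN) sketch for serial order: the
      # hidden state carries context from earlier elements, so the network can
      # learn both which phoneme follows which and which position a phoneme
      # tends to occupy.
      rng = np.random.default_rng(1)
      n_in, n_hid = 8, 16                      # 8 "phonemes", 16 hidden units
      W_xh = 0.1 * rng.standard_normal((n_hid, n_in))
      W_hh = 0.1 * rng.standard_normal((n_hid, n_hid))
      W_hy = 0.1 * rng.standard_normal((n_in, n_hid))

      def step(x, h):
          h = np.tanh(W_xh @ x + W_hh @ h)     # context layer = previous hidden state
          y = W_hy @ h                         # predicts the next element
          return y, h

      h = np.zeros(n_hid)
      for t in [0, 3, 5]:                      # a toy "word" as phoneme indices
          x = np.eye(n_in)[t]
          y, h = step(x, h)                    # weight updates (backprop through time) omitted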
  • Eriko Sugimori, Karen J. Mitchell, Carol L. Raye, Erich J. Greene, Mar ...
    Session ID: O2-3
    Published: 2013
    Released on J-STAGE: November 05, 2013
    CONFERENCE PROCEEDINGS FREE ACCESS
    Using fMRI, we investigated reality monitoring for auditory information. During scanning, healthy young adults heard some words in another person's voice and imagined hearing other words in that same voice. Later, outside the scanner, participants judged words as heard, imagined, or new. An area of left middle frontal gyrus (BA 6) was more active at encoding for imagined items subsequently correctly called “imagined” than for items incorrectly called “heard.” An area of left inferior frontal gyrus (BA 45/44) was more active at encoding for items subsequently called “heard” than “imagined,” regardless of the item's actual source. Scores on an Auditory Hallucination Experience Scale were positively related to activity in superior temporal gyrus (BA 22) for imagined words subsequently incorrectly called “heard.” We suggest that activity in these areas reflects information about cognitive operations (middle frontal gyrus) and semantic and/or perceptual detail (inferior frontal gyrus and superior temporal gyrus, respectively) used to make source attributions.
  • An investigation using multi-modality priming
    Kozue Miyashiro, Etsuko T. Harada
    Session ID: O2-4
    Published: 2013
    Released on J-STAGE: November 05, 2013
    CONFERENCE PROCEEDINGS FREE ACCESS
    To investigate the effect of harmony between words' meanings and their expression style (typography and prosody), we conducted a series of multi-modality priming experiments using a word-fragment completion (WFC) task. At encoding, words were presented both visually as letters and auditorily as speech, and their typography and prosody were manipulated to harmonize, or not, with the words' meanings. The script (hiragana/kanji) at encoding was also manipulated. At test, all WFC items were written in hiragana. The results showed no effect of meaning-typography harmony on implicit memory, which did not corroborate a previous study. We therefore conducted another experiment, manipulating typography while keeping all speech sounds neutral at encoding, and found an effect of harmony. These results suggest that when speech information is manipulated to harmonize with the words' meanings, the processing of speech takes priority over that of visual character information.
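    Priming in a WFC task is typically scored as the completion rate for studied items minus the baseline rate for unstudied items; a toy calculation with hypothetical counts, not the study's data:

      # Sketch: word-fragment completion priming = studied completion rate
      # minus unstudied (baseline) completion rate.
      studied_completed, studied_n = 18, 30
      unstudied_completed, unstudied_n = 9, 30
      priming = studied_completed / studied_n - unstudied_completed / unstudied_n
      print(f"priming = {priming:.2f}")  # 0.60 - 0.30 = 0.30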
  • Misato Oi, Hirofumi Saito
    Session ID: O2-5
    Published: 2013
    Released on J-STAGE: November 05, 2013
    CONFERENCE PROCEEDINGS FREE ACCESS
    The purpose of this study was to examine whether bilinguals show interference from their less skilled language (L2) during semantic processing in their more skilled language (L1). In a word-definition judgment task, Chinese-Japanese bilinguals judged whether visually presented word-definition pairs were valid in the target language (Japanese or Chinese). Four types of pairs were used, according to whether the pairing appears in Japanese and/or Chinese dictionaries: both (shared: S), Japanese-specific (J), Chinese-specific (C), and neither (unrelated: U). The false-alarm rates for the C pairs in the Japanese (L2) condition duplicated the results for bilinguals in Oi et al. (2010), being higher than those of Japanese monolinguals. In turn, the false-alarm rates for the J pairs in the Chinese (L1) condition were similar to those for the C pairs in the Japanese (L2) condition. These results suggest that bilinguals may show bidirectional interference between L1 and L2 when the two languages' scripts resemble each other.
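    The false-alarm analysis amounts to the proportion of erroneous "valid" responses per lure type; a minimal sketch with hypothetical counts:

      # Sketch: false-alarm rate per pair type from "valid" responses to lures.
      responses = {          # pair type -> (false alarms, lure trials); counts hypothetical
          "C_in_Japanese": (14, 40),
          "J_in_Chinese":  (13, 40),
          "U":             (3, 40),
      }
      for pair, (fa, n) in responses.items():
          print(pair, f"FA rate = {fa / n:.2f}")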
  • Kwangoh Yi
    Session ID: O2-6
    Published: 2013
    Released on J-STAGE: November 05, 2013
    CONFERENCE PROCEEDINGS FREE ACCESS
  • Victor Alberto Palacios, Hirofumi Saito, Misato Oi
    Session ID: O2-7
    Published: 2013
    Released on J-STAGE: November 05, 2013
    CONFERENCE PROCEEDINGS FREE ACCESS
    Whether gestures serve a specific function such as accelerating lexical access (e.g., Morrel-Samuels & Krauss, 1992) or aiding conceptual planning (Alibali et al., 2000; Oi et al., 2013), researchers generally assume that gestures facilitate speech production. To test this assumption, we examined latencies for gesturing and speaking under three conditions: speaking-only (S), gesturing-only (G), and gesturing while speaking (GS). Participants produced a hand gesture and/or verbalized an action or motion in response to nine visually presented objects, all graspable with one hand (e.g., a baseball). If gestures facilitate speech production, the GS group should show shorter speech latencies than the S group. However, speech latencies in the GS group were not significantly shorter than those in the S group. This may imply that gesture production aids speech only when the utterance content requires ample consideration beyond the single-word level, as in storytelling or narration.
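    The group comparison described here can be illustrated with an independent-samples t test on speech-onset latencies; the data below are synthetic, and the authors' actual analysis may have differed:

      import numpy as np
      from scipy import stats

      # Sketch: comparing speech-onset latencies (ms) between the speaking-only
      # (S) and gesturing-while-speaking (GS) groups.
      rng = np.random.default_rng(2)
      s_latency = rng.normal(950, 120, 20)
      gs_latency = rng.normal(960, 120, 20)
      t, p = stats.ttest_ind(gs_latency, s_latency)
      print(f"t = {t:.2f}, p = {p:.3f}")  # no significant speedup for GS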
Oral (Japanese) session (emotion and motivation, social cognition)
Oral (Japanese) session (perception and kansei)
Oral (Japanese) session (perception and kansei, attention, thinking and language)
Oral (Japanese) session (memory, thinking and language)
Poster (Japanese) session (memory, thinking and language, personality and clinic, emotion and motivation)