The present study describes the physical features and the musical effects of expressive gestures performed by a professional orchestra conductor, including a preliminary analysis of the interpersonal relationship between a conductor and an orchestral player. I videotaped sequences of orchestra rehearsals by Michiyoshi Inoue (conductor) and the Tokyo Philharmonic Orchestra and used these data in the following analyses. In Analysis 1, I measured the timing and the degree of synchronization between the conductor's physical directions and the bowing actions of the concertmaster (the principal violinist, regarded as the leader of the orchestra) to show the emergent processes of mutually coordinated timing structures. Results show that the largest time differences between their actions were observed on the first day of rehearsals, but these time lags were gradually reduced as the rehearsals proceeded. The long time lags on the first day may reflect that the conductor made strong efforts to convey his musical intentions to the orchestra rather than to control the precise and/or mechanical timing of the ensemble. In Analysis 2, I investigated the musical and expressive aspects of the conductor's gestures on the first day. Through detailed comparison with a classification table of conventional conductors' gestures compiled by Braem & Bräm (2000), the outward appearances and the musical contents of the conductor's gestures on the first day were coded. The conductor and five of the orchestral players were interviewed in order to check the conductor's musical intentions underlying his gestures and the players' interpretations of, or impressions conveyed by, those gestures. Findings reveal that the conductor flexibly produced many gesture variations, not only in physical appearance but also in musical content, which went beyond the conventional patterns.
In some cases, multiple musical directions were simultaneously embodied in a single gesture in a specific manner that represented the compositional structure. Discussion of these results suggests that the conductor's gestures can be considered dynamic activities with flexibility and simultaneous multiplicity of meaning, and that these features of real conductors' gestures contribute to organizing interpersonally coordinated relationships with the orchestra.
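The abstract does not state how the conductor–concertmaster time differences were quantified. As a minimal illustrative sketch (the function name, the pairing of events, and the millisecond units are assumptions, not the study's actual procedure), the lag on each paired event can be taken as the signed difference between onset times:

```python
def mean_asynchrony(conductor_times, player_times):
    """Mean signed lag between paired event onsets (player minus conductor),
    in the same units as the inputs (here assumed to be milliseconds).
    Positive values mean the player's action follows the conductor's."""
    diffs = [p - c for c, p in zip(conductor_times, player_times)]
    return sum(diffs) / len(diffs)
```

Under this measure, a shrinking mean asynchrony across rehearsal days would correspond to the gradual reduction of time lags reported in Analysis 1.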
Jeannerod (1981) proposed that prehension movements consist of two components: the transport component and the manipulation component. These two components are independent of each other. The transport component is based on extrinsic properties of the object (e.g., location), whereas the manipulation component is based on intrinsic properties (e.g., size). While the effects of object location and size on prehension have been investigated in many studies, the influence of object orientation has been examined in only a few papers. To investigate how the orientation of an object is processed, we conducted two experiments using a virtual reality (VR) technique. In Experiment 1, we studied the properties of prehension movements toward a computer-generated target (a virtual cylinder), comparing them with prehension movements toward a real object (a wooden cylinder) and with pantomimed movements (subjects were required to pretend to grasp the same cylinder). Results showed that the kinematic pattern of prehension movements toward a three-dimensional virtual image is similar to that of actual performance with a real object. Therefore, the VR technique is valid for investigating the kinematics of prehension movements, although there is no tactile feedback when grasping. In Experiment 2, we used a visual perturbation paradigm in which the location, size (width), and orientation (in this case, inclination) of the object to be reached and grasped could be changed 150 ms after the beginning of the movement. Participants could smoothly adjust their arm movement (transport), finger aperture (manipulation), and wrist rotation in response to the visual perturbation, but the latencies of these three modifications differed from one another. This result suggests that the orientation of a target is processed by a system different from those for size and location in prehension movements. That is, hand transport, wrist rotation, and grasping could each be processed by independent visuomotor channels.
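The abstract does not describe how modification latencies were measured. One common approach in kinematic analyses (sketched here under assumptions: the helper `correction_latency`, the fixed deviation threshold, and the shared time base are illustrative, not the paper's analysis pipeline) is to find the first post-perturbation sample at which the perturbed trajectory deviates from a matched unperturbed baseline by more than a criterion:

```python
def correction_latency(perturbed, baseline, t, onset, threshold):
    """Estimate the latency (same units as t) at which a perturbed
    trajectory first deviates from an unperturbed baseline trajectory.

    perturbed, baseline: kinematic samples (e.g. wrist angle) on a
        common time base t (e.g. milliseconds from movement onset)
    onset: time of the visual perturbation
    threshold: minimum absolute deviation counted as a correction
    Returns the latency relative to the perturbation, or None if the
    trajectories never diverge beyond the threshold.
    """
    for ti, p, b in zip(t, perturbed, baseline):
        if ti >= onset and abs(p - b) > threshold:
            return ti - onset
    return None
```

Applying such a measure separately to the transport, aperture, and rotation channels would yield the three latencies whose differences the abstract reports.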
The purpose of this study is to propose and evaluate a new interface design method which we call “difference presentation”, in which the differences between an old and a new version of software are visually presented. Although many studies have focused on changes in users' mental models when using adaptive/adaptable user interfaces, changes in the functions of the software itself have seldom been taken into consideration. Software is, however, upgraded so frequently that its functions often change. In this study, we focus on the change of functions and propose the difference presentation method to show the change clearly to users. In order to show that difference presentation is effective in facilitating users' adaptation to a new version, we developed a new interface based on the method and tested its usability. The results showed that the number of errors was significantly smaller in the difference presentation condition than in the non-difference presentation condition, although, in all conditions, errors increased after the software upgrade. This means that difference presentation is effective in facilitating users' adaptation to a new version. It is also suggested that the difficulty of users' adaptation depended on the type of change in the system: the subjects' performance was worse when using a function into which two or more original functions had been integrated than when using a newly added function. This research not only provides a new method for adaptive/adaptable interfaces but also suggests that a new methodology of software development is required, in which users' adaptation is taken into consideration.
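The abstract does not specify how version differences are computed before being presented. A minimal sketch, assuming each version is described by a flat list of function names (the helper `function_diff` and its category labels are hypothetical, not the study's implementation):

```python
def function_diff(old_funcs, new_funcs):
    """Classify functions as added, removed, or kept between two
    software versions, given flat lists of function names."""
    old, new = set(old_funcs), set(new_funcs)
    return {
        "added": sorted(new - old),    # candidates to highlight as new
        "removed": sorted(old - new),  # may signal renamed or merged functions
        "kept": sorted(old & new),
    }
```

A real difference presentation would also need to mark integrated functions (several old functions merged into one), which the study found hardest for users; a plain set difference cannot detect such merges by itself.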
Situation decomposition extracts multiple situations, each of which is a combination of an attribute set and a case set, from relational data. The characteristics of the extracted situations depend on the choice of evaluation criterion for situations, because situation decomposition algorithms search for local maxima of the criterion value. It is already known that situation decomposition using the Matchability criterion achieves high prediction performance on card classification tasks using a small number of training cases. However, its performance is still inferior to human ability. In this paper, the ETMIC (Edge of Total Mutual Information Cliff) criterion is proposed as a new criterion for evaluating situations. Situation decomposition using the ETMIC criterion extracts situations that have three tendencies: high covariant relationships inside, few influences from unselected attributes, and a large number of cases. A prediction system using ETMIC situation decomposition answers 98% of the card classification task correctly (a task that requires learning disjunctive concepts) using only 20 training cases, far exceeding the prediction ability of the conventional system. The change in the concept acquisition rate as a function of the number of experienced cases was evaluated in a simulation of the card task with ETMIC situation decomposition; the result is consistent with data from a psychological experiment.
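The abstract does not give the ETMIC formula, but the three stated tendencies suggest a mutual-information-based score. The following is a minimal illustrative sketch (the functions `mutual_information` and `situation_score`, the weighting `alpha`, and the omission of case-set selection are all assumptions, not the proposed criterion): it rewards high statistical dependence among the selected attributes and penalizes dependence on unselected ones.

```python
from collections import Counter
from itertools import combinations
from math import log2

def mutual_information(xs, ys):
    """Empirical mutual information (in bits) between two discrete sequences."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum((c / n) * log2((c * n) / (px[x] * py[y]))
               for (x, y), c in pxy.items())

def situation_score(data, selected, alpha=1.0):
    """Illustrative situation score: total pairwise mutual information
    inside the selected attribute set, minus (weighted) mutual
    information between selected and unselected attributes.

    data: dict mapping attribute name -> list of values (one per case)
    selected: set of attribute names forming the candidate situation
    """
    attrs = list(data)
    inner = sum(mutual_information(data[a], data[b])
                for a, b in combinations(sorted(selected), 2))
    outer = sum(mutual_information(data[a], data[b])
                for a in selected for b in attrs if b not in selected)
    return inner - alpha * outer
```

On data where attributes `a` and `b` covary while `c` is independent, this score prefers the situation `{a, b}` over `{a, c}`, matching the first two tendencies; the third tendency (favoring large case sets) would require an additional term over the selected cases.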