Abstract
This paper describes an annotation system that uses motion capture data to teach naginata performance. Existing video annotation tools, such as YouTube's, provide only a single viewing angle. Because our approach is based on motion-captured data, a performance can be viewed from any angle, and a trainer can attach annotations to specific parts of the body. We compared the effectiveness of the proposed system with that of YouTube's annotation tool. The experimental results showed that our system elicited more annotations than YouTube's annotation tool.