Japan Society for Fuzzy Theory and Intelligent Informatics: Proceedings of the Fuzzy System Symposium
29th Fuzzy System Symposium
Session ID: MD2-4

Intelligent Living Room System Which Learns Human Activities
Madis VELLAMAE*, Tomonori HASHIYAMA

Abstract
Visions of smart houses and home automation technologies have been around for over three decades. Since then, computers and technology have advanced enormously, and simple home automation is no longer so appealing. In this paper we propose and prototype an intelligent living room system that, without deliberate interaction, enhances people's everyday lives by inferring their desires from natural gestures, facial expressions, and speech. Human behavior, such as gestures and facial expressions, as well as preferences, varies with the person and the environment. The same gesture or facial expression can carry a different meaning when made by different people, or by the same person in different situations. Therefore, the intelligent system must be able to recognize the person and the situation and learn their preferences. To achieve this, artificial intelligence and machine learning algorithms such as the Hidden Markov Model (HMM) and the Growing Self-Organizing Map (GSOM) are used. A Microsoft Kinect for Windows sensor is used to monitor gestures and voice and to locate people in the living room, and a high-definition camera is used to detect facial expressions. The information gathered from multiple sensors and the users' desires recognized from gestures and facial expressions are combined in order to make correct decisions. As a result, the system seamlessly enhances people's everyday lives, making them more comfortable by, for example, changing the temperature, room lighting, and ventilation. The system learns each individual's preferences in different situations. It adapts to different users and takes actions based on each user's postures, gestures, speech, facial expressions, and location in the room.
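The abstract names HMMs for recognizing gestures from Kinect observations but gives no implementation details. The following is a minimal, self-contained sketch (not the authors' implementation) of one common HMM-based approach: each candidate gesture has its own Gaussian HMM, and an observed joint-feature sequence is assigned to the gesture whose model gives it the highest forward-algorithm log-likelihood. All parameter names, shapes, and the toy gesture labels are illustrative assumptions.

import numpy as np

def logsumexp(a, axis=None):
    # Numerically stable log(sum(exp(a))) along the given axis.
    m = np.max(a, axis=axis, keepdims=True)
    s = m + np.log(np.sum(np.exp(a - m), axis=axis, keepdims=True))
    return np.squeeze(s, axis=axis) if axis is not None else s.item()

def hmm_log_likelihood(obs, log_pi, log_A, means, variances):
    # Forward algorithm in log space for a Gaussian HMM with diagonal covariance.
    # obs: (T, D) feature sequence; log_pi: (N,); log_A: (N, N);
    # means, variances: (N, D) per-state emission parameters.
    log_b = -0.5 * (np.log(2.0 * np.pi * variances)[None, :, :]
                    + (obs[:, None, :] - means[None, :, :]) ** 2 / variances[None, :, :])
    log_b = log_b.sum(axis=2)              # (T, N) per-frame log emission probabilities
    alpha = log_pi + log_b[0]              # initialization with the start distribution
    for t in range(1, obs.shape[0]):
        # sum over previous states, then emit the current observation
        alpha = log_b[t] + logsumexp(alpha[:, None] + log_A, axis=0)
    return logsumexp(alpha)                # total log-likelihood of the sequence

def classify_gesture(obs, gesture_models):
    # Assign the sequence to the gesture whose HMM explains it best.
    scores = {name: hmm_log_likelihood(obs, *params)
              for name, params in gesture_models.items()}
    return max(scores, key=scores.get)

# Toy usage with made-up parameters for two hypothetical gestures.
rng = np.random.default_rng(0)
def toy_model(n_states=3, dim=6):
    pi = np.full(n_states, 1.0 / n_states)
    A = np.full((n_states, n_states), 1.0 / n_states)
    return (np.log(pi), np.log(A),
            rng.normal(size=(n_states, dim)), np.ones((n_states, dim)))

models = {"raise_hand": toy_model(), "wave": toy_model()}
sequence = rng.normal(size=(40, 6))        # 40 frames of 6-D joint features
print(classify_gesture(sequence, models))

In a real system the per-gesture model parameters would be trained (e.g. by Baum-Welch) on recorded Kinect skeleton features, and the chosen gesture label would then be combined with the recognized person and situation to look up that person's learned preference before taking an action.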
© 2013 Japan Society for Fuzzy Theory and Intelligent Informatics