Advanced Search Results
Showing results for the following criteria:
Query search: "2タッチ入力" (two-touch input)
Showing results 1-4 of 4
  • 福島 大志, 宮崎 文夫, 西川 敦
    計測自動制御学会論文集
    2012, Vol. 48, No. 3, pp. 159-166
    Published: 2012
    Released: 2012/03/29
    Journal, free access
    We have developed a noncontact letter input interface called “Fingual”. Fingual uses a glove fitted with inexpensive, small magnetic sensors. Wearing the glove, users input letters by forming finger-alphabet signs, a kind of sign language. The proposed method uses a dataset consisting of magnetic-field measurements and the corresponding letter labels. In this paper, we present two recognition methods based on this dataset: the first uses the Euclidean norm, and the second additionally applies a Gaussian function as a weighting function. We conducted verification experiments on the recognition rate of each method in two situations: in one, subjects used their own dataset; in the other, they used another person's dataset. The proposed method recognized letters at a high rate in both situations, although using one's own dataset performed better than using another person's. Although Fingual requires a magnetic dataset to be collected for each letter in advance, it can recognize letters without complicated calculations such as solving inverse problems. This paper reports the results of the recognition experiments and demonstrates the utility of the proposed system, “Fingual”.
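
    The two recognition methods described in this abstract amount to nearest-neighbor matching over pre-collected magnetic-field samples. A minimal Python sketch of both methods follows; the feature vectors, the dataset layout, and the Gaussian width sigma are assumptions for illustration, not values from the paper.

        import numpy as np

        # Hypothetical pre-collected dataset: each letter maps to magnetic-field
        # feature vectors recorded while the wearer formed that letter.
        dataset = {
            "A": [np.array([0.12, -0.30, 0.85, 0.05]),
                  np.array([0.10, -0.28, 0.88, 0.07])],
            "B": [np.array([-0.40, 0.22, 0.10, 0.90]),
                  np.array([-0.38, 0.25, 0.08, 0.92])],
        }

        def recognize_euclidean(sample, dataset):
            # Method 1: pick the letter whose closest stored sample
            # (by Euclidean norm) is nearest to the query.
            return min(dataset, key=lambda letter: min(
                np.linalg.norm(sample - ref) for ref in dataset[letter]))

        def recognize_gaussian(sample, dataset, sigma=0.5):
            # Method 2: weight every stored sample with a Gaussian of its
            # Euclidean distance and pick the letter with the largest total weight.
            def score(letter):
                return sum(np.exp(-np.linalg.norm(sample - ref) ** 2 / (2 * sigma ** 2))
                           for ref in dataset[letter])
            return max(dataset, key=score)

        query = np.array([0.11, -0.29, 0.86, 0.06])
        print(recognize_euclidean(query, dataset))  # -> A
        print(recognize_gaussian(query, dataset))   # -> A
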
  • 佐藤 知充, 藤田 欣也
    ヒューマンインタフェース学会論文誌
    2009, Vol. 11, No. 4, pp. 409-416
    Published: 2009/11/25
    Released: 2019/09/04
    Journal, free access

    We propose a novel Japanese Kana input device based on thumb-sliding gestures, designed for use without visual feedback and for inconspicuous information access in socially restricted situations. The device has five touch sensors, at the four corners and the center of a square indentation, to allow intuitive Kana input and to provide haptic feedback. Combined with voice feedback, the device demonstrated the potential for rapid text input of 60 characters per minute without visual feedback in an experiment with twenty users.
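    The abstract does not specify how slide gestures map to Kana, so the following Python sketch is purely illustrative: it assumes a hypothetical scheme in which a slide's starting sensor selects a consonant row and its ending sensor selects a vowel, and the sensor names (TL, TR, BL, BR, C) are invented here.

        # Five sensors: four corners plus the center of the square indentation.
        ROW = {"C": "k", "TL": "s", "TR": "t", "BL": "n", "BR": "h"}    # assumed rows
        VOWEL = {"C": "a", "TL": "i", "TR": "u", "BL": "e", "BR": "o"}  # assumed vowels

        def decode_slide(start, end):
            # Map one thumb slide (start sensor -> end sensor) to a romaji syllable.
            if start not in ROW or end not in VOWEL:
                raise ValueError("unknown sensor")
            return ROW[start] + VOWEL[end]

        print(decode_slide("C", "TR"))   # -> ku
        print(decode_slide("TL", "BR"))  # -> so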

  • 加藤 邦拓, 宮下 芳明
    ヒューマンインタフェース学会論文誌
    2016, Vol. 18, No. 1, pp. 9-18
    Published: 2016/02/25
    Released: 2019/07/01
    Journal, free access

    In this paper, we propose "ExtensionSticker", a striped-pattern sticker that extends a touch interface simply by being attached to a touch panel display. The sticker contains multiple conductive lines, so touching the sticker generates a touch input on the panel. The method supports not only touch input at specific locations but also continuous touch input such as scrolling. This allows a user to easily prototype interfaces that extend touch panel devices. We also evaluated the recognition accuracy of scroll and tap actions performed through the proposed method.
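    The paper evaluates how reliably taps and scrolls performed through the sticker are recognized; the Python sketch below shows one way such a classifier could look, treating a short, stationary burst of touch points as a tap and a moving sequence as a scroll. The thresholds and event format are assumptions, not values from the paper.

        def classify(events, move_threshold_px=10, tap_duration_s=0.2):
            # events: list of (timestamp_s, x_px, y_px) touch samples produced
            # while the user touches the sticker's conductive lines.
            if not events:
                return "none"
            duration = events[-1][0] - events[0][0]
            travel = max(abs(x - events[0][1]) + abs(y - events[0][2])
                         for _, x, y in events)
            if travel < move_threshold_px and duration < tap_duration_s:
                return "tap"
            return "scroll"

        tap = [(0.00, 100, 200), (0.05, 101, 200), (0.10, 100, 201)]
        scroll = [(0.00, 100, 200), (0.10, 100, 240), (0.20, 100, 280)]
        print(classify(tap))     # -> tap
        print(classify(scroll))  # -> scroll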

  • 中川 聖一, 傳田 明弘, 伊藤 敏彦
    人工知能
    1998, Vol. 13, No. 2, pp. 241-251
    Published: 1998/03/01
    Released: 2020/09/29
    Magazine (commentary/general information), free access

    Recent improvements in speech recognition and natural language processing enable dialogue systems to handle spontaneous speech, and multi-modal man-machine interfaces have been widely introduced to support such systems. We have been aiming at a robust dialogue system that uses spontaneous speech as its main input modality. Although our previous system was built around a robust natural language interpreter, its user interface relied on speech alone, which limited its usability: response sentences became too long when they carried a lot of information, so users could miss part of the response speech. Furthermore, users could not tell how a particular word in a response, e.g. a place name, should be written in Kanji characters, and it was very difficult to specify a position on a map through speech. These examples indicate that a speech-only user interface does not always give the user enough information and can cause problems. To solve these problems and realize more natural human-machine interaction, we have developed a multi-modal sightseeing guidance system with 1) speech input/output, 2) touch-screen input (on a map or in a menu), and 3) graphical/text output (map, photograph, menu, and dialogue history). We also implemented an agent interface with a real face image/animation and recorded/synthesized speech, and carried out evaluation experiments consisting of task completions and questionnaires to evaluate the interface and the whole system. In this paper, we describe the system and the evaluation experiments.
