Electronic Medical Record (EMR) systems are the software systems used in healthcare clinics to manage patient records. EMR systems have multiple stakeholders whose needs continuously evolve. The traditional EMR design approach focuses on designing systems that perfectly fit the requirements of some stakeholders as understood in the initial design stages. This results in EMR systems that do not meet all stakeholders’ needs and quickly become outdated. To address these limitations, we propose a utilitarian redesign approach for EMR systems. By “utilitarian redesign”, we mean that designers continuously redesign the EMR system with the aim of maximizing the satisfaction of all stakeholders. Our approach allows designers to (i) identify the features to redesign and (ii) know which features would bring the largest good to the largest number of stakeholders. We showcase the approach through a case study of redesigning an EMR system in Japanese antenatal care settings. We also evaluate the approach with 21 participants split over 7 workshops. Our results show that the approach provides useful information to help designers make utilitarian redesign choices. Although our approach was applied to EMR systems, it may also be applied to redesigning other complex socio-technical systems, potentially maximizing the good for the largest number of stakeholders.
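The utilitarian selection in step (ii) can be sketched as a simple aggregate-satisfaction score: each candidate feature is scored by the total satisfaction gain it would bring across stakeholder groups, and the highest-scoring feature is chosen first. The feature names, stakeholder groups, and gain values below are hypothetical illustrations, not data from the paper.

```python
# Hypothetical sketch: pick the redesign feature that maximizes total
# stakeholder satisfaction gain. All names and numbers are illustrative.
gains = {
    "structured_notes": {"physicians": 3, "nurses": 2, "clerks": 0},
    "auto_billing":     {"physicians": 1, "nurses": 0, "clerks": 4},
    "mobile_access":    {"physicians": 2, "nurses": 3, "clerks": 1},
}

# Total gain per feature, summed over all stakeholder groups.
totals = {feature: sum(g.values()) for feature, g in gains.items()}

# The utilitarian choice: the feature with the largest aggregate gain.
best_feature = max(totals, key=totals.get)
```

In practice the gain values would come from stakeholder elicitation (e.g. the workshops described above) rather than being fixed constants.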
During classroom-style lectures, some students find it difficult to ask questions, even though asking questions is considered to improve lecture quality and students’ understanding. We propose a system called the Robot-Assisted Questioning System (RAQS) that can promote teacher-student communication in lectures. It allows students to post questions and opinions on an online messaging interface. The messages are sent to a robot, which poses them to the teacher, with or without a voting procedure. In this paper, we report a case study of a field experiment in a lecture assisted by RAQS. Students found RAQS efficient and useful for improving their communication with the teacher. For practical use, the results suggest a future improvement: adding a function to control the number of robot utterances so as to avoid interference with the teaching process.
This study analyzes employees’ recognition of job resources, focusing on the gap between the recognition of employees in managerial positions and that of employees in non-managerial positions. Employees in managerial positions have different job roles from those in non-managerial positions, and this difference in roles influences employees’ recognition of their work. The recognition gap harms the work output of their group; however, the detailed points of the gap remain unclear. The purpose of the analyses in this study is to clarify these detailed points. A questionnaire was designed to survey employees’ recognition of daily work, and employees answered it once a day at the end of working hours. The results of the data analysis show that the recognition gap related to “personal relationships” can be an important managerial issue. Employees in managerial positions should understand this gap and maintain a relationship of trust with each employee in a non-managerial position. We believe that this result provides base knowledge for creating new HR solutions.
This paper presents a UX process, the "Three-Step Sustainable Model", for making social robots sustainable in society. “Sustainable social robots” means that the social robots have roles in society and that the resources needed for the robots to play those roles are maintained in society. The UX process addresses human experiences of interaction with social robots. For social robots to gain positions in society, they need not only functions and performance but also well grand-designed social environments. To investigate a method of grand design, this paper analyzes three fieldwork studies using social robots. As a result, the "Three-Step Sustainable Model" is proposed. This model is a hypothesis at present, but it is useful for making social robots sustainable in society.
In face-to-face verbal communication, a listener’s movements, such as nodding and body motions, are interactively synchronized with the speaker’s speech. We have developed a group communication system that uses multiple “InterActors”, which generate communicative motions such as nodding based on the rhythm of utterance, like audiences in a VR (Virtual Reality) classroom. The sound environment has a psychological effect on the classroom space. For example, previous studies have shown that the acoustic environment of a classroom affects students’ learning efficiency. It may be easier to speak in a noisy environment, such as during a classroom break, than in a quiet space. In this paper, we develop a speech support system based on a noisy environment generated by arranging a listener InterActor and multiple non-participant characters that are not involved in the conversation, and we evaluate the effects of the InterActor’s nodding and back-channel feedback in the noisy environment. The results show that the type of noise gives users a different feel, which suggests individual differences in the perception of the appropriate volume for each type of noise. We therefore analyzed and evaluated the effect of the type of noise and the presence or absence of non-participants on speech, changing the volume of the noise and letting each user select an appropriate volume. As a result, we verified the effect of the noise environment on speech.
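Rhythm-based generation of listener motions can be sketched with a minimal rule: trigger a nod when a burst of voice activity ends (a pause follows speech). The actual InterActor model is more elaborate; this rule and its threshold are illustrative assumptions, not the system's algorithm.

```python
# Minimal sketch of utterance-rhythm-driven nodding: given binary
# per-frame voice activity, nod at the frame where a sufficiently long
# speech burst ends. The min_burst threshold is an illustrative assumption.
def nod_frames(voice_on, min_burst=5):
    """Return frame indices where a speech burst of at least
    min_burst frames has just ended."""
    nods, run = [], 0
    for i, v in enumerate(voice_on):
        if v:
            run += 1          # extend the current speech burst
        else:
            if run >= min_burst:
                nods.append(i)  # burst just ended -> nod here
            run = 0
    return nods

# 8-frame burst, pause, short 3-frame burst (ignored), pause, 6-frame burst.
voice = [1] * 8 + [0] * 4 + [1] * 3 + [0] * 2 + [1] * 6 + [0] * 3
```

Here `nod_frames(voice)` yields nods at frames 8 and 23; the 3-frame burst is too short to trigger one.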
In this study, the authors aimed to reveal the influence of group size and participants’ personality on how easy participants find it to propose their opinions, and to reveal the difference between onsite and online discussion situations. The authors conducted a questionnaire survey of 76 university students. The analysis found the following: 1) Students find it more difficult to propose their opinions as group size increases, under both situations. 2) Under the onsite condition, any participant will find it easy to propose his/her opinion when the group size is three or four, while agreeableness and conscientiousness affect a participant’s perceived difficulty in proposing an opinion if the group size is five or larger. 3) On the other hand, under the online condition, agreeableness and neuroticism affect a participant’s perceived difficulty in proposing an opinion at any group size.
We aim to develop stocking and disposal robots that do not change the impression of products, and we address the new problem of estimating products’ poses using appearance-free markers. To solve this problem, we define an appearance-free marker, based on psychological findings, as a marker that humans cannot find within 3 seconds. However, it is difficult for robots to find appearance-free markers. Therefore, we develop a new pose estimation method in which the robot’s camera approaches the products based on rough poses estimated by deep learning and then uses the appearance-free markers. Impression evaluations with 100 participants showed that the size of an appearance-free marker is 5 mm. Moreover, we evaluated our method using our arm robot, PA10. We implemented our method on PA10, which made its camera approach the products and estimated the products’ poses as accurately as the stocking and disposal robot that won the world competition using 45 mm markers. Experimental results showed that PA10 could estimate products’ poses with position errors of 11.9 mm and orientation errors of 4.23 degrees, and could stock and dispose of products.
This paper focuses on impression-based fabrication, a framework of personal fabrication that automatically generates 3D printing data for multiple designs suited to a user’s desired impression, and develops a design system for picture frames as a case study. The framework consists of a design generation unit that creates design candidates and an impression evaluation model that assesses the candidates in terms of their appropriateness to the desired impression. In developing the system, an impression evaluation experiment was first conducted to clarify the relationship between various picture frames and their impressions. By training neural networks on the experimental results, a set of impression evaluation models was implemented. The design generation unit is built on a genetic algorithm in which the implemented model set is used as the evaluation function. An experiment evaluating the designs generated by the system demonstrates the validity of the framework in terms of the suitability and diversity of the designs.
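The coupling of a genetic algorithm with a learned evaluation function can be sketched as follows. The gene encoding, GA parameters, and stand-in scoring function below are assumptions for illustration; in the paper, the evaluation function is a set of trained neural networks rather than a closed-form score.

```python
import random

def impression_score(design, target):
    # Stand-in for the trained impression model: designs whose parameter
    # vector is closer to the target impression score higher.
    return -sum((d - t) ** 2 for d, t in zip(design, target))

def evolve(target, pop_size=30, genes=4, generations=50, seed=0):
    """Evolve design parameter vectors toward the target impression."""
    rng = random.Random(seed)
    pop = [[rng.uniform(0, 1) for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the better-scoring half of the population.
        pop.sort(key=lambda d: impression_score(d, target), reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, genes)            # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(genes)                 # point mutation
            child[i] = min(1.0, max(0.0, child[i] + rng.gauss(0, 0.1)))
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda d: impression_score(d, target))

best = evolve(target=[0.2, 0.8, 0.5, 0.1])
```

Replacing `impression_score` with a trained model turns the score into a learned fitness function, which is the essence of the framework described above.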
“Nudge theory,” an approach to influencing people’s decision making, channels people’s behavior by designing environmental cues. We have been studying driving agents that encourage drivers to drive safely and make better behavioral choices for gentle driving. We therefore attempted to apply a type of nudge called the default effect to driving agents. An example of the default effect occurs when people need to make decisions immediately: they are more likely to follow other people’s opinions instead of considering the optimal choice. In this paper, we discuss the research outline of NAMIDA0, a multi-party interaction driving agent, and its application based on Nudge theory.
Typical interaction research targets awake users operating devices in bright places. However, as computer technology becomes ubiquitous and available in every aspect of life, there are increasing opportunities to operate devices in a semi-awake state in dark places, such as a bedroom before going to sleep. In this study, we propose an interaction for non-awake users in a dark bedroom that does not disturb sleep. Focusing on the fact that rod photoreceptors work in the dark, we adopted a green ceiling projection to stimulate them. We designed, prototyped, and evaluated an interface for checking and controlling an air conditioner, lighting, an alarm, a security camera, a handwritten memo, and an entrance key lock.
In this study, sensing support wear consisting of stretchable, printable sensors on textiles was developed to monitor body movement in natural and more comfortable situations. The sensor showed stable, gradual variations in electrical resistance when stretched up to 100% strain, a higher stretchability than conventional printable sensors. To validate the new device, the joint angle during elbow extension and flexion was compared with a camera-based measurement system. The correlation coefficient between the waveforms of the two devices was r = 0.88±0.04. In addition, extension and flexion cycles could be counted by a peak-counting algorithm with a probability of 80.6±34.0%. On the other hand, the root mean squared errors (RMSE) were 21.2±5.7 and 45.4±17.4 degrees for slow and fast motion, respectively. Although the RMSE increased around maximum flexion, the device performed well within the target range of daily physical activities. In conclusion, the results indicate that the sensing support wear could be applied to the prediction and/or classification of physical movement.
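A peak-counting approach to cycle counting can be sketched as follows. The paper's exact algorithm is not specified, so the height threshold and minimum peak spacing below are illustrative assumptions, and the elbow-angle signal is synthetic.

```python
import numpy as np

def count_cycles(angle, min_height=60.0, min_gap=20):
    """Count flexion peaks: local maxima above min_height that are at
    least min_gap samples apart (thresholds are illustrative)."""
    peaks = []
    for i in range(1, len(angle) - 1):
        is_local_max = angle[i] >= angle[i - 1] and angle[i] > angle[i + 1]
        if is_local_max and angle[i] > min_height:
            if not peaks or i - peaks[-1] >= min_gap:
                peaks.append(i)
    return len(peaks)

# Synthetic elbow-angle signal: 5 flexion cycles between ~10 and ~120 deg.
t = np.linspace(0, 5, 500)
angle = 65 + 55 * np.sin(2 * np.pi * t - np.pi / 2)
n_cycles = count_cycles(angle)  # -> 5
```

On noisy real sensor data, the raw signal would typically be low-pass filtered before peak detection, which is likely where the reported miscounts arise.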
The aim of this paper is to establish a novel statistical method for characterizing sign language movements at multiple body parts simultaneously. The method we apply is multivariate functional principal component analysis (MFPCA), which can capture individual variation in sign language movements using not only palm movements but also movements of multiple parts such as the fingers, elbows, and shoulders. The method successfully captures the characteristic that sign language is composed of a combination of multiple consecutive actions. We apply MFPCA to quantify differences in performance variation between ten beginners and one master of sign language, measured at nineteen body parts. The MFPCA results quantify individual qualities of the sign language performances by means of multivariate functional principal component scores. At the same time, MFPCA reveals in which body parts the characteristic movements of an individual’s signing appear most strongly. Finally, we distinguish some words that tend to be difficult or easy to learn.
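The core idea of MFPCA can be sketched in a simplified discretized form: trajectories from several body parts are sampled on a common time grid, concatenated into one long vector per performance, and decomposed with PCA, so each component's loadings show which parts and time points drive the variation. Full MFPCA uses functional basis expansions; this is an illustrative approximation, and all data below are synthetic (3 parts and 11 subjects stand in for the 19 parts and 11 signers).

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_parts, n_times = 11, 3, 100   # e.g. 10 beginners + 1 master

# Synthetic movement curves: a shared template per body part plus
# individual variation around it.
t = np.linspace(0, 1, n_times)
template = np.stack([np.sin(2 * np.pi * (k + 1) * t) for k in range(n_parts)])
X = template[None] + 0.3 * rng.standard_normal((n_subjects, n_parts, n_times))

# Concatenate parts -> (subjects, parts * times), center, then SVD-based PCA.
Xc = X.reshape(n_subjects, -1)
Xc = Xc - Xc.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

scores = U * S                                # PC scores per performance
loadings = Vt.reshape(-1, n_parts, n_times)   # which part/time drives each PC
explained = S[0] ** 2 / np.sum(S ** 2)        # variance share of PC1
```

Inspecting `loadings[0]` part by part indicates where the first mode of variation appears most strongly, which mirrors how the paper localizes individual movement characteristics.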