This paper presents a model that enables robots to create a suitable criterion for decision-making by indirectly interacting with people in a group. Using this model, a robot learns a criterion suitable for the group, as a group member, through reinforcement learning. When people who have different personalities form a group, they adjust their individual criteria toward a common criterion for the group. The present study investigates whether the robot can form a suitable decision-making criterion in a group by learning from interactions. Participants and the robot answer easy quizzes with vague questions, without direct conversation. Our experiments reveal that a group consisting of the participants and the robot forms a common criterion in a limited scenario. However, further study is required to reveal a robot's social influence on humans.
We propose the concept of Virtual Kansei for robots and describe our attempts to construct a series of systems for generating virtual emotion, a subset of Virtual Kansei, in a robot. Here, Kansei is a Japanese word that encompasses affection, emotion, and related mental functions. Robots need human-like Kansei in order to become true partners of humans. The proposed virtual emotion consists of three modules: an emotion detector that detects the emotional state of the partner, an emotion generator that generates the robot's emotion from the partner's emotion, and an expression modulator that modifies robot motions to express emotion. The emotion detector takes facial images, voice sounds, and body motions as inputs and uses Bayesian networks to integrate the information from the three inputs. The emotion generator is a Petri net with a feedback structure that represents the dynamics of emotion transitions; a genetic algorithm is adopted for adjusting the arc weights of the Petri net. The expression modulator uses an emotion vector to mix basis motions corresponding to Ekman's six emotions. Experiments and results for the three modules are discussed.
The way a product is visually depicted makes mental simulation easier or harder, and the ease or difficulty of imagining using the product affects product evaluations such as purchase intentions. Although male consumers have been shown to perceive greater savings (a positive effect) when prices are presented in red, little is known about the impact of price color on mental simulation. This study examines the joint effect of red prices and visual depiction on mental simulation and product evaluation. The research revealed a discouraging effect of red prices on mental simulation and evaluations by right-handed women when the visual stimulus presented a mismatch between the product orientation and their dominant hand (e.g., a picture of a kettle with the handle on the left).
The authors present clinical desires to catch signals of human expression and then ask specialist members of the International Society of Affective Science and Engineering for opinions on communication support for severely disabled patients with neurodegenerative disorders. Two aims were considered. One aim was to detect patients' expressions as a trigger for a switch on an augmentative and alternative communication device, and to accumulate facial signals conveying demented patients' good or no-good expressions using a tool typically used for providing palliative care to non-cancer patients. The other aim was to determine whether the examined tool could be used for evaluation in normal subjects. The knowledge and skills required for communicating with severely disabled patients, especially those with amyotrophic lateral sclerosis (ALS), need to be shared among colleagues in the form of vocational education. In clinical fields, however, health-care professionals and undergraduates are too busy to set aside sufficient time to gain such knowledge and skills. With regard to the proposed tool, we raised the unusual question of whether our desires to catch signals of human expression are too idealistic and infeasible.
Affective education is a formal curriculum designed to help children better understand their feelings and respond to challenging situations, thereby transforming themselves and the world around them. Emotions affect the ability to learn at multiple levels (attention, memory, decision making, etc.). Though there have been advancements in learning content (rich multimedia-based lessons, etc.), proportionate advancements have not taken place in the affective-learning domain; for example, how can learning be adapted to the learner's current mood and situation, and can the adverse effects of emotions be mitigated? This problem is especially compounded for university students, each of whom has a flood of information to absorb and assimilate and is constantly under stress; furthermore, personalized, real-time, continual mentoring of students by teachers is not practical. We have developed a framework and prototype that adapts the learning content to the current mood of the student. We achieve this by capturing real-time gestures and facial expressions (based on the seven universal facial expressions of emotion: anger, contempt, disgust, fear, joy, sadness, and surprise) and adapting the content shown to mitigate the negative and amplify the positive impacts of emotion. The task (chess puzzles) used to validate the effectiveness of the method showed significant improvement in a sample of 80 students.
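As a minimal sketch of the adaptation idea described in this abstract (not the authors' implementation), the seven detected emotions can drive a simple difficulty-adjustment rule; the specific rules and the `adapt_content` name are illustrative assumptions:

```python
# Hypothetical mood-based content adaptation. The emotion labels are the
# seven universal expressions named in the abstract; the adaptation policy
# (lower difficulty on negative emotions, raise it on joy) is an assumption.

NEGATIVE = {"anger", "contempt", "disgust", "fear", "sadness"}

def adapt_content(emotion, difficulty):
    """Return the next task difficulty (e.g. a chess-puzzle level)
    given the currently detected emotion."""
    if emotion in NEGATIVE:
        return max(1, difficulty - 1)  # ease off to mitigate negative impact
    if emotion == "joy":
        return difficulty + 1          # amplify positive engagement
    return difficulty                  # surprise / neutral: keep the level
```

In a real system the emotion label would come from a facial-expression classifier running on the camera stream.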
In order to eliminate mismatches between the intentions of questioners and respondents on Question and Answer (Q&A) sites, nine impression factors for Japanese and English statements have been obtained experimentally. Factor scores are then estimated using the feature values of statements. So far, the possibility of finding Japanese respondents capable of giving appropriate answers to a newly posted question has been established; it has been shown that the distance and the number of appearances may help select users who can give appropriate answers to a question. In a similar fashion, this paper examines the possibility of detecting respondents who can appropriately answer a newly posted question on English Q&A sites. The analysis showed that, while several users repeatedly gave answer statements in Japanese, only a small number of respondents, who posted at most two answers each, were found for English.
In this paper, we introduce a recommendation method that directly uses users' preference patterns over items to evaluate similarity between users. Yamawaki et al. proposed a recommendation method based on comparing users' preference patterns instead of directly comparing users' rating scores. However, their method for extracting preference patterns has limited representational ability and is not applicable when a user has "cyclic" preference patterns. Our extraction method is based on partial pairwise comparison of items and can represent "cyclic" preference patterns. A tourist-spot recommender system was implemented as a prototype of the proposed approach and evaluated by experiments.
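The pairwise-comparison representation can be sketched as follows; this is an illustrative assumption about the data structure, not the paper's actual algorithm. Each user's preferences are stored per item pair, so a cyclic pattern (A over B, B over C, C over A) is representable, unlike a single global ranking:

```python
# Sketch: users' preferences as partial pairwise comparisons.
# Each dict maps an unordered item pair to the item the user prefers.

def pairwise_similarity(prefs_a, prefs_b):
    """Fraction of shared item pairs on which two users agree;
    used here as a hypothetical user-user similarity measure."""
    shared = set(prefs_a) & set(prefs_b)
    if not shared:
        return 0.0
    agree = sum(1 for pair in shared if prefs_a[pair] == prefs_b[pair])
    return agree / len(shared)

# A cyclic preference pattern: A > B, B > C, C > A.
u1 = {frozenset("AB"): "A", frozenset("BC"): "B", frozenset("AC"): "C"}
# A partial pattern (only two pairs compared).
u2 = {frozenset("AB"): "A", frozenset("BC"): "C"}
```

Note that `u1` cannot be expressed as any total order of A, B, C, which is the limitation of ranking-based extraction that the abstract points out.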
We present an Interactive Evolutionary Computation (IEC) system that applies user gaze information. Historically, IEC systems have faced the problem of heavy user evaluation loads. To ease this load, we apply user gaze information to the evaluation of candidate solutions: the IEC system obtains user evaluation information while the user simply views multiple candidate solutions. In this paper, we verify the effectiveness of the eye-tracking IEC system through evaluation experiments with real users. In the experiments, we use a normal IEC system as a comparison method, in which users evaluate candidate solutions manually on a 10-point scale. The experimental results show that the eye-tracking IEC method can generate solutions equivalent to those of the compared system.
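One simple way to turn gaze into evaluation scores, offered here as a sketch under the assumption that longer dwell time indicates preference (the paper's actual mapping is not specified), is to normalize per-candidate fixation durations into fitness values:

```python
# Hypothetical sketch: converting eye-tracker dwell times into IEC fitness.
# dwell_ms maps candidate IDs to total fixation duration in milliseconds.

def gaze_to_fitness(dwell_ms):
    """Normalize dwell times to [0, 1] fitness values, replacing the
    manual 10-point evaluation of a conventional IEC system."""
    if not dwell_ms:
        return {}
    longest = max(dwell_ms.values())
    if longest == 0:
        return {cid: 0.0 for cid in dwell_ms}
    return {cid: t / longest for cid, t in dwell_ms.items()}

fitness = gaze_to_fitness({"A": 1200, "B": 300, "C": 600})
```

These fitness values would then feed the usual selection step of the evolutionary loop.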
Switches using bio-signals are being researched to enable communication by physically disabled people, such as ALS patients. We have researched a bio-signal device based on ocular potential for daily-life support of ALS patients. We proposed the Doubled Constant to RMS (DCR) method, which dynamically calculates a threshold using the RMS to detect intentional actions. However, the DCR method has the problem that the threshold rises with signal fluctuation, so large electrooculography (EOG) changes caused by intentional actions can go undetected; a better detection method is needed. In recent years, classification by pattern recognition of intentional actions has been found to give high accuracy for EMG. Because a bio-signal switch needs a better detection method, we compared the DCR method with the k-nearest-neighbor (k-NN) method, one such pattern-recognition approach. As a result, the two methods were found to have different detection conditions. In the future, the DCR and k-NN methods will be combined to improve accuracy, and online analysis of EOG will be considered.
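The dynamic-threshold idea can be sketched as follows; the windowing and the doubling constant are assumptions for illustration, since the abstract does not give the exact parameters of the DCR method:

```python
import math

def rms(window):
    """Root mean square of a signal window."""
    return math.sqrt(sum(x * x for x in window) / len(window))

def dcr_detect(signal, win=4, k=2.0):
    """Sketch of a DCR-style detector: flag sample i as an intentional
    action when its magnitude exceeds k times the RMS of the preceding
    win samples. Illustrates the fluctuation problem too: a large spike
    entering the window inflates the threshold for following samples."""
    events = []
    for i in range(win, len(signal)):
        threshold = k * rms(signal[i - win:i])
        if abs(signal[i]) > threshold:
            events.append(i)
    return events
```

A k-NN alternative would instead classify feature vectors of each window against labeled examples of intentional and resting activity.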
To improve the safety of autonomous cars, their obstacle detection capability in bad weather must be substantially improved. Haze is a major factor that degrades outdoor images. Although various dehazing schemes have been proposed, a dehazing scheme designed specifically to improve obstacle detection capability has not been reported. Hence, we present a dehazing algorithm that enhances the safety of an autonomous car. The algorithm should work in real time, even on the edge computers typically installed as car electronics. Furthermore, it should work on grayscale images, as systems dependent on color images are often adversely affected by environmental color changes caused by factors such as a setting sun. We developed the algorithm based on the following three existing dehazing algorithms: dark channel prior, median dark channel prior, and a parameter tuning scheme for dark channel prior, extending these methods to operate only on grayscale images. In terms of object detection capability, structural similarity index measure, and peak signal-to-noise ratio, the empirical results showed that the proposed grayscale image-based algorithm is comparable to current cutting-edge methods and operates in real time.
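For reference, the classic dark channel prior can be adapted naively to grayscale as sketched below; this is not the paper's optimized algorithm, and the patch size, `omega`, and `t0` values are the commonly used defaults rather than the paper's tuned parameters:

```python
# Sketch: dark channel prior on a grayscale image (list of lists of floats
# in [0, 1]). In grayscale the "dark channel" reduces to a patch minimum.

def dark_channel(img, patch=3):
    """Per-pixel minimum over a patch x patch neighborhood."""
    h, w = len(img), len(img[0])
    r = patch // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = min(img[yy][xx]
                            for yy in range(max(0, y - r), min(h, y + r + 1))
                            for xx in range(max(0, x - r), min(w, x + r + 1)))
    return out

def dehaze(img, airlight=1.0, omega=0.95, t0=0.1, patch=3):
    """Recover scene radiance J from the haze model I = J*t + A*(1 - t),
    with transmission t estimated from the dark channel."""
    dc = dark_channel([[p / airlight for p in row] for row in img], patch)
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            t = max(1.0 - omega * dc[y][x], t0)
            out[y][x] = (img[y][x] - airlight) / t + airlight
    return out
```

A real-time implementation would replace the nested loops with vectorized min-filtering, which is where the median variant and parameter tuning mentioned in the abstract come in.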
We studied the MI-BCI (Motor Imagery Brain-Computer Interface). An MI-BCI is an interface that operates a computer using the changes in brain activity that appear when a user imagines moving a body part; for example, left-hand motor imagery can be assigned to a power ON/OFF command. A problem of MI-BCI is the small number of available commands: currently, MI-BCIs typically use four commands based on "left-hand", "right-hand", "legs", and "tongue" motor imagery. We therefore attempted to increase the number of MI-BCI commands by classifying eight kinds of motor-imagery brain activity: "no movement", "left-hand", "right-hand", "legs", "both-hands", "left-hand + legs", "right-hand + legs", and "both-hands + legs". Motor imagery involving multiple body parts ("both-hands", "left-hand + legs", "right-hand + legs", "both-hands + legs") is called multi-task imagery; it combines simultaneous motor imagery of the left hand, right hand, and legs. This makes it possible to increase the number of commands to 2N − 1, where N is the number of body parts. We used LDA to classify the motor imagery. The correct classification rate was 26.9%, showing that multi-task motor imagery can be classified and suggesting that the number of MI-BCI commands can be increased to 2N − 1.
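The 2^N − 1 command count comes from enumerating the non-empty combinations of the N body parts (the remaining eighth class, "no movement", corresponds to the empty combination). A short sketch makes the counting concrete:

```python
from itertools import combinations

def mi_commands(parts):
    """Enumerate the 2**N - 1 non-empty combinations of N body parts,
    each of which can serve as one MI-BCI command."""
    cmds = []
    for k in range(1, len(parts) + 1):
        for combo in combinations(parts, k):
            cmds.append(" + ".join(combo))
    return cmds

cmds = mi_commands(["left-hand", "right-hand", "legs"])
```

With N = 3 this yields the seven imagery commands used in the study; adding "no movement" gives the eight classified classes.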
A Brain-Computer Interface (BCI) is an interface that can control computers or machines without physical movement by reading the user's intent from specific electroencephalogram (EEG) patterns. Event-Related Desynchronization (ERD) and Event-Related Synchronization (ERS) are such patterns. ERD is a decrease of EEG power in the alpha (8-12 Hz) and beta (12-30 Hz) bands over the motor area related to a body part, caused by movement preparation, actual movement, or Motor Imagery (MI); conversely, ERS is an increase of EEG power in the same bands. There is much research on BCIs using ERD/ERS because they are useful for rehabilitation, as logical switch information in controllers, and so on. Our research aims to realize an interface for patients who cannot walk, or a VR walking machine, using a BCI; therefore, leg MI was used. A leg-related BCI comes closer to realization as the detection accuracy of leg MI is enhanced. To enhance detection accuracy, we focused on quantifying ERD/ERS. However, the current quantification is not suitable for online analysis because the analyst must set reference data by manually selecting data in which ERD/ERS appeared. Therefore, an ERD/ERS detection algorithm for online analysis is proposed in this research, using the alpha band (8-12 Hz).
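The standard ERD/ERS quantification, which is the offline baseline the proposed online algorithm would replace, expresses band power during a task relative to a reference interval; a minimal sketch (assuming the samples have already been band-pass filtered to 8-12 Hz):

```python
def band_power(samples):
    """Mean squared amplitude as a simple band-power estimate
    (assumes the signal is already filtered to the alpha band)."""
    return sum(x * x for x in samples) / len(samples)

def erd_percent(reference, task):
    """Classic ERD/ERS percentage relative to a reference interval:
    negative values indicate ERD (power decrease during motor imagery),
    positive values indicate ERS (power increase)."""
    r = band_power(reference)
    a = band_power(task)
    return (a - r) / r * 100.0
```

The dependence on a manually chosen `reference` interval is exactly the obstacle to online analysis that the abstract identifies.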
Patient progress tracking is important to a doctor in the treatment process. To have enough information about a patient and the treatment given, a doctor has to view a large amount of medical data about symptoms, test results, drugs, and their dosages over a period of time. With electronic medical records, it is convenient for the doctor to view and search for any information he or she needs, compared with paper medical records. It would be even better if all the related electronic medical records of each patient over time were visualized appropriately to support the doctor in patient progress tracking. Therefore, our EMRVisualization system is proposed as a web-based application on tablet computers for visualizing all the related medical data in an integrative manner. The system provides an interactive visualization with accurate data at different levels of detail, quick access, and convenience for a doctor tracking the progress of each patient over time. A demonstration with real data of gastroenterological Vietnamese patients in Thong Nhat Hospital, Ho Chi Minh City, Vietnam, showed that every interaction of a doctor can be accomplished in at most two steps.
We investigated comfort in nursing care. First, we examined the use of a "Vein Display" to observe variations in individuals' subcutaneous blood flow. We found that venipuncture site selection was significantly improved with the Vein Display, but we did not evaluate the difficulty students experience in performing venipuncture, as there are currently no scales that reflect their affective fluctuations. Second, we verified the comfort of MRI examinations, which involve limited body movement, by measuring patients' affective and physical distress in response to body positioning with various devices. Next, we aim to measure the physical ability and perception of elderly people at high risk of sarcopenia in coping with daily activities. Here we will determine parameters for predicting the risk of sarcopenia and identify factors that worsen it. We intend to use tools available from Affective Science to measure detectable emotional change.
NCCN guidelines recommend conducting QOL assessments, which score a patient's quality of life, in addition to using common assessment methods that simply diagnose the patient's medical condition. Despite the effectiveness of these methods in determining the QOL of cancer patients, their paper answer sheets have always had a fixed format. As a result, there has been almost no progress in digitizing the questionnaires to manage data effectively, and clinicians have had to invest an inordinate amount of time and effort into collecting EORTC assessment data in order to apply it to research. Accordingly, in this research, we used preliminary surveys to develop an application that calculates and manages data for the EORTC QLQ-C30, one of the most popular QOL assessment methods. We integrated functions that display acquired assessment data to patients as visual graphs for easy viewing and comparison, as well as a feature that allows the user to import data from an answer sheet by simply taking a picture of it. We also introduced a function to improve patients' motivation towards rehabilitation and encourage them to continue with it. As a result, we succeeded in developing an application that reduces the burden of data input and analytical work on the clinician, presents graphs that allow cancer patients to immediately understand their medical condition while showing consideration for their mental state, and encourages improvement in motivation for rehabilitation.
Detection of fatigue in the human face is crucial for medical and safety purposes. Although it is a simple task for a human observer, it is a very challenging problem for a computer, which is why various attempts have been made to detect fatigue automatically. In this paper, we determine the presence of fatigue in a face using computer vision, proposing a method for fatigue recognition that exploits facial landmarks. We used the OpenCV library for image processing and the dlib library for feature extraction. The method was tested on the extended Cohn-Kanade dataset and the Psychological Image Collection at Stirling (PICS) dataset, where it provided a satisfactory level of accuracy.
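As one illustration of a landmark-based fatigue cue (a common choice with dlib's 68-point model, though not necessarily the exact feature used in this paper), the eye aspect ratio (EAR) drops when the eye closes, and a sustained low EAR across frames signals drowsiness:

```python
import math

def dist(a, b):
    """Euclidean distance between two (x, y) landmark points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def eye_aspect_ratio(eye):
    """EAR from six eye landmarks p1..p6 (dlib 68-point ordering):
    (|p2-p6| + |p3-p5|) / (2 * |p1-p4|). Lower values mean a more
    closed eye; the coordinates here are synthetic for illustration."""
    p1, p2, p3, p4, p5, p6 = eye
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

# Synthetic "open eye" landmark coordinates (illustrative, not from dlib).
open_eye = [(0, 0), (1, 2), (2, 2), (3, 0), (2, -2), (1, -2)]
```

In practice the six points per eye would come from `dlib.shape_predictor` output on a face detected in an OpenCV frame.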
Diabetes diagnosis is important due to the death and complication consequences caused by the disease. It thus has attracted much research attention and effort in Artificial Intelligence to support human decisions. Our work proposes a kernel k-means-based predictive method and explores attribute selections for effective and robust diabetes diagnosis. This method uses homogeneous subclusters in the high dimensional kernelized feature space to compute the distance of a new instance to those subclusters and classify it accordingly. The PIMA and MIMIC data sets are respectively used for training and testing. Our experimental results could identify the best effective attribute groups and show that the proposed method outperforms existing ones for the task.
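The core computation, distance from a new instance to a subcluster mean in the kernelized feature space, can be expanded entirely in terms of kernel evaluations. The sketch below uses an RBF kernel and tiny synthetic subclusters as assumptions for illustration; it is not the paper's trained model:

```python
import math

def rbf(a, b, gamma=0.5):
    """RBF kernel between two points given as coordinate tuples."""
    return math.exp(-gamma * sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def dist_to_subcluster(x, cluster, kernel=rbf):
    """Squared distance from phi(x) to a subcluster mean in feature space:
    k(x,x) - (2/|C|) * sum_c k(x,c) + (1/|C|^2) * sum_{c,d} k(c,d)."""
    n = len(cluster)
    term1 = kernel(x, x)
    term2 = 2.0 / n * sum(kernel(x, c) for c in cluster)
    term3 = sum(kernel(c, d) for c in cluster for d in cluster) / n ** 2
    return term1 - term2 + term3

def classify(x, subclusters):
    """Assign x the label of the nearest subcluster (dict: label -> points)."""
    return min(subclusters, key=lambda lab: dist_to_subcluster(x, subclusters[lab]))
```

Training would first run kernel k-means within each diagnosis class to form the homogeneous subclusters; classification then reduces to the nearest-subcluster rule above.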
While knowledge discovery and n-D data visualization procedures are often efficient, loss of information, occlusion, and clutter remain challenges. General Line Coordinates (GLC) is a relatively new technique for dealing with these issues. GLC-Linear, one of the GLC methods, losslessly transforms n-D numerical data into visual representations as polylines. The method proposed in this paper uses these 2-D visual representations as input to Convolutional Neural Network (CNN) classifiers. The obtained classification accuracies are close to those of other machine learning algorithms. The main benefit of the method is the possibility of using the lossless visualization of n-dimensional data for interpretation and explanation of the discovered relationships, in addition to classical classification using statistical learning strategies.
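The polyline construction can be sketched as follows, under the assumption (typical of GLC-Linear descriptions) that each coordinate value becomes a segment of that length at a fixed per-dimension angle, with segments chained tip to tail; the angles here are illustrative:

```python
import math

def glc_l_polyline(point, angles):
    """Map an n-D point to a 2-D polyline: value x_i becomes a segment
    of length x_i at angles[i], chained onto the previous vertex.
    Given the angles, the n-D point is recoverable, so the mapping is
    lossless. Returns the list of polyline vertices."""
    x, y = 0.0, 0.0
    vertices = [(x, y)]
    for v, ang in zip(point, angles):
        x += v * math.cos(ang)
        y += v * math.sin(ang)
        vertices.append((x, y))
    return vertices
```

Rasterizing such polylines into images is what produces the CNN input described in the abstract.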
This paper presents a computational framework for identity (initially about the culprit in a crime scene) based on Barwise’s situation theory. Situations support information and can carry information about other situations. An utterance situation carries information about a described situation thanks to the constraints imposed by natural language. We are concerned with utterance situations in which identity judgments are made about the culprit in a crime scene, which is the corresponding described situation. The id-situation and crime scene along with various resource situations make up a case in the legal sense. We have developed OWL ontologies to provide concepts and principled vocabularies for encoding our scenarios in RDF, and we present an example of a SPARQL query of one of our encodings that spans situations. To follow how evidence supports hypotheses on the identity of the culprit in a crime scene, we use Dempster-Shafer theory. We tightly integrate it with our ontologies by having the representation of a case per our ontologies present a network containing situations and stitched together by objects; evidence "flows" along this network, diminishing and combining. We review the modifications of Dempster-Shafer theory required when one goes from a closed-world assumption to an open-world assumption. We review our plans regarding equational reasoning based on identities established in our id-cases, and we review the related issues regarding the meanings of URIs.
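The evidence combination at the heart of this framework is Dempster's rule; a minimal closed-world sketch (focal elements as frozensets of suspects, with hypothetical masses, not the paper's ontology-integrated machinery) looks like this:

```python
def combine(m1, m2):
    """Dempster's rule of combination under the closed-world assumption:
    m(A) = sum over B & C == A of m1(B) * m2(C), renormalized by 1 - K,
    where K is the mass assigned to conflicting (disjoint) pairs."""
    combined = {}
    conflict = 0.0
    for b, mb in m1.items():
        for c, mc in m2.items():
            inter = b & c
            if inter:
                combined[inter] = combined.get(inter, 0.0) + mb * mc
            else:
                conflict += mb * mc
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

# Hypothetical masses from two pieces of evidence about the culprit.
e1 = {frozenset({"alice"}): 0.6, frozenset({"alice", "bob"}): 0.4}
e2 = {frozenset({"alice"}): 0.5, frozenset({"alice", "bob"}): 0.5}
belief = combine(e1, e2)
```

The open-world variant discussed in the paper would relax the renormalization, allowing mass on "none of the known suspects".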
This manuscript presents an approach to creating a self-developing computational system with full awareness and understanding of reality through the perception, processing, memorization, and reproduction of languages, images, signals, sounds, feelings, and emotions. The system is intended for use in Kansei Engineering systems as a plug-in. Its functionality is realized by processes that handle data, information, knowledge, objects, models, and modules in the current situation. These processes control and organize themselves under the uncertainty of a changing environment using computational modeling of: a) Memory, b) Fuzzy Control, c) Fuzzy Inference, d) Decision Making, e) Knowledge Representation, f) Knowledge Generalization, g) Knowledge Explanation, h) Reasoning, i) Systems Thinking, j) Awareness, k) Cognition, l) Machine Learning, m) Computational Systemic Mind, and n) Intelligent User Interface. These processes are run and managed by the computational subsystems "Memory", "Brain", "Cognition", "Nervous", "Knowledge Discovery", "Cyber-physics", and "Communication", which together provide the functionality of the computational self-developing system of full awareness and understanding in Kansei Engineering systems.
In the field of cosmetic development, we have put our efforts mainly into consumer interviews and understanding the physical properties of products, in order to repeatedly gauge consumers' purchase intent. However, we have recognized that while this methodology helps confirm product benefits and screen formulas, it does not necessarily bring product innovation. Delivering innovation in cosmetic development requires a more in-depth understanding of the in-use performance of products at the molecular level, or from a chemical point of view. This paper proposes a new approach, "Molecular Affective Engineering", which measures in-use product performance from macroscopic to microscopic levels across different orders of magnitude to sharply identify the key elements that impact the affective mind.
In this study, we investigated the relationship between the perceived thickness of various fabric types and their physical thickness under different compressive loads, to clarify the effective range of fabric thickness for human perception. We selected eight fabric types and prepared 8 or 9 fabrics of different thicknesses within each type. We measured the thickness of the samples using a KES-FB3 compression tester under compressive loads of 0.5, 10, 20, 30, 40, and 50 gf/cm2. Sensory evaluation of the thickness of each fabric was carried out with 60 subjects using the semantic differential (SD) method. The fabric types were divided into two groups: those with a significant correlation between physical thickness and sensory evaluation score under all pressures, and those with significant correlations only under the compressive loads of 0.5 and 50 gf/cm2.
The purpose of this study is to clarify the psychological structure of the comfort sensation of underwear made of yarn blended with polypropylene (PP). In the wearing experiment, the fabric of one of the PP samples, which had poor surface properties, was also rated as rough in the sensory evaluation; its comfort score was lowered as well, and it produced an unpleasant sensation after exercise. In addition, multiple regression analysis of the differences in the impression evaluation of each sample showed that, for the PP blended yarn samples, sensations caused by moisture-transport and air-flow characteristics affected the comfort sensation. For all samples, skin texture was found to be important.
This study investigated the effect of the spinning method of cotton yarn on the change in tactile feeling (hand) of knitted fabrics with laundering. Two cotton knitted fabrics of rib stitch were made with two yarns of the same yarn count produced by the Siro- and ring-spinning methods. The hand of samples laundered 0 and 20 times was compared using Scheffe's paired comparison, and the bending and surface properties of the samples were measured using the Kawabata Evaluation System. It was found that the hand of the knitted fabric made of ring-spun yarn was changed greatly by washing compared with that of the Siro-spun yarn; in particular, there was a significant difference between the spinning methods in "smoothness". It was also found that B and 2HB of both C and E changed with laundering, whereas MIU and SMD of C did not change although those of E did. However, these changes were not directly correlated with the changes in hand. Therefore, other mechanical properties appear necessary to predict the smooth feeling.
In this study, we propose a method of Kansei analysis of aspects such as preference or impression using a large amount of automatically processed data. We focused on representative colors, a small set of colors that stand for the many colors in an image, because colors affect people's feelings and impressions and are easy to quantify and express as features. Our method consists of automatic data acquisition by web crawling and automatic feature extraction through an image-processing algorithm. Data collected from restaurant websites were analyzed to verify the following hypothesis: expensive restaurants have more achromatic photographs on their websites. As a result, the hypothesis was confirmed for several restaurant genres.
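One simple way to score how achromatic a photograph is, offered as an illustrative sketch rather than the paper's representative-color algorithm, is the fraction of pixels with near-zero HSV-style saturation:

```python
def saturation(rgb):
    """HSV-style saturation of one pixel given as an (r, g, b) tuple
    with channel values in 0..255."""
    mx, mn = max(rgb), min(rgb)
    return 0.0 if mx == 0 else (mx - mn) / mx

def achromatic_ratio(pixels, sat_threshold=0.15):
    """Fraction of pixels that are near-gray; a high ratio marks the
    photo as predominantly achromatic. The threshold is an assumption."""
    near_gray = sum(1 for p in pixels if saturation(p) < sat_threshold)
    return near_gray / len(pixels)
```

Averaging this ratio over the crawled photos of each restaurant would give one feature with which to compare price ranges.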
A molecular atlas of the human lung is important for informing basic mechanisms and treatments for lung diseases, and imaging data provide the foundation upon which to build the lung atlas. For analyzing immunofluorescent confocal images, annotations describing precise anatomical structures are necessary; however, it is hard to annotate the growing number of images manually. Thus, this study aims to develop an automatic annotation system combining automatic region detection and automatic structure classification modules. As an important first step, we developed an efficient annotation data collection tool; the collected data will be used to develop the automatic annotation system for the lung atlas. We describe the details of our annotation tool, which is web based and includes user control.
In this paper, we propose a tentative theory explaining how human beings acquired primitive Kansei as an essential intellectual ability for surviving in severe conditions, in which possessing social ability became an evolutionary selective pressure. In the remote past, perhaps the Paleolithic age or later, Kansei ability was part of the social ability that was indispensable for surviving in human society. Kansei ability enabled human beings to perceive, understand, estimate, and manipulate the influence of tangible and intangible entities on the mental states of human beings; in this paper, this social ability is referred to as primitive Kansei. In ancient human society it was very important for securing a socially advantageous position, acquiring more spouses, and leaving descendants. The role of Kansei seems to have changed significantly since those times.
The purpose of this study is to clarify the effects of different characteristics of air flow on people's psychological evaluations. We gave male subjects various air-flow stimuli, generated by an Air Flow Generating Device (AFGD) developed in a prior study and by a Dyson Air Multiplier AM01, directed at the back of the hand, the palm, the cheek, and the back of the neck, in a climate chamber set to a temperature of 20 degrees and a humidity of 45%. Results showed that higher-speed air flow brought about cool and negative sensations, and that the air flow created by the AM01 made participants feel colder and more comfortable than that made by the AFGD.
The authors have been conducting research on value-creating communication, a process in which people embody and clarify their own values and form new values through communication. We previously observed and modeled a consensus-building process with few choices as an example of value-creating communication. In this study, we therefore observed and modeled the consensus-building process in the case of multiple choices and compared the processes based on the number of choices. With multiple choices, one group created a new conception through communication, while another group reached consensus based on a particular viewpoint. The conception appears to be important when few viewpoints emerge through communication or when the relative importance of viewpoints is unclear. However, the comparison suggested that the consensus-building process can be captured by the same structure regardless of the number of choices.
Many supporting tools for vitalizing brainstorming sessions have been proposed; some of them show the participants hints for discussion, e.g., keywords and images. The author's research group has also proposed a supporting system for vitalizing brainstorming sessions, in which images related to the ideas thrown into the session are shown to the participants as hints. However, the effects of this type of hint had not yet been investigated, so experiments were conducted to examine them. In the experiments, three types of hints were compared: (1) words relevant to the words used in the ideas presented in the discussion, (2) images retrieved using the words in the presented ideas as keywords, and (3) images retrieved using words relevant to the words in the presented ideas as keywords. As a result, it became clear that the third type of hint can increase the number of utterances and the diversity of subjects in discussions.
The aim of this research is to develop a search system for comics based on the personalities of the characters appearing in them. For this purpose, this paper describes the classification of characters using egograms, which are used to classify personalities. In the proposed method, texts that express a comic book character's personality are acquired from web resources, and semantic vectors are assigned based on these texts using egograms. The resulting egogram pattern is used to estimate typical personality properties. Our experiment reveals that the accuracy of this classification method is 55.0%.
Previous studies suggested that spoilers might increase the enjoyment of novels; however, the effects of spoilers have not been sufficiently clarified. The objective of our work is to clarify the effect of comic spoilers and to apply that knowledge in applications. In our past work, we constructed a spoiler dataset and investigated the effect of spoilers by changing the timing at which readers encounter them, but we could not clarify the characteristics of spoilers. In this work, we clarified that spoilers reduce readers' interest in continuing to read the comics, and we analyzed the characteristics of spoilers using the dataset. We then considered how to construct a comic spoiler dataset and investigated how to detect spoiling pages automatically using image processing, character detection, and so on.
A role word is an expression that lets people imagine a speaker’s character. It is difficult to learn role words from the conversation examples in foreign-language textbooks because such textbooks contain few examples of them. Comic scenes, in contrast, provide examples of conversation in the characters’ speech lines. We have proposed a support system for learning Japanese role words with comic scenes. The system classifies comic scenes according to the role words they contain, and users can learn how to use role words by viewing the scenes and their lines. We previously verified the effectiveness of the system for Chinese speakers, whose language culture is similar to that of Japanese speakers (both languages use Kanji in writing), and those good results might have been caused by this similarity. In this paper, we report the results of an experiment with English speakers and compare them with those of the Chinese speakers. We verified that the system can support English speakers in learning Japanese role words. We also discuss improvements to the system based on comments from the English speakers, who said that the system should enable users to learn role words while enjoying comic stories.
The purpose of this research is to encourage gift-givers to consider gift-receivers from various perspectives during the gift selection process. People often purchase gifts for their loved ones, such as partners and friends. Various e-commerce websites recommend gifts for loved ones; however, selecting an appropriate gift remains difficult because these recommendations do not take the gift-receiver’s hobbies and preferences into account. Entering all of the receiver’s hobbies and preferences into the system in advance would solve this problem, but doing so takes time and effort. Instead, we propose a system that encourages givers to think about the gift-receiver themselves: the system presents questions about the gift-receiver and thereby facilitates deeper consideration during gift selection. Using the proposed system, we conducted an experiment to observe participants’ gift selection processes, and we confirmed that the system increases people’s consideration of gift-receivers.
Cases of accidental ingestion of pharmaceutical products by children are increasing and have become a serious social concern. In this study, we investigated a new type of soft-plastic, child-resistant pill container called an ESOP (easy seal open package). We first conducted a container-opening experiment with children aged 12 to 36 months and identified the container’s relative strengths and a point for improvement. Next, using the improved ESOP and an existing PTP (press-through package), we performed an impression-evaluation experiment on the perceived safety of each drug container with guardians of 24- to 36-month-old children. By investigating the guardians’ satisfaction levels and the degree of importance they ascribed to each item, we determined their impressions of the safety of the two containers, and we conclude that it is necessary to improve the items for which ascribed importance was high but satisfaction was low.
Color design is a crucial component in creating an appealing media presentation. Designers prepare many color themes in their design work, but obtaining suitable colors is not easy for non-designers. In this paper, we propose an affective color theme generation approach for exploring three-color themes, with banner design as an initial application. First, we create a color form with overlapping blocks as evaluation samples of color themes and conduct an evaluation experiment to gather affective data on color themes created by designers. Second, we analyze the relationships between color features and impressions and build an estimation model. We then generate new color themes corresponding to specified impressions based on this affective color model. A recommender system is developed to create banners in different colors corresponding to specified impressions. Moreover, we implement a color-unification mechanism for the input image and a text-color legibility checker in the design system.
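The estimation-then-generation idea might be sketched as below. This is a minimal sketch under strong assumptions: the linear least-squares model, the crude color features, and all data are invented for illustration and are not the paper's actual affective color model.

```python
import numpy as np

def theme_features(theme):
    """Flatten a three-color RGB theme (channel values in 0..1) into a
    feature row: the nine channel values, mean brightness, and a crude
    colorfulness proxy (channel spread)."""
    flat = [c for rgb in theme for c in rgb]
    brightness = sum(flat) / 9.0
    spread = max(flat) - min(flat)
    return flat + [brightness, spread]

def fit_impression_model(themes, ratings):
    """Least-squares fit of: impression score ~ w . features(theme),
    from designer themes and their rated impression scores."""
    X = np.array([theme_features(t) for t in themes])
    y = np.array(ratings)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def recommend(candidates, w, top_k=1):
    """Score candidate themes for the target impression and return
    the top_k highest-scoring themes."""
    scored = sorted(candidates,
                    key=lambda t: float(np.dot(theme_features(t), w)),
                    reverse=True)
    return scored[:top_k]
```

Given ratings that grow with brightness, such a model would rank a light candidate theme above a dark one; the real system would use richer color features and impression words gathered in the evaluation experiment.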
This paper details the findings of a study conducted to explore how data visualizations can be used to elicit a targeted emotional response. Ten original graphs were created and then manipulated in some manner to elicit a different emotional response, giving the study twenty graphs in total. The study was conducted as an electronic survey, and the characteristics examined were color choice, font type, and scale. Faculty and students of Eastern Washington University were asked to take part in the survey. We found that it was possible to affect the emotional response to a graph by changing its colors, whereas changing the font and scale of the graphs did not yield significant results.
Red is not a color that stands out brightly for dichromatic individuals, whose sense of color differs from that of the majority. Nevertheless, they know that the color of passion is red and that the color of sadness is bluish; these understandings, however, are learned a posteriori. If so, there must be a discrepancy between the impression evoked by the letters of a color name and the impression evoked by the color itself, and impressions can also be expected to differ between the majority and the minority. This study conducted an impression survey using the semantic differential (SD) method with both letter and color stimuli, in order to intuitively understand the color world as seen by individuals with a different sense of color.
To measure the difference between responses to pictorial images and to simple data displayed in a graph or chart, a survey with varying images was developed with the intent of eliciting some type of reaction. The images were either pictorial or graphical in nature and covered multiple themes, ranging from politically charged issues to sedate ones such as puppies running in a park. For each subject there was both a picture and a graph, and any of the images had the potential to evoke a response. Each image was shown on a separate page together with a slider to measure the strength of the response and a hidden timer, and the order of the pages was completely randomized. Participants’ reaction strengths and reaction times for the various image pairs were then compared.