We propose a formulation of the color scheme adjustment problem and a prototype system that solves it automatically, taking color vision deficiencies into account. Our work focuses on representations of information such as floor maps of public spaces and figures in books and papers. The color schemes of these representations have two aspects: art design and media. As art design, a color scheme should be appealing, reflecting the designer's sense of beauty or the theme of the content. As a medium, on the other hand, it should make the information easy to understand; in this second aspect, designers need to consider universal design. These two aspects make any color scheme adjustment difficult, and the optimal combination of colors is not easily determined: the original scheme created by a designer should not change drastically from the artistic viewpoint, and at the same time it must be understandable to everyone, including people with color vision deficiencies. To resolve this difficulty, we formulate color scheme adjustment as a fuzzy constraint satisfaction problem, a framework studied in the field of artificial intelligence. Our formulation employs an index of color conspicuity, which measures how easily noticeable a color is, in order to retain the impression of the original color scheme. To demonstrate the feasibility of the concept, we developed a prototype system in Java that automatically adjusts the colors given by a designer.
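The max–min structure of such a fuzzy constraint satisfaction problem can be illustrated with a small sketch. This is not the paper's Java implementation; the palette, the two fuzzy constraints (stay near the original color, keep colors mutually distinguishable), and all thresholds are illustrative assumptions.

```python
# Toy fuzzy CSP for color adjustment (illustrative assumptions throughout):
# pick a palette color for each element so that the MINIMUM satisfaction
# degree over all fuzzy constraints is maximized (the max-min criterion).
from itertools import product

PALETTE = {"red": (200, 40, 40), "orange": (230, 140, 30),
           "blue": (40, 80, 200), "teal": (30, 160, 150)}

def distance(c1, c2):
    return sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5

def sat_distinguishable(c1, c2, scale=200.0):
    # Fuzzy degree in [0, 1]: a larger color difference is easier to notice.
    return min(1.0, distance(c1, c2) / scale)

def sat_close_to_original(c, original, scale=300.0):
    # Fuzzy degree in [0, 1]: stay near the designer's original color.
    return max(0.0, 1.0 - distance(c, original) / scale)

def adjust(originals):
    # Exhaustive search over palette assignments; a real system would use a
    # CSP solver, but the objective is the same max-min degree.
    best, best_deg = None, -1.0
    for assign in product(list(PALETTE), repeat=len(originals)):
        colors = [PALETTE[n] for n in assign]
        degs = [sat_close_to_original(c, o) for c, o in zip(colors, originals)]
        degs += [sat_distinguishable(a, b)
                 for i, a in enumerate(colors) for b in colors[i + 1:]]
        if min(degs) > best_deg:
            best, best_deg = assign, min(degs)
    return best, best_deg
```

A conspicuity index like the one in the abstract would enter as another fuzzy constraint term alongside `sat_close_to_original`.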
In our society, there are many occasions on which we exchange opinions to reach a decision. Problems such as digressions from the subject, time wasted in circular discussions, conflicts of opinion, and reluctance to speak frankly because of interpersonal relationships may occur during such exchanges. To solve these problems, it is desirable to create an environment that supports a smooth exchange of opinions. Opinion exchange for decision making has a ``divergence phase'' and a ``convergence phase''. In the divergence phase, participants enumerate various choices related to the subject and consider their possibilities. In the convergence phase, on the other hand, participants examine the choices that arose in the divergence phase and argue toward a single conclusion. In this paper, we focus on the convergence phase and propose a system that uses RFID tags to support the process of narrowing down the listed choices. Specifically, the system supports the smooth progress of opinion exchanges by controlling the time, the speaking order, and the evaluation of choices, leading to a single conclusion that many participants can agree on.
In learning cognitive science, students must learn how to handle an actual production system running on a computer. We developed a web-based production system for education that can be used from anywhere, such as classrooms, offices, and homes. As a web-based application, the system has many advantages as a learning support system. It furnishes students with learning support information for if-clause matching to facilitate learning. We confirmed that our system works sufficiently well in a standard computer facility and that students learned important features of human cognitive processing by meta-monitoring their own thinking processes.
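The recognize–act cycle that students observe in such a system can be sketched in a few lines. This is a generic toy production system, not the paper's web-based one; the rule format and working-memory representation are assumptions.

```python
# Minimal production system (illustrative sketch): a rule fires when all of
# its if-clause conditions are present in working memory, and firing adds
# the rule's action to working memory. One rule fires per recognize-act cycle.
def run(rules, memory, max_cycles=10):
    """rules: list of (conditions, action); memory: set of facts."""
    for _ in range(max_cycles):
        fired = False
        for conditions, action in rules:
            if all(c in memory for c in conditions) and action not in memory:
                memory.add(action)   # act: assert the rule's conclusion
                fired = True
                break                # restart matching from the top
        if not fired:
            break                    # quiescence: no rule can fire
    return memory
```

If-clause matching, the step the system's learning support information explains, is the `all(c in memory ...)` test here.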
In this paper, we describe the evolution of OntoGear, which we have discussed in previous research, and its newly developed software tools. OntoGear is an engineering knowledge management software platform based on ontology engineering. Its previous version provided the most basic functionalities of our systematization framework for functional knowledge, i.e., describing a function decomposition tree and building a way knowledge base to share and reuse organized and generalized functional knowledge. Compared with the previous version, our new system realizes the physical process integration model (hereafter ppim), which allows engineers to describe the whole processes of any artifact over its product lifecycle in the form of function decomposition trees. The new OntoGear system adds two client tools and a server: a modeling tool for ppim, a viewer for ppim, and the OntoGear server. Furthermore, we introduce an application of the system to design support for SOFC (Solid Oxide Fuel Cell) systems; SOFC is a kind of fuel cell whose early practical realization is expected. The application contributes to clarifying the whole functional structures and the relationships among them throughout an SOFC system's lifecycle. Since one of the tools of the OntoGear software environment has recently been released as a software product, named OntoloGear SE (Standard Edition), by MetaMoJi Corporation, we also briefly report on its productization status.
Creative innovation is achieved by observing the features of objects. This paper terms this skill ``discovering viewpoints'' and proposes a framework for developing it. Our proposal implements cognitive models of analogical reasoning that have been developed in the field of cognitive science. In our framework, learners are presented with an example of graphic composition and carry out a task by modifying the example. The cognitive models of analogical reasoning compute the similarities between the presented example and the learner's work, and the computed similarities are taken as estimates of the learner's viewpoints in the task. Learners receive the outputs of these computations and reflect on their viewpoints. During the reflection, they look for hidden features in the graphics by manipulating three parameters of the similarity computation (the feature space, the abstractness, and the consistency). We developed a prototype system that implements this framework and conducted an experiment in which learners used the system. By analyzing the log files obtained in the experiment, we confirmed the workability of the system. Furthermore, by analyzing the learners' subjective evaluations, we confirmed the effects of the learning support on the development of discovering viewpoints.
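A learner-steerable similarity computation of the kind described above can be sketched as follows. This is a hypothetical simplification, not the paper's cognitive model: the feature encoding, the matching rule, and the way the abstractness parameter acts as a tolerance are all illustrative assumptions (the consistency parameter is omitted).

```python
# Illustrative sketch: example and learner's work encoded as feature dicts;
# the learner steers the comparison via the feature space (which features
# count) and an abstractness tolerance (how loosely numeric values match).
def similarity(example, work, feature_space, abstractness=0.0):
    """Fraction of selected features on which the two compositions agree."""
    matches = 0
    for f in feature_space:
        a, b = example.get(f), work.get(f)
        if a is None or b is None:
            continue
        if isinstance(a, (int, float)) and isinstance(b, (int, float)):
            # Numeric features match if they are within the tolerance.
            if abs(a - b) <= abstractness:
                matches += 1
        elif a == b:
            matches += 1
    return matches / len(feature_space)
```

Widening `abstractness` or shrinking `feature_space` raises the score, which is how manipulating the parameters can surface features the learner had not attended to.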
In this study, we bring haptic technology into an online negotiation system to improve the conveyance of nonverbal information. In this system, subjects can convey nonverbal information by changing a ball's size as well as through force feedback. We conducted two experiments and compared their results to verify the effect of the haptic interaction. The results implied that online negotiation involving haptic interaction can increase the sense of presence and is also helpful for expressing one's emotions, which play a major role in online negotiations.
This paper presents an automatic question generation method for a local councilor search system. Our purpose is to provide residents with information about local council activities in an easy-to-understand manner. The system creates a decision tree whose leaves correspond to local councilors, using local council minutes as the source, in order to clarify the differences in the councilors' activities. Moreover, the system generates questions for selecting the next branch at each node of the decision tree. We confirmed experimentally that these questions are appropriate for selecting branches in the decision tree.
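One way such branch-selecting questions could be produced is sketched below. This is an assumed simplification, not the paper's method: councilors are represented as sets of activity keywords, the split criterion is a simple yes/no balance rather than information gain, and the question template is invented.

```python
# Hypothetical sketch: choose the activity keyword whose yes/no split over
# the remaining councilors is most balanced, and phrase it as a question.
def best_question(councilors, attributes):
    """councilors: {name: set of activity keywords extracted from minutes}."""
    def imbalance(attr):
        yes = sum(1 for kws in councilors.values() if attr in kws)
        return abs(yes - (len(councilors) - yes))  # 0 = perfectly balanced
    attr = min(attributes, key=imbalance)
    return f"Are you interested in councilors active on '{attr}'?", attr
```

Applied recursively to each answer's subset of councilors, this yields a decision tree whose leaves are individual councilors, as in the abstract.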
We propose a data processing platform that can analyze large amounts of tree-structured data. The proposed platform stores tree-structured data in separate files, one per attribute, and uses the MapReduce framework for distributed computing. These methods reduce disk I/O load and avoid computationally intensive processing such as grouping or combining records. The early stage of data mining requires trial-and-error processes to find out how to analyze and utilize the data. Our platform speeds up the computations in these trial-and-error processes, such as appending new attributes and calculating statistics over attributes. Experimental results show that the proposed methods process large-scale tree-structured data efficiently and that our platform is comparable or superior to a traditional relational database system: with the proposed platform, 90 GB of data could be processed within 5 minutes on 6 benchmark tasks. We also describe a system architecture for the trial-and-error phase that integrates the proposed platform with a few web applications. The main contributions of this paper are: (1) a formulation of vertical partitioning for tree-structured data, (2) effective utilization of MapReduce, and (3) construction of a large-scale data mining system for the trial-and-error phase.
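The I/O benefit of the attribute-per-file layout can be seen in a minimal sketch. File names, record layout, and the flat (non-nested) records are assumptions for illustration; the point is only that a statistic over one attribute reads one small file instead of every full record.

```python
# Illustrative vertical partitioning: write each attribute's values to its
# own column file, so per-attribute statistics touch only that file.
import os

def partition(records, directory):
    """records: list of dicts; one '<attr>.col' file per attribute."""
    attrs = {k for r in records for k in r}
    for attr in attrs:
        with open(os.path.join(directory, f"{attr}.col"), "w") as f:
            for r in records:
                f.write(f"{r.get(attr, '')}\n")  # blank line = missing value

def mean_of(attr, directory):
    # Reads a single column file; other attributes incur no disk I/O.
    with open(os.path.join(directory, f"{attr}.col")) as f:
        vals = [float(line) for line in f if line.strip()]
    return sum(vals) / len(vals)
```

Appending a new attribute in a trial-and-error step likewise means writing one new column file, leaving the existing files untouched.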
With the growing public interest in product recalls as a social issue, improving product quality is becoming a very important concern for industry. We have developed and operated a design defect prevention system based on SSM (Stress-Strength Model), a method that structures design knowledge as chains of causes and their effects. A key issue in operating a structured knowledge system is how to enable busy designers to build and utilize the structured knowledge. This paper describes our know-how for addressing this issue in developing and operating design defect prevention systems that build and utilize SSM design knowledge. Our know-how consists of an organizational structure that enables busy designers to build structured knowledge, rules for building SSM knowledge from which practical design checklists can be generated, and a procedure for predicting the effects and causes of failures at a real design site. The design defect prevention system has been in operation since 2006 for circuit design and structural design in the development of air-conditioning equipment. The system achieved a 59 percent reduction in the number of design changes and pointed out an additional 12 percent of check items that manual design review could not. Furthermore, we confirmed an improvement in the designers' ability to prevent design defects through a competence-based questionnaire administered to ten designers.
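The cause-and-effect chains underlying such knowledge can be sketched as a directed graph, with effect prediction as a reachability walk. This is an assumed data structure, not the paper's SSM representation; the node names are invented examples.

```python
# Illustrative sketch: SSM-style knowledge as a directed cause -> effect
# graph; predicting the effects of a stress is a graph traversal.
def effects_of(cause, graph):
    """graph: {cause: [direct effects]}; returns all reachable effects."""
    seen, stack = set(), [cause]
    while stack:
        node = stack.pop()
        for eff in graph.get(node, []):
            if eff not in seen:
                seen.add(eff)
                stack.append(eff)
    return seen
```

Running the same walk over the reversed graph would enumerate candidate causes of an observed failure; flattening chains into (stress, failure mode) pairs is one way checklist items could be generated from such knowledge.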
We propose a computing platform for parallel machine learning. Learning from large-scale data has become common, and parallelization techniques are increasingly applied to machine learning algorithms to reduce computation time; the main problems of parallelization are implementation costs and computational overheads. Firstly, we formulate a MapReduce programming model specialized for parallel machine learning. It represents learning algorithms as iterations of two phases: applying the data to the machine learning model and updating the model parameters. This model can describe various kinds of machine learning algorithms, such as k-means clustering, the EM algorithm, and linear SVM, at an implementation cost comparable to that of the original MapReduce. Secondly, we propose a fast machine learning platform that reduces the processing overheads of the iterative procedures of machine learning. Machine learning algorithms repeatedly read the same training data in the data application phase; our platform keeps the training data in the local memory of each worker throughout the iterations, which accelerates data access. We evaluated the performance of our platform in three experiments. Our platform executes k-means clustering 2.85 to 118 times faster than the MapReduce approach, and shows a 9.51 times speedup with 40 processing cores relative to 8 cores. We also report the performance of Variational Bayes clustering and linear SVM implemented on our platform.
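The two-phase iteration pattern can be illustrated with k-means, the abstract's first example. This is a toy single-process sketch of the pattern, not the platform itself: the "map" phase applies the data to the model (nearest-centroid assignment) and the "reduce" phase updates the parameters (centroid recomputation), with the training data held in memory across iterations rather than re-read.

```python
# Toy k-means in the map/reduce iteration pattern (illustrative sketch).
def kmeans(points, centroids, iterations=10):
    for _ in range(iterations):
        # Data application ("map"): assign each point to its nearest centroid.
        buckets = {i: [] for i in range(len(centroids))}
        for p in points:
            i = min(range(len(centroids)),
                    key=lambda j: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[j])))
            buckets[i].append(p)
        # Parameter update ("reduce"): new centroid = mean of assigned points.
        centroids = [
            tuple(sum(c) / len(pts) for c in zip(*pts)) if pts else centroids[i]
            for i, pts in buckets.items()
        ]
    return centroids
```

On the platform described above, `points` stays pinned in each worker's local memory, so only the small `centroids` parameter set moves between iterations.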