Several load balancing techniques for IP routing schemes have appeared in the literature. However, they rely on an optimization process to compute the optimal paths for a given traffic demand, and therefore require a mechanism to measure traffic demand and share the measurements among all routers in order to follow traffic dynamics. This naturally results in communication overhead and a loss of responsiveness to traffic dynamics. In this paper, we investigate a load balancing mechanism that takes a different approach, i.e., one based on IP fast reroute mechanisms. The main idea is simply to forward packets onto the detour paths supplied by IP fast reroute mechanisms only when those packets encounter congestion. This strategy enables us to use vacant resources adaptively, as soon as they are needed, to avoid and dissolve congestion. Through traffic simulation, we show that IP fast reroute based load balancing mechanisms improve the capacity of networks.
When a large-scale disaster strikes, the communications infrastructure is usually unavailable. However, accurate and timely information about the disaster area is important because first responders rely on it to assess the situation in the affected area and to provide effective and immediate assistance. In this paper, a method is proposed for collecting data from an area of interest (AoI) within the disaster zone that uses people's mobile phones as sensing nodes. To achieve maximum AoI coverage while minimizing delay, we propose a disruption tolerant network (DTN)-based data aggregation method. In this method, mobile phone users create messages containing disaster-related information, and messages are merged together with their respective coverage areas, resulting in a new message with the merged coverage. Merging (aggregating) multiple messages reduces message size and minimizes the overall message collection delay. However, simply merging the messages can result in duplicate counting; to prevent this, a Bloom filter is constructed for each message. Also, to further reduce the message delivery time, the expected time for a node to reach its destination is introduced as a routing metric. In computer simulations using a real geographical map, the proposed method achieved a 9.7% decrease in information collection delay, confirming that it collects disaster information covering the AoI with a smaller delay and a smaller number of total exchanged messages than epidemic routing.
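The duplicate-counting problem described above can be illustrated with a small sketch. The class below is a hypothetical, minimal Bloom filter (its size, hash count, and hashing scheme are illustrative, not the paper's): merging two messages' filters is a bitwise OR, so a user reported by both originators occupies the same bits in the aggregate and cannot be counted twice.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: m bits, k hash functions (illustrative sizes)."""
    def __init__(self, m=256, k=3):
        self.m, self.k = m, k
        self.bits = 0  # a Python int used as a bit array

    def _positions(self, item):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:4], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def __contains__(self, item):
        return all((self.bits >> pos) & 1 for pos in self._positions(item))

    def merge(self, other):
        """Aggregate two coverage messages: the union is a bitwise OR."""
        merged = BloomFilter(self.m, self.k)
        merged.bits = self.bits | other.bits
        return merged

# Two DTN nodes report overlapping sets of covered users.
a, b = BloomFilter(), BloomFilter()
for user in ("u1", "u2", "u3"):
    a.add(user)
for user in ("u3", "u4"):
    b.add(user)

merged = a.merge(b)
# "u3" was reported by both nodes but is represented once in the union,
# so it is not double-counted after aggregation.
print(all(u in merged for u in ("u1", "u2", "u3", "u4")))  # True
```

Bloom filters have no false negatives, so every reported user remains queryable in the merged filter while the filter's fixed size keeps the aggregated message small.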
The IP Multimedia Subsystem (IMS) has been constantly evolving to meet the tremendous rise in popularity of mobile services and Internet applications. Since IMS uses the Session Initiation Protocol as its main signaling protocol, it inherits numerous known security vulnerabilities. One of the most severe is the Denial of Service attack. To address this problem, we introduce an anomaly-based detection system that uses the Tanimoto distance to identify deviations in the traffic. A modified moving average is applied to compute an adaptive threshold. To overcome a drawback of the adaptive threshold method, we present a momentum oscillation indicator to detect gradually increasing attacks. In general, anomaly-based detection systems trigger many alarms, most of which are false positives that degrade detection quality. Therefore, we also present a false positive reduction method based on a trust model, in which a reliable trust value is calculated from the call activities and behavior of each user. The system's performance is evaluated on a comprehensive synthetic dataset containing various malicious traffic patterns. The experimental results show that the system accurately identifies attacks and has the flexibility to deal with many types of attack patterns at a low false alarm rate.
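As a rough sketch of the detection idea (with illustrative parameters; the paper's modified moving average and momentum oscillation indicator are not reproduced here), the Tanimoto distance between a baseline traffic feature vector and each observation window can be compared against an adaptive threshold built from an exponentially weighted mean and deviation of past distances:

```python
def tanimoto_distance(a, b):
    """1 - Tanimoto coefficient of two traffic feature vectors
    (e.g., per-interval counts of different SIP message types)."""
    dot = sum(x * y for x, y in zip(a, b))
    denom = sum(x * x for x in a) + sum(y * y for y in b) - dot
    return 1.0 - (dot / denom if denom else 1.0)

def flag_anomalies(distances, alpha=0.3, k=3.0, margin=0.05):
    """Adaptive threshold: exponentially weighted mean and deviation of
    past distances plus a fixed margin (all parameters illustrative)."""
    mean, dev = distances[0], 0.0
    flags = [False]
    for d in distances[1:]:
        flags.append(d > mean + k * dev + margin)
        mean = (1 - alpha) * mean + alpha * d
        dev = (1 - alpha) * dev + alpha * abs(d - mean)
    return flags

baseline = [10, 5, 5]  # e.g., INVITE/REGISTER/BYE counts per interval
windows = [[11, 5, 4], [9, 6, 5], [10, 5, 6], [10, 6, 4], [100, 2, 1]]
ds = [tanimoto_distance(baseline, w) for w in windows]
print(flag_anomalies(ds))  # only the flooding-like last window is flagged
```

Normal windows stay close to the baseline (distance near zero), so the threshold stays tight; the flooding-like window produces a large distance that exceeds it.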
In this paper, a processor design method using the Derivative ASIP approach is introduced. The concept of a Derivative ASIP is to develop an ASIP architecture based on an existing general-purpose processor (GPP) architecture in order to diminish the design effort and shorten the design time. In this approach, the base processor architecture can be enhanced with co-processor/instruction extensions quickly, since all the required development tools are already available for the base processor. To support the Derivative ASIP approach, a new tool called the Co-processor/Instruction Extension Generator Tool is developed. This tool generates complementary files suitable for updating the base processor architecture with co-processor/instruction extensions. From these complementary files, a complete set of software development tools (compiler, assembler, disassembler, linker, debugger, and simulator) as well as a hardware implementation of the modified ASIP architecture can be generated automatically. With the proposed tool, a new co-processor/instruction extension can be designed and added to the base architecture more easily, which reduces the architecture exploration time in the design stage. A derivative ARM ASIP architecture enhanced with instruction extensions for the AES algorithm and a co-processor for a fingerprint navigation algorithm is presented to demonstrate the effectiveness of our approach.
Secret data in embedded devices can be revealed by injecting computational faults using fault analysis attacks. Fault analysis research on cryptographic implementations has so far first assumed a certain fault model and then discussed key recovery methods under some assumptions. We note that a new remote fault injection method has emerged that is threatening in practice. Because of its limited access to the cryptographic device, however, remote fault injection can only inject uncertain faults. In this setting, this paper gives a general strategy for a remote-fault attack on the AES block cipher using a data set of faulty ciphertexts generated by uncertain faults. Our method effectively utilizes all the information from various kinds of faults, and is thus more realistic than previous work. As a result, we show that it can provide a decent success probability of key identification even when only a few intended faults are available among 32 million fault injections.
As real-time embedded systems become more diverse and more complicated, systems with different types of tasks (e.g., periodic tasks and aperiodic tasks) are becoming prevalent. In such a system, it is important that the schedulability of the periodic tasks is guaranteed and, at the same time, that response times to aperiodic requests are short enough. The Total Bandwidth Server is one of the most convincing task scheduling algorithms for mixed task sets of periodic and aperiodic tasks. Considering the fact that in most cases tasks' actual execution times are much shorter than their worst-case execution times, this paper proposes a method of reducing the response times of aperiodic executions by using predicted execution times instead of worst-case execution times in the Total Bandwidth Server's deadline calculations, thereby obtaining shorter deadlines while ensuring the integrity of the periodic tasks. In a simulation-based evaluation, the proposed method combined with a resource reclaiming technique improved average response times for aperiodic tasks by up to 22% compared with the original Total Bandwidth Server and by up to 48% compared with the Constant Bandwidth Server, another algorithm appropriate for tasks with varying execution times.
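The deadline rule at the heart of this comparison is the standard Total Bandwidth Server assignment d_k = max(r_k, d_{k-1}) + C_k / U_s; budgeting with a (shorter) predicted execution time instead of the worst case directly yields earlier deadlines. A minimal sketch with made-up task parameters (the paper's safeguards for mispredictions are not modeled here):

```python
def tbs_deadlines(requests, u_s, budget_key):
    """Total Bandwidth Server deadline assignment:
    d_k = max(r_k, d_{k-1}) + C_k / U_s,
    where C_k is taken from `budget_key` ("wcet" for the original TBS,
    "predicted" for the proposed variant)."""
    d_prev = 0.0
    deadlines = []
    for req in requests:
        d_prev = max(req["arrival"], d_prev) + req[budget_key] / u_s
        deadlines.append(d_prev)
    return deadlines

# Two aperiodic requests whose actual runtimes are well below the WCET.
reqs = [
    {"arrival": 0.0, "wcet": 4.0, "predicted": 2.0},
    {"arrival": 1.0, "wcet": 3.0, "predicted": 1.5},
]
u_s = 0.25  # server bandwidth left over by the periodic task set

print(tbs_deadlines(reqs, u_s, "wcet"))       # [16.0, 28.0]
print(tbs_deadlines(reqs, u_s, "predicted"))  # [8.0, 14.0]
```

With the earlier deadlines, EDF schedules the aperiodic work sooner, which is the source of the response-time reduction; the proposed method must additionally protect the periodic tasks when a prediction is exceeded.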
Recently, many educational institutions have had to maintain a large number of PCs (educational PCs). It is common for the administrators of these institutions to use a disk image distribution system to manage them all together, which allows the administrators to easily keep all of their educational PCs in the same configuration. This method, however, robs them of management flexibility. Suppose that the use of certain application software is restricted to specific users or sites in those institutions, and that teachers request that other application software be prohibited during classes. It is hard for the administrators to satisfy all of these requirements with the traditional method, since doing so imposes a heavy administrative burden. This paper proposes a flexible method for managing application software on educational Windows PCs that meets such requirements without significant effort by the administrators. The proposed method controls the execution of individual application software on each educational PC to create the requested environments. Teachers as well as administrators can directly and dynamically change its configuration during their classes. An execution control system was implemented as a prototype to show the feasibility of the proposed method.
Security is a critical concern around the world. In many domains from counter-terrorism to sustainability, limited security resources prevent full security coverage at all times; instead, these limited resources must be scheduled, while simultaneously taking into account different target priorities, the responses of the adversaries to the security posture and potential uncertainty over adversary types. Computational game theory can help design such security schedules. Indeed, casting the problem as a Bayesian Stackelberg game, we have developed new algorithms that are now deployed over multiple years in multiple applications for security scheduling. These applications are leading to real-world use-inspired research in the emerging research area of “security games”; specifically, the research challenges posed by these applications include scaling up security games to large-scale problems, handling significant adversarial uncertainty, dealing with bounded rationality of human adversaries, and other interdisciplinary challenges.
This study identifies two main problems that several previous works on the issue of cooperation have not addressed. First, those works basically assume best-response decision making, in which every player knows all information regarding the payoff matrix and selects the strategy with the highest payoff. Second, as Ohdaira and Terano also point out, when specific tendering is represented with a game-theoretic model, we face the restriction that a player can submit only one move per match. Considering these issues, this paper extends Ohdaira's previous discussion of the altruistic decision by newly introducing the notion of bounded rationality, which is essential for modeling decisions made with some compromise under limited information. Using a model of matches between two groups with an evolutionary process, this study shows that each group establishes a higher level of cooperation than in the previous study employing the second-best decision. In addition, through a detailed sensitivity analysis of the probability of the rational decision and the probability of mutation in the evolutionary process, this paper reveals that even a small probability of rational decision making (occasionally selecting the first-ranked strategy) causes a rapid collapse of cooperation, although the growth of defection does not keep pace with the rate of that collapse. Moreover, this study shows that changing the probability of mutation in the evolutionary process has only a moderate effect on the speed of the collapse of cooperation.
Automated negotiation occurs when a negotiating function is performed by intelligent agents. Although current human-to-human negotiations can involve multiple, extremely complex issues, existing automated negotiation settings are simple; in particular, the structure of the issues is independent and flat in the existing automated negotiation framework. In this paper, we propose realistic negotiation frameworks for non-monotonic utility functions. Monotonicity of the utility functions is an important characteristic because if the utility function is monotonic, the issues are independent; when the issues are independent, it is useful to separate them and reach a distinct agreement on each sequentially. In addition, we propose an automated mediation protocol for negotiations over multiple non-monotonic issues. This mediation protocol consists of communications between the agents and the mediator; its procedures include recognizing related issues, announcement, bidding, awarding, and expediting. We experimentally demonstrate that the proposed method produces good outcomes and greater scalability, and that a suitable mediation strategy leads to better outcomes and scalability.
In this paper, we construct and analyze a crowdsourcing-based bug detection model in which strategic players select code and compete in bug detection contests. We model the contests as all-pay auctions, and we focus on addressing the low efficiency of bug detection through a division strategy. Our study shows that the division strategy can control two features of a bug detection contest, the expected reward classes and the scales of skill levels, by intentionally assembling players with a particular skill distribution in one division. In this way, the division strategy can shape the players' strategic behavior in code selection and thus improve bug detection efficiency. We analyze the division strategy in terms of skill mixing degree and skill similarity degree and find an explicit correspondence between the division strategy and bug detection efficiency. Our simulation results verify that the skill mixing degree, as the determinant factor of the division strategy, controls the trend of the bug detection efficiency, while the skill similarity degree plays an important role in determining the shape of the bug detection efficiency.
The Smart Grid is the trend for the next generation of electrical power systems, making the power grid intelligent and energy efficient. It requires a high level of network reliability to support two-way communication among electrical services, electrical units such as smart meters, and applications. A wireless mesh network infrastructure can provide redundant routes for the Smart Grid communication network to ensure network availability; its high flexibility and scalability also make it a promising solution for the Smart Grid. However, as in many other distributed ad-hoc networks, trust is a critical issue for wireless mesh networks. In this paper, we propose a novel trust-based geographical routing protocol, named Dynamic Trust Elective Geo Routing (DTEGR), which allows peers in a Smart Grid system to adjust their interaction behaviors based on the trustworthiness of others. Simulation studies confirm that DTEGR achieves better routing performance in different network scenarios and provides highly reliable data transmission in Smart Grid communication networks.
This paper presents an intelligent economic operation method for a smart grid environment that uses an advanced quantum evolutionary method. The proposed method models wind generation (WG) and photovoltaic generation (PV) as renewable power generation sources, as measures against the global warming effect. Thermal generators (TGs) are included in the model to provide the maximum amount of energy needed to meet consumers' demand. In addition, plug-in hybrid electric vehicles (PHEVs) are capable of reducing CO2 emissions and are gradually becoming an integral part of the smart grid infrastructure. Such an integration introduces uncertainties into the system, which are addressed by a fuzzy agent (FA): the demanded load, the wind speed, the solar radiation, and the number of participating PHEVs are treated as fuzzy parameters. An optimizer agent (OA), based on an intelligent quantum-inspired evolutionary algorithm, carries out the economic operation, covering scheduling and dispatching, with the help of the FA. The OA features intelligent operators such as a sophisticated rotation operator and a differential operator. The method is tested on a hypothetical power system with 10 thermal units, an equivalent number of PHEVs, and equivalent solar and wind farms. The simulation results show the effectiveness of the OA-FA combination, which provides excellent operational resource scheduling while reducing production cost and emissions.
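For readers unfamiliar with the optimizer's core mechanism, the sketch below is a generic quantum-inspired evolutionary loop in the style of Han and Kim's QEA (not the paper's OA, whose operators and objective are far richer): each decision bit is a Q-bit encoded by an angle, candidates are sampled by "measurement", and a rotation operator nudges the angles toward the best solution found so far. The toy objective stands in for a unit-commitment-style 0/1 decision.

```python
import math
import random

def qea_maximize(fitness, n_bits, generations=200, dtheta=0.05, seed=1):
    """Minimal quantum-inspired EA: each bit is a Q-bit represented by an
    angle theta; bit b measures to 1 with probability sin^2(theta_b), and
    a rotation operator moves theta toward the best-so-far solution."""
    rng = random.Random(seed)
    theta = [math.pi / 4] * n_bits        # equal superposition for every bit
    best, best_fit = None, float("-inf")
    for _ in range(generations):
        # Measurement: collapse the Q-bits into a concrete candidate.
        x = [1 if rng.random() < math.sin(t) ** 2 else 0 for t in theta]
        f = fitness(x)
        if f > best_fit:
            best, best_fit = x, f
        # Rotation: nudge each angle toward the corresponding best bit.
        for b in range(n_bits):
            theta[b] += dtheta if best[b] == 1 else -dtheta
            theta[b] = min(max(theta[b], 0.0), math.pi / 2)
    return best, best_fit

# Toy "commitment" objective (OneMax): committing every unit is optimal.
best, fit = qea_maximize(lambda x: sum(x), n_bits=16)
print(fit)  # typically close to the maximum of 16
```

A real economic dispatch objective would replace the toy fitness with cost and emission terms and add the paper's differential operator and fuzzy inputs; the loop structure stays the same.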
Traffic control/operation systems based on probe vehicle data (i.e., vehicles' locations and trajectories, and past records of travel time) have been attracting attention. In this paper, we propose and evaluate a novel traffic management method that provides information using anticipatory stigmergy, which can find alternative routes to avoid expected congestion by sharing the probe vehicles' near-future locations. Because the approach may be ineffective if all drivers follow the fastest path found by anticipatory stigmergy, we introduce new strategies for assigning a driver to a link based on the residual distance to his/her destination or the time spent in congestion since his/her departure. In addition, the impact of drivers' route choice behavior, i.e., whether they follow the recommended link, is examined in a sensitivity analysis. The results of our numerical experiments show that the proposed anticipatory stigmergy with an assignment strategy works better than conventional methods.
This paper proposes a multiagent-based route optimization method as a next-generation transportation system that generates a sustainable route network capable of transporting stranded persons effectively even if road conditions change in a disaster situation. For this purpose, we apply a multiagent approach to the route optimization method, in which one agent corresponds to one route. Such an approach is very useful in a disaster situation because it makes it easy to add or delete routes and to modify them according to dynamic condition changes and constraints. Toward a sustainable route network based on the multiagent approach, our route optimization method (1) employs a bus stop clustering method to generate clustered routes, (2) introduces a cluster-extension method to connect routes in different clusters, and (3) adopts an evaluation function that considers damage caused by changes in road conditions. Intensive simulations on Mandl's urban transport benchmark problem have revealed the following: (1) the proposed method succeeds in reducing the numbers of stranded and detoured persons and the detour time, all of which are caused by road condition changes; (2) detour routes emerge, which contributes to increased network sustainability; and (3) both the passengers' transportation time and the number of buses are reduced in a non-damaged situation.
Knowledge collaborative communities play an important role in collective intelligence systems. To discover a knowledge collaborative community, we need to consider not only the structure of a network but also the performance of knowledge collaboration among the members of the community. Traditional community discovery approaches are not suitable for discovering knowledge collaborative communities, since most of them focus too heavily on network topology and ignore other important factors. In this paper, we propose two community discovery approaches, applicable to networks of different sizes, that take more knowledge collaboration factors into account. Compared with other existing approaches, the proposed approaches perform better in forming knowledge collaborative communities for multi-domain problem solving.
Smart Cities are supposed to be the next generation of not only city infrastructure but also citizenship. Improving citizens' quality of life (referred to as social utility in this paper) should be one of the main targets of a smart city, and Electric Vehicles (EVs) open several new avenues in this area. While today's citizens are basically on their own when they buy a car while residing in a city, EVs in a smart city are a different matter entirely. Citizens currently shrink from purchasing EVs mainly because of the high cost and low availability of battery charging. With an alternative battery ownership model and Vehicle-To-Home (V2H) systems, citizens can get much more social utility from owning an EV. This paper shows that high social utility depends on the infrastructure provided by the city: while the battery replacement model presented here greatly increases charging availability, it still depends heavily on battery replacement stations. This paper presents a realistic model for a city-wide EV service infrastructure, based on the real road map of Tokyo, that evaluates citizens' quality of life through two social utility metrics. Recommendations to battery replacement service providers are made based on the simulation results.
To discuss or evaluate certain policies for a smart city (e.g., urban transportation systems), it is effective to develop an agent-based simulation that can reproduce individuals' travel behavior and social interaction. Activity-travel data are needed to develop such a behavior model; however, it is difficult to collect these data over a long time period because of the heavy burden on survey subjects. This study proposes a web system that collects individuals' schedule data easily through travel information. The proposed system has two key characteristics: 1) travel information (e.g., which route is best at a particular time) is recommended automatically, based on the concept of a prism, when the user enters a new schedule; and 2) researchers can use the users' schedule information as activity-travel data without conducting a special survey. We tested the system with students as users, who expressed satisfaction with the system's usability and operability.
In this paper, we focus on the allocation of social service facilities that operate under the first-come-first-served rule. In such facilities, users cannot make reservations in advance, so to reduce congestion it is desirable to adjust schedules through communication devices. We propose user-in-the-loop forecasting with a statement-based cost estimate and apply it to two types of facility allocation models, a theme park scenario and a highway scenario. Computer experiments show that the proposed estimate reduces congestion in both scenarios. In particular, users in the highway scenario could reach a near user-equilibrium situation without any prior experience with the system.
Existing music recommendation systems rely on users' contexts or on content analysis to satisfy users' music playing needs. They have achieved a certain degree of success and have inspired further research. However, the cold start problem and the limitation of recommending only similar music have been pointed out. Therefore, this paper proposes a unique recommendation method using ‘renso’ alignments over Linked Data, aiming to realize a music recommendation agent on smartphones. We first collect data from Last.fm, Yahoo! Local, Twitter, and LyricWiki and create large-scale Linked Open Data (LOD); we then create the ‘renso’ relation on the LOD and select music according to the context. Finally, evaluation results demonstrate the method's accuracy and serendipity.
We propose a team formation method that integrates the estimation of neighboring agents' resources in a tree-structured agent network in order to allocate tasks to agents that have sufficient capabilities to perform them. A task for providing a required service in a distributed environment is often achieved by a number of subtasks that are dynamically constructed on demand in a bottom-up manner and then performed by a team of appropriate agents. A number of studies have investigated efficient team formation for quality services, but most of them assume that the resources of other agents are known, an assumption that is not adequate in real-world applications. The contribution of this paper is threefold. First, we extend the conventional method by combining the learning of task allocation with the reorganization of agent networks; in particular, we introduce the elimination of links as well as the generation of links in the reorganization. Second, we revise the learning method so that it uses only locally available information. Finally, we drop the assumption that all resource information of other agents is given in advance, and instead extend the task allocation method by combining it with resource estimation of neighboring agents. We experimentally show that this extension can considerably improve the efficiency of team formation compared with the conventional method, even though it does not require knowledge of other agents' resources, and that it makes the agent network adaptive to environmental changes.
This study is intended to encourage appropriate social norms among multiple agents. Effective norms, such as those emerging from sustained individual interactions over time, can make agents act cooperatively to optimize their performance. We introduce a “social learning” model in which agents interact with one another within the framework of a coordination game. Because coordination games have two equilibria, social norms are necessary to make the agents converge to a unique equilibrium. In this paper, we present the emergence of a desirable social norm through inverse reinforcement learning, an approach for extracting a reward function from observations of optimal behavior. First, we let a mediator agent estimate the reward function by inverse reinforcement learning from observations of a master's behavior. Second, we introduce agents who act according to the estimated reward function into a multiagent world in which most agents, called citizens, have no guide for how to act. Finally, we evaluate the effectiveness of introducing inverse reinforcement learning.
We model a transportation network where agents of different types operate with conflicting objectives: drivers want to drive at high speeds to reach their destinations faster, while police units want to prevent unlawful speeding. Police units have to efficiently allocate their limited resources to monitor roads and catch speeders, who try to avoid being caught. Assuming that police and drivers make strategic choices, the problem can be modeled using game theory. We describe the models and algorithms we developed and validate them on synthetic and real traffic data from different maps.
Building evacuation analysis has recently received increasing attention, as people are keen to assess the safety of occupants. Reports on past disasters indicate that human behavior characterizes evacuation during emergencies. Understanding and modeling human behavior enables improved design of evacuation plans that better reflect the needs of occupants, for example, to reduce evacuation time, a composite of pre-movement time and travel time. In this paper, we demonstrate that information available at the time of an emergency affects human behavior, and that this behavior affects pre-movement time and the time it takes to move people to safe places. Information is shared with people via announcements and through interpersonal communication. We have modeled and simulated information transfer in an agent-based evacuation system, using BDI models that represent the diversity of human psychological states and ACL-based communications that dynamically change people's beliefs. The model enables an evacuation simulation to consider the effect of information on human behavior and to calculate evacuation time, including pre-movement time. The simulation results demonstrate that methods of guidance improve evacuation time, and they reveal phenomena in agent behaviors that have not been reproduced by other methods.
We propose “EducaTableware (Educate/Tableware),” a design for interactive tableware devices that makes eating more playful and improves daily eating habits through auditory feedback to encourage specific mealtime behaviors. We have developed a fork-type device for use when eating. This device emits sounds when a user is consuming a food item. In this paper, we discuss the EducaTableware concept, describe the implementation of the fork-type device, and conduct a user test with child subjects for one week.
Pervasive logging devices capture everything, including members of the public nearby without their consent, possibly troubling people who value their privacy. This raises privacy issues and, furthermore, the widespread use of such logging devices may affect people's behavior, as they may feel uncomfortable being constantly monitored. People may wish to have some control over the lifelogging devices of others, and in this article we describe a framework to restrict anonymous logging unless it is explicitly permitted. Our privacy framework allows a user to define privacy policies controlling when, where, and whom to restrict from logging them; moreover, it is possible to select the types of logging sensors to which these restrictions apply. Evaluation results show that this approach is a practical method of configuring privacy settings and restricting pervasive devices from logging.
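A minimal sketch of the policy matching such a framework implies (the field names and rule schema are hypothetical, not the paper's actual format): each rule restricts logging for a combination of place, time, logger group, and sensor type, and a capture is allowed only if no rule matches the full context.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    """One privacy rule: restrict logging at these places, during these
    hours, by these logger groups, for these sensor types
    (hypothetical schema for illustration)."""
    places: set
    hours: range
    loggers: set
    sensors: set

def logging_allowed(rules, place, hour, logger, sensor):
    """Logging is permitted unless some rule matches the whole context."""
    return not any(
        place in r.places and hour in r.hours
        and logger in r.loggers and sensor in r.sensors
        for r in rules
    )

# "No strangers may photograph me at the office during working hours."
rules = [Rule(places={"office"}, hours=range(9, 18),
              loggers={"stranger"}, sensors={"camera"})]

print(logging_allowed(rules, "office", 10, "stranger", "camera"))      # False
print(logging_allowed(rules, "office", 10, "stranger", "microphone"))  # True
print(logging_allowed(rules, "cafe", 10, "stranger", "camera"))        # True
```

Because restrictions are keyed on the full (when, where, who, sensor) tuple, a single rule can block one sensor type in one context without disabling the logger elsewhere.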
In this study, the properties of physical unclonable functions (PUFs) on 28-nm process field-programmable gate arrays (FPGAs) are examined. A PUF is a circuit that generates device-specific IDs by extracting device variations; owing to these variations, no two PUFs will generate the same ID even if they have identical structures and are manufactured on the same silicon wafer. However, because the influence of device variation increases as the process node shrinks, it has been uncertain whether PUFs can be built on recently developed small-scale process nodes, even though variation control technology is constantly advancing. While many PUFs using 40-nm or larger process nodes have been reported, smaller devices have not yet been studied to the authors' knowledge, and this is the first published journal article on PUFs for 28-nm process FPGAs. In this paper, within-die reproducibility, die-to-die uniqueness, and other properties are evaluated, and the feasibility of PUFs on 28-nm FPGAs is discussed.
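The two evaluated properties can be made concrete with a short sketch (the bit strings below are toy data, not the paper's measurements): within-die reproducibility is the bit error rate between repeated readouts of one device, which should be low, while die-to-die uniqueness is the average pairwise Hamming distance between different devices' responses, which should be near 0.5.

```python
def hamming_fraction(a, b):
    """Fraction of differing bits between two equal-length response strings."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def within_die_reproducibility(readouts):
    """Mean bit error rate of repeated readouts of ONE device against its
    first (reference) readout; lower is better."""
    ref, rest = readouts[0], readouts[1:]
    return sum(hamming_fraction(ref, r) for r in rest) / len(rest)

def die_to_die_uniqueness(devices):
    """Mean pairwise Hamming distance between DIFFERENT devices' responses;
    an ideal PUF gives a value close to 0.5."""
    n = len(devices)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(hamming_fraction(devices[i], devices[j])
               for i, j in pairs) / len(pairs)

# Toy 4-bit responses: one device read three times, and three devices.
print(within_die_reproducibility(["1011", "1011", "1001"]))  # 0.125
print(die_to_die_uniqueness(["1011", "0100", "1110"]))       # about 0.67
```

In practice these metrics are computed over many challenge-response pairs, temperatures, and voltages; the concern at small process nodes is how they degrade under those conditions.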
RC4 is a stream cipher designed by Rivest in 1987. It is the most famous stream cipher and is widely used, e.g., in SSL/TLS, WEP, and WPA. Although RC4 has been broken in particular implementations and settings, such as the WEP implementation and the broadcast setting, RC4 itself is not yet completely broken. In 2011, Teramura et al. generalized classes of weak keys of RC4 by using predictive states, which are special classes of the internal state of RC4. The total number of Teramura et al.'s weak keys is approximately 2^117.29. Their weak-key attack can recover a 128-bit secret key with an efficiency of 2^95.10, where efficiency is defined as time complexity per success probability of the attack; however, this attack works only if particular patterns of the keystream are observed. In this paper, we further expand the weak-key space of RC4. By thoroughly analyzing the relation between the key and the initial state of the pseudo-random generation algorithm, we find new classes of predictive states that can be utilized for key recovery attacks. As a result, 2^118.58 keys can be defined as new weak keys, more than twice the number of Teramura et al.'s weak keys. Moreover, our attack is applicable to any keystream, while Teramura et al.'s attack is feasible only for particular patterns of the keystream. Given any keystream, our weak-key attack can recover a 128-bit secret key with an efficiency of 2^115.11. Our attack is the best-known single-key key recovery attack on RC4 with respect to efficiency. In addition, if we focus on specific keystreams, as in Teramura et al.'s attack, the 128-bit secret key can be recovered with an efficiency of 2^76.32, which is more efficient than Teramura et al.'s attack.
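For reference, the cipher under attack is tiny. The sketch below implements the standard RC4 key-scheduling algorithm (KSA) and pseudo-random generation algorithm (PRGA); weak-key analyses of the kind described above exploit relations between the secret key and the state S immediately after the KSA (the attack itself is not reproduced here).

```python
def rc4_keystream(key, n):
    """Return n RC4 keystream bytes for the given key (a list of ints)."""
    # Key-scheduling algorithm (KSA): key-dependent permutation of 0..255.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA): output one byte per step.
    out, i, j = [], 0, 0
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return out

# Well-known test vector: the key "Key" yields keystream EB 9F 77 81 ...
print(bytes(rc4_keystream(list(b"Key"), 4)).hex())  # eb9f7781
```

Because the PRGA is deterministic in the post-KSA state, a predictive state (a partial state that forces particular keystream bytes) links observable keystream patterns back to classes of keys, which is what makes weak-key enumeration possible.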
An ordered multisignature scheme is a signature scheme that guarantees both the validity of an electronic document and its signing order. Although the security of most such schemes has been proven in the random oracle model, the difficulty of implementing a random oracle implies that security should be proven without random oracles, i.e., in the standard model. A straightforward way to construct such schemes in the standard model is to apply aggregate signature schemes. However, the existing schemes based on the CDH problem are inefficient in the sense that the number of bilinear-map computations and the length of the public key depend on the length of (a hash value of) the message. In this paper, we therefore propose a CDH-based ordered multisignature scheme that is provably secure in the standard model under a moderate attack model. Its computational cost for the bilinear maps and its public-key size are independent of the length of (a hash value of) the message. More specifically, compared with the existing schemes, the public-key length is reduced from 512 group elements to three, while the computational cost is reduced from 1.6 msec to 0.85 msec.
Analysis of malware-infected traffic data revealed the payload features that are most effective for detecting infection. The attack traffic data were taken from the D3M2012 dataset and the CCC DATAsets 2009, 2010, and 2011. Traffic flowing on intranets at two different sites was used as normal traffic data. Since the type of malware (worm, Internet connection confirmation, etc.) affects the type of traffic generated, the malware was divided into three types (worm, Trojan horse, and file-infecting virus), and the most effective features were identified for each type.
This paper proposes a new privacy-preserving scheme for estimating the size of the intersection of two given secret subsets. Given the inner product of two Bloom filters (BFs) of the given sets, the proposed scheme applies Bayesian estimation under the assumption of a beta distribution as the a priori probability of the size to be estimated. The BF keeps the communication complexity low, and the Bayesian estimation improves the estimation accuracy. A possible application of the proposed protocol is epidemiological datasets regarding two attributes, Helicobacter pylori infection and stomach cancer. Assuming that information related to Helicobacter pylori infection and stomach cancer is collected separately, the protocol demonstrates that a χ2-test can be performed without disclosing the contents of the two confidential databases.
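The building block the abstract relies on can be illustrated with a short sketch: estimating the intersection size of two sets from the inner product of their Bloom filters. The paper's Bayesian refinement with a beta prior is omitted here; this shows only the classical inner-product-based estimator it improves on, and the filter length M, hash count K, and hashing scheme are illustrative choices, not the paper's parameters.

```python
# Hedged sketch: inner-product-based intersection-size estimation with
# Bloom filters. Each party could reveal only its filter (or, in the
# paper's setting, only the inner product) rather than its raw set.
import hashlib
import math

M = 1024  # filter length in bits (assumption)
K = 4     # number of hash functions (assumption)

def bloom(items):
    """Build a Bloom filter as a 0/1 list of length M."""
    bits = [0] * M
    for x in items:
        for k in range(K):
            h = int(hashlib.sha256(f"{k}:{x}".encode()).hexdigest(), 16) % M
            bits[h] = 1
    return bits

def est_cardinality(t):
    """Estimate the number of inserted items from t set bits."""
    return -(M / K) * math.log(1 - t / M)

def est_intersection(a, b):
    """Estimate |A ∩ B| via the inner product of the two filters."""
    inner = sum(x & y for x, y in zip(a, b))  # inner product <a, b>
    t_a, t_b = sum(a), sum(b)
    t_union = t_a + t_b - inner               # set bits of the bitwise OR
    # |A ∩ B| ≈ |A| + |B| - |A ∪ B|, each term estimated from set-bit counts.
    return est_cardinality(t_a) + est_cardinality(t_b) - est_cardinality(t_union)
```

The estimate is noisy for small filters, which is exactly the accuracy gap the paper's Bayesian estimation targets.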
Speech animation synthesis is still a challenging topic in the field of computer graphics. In particular, representing the detailed appearance of the inner mouth, such as the tip of the tongue nipped between the teeth or the back of the tongue, has not been achieved in existing animations. To solve this problem, we propose a data-driven speech animation synthesis method that focuses on the inside of the mouth. First, we classify inner-mouth images into teeth, labeled with the opening distance of the teeth, and a tongue, labeled according to phoneme information. We then insert them into an existing speech animation based on the teeth-opening distance and phoneme information. Finally, we apply a patch-based texture synthesis technique, using a database of 2,213 images created from seven subjects, to the resulting animation. With the proposed method, we can automatically generate a speech animation with a realistic inner mouth from an existing speech animation created by previous methods.
To provide an accurate and user-adaptable software keyboard for touchscreens, we propose a probabilistic flick keyboard based on hidden Markov models (HMMs). Touch and flick operations for each character are modeled by HMMs. This keyboard reduces input errors by taking the trajectory of the actual touch position into consideration and by user adaptation. We evaluated the performance of an HMM-based flick keyboard and maximum-likelihood linear regression (MLLR) adaptation. Experimental results showed that a user-dependent model reduced the error rate by 28.3%. In a practical setting, the MLLR adaptation to a specific user with only 10 words reduced the error rate by 16.6% and increased the typing speed by 11.9%.