In this study, the authors propose and implement a particle display system (PDS) that consists of hundreds of randomly distributed pixels. The wireless capability of this system enables each node to move freely, without the distance limitations imposed by wire cables. The authors also propose effective visual presentation techniques for a display system with randomly distributed pixels. One of these techniques extends a well-known characteristic of the human visual system whereby two-dimensional static or moving images can be perceived from a set of high-frequency flashing one-dimensional pixel arrays, such as LED arrays. While that phenomenon can only extend the virtual resolution of a display in the direction perpendicular to the aligned pixels, the proposed technique enables multi-directional scrolling of two-dimensional images on randomly distributed pixels. In addition, the advantages of presenting information on a display with nonuniform pixel distribution and of virtual pixels produced by fast-flashing pixels are discussed. The proposed techniques help reduce the cost of installing a large-scale display and the time taken for the initial setup, which otherwise involves carrying large pixel arrays and determining the precise size and shape of the display.
We propose a new inter-kernel communication mechanism for multicore architectures that allows multiple kernels within a single machine to communicate. Using this mechanism, multiple kernels can share I/O devices, such as network and disk devices. The mechanism has been integrated into SHIMOS, a mechanism that partitions the CPUs, the memory, and the I/O devices. Multiple Linux kernels have been run on multicore architectures using the integrated SHIMOS mechanism. Several sets of benchmark results demonstrate that SHIMOS is faster than modern virtual machines. System calls under SHIMOS are about seven times faster than under the Xen virtual machine. When two Linux compilation jobs run on two Linux kernels, SHIMOS is 1.35 and 1.005 times faster than Xen and native single-kernel Linux, respectively.
Formal verification is frequently based on modal μ-calculus and its fragments. However, as systems and verification properties become more complicated, an increasing number of them cannot be formalized in modal μ-calculus. In this paper, we present a first-order extension of modal μ-calculus in order to formalize such systems and verification properties. We also give an axiomatization of the logic. The axiomatization is necessarily incomplete, because the set of all valid formulas of the logic is not recursively enumerable. Finally, in order to demonstrate that our axiomatization is practical for verification, we formalize a system and a mutual exclusion property for unboundedly many processes in our first-order extension, and then verify in our axiomatization that the system satisfies the property.
We give a semantics of abstraction in PML (Pointer Manipulation Language), introduced by Takahashi et al. This is an instance of the unifying theory for abstraction of reactive systems given by the second author et al., since every model of PML induces a model of Rμ, a modal logic with fixpoints studied by the second author. This yields a proof of the correctness of the predicate abstraction used in MLAT.
This paper presents a method for compiling the Standard ML module language into a flattened intermediate language. An innovative point of this approach lies in viewing a functor as a code template with placeholders representing the functor arguments. Each functor application fills these placeholders in the code template with the actual functor argument and generates a fresh structure. After this compilation, all module language constructs are eliminated. This method allows us to compile the full Standard ML language into a typed intermediate language with no special mechanism for modules, and provides a simpler model for separate compilation. The proposed compilation method has been successfully implemented in the SML# compiler, which demonstrates its feasibility. This paper also reports the details of our implementation.
The latest robust estimators usually take advantage of density estimation, such as kernel density estimation, to improve the robustness of inlier detection. The challenge for these estimators, however, is choosing a suitable smoothing parameter: a poor choice can cause the population of inliers to be over- or under-estimated, which in turn reduces the robustness of the estimation. To solve this problem, we propose a robust estimator that estimates an accurate inlier scale. The proposed method first derives the residual distribution model from an obvious case-dependent constraint, the residual function. The proposed inlier scale estimator then performs a global search for the scale whose residual distribution best fits this model. Knowledge of the residual distribution model provides a major advantage: it allows us to estimate the inlier scale correctly, thereby improving the robustness of the estimation. Experiments with various simulations and real data validate our algorithm, which shows clear benefits over several of the latest robust estimators.
This paper presents a matrix-based algorithm for integrating inheritance relations of access rights to generate integrated access control policies that unify the management of various access control systems. Inheritance relations of access rights are found in the subject, resource, and action categories. Our algorithm first integrates the inheritance relations within each category, and then integrates the inheritance relations across all categories. We show that these operations can be carried out with basic matrix operations, which makes the integration algorithm very easy to implement.
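As a hedged illustration of the matrix view (a generic sketch, not the authors' exact algorithm), an inheritance relation within a single category can be stored as a boolean adjacency matrix, and all inherited rights recovered by computing its reflexive-transitive closure with elementary logical matrix operations. The role hierarchy below is hypothetical:

```python
def transitive_closure(adj):
    """Reflexive-transitive closure of a boolean relation matrix,
    computed with the Floyd-Warshall scheme over logical operations."""
    n = len(adj)
    closure = [[bool(adj[i][j]) or i == j for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                closure[i][j] = closure[i][j] or (closure[i][k] and closure[k][j])
    return closure

# Hypothetical inheritance chain a -> b -> c within one category:
# entry [i][j] means subject i directly inherits from subject j.
roles = [[0, 1, 0],
         [0, 0, 1],
         [0, 0, 0]]
closed = transitive_closure(roles)  # closed[0][2] is now True
```

The same closure matrix can then serve as one operand when relations from different categories are combined.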
This work proposes a method to enhance the selection mechanism of multiobjective evolutionary algorithms, aiming to improve their performance on many-objective optimization problems. The proposed method uses a randomized sampling procedure combined with ε-dominance to refine the ranking of solutions after they have been ranked by Pareto dominance. The sampling procedure chooses a subset of initially equally ranked solutions and gives them a selective advantage, favoring a good distribution of the sample based on dominance regions wider than conventional Pareto dominance. We enhance NSGA-II with the proposed method and analyze its performance on a wide range of non-linear problems using MNK-Landscapes with up to M=10 objectives. Experimental results show that the convergence and diversity of the solutions found can improve remarkably on problems with 3 ≤ M ≤ 10 objectives.
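A minimal sketch of the ε-dominance test that such a ranking refinement relies on, using the standard additive definition for minimization problems (the exact variant used in the paper may differ):

```python
def epsilon_dominates(a, b, eps):
    """Additive epsilon-dominance for minimization: vector a
    epsilon-dominates b if a_i - eps <= b_i in every objective,
    with strict inequality in at least one objective."""
    weakly_better = all(ai - eps <= bi for ai, bi in zip(a, b))
    strictly_better = any(ai - eps < bi for ai, bi in zip(a, b))
    return weakly_better and strictly_better

# With eps > 0 the dominance region widens: a solution may
# epsilon-dominate another that it does not Pareto-dominate.
widened = epsilon_dominates((1.05, 2.0), (1.0, 2.5), eps=0.1)
pareto = epsilon_dominates((1.05, 2.0), (1.0, 2.5), eps=0.0)
```

This widened region is what lets equally Pareto-ranked solutions be told apart during the sampling step.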
We address the task of active learning for linear regression models in collaborative settings. The goal of active learning is to select training points that allow accurate prediction of output values. We propose a new active learning criterion aimed at directly improving the accuracy of output value estimation by analyzing the effect of new training points on the estimates of the output values. The advantages of the proposed method are most pronounced in collaborative settings, in which most of the data points are missing and the number of training data points is much smaller than the number of parameters of the model.
We developed an MPEG-2 transcoding method based on a two-tiered quantizer matrix that reduces re-quantization noise. The proposed method changes the quantization matrix to decrease re-quantization noise, instead of changing the value of the quantization parameters as conventional methods do. Using this two-tiered matrix makes it possible to reduce re-quantization noise in the high-frequency domain. The main feature of the proposed method is that it does not cause the differences in image quality between adjacent macroblocks that the conventional method does. Experimental results show that our method produces structural similarity (SSIM) and video quality model (VQM) scores that are 0.044 and 0.0938 better, respectively, than those of the best conventional method. The proposed method always produced superior SSIM and VQM scores when the bit reduction ratio was smaller than 0.1015 for all the tested sequences.
This paper describes Musicream, a novel music-listening interface that lets a user unexpectedly come across various musical pieces similar to those the user likes. Most existing “query-by-example” interfaces are based on similarity-based searching, so they return the same results for the same query; a user of those systems always receives the same list of musical pieces sorted by similarity. Such systems therefore give a user little opportunity to encounter unfamiliar musical pieces by chance. Musicream facilitates active, flexible, and unexpected encounters with musical pieces by providing four functions: the music-disc streaming function, which creates a flow of many musical-piece entities (discs) from a large music collection; the similarity-based sticking function, which allows a user to easily pick out and listen to similar pieces from the flow; the meta-playlist function, which can generate a playlist of playlists (ordered lists of pieces); and the time-machine function, which automatically records all Musicream activities and allows a user to visit and retrieve a past state as if using a time machine. In our experiments, these functions were used seamlessly to achieve active and creative querying and browsing of music collections, confirming the effectiveness of Musicream.
There are two major problems with learning-based super-resolution algorithms. One is the large amount of memory required to store the examples; the other is the high computational cost of finding the nearest neighbors in the database. To alleviate these problems, it helps to reduce the dimensionality of the examples and to store only the small number of examples that contribute to the synthesis of a high-quality video. Based on these ideas, we have developed an efficient algorithm for learning-based video super-resolution, introducing several strategies for constructing an efficient database. Evaluation experiments demonstrate the efficiency of our approach in improving super-resolution algorithms.
In outdoor scenes, the polarization of the sky provides a significant clue to understanding the environment. The polarization state of light conveys the information needed to determine the orientation of the sun. Robot navigation, sensor planning, and many other application areas benefit from this navigation mechanism. Unlike previous investigations, we analyze sky polarization patterns when the fish-eye lens is not vertical, since a camera in a general position is effective for outdoor measurements. We tilted a measurement system consisting of a fish-eye lens, a CCD camera, and a linear polarizer in order to analyze the transition of the 180-degree sky polarization patterns during tilting. We also compared our results measured under overcast skies with the corresponding celestial polarization patterns calculated using the single-scattering Rayleigh model.
The link structure of the Web is generally represented by the webgraph, which is often used in web structure mining, whose main aim is to find hidden communities on the Web. In this paper, we identify a common frequent substructure, give it a formal graph definition, which we call an isolated star (i-star), and propose an efficient algorithm for enumerating i-stars. We then investigate the structure of the Web by enumerating i-stars from real web data. We observed that most i-stars correspond to index structures within single domains, while some are verified to be candidate communities, which supports the validity of i-stars as a useful substructure for web structure mining and link spam detection. We also observed that the distribution of i-star sizes follows a power law, which is further evidence of the scale-freeness of the webgraph.
Community detection in networks has received much attention recently. Most previous work addresses unipartite networks, which are composed of only one type of node. In the real world, however, there are many bipartite networks, which are composed of two types of nodes. In this paper, we propose a fast algorithm called LP&BRIM for community detection in large-scale bipartite networks. It is based on a joint strategy of two algorithms: label propagation (LP), a very fast community detection algorithm, and BRIM, an algorithm that generates a better community structure by recursively inducing divisions between the two types of nodes in a bipartite network. Through experiments, we demonstrate that this new algorithm successfully finds meaningful community structures in large-scale bipartite networks within a reasonable time.
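The label propagation half of the joint strategy can be sketched as follows: each node repeatedly adopts the most frequent label among its neighbours until no label changes. This is a generic LP sketch on a hypothetical toy graph, not the LP&BRIM implementation itself:

```python
import random
from collections import Counter

def label_propagation(adjacency, seed=0, max_rounds=100):
    """Minimal label propagation: start with one label per node, then
    let each node adopt its neighbours' majority label until stable."""
    rng = random.Random(seed)
    labels = {node: node for node in adjacency}
    nodes = list(adjacency)
    for _ in range(max_rounds):
        changed = False
        rng.shuffle(nodes)  # random visiting order each round
        for node in nodes:
            neighbours = adjacency[node]
            if not neighbours:
                continue
            best = Counter(labels[n] for n in neighbours).most_common(1)[0][0]
            if labels[node] != best:
                labels[node] = best
                changed = True
        if not changed:
            break
    return labels

# Hypothetical toy graph: two triangles joined by a single edge.
graph = {
    0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
    3: [2, 4, 5], 4: [3, 5], 5: [3, 4],
}
communities = label_propagation(graph)
```

In LP&BRIM, the labels produced by a pass like this would then be refined by BRIM's recursive bipartite division step.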
Directory services are popular among people searching for information of interest on the Web. These services provide hierarchical categories for finding a user's favorite pages, and web pages are assigned to the categories by hand. Many existing studies classify a web page using the text in the page itself. Recently, some studies have used text not only from the target page to be categorized, but also from the pages that link to it. The text in these linking pages must be narrowed down, because much of it is unrelated to the target page. However, these studies apply a single extraction method to all pages, even though web pages differ greatly in format. We have developed a method for extracting anchor-related text, and we use the text parts extracted by our method to classify web pages. Experimental results show that our extraction method improves classification accuracy.
Mobile devices are becoming more and more difficult to use due to the sheer number of functions now supported. In this paper, we propose a menu customization system that ranks functions so as to make interesting functions easy to access, including both frequently used functions and functions that are used infrequently but have the potential to satisfy the user. Concretely, we define the features of the phone's functions by extracting keywords from the manufacturer's manual, and propose a method that uses the Ranking SVM (Support Vector Machine) to rank the functions based on the user's operation history. We conducted a one-week home-use test to evaluate the efficiency of customization and the usability of the customized menus. The results show that the average rank on the last day was half that of the first day, and that users could find, on average, 3.14 new kinds of functions per day, ones they did not know about before the test. This shows that the proposed customized menu supports the user by making frequent items easier to access and new interesting functions easier to find. In interviews, almost 70% of the users were satisfied with the ranking provided by menu customization as well as with the usability of the resulting menus. The interviews also showed that automatic cell phone menu customization is more appropriate for mobile phone beginners than for expert users.
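At the core of a Ranking SVM is the pairwise transformation: each preference pair of feature vectors becomes a signed difference vector for an ordinary binary SVM. A minimal sketch of that transformation (the feature vectors below are hypothetical, not taken from the paper's data):

```python
def pairwise_transform(preference_pairs):
    """Turn preference pairs (preferred, other) of feature vectors into
    labelled difference vectors, the training data of a Ranking SVM."""
    examples = []
    for preferred, other in preference_pairs:
        diff = [p - o for p, o in zip(preferred, other)]
        examples.append((diff, 1))                 # preferred minus other
        examples.append(([-d for d in diff], -1))  # mirrored pair
    return examples

# Hypothetical 2-feature vectors for two phone functions, where the
# operation history ranked the first function above the second.
pairs = [([1.0, 0.0], [0.0, 1.0])]
training_data = pairwise_transform(pairs)
```

A linear classifier trained on such difference vectors yields a weight vector whose dot product with a function's features serves as its ranking score.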
In this paper, we propose a method for generating simple but semantically correct replies to user inputs that are not related to the given task of a task-oriented information kiosk or any other natural language interface placed in a public place. We describe our method for retrieving meaningful associations from the Web and for adding modality based on chat-log data. After showing the results of the evaluation experiments, we introduce an implementation of an affect analysis algorithm and a pun generator to increase users' satisfaction.
In this paper, we present a backup technique for peer-to-peer applications, such as the distributed asynchronous Web-Based Training system that we have previously proposed. To improve the scalability and robustness of this system, all contents and functions are realized as mobile agents. These agents are distributed among computers, which locate them using a peer-to-peer network based on a modified Content-Addressable Network. In the proposed system, the entire service remains available even if some computers break down; however, contents disappear when the agents holding them are lost. As a solution to this problem, backups of the agents are distributed among the computers. When the failure of a computer is detected, other computers continue the service using the backups of the agents that belonged to the failed computer. The developed algorithms are examined through experiments.
The development of wireless technologies such as cellular networks, WiFi, and Bluetooth has created heterogeneous network environments. In these environments, users can access anything at any time, anywhere; however, in order to make full use of such wireless networks, users need to discover whether wireless networks exist in their vicinity and select the most appropriate one. Users then have to obtain and input parameter settings for the selected network to begin communication. In this paper, we propose a network composition framework that enables automatic connection to a wireless network suitable for the user's context with minimal interaction. Based on this framework, we introduce network composition procedures that realize network discovery, network selection, configuration information notification, and device configuration with the support of a cellular network connection. We implemented the proposed framework and procedures in a real environment composed of cellular phones and laptop PCs, examined the implemented functions and their performance, and present several attractive examples of actual use.
Although the near-far effect has been considered the major issue preventing CDMA from being used in ad-hoc networks, in this paper we show that the near-far effect is not a severe issue in inter-vehicle networks for driving safety support, where packets are generally transmitted in a broadcast manner. Indeed, the near-far effect provides extremely reliable transmissions between nearby nodes, regardless of node density, which cannot be achieved by CSMA/CA. However, CDMA cannot be directly applied in realistic traffic accident scenarios, where highly reliable transmissions are required between distant nodes as well. This paper therefore proposes packet forwarding and transmission scheduling methods that expand the area in which reliable transmissions are achievable. Simulation results show that the proposed scheme significantly outperforms a CSMA/CA-based scheme in terms of delivery ratio and delay under realistic traffic accident scenarios. Specifically, the proposed scheme achieves a delivery ratio of approximately 90% and an end-to-end delay of 4 milliseconds in a scenario where the CSMA/CA scheme achieves a 60% delivery ratio and an 80-millisecond delay.
The most critical issue in generating and recognizing paraphrases is developing a wide-coverage paraphrase knowledge base. To cover paraphrases that should not necessarily be represented at the surface level, researchers have attempted to represent them with general transformation patterns. However, this approach does not prevent spurious paraphrases, because there has been no practical method to assess whether each instance of those patterns properly represents a pair of paraphrases. This paper addresses the measurement of the appropriateness of such automatically generated paraphrases, particularly targeting morpho-syntactic paraphrases of predicate phrases. We first specify the criteria that a pair of expressions must satisfy to be regarded as paraphrases. On the basis of these criteria, we then examine several measures for quantifying the appropriateness of a given pair of expressions as paraphrases of each other. In addition to existing measures, we examine a probabilistic model consisting of two distinct components. The first component is a structured N-gram language model that quantifies the grammaticality of automatically generated expressions. The second component approximates the semantic equivalence and substitutability of the given pair of expressions on the basis of the distributional hypothesis. Through an empirical experiment, we found (i) that contextual similarity is effective in combination with the constituent similarity of morpho-syntactic paraphrases and (ii) that the Web is versatile for representing the characteristics of predicate phrases.
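A distributional-hypothesis component of the kind described can be approximated, for illustration, by cosine similarity over bag-of-context-word counts. The context counts below are hypothetical, and the paper's actual similarity measure may differ:

```python
import math
from collections import Counter

def cosine_similarity(ctx_a, ctx_b):
    """Cosine similarity between two bag-of-context-words count vectors,
    a common approximation of distributional similarity."""
    shared = set(ctx_a) & set(ctx_b)
    dot = sum(ctx_a[w] * ctx_b[w] for w in shared)
    norm_a = math.sqrt(sum(v * v for v in ctx_a.values()))
    norm_b = math.sqrt(sum(v * v for v in ctx_b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

# Hypothetical context-word counts for two paraphrase candidates:
# expressions appearing in similar contexts score close to 1.0.
contexts_a = Counter({"buy": 3, "store": 2, "price": 1})
contexts_b = Counter({"buy": 2, "store": 1, "sell": 1})
sim = cosine_similarity(contexts_a, contexts_b)
```

A high score suggests the two expressions are substitutable in context, which is the intuition behind using distributional similarity as a paraphrase appropriateness signal.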
An anaphoric relation can be either direct or indirect. In some cases, the antecedent being referred to lies outside the discourse to which its anaphor belongs. An anaphora resolution model therefore needs to make the following two decisions in parallel: antecedent selection, selecting the antecedent itself, and anaphora type classification, classifying an anaphor as direct anaphora, indirect anaphora, or exophora. However, taking both decisions into account in anaphora resolution models raises non-trivial issues, since anaphora type classification has received little attention in the literature. In this paper, taking Japanese as our target language, we address three such issues: (i) how the antecedent selection model should be designed, (ii) what information helps with anaphora type classification, and (iii) in what order antecedent selection and anaphora type classification should be carried out. Our findings are as follows. First, an antecedent selection model should be trained separately for each anaphora type, using the information useful for identifying that type's antecedents. Second, the best candidate antecedent selected by an antecedent selection model provides contextual information useful for anaphora type classification. Finally, antecedent selection should be carried out before anaphora type classification.