A continuously sized circuit resulting from transistor sizing consists of gates with a large variety of sizes. In the standard-cell-based design flow, where every gate is implemented by a cell, a large number of different cells must be prepared to implement an entire circuit. In this paper, we first give a formal formulation of the performance-constrained different-cell-count minimization problem, and then propose an effective heuristic that iteratively minimizes the number of cells under performance constraints such as area, delay, and power. Experimental results on the ISCAS 85 benchmark circuits implemented in a 90 nm fabrication technology demonstrate that different cell counts are reduced by 74.3% on average while accepting a 1% delay degradation. Compared with circuits using a typical discretely sized cell library, we also demonstrate that the proposed method can generate better circuits using the same number of cells.
In this paper, we present an algorithm for minimizing the number of states of a linear separation automaton (LSA). An LSA is an extended model of a finite automaton: it accepts a sequence of real vectors, and has a weight and a threshold sequence at every state, which determine the transition from the current state to the next at each step. In our previous paper, we characterized an LSA and the minimum-state LSA. The minimum-state version of a given LSA M is obtained by the algorithm presented in this paper. Its time complexity is O((K + k)n²), where K is the maximum number of threshold values assigned to each weight, k is the maximum number of edges going out from a state of M, and n is the number of states in M. Moreover, we discuss the minimization of the threshold sequence at each state.
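The abstract above concerns the specialized LSA model, but the family of techniques it belongs to can be illustrated on an ordinary DFA. The sketch below is a hedged, illustrative example of classical partition-refinement state minimization; the function names and the example automaton are inventions for illustration, not taken from the paper.

```python
# Illustrative sketch (not the paper's LSA algorithm): classical
# partition-refinement minimization of an ordinary DFA. States are merged
# by repeatedly splitting blocks whose members transition to different blocks.

def minimize_dfa(states, alphabet, delta, accepting):
    """Merge equivalent states by refining {accepting, non-accepting}."""
    partition = [set(accepting), set(states) - set(accepting)]
    partition = [p for p in partition if p]
    changed = True
    while changed:
        changed = False
        new_partition = []
        for block in partition:
            # Group states by which block each input symbol leads to.
            groups = {}
            for s in block:
                key = tuple(
                    next(i for i, b in enumerate(partition) if delta[s][a] in b)
                    for a in alphabet
                )
                groups.setdefault(key, set()).add(s)
            new_partition.extend(groups.values())
            if len(groups) > 1:
                changed = True
        partition = new_partition
    return partition

# Example: states 1 and 2 behave identically, so 4 states minimize to 3.
delta = {
    0: {'a': 1, 'b': 2},
    1: {'a': 3, 'b': 3},
    2: {'a': 3, 'b': 3},
    3: {'a': 3, 'b': 3},
}
blocks = minimize_dfa([0, 1, 2, 3], ['a', 'b'], delta, [3])
print(len(blocks))  # 3
```

The LSA algorithm additionally has to account for weights and threshold sequences at each state, which is where the K and k factors in its O((K + k)n²) complexity come from.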
This paper presents mathematical and general models of electronic money systems. The goal of the paper is to propose a first framework in which various kinds of e-money systems can be uniformly represented and their security properties evaluated and compared. We introduce two kinds of e-money system models: a note-type e-money system model and a balance-type e-money system model. We show that a balance-type e-money system with efficient data transmission cannot be simulated by any note-type e-money system, which implies that balance-type e-money systems are strictly faster in data communication. We then show that a forged monetary value can be detected in some note-type e-money systems, while it cannot be detected in any balance-type e-money system with efficient data communication, which suggests that note-type e-money systems are more secure.
Experimentally verified protein-protein interactions (PPIs) cannot be easily retrieved by researchers unless they are stored in PPI databases. The curation of such databases can be accelerated by employing text-mining systems to identify genes that play the interactor role in PPIs and to map these genes to unique database identifiers, a task referred to as interactor normalization (INT). Our previous INT system won first place in the BioCreAtIvE II.5 INT challenge by exploiting the different characteristics of individual paper sections to guide gene normalization (GN) and by using a support-vector-machine (SVM)-based ranking procedure. In this paper, we present a new relational re-ranking algorithm that considers the associations among identifiers to further improve INT ranking results. The best AUC achieved by our original system in the BioCreAtIvE II.5 INT offline challenge was 0.435; after employing the proposed re-ranking algorithm, we improve our system's AUC to 0.447.
Type III secretion systems (T3SSs) deliver bacterial proteins, or “effectors”, into eukaryotic host cells, inducing physiological responses in the hosts. Effector proteins have been considered virulence factors of pathogenic bacteria, but T3SSs have now been found in symbiotic bacteria as well. Whether any physicochemical difference exists between the two types of effectors remains unknown. In this work, we combined computational statistical and machine learning methods to identify features that could be responsible for the difference: for the statistical method we used the generalized Bayesian information criterion and kernel logistic regression, and for the machine learning method we used a support vector machine. We clearly show that differences in amino acid composition exist between pathogenic and symbiotic effector proteins. All identified discriminating features concern amino acid composition and average residue weight, and their classification performance is nearly identical to that obtained using all physicochemical features, with sensitivity and specificity over 80%. Further analysis of the seven discriminating features by graphical modeling revealed three dominant features among them. Moreover, amino acid regions distinctive for the seven features were explored by sliding-window analysis. This study provides a methodological basis and important insights into the functional differences between pathogenic and symbiotic T3SS effectors.
Traditional XML query processing methods, such as XPath and XQuery, are fragile to changes in the underlying XML structure, because path expressions cannot accommodate structural variations that arise when XML data are designed or updated. In this paper, we discuss the problem of processing path-free XML queries in a pure RDBMS. Ideally, all possible structural variations of a given path-free XML query would be enumerated, but an efficient implementation is non-trivial because of the combinatorial explosion of potential variations. Two further challenges are that an RDBMS offers neither an ideal query plan nor efficient XML structural join algorithms. We make the RDBMS aware of tree structures by adding XML-specific information, and supply functional dependencies (FDs) among attributes in an amoeba join to eliminate unfavorable results and achieve a marked reduction in the query space. Processing XML queries efficiently in a pure RDBMS bridges the gap between XML and relational databases. Experiments carried out on SQL Server show orders-of-magnitude improvement over the naïve implementation, demonstrating the feasibility of path-free XML query processing using a pure RDBMS kernel.
The topic of this paper is wide-area structure from motion. We first describe recent progress in obtaining large-scale 3D visual models from images. Our approach consists of a multi-stage processing pipeline that can process a recorded video stream in real time on standard PC hardware by leveraging the computational power of the graphics processor. The output of this pipeline is a detailed textured 3D model of the recorded area. The approach is demonstrated on video data recorded in Chapel Hill containing more than a million frames. While these results used GPS and inertial sensor data, we further explore the possibility of extracting the information necessary for consistent 3D mapping over larger areas from images alone. In particular, we discuss our recent work on estimating the absolute scale of motion from images and on finding intersections where the camera path crosses itself to effectively close loops in the mapping process. For this purpose we introduce viewpoint-invariant patches (VIPs), a new 3D feature extracted from 3D models locally computed from the video sequence. These 3D features have important advantages over traditional 2D SIFT features, such as much stronger viewpoint invariance, a relative pose hypothesis from a single match, and a hierarchical matching scheme naturally robust to repetitive structures. In addition, we briefly discuss further work related to absolute scale estimation and multi-camera calibration.
Structure from motion (SfM) and appearance-based segmentation have played an important role in the interpretation of road scenes. Integrating these approaches can improve interpretation performance because the relation between 3D spatial structure and 2D semantic segmentation can be taken into account. This paper presents a new integration framework using an SfM module and a bag-of-textons method for road scene labeling. By using a multiband image, which consists of a near-infrared and a visible color image, we can generate more discriminative textons than those generated from a color image alone. Our SfM module accurately estimates the ego-motion of the vehicle and reconstructs a 3D structure of the road scene. The bag of textons is computed over local rectangular regions whose size depends on the distance of the textons. The 3D bag-of-textons method can therefore effectively recognize the objects of a road scene because it considers each object's 3D structure. For the labeling problem, we employ a pairwise conditional random field (CRF) model: the unary potential of the CRF is informed by the SfM results, and the pairwise potential is optimized using the multiband image intensity. Experimental results show that the proposed method can effectively classify the objects in a 2D road scene with 3D structures, and suggest that the proposed system can advance 3D scene understanding for vehicle environment perception.
This paper proposes a new color calibration method for multi-viewpoint images captured by sparsely and convergently arranged cameras. The main contribution of our method is its practical and efficient procedure, whereas traditional methods are known to be labor-intensive. Because our method automatically extracts 3D points in the scene for color calibration, we do not need to capture color calibration objects such as a Macbeth chart. This enables us to calibrate a set of multi-viewpoint images even when the capture environment is no longer available. Experiments with real images show that our method can minimize the differences in pixel values, (1) quantitatively by leave-one-out evaluation, and (2) qualitatively by rendering a 3D video.
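The abstract does not give the calibration model, but the idea of fitting a per-channel correction from corresponding colors and validating it leave-one-out can be sketched as follows. This is a hedged illustration assuming a simple gain/offset model; the function names and data are invented for the example and are not the paper's method.

```python
# Illustrative sketch: fit a per-channel gain/offset between two cameras
# from colors of shared 3D points, then score it by leave-one-out error.
# The linear model here is an assumption made for illustration.

def fit_gain_offset(src, dst):
    """Least-squares a, b minimizing sum((a*s + b - d)^2)."""
    n = len(src)
    ms, md = sum(src) / n, sum(dst) / n
    cov = sum((s - ms) * (d - md) for s, d in zip(src, dst))
    var = sum((s - ms) ** 2 for s in src)
    a = cov / var
    return a, md - a * ms

def loo_error(src, dst):
    """Mean absolute leave-one-out prediction error."""
    errs = []
    for i in range(len(src)):
        s = src[:i] + src[i + 1:]
        d = dst[:i] + dst[i + 1:]
        a, b = fit_gain_offset(s, d)
        errs.append(abs(a * src[i] + b - dst[i]))
    return sum(errs) / len(errs)

# Toy data satisfying dst = 2*src + 5 exactly.
src = [10, 20, 30, 40]
dst = [25, 45, 65, 85]
a, b = fit_gain_offset(src, dst)
print(round(a, 3), round(b, 3))  # 2.0 5.0
```

Leave-one-out evaluation, as in the abstract's quantitative experiment, measures how well a calibration fitted without a point predicts that point's color.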
This paper proposes a method for acquiring the prior probability of human existence from past human trajectories and the color of an image. Such priors play an important role in human detection as well as in scene understanding. The proposed method is based on the assumption that a person is likely to appear again in an area where he or she has appeared in the past. To acquire the priors efficiently, a high prior probability is assigned to areas having the same color as past human trajectories. We use a particle filter for representing and updating the prior probability, so a complex prior can be represented with only a few parameters. Through experiments, we confirmed that the proposed method can acquire the prior probability efficiently and use it to realize highly accurate human detection.
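One step of the particle-filter idea described above can be sketched as follows. This is a hedged, minimal illustration of the mechanism (reweight particles by color similarity to past trajectories, then resample), not the authors' implementation; all names and parameters are illustrative.

```python
# Illustrative sketch: maintain a prior map of human existence with a
# particle filter, boosting particle weights where the image color matches
# colors observed along past human trajectories.
import random

def update_prior(particles, image, trajectory_colors):
    """One filter step: reweight by color similarity, then resample."""
    weights = []
    for (x, y) in particles:
        r, g, b = image[y][x]
        # Similarity of this pixel to the closest past-trajectory color
        # (L1 distance, normalized to [0, 1]).
        d = min(abs(r - tr) + abs(g - tg) + abs(b - tb)
                for (tr, tg, tb) in trajectory_colors)
        weights.append(max(1e-6, 1.0 - d / (3 * 255.0)))
    # Resample: particles concentrate where the prior should be high.
    return random.choices(particles, weights=weights, k=len(particles))
```

After a few updates, the fraction of particles in a region approximates the prior probability of human existence there, which is how a few hundred particles can stand in for a dense probability map.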
We propose a new background modeling method based on a combination of multiple models. Our method consists of three complementary approaches. The first, pixel-level background modeling, approximates the background model with a probability density function (PDF) estimated non-parametrically by Parzen density estimation; it can thus adapt to periodic changes in pixel values. The second, region-level background modeling, evaluates the local texture around each pixel, which reduces the effects of variations in lighting and adapts to gradual changes in pixel values. The third, frame-level background modeling, detects sudden, global changes in image brightness and estimates the present background image from the input image by referring to a model background image; foreground objects are then extracted by background subtraction. Integrating these approaches realizes robust object detection under varying illumination, whose effectiveness is shown in several experiments.
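The pixel-level stage can be sketched in a few lines. This is a hedged illustration of Parzen (kernel) density estimation over a pixel's recent intensity history only; the Gaussian kernel, bandwidth, and threshold below are illustrative choices, not values from the paper.

```python
# Illustrative sketch of pixel-level background modeling: a Parzen density
# estimate over a pixel's recent intensities; a new value is labeled
# background when its estimated density is sufficiently high.
import math

def parzen_density(history, value, bandwidth=5.0):
    """Non-parametric density of `value` given past pixel intensities."""
    norm = 1.0 / (len(history) * bandwidth * math.sqrt(2 * math.pi))
    return norm * sum(
        math.exp(-0.5 * ((value - h) / bandwidth) ** 2) for h in history
    )

def is_background(history, value, threshold=0.01):
    return parzen_density(history, value) >= threshold

history = [100, 102, 98, 101, 99, 100]  # a stable background pixel
print(is_background(history, 100))  # True: close to the history
print(is_background(history, 200))  # False: likely foreground
```

Because the density is estimated from the raw history rather than a single Gaussian, a pixel that alternates between several values (e.g. a flickering light or swaying leaves) still receives high background density at each of its recurring values.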
We propose a method to capture 3D video of an object that moves in a large area using active cameras. Our main ideas are to partition a desired target area into hexagonal cells, and to control active cameras based on these cells. Accurate camera calibration and continuous capture of the object with at least one third of the cameras are guaranteed regardless of the object's motion. We show advantages of our method over an existing capture method using fixed cameras. We also show that our method can be applied to a real studio.
We propose a novel wide-angle imaging system inspired by the compound eyes of animals. Instead of using a single lens well compensated for aberration, we use a number of simple lenses to form a compound eye that produces practically distortion-free, uniform images with angular variation. The images formed by the multiple lenses are superposed on a single surface for increased light efficiency. We use GRIN (gradient refractive index) lenses to create sharply focused images without the artifacts seen when using the reflection-based methods of X-ray astronomy. We show the theoretical constraints for forming a blur-free image on the image sensor, and derive a continuum of designs between 1:1 flat optics for document scanners and curved sensors focused at infinity. Finally, we show a practical application of the proposed optics in a beacon that measures the relative rotation angle between the light source and the camera while conveying ID information.
Omnidirectional multi-camera systems cannot capture the entire field of view because they cannot see the area directly below them. Such invisible areas in omnidirectional video diminish the sense of realism experienced when using a telepresence system. In this study, we generate omnidirectional video without invisible areas using an image completion technique. The proposed method compensates for the change in appearance of textures caused by camera motion and searches for appropriate exemplars considering three-dimensional geometric information. In our experiments, the effectiveness of the proposed method is demonstrated by successfully filling in missing regions in real video sequences captured with an omnidirectional multi-camera system.
In this paper, the authors adapt the rules used in the grouping structure analysis of Lerdahl and Jackendoff's “A Generative Theory of Tonal Music” (GTTM) to dance motion analysis. This adaptation enables hierarchical segmentation of dance motion. The resulting analysis method consists of the following procedures. A motion-capture data stream of a dance is first divided into a sequence of events by piecewise linear regression. The hierarchical structure of groups, each consisting of a sequence of events, is then extracted by applying the grouping rules adapted to dance motion. The method is applied to motion-data streams acquired by motion capture systems. The results indicate the following advantages: (1) the hierarchical segmentation structure is extracted precisely, reflecting the characteristics of the analyzed dance; (2) hierarchical segmentation offers a way to distinguish over-segmentation from regular boundaries; and (3) the hierarchical segmentation information is potentially useful for comparing dance performances.
This paper investigates the relationship between “error feedback” (feedback given when tracking or trajectory errors are made) and user performance in steering tasks. We conducted experiments examining feedback presented in visual, auditory, and tactile modalities, both individually and in combination. The results indicate that feedback significantly affects the accuracy of steering tasks but not the movement time, and that users perform most accurately with tactile feedback. This paper contributes to the basic understanding of error feedback and how it impacts steering tasks, and offers insights and implications for the future design of multimodal feedback mechanisms for steering tasks.
We present a novel technique for enhancing an image captured in low light by using near-infrared flash images. The main idea is to combine a color image with near-infrared flash images captured at the same time without interfering with the color image. In this work, near-infrared flash images are used to remove two artifacts commonly observed in images of dimly lit environments: image noise and motion blur. Our denoising method uses a pair of color and near-infrared flash images captured simultaneously; it is therefore applicable to dynamic scenes, whereas existing methods assume stationary scenes and require a pair of flash and no-flash color images captured sequentially. Our deblurring method utilizes a set of near-infrared flash images captured during the exposure time of a single color image and directly acquires a motion blur kernel based on optical flow. We implemented a multispectral imaging system and confirmed the effectiveness of our technique through experiments on real images.
This paper introduces a novel, efficient partial shape matching method named IS-Match. We use points sampled from the silhouette as the shape representation. The sampled points can be ordered, which in turn allows us to formulate the matching step as an order-preserving assignment problem. We propose an angle descriptor between shape chords that combines the advantages of global and local shape description. An efficient integral-image-based implementation of the matching step is introduced, which detects partial matches an order of magnitude faster than comparable methods. We further show how the proposed algorithm is used to calculate a globally optimal Pareto frontier to define a partial similarity measure between shapes. Shape retrieval experiments on standard shape datasets such as MPEG-7 show that state-of-the-art results are achieved at reduced computational cost.
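The integral-image trick that the fast matching step relies on is a general technique and can be sketched independently of IS-Match. After one O(n²) precomputation pass, the sum of any rectangular sub-block of a score matrix is available in O(1), which is what makes scanning many candidate match positions cheap. All names below are illustrative.

```python
# Illustrative sketch of the integral-image technique: constant-time
# rectangular sums after a single cumulative-sum pass.

def integral_image(m):
    """ii[i][j] = sum of m[0..i-1][0..j-1], with a zero border."""
    rows, cols = len(m), len(m[0])
    ii = [[0] * (cols + 1) for _ in range(rows + 1)]
    for i in range(rows):
        for j in range(cols):
            ii[i + 1][j + 1] = (m[i][j] + ii[i][j + 1]
                                + ii[i + 1][j] - ii[i][j])
    return ii

def block_sum(ii, r0, c0, r1, c1):
    """Sum of m[r0..r1-1][c0..c1-1] in constant time."""
    return ii[r1][c1] - ii[r0][c1] - ii[r1][c0] + ii[r0][c0]

m = [[1, 2], [3, 4]]
ii = integral_image(m)
print(block_sum(ii, 0, 0, 2, 2))  # 10, the sum of all entries
```

In a matching context, `m` would hold per-pair descriptor similarities, and each candidate alignment's score becomes a single `block_sum` call instead of a fresh summation.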
Automated Trust Negotiation (ATN) has been proposed as a mechanism for establishing mutual trust among strangers. Among the existing fundamental protocols and strategies, this paper focuses on the Parsimonious strategy. The most straightforward implementation of the Parsimonious strategy has very high memory consumption, which may be problematic in real-world environments. This paper proposes an implementation that keeps all requests in Disjunctive Normal Form (DNF) and further reduces memory consumption by exploiting the history of the negotiation, while keeping the completeness of the strategy intact. In addition, the proposed method provides a criterion for detecting negotiation failures. Simulation results show that the proposed method achieves its goals without increasing the overall computational overhead. A theoretical analysis of the proposed method is also presented.
In this paper, we study content-based spam detection for spam messages generated by copying a seed document with random perturbations. We propose an unsupervised detection algorithm based on an entropy-like measure called document complexity, which reflects how many similar documents exist in the input collection. Since document complexity is an ideal measure, like Kolmogorov complexity, we substitute an estimated occurrence probability of each document for its complexity. We also present an efficient algorithm that estimates the probabilities of all documents in the collection in time linear in their total length. Experimental results show that our algorithm works especially well for word-salad spam, which is believed to be difficult to detect automatically.
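The core intuition (perturbed copies of a seed share vocabulary, so they look unusually probable under a model of the whole collection) can be sketched with a simple word unigram model. This is a hedged illustration of the idea, not the paper's linear-time estimation algorithm; all names and data are illustrative.

```python
# Illustrative sketch: rank documents by an estimated occurrence
# probability under a collection-wide unigram model. Near-duplicate
# "word salad" spam scores as unusually probable (low complexity).
import math
from collections import Counter

def log_probability(doc, counts, total):
    """Log-probability of a document under the collection unigram model."""
    return sum(math.log(counts[w] / total) for w in doc.split())

def rank_by_complexity(docs):
    """Sort ascending by per-word log-probability: rarest-vocabulary
    (highest-complexity) documents first, likely spam last."""
    counts = Counter(w for d in docs for w in d.split())
    total = sum(counts.values())
    return sorted(docs,
                  key=lambda d: log_probability(d, counts, total)
                               / max(1, len(d.split())))

docs = ["buy cheap pills now", "buy cheap pills today",
        "buy cheap pills fast", "quarterly report attached regards alice"]
print(rank_by_complexity(docs)[0])  # the distinct (ham) document
```

The actual algorithm in the paper estimates these probabilities for all documents in time linear in their total length; the quadratic-looking sketch above only conveys what is being measured.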
We describe NCAP, a new network capture tool for distributed sensor systems. NCAP operates on messages rather than on packets, and so performs full IP reassembly at the point of measurement. The resulting data can either be managed as files or be transmitted as encapsulated UDP datagrams, either unicast or multicast. The NCAP library is highly portable, has C and Python interfaces, and offers a plug-in mechanism whereby analysis logic can be written separately, without regard to the handling of encapsulated datagrams or files. The primary application of NCAP is the Security Information Exchange, where cooperating distributed sensor operators now submit captured DNS traffic to a centralized location for subsequent long-running analysis. We show examples of value-added reprocessing and rebroadcast, as well as samples of captured traffic and of possible security problems illuminated by our analysis. These results show that NCAP makes it possible to capture, share, and analyze live network data on a larger scale than has ever been done.
This paper describes the implementation and evaluation of a link aggregation system using Network Mobility (NEMO). The system, called NEMO SHAKE, constructs a temporary network (called an alliance) among multiple mobile routers (MRs) carried by vehicles and aggregates the external links between the MRs and the Internet to provide fast and reliable transmission for mobile devices in the vehicles. We also designed a mechanism for controlling alliances: by estimating the distance and link condition between MRs, it achieves high throughput and stable aggregated paths between the vehicles and the Internet. We evaluated the usefulness of NEMO SHAKE and its alliance-control mechanism in real vehicular networks, and confirmed that the alliance-control mechanism achieves high throughput by changing the members of an alliance dynamically.
This paper presents a car navigation system for a future integrated transportation system forming a component of an Intelligent Transportation System (ITS). The system focuses on mechanisms for detecting the current position of each vehicle and for navigating each vehicle. Individual vehicular data are collected using the Global Positioning System (GPS) for location data and then transmitted to a control center via a mobile phone. For the purpose of this paper, the device is referred to as “Reporting Equipment for current geographic Position” (REP). If a great number of REP-equipped vehicles report their positions to the control center simultaneously, the load on the computational communications network (“network”) becomes heavy. However, if a car simply skips some reports, the control center can no longer estimate its correct position. This paper therefore proposes an algorithm that decreases the reporting frequency without sacrificing positional accuracy, thereby reducing the network load associated with large numbers of vehicles reporting at the same time. Compared with periodic reporting, the network load is reduced by 50-66%.
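One common way to realize this kind of report thinning is dead reckoning: the vehicle and the control center both extrapolate the last reported position and velocity, and the vehicle transmits only when its true position drifts past a tolerance. The sketch below is a hedged illustration of that general scheme; the paper's exact algorithm may differ, and the function name and tolerance are illustrative.

```python
# Illustrative sketch of dead-reckoning-based report suppression: skip a
# report while the center's extrapolated position stays within tolerance.

def should_report(last_report, velocity, true_pos, elapsed, tolerance=50.0):
    """Report only if the dead-reckoned position errs by > tolerance meters."""
    pred_x = last_report[0] + velocity[0] * elapsed
    pred_y = last_report[1] + velocity[1] * elapsed
    err = ((true_pos[0] - pred_x) ** 2 + (true_pos[1] - pred_y) ** 2) ** 0.5
    return err > tolerance

# Vehicle cruising exactly as predicted: no report needed.
print(should_report((0, 0), (10, 0), (100, 0), elapsed=10))   # False
# Vehicle turned off the predicted course: send an update.
print(should_report((0, 0), (10, 0), (100, 80), elapsed=10))  # True
```

Because the center runs the same extrapolation, every suppressed report is one the center could reconstruct within the tolerance anyway, which is how reporting frequency drops without sacrificing positional accuracy.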