Although the near-far effect has been considered the major issue preventing CDMA from being used in ad-hoc networks, in this paper we show that it is not a severe issue in inter-vehicle networks for safety driving support, where packets are generally transmitted in a broadcast manner. Indeed, the near-far effect provides extremely reliable transmissions between nearby nodes regardless of node density, which cannot be achieved by CSMA/CA. However, CDMA cannot be directly applied in realistic traffic accident scenarios, where highly reliable transmissions are also required between distant nodes. This paper proposes packet forwarding and transmission scheduling methods that expand the area in which reliable transmissions are achievable. Simulation results show that the proposed scheme significantly outperforms a CSMA/CA-based scheme in terms of delivery ratio and delay under realistic traffic accident scenarios. Specifically, the proposed scheme achieves a delivery ratio of approximately 90% and an end-to-end delay of 4 milliseconds in a scenario where the CSMA/CA scheme achieves a 60% delivery ratio and an 80 millisecond delay.
In this paper, we describe Topolo Surface, a 2D fiducial tracking system we developed. Topolo Surface is a prototype system that implements a novel fiducial tracking method based on the combination of topological region adjacency and angle information. Existing systems based only on topological region adjacency information, such as D-Touch and ReacTIVision, have several desirable features, including fast processing speed and robustness against false positive detection. Yet the method used in these systems also has several drawbacks. The unique ID range in existing topology-based methods is very narrow, and generating a set of such unique fiducial markers can be computationally very expensive, especially compared with existing matrix-based systems. Also, several useful techniques for improving robustness, such as CRC or Hamming distance, cannot be applied to existing topology-based systems. Our novel fiducial tracking method combines topological region adjacency and angle information. By using topological information together with geometrical information, our prototype system achieves a much larger unique ID range at a very low computational cost for generating its fiducial markers, while maintaining the desirable features of fast processing speed and robustness against false positives found in topology-based methods. Also, CRC or Hamming distance can be applied to our method to further improve robustness, if necessary.
Traditional video services on the Internet, such as streaming and video-on-demand (VoD), are broadcasting services by nature. Recent services incorporate more of the interactive nature of network applications, such as easy video sharing and chat functions. Meanwhile, we have been conducting experimental Internet broadcasting in practice and found it difficult for non-professional broadcasters to provide audiences with satisfactory content, since they lack the large budgets and technical knowledge of professional producers. In this paper, we propose an audience-driven broadcast service model in which audiences can send requests to the broadcaster during the broadcast, such as asking to see specific objects, and the broadcaster can reply to these requests. We implemented a prototype system for audience-driven live broadcasting and studied its effects and problems based on the results of experimental broadcasts at our university's graduation ceremony and campus festival. This paper reports our experiments and findings on audience-driven live broadcasting.
This paper discusses potential problems due to cultural differences that foreign companies may face in Brazil concerning information security. The top three investing countries in Brazil, namely the US, the Netherlands, and Japan, are examined. Potential problems concerning the management of people in information security are derived using Geert Hofstede's framework and the authors' experience in global business activities. To evaluate the magnitude of potential problems, a recently proposed measure called Level of Potential (LoP) is adopted. A survey was conducted in Brazil to evaluate the severity of the potential problems and the practicability of LoP. To examine the practicability of LoPs, the logical LoPs are compared with the surveyed severities. Our results show that LoP can predict problems to a certain extent in the Brazilian business environment. The results reveal that Japanese companies may face the fewest problems, while Dutch companies may face the most. “Using a previous company's confidential information” has the highest severity among the potential problems, since employees' beliefs encourage “teaching others.”
In this paper, the authors develop a technique for analyzing the motions of dances that have no stylized motion structure, focusing on joint motions. The variance-covariance matrix given by statistical analysis of the time-series data of joint motions is selected as the evaluation index characterizing dance motions. Applying the derived evaluation index to the representation of dissimilarity between dances is shown to be effective when the overall commonness of the two compared dances should be considered. It is also confirmed that applying multidimensional scaling (MDS) with orthogonal rotation of the coordinate axes is effective for extracting the distribution features of a database of dances. The evaluation items characterizing all the dances in the database are automatically extracted by analyzing the correlation between the coordinate axes given by MDS and the elements of the variance-covariance matrix.
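The variance-covariance index above can be illustrated roughly as follows. This is a minimal sketch, assuming joint motions are given as a (frames × joints) time series and that dissimilarity is a Frobenius-norm distance between the two dances' covariance matrices; the function names and distance choice are assumptions, not the paper's exact index.

```python
import numpy as np

def motion_covariance(joint_series: np.ndarray) -> np.ndarray:
    """Variance-covariance matrix of a (frames x joints) time series."""
    return np.cov(joint_series, rowvar=False)

def dance_dissimilarity(a: np.ndarray, b: np.ndarray) -> float:
    """Frobenius-norm distance between the two covariance matrices
    (an assumed simplification of the paper's dissimilarity index)."""
    return float(np.linalg.norm(motion_covariance(a) - motion_covariance(b), "fro"))

rng = np.random.default_rng(0)
dance_a = rng.normal(size=(300, 6))                        # 300 frames, 6 joint angles
dance_b = dance_a + rng.normal(scale=0.1, size=(300, 6))   # a similar performance
print(dance_dissimilarity(dance_a, dance_b))               # small for similar motions
```

A full database of such pairwise distances would then be the input to the MDS step described above.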
Recently, ubiquitous Internet-access services have been provided by Internet service providers (ISPs) deploying wireless local area networks (LANs) in public spaces such as stations, hotels, and coffee shops. The IEEE 802.1X protocol is usually used for user authentication to allow only authorized users to access the services. However, personal information such as users' access locations, services, and operations can easily be collected by ISPs, so its strict management has been demanded; this becomes very difficult when multiple cooperating ISPs provide roaming services. In this paper, we present an anonymous IEEE 802.1X authentication system using a group signature scheme that allows user authentication without revealing user identities. Without user identities, ISPs cannot collect personal information. As an efficient revocable group signature scheme, we adopt the verifier-local revocation (VLR) type with some modifications to exploit fast pairing computation. We describe the implementation of our proposal and present evaluation results that confirm the practicality of our system for up to 1,000 revoked users.
Security software such as anti-virus software and personal firewalls is usually installed on every host within an enterprise network. There are mainly two kinds of security software: signature-based and anomaly-based. Anomaly-based software generally has a “threshold” that discriminates between normal traffic and malware communications in network traffic observation. This threshold corresponds to the number of packets used for behavior checking by the anomaly-based software; it also indicates the number of packets an infected host can send before it is contained. In this paper, we propose a mathematical model based on combinatorics, a branch of discrete mathematics, which is suitable for situations with a small number of infected hosts. Our model can estimate the threshold at which the number of infected hosts is suppressed to a small number. The results from our model fit very well with computer simulations using typical existing scanning malware on a typical network.
Reprogramming sensor nodes is important for managing sensor networks. The latest reprogramming protocols use radio communication to distribute software data. In multi-base station sensor networks, the placement of the base stations affects several wireless reprogramming performance metrics. We developed a method for placing base stations, and we evaluated the features of software dissemination for multi-base station sensor networks. Simulations showed that the placement and number of base stations and the number of data segments were the key parameters in software dissemination.
This paper discusses a buffering strategy for delay-tolerant multimedia sensor networks (DTMSNs), whose typical application is video surveillance. In a DTMSN, a sensor node observes events around it and stores the data in its own buffer memory, and all the data is eventually collected at the sink. Sensor nodes are constrained in both buffer memory and battery capacity, and the entire data size is much larger than a single node's memory. Thus, a buffering strategy satisfying these constraints is a critical issue for DTMSNs. In this paper, we propose a novel buffering scheme for DTMSNs called cooperative buffering (CB). In CB, a sensor node holding a large amount of data cooperates with its neighbor nodes to buffer the data in a distributed manner. CB uses mobile sinks: the cooperatively buffered data are transmitted directly to a mobile sink when it arrives. After proposing CB, this paper discusses an extension for easy data collection by the sink, an extension for multiple source nodes, and several sink mobility strategies. It evaluates the power consumption of CB via theoretical formulation and computer simulation. The results show that the proposed CB can handle multimedia data while operating at low power.
Many safety applications in Vehicular Ad hoc Networks (VANETs) are based on broadcast. Designing a broadcast protocol that satisfies VANET applications' requirements is therefore crucial. In this paper, we propose a reliable and efficient multi-hop broadcast routing protocol for VANETs. The proposed protocol provides strict reliability under various traffic conditions and also incurs low overhead by reducing rebroadcast redundancy in high-density network environments. We further propose an enhanced multipoint relay (MPR) selection algorithm that considers vehicle mobility and use it for relay node selection. We analyze the performance of the proposed protocol through ns-2 simulations under different conditions, and the simulation results demonstrate its effectiveness compared with other VANET broadcast schemes.
Effective bandwidth utilization and scalability are vital issues for IP networking over a large-scale uni-directional link (UDL), such as a wide-area wireless broadcast over satellite or terrestrial digital broadcasting. On a large-scale UDL, the current network architecture does not scale to the extraordinary number of receivers that communicate using the Link-layer Tunneling Mechanism (LLTM). This paper proposes a network architecture for a large-scale UDL that (1) decreases the LLTM traffic load on the network upstream of the UDL, (2) coordinates the data link layer and network layer of receivers without communication via the UDL, and (3) enables neighbor discovery for direct communication between receivers via the bi-directional link used as the return path for LLTM. Simulation results showed that our approach reduces the control messages sent via the UDL by more than 90% compared with IPv6 stateless address autoconfiguration on the existing network architecture. Our proposal improves the UDL bandwidth consumption from O(N) to O(1), so that the bulk of the bandwidth can be used for delivering services rather than for network configuration of receivers.
In recent years, 3D-LSIs consisting of several stacked silicon layers have been developed and have attracted attention. For the floorplanning of 3D-LSIs, the rectangular solid dissection, which is a dissection of a rectangular solid into smaller rectangular solids by planes, has also attracted attention and been studied extensively. However, few properties of rectangular solid dissections have been clarified. This paper presents the relation between the number of rooms and the number of walls in a rectangular solid dissection.
Power conservation has become a serious concern in people's daily lives. Ubiquitous computing technologies clearly provide a potential way to realize a more environmentally friendly lifestyle. In this paper, we propose a ubiquitous power management system called Gynapse, which uses multi-modal sensors to predict the exact usage of each device and then switches power modes based on the predicted usage to maximize the total energy saving under the constraint of user-required response time. We build a three-level Hierarchical Hidden Markov Model (HHMM) to represent and learn device-level usage patterns from multi-modal sensors. Based on the learned HHMM, we develop a predictive mechanism in a Dynamic Bayesian Network (DBN) scheme to precisely predict the usage of each device, taking the user-required response time into consideration. Based on the predicted usage, we follow a four-step process to balance the total energy saving and the response time of devices by switching their power modes accordingly. Preliminary results demonstrate that Gynapse can reduce power consumption while keeping the response time within the user's requirement, and that it provides a complementary approach to previous power management systems.
This paper presents an evolutionary synthesis of feature extraction programs for object recognition. The synthesis method is based on linear genetic programming combined with redundancy-removed recombination, and it can automatically construct feature extraction programs for a given object recognition problem without any domain-specific knowledge. Experiments were conducted on a lawn weed detection problem with both a low-level performance measure (segmentation accuracy) and an application-level performance measure (simulated weed control performance). The results show that, compared with four human-designed lawn weed detection methods, the synthesized feature extraction programs perform significantly better than three of the human-designed methods on the low-level measure and better than two of them on the application-level measure.
Agent-based middleware that can adapt to dynamically changing environments is a significant direction for system development in ubiquitous computing environments. In this paper, we focus on the communication infrastructure of agent-based middleware in such environments. We propose an adaptive communication mechanism between agent platforms that can flexibly select communication schemes based on the properties of inter-agent communication and on resource status. We designed the proposed mechanism and implemented a prototype system. Furthermore, we performed an initial experiment using the prototype system on a network environment with two types of access network. We confirmed that dynamic selection of the inter-platform communication scheme works effectively according to changes in network resource status. The experimental results show that efficiency is improved by 5% and stability by 22% compared with the traditional mechanism.
Many studies aim to use port-scan traffic data for fast and accurate detection of rapidly spreading worms. This paper proposes two new methods for reducing the traffic data to a simplified form comprising significant components of smaller dimensionality. (1) Dimension reduction via Principal Component Analysis (PCA), widely used as a tool in exploratory data analysis, enables estimation of how uniformly the sensors are distributed over the reduced coordinate system. PCA gives a scatter plot of the sensors, which helps to detect abnormal behavior in both the source address space and the destination port space. (2) A significant application of PCA is reducing the number of sensors without losing estimation accuracy. Our PCA-based method allows redundant sensors to be discarded and the number of packets to be estimated, even when half of the sensors are unavailable, to within 3% of the total number of packets. In addition, we report on experiments using the Internet Scan Data Acquisition System (ISDAS) distributed observation data from the Japan Computer Emergency Response Team (JPCERT).
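The dimension-reduction step described above can be sketched as follows. This is a minimal illustration assuming the observations form a (sensors × features) packet-count matrix, e.g., per-port counts; the function name and toy data are assumptions, not the paper's actual pipeline.

```python
import numpy as np

def pca_reduce(counts: np.ndarray, k: int) -> np.ndarray:
    """Project sensor observations onto the top-k principal components.

    counts: (n_sensors x n_features) matrix, e.g., per-port packet counts.
    Returns the k-dimensional scores used for a sensor scatter plot."""
    centered = counts - counts.mean(axis=0)
    # SVD of the centered data gives the principal axes directly.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T

rng = np.random.default_rng(1)
counts = rng.poisson(lam=20, size=(50, 1024)).astype(float)  # 50 sensors, 1024 ports
scores = pca_reduce(counts, k=2)
print(scores.shape)  # → (50, 2)
```

Sensors whose scores lie far from the main cluster in such a plot would be candidates for the abnormal behavior mentioned above.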
Creating a security policy for SELinux is difficult because the number of access rules often exceeds 10,000, and elements in the rules, such as permissions and types, are understandable only to SELinux experts. The most popular way to facilitate creating a security policy is refpolicy, which is composed of macros and sample configurations. However, describing and verifying refpolicy-based configurations is difficult because the complexity of the configuration elements remains, using the macros requires expertise, and there are more than 100,000 configuration lines. The memory footprint of refpolicy, around 5 MB by default, is also a problem for resource-constrained devices. We propose a system called SEEdit that facilitates creating a security policy through a higher-level language called SPDL and a set of SPDL tools. SPDL reduces the number of permissions through integrated permissions and removes type configurations. The SPDL tools generate security policy configurations from access logs and from the tool user's knowledge about applications. Experimental results on an embedded system and a PC system show that practical security policies can be created with SEEdit: describing configurations is semi-automated, the created security policies consist of fewer than 500 configuration lines and 100 configuration elements, and the memory footprint on the embedded system is less than 500 KB.
The purpose of this paper is to propose a quantitative approach for the effective and efficient assessment of risks related to information security. Although several other approaches to measuring information security (IS) related risk have been proposed, they are either inapplicable to real enterprises' IT landscapes or are qualitative in nature, i.e., based on subjective decisions of the implementation team, and thus may suffer from a significant degree of speculation. In contrast, our approach is based on objective statistical data, provides quantitative results, and can be easily applied to any enterprise in any industry, or to any non-profit organization. An example of the application of the proposed approach to a real enterprise is also provided. The only prerequisite for the proposed methodology is a sufficient amount of incident statistics collected under the conditions described later in this paper. The motivation for this research is that IS-related risk assessment is one of the procedures required to manage information security, and IS management has recently become one of the highest concerns for most organizations and enterprises, driven not only by the growth of hackers' activity but also by increasing legal requirements and compliance issues.
Given a problem instance that is partially solved, we want to minimize the effort to solve the problem using that information. In this paper we introduce an entropy measure, H(S), for the uncertainty in partially solved input data S(X) = (X1, ..., Xk), where X is the entire data set and each Xi is already solved. We propose a generic algorithm that merges the Xi's repeatedly and finishes when k becomes 1. We use the entropy measure to analyze three example problems: sorting, shortest paths, and minimum spanning trees. For sorting, Xi is an ascending run; for minimum spanning trees, Xi is interpreted as a partially obtained minimum spanning tree of a subgraph; for shortest paths, Xi is an acyclic part of the given graph. When k is small, the graph can be regarded as nearly acyclic. The entropy measure H(S) is defined by regarding pi = |Xi|/|X| as a probability, that is, H(S) = -n(p1 log p1 + ... + pk log pk), where n = |X1| + ... + |Xk|. We show that we can sort the input data S(X) in O(H(S)) time, and that we can complete the minimum cost spanning tree in O(m + H(S)) time, where m is the number of edges. We then solve the shortest path problem in O(m + H(S)) time. Finally, we define a dual entropy on the partitioning process, whereby we give time bounds on a generic quicksort and on the shortest path problem for another kind of nearly acyclic graph.
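The generic merge algorithm can be illustrated for the sorting case as follows. This is a sketch, not necessarily the paper's exact algorithm: merging the two shortest runs first (as in Huffman coding) is one standard way to make the total merge cost track the entropy of the run lengths, since short runs participate in few merges.

```python
import heapq

def merge_runs(runs):
    """Sort data given as ascending runs X1..Xk by repeatedly merging
    the two shortest runs until k == 1 (Huffman-style merge order)."""
    # Heap entries carry a tiebreak counter so lists are never compared.
    heap = [(len(r), i, r) for i, r in enumerate(runs)]
    heapq.heapify(heap)
    counter = len(runs)
    while len(heap) > 1:
        _, _, a = heapq.heappop(heap)
        _, _, b = heapq.heappop(heap)
        merged, i, j = [], 0, 0
        while i < len(a) and j < len(b):   # standard two-way merge
            if a[i] <= b[j]:
                merged.append(a[i]); i += 1
            else:
                merged.append(b[j]); j += 1
        merged.extend(a[i:]); merged.extend(b[j:])
        heapq.heappush(heap, (len(merged), counter, merged))
        counter += 1
    return heap[0][2]

print(merge_runs([[1, 4, 9], [2, 3], [5, 6, 7, 8]]))
# → [1, 2, 3, 4, 5, 6, 7, 8, 9]
```

With a single long run and many length-1 runs, most elements are touched only a few times, which is the intuition behind the O(H(S)) sorting bound above.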
Document clustering is the process of partitioning a set of unlabeled documents into clusters such that documents within each cluster share some common concepts. To analyze the clusters easily, it is convenient to represent the concepts by key terms. However, using terms as features represents text data in a very high-dimensional vector space, and the computational cost is high. Note that text data is highly sparse, and not all weights in the cluster centers are important for classification. Based on this observation, we propose a comparative-advantage-based clustering algorithm that can identify the relative strengths of clusters and then preserve and amplify those strengths. Since the vectors are represented by term frequency, the clustering results are more comprehensible than those of dimensionality reduction methods. Experimental results show that the proposed algorithm keeps the characteristics of the k-means algorithm at a much lower computational cost. Moreover, we found that the proposed method has a higher chance of obtaining better results.
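The k-means baseline referred to above can be sketched minimally over term-frequency vectors as follows; the comparative-advantage algorithm itself is not reproduced here, and the function name, initialization choice, and toy data are all assumptions for illustration.

```python
import numpy as np

def kmeans_tf(X: np.ndarray, k: int, iters: int = 20):
    """Plain k-means over (documents x terms) term-frequency vectors,
    using a deterministic farthest-point initialization."""
    centers = [X[0].astype(float)]
    while len(centers) < k:                 # pick the point farthest from all centers
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[d.argmax()].astype(float))
    centers = np.array(centers)
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)       # nearest-center assignment
        for c in range(k):                  # recompute cluster centers
            members = X[labels == c]
            if len(members):
                centers[c] = members.mean(axis=0)
    return labels, centers

# Two tiny "topics": docs 0-2 use terms 0-1, docs 3-5 use terms 2-3.
X = np.array([[3, 2, 0, 0], [2, 3, 0, 0], [4, 1, 0, 0],
              [0, 0, 2, 3], [0, 0, 3, 2], [0, 0, 1, 4]], dtype=float)
labels, _ = kmeans_tf(X, k=2)
print(labels)  # → [0 0 0 1 1 1]
```

The per-iteration distance computation over all terms is the high-dimensional cost the proposed sparse, strength-based approach is designed to avoid.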
A continuously sized circuit resulting from transistor sizing consists of gates with a large variety of sizes. In a standard-cell-based design flow, where every gate is implemented by a cell, a large number of different cells need to be prepared to implement an entire circuit. In this paper, we first provide a formal formulation of the performance-constrained different-cell-count minimization problem, and then propose an effective heuristic that iteratively minimizes the number of cells under performance constraints such as area, delay, and power. Experimental results on the ISCAS 85 benchmark circuits, implemented in a 90 nm fabrication technology, demonstrate that different-cell counts are reduced by 74.3% on average while accepting a 1% delay degradation. We also demonstrate that, compared with circuits using a typical discretely sized cell library, the proposed method can generate better circuits using the same number of cells.
In this paper, the authors adapt the rules used in the grouping structure analysis of Lerdahl and Jackendoff's “A Generative Theory of Tonal Music” (GTTM) to dance motion analysis. This adaptation enables hierarchical segmentation of dance motion. The resulting analysis method consists of the following procedures. A motion-capture data stream of a dance is first divided into a sequence of events by piecewise linear regression. The hierarchical structure of groups, each consisting of a sequence of events, is then extracted by applying the grouping rules adapted to dance motion analysis. The method is applied to motion-data streams acquired by motion capture systems. The obtained results indicate the following advantages: (1) the hierarchical segmentation structure is precisely extracted in response to the characteristics of the analyzed dance; (2) the hierarchical segmentation opens the possibility of developing a technique for distinguishing oversegmentation from regular boundaries; and (3) the information in the hierarchical segmentation may be useful for comparing dance performances.
This paper investigates the relationship between “error feedback” (feedback given when tracking or trajectory errors are made) and user performance in steering tasks. We conducted experiments examining feedback presented in visual, auditory, and tactile modalities, both individually and in combination. The results indicate that feedback significantly affects the accuracy of steering tasks but not the movement time, and that users perform most accurately with tactile feedback. This paper contributes to the basic understanding of “error feedback” and how it impacts steering tasks, and offers insights and implications for the future design of multimodal feedback mechanisms for steering tasks.