Formal methods are mathematically based techniques for specifying, developing, and verifying a component or system in order to increase confidence in the reliability and robustness of the target. They can be applied at different levels with different techniques; one approach is to use model-oriented formal languages such as the VDM languages for writing specifications. During model development, we can test executable specifications in VDM-SL and VDM++. In a lightweight formal approach, we test formal specifications to increase our confidence, just as we do when implementing software in conventional programming languages. For this purpose, millions of tests may be conducted when developing highly reliable mission-critical software. In this paper, we introduce our approach to supporting a large volume of testing for executable formal specifications using Hadoop, an implementation of the MapReduce programming model. Using Hadoop, we can automatically distribute the interpretation of specifications written in the VDM languages. We also apply QuickCheck, a property-based data-driven testing tool, over MapReduce so that specifications can be checked with thousands of tests that would be infeasible to write by hand, often uncovering subtle corner cases that would not be found otherwise. We observed the effect on coverage and evaluated the scalability of our approaches when testing executable specifications with large amounts of data.
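The property-based testing loop described above can be sketched as follows; `quickcheck`, `int_list`, and the trial count are illustrative names for this sketch, not the actual VDM or Hadoop tooling.

```python
import random

def quickcheck(prop, gen, trials=1000, seed=0):
    """Minimal QuickCheck-style driver: evaluate property `prop` on `trials`
    random test cases drawn from generator `gen`; return the first
    counterexample found, or None if the property held on every sample."""
    rng = random.Random(seed)
    for _ in range(trials):
        case = gen(rng)
        if not prop(case):
            return case  # counterexample found
    return None

def int_list(rng):
    """Random test-case generator: a short list of small integers."""
    return [rng.randint(-100, 100) for _ in range(rng.randint(0, 10))]

# A property that holds: reversing a list twice yields the original list.
holds = quickcheck(lambda xs: list(reversed(list(reversed(xs)))) == xs, int_list)

# A property that fails on every input, so a counterexample is returned.
fails = quickcheck(lambda xs: len(xs) < 0, int_list)
```

In the MapReduce setting, each mapper would run such a loop as an independent shard with its own seed, and reducers would collect the counterexamples.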
The tussle in IP multicast, in which different enablers have interests that are adverse to one another, has brought inter-provider multicast deployment to a halt and created a situation in which enabling inter-domain multicast routing is considered a deterrent by network providers. This paper presents ODMT (On-demand Inter-domain Multicast Tunneling), an on-demand inter-provider multicast tunneling scheme that is autonomous in operation and manageable through definable policy control. In the architectural design, we propose a new approach to enabling inter-provider multicast by decoupling the control plane from the forwarding plane. By focusing on the control plane without changing the forwarding plane, our solution turns the traditional open multicast service model into a more manageable service model for inter-domain multicast operation, thereby easing Internet-wide multicast deployment.
Web services and cloud computing paradigms have opened up many new vistas. Data-intensive cloud applications usually need to read and write huge amounts of data from secondary storage systems. Outstanding progress in network communications has enabled high-speed networks, so the communication-latency bottleneck in cloud and other web applications has shifted to the node/storage level. However, existing cloud solutions have focused mainly on the efficient utilization of computing resources through virtualization, and the storage bottleneck has not received much attention. Moreover, virtualization-based implementations give equal priority to all hosted applications, so real-time applications in a cloud environment cannot meet their requirements. To meet the demand for overall low latency in cloud and other web services, and in particular to reduce the I/O bottleneck at the storage level, we propose the novel idea of autonomous L3 cache technology. Autonomous L3 cache technology uses local memory space as a dedicated block-device cache for a specific application, thus prioritizing it over the other hosted applications. Evaluation shows a performance improvement of 5-8 times in terms of timeliness in the given setup.
A Grid monitoring system is differentiated from a general monitoring system in that it must be scalable across wide-area networks, include a large number of heterogeneous resources, and be integrated with other Grid middleware in terms of naming and security. Grid monitoring is the act of collecting information concerning the characteristics and status of resources of interest. The Grid Monitoring Architecture (GMA) specification sets out the requirements and constraints of any implementation. It is based on a simple consumer/producer architecture with an integrated system registry, and it logically separates the transmission of monitoring data from data discovery. Many systems implement GMA, but all have drawbacks such as difficult installation, a single point of failure, or loss of message control. We therefore design a simple model after analyzing the requirements of Grid monitoring and information services, and propose a Grid monitoring system based on GMA. The proposed system consists of producers, a registry, consumers, and a failover registry. The registry is used to match a consumer with one or more producers, so it is the core of the monitoring system. The failover registry is used to recover from any failure of the main registry. The proposed system is built on Java Servlets and the SQL query language, which makes it flexible and scalable. We try to solve problems of previous Grid monitoring systems, such as the lack of data-flow control and the single point of failure in R-GMA, and the installation difficulty of MDS4. First, we solve the single-point-of-failure problem by adding a failover registry that can recover from any failure of the registry node. Second, we design the system components to be easy to install and maintain: the proposed system combines only a few subsystems, and its update frequency is low.
Third, load balancing is added to the system to overcome message overload. We evaluate the performance of the system by measuring response time, utilization, and throughput. In all evaluations, the results with load balancing are better than those without it. Finally, we compare the proposed system with three other monitoring systems, and we also compare the four types of load-balancing algorithms.
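As one concrete example of the kind of policy compared in such an evaluation (the four algorithms themselves are not named here, so this is purely illustrative), a round-robin balancer that spreads consumer requests across registry replicas might look like:

```python
from itertools import cycle

class RoundRobinBalancer:
    """Round-robin load balancing: hand out targets in a fixed rotation,
    so no single registry node is overloaded with messages."""

    def __init__(self, targets):
        self._ring = cycle(targets)

    def pick(self):
        return next(self._ring)

# Hypothetical registry replica names, for illustration only.
balancer = RoundRobinBalancer(["registry-1", "registry-2", "registry-3"])
picks = [balancer.pick() for _ in range(4)]
```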
The objective of peer-to-peer content delivery networks is to deliver copyrighted content to paying clients in an efficient and secure manner. To protect such content from being distributed to unauthorized peers, Lou and Hwang proposed a proactive content-poisoning scheme to restrain illegal downloads by unauthorized peers, and a scheme to identify colluders who leak content to such peers. In this paper, we propose three schemes that extend Lou and Hwang's colluder detection scheme in two directions: the first introduces intensive probing to check suspected peers, and the second adopts a reputation system to select reliable (non-colluder) peers as decoys. The performance of the resulting schemes is evaluated by simulation. The simulation results indicate that the proposed schemes detect all colluders about 30% earlier on average than the original scheme, while keeping the accuracy of colluder detection at a medium collusion rate.
With the explosive expansion of the Internet, fundamental and popular Internet services such as the WWW and e-mail have become increasingly important and are indispensable to human social activities. As a technique for operating these systems reliably and efficiently, multihomed networks have attracted much attention. However, conventional route selection mechanisms for multihomed networks have problems with the appropriateness of route selection and with dynamic traffic balancing, which are two key criteria for applying multihomed networks. In this paper, we propose an improved dynamic route selection mechanism based on multipath DNS (Domain Name System) round-trip times to address these problems. Evaluation results on a WWW system and an e-mail system indicate that the proposal is effective both for proper route selection based on network status and for dynamic traffic balancing on multihomed networks, and we also confirmed that it resolves the problems that occur with conventional mechanisms.
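A minimal sketch of RTT-driven route selection, assuming smoothed RTT samples are gathered per upstream link (e.g., from DNS queries sent over each link, as in the multipath-DNS approach); the link names and the EWMA constant are illustrative assumptions:

```python
def select_route(rtt_samples):
    """Pick the upstream link with the smallest smoothed round-trip time.

    rtt_samples maps a link name to a list of RTT measurements in ms,
    e.g. RTTs of DNS queries sent over each candidate link."""
    def smoothed(samples, alpha=0.125):
        # Exponentially weighted moving average (same constant as TCP's SRTT).
        s = samples[0]
        for r in samples[1:]:
            s = (1 - alpha) * s + alpha * r
        return s
    return min(rtt_samples, key=lambda link: smoothed(rtt_samples[link]))

best = select_route({"ISP-A": [30, 32, 31], "ISP-B": [80, 20, 20]})
```

The EWMA weights recent samples lightly, so a single fast reply over a normally slow link ("ISP-B" above) does not immediately flip the selection.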
It has recently become more important to monitor the daily activities of the elderly and of children. In this paper, we propose a system for practical activity recognition using the Doppler effect in 24 GHz microwaves. It extracts features from the signals, selects the optimal features, and then classifies activities using a pattern-matching technique. Human activities can be sensed simply by placing Doppler sensors on walls or tables, without any body-attached sensors. In our performance evaluation, the system achieves over ninety percent accuracy on average in classifying eight actions.
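For reference, the Doppler shift such a sensor observes for radial motion at speed v is given by the standard monostatic-radar relation f_d = 2 v f0 / c; a quick calculation at the 24 GHz carrier:

```python
def doppler_shift(speed_mps, carrier_hz=24e9, c_mps=3.0e8):
    """Doppler frequency (Hz) for a target moving radially at `speed_mps`
    relative to a monostatic radar with carrier `carrier_hz`:
    f_d = 2 * v * f0 / c."""
    return 2.0 * speed_mps * carrier_hz / c_mps

# A person walking at 1 m/s produces a 160 Hz shift at 24 GHz.
walking_shift = doppler_shift(1.0)
```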
A sensor-based project management process, which uses continuously sensed data on face-to-face communication, was developed for integration into current project management processes. To establish a practical process, a sensing system was applied in two software-development projects involving 123 and 65 employees, respectively, to analyze the relation between work performance and behavioral patterns and to investigate the use of sensor data. It was found that a factor defined as “communication richness,” which refers to the amount of communication, correlates with employee performance (job evaluation) and was common to both projects, while other factors, such as “workload,” were found in only one of the projects. Developers' quality of development (low bug occurrence) was also investigated in one of the projects, and “communication richness” was found to be a factor in high development quality. Based on this analysis, we propose a four-step sensor-based project management process consisting of analysis, monitoring, inspection, and action, and we evaluated its effectiveness. Through monitoring, it was estimated that some “unplanned” events, such as specification changes and problem solving during a project, could be systematically identified. The cohesion of a network was systematically increased using a communication recommendation tool, called WorkX, which involves micro-rotation of discussion members based on network topology.
Exact string matching is the problem of finding all occurrences of a pattern P in a text T. The problem is well known, and many sophisticated algorithms have been proposed. Fast exact string matching algorithms have been described since the 1980s (e.g., the Boyer-Moore algorithm and its simplified version, the Boyer-Moore-Horspool algorithm), and they have been regarded as the standard benchmarks in the practical exact string search literature. In this paper, we propose two algorithms, MSBM (Max-Shift BM) and MSH (Max-Shift BMH), both based on combining the bad-character rule for the right-most character used in the Boyer-Moore-Horspool algorithm with the extended bad-character rule and the good-suffix rule used in the Gusfield algorithm, a modification of the Boyer-Moore algorithm. Only a small amount of extra space and preprocessing time is needed compared with the BM and BMH algorithms. Nonetheless, empirical results on different data (DNA, protein, and web text) with different pattern lengths show that both MSBM and MSH are very fast in practice, and the MSBM algorithm usually outperformed the other algorithms.
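The bad-character rule on the right-most window character, as used in Boyer-Moore-Horspool, can be sketched as follows (this is plain BMH, not the proposed MSBM/MSH algorithms, which combine it with further shift rules):

```python
def horspool_search(text, pattern):
    """Boyer-Moore-Horspool: return start indices of all occurrences of
    `pattern` in `text`."""
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return []
    # Bad-character table: shift distance keyed on the character aligned
    # with the last window position (last occurrence wins, final char excluded).
    shift = {c: m - i - 1 for i, c in enumerate(pattern[:-1])}
    occurrences = []
    pos = 0
    while pos <= n - m:
        if text[pos:pos + m] == pattern:
            occurrences.append(pos)
        # Shift by the table entry for the window's right-most character,
        # or by the full pattern length if that character never occurs in it.
        pos += shift.get(text[pos + m - 1], m)
    return occurrences
```

For example, `horspool_search("abracadabra", "abra")` finds matches at positions 0 and 7, skipping ahead by up to the pattern length on each mismatch.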
In this paper, we consider the problem of recognizing the shape of dynamic event regions in wireless sensor networks (WSNs). A key idea of our proposed algorithm is to use the notion of a distance field defined by the hop count from the boundary of an event region. By constructing such a field, we can easily identify several critical points in each event region (e.g., local maxima and saddle points) that can effectively characterize the shape and movement of the region. By selectively allowing those critical points to send a certification message to the boundary of the event region and a notification message to the data aggregation points, the communication cost required for shape recognition decreases significantly compared with a naive centralized scheme. The performance of the proposed scheme is evaluated by simulation. The simulation results indicate that: 1) the number of message transmissions during shape recognition decreases significantly compared with a naive centralized scheme; 2) the accuracy of shape recognition depends on the density of the underlying WSN, while it is robust against the lack of sensors in a particular part of the field; and 3) the proposed event tracking scheme correctly recognizes the movement of an event region with a small number of message transmissions compared with a centralized scheme.
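The hop-count distance field and its local maxima can be illustrated on a small grid model, where cells stand in for sensor nodes inside an event region (a centralized sketch for clarity; the actual algorithm computes this in-network):

```python
from collections import deque

def neighbors(cell):
    """4-connected grid neighbors."""
    x, y = cell
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def distance_field(region):
    """Hop-count distance from the region boundary for each cell in `region`
    (a set of (x, y) cells). Boundary cells have distance 0."""
    boundary = [c for c in region
                if any(n not in region for n in neighbors(c))]
    dist = {c: 0 for c in boundary}
    queue = deque(boundary)
    while queue:  # multi-source BFS inward from the boundary
        c = queue.popleft()
        for n in neighbors(c):
            if n in region and n not in dist:
                dist[n] = dist[c] + 1
                queue.append(n)
    return dist

def local_maxima(dist):
    """Critical points: cells whose hop count is no smaller than any
    neighbor's (interior peaks of the distance field)."""
    return [c for c, d in dist.items()
            if all(dist[n] <= d for n in neighbors(c) if n in dist)]
```

On a 5x5 square region, the perimeter cells get distance 0 and the center cell gets distance 2, making the center a local maximum that summarizes the region's shape.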
We study the problem of determining the minimum number of face guards which cover the surface of a polyhedral terrain. We show that ⌊(2n-5)/7⌋ face guards are sometimes necessary to guard the surface of an n-vertex triangulated polyhedral terrain.
Healthcare professionals have critical needs for general-purpose query capabilities, and these users increasingly rely on information technology. Their query needs cannot be met by form-based user interfaces or by aids such as a query builder. Furthermore, archetype-based Electronic Health Record (EHR) databases are more complex than traditional database systems. The present study examines a new way to support a general-purpose, user-level query language interface for querying EHR data. It presents the user with a view of clinical concepts, without requiring any intricate knowledge of object or storage structures, and enables clinicians and researchers to pose general-purpose queries over archetype-based Electronic Health Record systems.
The Controller Area Network (CAN) is widely employed in automotive control system networks, and in recent years the amount of data in these networks has been increasing rapidly. To improve CAN bandwidth efficiency, scheduling and its analysis are particularly important. It is known that assigning offsets to CAN messages is an effective way to reduce their worst-case response time (WCRT). Meanwhile, many commercial CAN controllers are equipped with either a priority queue or a first-in-first-out (FIFO) queue for transmitting messages to the CAN bus. However, previous research on the WCRT analysis of CAN messages either assumed a priority queue or did not consider offsets. In this paper, we therefore propose a WCRT analysis method for CAN messages with assigned offsets in a FIFO queue. We first present a critical-instant theorem, and then propose two algorithms for WCRT calculation based on that theorem. Experimental results on generated message sets and a real message set validate the effectiveness of the proposed algorithms.
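For contrast with the FIFO-plus-offset case treated here, the classical priority-queue WCRT analysis (no offsets, no jitter) is a fixed-point iteration over queueing delay; this sketch is the textbook analysis, not the proposed algorithms, and the message set is made up:

```python
import math

def wcrt(messages, i, tau_bit=0.002):
    """Classical worst-case response time of message i under priority
    queueing. `messages` is a list of (C, T) pairs, sorted highest priority
    first, with transmission time C and period T in ms; tau_bit is one bit
    time (here 2 us, i.e. 500 kbit/s, expressed in ms)."""
    C_i, _ = messages[i]
    # Blocking: a lower-priority frame already on the bus cannot be preempted.
    B = max((C for C, _ in messages[i + 1:]), default=0.0)
    w = B
    while True:
        # Queueing delay = blocking + interference from higher priorities.
        w_next = B + sum(math.ceil((w + tau_bit) / T) * C
                         for C, T in messages[:i])
        if w_next == w:
            return w_next + C_i  # response = queueing delay + own frame
        w = w_next

msgs = [(1.0, 10.0), (1.0, 10.0), (2.0, 20.0)]
```

For this set, the highest-priority message is delayed only by blocking (2 ms), giving a 3 ms WCRT, while the lowest-priority message waits for both higher-priority frames.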
In this paper, we propose a new method to realize quick updates of information about shared content in Peer-to-Peer (P2P) networks. The proposed method combines a hierarchical P2P architecture with a tag-based file management scheme. The hierarchical architecture consists of three layers: a top layer of central servers, a middle layer of sub-servers, and a bottom layer of user peers. Indexes of the files held by each user peer are stored at the sub-servers in the middle layer, and the correlation between file indexes and sub-servers is maintained by the central servers using tags. We implemented a prototype of the proposed method in Java and evaluated its performance through simulations using PeerSim 1.0.4. The results of our experiments indicate that the proposed method is a good candidate for “real-time search engines” in P2P systems; e.g., it completes an upload of 10,000 file indexes to the relevant sub-servers in a few minutes and achieves query forwarding to relevant peers within 100 ms.
In this paper, we propose and implement a cross-layer protocol for ad hoc networks using directional antennas. In the proposed protocol, called the RSSI-based MAC and routing protocol using directional antennas (RMRP), RSSI is used both to compute the direction of the receiver and to control the backoff time. Moreover, the backoff time is weighted according to the number of hops from the source node, and simple routing functions are introduced. We implement RMRP on a testbed with an electronically steerable passive array radiator (ESPAR) antenna and IEEE 802.15.4. Experimental results confirm a throughput improvement and show the effectiveness of the proposed RMRP. In particular, RMRP achieves about 2.1 times higher throughput than a conventional random backoff protocol in a multi-hop communication scenario.
Network mobility has attracted much attention as a way to provide vehicles such as trains with Internet connectivity. The NEMO Basic Support Protocol (NEMO-BS) supports network mobility. However, our experiment on a train in service shows that NEMO-BS suffers very large handover latency if signaling messages are lost due to instability of the wireless link during handover. Several proposals, such as N-PMIPv6 and N-NEMO, support network mobility based on network-based localized mobility management, but they have problems such as large tunneling overhead and the transmission of handover control messages over the wireless link. This paper proposes PNEMO, a network-based localized mobility management protocol for mobile networks. In PNEMO, mobility management is handled in the wired network, so signaling messages are not transmitted over the wireless link when handover occurs. This makes handover stable even if the wireless link is unstable during handover. PNEMO uses a single tunnel even if the mobile network is nested. We implemented PNEMO in Linux. The measured performance shows that the handover latency is almost constant even if the wireless link is unstable during handover, and that the overhead of PNEMO is negligible compared with NEMO-BS.
On Awashima Island in Niigata Prefecture, Japan, the tourist association conducts ecotourism specifically aimed at children, for which providing children with valid educational materials is important. We have therefore proposed an ecotourism support system that can provide video content as educational material using mobile phones and One-seg broadcasting. In an experimental evaluation of the proposed system, the system's content scheduler increased the number of accesses per hour to the ecotour page by offering appropriate content to tourists. The conversion rate of the ecotour page was 30.6%, which is quite high, so One-seg broadcasting is useful for advertising ecotourism.
Biometric authentication has been attracting much attention because it is more user-friendly than other authentication methods such as password-based and token-based authentication. However, it intrinsically involves problems of privacy and revocability. To address these issues, techniques called cancelable biometrics have been proposed and their properties analyzed extensively. Nevertheless, only a few schemes consider provable security, and the provably secure schemes known to date sacrifice user-friendliness because users must carry tokens to securely access their secret keys. In this paper, we propose two cancelable biometric protocols, each of which is provably secure and requires no secret-key access by users. As underlying components, we use the Boneh-Goh-Nissim cryptosystem proposed at TCC 2005 and the Okamoto-Takashima cryptosystem proposed at Pairing 2008 to evaluate 2-DNF (disjunctive normal form) predicates on encrypted feature vectors. We define a security model in the semi-honest setting and give a formal proof that our protocols are secure in that model. The revocation process of our protocols can be seen as a new way of exploiting the veiled property of the underlying cryptosystems, which may be of independent interest.
A simple formula for a decision-making process is introduced and applied to airplane accidents: a near miss between a Douglas DC-10-40 and a Boeing 747-400D in 2001, and a collision between a Tupolev 154M and a Boeing 757-200 cargo jet in 2002. The decision-making process is shown as a plot of ln ln (1-y)^(-1) versus ln t, where y is the phase-change ratio and t is time; the process follows the diffusion law when the plot is linear. Flight data on altitude, from cruising to descent, are applied to the model, and a clear phase change is demonstrated. The timing of the phase change corresponds to the “decision-making” point and is interpreted as the time at which the pilots started to perform maneuvers. This model may be applicable to many other cases involving human factors in decision making.
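The transform behind the plot can be sketched with synthetic data (not the paper's flight data): for diffusion-law kinetics of the form y(t) = 1 - exp(-k t^n), the points (ln t, ln ln (1-y)^(-1)) fall on a straight line whose slope is the exponent n.

```python
import math

def transform(times, ratios):
    """Map (t, y) samples to (ln t, ln ln (1-y)^-1) coordinates, in which a
    linear plot indicates diffusion-law (Avrami-type) kinetics."""
    xs = [math.log(t) for t in times]
    ys = [math.log(math.log(1.0 / (1.0 - y))) for y in ratios]
    return xs, ys

def slope(xs, ys):
    """Least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Synthetic phase-change data with exponent n = 2: y = 1 - exp(-t^2).
ts = [0.5, 1.0, 1.5, 2.0]
ratios = [1.0 - math.exp(-t ** 2) for t in ts]
xs, ys = transform(ts, ratios)
```

A kink in such a plot (a change of slope) marks the phase change that the model reads as the decision-making point.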
The phrase table, a scored list of bilingual phrases, lies at the center of phrase-based machine translation systems. We present a method to learn this phrase table directly from a parallel corpus of sentences that are not aligned at the word level. The key contribution of this work is that, while previous methods have generally modeled phrases at only one level of granularity, the proposed method includes phrases of many granularities directly in the model. This allows the direct learning of a phrase table with competitive accuracy, without the complicated multi-step word-alignment and phrase-extraction process used in previous research. The model is built using non-parametric Bayesian methods and inversion transduction grammars (ITGs), a variety of synchronous context-free grammars (SCFGs). Experiments on several language pairs demonstrate that the proposed model matches the accuracy of the more traditional two-step word-alignment/phrase-extraction approach while reducing the phrase table to a fraction of its original size.