Products and services increasingly require consumers' personal information in order to be personalized to their needs, and the online environment is currently the largest source of such information. However, online privacy has become a major concern for consumers. A personal information trading platform has been proposed as a medium for collecting consumers' personal information in exchange for monetary incentives. This study proposes a new approach to requesting personal attributes that adapts to consumers' disclosure behavior and aims to increase the disclosure of personal information without increasing the monetary incentive. To develop this adaptation method, we devised a way of valuating personal information without using currency. Probability and graph mining techniques were used to valuate personal attributes. We then represented the relationships among disclosed personal attributes in a hierarchy and proposed a method for valuating personal information disclosure. This valuation method was used in evaluations that compared it against actual disclosure results from consumers. The evaluation showed that the new approach can significantly increase the disclosure of consumers' personal information.
In current electrical power systems, when a consumer uses an electric outlet, the outlet's manager pays for the consumer's power usage. Even when a consumer uses an outlet managed by another person, the consumer should receive a single bill covering both indoor and outdoor charging under their contract with the utility company. For this purpose, we define a security model for the Smart Grid and propose a Secure Payment Protocol for Charging Information over Smart Grid (SPaCIS for short) that satisfies the model. Our model provides unlinkability for consumers as well as undeniability and unforgeability of billing information, using digital signatures and identity federations. SPaCIS is also efficient in the sense that its time complexity is constant relative to a trivial approach that verifies each signature individually, unless a verification error occurs. We furthermore evaluate the performance of SPaCIS with a cryptographic implementation, simulating a case in which one thousand users each generate thirty signatures. We show that SPaCIS with ECDSA executes within 6.30 msec for signing and 21.04 msec for signature verification, and conclude that SPaCIS is fairly practical.
Technological development in communications and electronics has driven the rapid expansion of the Internet of Things (IoT). The IoT is expected to have a great impact on society because its smart devices can easily be integrated into existing services. As a result, standardization of technologies supporting the IoT is becoming increasingly important for realizing a smart society across different service domains. This paper presents a survey on the current state of the art of standards for IoT technologies and gives a brief introduction to related standards and recent research areas in IoT. Finally, it proposes an idea for a future platform for scalable IoT systems. The proposed idea employs IP mobility technologies to realize interoperability among IoT devices in different networks.
As an innovation in driver assistance technology, this research aims to develop an “Autonomous Intelligent Driving System” that prevents accidents and enhances driving safety for elderly drivers, thereby helping to vitalize today's aging society. The proposed system focuses on two key technologies: a risk-predictive driving intelligence model, and shared control between the driver and the assistance system. The first key technology embeds an experienced-driver model that compensates for drivers' degraded recognition, decision-making, and operation performance. In the driver assistance system design, the experienced-driver model contains a knowledge-based “risk-prediction mechanism” to avoid accidents in risky driving situations. For instance, when passing unsignalized intersections with poor visibility, experienced drivers are known to anticipate suddenly crossing pedestrians or bicycles, slow the vehicle when approaching such areas, and prepare to brake in order to avoid potential collisions. The second key technology is “Shared control.” This research does not aim to develop a fully autonomous driving vehicle, but rather an advanced driver assistance system that prevents accidents in cases where intervention by braking or steering is needed, while also reducing driving workload. Therefore, to realize good cooperation between the driver and the system, the shared control concept is applied to optimize the assistance level for braking and steering maneuvers while minimizing interference with the human driver's own maneuvers. A driving simulator and a test vehicle are used to verify the effectiveness of the proposed intelligent driving system.
This article discusses a novel method to strengthen the collaboration between Internet service providers (ISPs) and content delivery networks (CDNs). CDNs are becoming the primary data delivery method in information communication technology environments because information sharing via networks is becoming the driving force of the future Internet. Moreover, it is anticipated that network routers will be equipped with additional processing power and storage for providing efficient end-user services. Consequently, this article studies the effectiveness of introducing a Service-oriented Router (SoR) to strengthen ISP-CDN collaboration by leveraging DNS-based request redirection in CDNs. Compared with conventional DNS-based redirection, the proposed method yields better performance in user redirection and network resource utilization, suggesting that the SoR may enable a future business model for adequate ISP-CDN collaboration.
Recently, cloud systems composed of heterogeneous hardware have become common as a way to exploit advances in hardware performance. However, programming applications for heterogeneous hardware to achieve high performance requires considerable technical skill and is difficult for most users. Therefore, to make high performance easily attainable, this paper proposes a PaaS that analyzes application logic and automatically offloads computations to GPUs and FPGAs when users deploy applications to the cloud.
There have been several studies on object detection and activity recognition on a table conducted thus far. Most of these studies use image processing with cameras or a specially configured table with electrodes and an RFID reader. In private homes, methods using cameras are not preferable since cameras might invade the privacy of inhabitants and give them the impression of being monitored. In addition, it is difficult to apply the specially configured system to off-the-shelf tables. In this work, we propose a system that recognizes activities conducted on a table and identifies which user conducted the activities with load cells only. The proposed system uses four load cells installed on the four corners of the table or under the four legs of the table. User privacy is protected because only the data on actions through the load cells is obtained. Load cells are easily installed on off-the-shelf tables with four legs and installing our system does not change the appearance of the table. The results of experiments using a table we manufactured revealed that the weight error was 38g, the position error was 6.8cm, the average recall of recognition for four activities was 0.96, and the average recalls of user identification were 0.65 for ten users and 0.89 for four users.
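The core estimation step can be illustrated with a small sketch (not the authors' implementation): the total load is the sum of the four corner readings, and the position of a single point load is the weighted centroid of the corner positions, since each cell carries a share of the load proportional to its proximity to the load point. The table dimensions and sensor layout below are assumptions.

```python
TABLE_W, TABLE_H = 120.0, 80.0  # table-top size in cm (assumed)

# Positions of the four load cells at the table corners: (x, y) in cm.
CORNERS = [(0.0, 0.0), (TABLE_W, 0.0), (0.0, TABLE_H), (TABLE_W, TABLE_H)]

def estimate_load(readings):
    """Return (total_weight, (x, y)) from four corner readings in grams."""
    total = sum(readings)
    if total == 0:
        return 0.0, None  # nothing on the table
    # Weighted centroid of the corner positions.
    x = sum(w * cx for w, (cx, _) in zip(readings, CORNERS)) / total
    y = sum(w * cy for w, (_, cy) in zip(readings, CORNERS)) / total
    return total, (x, y)

# A 1000 g object at the table centre loads all four cells equally.
print(estimate_load([250, 250, 250, 250]))  # -> (1000, (60.0, 40.0))
```

Activity recognition and user identification would then be built on features derived from these weight and position traces.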
In this paper we consider the (legal) representative in governmental ICT services and propose a secure private mail box system in which a message sent to the pupil is re-encrypted by a proxy server, enabling the representative to decrypt it. We also give a formal description of the protocols and evaluate their security with the ProVerif model checking tool.
We describe a method for decentralized task/area partitioning for coordination in cleaning/sweeping domains, with learning to identify the easy-to-dirty areas. Ongoing advances in computer science and robotics have led to applications for covering large areas that require coordinated tasks by multiple control programs, including robots. Our study aims at coordination and cooperation by multiple agents, and we discuss it using the example of cleaning tasks performed by multiple agents with potentially different performances and capabilities. We developed a method for partitioning the target area on the basis of these performances in order to improve overall efficiency through balanced collective effort. Agents, i.e., software for controlling devices and robots, autonomously decide in a cooperative manner how the task/area is partitioned, taking into account the characteristics of the environment and the differences in the agents' software capabilities and hardware performance. During this partitioning process, agents also learn the locations of obstacles and the probabilities of dirt accumulation, which express how likely each area is to become dirty. Experimental evaluation showed that even if the agents use different algorithms or have batteries with different capacities, resulting in different performances, and even if the environment is non-uniform, with varying locations of easy-to-dirty areas and obstacles, the proposed method can adaptively partition the task/area among the agents by learning the probabilities of dirt accumulation. Thus, agents using the proposed method can keep the area clean effectively and evenly.
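The dirt-probability learning described above can be sketched as a simple per-cell frequency estimate. The class and the Laplace-smoothed update rule below are our assumptions for illustration, not the authors' exact algorithm.

```python
class DirtModel:
    """Estimate, per grid cell, the probability of finding dirt there."""

    def __init__(self, width, height):
        self.visits = [[0] * width for _ in range(height)]
        self.dirty = [[0] * width for _ in range(height)]

    def observe(self, x, y, was_dirty):
        """Record one visit to cell (x, y) and whether dirt was found."""
        self.visits[y][x] += 1
        if was_dirty:
            self.dirty[y][x] += 1

    def probability(self, x, y):
        """Laplace-smoothed estimate of the cell's dirt probability."""
        return (self.dirty[y][x] + 1) / (self.visits[y][x] + 2)

model = DirtModel(3, 3)
for _ in range(8):
    model.observe(0, 0, True)   # this corner is dirty on every visit
    model.observe(2, 2, False)  # the opposite corner stays clean
print(model.probability(0, 0))  # -> 0.9
print(model.probability(2, 2))  # -> 0.1
```

An agent could then weight its share of the partitioned area by these probabilities, visiting high-probability cells more often.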
Increasing the size of parallel corpora for less-resourced language pairs is essential for machine translation (MT). To address the shortage of parallel corpora between Chinese and Japanese, we propose a method to construct a quasi-parallel corpus by inflating a small Chinese-Japanese parallel corpus, so as to improve statistical machine translation (SMT) quality. We generate new sentences using analogical associations based on large amounts of monolingual data and a small amount of parallel data. We filter over-generated sentences using two filtering methods: one based on BLEU and the other on N-sequences. We add the obtained aligned quasi-parallel corpus to a small parallel Chinese-Japanese corpus and perform SMT experiments. We obtain significant improvements over a baseline system.
In eye-tracking-based reading behavior research, gaze sampling errors often negatively affect gaze-to-word mapping. In this paper, we propose a method for more accurate mapping that first takes adjacent horizontally progressive fixations as segments, and then classifies the segments into six classes using a random forest classifier. The segments are then reconstructed based on the classification and associated with document lines using a dynamic programming algorithm. The combination of segment-to-line mapping and transition classification achieved 87% mapping accuracy. We also observed a reduction in manual annotation time when the mapping was used as an annotation guiding tool.
Data stream management systems (DSMSs) are suitable for managing and processing continuous data at high input rates with low latency. For advanced driver assistance including autonomous driving, embedded systems use a variety of onboard sensor data together with communications from outside the vehicle. Thus, the software developed for such systems must be able to handle large volumes of data and complex processing. We develop a platform that integrates and manages data in an automotive embedded system using a DSMS. However, because automotive data processing, which is distributed across the in-vehicle networks of the embedded system, is time-critical and must be reliable enough to cope with sensor noise, no conventional DSMS meets these requirements. To address these new challenges, we develop an automotive embedded DSMS (AEDSMS). The AEDSMS precompiles high-level queries into executable query plans at automotive-system design time to meet time-criticality demands. Data stream processing is distributed appropriately across the in-vehicle networks, and real-time scheduling and sensor data fusion are applied to meet deadlines and enhance the reliability of sensor data. The main contributions of this paper are as follows: (1) we establish a clear understanding of the challenges faced when introducing DSMSs into the automotive field; (2) we propose the AEDSMS to tackle these challenges; and (3) we evaluate the AEDSMS at run-time for advanced driver assistance.
This paper addresses the issues in the task of annotating geographical entities on microblogs and reports the preliminary results of our efforts to annotate Japanese microblog texts. Unlike prior work, we aim at annotating not only geographical location entities but also facility entities, such as stations, restaurants, and schools. We discuss (i) how to build a gazetteer of geographical entities with sufficiently broad coverage, (ii) what types of ambiguities need to be considered, (iii) why annotators tend to disagree, and (iv) what technical problems must be addressed to automate the annotation of geographical entities. All the annotation data and the annotation guidelines are publicly available for research purposes from our web site.
In recent years, virtual and augmented reality have begun to take advantage of the high speed capabilities of data streaming technologies and wireless networks. However, limitations like bandwidth and latency still prevent us from achieving high fidelity telepresence and collaborative virtual and augmented reality applications. Fortunately, both researchers and engineers are aware of these problems and have set out to design 5G networks to help us to move to the next generation of virtual interfaces. This paper reviews state of the art virtual and augmented reality communications technology and outlines current efforts to design an effective, ubiquitous 5G network to help to adapt to virtual application demands. We discuss application needs in domains like telepresence, education, healthcare, streaming media, and haptics, and provide guidelines and future directions for growth based on this new network infrastructure.
Telecommunication services have been growing and progressing from telephony to high-reality communication systems, driven by the evolution of network and media technologies. Recognizing virtual reality (VR) as a communication tool, we provide a review of communication services and the directions they are moving in, as well as related VR technologies. The Immersive Telepresence System “Kirari!” is also introduced as the latest development example of a new telecommunication service.
The research and development (R&D) and the standardization of the 5th Generation (5G) mobile networking technologies are proceeding at a rapid pace all around the world. In this paper, we introduce the emerging concept of network slicing that is considered one of the most significant technology challenges for 5G mobile networking infrastructure, summarize our preliminary research efforts to enable end-to-end network slicing for 5G mobile networking, and finally discuss application use cases that should drive the designs of the infrastructure of network slicing.
This paper first investigates how a network behaves when multiple receivers with different downloading speeds download content simultaneously in content-centric networking (CCN). The results indicate that the download completion time of a faster receiver degrades excessively due to a decrease in the cache-hit rate at the router. Based on this investigation, this paper proposes a novel in-network caching method for simultaneous download by multiple receivers in CCN. The proposed method keeps cached data packets in a router until the slower receivers have downloaded them, in order to prevent slower receivers from having to download data directly from the content provider. We conduct computer simulations and confirm the effectiveness of the proposed method: it improves download completion times in situations where multiple receivers download content at different speeds in CCN.
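The caching rule can be illustrated with a hypothetical sketch: a router keeps a cached data packet until every receiver known to be downloading the content has fetched it, so slow receivers still hit the cache instead of the origin. The bookkeeping structures below are our invention for illustration, not the paper's actual design.

```python
class RouterCache:
    """Keep each chunk until all registered receivers have fetched it."""

    def __init__(self):
        self.pending = {}  # chunk -> set of receivers yet to fetch it

    def insert(self, chunk, receivers):
        self.pending[chunk] = set(receivers)

    def fetch(self, chunk, receiver):
        """Serve `chunk` to `receiver`; evict once everyone has it."""
        waiting = self.pending.get(chunk)
        if waiting is None:
            return "MISS"  # already evicted: forward to content provider
        waiting.discard(receiver)
        if not waiting:
            del self.pending[chunk]  # all receivers served: safe to evict
        return "HIT"

cache = RouterCache()
cache.insert("c1", ["fast", "slow"])
print(cache.fetch("c1", "fast"))  # -> HIT
print(cache.fetch("c1", "slow"))  # -> HIT (kept until slow fetched it)
print(cache.fetch("c1", "fast"))  # -> MISS (evicted after all served)
```

Under a plain LRU-style policy, the fast receiver's requests would churn the cache and the slow receiver's later requests would miss, which is the degradation the paper observes.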
Wireless Mesh Networks (WMNs) over CSMA MAC (especially IEEE 802.11) are an attractive solution for widening the coverage area of the Internet in unlicensed frequency bands. Although such CSMA-based WMNs have been investigated in depth for a long time, they still suffer from heavy interference due to hidden terminals. In this paper, we improve the performance of CSMA-based WMNs by introducing a distributed scheduling scheme in which neighbor nodes exchange transmission-queue length information in real time, and only the node with the longest queue within its 2-hop distance is allowed to transmit frames. The proposed scheduling scheme can be regarded as a distributed design of so-called ‘Max-weight’ scheduling. By combining CSMA with this queue-length-based scheduling, we significantly reduce collisions due to hidden terminals and improve performance, with only the small overhead of queue-length fields in MAC frames.
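The transmission rule can be sketched minimally: a node may transmit only if its queue is the longest within its 2-hop neighborhood. The graph representation and tie-breaking by node id below are our assumptions, not the paper's exact protocol.

```python
def two_hop_neighbors(adj, node):
    """All nodes within two hops of `node` (excluding `node` itself)."""
    one_hop = set(adj[node])
    two_hop = set()
    for n in one_hop:
        two_hop |= set(adj[n])
    return (one_hop | two_hop) - {node}

def may_transmit(adj, queues, node):
    """True if `node` has the strictly best (queue, id) in its 2-hop range."""
    key = (queues[node], node)  # break queue-length ties by node id
    return all((queues[n], n) < key for n in two_hop_neighbors(adj, node))

# Chain topology a-b-c-d: a and c are hidden terminals w.r.t. each other.
adj = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b', 'd'], 'd': ['c']}
queues = {'a': 5, 'b': 1, 'c': 3, 'd': 2}
print([n for n in adj if may_transmit(adj, queues, n)])  # -> ['a']
```

Because node c defers to a (5 > 3 within a's 2-hop range), the hidden-terminal collision between a and c at b is avoided, which is the intuition behind combining CSMA with queue-length-based scheduling.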
Fine-grained network traffic monitoring is important for efficient network management in software-defined networking (SDN). The current SDN architecture, i.e., OpenFlow, relies on counters in the flow entries of forwarding tables for such monitoring tasks. This is neither efficient nor flexible, since the packet-header fields that users aim to monitor do not always coincide or overlap with the OpenFlow match fields, which are designed primarily for forwarding. This inflexibility may result in unnecessary flow entries being added to switches for monitoring, and in controller-switch communication overhead for monitoring that may turn the communication channel into a bottleneck, especially when the network includes a large number of switches. We propose SDN-Mon, an SDN-based monitoring framework that decouples monitoring from the existing forwarding tables and allows more fine-grained and flexible monitoring to serve a variety of network-management applications. SDN-Mon allows the controller to define arbitrary sets of monitoring match fields based on the requirements of controller applications, enabling flexible traffic monitoring. In SDN-Mon, some monitoring processes are selectively delegated to SDN switches to leverage the switches' processing power and avoid unnecessary controller-switch communication overhead for monitoring. We implemented SDN-Mon and evaluated its performance on Lagopus, a high-performance software switch.
To improve communication performance in IEEE 802.11-based wireless mesh networks (WMNs), several dynamic metrics have been proposed. However, all of them carry a severe risk of generating temporary routing loops, which may cause severe congestion and disruption of communications. Although routing loops are a fundamental problem that degrades network performance, no fundamental solution has so far been provided for wireless multihop networks. In this paper, we propose a mechanism called Loop-free Metric Range (LMR) that makes existing dynamic metrics loop-free by restricting the range over which metric values may change. LMR is applicable to most existing metrics, including ETX, ETT, and MIC, without any message overhead. We first provide theoretical results showing that LMR guarantees loop-freedom if no message loss occurs. We then show through simulations and real-world evaluations that LMR is also effective in practical scenarios where message loss may occur: it works as a limiter on dynamic metrics, reducing routing loops and improving network performance.
With the dramatic increase in Internet of Things (IoT) messaging volume, message queue systems are in high demand, both for interoperability among devices and for controlling message traffic between devices and heterogeneous back-end systems (BES). When connected BES issue dequeue requests to the message queue while no message is available, the frequency of missed-dequeues increases, which degrades the maximum throughput. We therefore propose the retry dequeue-request scheduling (RDS) method, which decreases the number of dequeue requests from the BES by delaying the replies to the BES when missed-dequeues occur. Simulation and experimental evaluations show that the throughput of the RDS method reaches 180% of that of the conventional dequeue method.
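The core idea can be illustrated with a sketch: when a dequeue request finds the queue empty (a missed-dequeue), the broker holds the reply for a short delay instead of answering immediately, so the back-end system does not hammer the queue with retries; if a message arrives during the delay, it is served in the same reply. Class and parameter names here are assumptions for illustration.

```python
import collections
import time

class RDSQueue:
    def __init__(self, retry_delay=0.05):
        self.queue = collections.deque()
        self.retry_delay = retry_delay  # how long to hold empty replies

    def enqueue(self, msg):
        self.queue.append(msg)

    def dequeue(self):
        """Return a message, or None after a delayed empty reply."""
        if self.queue:
            return self.queue.popleft()
        # Missed-dequeue: delay the reply to pace the caller's retries.
        time.sleep(self.retry_delay)
        return self.queue.popleft() if self.queue else None

q = RDSQueue(retry_delay=0.01)
q.enqueue("m1")
print(q.dequeue())  # -> m1
print(q.dequeue())  # -> None (returned only after the retry delay)
```

The delay converts a tight retry loop at the BES into a paced long-poll, which is how the method reduces wasted dequeue requests.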
This paper proposes a novel data compression method for artificial vision systems, together with a low-energy implementation, in order to reduce energy consumption in the wireless communication subsystem. Artificial vision systems are one approach to visual prosthesis that works by controlling stimuli to the visual nerves; they consist of an inner stimulating unit and an outer image-processing unit. The outer unit transmits stimulation information to the inner unit via wireless communication, which accounts for a large portion of the total energy consumption. Reducing wireless traffic is important because excess energy consumption leads to extra heat dissipation in the inner unit, which can cause tissue damage. The proposed compression method achieves a higher compression ratio than conventional methods by exploiting analyses of the stimulus position data that dominate the traffic. The proposed method is implemented as an application-domain-specific instruction-set processor to achieve both configurability of stimulation control and compression efficiency. The evaluation results show that the proposed implementation reduces energy consumption by about 87% in the compression process and 62% in the decompression process. These results indicate that the proposed method can be expected to dramatically reduce energy consumption in a wireless communication receiver.
This paper presents a security analysis of the Local Interconnect Network (LIN), which is used in assembly units such as seats, steering wheels, and doors in vehicles. Recently, the number of security threats to in-vehicle networks such as the Controller Area Network has increased. In contrast, there have been no reports that evaluate the security of LIN in detail. Such an analysis is important because LIN is used in units related to seats, steering wheels, etc., and is at risk of attack. In this paper, we present the first evaluation of the security of LIN. We present case studies of attacks that exploit the characteristics of a commonly used error handling mechanism: the attacker intentionally stops communication using the error handling mechanism and sends a false response in place of a valid one. We experimentally demonstrate the feasibility of the attacks using a vehicle microcontroller. Furthermore, we present countermeasures against the attacks. The results of this study show that a simply designed error handling mechanism is vulnerable to attack. We believe that this study will contribute to improving the security of in-vehicle communications.
A significant number of logs are generated in dynamic malware analysis. Consequently, a method for effectively compressing these logs is required to reduce the amount of memory and storage consumed to store such logs. In this study, we evaluated the efficacy of grammar compression methods in compressing call traces in malware analysis logs. We hypothesized that grammar compression can be useful in compressing call traces because its algorithm can naturally express the dynamic control flows of program execution. We measured the compression ratio of three grammar compression methods (SEQUITUR, Re-Pair, and Byte Pair Encoding (BPE)) and three well-known compressors (gzip, bzip2, and xz). In experiments conducted in which API call sequences collected from thousands of Windows malware were compressed, the Re-Pair grammar compression method was found to outperform both gzip and bzip2.
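A toy sketch of Byte Pair Encoding, one of the grammar compression methods compared above, shows why it suits call traces: it repeatedly replaces the most frequent adjacent pair of symbols with a fresh nonterminal, so repeated call patterns collapse into grammar rules. This illustrates the idea only; the paper's implementation details are not reproduced here.

```python
import collections

def bpe_compress(seq):
    """Return (compressed sequence, grammar rules nonterminal -> pair)."""
    rules = {}
    next_id = 0
    while True:
        pairs = collections.Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, count = pairs.most_common(1)[0]
        if count < 2:
            break  # no pair repeats: nothing left to gain
        nt = f"N{next_id}"
        next_id += 1
        rules[nt] = pair
        out, i = [], 0
        while i < len(seq):  # replace non-overlapping occurrences
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(nt)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return seq, rules

# A repetitive API-call trace compresses well:
trace = ["open", "read", "open", "read", "open", "read", "close"]
compressed, rules = bpe_compress(trace)
print(compressed)  # -> ['N1', 'N0', 'close']
print(rules)       # -> {'N0': ('open', 'read'), 'N1': ('N0', 'N0')}
```

The resulting grammar mirrors the nested control flow of the program (a loop of open/read calls), which is the hypothesis the study tests against general-purpose compressors.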
Recently, Delay Tolerant Networks (DTNs) have been intensively researched to overcome unstable communication caused by intermittent link connectivity in wireless communications. In wireless DTNs, to enable continuous connectivity, data are exchanged through intermediate nodes on the path toward the destination node using a store-and-forward approach. However, since the participating nodes are not fully trusted, a secure data exchange mechanism for DTNs is strongly desirable. In this paper, we propose a secure data exchange system for wireless DTNs using Attribute-Based Encryption (ABE) that provides two properties: (i) content data can be accessed only by authorized nodes, dynamically defined by a policy on their attributes, while its integrity is protected against alteration during transmission; and (ii) multi-hop routing messages are encrypted and authenticated with the ABE such that only attribute-based authorized nodes can exchange them. Our experimental results show the practicality of our system.
Relaxed memory consistency models specify the effects of executing statements among threads, which may or may not be reordered; such reorderings may even cross loop iterations. To the best of our knowledge, however, no concurrent program logic explicitly handles reorderings across loop iterations. This paper provides a concurrent program logic for relaxed memory consistency models that can represent, for example, total store ordering, partial store ordering, relaxed memory ordering, and acquire/release consistency. There are two novel aspects to our approach. First, we translate a concurrent program into a family of directed acyclic graphs with finite nodes and transitive edges, called program graphs, according to the memory consistency model we adopt. These represent dependencies among statements, capturing reorderings not only of the statements themselves but also of the visibility of their effects. Second, we introduce auxiliary variables that temporarily buffer the effects of write operations on shared memory, and explicitly describe how the buffered effects are reflected to shared memory. Specifically, we define a small-step operational semantics for the program graphs with the introduced auxiliary variables, and then define a logic that is sound and relatively complete with respect to this semantics.
This paper proposes a parallel implementation of graph mining that extracts all connected subgraphs with common itemsets whose size is at least a given threshold, from a graph and from itemsets associated with the vertices of the graph, in distributed-memory environments using the task-parallel language Tascell. For this problem, we previously proposed a parallelization of a backtrack search algorithm named COPINE and implemented it in shared-memory environments. In that implementation, all workers share a single lock-controlled table containing the knowledge acquired during the search, which obviates unnecessary searching. This sharing method is not practical in distributed-memory environments because it would drastically increase the cost of internode communication. Therefore, we implemented a sharing method in which each computing node has its own table and sends its updates to the other nodes at regular time intervals. In addition, the high task-creation cost of COPINE is problematic: the conventional work-stealing strategy in Tascell, which aims to minimize the number of internode work-steals, significantly degrades performance because it increases the number of intranode work-steals for small tasks. We solved this problem by promoting workers so that they can request tasks from external nodes, and by employing a work-stealing strategy based on estimating the sizes of tasks created by victim workers. This approach enabled us to achieve good speedup with up to 8 nodes × 16 workers.
In this paper, the author proposes an Energy-on-Demand (EoD) system based on combinatorial optimization of appliance power consumption, and describes its implementation and evaluation. EoD is a novel power network architecture for demand-side power management whose objective is to intelligently manage power flows under the limitation of available power resources. In an EoD system, when total power consumption exceeds the limit of the power resource, a power allocation manager deployed in the system decides the optimal power allocation to all appliances based on their importance and power consumption, and controls the amount of power supplied to each appliance so as to minimize the undesired effect on users' quality of life. Therefore, one of the most crucial factors in an EoD system is the strategy for deciding the optimal power allocation. From a mathematical viewpoint, power allocation management in an EoD system can be considered an optimization problem over appliance operation modes. In the developed system, power allocation is formulated as a multiple-choice knapsack problem (MCKP), a kind of combinatorial optimization problem. The system measures the power consumption of appliances, computes the optimal power allocation with an algorithm for the MCKP, and realizes the computed allocation by controlling IR-controllable appliances and mechanical relays. Through experiments, the developed system is confirmed to work properly as an EoD system by observing its behavior when total power consumption exceeds the upper limit of the available power resource.
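The MCKP formulation behind the power allocation can be illustrated with a small dynamic-programming sketch: each appliance must run in exactly one mode (a power draw paired with an importance value), and the chosen modes must maximize total importance within the available power budget. The appliance data below is invented for illustration, and the paper's actual MCKP algorithm may differ.

```python
def mckp(appliances, budget):
    """appliances: list of mode lists [(power, value), ...], one list per
    appliance. Returns (best total value, chosen mode index per appliance)."""
    # dp[p] = (best value, mode choices) achievable with total power p
    dp = {0: (0, [])}
    for modes in appliances:
        nxt = {}
        for used, (val, picks) in dp.items():
            for idx, (power, value) in enumerate(modes):
                p = used + power
                if p > budget:
                    continue  # allocation would exceed the power limit
                cand = (val + value, picks + [idx])
                if p not in nxt or cand[0] > nxt[p][0]:
                    nxt[p] = cand
        dp = nxt
    return max(dp.values(), key=lambda v: v[0])

# Modes per appliance: (power in W, importance). Mode 0 is "off".
aircon = [(0, 0), (500, 8), (900, 10)]
light = [(0, 0), (60, 5)]
tv = [(0, 0), (150, 3)]
print(mckp([aircon, light, tv], budget=700))  # -> (13, [1, 1, 0])
```

Here the 700 W budget forces the TV off: running the air conditioner at its low setting plus the light yields importance 13, whereas adding the TV would exceed the limit. This mirrors the manager's decision when total consumption exceeds the available power resource.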