Information Centric Networking (ICN) is a promising paradigm for the future architecture of the Internet, and Content Centric Networking (CCN) is an instantiation of the ICN paradigm. The challenging areas of CCN include congestion control, availability, and security. We focus on security, especially secure communications. Several schemes applying identity-based encryption (IBE) to content encryption over CCN have been proposed. However, such schemes generally suffer from the key-escrow problem: the private key generator, which issues decryption keys to receivers, can passively decrypt any ciphertext. We propose an IBE-based scheme that addresses this problem by combining partial double encryption, interest trace-back, cut-through fragment forwarding, and multi-path routing. Our scheme is IND-ID-CPA secure in the random oracle model.
Fujioka et al. proposed the first generic construction (the FSXY construction) of exposure-resilient authenticated key exchange (AKE) from a key encapsulation mechanism (KEM) without random oracles. However, the FSXY construction implicitly assumes that some intermediate computation result is never exposed even though other secret information can be exposed. This is a kind of physical assumption, and an implementation trick (i.e., executing some on-line computation in a special tamper-proof module) is necessary to satisfy it. Such a trick is very costly and may be undermined by human error in implementation. From the viewpoint of the human factor, it is desirable to avoid complicated implementation tricks. In this paper, we introduce a new generic construction that requires no implementation tricks. Our construction satisfies the same security model as the FSXY construction without increasing communication complexity. Moreover, it has the additional advantage that the protocol can be executed in one round, whereas the FSXY construction is a sequential two-move protocol. Our key idea is to use a KEM with public-key-independent ciphertexts, which allows parties to generate a ciphertext without depending on encryption keys.
The use of Twitter by citizens during catastrophic events is increasing with the availability of Internet services and the use of smartphones during disasters. After the Great East Japan Earthquake in 2011, Twitter was flooded with disaster information, including misinformation that was widely spread by retweets. Accordingly, we developed a questionnaire to investigate the factors that influenced people's decisions to retweet disaster information they read on Twitter in disaster situations. We designed the questionnaire using brainstorming and the KJ method and conducted a user survey (n = 57) to test the questionnaire items. We then analyzed the responses using exploratory factor analysis; as a result, five factors were derived from 38 question items: 1) Trustworthy information, 2) Relevance of the information during disasters, 3) Willingness to supply the information, 4) Importance of the information, and 5) Self Interest. However, seven question items need revision based on the results of the factor analysis. In this paper, we discuss the method we used to design the questionnaire and the results of the factor analysis of the questionnaire testing.
This paper investigates gamification mechanisms and their application to promoting participatory urban sensing. Participatory sensing, which utilizes users' smartphones as sensors, has attracted attention as an effective and economical sensing mechanism for wide areas. However, continuing to motivate many participants over a long period is difficult. In addition, monetary incentives are generally limited. To solve these problems, gamification mechanisms are considered a promising technique because they have the potential to reduce the need for monetary incentives while maintaining the motivation of participants. In addition to a general survey, we introduce our past practical research results on gamified participatory urban sensing.
A novel method for extracting “trip periods,” i.e., periods in which a person travels, from continuously collected sensor data (called a “trip-extraction method” hereafter) is proposed to make a sensor-based travel-behavior survey possible. Previous studies detect “stay periods,” i.e., periods in which a person stays within an area, by using the boundary of a “stay area,” i.e., an area in which a person stays, and regard the remaining periods as trip periods; such studies have two main drawbacks: false positives caused by GPS-positioning errors and false negatives caused by short-distance trips within the boundary. This study solves these problems by using novel features that are effective even when the GPS-positioning error is large and by classifying every single piece of GPS data into either trip periods or stay periods not on the basis of the stay-area boundary but on the basis of the newly proposed features. An experimental evaluation showed that the precision of the proposed method was 89.4%, which is much higher than that of conventional methods.
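As an illustrative sketch of point-wise trip/stay classification (the abstract does not specify the paper's actual features, so the window length, error radius, and displacement feature below are hypothetical), one feature that tolerates GPS noise better than instantaneous speed is the spatial span of the fixes within a time window around each point:

```python
import math

WINDOW = 120.0   # seconds; hypothetical window length
RADIUS = 50.0    # meters; hypothetical tolerance for GPS-positioning error

def classify(points):
    """points: list of (t_sec, x_m, y_m) GPS fixes.
    Labels a point 'trip' if the net spatial span of fixes inside the
    surrounding time window exceeds the assumed error radius, 'stay' otherwise."""
    labels = []
    for t, _, _ in points:
        window = [(x, y) for pt, x, y in points if abs(pt - t) <= WINDOW / 2]
        xs = [x for x, _ in window]
        ys = [y for _, y in window]
        span = math.hypot(max(xs) - min(xs), max(ys) - min(ys))
        labels.append('trip' if span > RADIUS else 'stay')
    return labels

# A stationary person with sub-radius jitter yields 'stay' for every fix,
# while steady motion yields 'trip' even for short-distance trips.
```

Because the span is computed over a window rather than between consecutive fixes, isolated position jumps smaller than the error radius do not produce false trip points.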
Significant progress has been made in the field of autonomous driving during the past decades. However, fully autonomous driving in urban traffic will remain extremely difficult in the near future. Visual tracking of vehicles and pedestrians is an essential part of autonomous driving, and among tracking methods, kernel-based object tracking is an effective means of tracking in video sequences. This paper reviews the kernel theory adopted in target tracking for autonomous driving and makes a qualitative and quantitative comparison among several well-known kernel-based methods. The theoretical and experimental analysis allows us to conclude that the kernel-based online subspace learning algorithm achieves a good trade-off between stability and real-time processing for target tracking in the practical application environments of autonomous driving. This paper reports the results of evaluating the performance of five algorithms on seven video sequences.
WLANs have become increasingly popular and widely deployed. The MAC protocol is one of the key technologies of WLANs and directly affects communication efficiency. A distributed MAC protocol has the advantage that infrastructure such as an access point is unnecessary. On the other hand, total throughput decreases heavily as network density increases, which needs to be improved. Previous works proposed improvements to throughput at the cost of degraded fairness. In this paper, focusing on the MAC protocol, we propose a novel protocol in which each node estimates the number of nodes in the network, with a short convergence time and no overhead traffic added to the network, by observing the channel; nodes then dynamically optimize their backoff process to achieve high throughput and satisfactory fairness. Since the necessary indexes can be obtained through direct measurement of the channel, our scheme does not add any load to the network, which makes it simpler and more effective. Through simulation comparison with recently proposed methods, we show that our scheme can greatly enhance throughput with good fairness, regardless of whether the network is in a saturated or non-saturated state.
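As a hedged illustration of channel-based node estimation (not necessarily the paper's actual estimator), a Bianchi-style model relates the conditional collision probability p that a node can measure on the channel to the number of contenders n via p = 1 - (1 - τ)^(n-1), where τ is the per-slot transmission probability; a node can invert this relation:

```python
import math

def estimate_nodes(p_coll, tau):
    """Estimate the number of contending nodes n from the measured
    conditional collision probability p_coll, assuming each node
    transmits in a slot with probability tau (Bianchi-style model;
    an illustrative estimator, not the paper's exact method)."""
    return 1 + math.log(1.0 - p_coll) / math.log(1.0 - tau)

# With tau = 0.1 and 10 nodes, the model predicts p = 1 - 0.9^9,
# and the estimator recovers n = 10 from that measurement.
```

Once n is estimated, a node could scale its contention window with n to keep the collision probability near its optimum, which is the kind of dynamic backoff optimization the abstract describes.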
Humanitarian aid in an emergency information system involves information from multidisciplinary environments, much of which is stored in relational databases. Semantic interoperability between existing relational databases and ontologies remains a major practical issue. To avoid a combinatorial explosion of terminology alignments among different systems, we designed a pivot ontology framework and present a pivot construction methodology and a PivotOntology-to-Database schema-matching methodology. The first methodology is adopted from an ontology engineering technique, and the second is based on a linguistic-relation approach. To integrate humanitarian-aid emergency information from several databases, the Humanitarian Aid for Refugees in Emergencies (HARE) ontology is proposed. The coverage of the HARE ontology is evaluated by comparison against knowledge sources and by matching with existing systems. The evaluations demonstrate that the HARE ontology is broadly compatible with existing database schemas.
In this paper, we focus on an in-line machine model, which represents systems for manufacturing a product in large quantities. Recently, studies on the collision probability between jobs have been conducted for such models. We extend the known models to a generalized version by considering the delivery time between machines. We first present a method for computing a schedule of jobs in the generalized model. Then, we show that the collision probability for the generalized model is the same as that for the model without delivery time; we call this property the redundancy of delivery time. Next, we introduce two optimization problems involving collision probability for the generalized model. Using the redundancy of delivery time, we show that these optimization problems are equivalent to simpler ones. This finding may prove very useful when considering optimization problems involving collision probability.
Formula-based fault localization is an algorithmic method that provides fine-grained information accounting for identified root causes. It combines SAT-based formal verification techniques with Reiter's model-based diagnosis theory. This paper adapts the formula-based fault localization method and introduces a new program encoding, called the full flow-sensitive trace formula, which is particularly useful for programs with multiple faults. Furthermore, we improve the efficiency of computing the potential root causes by using the push & pop mechanism of the Yices solver. We implemented the method in a tool, SNIPER, which was applied to several benchmarks. All single and multiple faults were successfully identified and discriminated.
Cloud population is a term that describes a cloud application distributed over many virtual machines or container-based boxes. Cloud platforms today offer simple tools for performance management (a common example is load balancing) that are not sufficient for managing the performance of cloud populations. This paper proposes a new concept called cloud probing, in which applications themselves probe their host cloud platforms and optimize their own populations at runtime based on measurement data. This paper shows that even a simple optimization algorithm can lead to improved performance for the entire population. Since the only prerequisite function is the ability to migrate, the proposed method is also feasible in federated clouds, where apps are fully in charge of managing their own populations spread across multiple cloud providers. This paper showcases the design of the TopoAPI, which implements cloud probing, runs independently of physical platforms, and can therefore be used in federated environments.
With the recent growth of mobile communication, the location-based k-nearest neighbor (k-NN) search is attracting much attention. While the k-NN search provides beneficial information about points of interest (POIs) near users, users' locations could be revealed to the server. Lien et al. recently proposed a highly accurate privacy-preserving k-NN search protocol based on additive homomorphism. However, it imposes a heavy computational load due to unnecessary multiplications performed by the server in the encrypted domain. In this paper, we propose a lightweight private circular query protocol (LPCQP) with a divided POI table and somewhat homomorphic encryption for privacy-preserving k-NN search. Our scheme removes POI information that is unnecessary for the requesting user by dividing and aggregating the POI table, which reduces both the computational and communication costs. In addition, we use both additive and multiplicative homomorphisms to perform this process in the encrypted domain. We evaluate the performance of our scheme and show that it reduces both the computational and communication costs while maintaining high security and high accuracy.
We address the declarative construction of abstract syntax trees (ASTs) with parsing expression grammars (PEGs). AST operators (constructor, connector, and tagging) are newly defined to specify flexible AST construction. A new challenge that comes with PEGs is the consistency management of ASTs under backtracking and packrat parsing. We introduce a transactional AST machine to perform AST operations in the context of the speculative parsing of PEGs; all consistency control is automated by the analysis of AST operators. The proposed approach is implemented in the Nez parser, written in Java. The performance study shows that the transactional AST machine requires approximately 25% more time on CSV, XML, and C grammars.
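A minimal sketch of log-based transactional AST construction under backtracking (the class and method names below are hypothetical illustrations, not the Nez API):

```python
# Speculative AST operations are appended to a log; a failed alternative
# rolls the log back to its mark, so no partial tree leaks into the result.
class AstLog:
    def __init__(self):
        self.log = []                 # append-only log of AST operations

    def link(self, parent, child):
        self.log.append(('link', parent, child))

    def mark(self):
        return len(self.log)          # transaction point before a speculative parse

    def abort(self, mark):
        del self.log[mark:]           # undo operations of a failed alternative

    def commit(self, mark):
        pass                          # a real machine could compact committed entries
```

A parser takes a mark before trying each PEG alternative, aborts to the mark on failure, and commits on success; this is the consistency control that the transactional AST machine automates from the AST operators.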
We design a concurrent separation logic (CSL) for GPGPU, namely GPUCSL, and prove its soundness using Coq. GPUCSL is based on a CSL proposed by Blom et al. for the automatic verification of GPGPU kernels, but it employs different inference rules because the rules in Blom's CSL are not standard; for example, Blom's CSL does not have a frame rule. Our CSL is a simple extension of the original CSL, and it is more suitable as a basis for the advanced properties proposed in other studies on CSLs. Our soundness proof is based on Vafeiadis' method, which targets a CSL with a fork-join concurrency model. The proof reveals two problems in Blom's approach in terms of soundness and extensibility. First, their assumption that the thread-ID independence of a kernel implies barrier-divergence freedom does not hold. Second, it is not easy to extend their proof to other CSLs with a frame rule. Although our CSL covers only a subset of CUDA, our preliminary experiment shows that it is useful and expressive enough to verify a simple kernel with barriers.
This paper presents a scheme comprising a type system and a type-directed compilation method that enables users to integrate high-level key-value store (KVS) operations into statically typed polymorphic functional languages such as Standard ML. KVSs have become an important building block for cloud applications because of their scalability. The proposed scheme will enhance the productivity and safety of KVS programming by eliminating the need for low-level string manipulation. A prototype that demonstrates its feasibility has been implemented in the SML# language and clarifies issues that need to be resolved in further development towards better practical performance.
Similarity search is a crucial task in many real-world applications such as multimedia databases, data mining, and bioinformatics. In this work, we investigate similarity search on uncertain data modeled by Gaussian distributions. Employing the Kullback-Leibler divergence (KL-divergence) to measure the dissimilarity between two Gaussian distributions, our goal is to search a database for the top-k Gaussian distributions most similar to a given query Gaussian distribution. In particular, we consider non-correlated Gaussian distributions, in which there are no correlations between dimensions and the covariance matrices are diagonal. To support query processing, we propose two types of novel approaches utilizing the notions of rank aggregation and skyline queries. The efficiency and effectiveness of our approaches are demonstrated through a comprehensive experimental performance study.
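For diagonal (non-correlated) Gaussians, the KL-divergence has a closed form that decomposes over dimensions, which is what makes per-dimension rank aggregation feasible; a minimal sketch (function and variable names are illustrative):

```python
import math

def kl_diag_gauss(mu_p, var_p, mu_q, var_q):
    """KL(p || q) for d-dimensional Gaussians with diagonal covariances:
    0.5 * sum_i [ ln(var_q_i / var_p_i)
                  + (var_p_i + (mu_p_i - mu_q_i)**2) / var_q_i - 1 ]."""
    return 0.5 * sum(
        math.log(vq / vp) + (vp + (mp - mq) ** 2) / vq - 1.0
        for mp, vp, mq, vq in zip(mu_p, var_p, mu_q, var_q)
    )

# Identical distributions have zero divergence.
assert kl_diag_gauss([0.0, 0.0], [1.0, 1.0], [0.0, 0.0], [1.0, 1.0]) == 0.0
```

Note that the KL-divergence is asymmetric, so a top-k search must fix which direction (query-to-object or object-to-query) defines similarity.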
Optimally hybrid numerical solvers were constructed for massively parallel generalized eigenvalue problems (GEPs). Strong-scaling benchmarks were carried out on the K computer and other supercomputers for electronic structure calculation problems with matrix sizes of M = 10^4-10^6 on up to 10^5 cores. The GEP procedure is decomposed into two subprocedures: the reducer, which transforms the GEP into a standard eigenvalue problem (SEP), and the SEP solver. A hybrid solver is constructed by choosing a routine for each subprocedure from the three parallel solver libraries ScaLAPACK, ELPA, and EigenExa. The hybrid solvers using the two newer libraries, ELPA and EigenExa, give better benchmark results than the conventional ScaLAPACK library. A detailed analysis of the results implies that the reducer can become a bottleneck on next-generation (exa-scale) supercomputers, which provides guidance for future research. The code was developed as middleware and a mini-application and will appear online.
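The reducer/solver decomposition can be sketched in a few lines of serial NumPy, assuming B is symmetric positive definite (the parallel libraries named above implement these same two steps at scale; this is a small illustrative example, not their API):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
M = rng.standard_normal((n, n))
A = (M + M.T) / 2                   # symmetric A
C = rng.standard_normal((n, n))
B = C @ C.T + n * np.eye(n)         # symmetric positive-definite B

# Reducer: GEP  A x = lambda B x  ->  SEP  (L^-1 A L^-T) y = lambda y,
# with B = L L^T and x = L^-T y.
L = np.linalg.cholesky(B)
Linv = np.linalg.inv(L)
S = Linv @ A @ Linv.T               # reduced standard eigenproblem

# SEP solver.
lam, Y = np.linalg.eigh(S)
X = Linv.T @ Y                      # back-transform eigenvectors

# Each column of X satisfies A x = lambda B x.
assert np.allclose(A @ X, (B @ X) * lam)
```

The Cholesky-based reduction and back-transformation are exactly the "reducer" stage whose cost the paper identifies as a potential exa-scale bottleneck.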
We present a game-theoretic approach for power reduction in large-scale distributed storage systems. The key idea is to use a distributed hash table and migrate its virtual nodes dynamically so as to skew the workload towards a subset of physical disks while not overloading them. To realize this idea in an autonomous way, virtual nodes are regarded as selfish agents playing a game in which each node receives a payoff according to the workload of the disk on which it currently resides. We model this setting as a potential game, a kind of strategic game in which the incentive of all players to change their strategy can be represented by a single global function. Thus, any increase in the payoff of a virtual node yields a better state in terms of energy conservation. This game model consists of a pair of global and private payoff functions, derived by the Wonderful Life Utility scheme. The former function evaluates how good the current state of the system is, while the latter determines the current payoff of each node. The performance of our method is measured both by simulations and by a prototype implementation. From the experiments, we observed that our method consumed 11.1%-16.4% less energy than the static configuration. In addition, although a small number of responses were heavily delayed because of the overloading of some disks at peak time, our method maintained a preferable overall average response time in the range of 50-190 ms.
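A toy version of the Wonderful Life Utility construction (the capacity constant and payoff functions below are hypothetical, not the paper's actual model): each virtual node's private payoff is the global payoff minus the global payoff with the node's contribution clamped out, so any selfish improvement also improves the global function:

```python
CAPACITY = 10  # hypothetical per-disk load capacity

def global_payoff(loads):
    """Reward idle disks (which can be powered down), penalize overload."""
    idle = sum(1 for load in loads if load == 0)
    overload = sum(max(0, load - CAPACITY) for load in loads)
    return idle - overload

def wlu_payoff(loads, disk, node_load):
    """Wonderful Life Utility: private payoff of a virtual node of weight
    node_load currently on `disk` = G(state) - G(state without the node)."""
    clamped = list(loads)
    clamped[disk] -= node_load       # the world without this node's contribution
    return global_payoff(loads) - global_payoff(clamped)
```

Under this sketch, a node sitting alone on an otherwise idle disk earns -1 (its presence keeps a disk from powering down), while a node sharing an already-active disk earns 0, so selfish payoff-improving moves skew load onto fewer disks, exactly the behavior the potential-game model is meant to induce.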