-
Kei Tanimoto, Susumu Ishihara
Article type: Regular Paper
Subject area: Wireless/Mobile Networks
2011 Volume 19 Pages 1-11
Published: 2011
Released on J-STAGE: January 12, 2011
This paper describes the implementation and evaluation of a link aggregation system using Network Mobility (NEMO). The link aggregation system, called NEMO SHAKE, constructs a temporary network (called an alliance) between multiple mobile routers (MRs) carried by vehicles and aggregates the external links between the MRs and the Internet to provide fast and reliable transmission for mobile devices in the vehicles carrying the MRs. We also designed a system for controlling alliances. By estimating the distance and the link condition between MRs, it achieves high throughput and stable aggregated paths between the vehicles and the Internet. We evaluated the usefulness of NEMO SHAKE and its alliance-control mechanism in real vehicular networks and confirmed that the alliance-control mechanism achieves high throughput by changing the members of an alliance dynamically.
-
Nguyen Thanh Hung, Hideto Ikeda, Kenzi Kuribayasi, Nikolaos Vogiatzis
Article type: Regular Paper
Subject area: ITS
2011 Volume 19 Pages 12-24
Published: 2011
Released on J-STAGE: January 12, 2011
This paper presents a car navigation system for a future integrated transportation system forming a component of an Intelligent Transportation System (ITS). The system focuses on the mechanisms for detecting the current position of each vehicle and for navigating each vehicle. Individual vehicular location data is collected using the Global Positioning System (GPS) and then transmitted to a control center via a mobile phone. For the purpose of this paper, the on-board device is referred to as "Reporting Equipment for current geographic Position" (REP). If a great number of REP-equipped vehicles report their positions to the control center simultaneously, the load on the computational communications network ("network") becomes heavy. This paper proposes an algorithm to reduce the network load associated with large numbers of vehicles reporting their positions at the same time. If a car simply skips some reports, the control center cannot estimate its correct position; we therefore need an algorithm that decreases the reporting frequency without sacrificing positional accuracy. Compared to periodic reporting, the proposed method improves the load by 50-66%.
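The abstract leaves the reporting algorithm itself to the paper; as a rough illustration of the general idea (suppress a report whenever the control center could extrapolate the vehicle's position accurately enough on its own), here is a minimal sketch. The linear dead-reckoning prediction and the 50 m tolerance are illustrative assumptions, not the authors' method.

```python
import math

def should_report(last_pos, velocity, elapsed_s, actual_pos, tolerance_m=50.0):
    """Transmit a new position report only when the vehicle's actual GPS fix
    deviates from the position the control center would extrapolate from the
    last report by more than tolerance_m metres."""
    predicted = (last_pos[0] + velocity[0] * elapsed_s,
                 last_pos[1] + velocity[1] * elapsed_s)
    return math.dist(predicted, actual_pos) > tolerance_m

# A vehicle holding its reported course stays silent; a turning one reports.
print(should_report((0.0, 0.0), (10.0, 0.0), 5.0, (52.0, 3.0)))    # False
print(should_report((0.0, 0.0), (10.0, 0.0), 5.0, (120.0, 40.0)))  # True
```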
-
Tsutomu Inaba, Hiroyuki Takizawa, Hiroaki Kobayashi
Article type: Regular Paper
Subject area: P2P Networking
2011 Volume 19 Pages 25-38
Published: 2011
Released on J-STAGE: February 09, 2011
The technologies of Cloud Computing and NGN are now driving a paradigm shift in which various services are provided to business users over the network. In conjunction with this movement, many active studies aim to realize a ubiquitous computing environment in which a huge number of individual users can share their computing resources on the Internet, such as personal computers (PCs), game consoles, and sensors. To realize an effective resource discovery mechanism for such an environment, this paper presents an adaptive overlay network that enables a self-organizing resource management system to efficiently adapt to a heterogeneous environment. The proposed mechanism is composed of two functions. One adjusts the number of logical links of a resource, over which search queries are forwarded, so that less-useful query flooding is reduced. The other connects resources so as to decrease the communication latency on the physical network rather than the number of query hops on the overlay network. To further improve the discovery efficiency, this paper integrates these functions into SORMS, a self-organizing resource management system proposed in our previous work. The simulation results indicate that the proposed mechanism can increase the number of discovered resources by 60% without decreasing the discovery efficiency, and can reduce the total communication traffic by 80% compared with the original SORMS. This performance improvement is obtained by efficient control of logical links in a large-scale network.
-
Kan Watanabe, Masaru Fukushi, Michitaka Kameyama
Article type: Regular Paper
Subject area: Cloud Computing and Grid Computing
2011 Volume 19 Pages 39-51
Published: 2011
Released on J-STAGE: February 09, 2011
This paper presents an adaptive group-based job scheduling method for credibility-based sabotage-tolerance techniques in volunteer computing (VC) systems. The credibility-based technique is a promising approach for reliable VC systems since it guarantees computational correctness mathematically based on the credibility of participants. Check-by-voting reduces the cost of checking credibility in the credibility-based technique. However, in applications where the deadline of the computation is relatively short, current job scheduling methods do not work well with check-by-voting and significantly degrade performance. To improve the performance of VC systems, the proposed job scheduling method adaptively groups participants based on their expected credibility, which takes into account the participants currently executing jobs. Simulations of VC systems show that the proposed method always outperforms current job scheduling methods regardless of the values of unknown parameters such as the population and behavior of saboteurs.
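As context for how credibility-based acceptance works, the sketch below shows the flavor of a check-by-voting decision: matching results from several workers are accepted once their combined credibility makes an error sufficiently unlikely. The independence assumption and the threshold are illustrative simplifications, not the paper's exact formulas.

```python
def accept_by_voting(credibilities, threshold=0.999):
    """credibilities: P(correct) estimates for the workers whose results for
    a job agree. Accept the common result once the estimated probability that
    all of them are wrong together drops low enough (naive independence)."""
    p_all_wrong = 1.0
    for c in credibilities:
        p_all_wrong *= (1.0 - c)
    return (1.0 - p_all_wrong) >= threshold

print(accept_by_voting([0.97]))         # False: one worker is not enough
print(accept_by_voting([0.97, 0.98]))   # True: two matching results suffice
```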
-
Donghui Lin, Yohei Murakami, Masahiro Tanaka
Article type: Regular Paper
Subject area: Cloud Computing and Grid Computing
2011 Volume 19 Pages 52-61
Published: 2011
Released on J-STAGE: February 09, 2011
In service composition environments, users and service entity hosts are usually geographically distributed. Therefore, response performance might be poor when users invoke services that are physically far from them. Such issues are difficult to solve with traditional caching technologies from the content delivery network domain because service providers do not always allow their service entities to be copied to all service entity hosts. In this paper, we deal with the service invocation control problem considering the above issues. First, we formally model the service invocation problem in service composition environments. Then we design several dynamic service invocation control mechanisms to improve the response performance of atomic services and composite services. The evaluation results show that (1) the mechanism for atomic services that considers both potential users for most service invocation requests and potential users for continuous requests best improves the response performance; (2) the mechanism for composite services that considers the group characteristics of atomic services improves the response performance more than the other mechanisms; and (3) our proposed dynamic mechanisms provide stable response performance from the perspective of users.
-
Akira Yamada, Masanori Hara, Yutaka Miyake
Article type: Regular Paper
Subject area: Network Security
2011 Volume 19 Pages 62-73
Published: 2011
Released on J-STAGE: February 09, 2011
Web tracking sites, or Web bugs, are a potential but serious threat to users' privacy during Web browsing. Web sites and their associated advertising sites surreptitiously gather the profiles of visitors and may abuse or improperly expose them, even if visitors are unaware their profiles are being utilized. To block such sites in a corporate network, most companies employ filters that rely on blacklists, but these lists are insufficient. In this paper, we propose Web tracking site detection and blacklist generation based on temporal link analysis. Our proposal analyzes traffic at the network gateway so that it can monitor all tracking sites in the administrative network. The proposed algorithm constructs a graph between sites and their visit times in order to characterize each site. Then, the system classifies suspicious sites using machine learning. We confirm that individual public blacklists contain only 22-70% of the known tracking sites. The machine learning approach can identify the blacklisted sites with a true positive rate of 62-73%, which is more accurate than any single blacklist. Although the learning algorithm falsely identified 15% of unlisted sites, 96% of these were verified by manual labeling to be unknown tracking sites. These unknown tracking sites are good candidates for entries in a new blacklist.
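The abstract does not spell out the graph features or the learner; the following toy sketch only illustrates the pipeline shape (per-site temporal features observed at the gateway, seed labels from a public blacklist, a supervised classifier). The feature choices and values are hypothetical.

```python
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-site rows: [embedding first-party sites, mean pages seen
# per visit, minimum revisit gap in seconds]
X = [[120, 40, 1.0],    # embedded on many sites, revisited constantly
     [90, 35, 2.0],
     [2, 1, 3600.0],    # seen on one site, rarely
     [1, 2, 7200.0]]
y = [1, 1, 0, 0]        # seed labels taken from a public blacklist

clf = RandomForestClassifier(random_state=0).fit(X, y)
print(clf.predict([[100, 30, 1.5]]))  # [1] -> candidate tracking site
```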
-
Ryosuke Sato, Kohei Suenaga, Naoki Kobayashi
Article type: Recommended Paper
Subject area: Theory of Programs
2011 Volume 19 Pages 74-87
Published: 2011
Released on J-STAGE: February 09, 2011
Suenaga et al. have developed a type-based framework for automatically translating tree-processing programs into stream-processing ones. The key ingredient of the framework was the use of ordered linear types to guarantee that a tree-processing program traverses an input tree just once in depth-first, left-to-right order (so that the input tree can be read from a stream). Their translation, however, sometimes introduces redundant buffering of input data. This paper extends their framework by introducing ordered, non-linear types in addition to ordered linear types. The resulting transformation framework reduces the redundant buffering, generating more efficient stream-processing programs.
-
Ritsu Nomura, Masahiro Kuroda, Tadanori Mizuno
Article type: Regular Paper
Subject area: Wireless/Mobile Networks
2011 Volume 19 Pages 88-102
Published: 2011
Released on J-STAGE: March 09, 2011
Cognitive radio (CR) technology has been proposed to improve bandwidth efficiency and the quality of service (QoS) of heterogeneous wireless networks comprising varied types of radio systems. As CR network systems grow, network security has been raised as an area of concern; the topic has yet to be fully considered, and no suitable authentication methods have been identified. In this paper, we propose a radio-free mutual authentication protocol for the CR network. The protocol, named EAP-CRP, adopts the location information of a mobile terminal as a shared secret for authentication. EAP-CRP is designed to satisfy the requirements of wireless network security, namely confidentiality, integrity, and availability, and is realized as a lightweight and quick-responding mutual authentication protocol.
-
Kriengsak Treeprapin, Akimitsu Kanzaki, Takahiro Hara, Shojiro Nishio
Article type: Regular Paper
Subject area: Wireless/Mobile Networks
2011 Volume 19 Pages 103-117
Published: 2011
Released on J-STAGE: March 09, 2011
In this paper, we propose an extended version of our previous mobile sensor control method, DATFM (Data Acquisition and Transmission with Fixed and Mobile node). DATFM uses two types of sensor nodes, fixed nodes and mobile nodes. The data acquired by nodes are accumulated on a fixed node before being transferred to the sink node. The extended method, named DATFM/DF (DATFM with deliberate Deployment of Fixed nodes), strategically deploys sensor nodes based on an analysis of the performance of DATFM in order to improve the efficiency of sensing and data gathering. We also conduct simulation experiments to evaluate the performance of DATFM/DF.
-
Hajime Tazaki, Rodney Van Meter, Ryuji Wakikawa, Keisuke Uehara, Jun M ...
Article type: Regular Paper
Subject area: Wireless/Mobile Networks
2011 Volume 19 Pages 118-128
Published: 2011
Released on J-STAGE: March 09, 2011
MANET for NEMO (MANEMO) is a new type of network that integrates multi-hop mobile wireless networks with global connectivity provided by Network Mobility (NEMO). Two factors limit the scalability of MANEMO: the volatility of topologically correct global addresses, and excessive traffic load caused by inefficient use of nested tunnels and the consequent redundant routing of packets. We propose NAT-MANEMO, which solves both problems by applying NAT for some mobile router addresses, bypassing tunnel nesting. This approach retains global addresses for mobile end nodes, preserving application transparency, and requires only minimal modification to existing specifications. Our ideas are evaluated using simulation and a proof-of-concept implementation. The simulation shows that the additional signaling overhead for the route optimization introduced by our proposal is negligible compared to the bandwidth of an IEEE 802.11 link. The implementation confirms that route optimization reduces latency and improves throughput.
-
Hiroki Noguchi, Tomoya Takagi, Koji Kugata, Shintaro Izumi, Masahiko Y ...
Article type: Regular Paper
Subject area: Mobile Computing
2011 Volume 19 Pages 129-140
Published: 2011
Released on J-STAGE: March 09, 2011
We propose a microphone array network that realizes ubiquitous sound acquisition. Several nodes, each with 16 microphones, are connected to form a large-scale sound acquisition system that carries out voice activity detection (VAD), sound source localization, and sound enhancement. The three operations are distributed among nodes. Using the distributed network, we produce a low-traffic, data-intensive array network. To manage node power consumption, VAD is implemented; the system uses little power when speech is not active. For sound localization, a network-connected multiple signal classification (MUSIC) algorithm is used. The experimental result of the sound-source enhancement shows a signal-to-noise ratio (SNR) improvement of 7.75 dB using 112 microphones. Network traffic is reduced by 99.11% when using 1,024 microphones.
-
Kenji Ito, Noriyoshi Suzuki, Satoshi Makido, Hiroaki Hayashi
Article type: Regular Paper
Subject area: ITS
2011 Volume 19 Pages 141-152
Published: 2011
Released on J-STAGE: March 09, 2011
The present study investigates a Medium Access Control (MAC) protocol for reliable inter-vehicle communications (IVC) to support safe driving with the goal of reducing road traffic accidents. A number of studies have evaluated the performance of the Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) protocol. However, the communication quality provided by the CSMA/CA protocol is seriously degraded by the hidden terminal problem in IVC. Therefore, we propose a new MAC protocol, referred to as Periodic Broadcast-Timing Reservation Multiple Access (PB-TRMA), which can autonomously control transmission timing and avoid packet collisions by enhancing the Network Allocation Vector (NAV) for periodic broadcast communications. The simulation results show that the proposed protocol can resolve the hidden terminal problem and mitigate data packet collisions. Moreover, the behavior of PB-TRMA is similar to that of Time-Division Multiple Access (TDMA). In addition, we show that two procedures, namely, packet collision retrieval and hidden terminal detection, are essential ingredients in PB-TRMA in order to achieve high quality performance.
-
Katsunari Yoshioka, Yoshihiko Hosobuchi, Tatsunori Orii, Tsutomu Matsu ...
Article type: Regular Paper
Subject area: Network Security
2011 Volume 19 Pages 153-168
Published: 2011
Released on J-STAGE: March 09, 2011
The use of public Malware Sandbox Analysis Systems (public MSASs), which receive online submissions of possibly malicious files or URLs from arbitrary users, analyze their behavior by executing or visiting them in a testing environment (i.e., a sandbox), and send analysis reports back to the user, has increased in popularity. Consequently, anti-analysis techniques have also evolved from known technologies like anti-virtualization and anti-debugging to the detection of specific sandboxes by checking their unique characteristics, such as the product ID of their OS or the use of a certain Dynamic Link Library (DLL) in a particular sandbox. In this paper, we point out yet another important characteristic of the sandboxes, namely, their IP addresses. In public MSASs, the sandbox is often connected to the Internet in order to properly observe malware behavior, as modern malware communicates with remote hosts on the Internet for various reasons, such as receiving command and control (C&C) messages and files for updates. We explain and demonstrate that the IP address of an Internet-connected sandbox can be easily disclosed by an attacker who submits a decoy sample dedicated to this purpose. The disclosed address can then be shared among attackers, blacklisted, and used against the analysis system, for example, to conceal potentially malicious behavior of malware. We call the method Network-based Sandbox Detection by Decoy Injection (NSDI). We conducted case studies with 15 representative existing public MSASs, selected from 33 online malware analysis systems through a careful screening process, and confirmed that hidden behavior of the malware samples was successfully concealed from all 15 analysis systems by NSDI. In addition, we found that the background analysis activity behind these systems can also be revealed by NSDI if the samples are shared among the systems without careful consideration. Moreover, about three months after our first case study, a real-world NSDI attack against several public MSASs was reported.
-
Keisuke Takemori, Takahiro Sakai, Masakatsu Nishigaki, Yutaka Miyake
Article type: Regular Paper
Subject area: Network Security
2011 Volume 19 Pages 169-179
Published: 2011
Released on J-STAGE: April 06, 2011
As a bot communicates with a malicious controller over normal or encrypted channels and updates its code frequently, it is difficult to detect an infected personal computer (PC) using a signature-based intrusion detection system (IDS) or an antivirus system (AV). Because the control and attack packets sent by the bot process are independent of user operation, a behavior monitor is effective in detecting anomalous communication. In this paper, we propose a bot detection technique that checks outbound packets against destination-based whitelists. If any outbound packet during a non-operating period does not match the whitelists, the PC is considered to be infected by a bot. The whitelists are a set of legitimate IP addresses (IPs) and/or domain names (DNs). We implement the proposed system as a host-based detector and evaluate its false negative (FN) and false positive (FP) performance.
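A minimal sketch of the destination-based whitelist check described above (the packet capture, the whitelist construction, and the user-operation detection are all outside this snippet and are assumptions here):

```python
def is_bot_suspect(dst_ip, dst_domain, whitelist_ips, whitelist_domains,
                   user_active):
    """During non-operating periods, flag the host if an outbound packet's
    destination matches neither the IP whitelist nor the domain whitelist."""
    if user_active:        # only idle-time traffic is checked in this sketch
        return False
    return dst_ip not in whitelist_ips and dst_domain not in whitelist_domains

whitelist_ips = {"192.0.2.10"}
whitelist_domains = {"ntp.example.org", "update.example.com"}
print(is_bot_suspect("203.0.113.5", "evil.example.net",
                     whitelist_ips, whitelist_domains, user_active=False))  # True
```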
-
Nobutaka Kawaguchi, Hiroshi Shigeno, Ken'ichi Okada
Article type: Regular Paper
Subject area: Network Security
2011 Volume 19 Pages 180-189
Published: 2011
Released on J-STAGE: April 06, 2011
In this paper, we propose a new distributed hit-list worm detection method: the Anomaly Connection Tree Method with Distributed Sliding Window (ACTM-DSW). ACTM-DSW employs multiple distributed network Intrusion Detection Systems (IDSs), each of which monitors a small portion of an enterprise network. In ACTM-DSW, worm propagation trees are detected by using a sliding time window. More precisely, the distributed IDSs in ACTM-DSW cooperatively detect tree structures composed of the worm's infection connections that have been made within a time window. Through computer-based simulations, we demonstrate that ACTM-DSW outperforms an existing distributed worm detection method, called d-ACTM/VT, for detecting worms whose infection intervals are not constant, but rather have an exponential or uniform distribution. In addition, we implement the distributed IDSs on Xen, a virtual machine environment, and demonstrate the feasibility of the proposed method experimentally.
-
Wei Li, Xiaojuan Wu, Hua-An Zhao
Article type: Regular Paper
Subject area: Image Processing
2011 Volume 19 Pages 190-200
Published: 2011
Released on J-STAGE: April 06, 2011
Now that video surveillance systems are widely used, the capability of extracting moving objects and estimating moving-object density from video sequences is indispensable for these systems. This paper proposes new techniques for crowded objects motion analysis (COMA) to deal with crowded scenes, consisting of three parts: background removal, foreground segmentation, and crowded-object density estimation. To obtain optimal foregrounds, a combination of Lucas-Kanade optical flow and Gaussian background subtraction is proposed. For foreground segmentation, we put forward an optical flow clustering approach, which segments different crowded object flows, and then a block absorption approach to deal with the small blocks produced during clustering. Finally, we extract a set of 15 features from the foreground flows and estimate the density of each foreground flow. We employ self-organizing maps to reduce the dimensions of the feature vector and to act as the final classifier. Experimental results show that the proposed technique is useful and efficient.
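As a rough sketch of the foreground-extraction stage (Gaussian background subtraction combined with Lucas-Kanade optical flow), the OpenCV fragment below keeps only feature points that both move and fall inside the foreground mask. The file name, parameter values, and fusion rule are illustrative assumptions, not the paper's.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("crowd.avi")            # hypothetical input sequence
bg = cv2.createBackgroundSubtractorMOG2()      # Gaussian-mixture background model
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    fg_mask = bg.apply(frame)                  # background removal
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                  qualityLevel=0.01, minDistance=5)
    if pts is not None:
        # Lucas-Kanade optical flow on the tracked corners
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
        moving = [p for p, n, s in zip(pts, nxt, status)
                  if s[0] == 1 and np.linalg.norm(n - p) > 1.0
                  and fg_mask[int(p[0][1]), int(p[0][0])] > 0]
    prev_gray = gray
```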
-
Xavier Olive, Hiroshi Nakashima
Article type: Regular Paper
Subject area: Algorithm Theory
2011 Volume 19 Pages 201-210
Published: 2011
Released on J-STAGE: May 11, 2011
In this paper, we discuss variable and value symmetries in distributed constraint reasoning and efficient methods to represent and propagate them in distributed environments. Unlike commonly used centralised methods, which detect symmetries according to their global definition, we suggest defining them at the individual constraint level, then define operations on those symmetries in order to propagate them through the depth-first search tree that is generated in efficient distributed constraint reasoning algorithms. In our algorithm, we represent constraints (or utility functions) by a list of costs: while the usual representation lists one cost per assignment, we drastically reduce the size of that list by keeping only one cost per equivalence class of assignments. In practice, for a constraint with n symmetric variables defined on a domain of n symmetric values, this approach cuts down the size of the list of costs from n^n to p(n) (the partition function of n), i.e., from 10^10 to 42 when n = 10. We devised algorithms to process the sparse representations of utility functions and to propagate them along with symmetry information among distributed agents. We implemented this new representation of constraints and tested it with the DPOP algorithm on distributed graph colouring problems, which are rich in symmetries. Our evaluation shows that in 19% of execution instances we reduce the volume of communication tenfold, while no significant overhead appears in non-symmetrical executions. These results open serious perspectives on bounding memory and communication bandwidth consumption in some subclasses of distributed constraint reasoning problems.
-
Yuko Murayama
2011 Volume 19 Pages 211
Published: 2011
Released on J-STAGE: July 06, 2011
-
Taro Yamamoto, Naoko Chiba, Fumihiko Magata, Katsumi Takahashi, Naoya ...
Article type: Regular Paper
Subject area: Anshin
2011 Volume 19 Pages 212-220
Published: 2011
Released on J-STAGE: July 06, 2011
“Anshin” is a Japanese term for an emotion that is difficult to translate because it is vague, varies from person to person, and is subjective. It means something like “a feeling of contentment.” The demand for Internet use with “Anshin” is high, and we believe that both the emotion and the demand could be universal. To study “Anshin,” we conducted group interviews as our first step, obtaining 95 cases of “Anshin” and 157 cases of anxiety from 28 people. From the results, we found that studying anxiety is valuable: anxiety is a kind of opposite concept to “Anshin,” and controlling it leads to a kind of “Anshin.” To discuss this, we constructed a model of the process of anxiety generation and selected candidates for the related elements. After investigating the obtained cases, we produced a questionnaire on Internet anxieties in preparation for their evaluation.
-
Isabella Hatak, Dietmar Roessl
Article type: Regular Paper
Subject area: Anshin
2011 Volume 19 Pages 221-230
Published: 2011
Released on J-STAGE: July 06, 2011
Empirical evidence has consistently shown that trust facilitates coordination, reduces conflicts and enhances longevity within cooperative relationships. Conditions leading to trust have been considered repeatedly in research papers. Whereas the link between reputation and trust, for example, has been extensively researched, the study of relational competence as a determinant of trust has largely been ignored. Although some academic articles naming the impact of competence on trust exist, the study of the mode of action of relational competence in the trust-developing process is underdeveloped. Therefore, the main purpose of this paper is to analyse the relationship between relational competence and trust. For this reason, a laboratory experiment was conducted. In its conclusion, the paper presents the empirically confirmed strong correlation between relational competence and trust within cooperative relationships by taking into account situational and personal factors.
-
Stephen Marsh, Pamela Briggs, Khalil El-Khatib, Babak Esfandiari, John ...
Article type: Regular Paper
Subject area: Anshin
2011 Volume 19 Pages 231-252
Published: 2011
Released on J-STAGE: July 06, 2011
Device Comfort is a concept that uses an enhanced notion of trust to allow a personal (likely mobile) device to better reason about the state of interactions and actions between it, its owner, and the environment. This includes allowing a better understanding of how to manage information in fine-grained context as well as addressing the personal security of the user. To do this, the device forms a unique relationship with the user, focusing on the device's judgment of the user in context. This paper introduces and defines Device Comfort, including an examination of what makes up the comfort of a device in terms of trust and other considerations, and discusses the uses of such an approach. It also presents some ongoing developmental work on the concept and an initial formal model of Device Comfort, its makeup, and its behaviour.
-
Toshihiko Takemura, Hideyuki Tanaka, Kanta Matsuura
Article type: Regular Paper
Subject area: Anshin
2011 Volume 19 Pages 253-262
Published: 2011
Released on J-STAGE: July 06, 2011
In this paper, we investigate the awareness gaps between information security managers and workers with regard to the effectiveness of organizational information security measures in Japanese organizations by analyzing micro data from two Web-based surveys of information security managers and workers. We find that there are no awareness gaps between information security managers and workers with regard to the effects of organizational information security measures in large companies; however, such awareness gaps tend to exist in small and medium-sized companies. Next, we discuss how to bridge the gaps. We propose that information security managers implement two-sided organizational measures by communicating with workers in their organizations.
-
Hiroyuki Sato, Akira Kubo
Article type: Regular Paper
Subject area: Trust Models and Trust Management
2011 Volume 19 Pages 263-273
Published: 2011
Released on J-STAGE: July 06, 2011
In modern information service architectures, many servers are involved in building a service, and each server must rely on information provided by other servers, thereby creating trust. This trust relation is central to building services in distributed environments and is closely related to information security. Almost every standard on information security is concerned with the internal control of an organization, and particularly with authentication. In this paper, we focus on a trust model of certificate authentication. Conventionally, a trust model of certificates is defined as the validation of chains of certificates. However, today this trust model does not function well because of the fragmentation problem caused by the complexity of paths and by fine-grained requirements on security levels. In this paper, we propose “dynamic path validation” together with another trust model of PKI for controlling this situation. First, we propose a Policy Authority. The Policy Authority assigns a level of compliance (LoC) to CAs in its trust domain. The LoC is evaluated in terms of the certificate common criteria of the Policy Authority. Moreover, it controls path building with consideration of the LoC. Therefore, we can flexibly evaluate levels of CP/CPSs on a single server. In a typical bridge model, we would need as many bridge CAs as the number of required levels of CP/CPSs. In our framework, instead, we can do the same task on a single server, which saves the cost of maintaining lists of trust anchors at multiple levels.
-
Andreas Fuchs, Sigrid Gürgens, Carsten Rudolph
Article type: Regular Paper
Subject area: Trust Models and Trust Management
2011 Volume 19 Pages 274-291
Published: 2011
Released on J-STAGE: July 06, 2011
Historically, various different notions of trust can be found, each addressing particular aspects of ICT systems, e.g., trust in electronic commerce systems based on reputation and recommendation, or trust in public key infrastructures. While these notions support the understanding of trust establishment and degrees of trustworthiness in their respective application domains, they are insufficient when addressing the more general notion of trust needed when reasoning about security in ICT systems. Furthermore, their purpose is not to elaborate on the security mechanisms used to substantiate trust assumptions, and thus they do not support reasoning about security in ICT systems. In this paper, a formal notion of trust is presented that expresses trust requirements from the view of the different entities involved in the system and that makes it possible to relate, in a step-by-step process, high-level security requirements to those trust assumptions that cannot be further substantiated by security mechanisms, thus supporting formal reasoning about system security properties. Integrated into the Security Modeling Framework SeMF, this formal definition of trust can support security engineering processes and formal validation and verification by enabling reasoning about security properties with respect to trust.
-
Daniel Lomsak, Jay Ligatti
Article type: Regular Paper
Subject area: Trust Models and Trust Management
2011 Volume 19 Pages 292-306
Published: 2011
Released on J-STAGE: July 06, 2011
Complex software-security policies are difficult to specify, understand, and update. The same is true for complex software in general, but while many tools and techniques exist for decomposing complex general software into simpler reusable modules (packages, classes, functions, aspects, etc.), few tools exist for decomposing complex security policies into simpler reusable modules. The tools that do exist for modularizing policies either encapsulate entire policies as atomic modules that cannot be decomposed or allow fine-grained policy modularization but require expertise to use correctly. This paper presents PoliSeer, a GUI-based tool designed to enable users who are not expert policy engineers to flexibly specify, visualize, modify, and enforce complex runtime policies on untrusted software. PoliSeer users rely on expert policy engineers to specify universally composable policy modules; PoliSeer users then build complex policies by composing those expert-written modules. This paper describes the design and implementation of PoliSeer and a case study in which we have used PoliSeer to specify and enforce a policy on PoliSeer itself.
-
Ayako Komatsu, Tsutomu Matsumoto
Article type: Regular Paper
Subject area: Privacy and Reputation Management
2011 Volume 19 Pages 307-316
Published: 2011
Released on J-STAGE: July 06, 2011
In contemporary society, many services are offered electronically, and electronically available personal identification (eID) is used to identify the users of these services. e-Money, a potential medium that contains an eID, is widely used in Japan. Service providers encounter certain limitations both when collecting the attribute values related to such eIDs and when using them for analysis because of privacy concerns. A survey was conducted to clarify the trade-offs among privacy, economic value, benefit, and services that should be considered before deploying e-Money. Regression analysis and conjoint analysis were performed. The results of the analyses of the questionnaires revealed a preference for economic value, service, and privacy, in that order, even though many people were anxious about privacy.
-
Leonardo A. Martucci, Sebastian Ries, Max Mühlhäuser
Article type: Regular Paper
Subject area: Privacy and Reputation Management
2011 Volume 19 Pages 317-331
Published: 2011
Released on J-STAGE: July 06, 2011
We propose an identity management system that supports role-based pseudonyms that are bound to a given set of services (service contexts) and support the use of reputation systems. Our proposal offers a solution for the problem of providing privacy protection and reputation mechanisms concurrently. The trust information used to evaluate the reputation of users is dynamic and associated to their pseudonyms. In particular, our solution does not require the support or assistance from central authorities during the operation phase. Moreover, the presented scheme provides inherent detection and mitigation of Sybil attacks. Finally, we present an attacker model and evaluate the security and privacy properties and robustness of our solution.
-
Pern Hui Chia, Georgios Pitsilis
Article type: Regular Paper
Subject area: Privacy and Reputation Management
2011 Volume 19 Pages 332-344
Published: 2011
Released on J-STAGE: July 06, 2011
The majority of recommender systems predict user preferences by relating users with similar attributes or taste. Prior research has shown that trust networks improve the accuracy of recommender systems, predominantly using algorithms devised by individual researchers. In this work, omitting any specific trust inference algorithm, we investigate how useful explicit trust relationships are for selecting the best neighbors or predictors to generate accurate recommendations. We conducted a series of evaluations using data from Epinions.com, a popular collaborative reviewing system. We find that, for highly active users, using trusted sources as predictors does not give more accurate recommendations than the classic similarity-based collaborative filtering scheme, except for improving the precision of recommending items that users like. This cautions against the intuition that inputs from trusted sources are always more accurate or helpful. The use of explicit trust links, however, provides a slight gain in prediction accuracy for the less active users. These findings highlight the need and potential to adapt the use of trust information for different groups of users, and to better understand trust when employing it in recommender systems. Parallel to the trust criterion, we also investigated the effect of requiring candidate predictors to have an equal or higher experience level.
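To make the comparison concrete, here is a toy sketch of the two predictor-selection policies being contrasted: weighting neighbours by rating similarity versus restricting to explicitly trusted users. The data layout and weights are invented for illustration.

```python
def weighted_prediction(neighbour_ratings, weights, item):
    """Weighted average of the neighbours' ratings for `item`, skipping
    neighbours who have not rated it or carry zero weight."""
    pairs = [(w, r[item]) for w, r in zip(weights, neighbour_ratings)
             if item in r and w > 0]
    if not pairs:
        return None
    return sum(w * r for w, r in pairs) / sum(w for w, _ in pairs)

neighbours = [{"a": 4, "b": 2}, {"a": 5}, {"b": 1}]
similarity = [0.9, 0.4, 0.7]   # e.g., rating-overlap similarity to the user
trust      = [1.0, 1.0, 0.0]   # 1.0 iff the user explicitly trusts them

print(weighted_prediction(neighbours, similarity, "a"))  # similarity-based CF
print(weighted_prediction(neighbours, trust, "a"))       # trusted predictors only
```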
-
Christian Damsgaard Jensen, Povilas Pilkauskas, Thomas Lefévre
Article type: Regular Paper
Subject area: Privacy and Reputation Management
2011 Volume 19 Pages 345-363
Published: 2011
Released on J-STAGE: July 06, 2011
The Wikipedia is a web-based encyclopedia, written and edited collaboratively by Internet users. The Wikipedia has an extremely open editorial policy that allows anybody to create or modify articles. This has promoted broad and detailed coverage of subjects, but has also introduced problems relating to the quality of articles. The Wikipedia Recommender System (WRS) was developed to help users determine the credibility of articles based on feedback from other Wikipedia users. The WRS implements a collaborative filtering system with trust metrics, i.e., it provides a rating of articles that emphasizes feedback from recommenders that the user has agreed with in the past. This exposes the problem that most recommenders are not equally competent in all subject areas. The first WRS prototype did not include an evaluation of the areas of expertise of recommenders, so the trust metric used in the article ratings reflected the average competence of recommenders across all subject areas. We have now developed a new version of the WRS, which evaluates the expertise of recommenders within different subject areas. In order to do this, we need a way to classify the subject area of all the articles in the Wikipedia. In this paper, we examine different ways to classify the subject area of a Wikipedia article according to well-established knowledge classification schemes. We identify a number of requirements that a classification scheme must meet in order to be useful in the context of the WRS and present an evaluation of four existing knowledge classification schemes with respect to these requirements. This evaluation helped us identify a classification scheme, which we have implemented in the current version of the Wikipedia Recommender System.
-
Marcin Seredynski, Pascal Bouvry, Dominic Dunlop
Article type: Regular Paper
Subject area: Security and Trust
2011 Volume 19 Pages 364-377
Published: 2011
Released on J-STAGE: July 06, 2011
The necessary cooperation in packet forwarding by wireless mobile ad hoc network users can be achieved if nodes create a distributed cooperation enforcement mechanism. One of the most significant roles in this mechanism is played by a trust system, which enables forwarding nodes to distinguish between cooperative (therefore trustworthy) and selfish (untrustworthy) nodes. As shown in this paper, the performance of the system depends on the data classes describing the forwarding behaviour of nodes, which are used for the evaluation of their level of cooperation. The paper demonstrates that partitioning such data into personal and general classes can help create better protection against clique-building among nodes. Personal data takes into account the status of packets originated by the node itself, while general data considers the status of packets originated by other nodes. Computational experiments demonstrate that, in the presence of a large number of selfish and colluding nodes, prioritising the personal data improves the performance of cooperative nodes and creates a better defence against colluding free-riders.
-
Ahmad Bazzi, Yoshikuni Onozato
Article type: Regular Paper
Subject area: Security and Trust
2011 Volume 19 Pages 378-388
Published: 2011
Released on J-STAGE: July 06, 2011
Computers connected to the Internet are a target for a myriad of complicated attacks. Companies can use sophisticated security systems to protect their computers; however, average users usually rely on built-in or personal firewalls to protect their computers while avoiding the more complicated and expensive alternatives. In this paper we study the feasibility, from the network performance point of view, of a VM configured as an integrated security appliance for personal computers. After discussing the main causes of network performance degradation, we use netperf on the host computer to find the network performance overhead when using a virtual appliance. We are mainly concerned with the bandwidth and the latency that would limit a network link. We compared the bandwidth and latency of this integrated security virtual appliance with current market products, and in both cases the performance of the virtual appliance was excellent compared with its hardware counterparts. This security virtual appliance, for example, allows more than an 80 Mbps data transfer rate for individual users, while typical security appliances allow only 150 Mbps for small office users. In brief, our tests show that the network performance of a security virtual appliance is on par with the current security appliances available in the market; therefore, this solution is quite feasible.
-
Masahiro Koyama
Article type: Regular Paper
Subject area: Security and Trust
2011 Volume 19 Pages 389-399
Published: 2011
Released on J-STAGE: July 06, 2011
The purpose of this paper is to elucidate the formation of “trust” in Internet society, in the context of the relationship between the real social system and “trust” and between Internet space and “trust.” Although the paper is based on the dualities of real-world “system trust” versus Internet “technological trust” and real-world “human trust” versus Internet “personality trust,” we intend to discuss trust in the Internet space as “communications trust” (a duality of system and personality reliabilities), so that trust in the Internet space is not limited to a personal issue of names and anonymity set against trust in the space's system. This study suggests that trust in the Internet space lies in the joint composition of technology (civilization) and society (culture) and clearly exists as a complex of security (info-tech) and humanity (info-arts). Based on the above idea, the final purpose of this study is to show a path towards forming new human trust (internal controls) in “information security,” fabricated from the viewpoint of human mind controls (laws, morality, ethics, and custom) and information engineering. This course signifies the “security arts” (the study of trust) that incorporate information arts and info-tech in the Internet space.
-
Shuichi Oikawa, Jin Kawasaki
Article type: Regular Paper
Subject area: Real-Time OSes
2011 Volume 19 Pages 400-410
Published: 2011
Released on J-STAGE: August 10, 2011
This paper proposes simultaneous virtual-machine logging and replay. Logging and replay are performed simultaneously on the same machine through the use of two virtual machines, one for the primary execution and the other for the backup execution. While the primary execution produces the execution history, the backup execution consumes the history by replaying it. The size of the execution log can thus be limited to a certain size, so huge storage devices become unnecessary. We developed such a logging and replaying feature in a VMM. It can log and replay the execution of the Linux operating system. Our experimental results show that the overhead of the primary execution is only fractional.
-
Megumi Ito, Shuichi Oikawa
Article type: Regular Paper
Subject area: Real-Time OSes
2011 Volume 19 Pages 411-420
Published: 2011
Released on J-STAGE: August 10, 2011
This paper describes our approach to making the Gandalf Virtual Machine Monitor (VMM) interruptible. Gandalf is designed to be a lightweight VMM for use in embedded systems. Hardware interrupts are notified directly to its guest operating system (OS) kernel without the intervention of the VMM, and the VMM only processes the exceptions caused by the guest kernel. Since the VMM processes those exceptions with interrupts disabled, a detailed performance analysis using PMCs (Performance Monitoring Counters) revealed that the duration for which interrupts are disabled is rather long. By making Gandalf interruptible, we can make VMM-based systems more suitable for embedded systems. We analyzed the requirements for making Gandalf interruptible, and designed and implemented mechanisms to achieve this. The experimental results show that making Gandalf interruptible significantly reduces the execution time spent with interrupts disabled while not impacting performance.
-
Sayaka Akioka, Yuki Ohno, Midori Sugaya, Tatsuo Nakajima
Article type: Regular Paper
Subject area: Load balancing and scheduling
2011 Volume 19 Pages 421-429
Published: 2011
Released on J-STAGE: August 10, 2011
This paper proposes SPLiT (Scalable Performance Library Tool), a methodology for improving the performance of applications on multicore processors through CPU and cache optimizations on the fly. SPLiT is designed to relieve the difficulty of performance optimization of parallel applications on multicore processors: all programmers have to do to benefit from SPLiT is add a few library calls to let SPLiT know which part of the application should be analyzed. This simple but compelling optimization library helps enrich pervasive servers on a multicore processor, a strong candidate architecture for information appliances in the near future. SPLiT analyzes and predicts application behaviors based on CPU cycle counts and cache misses. According to the analysis and predictions, SPLiT tries to allocate processes and threads sharing data onto the same physical cores in order to enhance cache efficiency. SPLiT also tries to separate cache-effective code from code with more cache misses in order to avoid cache pollution, which results in performance degradation. Empirical experiments assuming web applications validated the efficiency of SPLiT: the performance of the web application improved by 26%.
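The core co-scheduling idea (put threads that share data on the same physical core so they share its cache, and keep cache-polluting code elsewhere) can be mimicked by hand on Linux; the sketch below uses CPU affinity as a stand-in for SPLiT's automatic placement. The PIDs and the core-assignment policy are hypothetical.

```python
import os

def pin_to_core(pid, core):
    """Restrict the given process (or thread id) to one CPU core (Linux-only)."""
    os.sched_setaffinity(pid, {core})

data_sharing_pids = [1234, 1235]   # hypothetical workers sharing a buffer
cache_polluter_pids = [1236]       # streaming code with many cache misses

for pid in data_sharing_pids:
    pin_to_core(pid, 0)            # same core -> they reuse each other's cache
for pid in cache_polluter_pids:
    pin_to_core(pid, 1)            # isolate the polluter on a different core
```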
-
Mebae Ushida, Yutaka Kawai, Kazuki Yoneyama, Kazuo Ohta
Article type: Regular Paper
Subject area: Security Infrastructure
2011 Volume 19 Pages 430-440
Published: 2011
Released on J-STAGE: September 07, 2011
Designated Verifier Signature (DVS) guarantees that only a verifier designated by the signer can verify the “validity of a signature.” In this paper, we propose a new variant of DVS: Proxiable Designated Verifier Signature (PDVS), in which the verifier can commission a third party (i.e., the proxy) to perform part of the verification. In the PDVS system, the verifier can reduce his computational cost by delegating part of the verification without revealing the validity of the signature to the proxy. In all DVS systems, the validity of a signature means that the signature satisfies two properties: (1) the signature is judged “accept” by a decision algorithm, and (2) the signature is confirmed to have been generated by the signer. In the PDVS system, the verifier can therefore commission the proxy to check only property (1). In the proposed PDVS model, we divide the verifier's secret keys into two parts: one is a key for performing the decision algorithm, and the other is a key for generating a dummy signature, which prevents a third party from being convinced of property (2). We also define security requirements for PDVS, and propose a PDVS scheme which satisfies all the security requirements we define.
-
Tetsuya Izu, Masahiko Takenaka, Masaya Yasuda
Article type: Regular Paper
Subject area: Security Infrastructure
2011 Volume 19 Pages 441-450
Published: 2011
Released on J-STAGE: September 07, 2011
Let 𝔾 be an additive group generated by an element G of prime order r. The discrete logarithm problem with auxiliary input (DLPwAI) is the problem of finding α on inputs G, αG, α^dG ∈ 𝔾 for a positive integer d dividing r-1. The infeasibility of DLPwAI ensures the security of some pairing-based cryptographic schemes. In 2006, Cheon proposed an algorithm for solving DLPwAI which works better than conventional algorithms. In this paper, we report our experimental results of Cheon's algorithm on a pairing-friendly elliptic curve defined over GF(3^127). Moreover, based on our experimental results, we estimate the cost required by Cheon's algorithm to solve DLPwAI on some pairing-friendly elliptic curves over a finite field of characteristic 3. Our estimation implies that DLPwAI on some pairing-friendly curves can be solved at reasonable cost when the optimal parameter d is chosen.
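For context, the standard complexity bound for Cheon's algorithm (a known result, not restated in the abstract) explains why a divisor d of r-1 matters; sketched in LaTeX with logarithmic factors suppressed:

```latex
% DLPwAI instance and Cheon's bound (d divides r-1), log factors suppressed
\[
  \text{Given } G,\; \alpha G,\; \alpha^{d}G \in \mathbb{G},\ \text{find } \alpha
  \;\text{in}\; O\!\bigl(\sqrt{r/d} + \sqrt{d}\,\bigr) \text{ group operations,}
\]
\[
  \text{versus } O\!\bigl(\sqrt{r}\,\bigr) \text{ for the plain DLP; the cost
  is minimized when } d \approx \sqrt{r}.
\]
```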
-
Tangtisanon Pikulkaew, Hiroaki Kikuchi
Article type: Regular Paper
Subject area: Network Security
2011 Volume 19 Pages 451-462
Published: 2011
Released on J-STAGE: September 07, 2011
Recently, Automated Trust Negotiation (ATN) has played an important role for two participants who want to automatically establish a trust relationship with each other in an open system so that they can receive services or information. For example, when a person wants to buy a product from a website, he needs to know whether or not the website can be trusted. In this scheme, both parties (i.e., the person and the website, in this example) automatically exchange their credentials and access control policies to ensure that the required policies of each party are met, after which the trust negotiation is established. Our proposed scheme allows both parties to learn whether or not they agree to establish a trust relationship; after the scheme is performed, no policy has been disclosed to the other party. In this paper, we provide the building blocks used to construct our proposed scheme and describe the basic ideas for hiding access control policies and for implementing a conditional transfer. We also define the steps of how our protocol works, with a numerical example. Moreover, we evaluate the computation cost of our scheme by mathematical analysis and by an implementation using a binary tree model of credentials and policies. Finally, we show that our scheme can be performed securely.
-
Hiroaki Kikuchi, Shuji Matsuo, Masato Terada
Article type: Regular Paper
Subject area: Network Security
2011 Volume 19 Pages 463-472
Published: 2011
Released on J-STAGE: September 07, 2011
A botnet is a network of compromised computers infected with malware that is controlled remotely via public communications media. Many attempts at botnet detection have been made, including heuristic analyses of traffic. In this study, we propose a new method for identifying independent botnets in the CCC Dataset 2009, the log of download servers observed by distributed honeypots, by applying Principal Component Analysis. Our main results include distinguishing four independent botnets when a year is divided into five phases.
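As a flavor of the approach, the sketch below projects per-server activity vectors onto principal components; servers driven by the same botnet form nearby clusters in the projected space. The features and numbers are toy stand-ins, not the CCC Dataset 2009.

```python
import numpy as np
from sklearn.decomposition import PCA

# Toy rows: download servers; columns: download counts in five time phases
X = np.array([[30, 2, 0, 41, 5],
              [28, 1, 0, 44, 6],
              [0, 55, 3, 2, 60],
              [1, 52, 4, 0, 58]], dtype=float)

coords = PCA(n_components=2).fit_transform(X)
print(coords)   # rows 0-1 and rows 2-3 land in two separate clusters
```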
-
Asaad Ahmed, Keiichi Yasumoto, Yukiko Yamauchi, Minoru Ito
Article type: Recommended Paper
Subject area: Wireless/Mobile Networks
2011 Volume 19 Pages 473-490
Published: 2011
Released on J-STAGE: October 05, 2011
Aiming to achieve sensing coverage for given Areas of Interest (AoI) over time at low cost in a People-Centric Sensing manner, we propose a concept of (α, T)-coverage of a target field, in which each point in the field is sensed by at least one mobile node with a probability of at least α during time period T. Our goal is to achieve (α, T)-coverage of a given AoI with a minimal set of mobile nodes. In this paper, we propose two algorithms: the inter-location algorithm, which selects a minimal number of mobile nodes from the nodes inside the AoI considering the distance between them, and the inter-meeting-time algorithm, which selects nodes according to the expected meeting time between them. To cope with the case where there is an insufficient number of nodes inside the AoI, we propose an extended algorithm that considers nodes both inside and outside the AoI. To improve the accuracy of the proposed algorithms, we also propose an updating mechanism that adapts the number of selected nodes based on their latest locations during the time period T. In our simulation-based performance evaluation, our algorithms achieved (α, T)-coverage with good accuracy for various values of α, T, AoI size, and moving probability.
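Restating the coverage criterion from the abstract in one formula (the sensing-region notation S_i(t) is ours, not the paper's):

```latex
% (alpha, T)-coverage: every point of the AoI is sensed with prob. >= alpha
\[
  \forall p \in \mathrm{AoI}:\quad
  \Pr\!\left[\,\exists\, i,\ \exists\, t \in [t_0,\, t_0 + T] :\;
  p \in S_i(t) \,\right] \;\ge\; \alpha ,
\]
where $S_i(t)$ denotes the region sensed by mobile node $i$ at time $t$.
```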