Journal of Information Processing
Online ISSN : 1882-6652
ISSN-L : 1882-6652
Volume 23, Issue 4
  • Nariyoshi Yamai
    Article type: Special Issue of Applications and the Internet in Conjunction with Main Topics of COMPSAC 2014
    2015 Volume 23 Issue 4 Pages 381
    Published: 2015
    Released on J-STAGE: July 15, 2015
    JOURNAL FREE ACCESS
    Download PDF (36K)
  • Michael Wisely, Sahra Sedigh Sarvestani, Ali R. Hurson
    Article type: Special Issue of Applications and the Internet in Conjunction with Main Topics of COMPSAC 2014
    Subject area: Invited Papers
    2015 Volume 23 Issue 4 Pages 382-391
    Published: 2015
    Released on J-STAGE: July 15, 2015
    JOURNAL FREE ACCESS
    Many modern mobile applications appeal to users because they grant access to a wealth of information anytime, anywhere. However, a number of obstacles stand between users and the data they seek. Firstly, mobile devices have limited access to energy sources. Devices should be able to access information without sacrificing hours of battery life. Secondly, users expect timely access to data. While respecting the energy limitations of devices, data must be quickly accessible. Data broadcasting has been proposed as a quick and efficient solution for providing users with the data they desire. Broadcasting disseminates data from a server in a way that is analogous to AM/FM radio or television broadcasts. Devices tune in to wireless channels to fetch data items from a broadcast. Several technical challenges must be addressed to ensure efficient and timely data access for clients. These include organizing, indexing, and accessing the broadcast data items. Despite these issues, broadcasting is a scalable and efficient method for disseminating data to mobile clients. In this paper, we describe related techniques and compare and contrast them with respect to response time and energy efficiency.
    Download PDF (987K)
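The trade-off this survey compares, response time versus energy use, can be illustrated with a minimal sketch. The simulation below is not from the paper: it assumes a flat cyclic broadcast (the client must stay awake until its item arrives) versus an idealized indexed broadcast where the client reads one small index bucket and then dozes until its item's slot.

```python
import random

def flat_access(num_items, arrivals, rng):
    """Flat broadcast: the client listens to every slot until its item
    appears, so access time and tuning (awake) time are equal."""
    waits = []
    for _ in range(arrivals):
        t = rng.randrange(num_items)        # slot the client tunes in at
        want = rng.randrange(num_items)     # item the client needs
        waits.append((want - t) % num_items + 1)
    return sum(waits) / arrivals

def indexed_tuning(num_items, bucket_size):
    """Idealized indexed broadcast: the client reads one index bucket,
    dozes, and wakes only for its item, so tuning time is small and
    independent of the broadcast length (illustrative assumption)."""
    return bucket_size + 1  # slots actually listened to
```

On a 100-item cycle the flat scheme keeps the client awake for about 50 slots on average, while the indexed sketch listens to only a handful of slots regardless of cycle length, which is the energy argument made in the survey.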
  • Marwa Elsayed, Mohammad Zulkernine
    Article type: Special Issue of Applications and the Internet in Conjunction with Main Topics of COMPSAC 2014
    Subject area: Invited Papers
    2015 Volume 23 Issue 4 Pages 392-401
    Published: 2015
    Released on J-STAGE: July 15, 2015
    JOURNAL FREE ACCESS
    Security is one of the most prominent challenges that hinder the acceleration of cloud adoption. Intrusion detection systems (IDSs) can be used to increase the security level of cloud environments. Therefore, the effectiveness of the IDS is a crucial issue for cloud security. However, the cloud presents new challenges and requirements, including scalability and adaptability, which effective IDSs need to address. Choosing the right deployment architecture significantly impacts the effectiveness of IDSs in the cloud. Additionally, robust IDSs need novel detection techniques to keep up with modern sophisticated attacks that target cloud environments. Hence, it is important to understand the advantages and limitations of different IDSs and how the deployment choice in cloud environments impacts the IDSs' effectiveness. This paper presents a novel classification scheme of the state-of-the-art of intrusion detection approaches in the cloud. This classification sheds light on the existing approaches with respect to the following aspects: deployment architecture and detection technique. We first classify the existing approaches based on their deployment architectures. Then, we present a comparative analysis of these approaches with respect to the detection techniques. We also provide detailed analysis of the strengths and weaknesses of existing approaches. The classification and analysis will help in the selection of the proper deployment architectures and detection techniques of IDSs in cloud environments.
    Download PDF (329K)
  • Sho Tsugawa, Hiroyuki Ohsaki
    Article type: Special Issue of Applications and the Internet in Conjunction with Main Topics of COMPSAC 2014
    Subject area: Invited Papers
    2015 Volume 23 Issue 4 Pages 402-410
    Published: 2015
    Released on J-STAGE: July 15, 2015
    JOURNAL FREE ACCESS
    Research on social network analysis (SNA) has been actively pursued. Most SNAs focus on either social relationship networks (e.g., friendship and trust networks) or social interaction networks (e.g., email and phone call networks). The social relationship network and the social interaction network of a group are expected to be closely related: for instance, people in the same community of a social relationship network should communicate with each other more frequently than with people in different communities. To the best of our knowledge, however, little is understood about such interaction locality in large-scale online social networks. This paper aims to bridge the gap between intuition about interaction locality and empirical evidence observed in large-scale social networks. We investigate the strength of interaction locality in large-scale social networks by analyzing three types of data: logs of mobile phone calls, email messages, and message exchanges in a social networking service. Our results show that strong interaction locality is observed in all three datasets, and suggest that the strength of interaction locality is invariant with regard to the scale of the community. Moreover, we discuss practical implications as well as possible applications.
    Download PDF (1498K)
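The core quantity, how often interactions stay inside a relationship-network community, can be computed with a few lines. This is a generic sketch, not the paper's exact measure: it assumes a community assignment per user and a list of interaction events.

```python
def interaction_locality(interactions, community):
    """Fraction of interaction events (e.g., calls, emails, messages) whose
    two endpoints belong to the same community of the relationship network."""
    same = sum(1 for u, v in interactions if community[u] == community[v])
    return same / len(interactions)

# Toy example: three users, two communities, four interaction events.
community = {'a': 0, 'b': 0, 'c': 1}
events = [('a', 'b'), ('a', 'c'), ('b', 'a'), ('a', 'b')]
locality = interaction_locality(events, community)
```

A value near 1 indicates strong interaction locality; the paper reports strong locality across all three datasets.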
  • Shinobu Saito, Jun Hagiwara, Tomoki Yagasaki, Katsuyuki Natsukawa
    Article type: Special Issue of Applications and the Internet in Conjunction with Main Topics of COMPSAC 2014
    Subject area: Requirements engineering
    2015 Volume 23 Issue 4 Pages 411-419
    Published: 2015
    Released on J-STAGE: July 15, 2015
    JOURNAL FREE ACCESS
    Prototyping practices are widely used. Requirements engineers develop screen prototypes with paper or HTML. However, feedback on the prototypes has limited effectiveness. Screen prototypes are mainly useful for reviewing only user interface requirements. To cope with this situation, we propose a requirements validation approach using models and prototyping (ReVAMP). This approach provides customers with a set of requirement models and a system prototype generation tool for trial use. A generated system prototype is implemented with both business application features and access control features. Thus, customers could give requirements engineers more practical feedback on requirements for not only a user interface but also other aspects of a target system. To evaluate the proposed models and tool, we introduce two business information system development projects in which the proposed approach was applied.
    Download PDF (3258K)
  • Hiroshi Yamamoto, Shigehiro Ano, Katsuyuki Yamazaki
    Article type: Special Issue of Applications and the Internet in Conjunction with Main Topics of COMPSAC 2014
    Subject area: Wireless/Mobile Networks
    2015 Volume 23 Issue 4 Pages 420-429
    Published: 2015
    Released on J-STAGE: July 15, 2015
    JOURNAL FREE ACCESS
    The user experience of network services built on large-scale distributed systems is markedly affected by the network condition (i.e., network latency) between a user terminal and a server. In a mobile environment, latency fluctuates because a mobile node on a cellular network frequently changes its access network when handover or offloading occurs as the user moves in the real world. Many researchers perform simulation studies of large-scale distributed services provided over mobile networks to reveal the impact of network conditions on service performance, so an evaluation model that reproduces realistic latency variation is attracting attention. However, existing studies have assumed only conditions in which the tendency of latency variation never changes. Therefore, we propose a new modeling method based on Markov regime switching that builds a realistic evaluation model capable of representing dynamic changes in the mobile network state. Furthermore, the effectiveness of the proposed modeling method is evaluated on an actual latency dataset collected while a cellular phone user moves around a wide area. With the recent spread of smartphones and tablets, Internet connectivity can be used over cellular and WiFi networks while the user travels by various kinds of transportation (e.g., train, car). As a typical example, we focus on the Yamanote Line, a well-known railway loop line used by a large number of commuters in Japan, and analyze a dataset measured while a mobile user rides the line to build the evaluation model. The evaluation results disclose whether the evaluation model constructed by the proposed modeling method can accurately estimate the dynamic variation of mobile network quality.
    Download PDF (1567K)
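The regime-switching idea can be sketched in a few lines. This is an illustrative two-regime model, not the paper's fitted one: a hidden Markov chain alternates between a "stable" and a "degraded" network state, and observed latency is Gaussian around each regime's mean.

```python
import random

def simulate_latency(steps, switch_prob, means, sigmas, seed=0):
    """Two-regime Markov switching sketch. switch_prob[r] is the probability
    of leaving regime r at each step; means/sigmas give the latency
    distribution per regime. All parameter values here are assumptions."""
    rng = random.Random(seed)
    regime, out = 0, []
    for _ in range(steps):
        if rng.random() < switch_prob[regime]:
            regime = 1 - regime                      # change network state
        out.append(max(0.0, rng.gauss(means[regime], sigmas[regime])))
    return out

# Stable regime: ~30 ms latency; degraded regime: ~200 ms.
series = simulate_latency(1000, (0.02, 0.05), (30.0, 200.0), (5.0, 30.0))
```

A model with fixed parameters (a single regime) cannot reproduce the bursts of high latency this produces, which is the limitation of prior evaluation models the paper targets.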
  • Hiroshi Yamamoto, Tatsuya Takahashi, Norihiro Fukumoto, Shigehiro Ano, ...
    Article type: Special Issue of Applications and the Internet in Conjunction with Main Topics of COMPSAC 2014
    Subject area: Distributed Systems Operation and Management
    2015 Volume 23 Issue 4 Pages 430-440
    Published: 2015
    Released on J-STAGE: July 15, 2015
    JOURNAL FREE ACCESS
    Congestion detection on mobile networks has become a main challenge for cellular carriers and mobile network providers because mobile network quality easily degrades when many users concentrate in a limited area. Especially when a large-scale event is held, heavy network congestion interferes with the communication of participants and local residents. Congestion detection has therefore been performed by several network providers, but it has been executed on high-performance computing resources in a centralized manner, which markedly increases the computing cost. Meanwhile, with the spread of large-scale distributed computing environments (e.g., cloud computing), Complex Event Processing (CEP) systems have recently become available for several purposes. CEP is a distributed computing approach that identifies meaningful events by analyzing large data streams (e.g., sensor data) in real time. Congestion detection is a suitable application for a CEP system: a large volume of traffic logs (i.e., data streams) must be analyzed rapidly to detect network congestion (i.e., meaningful events). Therefore, in this study, we propose a new architecture for a CEP-based congestion detection system using distributed computing resources. In the proposed system, processing components are deployed on multiple resources and execute independent tasks carefully extracted from the congestion detection procedure. Through experimental evaluation using computing resources on a popular cloud service (Amazon EC2), we show that the CEP-based system achieves real-time detection of congestion on mobile networks.
    Download PDF (1579K)
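The CEP pattern described, matching a "congestion" event against a stream of traffic logs, can be reduced to a minimal operator. This sketch is an assumption about the shape of the computation, not the paper's system: it keeps a sliding time window of log timestamps per cell and fires when the in-window count crosses a threshold.

```python
from collections import deque, defaultdict

class CongestionDetector:
    """Minimal CEP-style operator: one sliding time window of traffic-log
    events per cell; emits a congestion event when the rate exceeds the
    threshold. In the paper, such tasks run distributed across resources."""
    def __init__(self, window, threshold):
        self.window, self.threshold = window, threshold
        self.logs = defaultdict(deque)   # cell_id -> timestamps in window
    def on_log(self, cell, ts):
        q = self.logs[cell]
        q.append(ts)
        while q and q[0] <= ts - self.window:
            q.popleft()                  # evict events outside the window
        if len(q) > self.threshold:
            return ('congestion', cell, ts)
        return None
```

Because each cell's window is independent, the operator shards naturally across machines, which is the property the distributed deployment in the paper exploits.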
  • Yong Jin, Nariyoshi Yamai, Kiyohiko Okayama, Motonori Nakamura
    Article type: Special Issue of Applications and the Internet in Conjunction with Main Topics of COMPSAC 2014
    Subject area: Distributed Systems Operation and Management
    2015 Volume 23 Issue 4 Pages 441-448
    Published: 2015
    Released on J-STAGE: July 15, 2015
    JOURNAL FREE ACCESS
    The Internet is widely deployed today as an infrastructure for various ICT (Information and Communication Technology) services. Typical services such as e-mail, SNS (Social Networking Services) and the WWW rely considerably on the Internet for reliability and effectiveness. In this paper, we focus on IPv6 site multihoming and its collaboration with a route selection mechanism, which has been reported as one way to accomplish these goals. Even though a host can easily obtain multiple IP addresses in an IPv6 multihomed site, it has to select a proper site-exit router when sending a packet in order to avoid ingress filtering. In particular, when an inside host initiates an outbound connection, it can barely select a proper site-exit router based only on its source IP address. To solve this problem, we propose an optimal route selection method for IPv6 multihomed sites. With this method, middleware deployed on each inside host connects to the destination host through multiple site-exit routers simultaneously during the initialization phase, and then uses the first established connection for data communication. We also embedded a Network Address Translation (NAT) feature into the middleware to avoid ingress filtering. By analyzing the results of experiments on a prototype system, we confirmed that the proposed method works as expected and that the collaboration of site multihoming and proper route selection can be one possible solution for IPv6 site multihoming in a real network environment.
    Download PDF (909K)
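The "connect through all site-exit routers, keep the first to establish" step resembles a happy-eyeballs race and can be sketched generically. The connection function below is a stand-in (a real implementation would bind the source address matching each router and open a TCP connection); only the first-completed-wins selection logic mirrors the abstract.

```python
import concurrent.futures
import time

def connect_via(router, delay):
    """Stand-in for a connection attempt through one site-exit router;
    the delay simulates connection establishment time (an assumption)."""
    time.sleep(delay)
    return router

def first_established(candidates):
    """Race connection attempts through every candidate site-exit router in
    parallel and keep whichever establishes first, as in the paper's
    initialization phase."""
    with concurrent.futures.ThreadPoolExecutor(len(candidates)) as pool:
        futures = [pool.submit(connect_via, r, d) for r, d in candidates]
        done, _ = concurrent.futures.wait(
            futures, return_when=concurrent.futures.FIRST_COMPLETED)
        return next(iter(done)).result()
```

In the real method the losing connections would be torn down and the NAT feature would rewrite addresses so the chosen exit router's ingress filter accepts the traffic.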
  • Daisuke Okamoto, Keita Kawano, Nariyoshi Yamai, Tokumi Yokohira
    Article type: Special Issue of Applications and the Internet in Conjunction with Main Topics of COMPSAC 2014
    Subject area: Distributed Systems Operation and Management
    2015 Volume 23 Issue 4 Pages 449-457
    Published: 2015
    Released on J-STAGE: July 15, 2015
    JOURNAL FREE ACCESS
    We previously developed a system (the traditional system) to flexibly provide requested application environments on educational Windows PCs. The traditional system dynamically controls the execution of applications installed on each educational PC according to rules defined by teachers as well as by administrators. The traditional system, however, has a low tolerance for malicious attacks: if the execution file of an application is falsified, the corresponding rules already applied become invalid. In addition, though the traditional system has a function to define groups of controlled applications, it does not support hierarchical groups, which reduces its usability. To address these issues, this paper proposes a method of controlling application execution using digital certificates. The proposed method has a high tolerance for falsification of execution files by controlling their execution based on the reliability of the corresponding digital certificates. It also improves usability by introducing hierarchical group management that utilizes the hierarchical structure of digital certificates.
    Download PDF (745K)
  • Ikuo Nakagawa, Masahiro Hiji, Hiroshi Esaki
    Article type: Special Issue of Applications and the Internet in Conjunction with Main Topics of COMPSAC 2014
    Subject area: Distributed Systems Operation and Management
    2015 Volume 23 Issue 4 Pages 458-464
    Published: 2015
    Released on J-STAGE: July 15, 2015
    JOURNAL FREE ACCESS
    We propose “Dripcast,” a new server-less Java programming framework for billions of IoT (Internet of Things) devices. The framework makes it easy to develop device applications that work with a cloud, that is, with scalable computing resources on the Internet. It consists of two key technologies: (1) transparent remote procedure calls, and (2) a mechanism to read, write and process Java objects on a scale-out style distributed datastore. A great benefit of the framework is that there is no need to write a server-side program or database code: a very simple client-side program is enough to read, write or process Java objects on a cloud. The mechanism is highly scalable since it builds on scale-out technologies. In this paper, we describe the concept and architecture of the Dripcast framework. We also implement the framework and evaluate it from two points of view: 1) the scalability of cloud resources, and 2) the method-call encapsulation overhead on client IoT devices.
    Download PDF (1312K)
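The essence of a transparent remote procedure call is that method calls on a local proxy are forwarded to an object living elsewhere. Dripcast does this for Java objects backed by a distributed datastore; the sketch below shows only the proxy idea, in Python, with an in-memory "transport" standing in for the framework's dispatch (both the transport and the object names are assumptions for illustration).

```python
class RemoteProxy:
    """Transparent-RPC sketch: any attribute access becomes a forwarded
    (object_id, method, args) call, so client code uses the object as if
    it were local -- no server-side program is written by hand."""
    def __init__(self, object_id, transport):
        self._id, self._send = object_id, transport
    def __getattr__(self, method):
        def call(*args):
            return self._send(self._id, method, args)
        return call

# Stand-in transport dispatching to local objects; the real framework
# routes the triple to Java objects in the distributed datastore.
store = {'counter1': []}

def local_transport(oid, method, args):
    return getattr(store[oid], method)(*args)

proxy = RemoteProxy('counter1', local_transport)
proxy.append(42)   # looks like a local call, but goes via the transport
```

Because every call is encapsulated this way, the framework can measure the per-call overhead on the client, which is one of the two evaluations in the paper.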
  • Doudou Fall, Takeshi Okuda, Youki Kadobayashi, Suguru Yamaguchi
    Article type: Special Issue of Applications and the Internet in Conjunction with Main Topics of COMPSAC 2014
    Subject area: Contingency Management/Risk Management
    2015 Volume 23 Issue 4 Pages 465-475
    Published: 2015
    Released on J-STAGE: July 15, 2015
    JOURNAL FREE ACCESS
    Cloud computing has revolutionized information technology in that it allows enterprises and users to lower computing expenses by outsourcing their needs to a cloud service provider. However, despite all the benefits it brings, cloud computing raises several security concerns that have not yet been addressed satisfactorily. Indeed, by outsourcing its operations, a client surrenders control to the service provider and needs assurance that data are dealt with appropriately. Furthermore, the most inherent security issue of cloud computing is multi-tenancy: the cloud is a shared platform where users' data are hosted on the same physical infrastructure, and a malicious user can exploit this fact to steal the data of the users with whom he or she shares the platform. To address these security issues, we propose a security risk quantification method that allows users and cloud administrators to measure the security level of a given cloud ecosystem. Our risk quantification method is an adaptation of fault tree analysis, a modeling tool that has proven highly effective for mission-critical systems. We replace faults with the probable vulnerabilities of a cloud system and, with the help of the Common Vulnerability Scoring System, generate a risk formula. In addition, we quantify the security risks of a popular cloud management stack, and propose an architecture in which users can evaluate and rank different cloud service providers.
    Download PDF (1359K)
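Fault tree analysis combines leaf-event probabilities through AND/OR gates. A minimal sketch follows; note that the mapping from a CVSS score to a probability is an illustrative assumption here, not the paper's derived formula.

```python
def p_or(ps):
    """OR gate: the parent event occurs if any child occurs
    (children assumed independent)."""
    prod = 1.0
    for p in ps:
        prod *= (1.0 - p)
    return 1.0 - prod

def p_and(ps):
    """AND gate: the parent event occurs only if all children occur."""
    prod = 1.0
    for p in ps:
        prod *= p
    return prod

def cvss_to_prob(score):
    """Illustrative assumption: normalize a CVSS base score (0-10) into a
    pseudo-probability. The paper derives its own risk formula from CVSS."""
    return score / 10.0
```

With vulnerabilities as leaves, the top-event value is the quantified risk of, e.g., a whole cloud management stack, letting different providers be ranked on one number.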
  • Yuichi Sei, Akihiko Ohsuga
    Article type: Special Issue of Applications and the Internet in Conjunction with Main Topics of COMPSAC 2014
    Subject area: System Security
    2015 Volume 23 Issue 4 Pages 476-487
    Published: 2015
    Released on J-STAGE: July 15, 2015
    JOURNAL FREE ACCESS
    A compromised node in a wireless sensor network can be used to create false messages, either by generating them on its own or by falsifying legitimate messages received from other nodes. Because compromised nodes that create false messages can waste a considerable amount of network resources, they should be detected as early as possible. Existing studies on detecting such nodes can only be used in situations where sensor nodes do not move; in real situations, however, nodes may move because of wind or other factors. We improve on existing work to detect compromised nodes in mobile wireless sensor networks. In the proposed method, an agent on each node appends the node's ID and a k-bit code to each event message, and the sink detects compromised nodes with a statistical method. Our method can be used in both static and dynamic environments. Our simulations demonstrate the effectiveness of the method.
    Download PDF (2552K)
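The sink-side statistics can be illustrated with a deliberately simplified sketch. The real method uses per-node k-bit codes; here we assume only that each message carries the IDs of the nodes that produced or forwarded it, and that the sink eventually learns which messages were false. Nodes that appear in false messages far more often than chance are flagged.

```python
from collections import defaultdict

def flag_compromised(messages, threshold=0.5):
    """Flag nodes whose share of false messages exceeds a threshold.
    messages: iterable of (node_id_list, is_false). The threshold and the
    ID-list message format are illustrative assumptions, not the paper's
    k-bit-code scheme."""
    seen = defaultdict(lambda: [0, 0])   # node -> [false_count, total_count]
    for ids, is_false in messages:
        for n in ids:
            seen[n][1] += 1
            if is_false:
                seen[n][0] += 1
    return {n for n, (f, t) in seen.items() if t and f / t > threshold}
```

Honest nodes appear in false messages only occasionally (e.g., as unwitting forwarders), so their ratio stays low, while a node that fabricates messages stands out statistically.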
  • Weihua Sun, Naoki Shibata, Masahiro Kenmotsu, Keiichi Yasumoto, Minoru ...
    Article type: Recommended Paper
    Subject area: ITS
    2015 Volume 23 Issue 4 Pages 488-496
    Published: 2015
    Released on J-STAGE: July 15, 2015
    JOURNAL FREE ACCESS
    In this paper, we focus on multilevel parking facilities and propose a navigation system that minimizes the time required for cars to find vacant parking spaces. Parking zones at large facilities offer different conditions to drivers owing to differences in distance from the entrance of the parking facility or to the entrances of the shopping areas. This leads many cars to concentrate in some parking zones while other zones remain unoccupied. It is not easy for drivers entering a large parking facility to know which zones are vacant. It is fairly common for parking facilities to have indicators that show occupancy information to drivers. However, since these indicators deliver the same information to all drivers, they tend to create a new congested zone by sending many drivers to the same zone. In this paper, we propose a system that provides each driver with a recommended route through the parking facility that minimizes the expected parking time. Our method estimates the occupancy of each zone from information sensed by the cars that implement the proposed method. This information is collected by a server installed in the facility, which disseminates the processed information to the cars; the cars then calculate the recommended route from this information. We conducted a simulation-based evaluation of the proposed method using a realistic model of a real parking facility in Nara. The results confirmed that the proposed method reduces parking waiting time by 20%-70% even with a low penetration ratio.
    Download PDF (2786K)
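The per-driver recommendation step, choosing the zone with minimum expected parking time, can be sketched with a toy cost model. Both the zone tuples and the search-time model (inversely proportional to the vacancy ratio) are illustrative assumptions, not the paper's estimator.

```python
def recommend_zone(zones):
    """Pick the zone minimizing expected time to park: drive time to the
    zone plus expected search time. zones: (name, drive_time, occupied,
    capacity). The 1/vacancy search model is an assumption for illustration."""
    def expected(zone):
        name, drive, occupied, capacity = zone
        vacancy = (capacity - occupied) / capacity
        if vacancy <= 0:
            return float('inf')          # full zone: never recommend
        return drive + 1.0 / vacancy     # scarcer spaces -> longer search
    return min(zones, key=expected)[0]
```

Note how a nearby but nearly full zone loses to a farther, emptier one; because each car receives a recommendation computed from current occupancy, the system avoids the herding effect of a shared indicator board.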
  • Yuya Kaneda, Yan Pei, Qiangfu Zhao, Yong Liu
    Article type: Recommended Paper
    Subject area: Knowledge Processing
    2015 Volume 23 Issue 4 Pages 497-504
    Published: 2015
    Released on J-STAGE: July 15, 2015
    JOURNAL FREE ACCESS
    Outlier detection is one method for improving the performance of machine learning models. Since outliers often negatively affect the performance of learning models, it is desirable to detect and remove them before model construction. In this paper, we try to improve the performance of the decision boundary making (DBM) algorithm via outlier detection. We previously proposed DBM for inducing compact, high-performance learning models suitable for implementation on portable computing devices. The basic idea of DBM is to generate data that fit the decision boundary (DB) of a high-performance model, and then induce a compact model from the generated data. In our study, a support vector machine (SVM) is used as the high-performance model, and a single-hidden-layer multilayer perceptron (MLP) as the compact model. Experimental results obtained so far show that DBM performs well in many cases, but its performance is still not good enough for some applications. In this paper, we use the SVM not only for obtaining the DB but also for detecting outliers, so that a better MLP can be induced from cleaner data. We use a threshold δoutlier to control the number of outliers to remove. Experimental results show that, if δoutlier is selected properly, DBM incorporating outlier detection outperforms the original DBM, and is better than or comparable to the SVM for all databases used in the experiments.
    Download PDF (614K)
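The cleaning step, dropping training points that disagree too strongly with the reference model's decision boundary, can be sketched generically. In the paper the reference model is an SVM and the threshold is δoutlier; here any signed-score callable stands in for the SVM's decision function, and the threshold semantics are an illustrative assumption.

```python
def remove_outliers(data, decision_value, delta):
    """DBM-style cleaning sketch: drop points whose label contradicts the
    reference model's signed decision value by more than delta.
    data: iterable of (x, label) with label in {-1, +1};
    decision_value: callable returning a signed score (SVM-like)."""
    kept = []
    for x, label in data:
        if label * decision_value(x) >= -delta:
            kept.append((x, label))   # consistent enough with the model
    return kept
```

A larger delta keeps more borderline points; choosing it properly is exactly the δoutlier selection the paper studies, since over-aggressive removal discards informative boundary data.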
  • Kimio Kuramitsu
    Article type: Regular Papers
    Subject area: Special Section on Programming
    2015 Volume 23 Issue 4 Pages 505-512
    Published: 2015
    Released on J-STAGE: July 15, 2015
    JOURNAL FREE ACCESS
    Packrat parsing is a linear-time implementation method for recursive descent parsers. The trick is a memoization mechanism, in which all parsing results are memoized to avoid redundant parsing when backtracking. A problem that arises is the extremely large heap consumption of memoization, to the point that its cost is likely to outweigh its benefits. In many cases, developers must make the difficult choice to abandon packrat parsing despite the possibility of exponential-time parsing. Elastic packrat parsing was developed to avoid this choice. Heap consumption is upper-bounded, since memoized results are stored in a sliding-window buffer. In addition, the buffer capacity is adjusted by tracing the backtracking activity of each nonterminal at runtime. Elastic packrat parsing is implemented as part of our Nez parser. We demonstrate that elastic packrat parsing achieves stable and robust performance on a variety of inputs with different backtracking activities.
    Download PDF (1047K)
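The bounded-memoization idea can be sketched with an LRU-evicting memo table. This is a simplified illustration, not Nez's implementation: Nez uses a sliding-window buffer whose capacity adapts per nonterminal, whereas this sketch uses a single fixed capacity with least-recently-used eviction.

```python
from collections import OrderedDict

class BoundedMemo:
    """Bounded memo table for packrat parsing: entries are keyed by
    (nonterminal, input position) and evicted in LRU order when the buffer
    fills, so heap use stays O(capacity) instead of
    O(input length x nonterminals)."""
    def __init__(self, capacity):
        self.capacity, self.table = capacity, OrderedDict()
    def get(self, nonterminal, pos):
        key = (nonterminal, pos)
        if key in self.table:
            self.table.move_to_end(key)     # refresh LRU position
            return self.table[key]
        return None                          # not cached: must re-parse
    def put(self, nonterminal, pos, result):
        self.table[(nonterminal, pos)] = result
        if len(self.table) > self.capacity:
            self.table.popitem(last=False)   # evict the oldest entry
```

An evicted entry costs only a re-parse, never a wrong answer, so the parser trades bounded extra work for a hard heap ceiling; backtracking mostly revisits recent positions, which is why a windowed buffer captures most of memoization's benefit.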
  • Yusuke Takamatsu, Kenji Kono
    Article type: Regular Papers
    Subject area: Special Section on Advanced Computing Systems
    2015 Volume 23 Issue 4 Pages 513-524
    Published: 2015
    Released on J-STAGE: July 15, 2015
    JOURNAL FREE ACCESS
    Clickjacking is an attack that exploits a vulnerability in web applications: it tricks victims into clicking on something different from what they perceive they are clicking on. The victims may reveal confidential information or start unintended online transactions. Clickjacking attacks compromise visual integrity (visual clickjacking) or condition integrity (switchover clickjacking) to deceive victims. We address visual clickjacking in this paper. Visual clickjacking can be prevented if appropriate countermeasures, such as frame busting, are implemented in web applications. However, implementing them correctly is not easy: a trivial mistake leads to evasion of the countermeasures, and correct implementation requires intimate knowledge of evasion techniques. In this paper, we propose Clickjuggler, an automated tool for checking defenses against visual clickjacking during development. Clickjuggler generates several types of visual clickjacking attacks, performs them against web applications, and checks whether the attacks succeed. By automating the process of checking for these vulnerabilities, web developers are released from the burden of verifying the correctness of their implementations, and no special knowledge of visual clickjacking variants or evasion techniques is needed to use Clickjuggler. Our experimental results demonstrate that Clickjuggler can detect visual clickjacking vulnerabilities in 4 real-world web applications, and in a shorter time than existing tools.
    Download PDF (1348K)
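For context, a passive check related to Clickjuggler's goal can be sketched from standard anti-framing headers. Note the difference in approach: Clickjuggler actively mounts attacks and observes whether they succeed, while this sketch only inspects response headers (a page without them can be framed and thus overlaid by a visual clickjacking attack). The header semantics below are simplified.

```python
def framing_allowed(headers):
    """Return True if nothing in the response headers forbids framing.
    Simplification: SAMEORIGIN is treated as 'protected', and the CSP
    frame-ancestors value is not parsed in detail."""
    h = {k.lower(): v.lower() for k, v in headers.items()}
    if h.get('x-frame-options', '').strip() in ('deny', 'sameorigin'):
        return False
    if 'frame-ancestors' in h.get('content-security-policy', ''):
        return False
    return True
```

A header check alone cannot catch a broken JavaScript frame-busting script, which is precisely why an active tool that actually performs the attack, as Clickjuggler does, finds vulnerabilities this check misses.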
  • Masami Hagiya
    Article type: Invited Papers
    Subject area: Special Section on Computers and Education
    2015 Volume 23 Issue 4 Pages 525-530
    Published: 2015
    Released on J-STAGE: July 15, 2015
    JOURNAL FREE ACCESS
    The Science Council of Japan's Committee on Informatics is currently creating a reference standard in informatics. This activity includes defining informatics for university education and for the future academic development of informatics. The most characteristic feature of the chosen definition of informatics is the desire to cover all branches of informatics across bun-kei (social sciences and humanities) and ri-kei (natural science and engineering), with the intention of unifying the field. In the present paper, the background of the activity, and the motivation and implications of the definition of informatics are presented. In particular, we discuss the importance of covering bun-kei and ri-kei for the future development of informatics and the implications of the definition on liberal arts education in universities and primary and secondary education in elementary, middle and high schools.
    Download PDF (575K)