Special Section on Information and Communication System Security - Against Cyberattacks -
-
Naoto SONE
2015 Volume E98.D Issue 4 Pages 749
Published: 2015
Released on J-STAGE: April 01, 2015
-
Hung-Yu CHIEN, Tzong-Chen WU, Chien-Lung HSU
Article type: INVITED PAPER
Subject area: Authentication
2015 Volume E98.D Issue 4 Pages 750-759
Published: 2015
Released on J-STAGE: April 01, 2015
Secure authentication of low-cost Radio Frequency Identification (RFID) tags with limited resources is a big challenge, especially when anonymity, un-traceability, and forward secrecy must be achieved simultaneously. The popularity of the Internet of Things (IoT) further amplifies this challenge, as these mobile tags must be authenticated in partial-distributed-server environments. In this paper, we propose an RFID authentication scheme for partial-distributed-server environments. The proposed scheme offers excellent performance in terms of computational complexity and scalability, as well as strong security properties.
-
Akihiro SATOH, Yutaka NAKAMURA, Takeshi IKENAGA
Article type: PAPER
Subject area: Authentication
2015 Volume E98.D Issue 4 Pages 760-768
Published: 2015
Released on J-STAGE: April 01, 2015
A dictionary attack against SSH is a common security threat. Many methods rely on network traffic to detect SSH dictionary attacks because the connections of remote login, file transfer, and TCP/IP forwarding are visibly distinct from those of attacks. However, these methods incorrectly judge the connections of automated operation tasks to be attacks because the two are mutually similar. In this paper, we propose a new approach that identifies the user authentication method on each SSH connection and removes connections that employ non-keystroke-based authentication. This approach is based on two observations: (1) an SSH dictionary attack targets a host that provides keystroke-based authentication; and (2) automated tasks over SSH need to be supported by non-keystroke-based authentication. Keystroke-based authentication relies on a character string input by a human; in contrast, non-keystroke-based authentication relies on information other than a character string. We evaluated the effectiveness of our approach through experiments on real network traffic at the edges of four campus networks, and the experimental results showed that our approach provides high identification accuracy with only a few errors.
-
Yuling LIU, Xinxin QU, Guojiang XIN, Peng LIU
Article type: PAPER
Subject area: Data Hiding
2015 Volume E98.D Issue 4 Pages 769-774
Published: 2015
Released on J-STAGE: April 01, 2015
A novel ROI-based reversible data hiding scheme is proposed for medical images; it can hide an electronic patient record (EPR) and protect the region of interest (ROI) with tamper localization and recovery. The proposed scheme combines prediction error expansion with a sorting technique to embed the EPR into the ROI, while the recovery information is embedded into the region of non-interest (RONI) using a histogram shifting (HS) method, which rarely causes overflow or underflow problems. The experimental results show that the proposed scheme not only embeds a large amount of information with low distortion, but also localizes and recovers tampered areas inside the ROI.
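The RONI embedding step relies on histogram shifting. The following is a minimal sketch of that idea for a flat grayscale signal, assuming the payload length is known to the extractor and ignoring the overflow bookkeeping a real scheme needs; the function names are ours, not the paper's:

```python
import numpy as np

def hs_embed(pixels, bits, peak=None):
    """Embed bits into a flat uint8 array by histogram shifting: pixels
    equal to the peak value carry one bit each, and pixels above the
    peak are shifted up by 1 to open an empty bin at peak+1."""
    px = pixels.astype(np.int32)
    if peak is None:
        peak = np.bincount(px, minlength=256).argmax()  # most frequent value
    assert px.max() < 255, "sketch ignores overflow handling"
    out, it = px.copy(), iter(bits)
    out[px > peak] += 1                      # shift to make room
    for i in np.flatnonzero(px == peak):     # carriers, in scan order
        b = next(it, None)
        if b is None:
            break
        out[i] = peak + b                    # bit 0 -> peak, bit 1 -> peak+1
    return out.astype(np.uint8), peak

def hs_extract(marked, peak):
    """Recover bits and the original pixels (inverse of hs_embed);
    assumes the number of embedded bits equals the number of carriers."""
    mk = marked.astype(np.int32)
    bits = [int(v == peak + 1) for v in mk if v in (peak, peak + 1)]
    restored = mk.copy()
    restored[mk == peak + 1] = peak          # undo bit embedding
    restored[mk > peak + 1] -= 1             # undo the shift
    return bits, restored.astype(np.uint8)

img = np.array([3, 5, 5, 7, 5, 2], dtype=np.uint8)
marked, peak = hs_embed(img, [1, 0, 1])
bits, restored = hs_extract(marked, peak)
print(bits, np.array_equal(restored, img))   # [1, 0, 1] True
```

Reversibility is exactly what the recovery requirement needs: extraction returns the RONI to its original state bit for bit.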
-
Mitsuaki AKIYAMA, Takeshi YAGI, Youki KADOBAYASHI, Takeo HARIU, Suguru ...
Article type: PAPER
Subject area: Attack Monitoring & Detection
2015 Volume E98.D Issue 4 Pages 775-787
Published: 2015
Released on J-STAGE: April 01, 2015
We investigated client honeypots for detecting and circumstantially analyzing drive-by download attacks. A client honeypot requires both improved inspection performance and in-depth analysis for inspecting and discovering malicious websites. However, the OS overhead of recent client honeypot operation cannot be ignored when improving honeypot multiplication performance. We propose a client honeypot system that combines the multi-OS and multi-process honeypot approaches, and we implemented this system to evaluate its performance. The process sandbox mechanism, a security measure for our multi-process approach, provides a virtually isolated environment for each web browser. It prevents system alteration by a compromised browser process through I/O redirection of file/registry access. To solve the inconsistent file/registry view caused by I/O redirection, our process sandbox mechanism enables the web browser and corresponding plug-ins to share a virtual system view. It thus allows multiple processes to run simultaneously on a single OS without interfering with one another. In a field trial, we confirmed that our multi-process approach was three or more times faster than a single process, and that our multi-OS approach improved system performance linearly with the number of honeypot instances. In addition, our long-term investigation indicated that 72.3% of exploitations target browser-helper processes. If a honeypot restricts all process creation events, it cannot identify an exploitation targeting a browser-helper process. In contrast, our process sandbox mechanism permits the creation of browser-helper processes, so it can identify these types of exploitations without false negatives. Thus, our proposed system with these multiplication approaches improves performance efficiency and enables in-depth analysis on high-interaction systems.
-
Masashi ETO, Tomohide TANAKA, Koei SUZUKI, Mio SUZUKI, Daisuke INOUE, ...
Article type: PAPER
Subject area: Attack Monitoring & Detection
2015 Volume E98.D Issue 4 Pages 788-795
Published: 2015
Released on J-STAGE: April 01, 2015
A number of network monitoring sensors, such as honeypots and web crawlers, have been launched to observe increasingly sophisticated cyber attacks, and several large-scale network monitoring projects have been built on these technologies to fight cyber threats on the Internet. Meanwhile, these projects face problems such as the difficulty of monitoring a wide range of darknet addresses, the burden of honeypot operation, and the blacklisting of honeypot addresses. In order to address these problems, this paper proposes a novel proactive cyber attack monitoring platform called the GHOST sensor, which enables effective utilization of physical and logical resources, such as sensor hardware and monitored IP addresses, and improves the efficiency of attack information collection. The GHOST sensor dynamically allocates targeted IP addresses to appropriate sensors so that the sensors can flexibly monitor attacks according to the profile of each attacker. Through an evaluation in an experimental environment, this paper demonstrates the efficiency of attack observation and resource utilization.
-
Da XIAO, Lvyin YANG, Chuanyi LIU, Bin SUN, Shihui ZHENG
Article type: PAPER
Subject area: Cloud Security
2015 Volume E98.D Issue 4 Pages 796-806
Published: 2015
Released on J-STAGE: April 01, 2015
Provable Data Possession (PDP) schemes enable users to efficiently check the integrity of their data in the cloud. Support for massive and dynamic sets of data and adaptability to third-party auditing are two key factors that affect the practicality of existing PDP schemes. We propose a secure and efficient PDP system called IDPA-MF-PDP, which exploits the characteristics of real-world cloud storage environments. The cost of auditing massive and dynamic sets of data is dramatically reduced by a multiple-file PDP scheme (MF-PDP) based on the data update patterns of cloud storage. The deployment and operational costs of third-party auditing, as well as information leakage risks, are reduced by an auditing framework based on integrated data possession auditors (DPAs), instantiated by trusted hardware and tamper-evident audit logs. The interaction protocols between the user, the cloud server, and the DPA integrate MF-PDP with the auditing framework. Analytical and experimental results demonstrate that IDPA-MF-PDP provides the same level of security as the original PDP scheme while reducing the computation and communication overhead on the DPA from linear in the size of the data to near constant. The performance of the system is bounded by disk I/O capacity.
-
Jing YU, Toshihiro YAMAUCHI
Article type: LETTER
Subject area: Access Control
2015 Volume E98.D Issue 4 Pages 807-811
Published: 2015
Released on J-STAGE: April 01, 2015
Android applications that use WebView can load and display web pages. Interaction with a web page allows JavaScript code within the page to access resources on the Android device through a Java object registered into WebView. If this WebView feature were exploited by an attacker, JavaScript code could be used to launch attacks such as stealing or tampering with personal information on the device. To address these threats, we propose access control on the security-sensitive APIs at the Java object level. The proposed access control uses static analysis to identify the security-sensitive APIs, detects threats at runtime, and notifies the user if a threat is detected, thereby preventing attacks from web pages.
-
Dafei HUANG, Changqing XUN, Nan WU, Mei WEN, Chunyuan ZHANG, Xing CAI, ...
Article type: PAPER
Subject area: Fundamentals of Information Systems
2015 Volume E98.D Issue 4 Pages 812-823
Published: 2015
Released on J-STAGE: April 01, 2015
Aiming to ease parallel programming for heterogeneous architectures, we propose and implement a high-level OpenCL runtime that conceptually merges multiple heterogeneous hardware devices into one virtual heterogeneous compute device (VHCD). Workload distribution among the devices is automated, based on offline profiling together with new programming directives that define the device-independent data access range per work-group. Therefore, an OpenCL program originally written for a single compute device can, after the insertion of a small number of programming directives, run efficiently on a platform consisting of heterogeneous compute devices. Performance is ensured by the technique of virtual cache management, which minimizes the amount of host-device data transfer. Our new OpenCL runtime is evaluated on a diverse set of OpenCL benchmarks, demonstrating good performance on various configurations of a heterogeneous system.
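The profiling-driven distribution can be pictured as a proportional split of the NDRange across devices. A minimal sketch, assuming per-device throughput figures (work-groups per second) gathered in an offline profiling run; the names and the rounding policy are our own, not the runtime's:

```python
def partition_work_groups(total_groups, profiled_throughput):
    """Split an NDRange among devices in proportion to offline-profiled
    throughput. Sketch of the idea only; the paper's runtime also
    manages a virtual cache to limit host-device data transfer."""
    total_rate = sum(profiled_throughput.values())
    shares, assigned = {}, 0
    devices = sorted(profiled_throughput, key=profiled_throughput.get)
    for dev in devices[:-1]:
        n = int(total_groups * profiled_throughput[dev] / total_rate)
        shares[dev] = n
        assigned += n
    shares[devices[-1]] = total_groups - assigned  # remainder to fastest device
    return shares

print(partition_work_groups(1024, {"cpu": 20.0, "gpu": 140.0}))
# -> {'cpu': 128, 'gpu': 896}
```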
-
Asahi TAKAOKA, Satoshi TAYU, Shuichi UENO
Article type: PAPER
Subject area: Fundamentals of Information Systems
2015 Volume E98.D Issue 4 Pages 824-834
Published: 2015
Released on J-STAGE: April 01, 2015
Ordered Binary Decision Diagrams (OBDDs for short) are popular dynamic data structures for Boolean functions. In some modern applications, we have to handle graphs so huge that the usual explicit representations by adjacency lists or adjacency matrices are infeasible. To deal with such huge graphs, OBDD-based graph representations and algorithms have been investigated. Although the size of an OBDD representation may be large in general, it is known to be small for some special classes of graphs. In this paper, we show upper and lower bounds on the size of OBDDs representing several classes of intersection graphs, namely bipartite permutation graphs, biconvex graphs, convex graphs, (2-directional) orthogonal ray graphs, and permutation graphs.
-
Yonghwan KIM, Tadashi ARARAGI, Junya NAKAMURA, Toshimitsu MASUZAWA
Article type: PAPER
Subject area: Computer System
2015 Volume E98.D Issue 4 Pages 835-851
Published: 2015
Released on J-STAGE: April 01, 2015
Recently, Hadoop has attracted much attention from engineers and researchers as an emerging and effective framework for Big Data. HDFS (Hadoop Distributed File System) can manage a huge amount of data with high performance and reliability using only commodity hardware. However, HDFS requires a single master node, called a NameNode, to manage the entire namespace (or all the i-nodes) of a file system. This causes the SPOF (Single Point Of Failure) problem, because the file system becomes inaccessible when the NameNode fails. It also creates an efficiency bottleneck, since every access request to the file system has to contact the NameNode. Hadoop 2.0 resolves the SPOF problem by introducing manual failover based on two NameNodes, Active and Standby. However, it still has the efficiency bottleneck, since all access requests have to contact the Active NameNode in ordinary executions. It may also lose the advantage of using commodity hardware, since the two NameNodes have to share highly reliable, sophisticated storage. In this paper, we propose a new HDFS architecture to resolve all the problems mentioned above.
-
Ryosuke TSUCHIYA, Hironori WASHIZAKI, Yoshiaki FUKAZAWA, Tadahisa KATO ...
Article type: PAPER
Subject area: Software Engineering
2015 Volume E98.D Issue 4 Pages 852-862
Published: 2015
Released on J-STAGE: April 01, 2015
Traceability links between requirements and source code are helpful in software reuse and maintenance tasks. However, manually recovering links in a large group of products requires significant cost, and some links may be overlooked. Here, we propose a semi-automatic method to recover traceability links between requirements and source code in the same series of large software products. In order to bridge differences in representation between requirements and source code, we recover links by using the configuration management log as an intermediary. We refine the links by classifying requirements and code elements in terms of whether they are common to multiple products or specific to one. As a result of applying our method to real products comprising 60KLOC, we recovered valid traceability links within a reasonable amount of time: the automated part took 13 minutes 36 seconds and the manual part took about 3 hours, with a recall of 76.2% and a precision of 94.1%. Moreover, we recovered some links that were unknown to the engineers. Recovering traceability links in this way improves software reusability and maintainability.
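Using the log as an intermediary can be sketched in a few lines, assuming a hypothetical log format in which commit messages mention requirement IDs; the paper's refinement step (classifying elements as common or product-specific) is omitted here:

```python
import re
from collections import defaultdict

# Hypothetical log entries: (commit message, files touched by the commit).
log = [
    ("REQ-12: add export dialog", ["src/ui/ExportDialog.java"]),
    ("fix REQ-12 encoding; also touches REQ-31", ["src/io/Writer.java"]),
    ("refactor build scripts", ["build.xml"]),
]

def recover_links(log, req_pattern=r"REQ-\d+"):
    """Link each requirement ID mentioned in a commit message to the
    files changed by that commit; the log bridges the representation
    gap between requirements text and source code."""
    links = defaultdict(set)
    for message, files in log:
        for req in re.findall(req_pattern, message):
            links[req].update(files)
    return links

for req, files in sorted(recover_links(log).items()):
    print(req, "->", sorted(files))
```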
-
Jaehoon KIM
Article type: PAPER
Subject area: Data Engineering, Web Information Systems
2015 Volume E98.D Issue 4 Pages 863-871
Published: 2015
Released on J-STAGE: April 01, 2015
Resource Description Framework (RDF) access control suffers from an authorization conflict problem caused by RDF inference. When an access authorization is specified, it can conflict with other access authorizations that have the opposite security sign as a result of RDF inference. In our former study, we analyzed the authorization conflict problem caused by subsumption inference, which is the key inference in RDF. The Rule Interchange Format (RIF) is a Web standard rule language recommended by the W3C and can be combined with RDF data. Therefore, as with RDF inference, an authorization conflict can be caused by RIF inference. In addition, such a conflict can arise from the interaction of RIF inference and RDF inference rather than from RIF inference alone. In this paper, we analyze the authorization conflict problem caused by RIF inference and suggest an efficient authorization conflict detection algorithm. The algorithm exploits the graph labeling-based algorithm proposed in our earlier paper. Through experiments, we show that the performance of the graph labeling-based algorithm is outstanding for large RDF data.
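The graph labeling idea can be illustrated with DFS interval labels: each class receives a (pre, post) pair so that subsumption reduces to an O(1) containment test, letting a detector check quickly whether a grant and a deny collide through inference. A toy sketch under that reading (the paper's algorithm handles full RIF/RDF rule interaction, which is not modeled here):

```python
def label(tree, root):
    """Assign (pre, post) interval labels by DFS so that subsumption
    (ancestor-of-or-equal) tests become interval containment checks."""
    labels, counter = {}, [0]
    def dfs(node):
        pre = counter[0]; counter[0] += 1
        for child in tree.get(node, []):
            dfs(child)
        labels[node] = (pre, counter[0]); counter[0] += 1
    dfs(root)
    return labels

def subsumes(labels, a, b):
    """True if class a subsumes class b (a is an ancestor of or equal to b)."""
    (pa, qa), (pb, qb) = labels[a], labels[b]
    return pa <= pb and qb <= qa

# Toy class hierarchy and two authorizations with opposite signs.
tree = {"Agent": ["Person"], "Person": ["Student"]}
labels = label(tree, "Agent")
grant, deny = ("read", "Student"), ("read", "Person")
# Inference propagates the deny on Person down to Student -> conflict.
print(subsumes(labels, deny[1], grant[1]))  # True
```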
-
Asami HIGAI, Atsuko TAKEFUSA, Hidemoto NAKADA, Masato OGUCHI
Article type: PAPER
Subject area: Data Engineering, Web Information Systems
2015 Volume E98.D Issue 4 Pages 872-882
Published: 2015
Released on J-STAGE: April 01, 2015
Distributed file systems, which manage large amounts of data over multiple commodity machines, have attracted attention as management and processing systems for Big Data applications. A distributed file system consists of multiple data nodes and provides reliability and availability by holding multiple replicas of data. Due to system failure or maintenance, a data node may be removed from the system, and the data blocks held by the removed node are lost. If data blocks are missing, the access load on the other data nodes that hold the lost blocks increases, and as a result, the performance of data processing over the distributed file system decreases. Therefore, replica reconstruction, which reallocates the missing data blocks, is an important issue for preventing such performance degradation. The Hadoop Distributed File System (HDFS) is a widely used distributed file system. In the HDFS replica reconstruction process, source and destination data nodes for replication are selected randomly. We find that this replica reconstruction scheme is inefficient because data transfer is biased. Therefore, we propose two more effective replica reconstruction schemes that aim to balance the workloads of replication processes. Our proposed replication scheduling strategy assumes that nodes are arranged in a ring, and data blocks are transferred along this one-directional ring structure to minimize the differences in the amount of data transferred by each node. Based on this strategy, we propose two replica reconstruction schemes: an optimization scheme and a heuristic scheme. We have implemented the proposed schemes in HDFS and evaluated them on an actual HDFS cluster, and we also conducted experiments on a large-scale environment by simulation. From the experiments in the actual environment, we confirm that the replica reconstruction throughputs of the proposed schemes show a 45% improvement compared to the HDFS default scheme. We also verify that the heuristic scheme is effective, showing performance comparable to the optimization scheme. Furthermore, the experimental results in the large-scale simulation environment show that while the optimization scheme is unrealistic because finding the optimal solution takes a long time, the heuristic scheme is very efficient because it scales well, improving replica reconstruction throughput by up to 25% compared to the default scheme.
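One plausible reading of the ring-based balancing idea is sketched below: arrange the data nodes in a one-directional ring and, for each lost block, pick the least-loaded remaining holder as the source and the next clockwise non-holder as the destination. This is our own illustration of the strategy, not the paper's optimization or heuristic scheme as published:

```python
from itertools import count

def schedule_reconstruction(nodes, lost_blocks, holders):
    """Heuristic sketch: each lost block is sent from its least-loaded
    holder to the first node clockwise from that holder lacking a
    replica, evening out per-node transfer load."""
    pos = {n: i for i, n in enumerate(nodes)}       # ring positions
    send_load = {n: 0 for n in nodes}
    plan = []
    for blk in lost_blocks:
        src = min(holders[blk], key=lambda n: (send_load[n], pos[n]))
        for step in count(1):                       # walk clockwise from src
            dst = nodes[(pos[src] + step) % len(nodes)]
            if dst not in holders[blk]:
                break
        send_load[src] += 1
        holders[blk].add(dst)
        plan.append((blk, src, dst))
    return plan

nodes = ["n0", "n1", "n2", "n3"]
print(schedule_reconstruction(nodes, ["b1", "b2"],
                              {"b1": {"n0"}, "b2": {"n0", "n2"}}))
# [('b1', 'n0', 'n1'), ('b2', 'n2', 'n3')]
```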
-
Ryoichi ISAWA, Daisuke INOUE, Koji NAKAO
Article type: PAPER
Subject area: Information Network
2015 Volume E98.D Issue 4 Pages 883-893
Published: 2015
Released on J-STAGE: April 01, 2015
Many malware programs emerging on the Internet are compressed and/or encrypted by a wide variety of packers to deter code analysis, making it necessary to perform unpacking first. To do this task efficiently, Guo et al. proposed a generic unpacking system named Justin that provides original entry point (OEP) candidates. Justin executes a packed program and extracts written-and-executed points caused by the decryption of the original binary until it determines that the OEP has appeared, taking those points as candidates. However, for several types of packers, the system can produce comparatively large candidate sets or fail to capture the OEP. For more effective generic unpacking, this paper presents a novel OEP detection method featuring two mechanisms. One identifies the decrypting routine by tracking the relations between writing instructions and written areas, based on the fact that the decrypting routine is the generator of the original binary. In case our method fails to detect the OEP, the other mechanism sorts the candidates by likelihood so that analysts can reach the correct one quickly. Through experiments using a dataset of 753 samples packed by 25 packers, we confirm that our method can be more effective than Justin's heuristics in terms of detecting OEPs and reducing the number of candidates. We also propose a method that combines our method with one of Justin's heuristics.
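The written-then-executed bookkeeping at the heart of OEP detection can be mimicked on a toy trace: remember, for every written address, which instruction wrote it; when control transfers into a written region, that address becomes an OEP candidate and its writer approximates the decrypting routine. A sketch over hypothetical trace tuples (real systems instrument actual instruction-level execution):

```python
def find_oep_candidates(trace):
    """Replay a toy trace of (pc, op, target) tuples and collect
    written-and-executed points together with the instruction that
    wrote them; that writer is the presumed decrypting routine."""
    written_by = {}                 # memory address -> writer instruction
    candidates = []
    for pc, op, target in trace:
        if op == "write":
            written_by[target] = pc
        elif op == "exec" and target in written_by:
            candidates.append((target, written_by[target]))
    return candidates               # (executed addr, decryptor instruction)

trace = [
    (0x401000, "write", 0x402000),  # unpacking stub decrypts a byte
    (0x401008, "write", 0x402001),
    (0x401010, "exec",  0x402000),  # jump into decrypted code: OEP candidate
]
print(find_oep_candidates(trace))   # [(4202496, 4198400)], i.e. (0x402000, 0x401000)
```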
-
Wei-Chi KU, Yu-Chang YEH, Bo-Ren CHENG, Chia-Ju CHANG
Article type: PAPER
Subject area: Information Network
2015 Volume E98.D Issue 4 Pages 894-901
Published: 2015
Released on J-STAGE: April 01, 2015
Since most password schemes are vulnerable to login-recording attacks, graphical password schemes resistant to such attacks have been proposed. However, none of the existing login-recording resistant graphical password schemes provides both sufficient security and good usability. Herein, we design and implement a simple sector-based graphical password scheme, RiS, with dynamically adjustable resistance to login-recording attacks. RiS is a purely graphical password scheme that uses the shape of the sector. In RiS, the user can dynamically choose a login mode whose resistance to login-recording attacks suits the login environment. Hence, the user can complete the login process efficiently in an environment under low threat of login-recording attacks, and securely in an environment under high threat. Finally, we show that RiS achieves both sufficient security and good usability.
-
Tinghuai MA, Jinjuan ZHOU, Meili TANG, Yuan TIAN, Abdullah AL-DHELAAN, ...
Article type: PAPER
Subject area: Office Information Systems, e-Business Modeling
2015 Volume E98.D Issue 4 Pages 902-910
Published: 2015
Released on J-STAGE: April 01, 2015
Recommender systems, which provide users with recommendations of content suited to their needs, have received great attention in today's online business world. However, most recommendation approaches exploit only a single source of input data and suffer from the data sparsity and cold-start problems. To improve recommendation accuracy in this situation, additional sources of information, such as friend relationships and user-generated tags, should be incorporated into recommender systems. In this paper, we revise the user-based collaborative filtering (CF) technique and propose two recommendation approaches that fuse user-generated tags and social relations in a novel way. To evaluate the performance of our approaches, we compare experimental results with two baseline methods, user-based CF and user-based CF with weighted friendship similarity, on real datasets (Last.fm and MovieLens). Our experimental results show that our methods achieve higher accuracy. We also verify our methods in cold-start settings, where they achieve more precise recommendations than the compared approaches.
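The flavor of fusing tags and friendship into user-based CF can be sketched as a combined similarity. The weights, the cosine/Jaccard choices, and the fusion rule below are our assumptions for illustration; the paper's two approaches may combine the sources differently:

```python
import math

def cosine(a, b):
    """Cosine similarity between two sparse rating dicts."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    num = sum(a[i] * b[i] for i in common)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den

def jaccard(s, t):
    return len(s & t) / len(s | t) if s | t else 0.0

def fused_similarity(u, v, ratings, tags, friends, w=(0.5, 0.3, 0.2)):
    """Hypothetical fusion: weighted sum of rating cosine, tag-set
    Jaccard, and a friendship indicator."""
    sim_r = cosine(ratings.get(u, {}), ratings.get(v, {}))
    sim_t = jaccard(tags.get(u, set()), tags.get(v, set()))
    sim_f = 1.0 if v in friends.get(u, set()) else 0.0
    return w[0] * sim_r + w[1] * sim_t + w[2] * sim_f

ratings = {"u1": {"i1": 4, "i2": 5}, "u2": {"i1": 5, "i2": 4}}
tags = {"u1": {"rock", "indie"}, "u2": {"rock"}}
friends = {"u1": {"u2"}}
print(round(fused_similarity("u1", "u2", ratings, tags, friends), 3))  # 0.838
```

In a cold-start setting the rating term is near zero, so the tag and friendship terms carry the similarity, which is the intuition behind fusing the extra sources.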
-
Jihyun LEE, Sungwon KANG
Article type: PAPER
Subject area: Office Information Systems, e-Business Modeling
2015 Volume E98.D Issue 4 Pages 911-921
Published: 2015
Released on J-STAGE: April 01, 2015
The ultimate purpose of a business process is to promote business value. Thus, any process that fails to enhance or promote business value should be improved or adjusted so that the value can be achieved. An organization should therefore have the capability to confirm whether a business value is achieved; furthermore, in order to cope with changes in the business environment, it should be able to define the necessary measures on the basis of business values. This paper proposes techniques for measuring a business process based on business values, which can be used to monitor and control business activities with a focus on the attainment of business values. To show the feasibility of the techniques, we compare their monitoring and controlling capabilities with those of a company's current fulfillment process. The results show that the proposed techniques are effective in linking business values to relevant processes and in integrating each measurement result in accordance with the management level.
-
Tetsuya WATANABE, Toshimitsu YAMAGUCHI, Kazunori MINATANI
Article type: PAPER
Subject area: Rehabilitation Engineering and Assistive Technology
2015 Volume E98.D Issue 4 Pages 922-929
Published: 2015
Released on J-STAGE: April 01, 2015
A survey was conducted on the use of ICT by visually impaired people. Among the 304 respondents, 81 used smartphones and 44 used tablets. Blind people used feature phones at a higher rate, and smartphones and tablets at lower rates, than people with low vision. The most popular smartphone model was the iPhone, and the most popular tablet model was the iPad. While almost all blind users used the speech output accessibility feature and only a few used visual features, low vision users used both visual features, such as Zoom, Large text, and Invert colors, and speech output at high rates on both smartphones and tablets. The most popular text entry methods differed between smartphones and tablets. On smartphones, flick and numeric keypad input were popular among low vision users, while voice input was the most popular among blind users. On tablets, a software QWERTY keyboard was the most popular among both blind and low vision users. The advantages of smartphones were access to geographical information, quick Web browsing, voice input, and extensibility for both blind and low vision users; object recognition for blind users; and readability for low vision users. Tablets also work as a vision aid for people with low vision. The drawbacks of smartphones and tablets were difficulties with text entry and touch operation and inaccessible apps for both blind and low vision users, problems with speech output for blind users, and problems with readability for low vision users. Researchers and makers of operating systems (OSs) and apps should assume responsibility for solving these problems.
-
Kosei KURISU, Nobuo SUEMATSU, Kazunori IWATA, Akira HAYASHI
Article type: PAPER
Subject area: Image Recognition, Computer Vision
2015 Volume E98.D Issue 4 Pages 930-937
Published: 2015
Released on J-STAGE: April 01, 2015
In image segmentation, finite mixture modeling has been widely used. In its simplest form, the spatial correlation among neighboring pixels is not taken into account, and the segmentation results can be badly deteriorated by noise in images. We propose a spatially correlated mixture model in which the mixing proportions of finite mixture models are governed by a set of underlying functions defined on the image space. The spatial correlation among pixels is introduced by putting a Gaussian process prior on the underlying functions. We can set the spatial correlation rather directly and flexibly by choosing the covariance function of the Gaussian process prior. The effectiveness of our model is demonstrated by experiments with synthetic and real images.
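The prior can be sketched directly: sample one latent function per mixture component from a GP whose covariance ties nearby pixels together, then squash the samples into mixing proportions. A minimal numpy sketch of the prior only, assuming a squared-exponential covariance and a softmax link (the paper's inference procedure is its contribution and is omitted):

```python
import numpy as np

def rbf_kernel(coords, length_scale=3.0):
    """Squared-exponential covariance over pixel coordinates; nearby
    pixels get strongly correlated underlying function values."""
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * length_scale ** 2))

rng = np.random.default_rng(0)
h, w, K = 8, 8, 3                        # tiny image, 3 mixture components
coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"),
                  -1).reshape(-1, 2).astype(float)
cov = rbf_kernel(coords) + 1e-6 * np.eye(h * w)   # jitter for stability

# One latent GP sample per component; a softmax turns them into mixing
# proportions that vary smoothly across the image.
f = rng.multivariate_normal(np.zeros(h * w), cov, size=K)       # (K, h*w)
proportions = np.exp(f) / np.exp(f).sum(axis=0, keepdims=True)  # (K, h*w)
print(proportions[:, 0], proportions.sum(axis=0)[:3])           # columns sum to 1
```

Shrinking `length_scale` weakens the spatial coupling, which is the "direct and flexible" control over correlation the abstract refers to.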
-
Jidong ZHAO, Jingjing LI, Ke LU
Article type: PAPER
Subject area: Image Recognition, Computer Vision
2015 Volume E98.D Issue 4 Pages 938-947
Published: 2015
Released on J-STAGE: April 01, 2015
For robust visual tracking, the main challenges of a subspace representation model can be attributed to the difficulty of handling various appearances of the target object. Traditional subspace learning tracking algorithms neglect the discriminative correlation between different multi-view target samples and the effectiveness of sparse subspace learning. To learn a better subspace representation model, we design a discriminative graph that models both the labeled target samples with various appearances and the updated foreground and background samples, which are selected using an incremental updating scheme. The proposed discriminative graph structure not only explicitly captures multi-modal intraclass correlations within labeled samples but also balances the within-class local manifold against the global discriminative information from foreground and background samples. Based on the discriminative graph, we achieve a sparse embedding by using the L2,1-norm, which is incorporated to select relevant features and learn the transformation in a unified framework. In the tracking procedure, subspace learning is embedded into a Bayesian inference framework using compound motion estimation and a discriminative observation model, which makes localization significantly more effective and accurate. Experiments on several videos demonstrate that the proposed algorithm is robust to various appearances, especially in dynamically changing and cluttered situations, and performs better than alternatives reported in the recent literature.
-
Chen CHEN, Chunyan HOU, Peng NIE, Xiaojie YUAN
Article type: PAPER
Subject area: Natural Language Processing
2015 Volume E98.D Issue 4 Pages 948-954
Published: 2015
Released on J-STAGE: April 01, 2015
Recommendation systems have been widely used in e-commerce sites, social media, and so on. An important recommendation task is to predict the items on which a user will act given the user's historical data, which is called top-K recommendation. Recently, a huge number of emerging items have been divided into a variety of categories, and researchers have argued or suggested that top-K recommendation of item categories could be very beneficial for users in making better and faster decisions. However, traditional methods encounter some common but crucial problems in this scenario because additional information, such as time, is ignored. Ranking algorithms on graphs and the increasingly growing amount of online user behavior shed some light on these problems. We propose a construction method for time-aware graphs so that a ranking algorithm can be used for personalized recommendation of item categories. Experimental results on real-world datasets demonstrate the advantages of our proposed method over competitive baseline algorithms.
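A time-aware graph for category recommendation can be sketched as a user-category bipartite graph whose edge weights decay with the age of each action, ranked by a random walk with restart. The exponential decay rule and the use of personalized PageRank are our assumptions, standing in for the paper's construction and ranking algorithm:

```python
import math, time
from collections import defaultdict

def build_time_aware_graph(actions, now, half_life=30 * 86400):
    """Bipartite user<->category graph; each action's weight halves
    every `half_life` seconds, so recent behavior dominates.
    (Users and categories share one namespace here for brevity.)"""
    w = defaultdict(float)
    for user, category, ts in actions:
        w[(user, category)] += math.pow(0.5, (now - ts) / half_life)
    g = defaultdict(dict)
    for (u, c), weight in w.items():
        g[u][c] = weight
        g[c][u] = weight                     # undirected for the random walk
    return g

def personalized_pagerank(g, source, alpha=0.15, iters=50):
    """Power iteration of a random walk with restart at `source`."""
    rank = {n: 1.0 if n == source else 0.0 for n in g}
    for _ in range(iters):
        nxt = {n: (alpha if n == source else 0.0) for n in g}
        for n, nbrs in g.items():
            total = sum(nbrs.values())
            for m, weight in nbrs.items():
                nxt[m] += (1 - alpha) * rank[n] * weight / total
        rank = nxt
    return rank

now = time.time()
actions = [("u1", "laptops", now - 86400), ("u1", "books", now - 90 * 86400),
           ("u2", "laptops", now - 3600)]
rank = personalized_pagerank(build_time_aware_graph(actions, now), "u1")
top_k = sorted(((c, round(s, 3)) for c, s in rank.items()
                if c in ("laptops", "books")), key=lambda x: -x[1])
print(top_k)   # recent category "laptops" outranks the stale "books"
```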
-
Ang LI, Xiaoguang MAO, Yan LEI, Tao JI
Article type: LETTER
Subject area: Software Engineering
2015 Volume E98.D Issue 4 Pages 955-959
Published: 2015
Released on J-STAGE: April 01, 2015
Fault localization is essential for conducting effective program repair. However, preliminary studies have shown that existing fault localization approaches do not take the requirements of automatic repair into account and therefore restrict repair performance. To address this issue, this paper presents the first study on designing fault localization approaches for automatic program repair: we propose a fault localization approach that uses failure-related contexts to improve automatic program repair. The proposed approach first utilizes a program slicing technique to construct a failure-related context, then evaluates the suspiciousness of each element in this context, and finally transfers the result of the evaluation to automatic program repair techniques for performing repair on faulty programs. The experimental results demonstrate that the proposed approach effectively improves automatic repair performance.
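The context-then-suspiciousness pipeline can be sketched as scoring only the statements inside the slice. Ochiai is used below purely for concreteness; the paper does not necessarily use this formula:

```python
import math

def ochiai(ef, ep, total_failed):
    """Ochiai suspiciousness: ef / sqrt(total_failed * (ef + ep))."""
    if ef == 0:
        return 0.0
    return ef / math.sqrt(total_failed * (ef + ep))

def rank_context(slice_stmts, coverage, verdicts):
    """Score only statements inside the failure-related context (the
    slice); the ranking is then handed to the repair engine."""
    total_failed = sum(1 for ok in verdicts if not ok)
    scores = {}
    for s in slice_stmts:
        ef = sum(1 for cov, ok in zip(coverage, verdicts) if s in cov and not ok)
        ep = sum(1 for cov, ok in zip(coverage, verdicts) if s in cov and ok)
        scores[s] = ochiai(ef, ep, total_failed)
    return sorted(scores.items(), key=lambda kv: -kv[1])

coverage = [{"s1", "s2"}, {"s1", "s3"}, {"s2"}]   # per-test covered statements
verdicts = [True, False, True]                     # False = failing test
print(rank_context({"s1", "s3"}, coverage, verdicts))
# [('s3', 1.0), ('s1', 0.707...)] - s2 is outside the context, never scored
```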
-
Inchul SONG, Yohan J. ROH, Myoung Ho KIM
Article type: LETTER
Subject area: Data Engineering, Web Information Systems
2015 Volume E98.D Issue 4 Pages 960-963
Published: 2015
Released on J-STAGE: April 01, 2015
In this letter, we propose an energy-efficient in-network processing method for continuous grouped aggregation queries in wireless sensor networks. As in previous work, in our method sensor nodes partially compute aggregates as data flows through them in order to reduce the amount of data transferred. Unlike other methods, our method considers the group information of partial aggregates when sensor nodes forward them to next-hop nodes, so as to maximize the data reduction achieved by same-group partial aggregation. Through experimental evaluation, we show that our method outperforms existing methods in terms of energy efficiency.
-
Hae Young LEE
Article type: LETTER
Subject area: Information Network
2015 Volume E98.D Issue 4 Pages 964-967
Published: 2015
Released on J-STAGE: April 01, 2015
This letter presents a method to adaptively counter false data injection attacks (FDIAs) in wireless sensor networks, in which a fuzzy rule-based system detects FDIAs and chooses the most appropriate countermeasures. The method requires neither en-route verification processes nor manual parameter settings. The effectiveness of the method is shown with simulation results.
-
Tetsuro KITAHARA, Shunsuke HOKARI, Tatsuya NAGAYASU
Article type: LETTER
Subject area: Human-computer Interaction
2015 Volume E98.D Issue 4 Pages 968-971
Published: 2015
Released on J-STAGE: April 01, 2015
In this paper, we propose a jogging support system that plays background music while synchronizing its tempo with the user's jogging pace. Keeping an even pace is important in jogging, but it is not easy due to tiredness. Our system indicates variations in the runner's pace by changing the playback speed of the music according to the pace variation. Because the runner must keep an even pace to enjoy the music at its normal speed, the runner is spontaneously encouraged to keep an even pace. Experimental results show that our system reduces the variation of the jogging pace.
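The pace-to-playback coupling reduces to a simple mapping: play at normal speed at the target cadence and detune proportionally otherwise. The linear rule below is an assumption; the abstract only states that playback speed follows pace variation:

```python
def playback_rate(step_cadence_spm, song_tempo_bpm, target_cadence_spm):
    """Map the runner's measured cadence (steps per minute) to a music
    playback rate: 1.0 at the target pace, audibly detuned otherwise,
    nudging the runner back toward an even pace."""
    rate = step_cadence_spm / target_cadence_spm
    return rate, song_tempo_bpm * rate

print(playback_rate(160, 120, 170))  # under pace -> rate ~0.94, tempo ~113 BPM
```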
-
Jinki PARK, Jaehwa PARK, Young-Bin KWON, Chan-Gun LEE, Ho-Hyun PARK
Article type: LETTER
Subject area: Image Processing and Video Processing
2015 Volume E98.D Issue 4 Pages 972-975
Published: 2015
Released on J-STAGE: April 01, 2015
A new exemplar-based inpainting method is presented that, driven by feature vectors, effectively preserves global structures and textures in the restored region. Exemplars belonging to the source region are segmented based on their features. To express characteristics of exemplars, such as the shapes of structures and the smoothness of textures, the Harris corner response and the variance of pixel values are employed as a feature vector. Experiments show gains in restoration plausibility as well as a processing speedup.
-
Aram KIM, Junhee PARK, Byung-Uk LEE
Article type: LETTER
Subject area: Image Processing and Video Processing
2015 Volume E98.D Issue 4 Pages 976-979
Published: 2015
Released on J-STAGE: April 01, 2015
In a patch-based super-resolution algorithm, a low-resolution patch is influenced by surrounding patches due to blurring. We propose to remove this boundary effect by subtracting the blur contributed by the surrounding high-resolution patches, which enables more accurate sparse representation. We demonstrate improved performance through experiments. The proposed algorithm can be applied to most patch-based super-resolution algorithms to achieve additional improvement.
-
Xu CHENG, Nijun LI, Tongchi ZHOU, Lin ZHOU, Zhenyang WU
Article type: LETTER
Subject area: Image Recognition, Computer Vision
2015 Volume E98.D Issue 4 Pages 980-984
Published: 2015
Released on J-STAGE: April 01, 2015
This paper proposes a robust superpixel-based tracker via multiple-instance learning (MIL), which exploits the importance of instances and the mid-level features captured by superpixels for object tracking. We first present a superpixel-based appearance model that can compute the confidences of the object and the background. Most importantly, we introduce sample importance into the MIL procedure to improve tracking performance. The importance of each instance in the positive bag is defined by accumulating the confidence of all the pixels within the corresponding instance. Furthermore, when drift occurs, our tracker can recover the object using the superpixel-based appearance model. We retain the information of the first (k-1) frames during the updating process to alleviate drift to some extent. To evaluate the effectiveness of the proposed tracker, six video sequences of different challenging situations are tested. The comparison results demonstrate that the proposed tracker performs more robustly and accurately than six state-of-the-art trackers.
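The instance-importance rule quoted above (accumulate pixel confidences within each superpixel) can be sketched directly; the MIL update that consumes these weights is omitted, and the normalization is our assumption:

```python
def instance_importance(superpixels, confidence):
    """Weight each instance (superpixel) in the positive bag by the
    accumulated confidence of its pixels, normalized over the bag."""
    weights = {sp_id: sum(confidence[p] for p in pixels)
               for sp_id, pixels in superpixels.items()}
    total = sum(weights.values()) or 1.0
    return {sp: w / total for sp, w in weights.items()}

conf = {(0, 0): 0.9, (0, 1): 0.8, (1, 0): 0.1, (1, 1): 0.2}
print(instance_importance({"sp_a": [(0, 0), (0, 1)],
                           "sp_b": [(1, 0), (1, 1)]}, conf))
# {'sp_a': 0.85, 'sp_b': 0.15} - confident superpixels dominate the bag
```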
-
Huiyun JING, Xin HE, Qi HAN, Xiamu NIU
Article type: LETTER
Subject area: Image Recognition, Computer Vision
2015 Volume E98.D Issue 4 Pages 985-988
Published: 2015
Released on J-STAGE: April 01, 2015
Research on detecting co-saliency over multiple images has only just begun. Existing methods multiply the saliency on a single image by the correspondence over multiple images to estimate co-saliency, and thus have difficulty highlighting a co-salient object that is not salient in any single image. This is caused by two problems: (1) the correspondence computation lacks precision, and (2) the multiplicative co-saliency formulation does not fully consider the effect of correspondence on co-saliency. In this paper, we propose a novel co-saliency detection scheme that linearly combines foreground correspondence and single-view saliency. A foreground correspondence method based on progressive graph matching is proposed to improve the precision of the correspondence computation. The foreground correspondence is then linearly combined with single-view saliency to compute co-saliency. Under the linear combination formulation, high correspondence can yield high co-saliency even when single-view saliency is low. Experiments show that our method outperforms previous state-of-the-art co-saliency methods.
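The linear combination at the core of the scheme is simple enough to state in a few lines; the mixing weight alpha below is hypothetical. Note how the first pixel stays co-salient despite low single-view saliency, which a multiplicative formulation (0.9 x 0.2 = 0.18) would suppress:

```python
import numpy as np

def co_saliency(correspondence, single_view, alpha=0.6):
    """Linear combination from the abstract: high foreground
    correspondence can yield high co-saliency even where the
    single-view saliency is low."""
    return alpha * correspondence + (1 - alpha) * single_view

corr = np.array([0.9, 0.1])     # per-pixel foreground correspondence
sal = np.array([0.2, 0.8])      # per-pixel single-view saliency
print(co_saliency(corr, sal))   # [0.62 0.38]
```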
-
Zhong ZHANG, Shuang LIU, Xing MEI
Article type: LETTER
Subject area: Image Recognition, Computer Vision
2015 Volume E98.D Issue 4 Pages 989-993
Published: 2015
Released on J-STAGE: April 01, 2015
The bag-of-words (BOW) model has been extensively adopted by recent human action recognition methods. The pooling operation, which aggregates local descriptor encodings into a single representation, is a key determiner of the performance of BOW-based methods. However, the spatio-temporal relationships among interest points have rarely been considered in the pooling step, which results in an imprecise representation of human actions. In this paper, we propose a novel pooling strategy named contextual max pooling (CMP) to overcome this limitation. We add a constraint term to the objective function under the framework of max pooling, which forces the weights of interest points to be consistent with their probabilities. In this way, CMP explicitly considers the spatio-temporal contextual relationships among interest points and inherits the positive properties of max pooling. Our method is verified on three challenging datasets (KTH, UCF Sports, and UCF Films), and the results demonstrate that it achieves better results than the state-of-the-art methods in human action recognition.
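The effect CMP aims for can be approximated by weighting each interest point's encoding by its probability before taking the per-codeword max. The actual CMP solves a constrained objective; this weighted max is only a sketch of the intended behavior:

```python
import numpy as np

def contextual_max_pool(codes, probs):
    """Approximate CMP's effect: scale each interest point's encoding
    by its contextual probability, then max-pool per codeword, so that
    low-probability points no longer dominate the representation."""
    weighted = codes * probs[:, None]        # (n_points, n_codewords)
    return weighted.max(axis=0)

codes = np.array([[0.9, 0.0], [0.7, 0.8]])  # encodings of 2 interest points
probs = np.array([0.2, 1.0])                # contextual probabilities
print(codes.max(axis=0), contextual_max_pool(codes, probs))
# plain max: [0.9 0.8]; contextual: [0.7 0.8] (low-probability point suppressed)
```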
-
Yoshihide KATO, Shigeki MATSUBARA
Article type: LETTER
Subject area: Natural Language Processing
2015 Volume E98.D Issue 4 Pages 994-998
Published: 2015
Released on J-STAGE: April 01, 2015
This paper describes a method of identifying nonlocal dependencies in incremental parsing. Our incremental parser inserts empty elements at arbitrary positions to generate partial parse trees that include empty elements. To identify the correspondence between empty elements and their fillers, our method adopts a hybrid approach: slash feature annotation and heuristic rules. This decreases local ambiguity in incremental parsing and improves the accuracy of our parser.