IEICE Transactions on Information and Systems
Online ISSN : 1745-1361
Print ISSN : 0916-8532
Volume E99.D, Issue 4
Special Section on Information and Communication System Security
  • Toshihiro YAMAUCHI
    2016 Volume E99.D Issue 4 Pages 785-786
    Published: April 01, 2016
    Released on J-STAGE: April 01, 2016
    JOURNAL FREE ACCESS
    Download PDF (78K)
  • Kazukuni KOBARA
    Article type: INVITED PAPER
    2016 Volume E99.D Issue 4 Pages 787-795
    Published: April 01, 2016
    Released on J-STAGE: April 01, 2016
    JOURNAL FREE ACCESS
    Cyber-attacks and cybersecurity used to be issues only for those who use the Internet and computers. These issues, however, now extend even to those who do not use them directly. Society depends increasingly and heavily on networks and computers, which are no longer closed within cyberspace: through sensors and actuators, they interact with the real world. Such systems are known as CPS (Cyber-Physical Systems), IoT/E (Internet of Things/Everything), Industry 4.0, Industrial Internet, M2M, and so on. Whatever they are called, exploitation of any of these systems may seriously affect our daily lives, and appropriate countermeasures must be taken to mitigate the risks. In this paper, cybersecurity in ICS (Industrial Control Systems) is reviewed as a leading example of cyber-physical security for critical infrastructures. Then, as a future direction, IoT security for consumers is discussed.
    Download PDF (1450K)
  • Rashed MAZUMDER, Atsuko MIYAJI
    Article type: PAPER
    Subject area: Cryptography and cryptographic protocols
    2016 Volume E99.D Issue 4 Pages 796-804
    Published: April 01, 2016
    Released on J-STAGE: April 01, 2016
    JOURNAL FREE ACCESS
    A cryptographic hash function is an important tool in modern cryptography. It comprises a compression function, which can be built either from scratch or from a blockcipher. Familiar blockcipher-based compression function schemes include Weimar, Hirose, Tandem, Abreast, Nandi, and ISA-09. Interestingly, the security proofs of all these schemes are based on the ideal cipher model (ICM), which assumes an idealized environment. It is therefore desirable to use a proof model closer to the real world, such as the weak cipher model (WCM). Hence, we propose an (n, 2n) blockcipher compression function that is secure under the ideal cipher model, the weak cipher model, and the extended weak cipher model (ext.WCM). Additionally, while the majority of existing schemes require multiple key schedules, the proposed scheme and Hirose-DM require only a single key schedule. The efficiency rate of our scheme is r=1/2. Moreover, the scheme makes two blockcipher calls, which run in parallel.
    Download PDF (547K)
  • Yasuyuki NOGAMI, Hiroto KAGOTANI, Kengo IOKIBE, Hiroyuki MIYATAKE, Tak ...
    Article type: PAPER
    Subject area: Cryptography and cryptographic protocols
    2016 Volume E99.D Issue 4 Pages 805-815
    Published: April 01, 2016
    Released on J-STAGE: April 01, 2016
    JOURNAL FREE ACCESS
    Pairing-based cryptography has enabled many innovative cryptographic applications such as attribute-based cryptography and semi-homomorphic encryption. A pairing is a bilinear map constructed on a torsion group structure defined on a special class of elliptic curves, namely pairing-friendly curves. Pairing-friendly curves are roughly classified into supersingular and non-supersingular curves. In recent years, non-supersingular pairing-friendly curves have attracted attention for security reasons. Although non-supersingular pairing-friendly curves can support various security levels through various parameter settings, most software and hardware implementations tightly restrict these settings to achieve computational efficiency and avoid implementation difficulties. This paper presents an FPGA implementation that supports various parameter settings of pairings on non-supersingular pairing-friendly curves, combining Montgomery reduction, the cyclic vector multiplication algorithm, projective coordinates, and the Tate pairing. Experimental results with resource usage are also reported.
    Download PDF (1081K)
  • Kazumasa OMOTE, Phuong-Thao TRAN
    Article type: PAPER
    Subject area: Cryptography and cryptographic protocols
    2016 Volume E99.D Issue 4 Pages 816-829
    Published: April 01, 2016
    Released on J-STAGE: April 01, 2016
    JOURNAL FREE ACCESS
    Proof of Retrievability (POR) is a protocol by which a client can distribute his/her data to cloud servers and check whether the data stored in the servers is available and intact. More recently, network coding has been applied to POR to improve network throughput. Although many network coding-based POR schemes have been proposed, most do not achieve two practical features: direct repair and dynamic operations. In this paper, we propose the D2-POR scheme (Direct repair and Dynamic operations in network coding-based POR) to address these shortcomings. When a server is corrupted, D2-POR supports direct repair, in which the data stored in the corrupted server is repaired using data provided directly by healthy servers. The client is thus freed from the burden of data repair. Furthermore, D2-POR allows the client to efficiently perform dynamic operations, i.e., modification, insertion, and deletion.
    Download PDF (432K)
  • Shoichiro YAMASAKI, Tomoko K. MATSUSHIMA
    Article type: PAPER
    Subject area: Network security
    2016 Volume E99.D Issue 4 Pages 830-838
    Published: April 01, 2016
    Released on J-STAGE: April 01, 2016
    JOURNAL FREE ACCESS
    Secret sharing is a method of information protection in which information is divided into n shares and can be reconstructed from any k of them, while k-1 or fewer shares reveal no knowledge of the information. Physical layer security is a method of achieving favorable reception conditions at the destination terminal in wireless communications. In this study, we propose a security enhancement technique for wireless packet communications that uses secret sharing and physical layer security to exchange a secret encryption key. The encryption key for the packet information is set as the secret in the secret sharing scheme and divided into n shares, each of which is placed in a packet header. The base station transmits the packets to the destination terminal using physical layer security based on precoded multi-antenna transmission. With this transmission scheme, the destination terminal can receive at least k shares without error and perfectly recover the secret, whereas an eavesdropper terminal can receive at most k-1 shares without error and recovers no secret information. Specifically, we propose a protection technique using secret sharing based on systematic Reed-Solomon codes, which establishes an advantageous condition for the destination terminal to recover the secret. Evaluation by numerical analysis and computer simulation shows the validity of the proposed technique.
    Download PDF (1706K)
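The k-of-n recovery property described above can be illustrated with a minimal polynomial-based threshold scheme over a prime field (a Shamir-style construction, which is closely related to but not identical to the paper's systematic Reed-Solomon scheme; the field size and function names here are illustrative assumptions):

```python
# Minimal (k, n) threshold secret sharing over GF(P): any k shares
# reconstruct the secret; fewer reveal nothing. This is a generic
# Shamir-style sketch, not the paper's systematic Reed-Solomon design.
import random

P = 2**31 - 1  # a Mersenne prime defining the field GF(P)

def make_shares(secret, k, n):
    """Split `secret` (an int < P) into n shares via a random degree-(k-1) polynomial."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    # Share x is the point (x, f(x)) for x = 1..n.
    return [(x, sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 using exactly k shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

In the paper's setting, the shared secret would be the packet encryption key and each share would ride in a packet header; physical layer security then ensures only the destination collects k error-free shares.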
  • Rui WANG, Qiaoyan WEN, Hua ZHANG, Xuelei LI
    Article type: PAPER
    Subject area: Network security
    2016 Volume E99.D Issue 4 Pages 839-849
    Published: April 01, 2016
    Released on J-STAGE: April 01, 2016
    JOURNAL FREE ACCESS
    Tor is the most popular and well-researched low-latency anonymous communication network, providing sender privacy to Internet users. It also provides recipient privacy by making TCP services available through "hidden services", which allow users not only to access information anonymously but also to publish information anonymously. However, based on our analysis of the hidden service protocol, we found a special combination of cells (the basic transmission unit over Tor), transmitted during the circuit creation procedure, that can be used to degrade anonymity. In this paper, we investigate a novel protocol-feature-based attack against Tor's hidden services. The main idea resides in the fact that an attacker can monitor traffic and manipulate cells at the client-side entry router, while a cooperating adversary at the hidden server side helps reveal the communication relationship. Compared with existing attacks, our attack reveals the client of a hidden service and does not rely on traffic analysis or watermarking techniques. We manipulate Tor cells at the entry router to generate the protocol feature; once our controlled entry onion routers detect this feature, we can confirm the IP address of the client. We implemented this attack against hidden services and conducted extensive theoretical analysis and experiments over the Tor network. The experimental results validate that our attack achieves a high detection rate with a low false positive rate.
    Download PDF (2016K)
  • Xiulei WANG, Ming CHEN, Changyou XING, Tingting ZHANG
    Article type: PAPER
    Subject area: Network security
    2016 Volume E99.D Issue 4 Pages 850-859
    Published: April 01, 2016
    Released on J-STAGE: April 01, 2016
    JOURNAL FREE ACCESS
    Availability is an important issue in software-defined networking (SDN). In this paper, experiments on an SDN testbed show that the resource utilization of the data plane and control plane changes drastically when DDoS attacks occur, mainly because DDoS attacks inject a large number of fake flows into the network in a short time. Based on this observation and analysis, we propose a DDoS defense mechanism built on a database of legitimate source and destination IP addresses. First, each flow is abstracted as a source-destination IP address pair, and a legitimate source-destination IP address pair database (LSDIAD) is established from historical normal traffic traces. Then the proportion of new source-destination IP address pairs in the traffic per unit time is accumulated by a non-parametric cumulative sum (CUSUM) algorithm to detect DDoS attacks quickly and accurately. Upon an alarm from the non-parametric CUSUM, the attack flows are filtered and redirected to a middlebox network for deep analysis via the southbound API of the SDN. An online update policy keeps the LSDIAD timely and accurate. The mechanism is implemented mainly in the controller, and simulation results show that it achieves good performance in protecting SDN from DDoS attacks.
    Download PDF (1840K)
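The detection step described above, measuring the fraction of source-destination pairs absent from the legitimate database and feeding it to a non-parametric CUSUM, can be sketched as follows (a simplified illustration; the drift and threshold values, field names, and class names are assumptions, not the authors' parameters):

```python
# Sketch: per time window, compute the ratio of source-destination IP
# pairs NOT found in the legitimate database (LSDIAD), then accumulate
# that ratio with a non-parametric CUSUM statistic to raise an alarm.
def new_pair_ratio(window_flows, lsdiad):
    """Fraction of distinct (src, dst) pairs in this window that are new."""
    pairs = {(f["src"], f["dst"]) for f in window_flows}
    if not pairs:
        return 0.0
    return sum(1 for p in pairs if p not in lsdiad) / len(pairs)

class NonParamCusum:
    """y_t = max(0, y_{t-1} + x_t - drift); alarm when y_t > threshold."""
    def __init__(self, drift=0.05, threshold=0.5):
        self.drift, self.threshold, self.y = drift, threshold, 0.0

    def update(self, x):
        self.y = max(0.0, self.y + x - self.drift)
        return self.y > self.threshold
```

Under normal traffic the new-pair ratio stays near zero and the statistic decays; a flood of fake flows drives the ratio toward 1 and the statistic crosses the threshold within a few windows.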
  • Yuta TAKATA, Mitsuaki AKIYAMA, Takeshi YAGI, Takeo HARIU, Shigeki GOTO
    Article type: PAPER
    Subject area: Web security
    2016 Volume E99.D Issue 4 Pages 860-872
    Published: April 01, 2016
    Released on J-STAGE: April 01, 2016
    JOURNAL FREE ACCESS
    Drive-by download attacks force users to automatically download and install malware by redirecting them to malicious URLs that exploit vulnerabilities in the user's web browser. In addition, several evasion techniques, such as code obfuscation and environment-dependent redirection, are combined with drive-by download attacks to prevent detection. In environment-dependent redirection, attackers profile information on the user's environment, such as the name and version of the browser and browser plugins, and launch a drive-by download attack only on certain targets by changing the destination URL. Malicious-content detection and collection techniques such as honeyclients cannot detect such an attack when their environment does not match that of the attack target, because they are never redirected. It is therefore necessary to improve analysis coverage while countering these adversarial evasion techniques. We propose a method for exhaustively analyzing JavaScript code relevant to redirections and extracting the destination URLs in the code. Our method facilitates attack detection by extracting a large number of URLs while controlling the analysis overhead through the exclusion of code not relevant to redirections. We implemented our method in a browser emulator called MineSpider that automatically extracts potential URLs from websites, and validated it using communication data with malicious websites captured over a three-year period. The experimental results demonstrate that MineSpider extracted, in a few seconds, 30,000 new URLs from malicious websites that conventional methods missed.
    Download PDF (936K)
  • Bo SUN, Mitsuaki AKIYAMA, Takeshi YAGI, Mitsuhiro HATADA, Tatsuya MORI
    Article type: PAPER
    Subject area: Web security
    2016 Volume E99.D Issue 4 Pages 873-882
    Published: April 01, 2016
    Released on J-STAGE: April 01, 2016
    JOURNAL FREE ACCESS
    Modern web users may encounter a browser security threat called drive-by-download attacks when surfing the Internet. Drive-by-download attacks use exploit code to take control of the user's web browser, and many web users do not consider this underlying threat when clicking URLs. A URL blacklist is one practical approach to thwarting browser-targeted attacks. However, a URL blacklist cannot cope with previously unseen malicious URLs; to remain effective, it must be kept up to date. Given these observations, we propose a framework called automatic blacklist generator (AutoBLG) that automates the collection of new malicious URLs, starting from an existing URL blacklist. The primary mechanism of AutoBLG is to expand the search space of web pages while reducing the number of URLs to be analyzed by applying several pre-filters, such as similarity search, to accelerate blacklist generation. AutoBLG consists of three primary components: URL expansion, URL filtration, and URL verification. Through extensive analysis using a high-performance web client honeypot, we demonstrate that AutoBLG can successfully discover new and previously unknown drive-by-download URLs in the vast web space.
    Download PDF (2138K)
  • Hyoung-Kee CHOI, Ki-Eun SHIN, Hyoungshick KIM
    Article type: PAPER
    Subject area: Privacy protection in information systems
    2016 Volume E99.D Issue 4 Pages 883-890
    Published: April 01, 2016
    Released on J-STAGE: April 01, 2016
    JOURNAL FREE ACCESS
    With the rapid merger of the healthcare business and information technology, more healthcare institutions and medical practices are sharing information. Since these records often contain patients' sensitive personal information, Healthcare Information Systems (HISs) should be properly designed to manage them securely. We propose a novel security design for an HIS that complies with security and privacy rules. The proposed system defines protocols to ensure secure delivery of medical records over insecure public networks and reliable management of medical records on a remote server, without incurring excessive costs to implement the security services. We demonstrate the practicality of the proposed system through a security analysis and performance evaluation.
    Download PDF (1099K)
  • Hyunsu MUN, Youngseok LEE
    Article type: LETTER
    Subject area: Privacy protection in information systems
    2016 Volume E99.D Issue 4 Pages 891-894
    Published: April 01, 2016
    Released on J-STAGE: April 01, 2016
    JOURNAL FREE ACCESS
    Online used-goods markets such as eBay, Yahoo Auction, and Craigslist have become popular web services. Compared with shopping-mall-style websites like eBay or Yahoo Auction, web community-style used markets often expose sellers' private information. In Korea, the most popular online used market is a website called "Joonggonara", with more than 13 million users; it uses an informal posting format that does not protect users' personally identifiable information. In this work, we examine privacy leakage from online used markets in Korea and show that 45.9% and 74.0% of sampled posts expose cellular phone numbers and email addresses, respectively. In addition, we demonstrate that this private information can be maliciously exploited to identify a subscriber of a social network service.
    Download PDF (1253K)
Special Section on Data Engineering and Information Management
  • Toshiyuki AMAGASA
    2016 Volume E99.D Issue 4 Pages 895
    Published: April 01, 2016
    Released on J-STAGE: April 01, 2016
    JOURNAL FREE ACCESS
    Download PDF (61K)
  • Huifeng GUO, Dianhui CHU, Yunming YE, Xutao LI, Xixian FAN
    Article type: PAPER
    2016 Volume E99.D Issue 4 Pages 896-905
    Published: April 01, 2016
    Released on J-STAGE: April 01, 2016
    JOURNAL FREE ACCESS
    Ranking is an important task in information systems, with applications such as document/webpage retrieval, collaborative filtering, and advertising. The last decade has witnessed growing interest in learning to rank as a means of leveraging training information in a system. In this paper, we propose a new learning-to-rank method, BLM-Rank, which uses a linear function to score samples and models the pairwise preference between samples based on their scores under a Bayesian framework. A stochastic gradient approach is adopted to maximize the posterior probability in BLM-Rank. For industrial practice, we have also implemented the proposed algorithm on a Graphics Processing Unit (GPU). Experimental results on LETOR demonstrate that BLM-Rank outperforms state-of-the-art methods, including RankSVM-Struct, RankBoost, AdaRank-NDCG, AdaRank-MAP, and ListNet. Moreover, the GPU implementation of BLM-Rank is ten to eleven times faster than its CPU counterpart in the training phase, and one to four times faster in the testing phase.
    Download PDF (827K)
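The core idea, a linear scoring function whose pairwise preferences are modeled with a logistic likelihood and fit by stochastic gradient ascent on the posterior, can be sketched in a few lines (a simplified illustration of this family of models; the prior, learning rate, and function name are assumptions, not the authors' exact BLM-Rank formulation):

```python
# Sketch of a Bayesian pairwise linear ranker: score s(x) = w.x, model
# P(x_pos preferred over x_neg) = sigmoid(s(x_pos) - s(x_neg)), and
# maximize log-likelihood plus a Gaussian prior on w (acts as L2 decay)
# by stochastic gradient ascent.
import math
import random

def train_pairwise_ranker(pairs, dim, reg=0.01, lr=0.1, epochs=200, seed=0):
    """pairs: list of (x_pos, x_neg) where x_pos should outrank x_neg."""
    rng = random.Random(seed)
    w = [0.0] * dim
    for _ in range(epochs):
        x_pos, x_neg = pairs[rng.randrange(len(pairs))]
        diff = [a - b for a, b in zip(x_pos, x_neg)]
        s = sum(wi * di for wi, di in zip(w, diff))
        g = 1.0 - 1.0 / (1.0 + math.exp(-s))      # d/ds log sigmoid(s)
        for i in range(dim):
            w[i] += lr * (g * diff[i] - reg * w[i])  # gradient + prior term
    return w
```

At test time, documents are simply sorted by the learned score w.x, which is also what makes a GPU implementation attractive: scoring is a dense matrix-vector product.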
  • Keisuke KIRITOSHI, Qiang MA
    Article type: PAPER
    2016 Volume E99.D Issue 4 Pages 906-917
    Published: April 01, 2016
    Released on J-STAGE: April 01, 2016
    JOURNAL FREE ACCESS
    To support efficient gathering of diverse information about a news event, we focus on descriptions of named entities (persons, organizations, locations) in news articles. We extend the stakeholder mining proposed by Ogawa et al. and extract descriptions of named entities in articles. We propose three measures (difference in opinion, difference in detail, and difference in factor coverage) to rank news articles by analyzing differences in the descriptions of named entities. On the basis of these three measures, we develop a mobile news app that helps users acquire diverse reports and improve their understanding of the news. For the article a user is currently reading, the app ranks and presents related articles from different perspectives according to the three measures. A notable feature of our system is that it considers the user's access history when selecting related articles; in other words, we propose a context-aware re-ranking method for enhancing the diversity of the news reports presented to users. We evaluate the three measures and the re-ranking method with a crowdsourcing experiment and a user study, respectively.
    Download PDF (2279K)
  • Hongyeon KIM, Sungmin KANG, Seokjoo LEE, Jun-Ki MIN
    Article type: PAPER
    2016 Volume E99.D Issue 4 Pages 918-926
    Published: April 01, 2016
    Released on J-STAGE: April 01, 2016
    JOURNAL FREE ACCESS
    MapReduce is considered the de facto framework for storing and processing massive data due to its attractive features: simplicity, flexibility, fault tolerance, and scalability. However, since the MapReduce framework does not provide an efficient access method to data (i.e., an index), the whole dataset must be scanned even when a user wants to access only a small portion of it. In this paper, we devise an efficient algorithm for constructing quadtrees with MapReduce. Our proposed algorithm reduces index construction time by using a sampling technique to partition the data set. To improve query performance, we extend the quadtree construction algorithm so that adjacent quadtree nodes are merged when the number of points in those nodes falls below a predefined threshold. Furthermore, we present an effective algorithm for incremental updates. Our experimental results show the efficiency of the proposed algorithms in diverse environments.
    Download PDF (1533K)
  • Nan JIANG, Wenge RONG, Baolin PENG, Yifan NIE, Zhang XIONG
    Article type: PAPER
    2016 Volume E99.D Issue 4 Pages 927-935
    Published: April 01, 2016
    Released on J-STAGE: April 01, 2016
    JOURNAL FREE ACCESS
    One of the main research tasks in community question answering (cQA) is finding the most relevant questions for a given new query, thereby providing useful knowledge for users. The straightforward approach capitalizes on textual features, i.e., a bag-of-words (BoW) representation, to match queries against questions. However, these approaches suffer from a lexical gap: if lexical matching fails, they cannot model the semantic meaning. Latent semantic models such as latent semantic analysis (LSA) attempt to map queries to semantically similar questions through a lower-dimensional representation, but LSA is a shallow, linear model that cannot capture the highly non-linear correlations in cQA. Moreover, both BoW and semantics-oriented solutions use a single dictionary to represent the query, question, and answer in the same feature space, whereas the correlations we observe in data imply that they lie in entirely different feature spaces. In light of these observations, this paper proposes a tri-modal deep belief network (tri-DBN) to extract a unified representation for the query, question, and answer, under the hypothesis that they lie in three different feature spaces. We compare the unified representation extracted by our model with other representations on a Yahoo! Answers query dataset. Experimental results reveal that the proposed model captures semantic meaning both within and between queries, questions, and answers, and suggest that the joint representation extracted by the proposed method can improve the performance of cQA archive search.
    Download PDF (750K)
  • Chooi-Ling GOH, Shigetoshi NAKATAKE
    Article type: PAPER
    2016 Volume E99.D Issue 4 Pages 936-943
    Published: April 01, 2016
    Released on J-STAGE: April 01, 2016
    JOURNAL FREE ACCESS
    Blood pressure measurement by the auscultatory method is a compulsory skill required of all healthcare practitioners. During a measurement, they must simultaneously concentrate on recognizing the Korotkoff sounds, reading the sphygmomanometer scale, and steadily deflating the cuff pressure. This complex operation is difficult for new learners, who need substantial practice under a supervisor's guidance. However, a supervisor is not always available, and consequently learners often lack sufficient training. To help them master blood pressure measurement by the auscultatory method more efficiently and effectively, we propose using a sensor device that captures the Korotkoff sound and cuff pressure signals during measurement and displays the signal changes on a visualization tool over a wireless connection. At the end of a measurement, learners can verify their deflation speed and their recognition of the Korotkoff sounds in the graphical view, and instantly compare their measurements with the machine's. With this device, new learners no longer need to wait for a supervisor; they can practice with their colleagues more frequently. As a result, they can acquire the skill in a shorter time and be more confident in their measurements.
    Download PDF (1506K)
  • Nobutaka SUZUKI, Kosetsu IKEDA, Yeondae KWON
    Article type: PAPER
    2016 Volume E99.D Issue 4 Pages 944-958
    Published: April 01, 2016
    Released on J-STAGE: April 01, 2016
    JOURNAL FREE ACCESS
    In this paper, we consider how to solve the all-pairs regular path problem on large graphs efficiently. Let G be a graph and r be a regular path query, and consider finding the answers to r on G. If G is small enough to fit in main memory, it suffices to load all of G into main memory and traverse it to find paths matching r. However, if G is too large to fit in main memory, another approach is needed. We propose a novel approach based on an external-memory algorithm: our algorithm finds the answers matching r by scanning the node list of G sequentially. A small experiment suggests that our algorithm can solve the problem efficiently.
    Download PDF (1860K)
  • Yongyos KAEWPITAKKUN, Kiyoaki SHIRAI
    Article type: PAPER
    2016 Volume E99.D Issue 4 Pages 959-968
    Published: April 01, 2016
    Released on J-STAGE: April 01, 2016
    JOURNAL FREE ACCESS
    Sentiment analysis of microblogs has become an important classification task because a large amount of user-generated content is published on the Internet. On Twitter, it is common for a user to express several sentiments in one tweet. Therefore, it is important to classify the polarity not of the whole tweet but of a specific target about which people express their opinions. Moreover, the performance of machine learning approaches depends greatly on the domain of the training data, and manually annotating a large set of tweets for a specific domain is very time-consuming. In this paper, we propose a method for sentiment classification at the target level that incorporates on-target sentiment features and user-aware features into a classifier trained automatically from data created for the specific target. An add-on lexicon, an extended target list, and a competitor list are also constructed as knowledge sources for the sentiment analysis. None of the processes in the proposed framework requires manual annotation. Our experimental results show that the method is effective and improves sentiment classification performance over the baselines.
    Download PDF (385K)
  • Md-Mizanur RAHOMAN, Ryutaro ICHISE
    Article type: PAPER
    2016 Volume E99.D Issue 4 Pages 969-978
    Published: April 01, 2016
    Released on J-STAGE: April 01, 2016
    JOURNAL FREE ACCESS
    These days, the Web contains a huge volume of (semi-)structured data, called Linked Data (LD). However, LD suffers from poor data quality, which creates a need to identify erroneous data. Because manual checking of erroneous data is impractical, automatic detection is necessary. According to the data publishing guidelines for LD, data should use (already defined) ontologies, which yields type-annotated LD. Usually, type annotation helps in understanding the data; in our observation, however, it can also be used to identify erroneous data. Therefore, to automatically identify possibly erroneous data in type-annotated LD, we propose a framework that uses a novel nearest-neighbor-based error detection technique. We evaluate our framework on DBpedia, a type-annotated LD dataset, and find that it detects errors better than a state-of-the-art framework.
    Download PDF (332K)
  • Shunsuke OHASHI, Giovanni Yoko KRISTIANTO, Goran TOPIC, Akiko AIZAWA
    Article type: PAPER
    2016 Volume E99.D Issue 4 Pages 979-988
    Published: April 01, 2016
    Released on J-STAGE: April 01, 2016
    JOURNAL FREE ACCESS
    Mathematical formulae play an important role in many scientific domains. Despite the importance of mathematical formula search, conventional keyword-based retrieval methods are not sufficient for searching mathematical formulae, which are structured as trees. The increasing number and structural complexity of mathematical formulae in scientific articles make large-scale structure-aware formula search techniques necessary. In this paper, we formulate three types of measures that represent distinctive features of the semantic similarity of math formulae, and develop efficient hash-based algorithms for their approximate calculation. Our experiments using the NTCIR-11 Math-2 Task dataset, a large-scale test collection for math information retrieval with about 60 million formulae, show that the proposed method improves search precision while keeping scalability and runtime efficiency high.
    Download PDF (499K)
  • Masafumi MAKINO, Tatsuo TSUJI, Ken HIGUCHI
    Article type: PAPER
    2016 Volume E99.D Issue 4 Pages 989-999
    Published: April 01, 2016
    Released on J-STAGE: April 01, 2016
    JOURNAL FREE ACCESS
    In this paper, we present a new encoding/decoding method for dynamic multidimensional datasets and its implementation scheme. Our method encodes an n-dimensional tuple into a pair of scalar values even when n is large, and encodes and decodes tuples using only shift and AND/OR register instructions. One of the most serious problems in multidimensional-array-based tuple encoding is that the size of an encoded result may exceed the machine word size for large-scale tuple sets; this problem is efficiently resolved in our scheme. We confirmed the advantages of our scheme by analytical and experimental evaluation. The experimental evaluation compared our prototype system with two other systems: (1) a system based on a similar encoding scheme called history-offset encoding, and (2) the PostgreSQL RDBMS. In most cases, both the storage and retrieval costs of our system significantly outperformed those of the other systems.
    Download PDF (1815K)
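The basic shift/mask idea behind such tuple encodings can be illustrated as follows (a minimal sketch: it packs an n-tuple into a single integer with fixed per-dimension bit widths, whereas the paper's scheme produces a pair of scalars and handles dynamic growth and word-size overflow; the function names and widths are illustrative):

```python
# Pack the coordinates of an n-dimensional tuple into one integer by
# assigning each dimension a fixed bit width, using only shifts and
# AND/OR operations; decoding reverses the process.
def encode(tuple_, widths):
    code, shift = 0, 0
    for value, width in zip(tuple_, widths):
        code |= (value & ((1 << width) - 1)) << shift
        shift += width
    return code

def decode(code, widths):
    out = []
    for width in widths:
        out.append(code & ((1 << width) - 1))
        code >>= width
    return tuple(out)
```

Because encode/decode use only shifts and masks, a tuple comparison or index probe reduces to cheap integer arithmetic, which is the property the paper exploits.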
  • Ryosuke KOYANAGI, Ryo FURUKAWA, Tsubasa TAKAHASHI, Takuya MORI, Toshiy ...
    Article type: PAPER
    2016 Volume E99.D Issue 4 Pages 1000-1009
    Published: April 01, 2016
    Released on J-STAGE: April 01, 2016
    JOURNAL FREE ACCESS
    In this paper we propose an improved algorithm for k-concealment, which has been proposed as an alternative to the well-known k-anonymity model. k-concealment achieves privacy goals similar to k-anonymity: it generalizes the records in a table so that each record is indistinguishable from at least k-1 other records, while achieving higher utility than k-anonymity. However, its computation is quite expensive for large datasets with massive numbers of records, due to its high computational complexity. To cope with this problem, we propose neighbor lists, in which similar records are stored for each record. Neighbor lists are constructed in advance, and can be built efficiently by mapping each record to a point in a high-dimensional space and using appropriate multidimensional indexes. Our proposed scheme decreases the execution time from O(kn²) to O(k²n + kn log n), and can be practically applied to databases with millions of records. An experimental evaluation using a real dataset reveals that the proposed scheme achieves the same level of utility as k-concealment while maintaining efficiency.
    Download PDF (1460K)
  • Marie KATSURAI, Ikki OHMUKAI, Hideaki TAKEDA
    Article type: PAPER
    2016 Volume E99.D Issue 4 Pages 1010-1018
    Published: April 01, 2016
    Released on J-STAGE: April 01, 2016
    JOURNAL FREE ACCESS
    It is crucial to promote interdisciplinary research and to recommend collaborators from different research fields via academic database analysis. This paper addresses the problem of characterizing researchers' interests with a set of diverse research topics found in a large-scale academic database. Specifically, we first use latent Dirichlet allocation to extract topics, as distributions over words, from a training dataset. Then, we convert the textual features of a researcher's publications into topic vectors and calculate the centroid of these vectors to summarize the researcher's interests as a single vector. In experiments conducted on CiNii Articles, the largest academic database in Japan, we show that the extracted topics reflect the diversity of the research fields in the database. The experimental results also indicate the applicability of the proposed topic representation to the author disambiguation problem.
    Download PDF (725K)
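    The centroid summarization step can be sketched minimally as follows; the topic vectors are hypothetical placeholders for the per-publication LDA topic distributions:

```python
def researcher_interest_vector(publication_topic_vectors):
    """Summarize a researcher's interests as the centroid of the topic
    vectors of his or her publications (assumed already obtained from
    an LDA topic model)."""
    n = len(publication_topic_vectors)
    dim = len(publication_topic_vectors[0])
    return [sum(v[d] for v in publication_topic_vectors) / n for d in range(dim)]

# Two publications, each represented over three hypothetical topics.
vectors = [[0.8, 0.1, 0.1], [0.2, 0.7, 0.1]]
print(researcher_interest_vector(vectors))  # approximately [0.5, 0.4, 0.1]
```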
  • Kyoungman BAE, Youngjoong KO
    Article type: LETTER
    2016 Volume E99.D Issue 4 Pages 1019-1022
    Published: April 01, 2016
    Released on J-STAGE: April 01, 2016
    JOURNAL FREE ACCESS
    This paper presents a new question expansion method for question classification in cQA services. Each input consists of a question alone, whereas each training instance is a question-answer pair; the input questions therefore often cannot provide enough information for good classification. Since the answer is strongly associated with the input question, we create a pseudo answer to expand each input question. Translation probabilities between questions and answers and a pseudo-relevance feedback technique are used to generate the pseudo answer. As a result, we obtain significantly improved performance when the two approaches are effectively combined.
    Download PDF (393K)
Special Section on Cyberworlds
  • Masayuki NAKAJIMA
    2016 Volume E99.D Issue 4 Pages 1023
    Published: April 01, 2016
    Released on J-STAGE: April 01, 2016
    JOURNAL FREE ACCESS
    Download PDF (80K)
  • Asako SOGA, Bin UMINO, Yuho YAZAKI, Motoko HIRAYAMA
    Article type: PAPER
    2016 Volume E99.D Issue 4 Pages 1024-1031
    Published: April 01, 2016
    Released on J-STAGE: April 01, 2016
    JOURNAL FREE ACCESS
    This paper reports an assessment of the feasibility and practicality of a creation support system for contemporary dance e-learning. We developed a Body-part Motion Synthesis System (BMSS) that allows users to create choreographies by synthesizing body-part motions, to increase the effect of learning contemporary dance choreography. Short created choreographies can be displayed as animations using 3DCG characters. The system targets students who are studying contemporary dance and is designed to promote discovery learning of contemporary dance. We conducted a series of evaluation experiments on creating contemporary dance choreographies to verify the learning effectiveness of our system as a support system for discovery learning. In experiments with 26 students who created contemporary dances, we verified that the BMSS is a helpful creation training tool for discovering new choreographic methods, new dance movements, and new awareness of the students' own bodies.
    Download PDF (1906K)
  • Chung-Liang LAI, Chien-Ming TSENG, D. ERDENETSOGT, Tzu-Kuan LIAO, Ya-L ...
    Article type: PAPER
    2016 Volume E99.D Issue 4 Pages 1032-1037
    Published: April 01, 2016
    Released on J-STAGE: April 01, 2016
    JOURNAL FREE ACCESS
    A low-cost prototype Kinect-based rehabilitation system was developed for restoring the balance capability of stroke patients. A total of 16 stroke patients were recruited to participate in the study. After excluding 3 patients who failed to finish all of the rehabilitation sessions, the data of 13 patients were analyzed. The results exhibited a significant effect in recovering the balance function of the patients after 3 weeks of balance training. Additionally, a questionnaire survey revealed that the designed system was perceived as effective and easy to operate.
    Download PDF (759K)
  • Isao MIYAGAWA, Yukinobu TANIGUCHI
    Article type: PAPER
    2016 Volume E99.D Issue 4 Pages 1038-1051
    Published: April 01, 2016
    Released on J-STAGE: April 01, 2016
    JOURNAL FREE ACCESS
    We propose a practical method that acquires dense light transports from unknown 3D objects by employing orthogonal illumination based on a Walsh-Hadamard matrix for relighting computation. We assume the presence of color crosstalk, which represents color mixing between projector pixels and camera pixels, and then describe the light transport matrix by using sets of the orthogonal illumination and the corresponding camera response. Our method handles not only direct reflection light but also global light radiated from the entire environment. Tests of the proposed method using real images show that orthogonal illumination is an effective way of acquiring accurate light transports from various 3D objects. We demonstrate a relighting test based on acquired light transports and confirm that our method outputs excellent relighting images that compare favorably with the actual images observed by the system.
    Download PDF (3877K)
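    The role of orthogonal illumination can be illustrated with a toy example: if the columns of a Walsh-Hadamard matrix H (for which H Hᵀ = nI) are used as illumination patterns and C collects the corresponding camera responses, the transport matrix is recovered as T = C Hᵀ / n. The sketch below uses a hypothetical 4x4 transport matrix and ignores practical issues such as the nonnegativity of real illumination:

```python
def hadamard(n):
    """Sylvester construction of an n x n Walsh-Hadamard matrix (n a power of 2)."""
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

n = 4
H = hadamard(n)
# A hypothetical 4x4 light transport matrix (camera pixels x projector pixels).
T = [[1, 0, 0, 0], [0, 2, 0, 0], [0, 1, 3, 0], [1, 0, 0, 4]]
C = matmul(T, H)  # camera responses to the Hadamard illumination patterns
Ht = [list(col) for col in zip(*H)]
T_rec = [[x / n for x in row] for row in matmul(C, Ht)]  # T = C H^T / n
print(T_rec)
```

The orthogonality of the rows of H is what makes the inversion a single matrix product instead of solving a linear system per pixel.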
  • Shohei KAKEI, Masami MOHRI, Yoshiaki SHIRAISHI, Masakatu MORII
    Article type: PAPER
    2016 Volume E99.D Issue 4 Pages 1052-1061
    Published: April 01, 2016
    Released on J-STAGE: April 01, 2016
    JOURNAL FREE ACCESS
    TPM-embedded devices can be used as authentication tokens by issuing certificates to signing keys generated by the TPM. A TPM generates an Attestation Identity Key (AIK) and a Binding Key (BK), which are RSA keys. The AIK is used to identify the TPM. The BK is used to encrypt data so that only a specific TPM can decrypt it. A TPM can be used for device authentication by linking an SSL client certificate to the TPM. This paper proposes a method for issuing an AIK certificate with OpenID and a method for issuing an SSL client certificate to a specific TPM using the AIK and BK. In addition, the paper shows how to implement a device authentication system using the SSL client certificate linked to the TPM.
    Download PDF (1839K)
  • Wei ZHANG, Huan REN, Qingshan JIANG
    Article type: PAPER
    2016 Volume E99.D Issue 4 Pages 1062-1070
    Published: April 01, 2016
    Released on J-STAGE: April 01, 2016
    JOURNAL FREE ACCESS
    Phishing attacks pursue financial returns by luring Internet users into exposing their sensitive information. Phishing originated in e-mail fraud, and recently it has also spread through social networks and the short message service (SMS), making it more widespread. Phishing attacks have drawn great attention due to their high volume and the heavy losses they cause, and many methods have been developed to fight them. However, most studies suffer from low detection accuracy or a high false-positive (FP) rate, and phishing attacks continue to confront Internet users. In this paper, we are concerned with feature engineering for improving the classification performance of phishing web page detection. We propose a novel anti-phishing framework that employs feature engineering, including feature selection and feature extraction. First, we perform feature selection based on a genetic algorithm (GA) to divide features into critical and non-critical features. Then, the non-critical features are projected onto a new feature by feature extraction based on a two-stage projection pursuit (PP) algorithm. Finally, we take the critical features and the new feature as input data to construct the detection model. Our anti-phishing framework does not simply eliminate the non-critical features, but uses their projection in the classification process, which distinguishes it from previous work. Experimental results show that the proposed framework is effective in detecting phishing web pages.
    Download PDF (804K)
  • Hyun-Joo KIM, Jong-Hyun KIM, Jung-Tai KIM, Ik-Kyun KIM, Tai-Myung CHUN ...
    Article type: PAPER
    2016 Volume E99.D Issue 4 Pages 1071-1080
    Published: April 01, 2016
    Released on J-STAGE: April 01, 2016
    JOURNAL FREE ACCESS
    Recent cyber-attacks utilize various malware as a means of attack for the attacker's malicious purposes. They aim to steal confidential information or to seize control over major facilities after infiltrating the network of a target organization. Attackers generally create new malware, or many different variants, by using automatic malware creation tools, which enable easy remote control over a target system and hinder trace-back of these attacks. This paper proposes a method for generating malware behavior patterns, as well as detection techniques, in order to detect known and even unknown malware efficiently. The behavior patterns of malware are generated by Multiple Sequence Alignment (MSA) of the API call sequences of malware. We define these behavior patterns as the “feature-chain” of the malware for analytical purposes. The initial generation of the feature-chain consists of extracting API call sequences with an API hooking library, classifying malware samples by similar behavior, and making representative sequences from the MSA results. Detection of numerous malware samples is performed by measuring the similarity between the API call sequence of a target process (a suspicious executable) and the feature-chains of malware. By comparing with other existing methods, we prove the effectiveness of our proposed method, which is based on the Longest Common Subsequence (LCS) algorithm. We also show that our method outperforms other antivirus systems by a factor of 2.55 in detection rate and 1.33 in accuracy for malware detection.
    Download PDF (2414K)
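    The LCS-based similarity measurement can be sketched as follows; the API call sequences are hypothetical examples, and normalizing by the longer sequence length is one plausible choice, not necessarily the paper's exact measure:

```python
def lcs_length(a, b):
    """Length of the longest common subsequence via dynamic programming."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def similarity(target, feature_chain):
    """Normalized LCS similarity between a process's API call sequence
    and a representative malware feature-chain."""
    return lcs_length(target, feature_chain) / max(len(target), len(feature_chain))

# Hypothetical feature-chain and a target process that interleaves extra calls.
feature_chain = ["OpenProcess", "VirtualAllocEx", "WriteProcessMemory", "CreateRemoteThread"]
target = ["OpenProcess", "ReadFile", "VirtualAllocEx", "WriteProcessMemory", "CreateRemoteThread"]
print(similarity(target, feature_chain))  # 0.8
```

Because LCS tolerates insertions, a malware variant that pads its behavior with benign API calls still matches the feature-chain with a high score.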
  • Bumsoon JANG, Seokjoo DOO, Soojin LEE, Hyunsoo YOON
    Article type: PAPER
    2016 Volume E99.D Issue 4 Pages 1081-1091
    Published: April 01, 2016
    Released on J-STAGE: April 01, 2016
    JOURNAL FREE ACCESS
    Because virtual machines are recovered periodically regardless of whether malicious intrusions exist, proactive recovery-based Intrusion Tolerant Systems (ITSs) are being considered for mission-critical applications. However, the virtual replicas can easily be exposed to attacks during their working period, and proactive recovery-based ITSs are ineffective in eliminating the vulnerability of the exposure time, which is closely related to service availability. To address these problems, we propose a novel hybrid recovery-based ITS in this paper. The proposed method utilizes availability-driven recovery and dynamic cluster resizing. The availability-driven recovery method operates the recovery process both proactively and reactively so that the system gains shorter exposure times and higher success rates. The dynamic cluster resizing method reduces the system overhead that arises from dynamic workload fluctuations. Evaluation of the proposed ITS with various synthetic and real workloads using CloudSim showed that it guarantees higher availability and reliability of the system, even under malicious intrusions such as DDoS attacks.
    Download PDF (3019K)
Regular Section
  • Ryota SHIOYA, Ryo TAKAMI, Masahiro GOSHIMA, Hideki ANDO
    Article type: PAPER
    Subject area: Computer System
    2016 Volume E99.D Issue 4 Pages 1092-1107
    Published: April 01, 2016
    Released on J-STAGE: April 01, 2016
    JOURNAL FREE ACCESS
    Out-of-order superscalar processors have high performance but consume a large amount of energy for dynamic instruction scheduling. We propose a front-end execution architecture (FXA) for improving the energy efficiency of out-of-order superscalar processors. FXA has two execution units: an out-of-order execution unit (OXU) and an in-order execution unit (IXU). The OXU is the execution core of a common out-of-order superscalar processor. In contrast, the IXU consists only of functional units and a bypass network. The IXU is placed at the processor front end and executes instructions in order, functioning as a filter for the OXU. Fetched instructions are first fed to the IXU and are executed in order if they are ready to execute. Instructions executed in the IXU are removed from the instruction pipeline and are not executed in the OXU. The IXU does not include dynamic scheduling logic, so its energy consumption is low. Evaluation results show that FXA can execute more than 50% of instructions in the IXU, making it possible to shrink the energy-consuming OXU without incurring performance degradation. As a result, FXA achieves both high performance and low energy consumption. We evaluated FXA and compared it with conventional out-of-order/in-order superscalar processors modeled after the ARM big.LITTLE architecture. The results show that FXA achieves a geometric-mean performance improvement of 7.4% on the SPEC CPU INT 2006 benchmark suite relative to a conventional superscalar processor (big), while reducing the energy consumption of the entire processor by 17%. The performance/energy ratio (the inverse of the energy-delay product) of FXA is 25% higher than that of a conventional out-of-order superscalar processor (big) and 27% higher than that of a conventional in-order superscalar processor (LITTLE).
    Download PDF (1417K)
  • Chun-Hung CHENG, Ying-Wen BAI
    Article type: PAPER
    Subject area: Computer System
    2016 Volume E99.D Issue 4 Pages 1108-1116
    Published: April 01, 2016
    Released on J-STAGE: April 01, 2016
    JOURNAL FREE ACCESS
    Because electricity rates are higher during peak hours, this paper proposes a design for an ultrabook that automatically shifts the charging period to an off-peak period. In addition, the design sets an upper charge limit for the battery, which protects the battery by preventing it from remaining in a state of both high temperature and high voltage. The design uses a low-power embedded controller (EC) together with a fuzzy logic controller (FLC) as the main control techniques, along with real-time clock (RTC) ICs. The sensed values of the EC and the preset parameters are used to control the conversion of the AC/DC module. The user interface allows the user to set not only the peak/off-peak period but also the upper use limit of the battery.
    Download PDF (1582K)
  • Tomomi HATANO, Takashi ISHIO, Joji OKADA, Yuji SAKATA, Katsuro INOUE
    Article type: PAPER
    Subject area: Software Engineering
    2016 Volume E99.D Issue 4 Pages 1117-1126
    Published: April 01, 2016
    Released on J-STAGE: April 01, 2016
    JOURNAL FREE ACCESS
    To maintain a business system, developers must understand the business rules implemented in the system. One type of business rule, the computational business rule, represents how an output value of a feature is computed from the valid inputs. Unfortunately, understanding business rules is a tedious and error-prone activity. We propose a program-dependence analysis technique tailored to understanding computational business rules. Given a variable representing an output, the proposed technique extracts the conditional statements that may affect the computation of the output. To evaluate the usefulness of the technique, we conducted an experiment with eight developers in one company. The results confirm that the proposed technique enables developers to accurately identify the conditional statements corresponding to computational business rules. Furthermore, we compare the numbers of conditional statements extracted by the proposed technique and by program slicing, and conclude that the proposed technique is, in general, more effective than program slicing.
    Download PDF (657K)
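    The extraction step can be sketched as a backward walk over a program-dependence graph that collects the conditional statements it reaches; the statement labels and dependence edges below are hypothetical:

```python
def extract_conditionals(start, depends_on, conditionals):
    """Walk dependence edges backwards from the statement computing the
    output and collect every conditional statement reached."""
    seen, stack, found = set(), [start], set()
    while stack:
        s = stack.pop()
        if s in seen:
            continue
        seen.add(s)
        if s in conditionals:
            found.add(s)
        stack.extend(depends_on.get(s, []))
    return found

# Hypothetical dependence graph for:
#   s1: rate = base_rate()
#   s2: if customer.is_member:     (conditional)
#   s3:     rate = rate * 0.9      (control-dependent on s2)
#   s4: price = amount * rate      (computes the output)
depends_on = {"s4": ["s3", "s1"], "s3": ["s2", "s1"]}
print(sorted(extract_conditionals("s4", depends_on, {"s2"})))  # ['s2']
```

A full program slice would also report s1 and s3; reporting only the conditionals is what keeps the result close to the business rules themselves.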
  • Masayoshi SHIMAMURA, Hiroaki YAMANAKA, Akira NAGATA, Katsuyoshi IIDA, ...
    Article type: PAPER
    Subject area: Information Network
    2016 Volume E99.D Issue 4 Pages 1127-1138
    Published: April 01, 2016
    Released on J-STAGE: April 01, 2016
    JOURNAL FREE ACCESS
    Network virtualization environments (NVEs) are emerging to meet the increasingly diverse demands of Internet users; in an NVE, a virtual network (VN) can be constructed to accommodate each specific application service. In the future Internet, diverse service providers (SPs) will provide application services on their own VNs running across diverse infrastructure providers (InPs), which provide the physical resources of an NVE. To realize both efficient resource utilization and good QoS for each individual service in such environments, SPs should perform adaptive control of network and computational resources under dynamic and competitive resource sharing, instead of explicitly reserving sufficient physical resources for their VNs. Meanwhile, two novel concepts, software-defined networking (SDN) and network function virtualization (NFV), have emerged to facilitate efficient use of network and computational resources, flexible provisioning, network programmability, unified management, and so on, which enable us to implement adaptive resource control. In this paper, we therefore propose an architectural design of network orchestration that enables SPs to aggressively maintain the QoS of their applications through efficient resource control of their VNs, by introducing a virtual network provider (VNP) between InPs and SPs as a three-tier model and by integrating SDN and NFV functionalities into the NVE framework. We define new north-bound interfaces (NBIs) for resource requests, resource upgrades, resource programming, and alert notifications, while using the standard OpenFlow interfaces for resource control of users' traffic flows. The feasibility of the proposed architecture is demonstrated through network experiments using a prototype implementation and a sample application service on nation-wide testbed networks, JGN-X and RISE.
    Download PDF (1971K)
  • Iku OHAMA, Hiromi IIDA, Takuya KIDA, Hiroki ARIMURA
    Article type: PAPER
    Subject area: Artificial Intelligence, Data Mining
    2016 Volume E99.D Issue 4 Pages 1139-1152
    Published: April 01, 2016
    Released on J-STAGE: April 01, 2016
    JOURNAL FREE ACCESS
    Latent variable models for relational data enable us to extract the co-cluster structures underlying observed relational data. The Infinite Relational Model (IRM) is a well-known relational model for discovering co-cluster structures with an unknown number of clusters. The IRM assumes that the link probability between two objects (e.g., a customer and an item) depends only on their cluster assignments. However, relational models based on this assumption often extract many non-informative and unexpected clusters, because the underlying co-cluster structures in real-world relationships are often destroyed by structured noise that blurs the cluster structure stochastically, depending on the pair of related objects. To overcome this problem, we propose an extended IRM that simultaneously estimates a denoised, clear co-cluster structure and a structured noise component. In other words, our proposed model jointly estimates the cluster assignment and noise level of each object. We also present the posterior probabilities for running collapsed Gibbs sampling to infer the model. Experiments on real-world datasets show that our model extracts a clear co-cluster structure. Moreover, we confirm that the estimated noise levels enable us to extract representative objects for each cluster.
    Download PDF (3032K)
  • Yusuke IWASAWA, Ikuko EGUCHI YAIRI, Yutaka MATSUO
    Article type: PAPER
    Subject area: Rehabilitation Engineering and Assistive Technology
    2016 Volume E99.D Issue 4 Pages 1153-1161
    Published: April 01, 2016
    Released on J-STAGE: April 01, 2016
    JOURNAL FREE ACCESS
    The recent increase in the use of intelligent devices such as smartphones has strengthened the relationship between daily human behavior sensing and useful applications in ubiquitous computing. This paper proposes a novel method, inspired by personal sensing technologies, for collecting and visualizing road accessibility at lower cost than traditional data collection methods. To evaluate the methodology, we recorded the outdoor activities of nine wheelchair users for approximately one hour each by using an accelerometer on an iPod touch and a camcorder, manually labeled the supervised data from the video, and estimated wheelchair actions as a measure of street-level accessibility in Tokyo. The system detected curb climbing, moving on tactile indicators, moving on slopes, and stopping with F-scores of 0.63, 0.65, 0.50, and 0.91, respectively. In addition, we conducted experiments with an artificially limited amount of training data to investigate the number of samples required to estimate the targets.
    Download PDF (1599K)
  • Chao XU, Dongxiang ZHOU, Tao GUAN, Yongping ZHAI, Yunhui LIU
    Article type: PAPER
    Subject area: Pattern Recognition
    2016 Volume E99.D Issue 4 Pages 1162-1171
    Published: April 01, 2016
    Released on J-STAGE: April 01, 2016
    JOURNAL FREE ACCESS
    This paper realizes automatic recognition of Mycobacterium tuberculosis in Ziehl-Neelsen stained images captured by conventional light microscopy, which can be used in the computer-aided diagnosis of tuberculosis. We propose a novel recognition method based on an active shape model. First, candidate bacillus objects are segmented by marker-based watershed transform. Next, a point distribution model of the object shape is proposed to label the landmarks on the object automatically. The active shape model is then applied after aligning the training set with a weight matrix. The deformation regularity of the object shape is discovered and successfully applied to recognition without using geometric or other commonly used features. During this process, a width-consistency constraint is combined with the shape parameter to improve the recognition accuracy. Experimental results demonstrate that the proposed method yields high accuracy on images with different background colors: the recognition accuracies at the object level and the image level are 92.37% and 97.91%, respectively.
    Download PDF (2136K)
  • Jianjuan LIANG, Bilan ZHU, Taro KUMAGAI, Masaki NAKAGAWA
    Article type: PAPER
    Subject area: Pattern Recognition
    2016 Volume E99.D Issue 4 Pages 1172-1181
    Published: April 01, 2016
    Released on J-STAGE: April 01, 2016
    JOURNAL FREE ACCESS
    This paper presents a recognition method for character-position-free on-line handwritten Japanese text patterns that allows a user to overlay characters freely without confirming previously written characters. To develop this method, we first collected text patterns written without wrist or elbow support and without visual feedback, and then artificially prepared large sets of character-position-free handwritten Japanese text patterns from normally handwritten text patterns. The proposed method treats each off-stroke between real strokes as undecided and evaluates the segmentation probability with an SVM model. The optimal segmentation-recognition path can then be found effectively by Viterbi search in the candidate lattice, combining the scores of character recognition, geometric features, and linguistic context with the segmentation scores of the SVM classifier. We tested this method on variously overlaid sample patterns, as well as on the collected handwritten patterns mentioned above, and verified that its recognition rates match those of the latest recognizer for normally handwritten horizontal Japanese text, with no serious speed restriction in practical applications.
    Download PDF (526K)
  • Seng KHEANG, Kouichi KATSURADA, Yurie IRIBE, Tsuneo NITTA
    Article type: PAPER
    Subject area: Speech and Hearing
    2016 Volume E99.D Issue 4 Pages 1182-1192
    Published: April 01, 2016
    Released on J-STAGE: April 01, 2016
    JOURNAL FREE ACCESS
    The automatic transcription of out-of-vocabulary words into their corresponding phoneme strings has been widely adopted for speech synthesis and spoken-term detection systems. To meet the challenges of grapheme-to-phoneme (G2P) conversion by combining various methods, this paper proposes a phoneme transition network (PTN)-based architecture for G2P conversion. The proposed method first builds a confusion network from multiple phoneme-sequence hypotheses generated by several G2P methods. It then determines the best final-output phoneme from each block of phonemes in the generated network. Moreover, to extend the feasibility and improve the performance of the proposed PTN-based model, we introduce a novel use of right-to-left (reversed) grapheme-phoneme sequences along with grapheme-generation rules. Both techniques help not only to minimize the number of methods or source models required in the proposed architecture but also to increase the number of phoneme-sequence hypotheses without increasing the number of methods. The techniques therefore minimize the risk that combining accurate and inaccurate methods degrades the performance of phoneme prediction. Evaluation results on various pronunciation dictionaries show that the proposed model, when trained on reversed grapheme-phoneme sequences, often outperformed its counterpart trained on conventional left-to-right sequences. In addition, the evaluation demonstrates that the proposed PTN-based method for G2P conversion is more accurate than all of the baseline approaches tested.
    Download PDF (693K)
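    Determining the best final-output phoneme from each block of the confusion network can be sketched as a per-block vote. The alignment is assumed to be already done, the hypotheses are hypothetical, and a simple majority vote stands in for the paper's actual selection criterion:

```python
from collections import Counter

def ptn_decode(hypotheses):
    """Given aligned phoneme-sequence hypotheses from several G2P methods
    (one phoneme per block, '-' marking a deletion), pick the most
    frequent phoneme in each block of the confusion network."""
    result = []
    for block in zip(*hypotheses):
        phoneme, _ = Counter(block).most_common(1)[0]
        if phoneme != "-":  # a winning deletion emits nothing
            result.append(phoneme)
    return result

# Three hypothetical hypotheses for the word "cat", already aligned per block.
hyps = [["k", "ae", "t"], ["k", "aa", "t"], ["c", "ae", "t"]]
print(ptn_decode(hyps))  # ['k', 'ae', 't']
```

Adding reversed-sequence models simply contributes extra rows to `hypotheses`, which is how the paper increases the hypothesis count without adding new methods.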
  • Chihiro TSUTAKE, Yutaka NAKANO, Toshiyuki YOSHIDA
    Article type: PAPER
    Subject area: Image Processing and Video Processing
    2016 Volume E99.D Issue 4 Pages 1193-1201
    Published: April 01, 2016
    Released on J-STAGE: April 01, 2016
    JOURNAL FREE ACCESS
    This paper proposes a fast mode decision technique for the intra prediction of High Efficiency Video Coding (HEVC) based on a reliability metric for motion vectors (RMMV). Since such a decision problem can be regarded as a kind of pattern classification, an efficient classifier is required to reduce the computational complexity. This paper employs the RMMV as the classifier because the RMMV can efficiently categorize image blocks into flat (uniform), active, and edge blocks, and can also estimate the direction of an edge block. A local search over the angular modes is introduced to further speed up the decision process. An experiment shows the advantage of our technique over existing techniques.
    Download PDF (817K)
  • Fang TIAN, Jie GUO, Bin SONG, Haixiao LIU, Hao QIN
    Article type: PAPER
    Subject area: Image Processing and Video Processing
    2016 Volume E99.D Issue 4 Pages 1202-1211
    Published: April 01, 2016
    Released on J-STAGE: April 01, 2016
    JOURNAL FREE ACCESS
    Distributed compressed video sensing (DCVS), which combines the advantages of compressed sensing and distributed video coding, has been developed as a novel and powerful system for obtaining an encoder with low complexity. Nevertheless, it remains unclear how to achieve effective video recovery by exploiting realistic signal characteristics as fully as possible. To this end, we present a novel spatiotemporal dictionary learning (DL) based reconstruction method for DCVS, in which both the DL model and l1-analysis based recovery with correlation constraints are included in the minimization problem, achieving joint optimization of sparse representation and signal reconstruction. In addition, a numerical algorithm based on the alternating direction method of multipliers (ADMM) is outlined for solving the underlying optimization problem. Simulation results demonstrate that the proposed method outperforms other methods, with 0.03-4.14 dB increases in PSNR and a 0.13-15.31 dB gain for non-key frames.
    Download PDF (1061K)
  • Yu WANG, Jien KATO
    Article type: PAPER
    Subject area: Image Recognition, Computer Vision
    2016 Volume E99.D Issue 4 Pages 1212-1220
    Published: April 01, 2016
    Released on J-STAGE: April 01, 2016
    JOURNAL FREE ACCESS
    Local spatio-temporal features are popular in the human action recognition task. In practice, they are usually coupled with a feature encoding approach, which helps to obtain video-level vector representations that can be used in learning and recognition. In this paper, we present an efficient local feature encoding approach called Approximate Sparse Coding (ASC). In an off-line learning phase, ASC computes sparse codes for a large collection of prototype local feature descriptors using Sparse Coding (SC); in the encoding phase, it looks up the precomputed sparse code of the nearest prototype for each to-be-encoded local feature using Approximate Nearest Neighbour (ANN) search. ASC thus shares the low dimensionality of SC and the high speed of ANN, both desirable properties for a local feature encoding approach. ASC has been extensively evaluated on the KTH dataset and the HMDB51 dataset. We confirmed that it is able to encode large quantities of local video features into discriminative low-dimensional representations efficiently.
    Download PDF (632K)
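    The two-phase idea of ASC can be sketched as follows. For brevity the sketch uses exact nearest-neighbour search in place of ANN, and the prototypes and their "precomputed sparse codes" are hypothetical placeholders for the output of the off-line SC step:

```python
def build_codebook(prototypes, sparse_codes):
    """Pair each prototype descriptor with its precomputed sparse code
    (the codes would come from an off-line sparse coding step)."""
    return list(zip(prototypes, sparse_codes))

def encode(descriptor, codebook):
    """Look up the sparse code of the nearest prototype. A real system
    would use approximate nearest-neighbour search; exact search is
    used here for brevity."""
    def d2(p):
        return sum((a - b) ** 2 for a, b in zip(descriptor, p))
    _, code = min(codebook, key=lambda pc: d2(pc[0]))
    return code

prototypes = [(0.0, 0.0), (1.0, 1.0)]
sparse_codes = [(1, 0, 0), (0, 0, 1)]  # hypothetical precomputed codes
codebook = build_codebook(prototypes, sparse_codes)
print(encode((0.9, 1.2), codebook))  # (0, 0, 1)
```

The expensive SC optimization runs once per prototype off-line; encoding a new feature costs only a nearest-neighbour lookup.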
  • Yuta NAKASHIMA, Noboru BABAGUCHI, Jianping FAN
    Article type: PAPER
    Subject area: Image Recognition, Computer Vision
    2016 Volume E99.D Issue 4 Pages 1221-1233
    Published: April 01, 2016
    Released on J-STAGE: April 01, 2016
    JOURNAL FREE ACCESS
    The recent popularization of social network services (SNSs), such as YouTube, Dailymotion, and Facebook, enables people to easily publish their personal videos taken with mobile cameras. However, at the same time, such popularity has raised a new problem: video privacy. In such social videos, the privacy of people, i.e., their appearances, must be protected, but naively obscuring all people might spoil the video content. To address this problem, we focus on videographers' capture intentions. In a social video, some persons are usually essential for the video content. They are intentionally captured by the videographers, called intentionally captured persons (ICPs), and the others are accidentally framed-in (non-ICPs). Videos containing the appearances of the non-ICPs might violate their privacy. In this paper, we developed a system called BEPS, which adopts a novel conditional random field (CRF)-based method for ICP detection, as well as a novel approach to obscure non-ICPs and preserve ICPs using background estimation. BEPS reduces the burden of manually obscuring the appearances of the non-ICPs before uploading the video to SNSs. Compared with conventional systems, the following are the main advantages of BEPS: (i) it maintains the video content, and (ii) it is immune to the failure of person detection; false positives in person detection do not violate privacy. Our experimental results successfully validated these two advantages.
    Download PDF (4166K)
  • Nattapong THAMMASAN, Koichi MORIYAMA, Ken-ichi FUKUI, Masayuki NUMAO
    Article type: PAPER
    Subject area: Music Information Processing
    2016 Volume E99.D Issue 4 Pages 1234-1241
    Published: April 01, 2016
    Released on J-STAGE: April 01, 2016
    JOURNAL FREE ACCESS
    Research on emotion recognition using the electroencephalograms (EEGs) of subjects listening to music has become more active in the past decade. However, previous works did not consider emotional oscillations within a single musical piece. In this research, we propose a continuous music-emotion recognition approach based on brainwave signals. Considering the subject-dependent and time-varying characteristics of emotion, our experiment included self-reporting and continuous emotion annotation in the arousal-valence space. Fractal dimension (FD) and power spectral density (PSD) approaches were adopted to extract informative features from the raw EEG signals, and emotion classification algorithms were then applied to discriminate binary classes of emotion. According to our experimental results, FD slightly outperformed PSD in both arousal and valence classification, and FD was found to have a higher correlation with the emotion reports than PSD. In addition, continuous emotion recognition during music listening based on EEG was found to be an effective method for tracking oscillations in emotional reports and provides an opportunity to better understand human emotional processes.
    Download PDF (1082K)
  • Seon Hwan KIM, Ju Hee CHOI, Jong Wook KWAK
    Article type: LETTER
    Subject area: Computer System
    2016 Volume E99.D Issue 4 Pages 1242-1245
    Published: April 01, 2016
    Released on J-STAGE: April 01, 2016
    JOURNAL FREE ACCESS
    In this letter, we propose a novel wear leveling technique, called Hidden cold block-aware Wear Leveling (HaWL), that uses a bit-set threshold. HaWL prolongs the lifetime of flash memory devices by using a bit array table for wear leveling. The bit array table records the history of block erasures over a period and distinguishes cold blocks from the other blocks. In addition, HaWL can reduce the size of the bit array table by using a one-to-many mode, in which one bit covers many blocks. To prevent the degradation of wear leveling in the one-to-many mode, HaWL uses a bit-set threshold (BST) to increase the accuracy of the cold-block information. The performance results show that, in our experiments, HaWL prolongs the lifetime of flash memory by up to 48% compared with previous wear leveling techniques.
    Download PDF (509K)
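    The one-to-many bit array table can be sketched as follows (the block count and group size are hypothetical, and the BST refinement is omitted):

```python
class BitArrayTable:
    """Sketch of a one-to-many erase-history table: one bit covers a
    group of blocks, and a group is treated as cold only while its bit
    stays unset during the current period."""

    def __init__(self, num_blocks, blocks_per_bit):
        self.blocks_per_bit = blocks_per_bit
        self.bits = [0] * ((num_blocks + blocks_per_bit - 1) // blocks_per_bit)

    def record_erase(self, block):
        self.bits[block // self.blocks_per_bit] = 1

    def is_cold(self, block):
        return self.bits[block // self.blocks_per_bit] == 0

    def reset_period(self):
        """Start a new observation period."""
        self.bits = [0] * len(self.bits)

table = BitArrayTable(num_blocks=1024, blocks_per_bit=8)
table.record_erase(5)  # erasing block 5 marks its whole 8-block group hot
print(table.is_cold(7), table.is_cold(8))  # False True
```

With 8 blocks per bit, the table needs only 128 bits for 1024 blocks; the cost is exactly the coarseness that the BST is introduced to compensate for.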
  • Riham ALTAWY, Ahmed ABDELKHALEK, Amr M. YOUSSEF
    Article type: LETTER
    Subject area: Information Network
    2016 Volume E99.D Issue 4 Pages 1246-1250
    Published: April 01, 2016
    Released on J-STAGE: April 01, 2016
    JOURNAL FREE ACCESS
    In this letter, we present a meet-in-the-middle attack on the 7-round reduced block cipher Kalyna-b/2b, which was approved as the new encryption standard of Ukraine (DSTU 7624:2014) in 2015. According to its designers, the cipher resists several cryptanalytic methods after the fifth and sixth rounds of the versions with block lengths of 128 and 256 bits, respectively. Our attack is based on the differential enumeration approach, where we carefully deploy a four-round distinguisher in the first four rounds to bypass the effect of the carry bits resulting from the prewhitening modular key addition. We also exploit the linear relation between consecutive odd- and even-indexed round keys, which enables us to attack seven rounds and recover all the round keys incrementally. The attack on Kalyna with a 128-bit block has a data complexity of 2^89 chosen plaintexts, a time complexity of 2^230.2, and a memory complexity of 2^202.64. The data, time, and memory complexities of our attack on Kalyna with a 256-bit block are 2^233, 2^502.2, and 2^170, respectively.
    Download PDF (299K)