IEICE Transactions on Information and Systems
Online ISSN : 1745-1361
Print ISSN : 0916-8532
Volume E100.D, Issue 8
Special Section on Multiple-Valued Logic and VLSI Computing
  • Takahiro HANYU
    2017 Volume E100.D Issue 8 Pages 1555
    Published: August 01, 2017
    Released on J-STAGE: August 01, 2017
    JOURNAL FREE ACCESS
    Download PDF (73K)
  • Shin-ichi MINATO
    Article type: INVITED PAPER
    2017 Volume E100.D Issue 8 Pages 1556-1562
    Published: August 01, 2017
    Released on J-STAGE: August 01, 2017
    JOURNAL FREE ACCESS

    Discrete structure manipulation is a fundamental technique for many problems solved by computers. BDDs/ZDDs have attracted a great deal of attention for over twenty years because these data structures can efficiently manipulate basic discrete structures such as logic functions and sets of combinations. Recently, one of the most interesting research topics related to BDDs/ZDDs has been the frontier-based search method, a very efficient algorithm for enumerating and indexing the subsets of a graph that satisfy a given constraint. This work is important because many kinds of practical problems can be solved efficiently by variations of this algorithm. In this article, we present recent research activity related to BDDs and ZDDs. We first briefly explain the basic techniques for BDD/ZDD manipulation, and then we present several examples of state-of-the-art algorithms to show the power of enumeration.
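
    As a concrete illustration of the node sharing that makes BDDs compact, here is a minimal hand-rolled reduced ordered BDD sketch; the variable ordering and the parity example are our own illustrative choices, not taken from the paper:

```python
# Minimal hand-rolled reduced ordered BDD (ROBDD); terminals are 0 and 1.
unique = {}                      # unique table: enables node sharing

def mk(var, lo, hi):
    """Create (or reuse) a node, applying the BDD reduction rules."""
    if lo == hi:                 # redundant-test elimination
        return lo
    key = (var, lo, hi)
    return unique.setdefault(key, key)

def build(f, var, nvars, env):
    """Shannon expansion of a Python predicate f over variables 0..nvars-1."""
    if var == nvars:
        return 1 if f(env) else 0
    lo = build(f, var + 1, nvars, env + [0])
    hi = build(f, var + 1, nvars, env + [1])
    return mk(var, lo, hi)

def count_sat(node, var, nvars):
    """Count satisfying assignments, accounting for levels removed by reduction."""
    if node in (0, 1):
        return node * 2 ** (nvars - var)
    v, lo, hi = node
    return 2 ** (v - var) * (count_sat(lo, v + 1, nvars)
                             + count_sat(hi, v + 1, nvars))

# 3-variable parity: an 8-row truth table, but only 5 shared internal nodes.
parity = build(lambda e: e[0] ^ e[1] ^ e[2], 0, 3, [])
```

    The 8-row truth table collapses to five shared internal nodes here, which is the compression that BDD/ZDD manipulation exploits.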

    Download PDF (2384K)
  • Miki HASEYAMA, Takahiro OGAWA, Sho TAKAHASHI, Shuhei NOMURA, Masatsugu ...
    Article type: INVITED PAPER
    2017 Volume E100.D Issue 8 Pages 1563-1573
    Published: August 01, 2017
    Released on J-STAGE: August 01, 2017
    JOURNAL FREE ACCESS

    Biomimetics is a new research field that creates innovation through the collaboration of different existing research fields. However, such collaboration, i.e., the exchange of deep knowledge between different research fields, is difficult for several reasons, such as differences in the technical terms used in different fields. In order to overcome this problem, we have developed a new retrieval platform, the “Biomimetics image retrieval platform,” using a visualization-based image retrieval technique. A biological database contains a large volume of image data, and by taking advantage of these images, we are able to overcome the limitations of text-only information retrieval. By realizing a retrieval platform that does not depend on technical terms, individual biological databases of various species can be integrated. This will allow not only the use of the data for the study of various species by researchers in different biological fields but also access by a wide range of researchers in fields ranging from materials science to mechanical engineering and manufacturing. Therefore, our platform provides a new path bridging different fields and will contribute to the development of biomimetics, since it overcomes the limitations of traditional retrieval platforms.

    Download PDF (9557K)
  • Tsutomu SASAO
    Article type: PAPER
    Subject area: Logic Design
    2017 Volume E100.D Issue 8 Pages 1574-1582
    Published: August 01, 2017
    Released on J-STAGE: August 01, 2017
    JOURNAL FREE ACCESS

    This paper presents a method to realize index generation functions using multiple Index Generation Units (IGUs). The architecture implements index generation functions more efficiently than a single IGU when the number of registered vectors is very large. This paper proves that independent linear transformations in the IGUs are necessary for an efficient realization; experimental results confirm this statement. Finally, it presents a fast method for updating the IGUs.
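
    The role of the linear transformation in an IGU can be sketched as follows; the registered vectors and XOR masks below are toy values chosen for illustration, not the paper's construction:

```python
def linear_transform(vec, masks):
    """Each output bit is the parity (XOR) of the input bits selected by a mask."""
    return tuple(bin(vec & m).count("1") & 1 for m in masks)

def build_igu(registered, masks):
    """Hash each registered vector to a short address; a collision means the
    linear transform cannot distinguish two vectors and must be changed."""
    table = {}
    for index, vec in enumerate(registered, start=1):
        addr = linear_transform(vec, masks)
        if addr in table:
            raise ValueError("collision: choose different masks")
        table[addr] = (vec, index)
    return table

def lookup(table, masks, vec):
    """Return the index of a registered vector, or 0 for non-registered inputs."""
    entry = table.get(linear_transform(vec, masks))
    return entry[1] if entry and entry[0] == vec else 0

registered = [0b1010, 0b0111, 0b1100]   # toy 4-bit registered vectors
masks = [0b0011, 0b0101]                # two XOR (parity) output bits
igu = build_igu(registered, masks)
```

    Three 4-bit vectors are distinguished by only two XOR outputs here; choosing transforms that avoid collisions is exactly what makes the realization efficient.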

    Download PDF (489K)
  • Shinobu NAGAYAMA, Tsutomu SASAO, Jon T. BUTLER
    Article type: PAPER
    Subject area: Logic Design
    2017 Volume E100.D Issue 8 Pages 1583-1591
    Published: August 01, 2017
    Released on J-STAGE: August 01, 2017
    JOURNAL FREE ACCESS

    Index generation functions model content-addressable memory and are useful in virus detectors and routers. Linear decompositions yield simpler circuits that realize index generation functions. This paper proposes a heuristic based on balanced decision trees to efficiently design linear decompositions for index generation functions. The proposed heuristic finds a good linear decomposition of an index generation function by using appropriate cost functions and a constraint to construct a balanced tree. Since the proposed heuristic is fast and requires a small amount of memory, it is applicable even to large index generation functions that cannot be solved in a reasonable time by existing heuristics. This paper analyzes the time and space complexities of the proposed heuristic and presents experimental results on large examples to demonstrate its efficiency.

    Download PDF (711K)
  • Shunsuke KOSHITA, Naoya ONIZAWA, Masahide ABE, Takahiro HANYU, Masayuk ...
    Article type: PAPER
    Subject area: VLSI Architecture
    2017 Volume E100.D Issue 8 Pages 1592-1602
    Published: August 01, 2017
    Released on J-STAGE: August 01, 2017
    JOURNAL FREE ACCESS

    This paper presents FIR digital filters based on stochastic/binary hybrid computation with reduced hardware complexity and high computational accuracy. Recently, some attempts have been made to apply stochastic computation to the realization of digital filters. Such realization methods lead to a significant reduction of hardware complexity over conventional filter realizations based on binary computation. However, stochastic digital filters suffer from lower computational accuracy than digital filters based on binary computation because of the random error fluctuations generated in stochastic bit streams, stochastic multipliers, and stochastic adders. This is a more serious problem for FIR filter realizations than for their IIR counterparts because FIR filters usually require a larger number of multiplications and additions than IIR filters. To improve the computational accuracy, this paper presents a stochastic/binary hybrid realization, where multipliers are realized using stochastic computation but adders are realized using binary computation. In addition, a coefficient-scaling technique is proposed to further improve the computational accuracy of stochastic FIR filters. Furthermore, the transposed structure is applied to the FIR filter realization, leading to a reduction of hardware complexity. Evaluation results demonstrate that our method achieves up to 40 dB improvement in minimum stopband attenuation compared with the conventional pure stochastic design.
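
    The stochastic multiplier at the heart of such hybrid designs is simply an AND gate applied to random bit streams; the following software sketch (stream length and operand values are illustrative) shows why accuracy depends on stream length:

```python
import random

def to_stream(p, n, rng):
    """Encode p in [0, 1] as an n-bit stream with P(bit = 1) = p."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def stochastic_multiply(a, b, n=10000, seed=0):
    rng = random.Random(seed)
    sa = to_stream(a, n, rng)        # two independently generated streams
    sb = to_stream(b, n, rng)
    product = [x & y for x, y in zip(sa, sb)]   # AND gate: P(1) = a * b
    return sum(product) / n          # decode: count of 1s over stream length

est = stochastic_multiply(0.5, 0.6)  # converges to 0.30 as n grows
```

    The decode step, summing the product bits, is an accumulation that the hybrid scheme performs with exact binary adders, which is where it recovers accuracy.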

    Download PDF (1376K)
  • Rei UENO, Naofumi HOMMA, Takafumi AOKI
    Article type: PAPER
    Subject area: VLSI Architecture
    2017 Volume E100.D Issue 8 Pages 1603-1610
    Published: August 01, 2017
    Released on J-STAGE: August 01, 2017
    JOURNAL FREE ACCESS

    This paper presents a system for the automatic generation of Galois-field (GF) arithmetic circuits, named the GF Arithmetic Module Generator (GF-AMG). The proposed system employs a graph-based circuit description called the GF Arithmetic Circuit Graph (GF-ACG). First, we present an extension of the GF-ACG to handle GF(p^m) (p ≥ 3) arithmetic circuits, which can be efficiently implemented by multiple-valued logic circuits in addition to the conventional binary circuits. We then show the validity of the generation system through the experimental design of GF(p^m) multipliers for different p-values. In addition, we evaluate the performance of three types of GF(2^m) multipliers and typical GF(p^m) multipliers (p ≥ 3) generated by our system. We confirm from the results that the proposed system can generate a variety of GF parallel multipliers, including practical multipliers over GF(p^m) with extension degrees greater than 128.
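
    The arithmetic that such generated circuits implement can be modeled in software as polynomial multiplication over GF(p) reduced by an irreducible polynomial; the field GF(3^2) and modulus x^2 + 1 below are illustrative choices, not the paper's test cases:

```python
def gf_mul(a, b, p, mod_poly):
    """Multiply in GF(p^m). a, b, mod_poly are coefficient lists over GF(p),
    lowest degree first; mod_poly must be irreducible of degree m."""
    m = len(mod_poly) - 1
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):                  # schoolbook polynomial product
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % p
    for k in range(len(prod) - 1, m - 1, -1):   # reduce modulo mod_poly
        coef = prod[k]
        if coef:
            for j in range(m + 1):
                prod[k - m + j] = (prod[k - m + j] - coef * mod_poly[j]) % p
    return prod[:m]

# GF(3^2) with modulus x^2 + 1 (irreducible over GF(3)):
# (x + 1)(x + 2) = x^2 + 3x + 2 = x^2 + 2 ≡ (2 + 2) = 1
res = gf_mul([1, 1], [2, 1], 3, [1, 0, 1])
```

    Setting p = 2 recovers the familiar GF(2^m) case; for p ≥ 3 the coefficient arithmetic is what multiple-valued logic circuits implement natively.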

    Download PDF (1003K)
  • Yosuke IIJIMA, Yasushi YUMINAKA
    Article type: PAPER
    Subject area: VLSI Architecture
    2017 Volume E100.D Issue 8 Pages 1611-1617
    Published: August 01, 2017
    Released on J-STAGE: August 01, 2017
    JOURNAL FREE ACCESS

    The growing demand for high-speed data communication continues to drive the need for ever-increasing I/O bandwidth in recent VLSI systems. However, signal integrity issues such as intersymbol interference (ISI) and reflections make the channel band-limited at high data rates. We propose high-speed data transmission techniques for VLSI systems using Tomlinson-Harashima precoding (THP). Because THP can eliminate ISI by inverting the characteristics of the channel with limited peak and average power at the transmitter, it is suitable for implementing advanced low-voltage, high-speed VLSI systems. This paper presents a novel double-rate THP equalization technique intended especially for multi-valued data transmission, to further improve THP performance. Simulation and measurement results show that the proposed THP equalization with a double sampling rate can improve the data transition time and, therefore, the eye opening.
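
    The basic THP idea can be sketched end to end: the transmitter subtracts the ISI the channel will reintroduce and folds the result with a modulo to bound transmit power, and the receiver applies the same modulo. The channel taps and PAM-4 symbols below are invented for illustration (noiseless case), not the paper's measured channel:

```python
import math

def mod8(v):
    """Fold v into [-4, 4): the modulo that bounds transmit power in THP."""
    return v - 8 * math.floor((v + 4) / 8)

def thp_transmit(data, h):
    """Precode: subtract the ISI the channel will add, then fold with mod8."""
    x = []
    for k, d in enumerate(data):
        isi = sum(h[i] * x[k - i] for i in range(1, min(len(h), k + 1)))
        x.append(mod8(d - isi))
    return x

def channel(x, h):
    """Noiseless FIR channel: y[k] = sum_i h[i] * x[k-i], with h[0] = 1."""
    return [sum(h[i] * x[k - i] for i in range(min(len(h), k + 1)))
            for k in range(len(x))]

h = [1.0, 0.5, -0.2]                  # illustrative channel impulse response
data = [3, -1, 1, -3, 1, 3, -1, -1]   # PAM-4 symbols from {-3, -1, 1, 3}
received = [mod8(y) for y in channel(thp_transmit(data, h), h)]
```

    Because the precoder's modulo offset is a multiple of 8, the receiver's mod8 removes it exactly and recovers the PAM-4 symbols without a feedback equalizer at the receiver.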

    Download PDF (2461K)
  • Daisuke SUZUKI, Takahiro HANYU
    Article type: PAPER
    Subject area: VLSI Architecture
    2017 Volume E100.D Issue 8 Pages 1618-1624
    Published: August 01, 2017
    Released on J-STAGE: August 01, 2017
    JOURNAL FREE ACCESS

    An energy-efficient nonvolatile FPGA that assures highly reliable backup operation using a self-terminated power-gating scheme is proposed. Since the write current is automatically cut off just after the temporary data in the flip-flop is successfully backed up to the nonvolatile device, the write energy can be minimized with no write failures. Moreover, when the backup operation in a particular cluster is completed, the power supply of the cluster is immediately turned off, which minimizes the standby energy due to leakage current. In fact, the total energy consumption during the backup operation is reduced by 66% in comparison with that of a conventional worst-case-based approach, where a long write-current pulse is used to guarantee a reliable write.

    Download PDF (956K)
  • Naotake KAMIURA, Shoji KOBASHI, Manabu NII, Takayuki YUMOTO, Ichiro YA ...
    Article type: PAPER
    Subject area: Soft Computing
    2017 Volume E100.D Issue 8 Pages 1625-1633
    Published: August 01, 2017
    Released on J-STAGE: August 01, 2017
    JOURNAL FREE ACCESS

    In this paper, we present a method for analyzing relationships between items in specific health examination data, as a basic study toward addressing the increase in lifestyle-related diseases. We use self-organizing maps and pick up data from the examination dataset according to a condition specified by some item values. We then focus on twelve items, such as hemoglobin A1c (HbA1c), aspartate transaminase (AST), alanine transaminase (ALT), gamma-glutamyl transpeptidase (γ-GTP), and triglyceride (TG). We generate the training data presented to a map by calculating the difference between item values in two successive years and normalizing the results. We label neurons in the map on the condition that one of the item values of the training data is employed as a parameter. We finally examine the relationships between items by comparing the results of labeling (clusters formed in the map) with each other. From the experimental results, we separately reveal the relationships among HbA1c, AST, ALT, γ-GTP, and TG in the unfavorable case of an increasing HbA1c value and in the favorable case of a decreasing HbA1c value.
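
    One plausible reading of the training-vector construction (year-over-year differences, then normalization) is sketched below; the item values and the max-absolute normalization are our assumptions for illustration, not the paper's exact procedure:

```python
def make_training_vector(year1, year2, items):
    """year1, year2: dicts of item -> value; returns normalized yearly diffs."""
    diffs = [year2[it] - year1[it] for it in items]
    span = max(abs(d) for d in diffs) or 1.0     # guard against all-zero diffs
    return [d / span for d in diffs]

# Illustrative checkup values for two successive years (three of the items):
items = ["HbA1c", "AST", "ALT"]
v = make_training_vector({"HbA1c": 5.6, "AST": 30, "ALT": 25},
                         {"HbA1c": 6.0, "AST": 33, "ALT": 24}, items)
```

    Each component's sign then encodes whether the item rose or fell between the two years, which is what the map clusters.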

    Download PDF (2937K)
  • Mizuki HIGUCHI, Kenichi SORACHI, Yutaka HATA
    Article type: PAPER
    Subject area: Soft Computing
    2017 Volume E100.D Issue 8 Pages 1634-1641
    Published: August 01, 2017
    Released on J-STAGE: August 01, 2017
    JOURNAL FREE ACCESS

    This paper analyzes the relationship between changes in Body Mass Index (BMI) and changes in other health checkup data over one year. We divide the data of all subjects into 13 groups by their BMI changes. We calculate the variations in each group and classify them by gender, age, and BMI. In the classification by gender, men were more influenced by BMI changes than women in Hb-A1c, AC, GPT, GTP, and TG. In the classification by age, subjects were influenced by BMI changes in Hb-A1c, GPT, and GTP. In the classification by BMI, inspection values such as GOT, GPT, and GTP decreased with the decrement of BMI. Next, we show the results for gender-age, gender-BMI, and age-BMI clusters. Our results showed that subjects should reduce their BMI values in order to improve lifestyle-related diseases, and several inspection values would improve with a decrement of BMI. Conversely, it may be difficult for subjects with a BMI under 18 to manage these values through BMI. We show the possibility of preventing lifestyle-related diseases by controlling BMI.

    Download PDF (1063K)
  • Masakazu MORIMOTO, Naotake KAMIURA, Yutaka HATA, Ichiro YAMAMOTO
    Article type: PAPER
    Subject area: Soft Computing
    2017 Volume E100.D Issue 8 Pages 1642-1646
    Published: August 01, 2017
    Released on J-STAGE: August 01, 2017
    JOURNAL FREE ACCESS

    To promote effective guidance based on health checkup results, this paper predicts the likelihood of developing lifestyle-related diseases from health checkup data. We focus on the fluctuation of the hemoglobin A1c (HbA1c) value, which is deeply connected with diabetes onset. We predict increases in the HbA1c value and examine which kinds of health checkup items play an important role in HbA1c fluctuation. Our experimental results show that, when we classify subjects according to their gender and triglyceride (TG) fluctuation value, we can effectively evaluate the risk of diabetes onset for each class.

    Download PDF (904K)
Special Section on Information and Communication System Security
  • Yasunori ISHIHARA
    2017 Volume E100.D Issue 8 Pages 1647-1648
    Published: August 01, 2017
    Released on J-STAGE: August 01, 2017
    JOURNAL FREE ACCESS
    Download PDF (83K)
  • Heung Youl YOUM
    Article type: INVITED PAPER
    2017 Volume E100.D Issue 8 Pages 1649-1662
    Published: August 01, 2017
    Released on J-STAGE: August 01, 2017
    JOURNAL FREE ACCESS

    The Internet of Things (IoT) is defined by ITU-T as a global infrastructure for the information society, enabling advanced services by interconnecting (physical and virtual) things based on existing and evolving interoperable information and communication technologies. Data may be communicated in low-power and lossy environments, which causes complicated security issues. Furthermore, concerns are raised over access to personally identifiable information pertaining to IoT devices, networks, and platforms. Security and privacy concerns have been the main barriers to implementing IoT, and they need to be resolved by appropriate security and privacy measures. This paper describes the security threats and privacy concerns of IoT, surveys current studies related to IoT, and identifies the various requirements and solutions to address these threats and concerns. In addition, this paper focuses on major global standardization activities for the security and privacy of the Internet of Things. Furthermore, future directions and strategies of international standardization for IoT security and privacy issues are given. This paper provides guidelines to assist in developing standardization strategies that allow a massive deployment of IoT systems in the real world.

    Download PDF (6849K)
  • Yumehisa HAGA, Yuta TAKATA, Mitsuaki AKIYAMA, Tatsuya MORI
    Article type: PAPER
    Subject area: Privacy
    2017 Volume E100.D Issue 8 Pages 1663-1670
    Published: August 01, 2017
    Released on J-STAGE: August 01, 2017
    JOURNAL FREE ACCESS

    Web tracking is widely used as a means to track users' behavior on websites. While web tracking provides new opportunities for e-commerce, it also entails risks such as privacy infringement. Therefore, analyzing such risks in the wild is meaningful for making users' privacy transparent. This work aims to understand how web tracking has been adopted by prominent websites and how resilient it is to ad-blocking techniques. Web tracking-enabled websites collect information called web browser fingerprints, which can be used to identify users. We develop a scalable system that can detect fingerprinting by using both dynamic and static analyses. If a tracking site makes use of many strong fingerprints, the site is likely resilient to ad-blocking techniques. We also analyze the connectivity of third-party tracking sites, which are linked from multiple websites. The link analysis allows us to extract groups of associated tracking sites and understand how influential they are. Based on analyses of 100,000 websites, we quantify the potential risks of web tracking-enabled websites. We reveal that 226 websites adopt fingerprints that cannot be detected by most off-the-shelf anti-tracking tools. We also reveal that a major, resilient third-party tracking site is linked from 50.0% of the top 100,000 popular websites.

    Download PDF (650K)
  • Yuichi NAKAMURA, Yoshimichi NAKATSUKA, Hiroaki NISHI
    Article type: PAPER
    Subject area: Privacy
    2017 Volume E100.D Issue 8 Pages 1671-1679
    Published: August 01, 2017
    Released on J-STAGE: August 01, 2017
    JOURNAL FREE ACCESS

    In this study, an anonymization infrastructure for the secondary use of data is proposed. The proposed infrastructure can publish data that includes private information while preserving privacy through anonymization techniques. The infrastructure considers a situation where ill-motivated users redistribute the data without authorization; therefore, we propose a watermarking method for anonymized data to solve this problem. The proposed method is implemented, and its tolerance against attacks is evaluated.

    Download PDF (1162K)
  • Takuya WATANABE, Mitsuaki AKIYAMA, Tatsuya MORI
    Article type: PAPER
    Subject area: Privacy
    2017 Volume E100.D Issue 8 Pages 1680-1690
    Published: August 01, 2017
    Released on J-STAGE: August 01, 2017
    JOURNAL FREE ACCESS

    We developed a novel, proof-of-concept side-channel attack framework called RouteDetector, which identifies a route for a train trip by simply reading smart device sensors: an accelerometer, magnetometer, and gyroscope. All these sensors are commonly used by many apps without requiring any permissions. The key technical components of RouteDetector can be summarized as follows. First, by applying a machine-learning technique to the data collected from sensors, RouteDetector detects the activity of a user, i.e., “walking,” “in moving vehicle,” or “other.” Next, it extracts departure/arrival times of vehicles from the sequence of the detected human activities. Finally, by correlating the detected departure/arrival times of the vehicle with timetables/route maps collected from all the railway companies in the rider's country, it identifies potential routes that can be used for a trip. We demonstrate that the strategy is feasible through field experiments and extensive simulation experiments using timetables and route maps for 9,090 railway stations of 172 railway companies.
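
    The final correlation step can be sketched as matching detected departure/arrival times against timetables within a tolerance; the routes, legs, and times below are invented for illustration, not from the 172-company dataset:

```python
def candidate_routes(observed, timetable, tolerance=60):
    """observed: list of (depart, arrive) times in seconds from activity
    detection; timetable: route name -> list of (depart, arrive) legs."""
    matches = []
    for route, legs in timetable.items():
        if len(legs) != len(observed):
            continue                      # wrong number of vehicle rides
        if all(abs(od - td) <= tolerance and abs(oa - ta) <= tolerance
               for (od, oa), (td, ta) in zip(observed, legs)):
            matches.append(route)
    return matches

timetable = {                             # invented timetables, two-leg trips
    "A-line": [(0, 300), (360, 600)],
    "B-line": [(0, 280), (400, 700)],
    "C-line": [(1200, 1500)],
}
routes = candidate_routes([(10, 290), (370, 610)], timetable)
```

    In practice the shortlist can contain several candidates; the attack narrows it as more legs of the trip are observed.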

    Download PDF (1482K)
  • Mitsuhiro HATADA, Tatsuya MORI
    Article type: PAPER
    Subject area: Program Analysis
    2017 Volume E100.D Issue 8 Pages 1691-1702
    Published: August 01, 2017
    Released on J-STAGE: August 01, 2017
    JOURNAL FREE ACCESS

    An enormous number of malware samples pose a major threat to our networked society. Antivirus software and intrusion detection systems are widely implemented on the hosts and networks as fundamental countermeasures. However, they may fail to detect evasive malware. Thus, setting a high priority for new varieties of malware is necessary to conduct in-depth analyses and take preventive measures. In this paper, we present a traffic model for malware that can classify network behaviors of malware and identify new varieties of malware. Our model comprises malware-specific features and general traffic features that are extracted from packet traces obtained from a dynamic analysis of the malware. We apply a clustering analysis to generate a classifier and evaluate our proposed model using large-scale live malware samples. The results of our experiment demonstrate the effectiveness of our model in finding new varieties of malware.

    Download PDF (466K)
  • Yuta ISHII, Takuya WATANABE, Mitsuaki AKIYAMA, Tatsuya MORI
    Article type: PAPER
    Subject area: Program Analysis
    2017 Volume E100.D Issue 8 Pages 1703-1713
    Published: August 01, 2017
    Released on J-STAGE: August 01, 2017
    JOURNAL FREE ACCESS

    Android is one of the most popular mobile device platforms. However, since Android apps can be disassembled easily, attackers inject additional advertisements or malicious code into original apps and redistribute them, and there are a non-negligible number of such repackaged apps. We generally call these malicious repackaged apps “clones.” However, there are also apps that are not clones but are similar to each other; we call such apps “relatives.” In this work, we developed a framework called APPraiser that extracts similar apps from a large dataset and classifies them into clones and relatives. We used the APPraiser framework to study over 1.3 million apps collected from both official and third-party marketplaces. Our extensive analysis revealed the following findings: in the official marketplace, 79% of similar apps were attributed to relatives, while in the third-party marketplace, 50% of similar apps were attributed to clones. The majority of relatives are apps developed by prolific developers in both marketplaces. We also found that, of the clones in the third-party market that were originally published in the official market, 76% are malware.

    Download PDF (1315K)
  • Yuta TAKATA, Mitsuaki AKIYAMA, Takeshi YAGI, Takeshi YADA, Shigeki GOT ...
    Article type: PAPER
    Subject area: Internet Security
    2017 Volume E100.D Issue 8 Pages 1714-1728
    Published: August 01, 2017
    Released on J-STAGE: August 01, 2017
    JOURNAL FREE ACCESS

    An incident response organization such as a CSIRT contributes to preventing the spread of malware infection by analyzing compromised websites and sending abuse reports with detected URLs to webmasters. However, abuse reports containing only URLs are not sufficient to clean up the websites. In addition, it is difficult to analyze malicious websites across different client environments because these websites change their behavior depending on the client environment. To expedite compromised-website clean-up, it is important to provide fine-grained information such as malicious URL relations, the precise position of compromised web content, and the targeted range of client environments. In this paper, we propose a new method of constructing a redirection graph with context, such as which web content redirects to malicious websites. The proposed method analyzes a website in a multi-client environment to identify which client environments are exposed to threats. We evaluated our system using crawling datasets of approximately 2,000 compromised websites. The results show that our system successfully identified malicious URL relations and compromised web content, reducing the number of URLs and the amount of web content to be analyzed by incident responders to 15.0% and 0.8%, respectively. Furthermore, by leveraging target information, it can also identify the targeted range of client environments for 30.4% of websites, as well as vulnerabilities that have been used in malicious websites. This fine-grained analysis by our system would contribute to improving the daily work of incident responders.

    Download PDF (2682K)
  • Bayu Adhi TAMA, Kyung-Hyune RHEE
    Article type: PAPER
    Subject area: Internet Security
    2017 Volume E100.D Issue 8 Pages 1729-1737
    Published: August 01, 2017
    Released on J-STAGE: August 01, 2017
    JOURNAL FREE ACCESS

    Anomaly detection is an approach in intrusion detection systems (IDSs) that aims at capturing any deviation from the profiles of normal network activities. However, it suffers from a high false alarm rate since it has difficulty distinguishing the boundaries between normal and attack profiles. In this paper, we propose an effective anomaly detection approach by hybridizing three techniques, i.e., particle swarm optimization (PSO), ant colony optimization (ACO), and a genetic algorithm (GA), for feature selection, together with an ensemble of four tree-based classifiers, i.e., random forest (RF), naive Bayes tree (NBT), logistic model trees (LMT), and reduced error pruning tree (REPT), for classification. The proposed approach is evaluated on the NSL-KDD dataset, and the experimental results show that it significantly outperforms existing methods in terms of accuracy and false alarm rate.
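
    The classification stage, an ensemble of four tree-based learners, reduces to majority voting over their predictions; the sketch below uses stand-in threshold rules in place of the actual RF/NBT/LMT/REPT models:

```python
from collections import Counter

def majority_vote(classifiers, sample):
    """Each classifier maps a sample to a label; the most common label wins
    (ties break in favor of the first label encountered)."""
    votes = [clf(sample) for clf in classifiers]
    return Counter(votes).most_common(1)[0][0]

# Stand-in "classifiers": simple threshold rules on a 2-feature sample.
def clf_a(s): return "attack" if s[0] > 0.5 else "normal"
def clf_b(s): return "attack" if s[1] > 0.7 else "normal"
def clf_c(s): return "attack" if s[0] + s[1] > 1.0 else "normal"
def clf_d(s): return "normal"           # a deliberately conservative member

ensemble = [clf_a, clf_b, clf_c, clf_d]
label = majority_vote(ensemble, (0.9, 0.8))
```

    The point of the ensemble is that one overly conservative (or aggressive) member is outvoted, which is how the combination trims the false alarm rate.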

    Download PDF (518K)
  • Mohamad Samir A. EID, Hitoshi AIDA
    Article type: PAPER
    Subject area: Internet Security
    2017 Volume E100.D Issue 8 Pages 1738-1750
    Published: August 01, 2017
    Released on J-STAGE: August 01, 2017
    JOURNAL FREE ACCESS

    Distributed Denial of Service (DDoS) attacks based on HTTP and HTTPS (i.e., HTTP(S)-DDoS) are increasingly popular among attackers. Overlay-based mitigation solutions attract small and medium-sized enterprises mainly for their low cost and high scalability. However, conventional overlay-based solutions assume content inspection to remotely mitigate HTTP(S)-DDoS attacks, prompting trust concerns. This paper reports on a new overlay-based method that practically adds a third level of client identification (to the conventional per-IP and per-connection levels). This enhanced identification enables remote mitigation of more complex HTTP(S)-DDoS categories without content inspection. A novel behavior-based reputation and penalty system is designed; then a simplified proof-of-concept prototype is implemented and deployed on DeterLab. Among several conducted experiments, two are presented in this paper, representing single-vector and multi-vector complex HTTP(S)-DDoS attack scenarios (utilizing LOIC, Slowloris, and a custom-built attack tool for HTTPS-DDoS). Results show a nearly 99.2% reduction in attack traffic and a 100% chance of legitimate service. However, attack reduction decreases, and the cost in service time (of a specified file) rises, temporarily during an approximately 2-minute mitigation period. Collateral damage to non-attacking clients sharing an attack IP is measured in terms of a temporary extra service time. Only the added identification level was utilized for mitigation; future work includes incorporating all three levels to mitigate switching and multi-request-per-connection attack categories.

    Download PDF (2136K)
  • Yong JIN, Kunitaka KAKOI, Nariyoshi YAMAI, Naoya KITAGAWA, Masahiko TO ...
    Article type: PAPER
    Subject area: Internet Security
    2017 Volume E100.D Issue 8 Pages 1751-1761
    Published: August 01, 2017
    Released on J-STAGE: August 01, 2017
    JOURNAL FREE ACCESS

    The widespread use of computers and communication networks substantially affects people's social activities, and such communication generally begins with domain name resolution, which is mainly provided by the DNS (Domain Name System). Meanwhile, continuous cyber threats to DNS, such as cache poisoning, critically affect computer networks. DNSSEC (DNS Security Extensions) is designed to provide secure name resolution between authoritative zone servers and DNS full resolvers. However, the high workload of DNSSEC validation on DNS full resolvers and the complex key management on authoritative zone servers hinder its wide deployment. Moreover, querying clients use the name resolution results validated on DNS full resolvers, so they only get errors when DNSSEC validation fails or times out. In addition, name resolution failures can occur on querying clients due to technical and operational issues of DNSSEC. In this paper, we propose a client-based DNSSEC validation system with an adaptive alert mechanism that considers a minimal querying client timeout. The proposed system notifies the user with alert messages along with the answers even when DNSSEC validation on the client fails or times out, so that the user can determine how to handle the received answers. We also implemented a prototype system and evaluated its features on a local experimental network as well as on the Internet. The contribution of this article is that the proposed system not only mitigates the workload of DNS full resolvers but also covers querying clients with secure name resolution, and by solving existing operational issues of DNSSEC, it can promote DNSSEC deployment.

    Download PDF (1927K)
  • Jianfeng LU, Zheng WANG, Dewu XU, Changbing TANG, Jianmin HAN
    Article type: PAPER
    Subject area: Access Control
    2017 Volume E100.D Issue 8 Pages 1762-1769
    Published: August 01, 2017
    Released on J-STAGE: August 01, 2017
    JOURNAL FREE ACCESS

    The user authorization query (UAQ) problem determines whether there exists an optimal set of roles to be activated to provide a set of permissions requested by a user. It has been deemed a key issue for efficiently handling users' access requests in role-based access control (RBAC). Unfortunately, the weight, a value attached to a permission or role to represent its importance, should be introduced to UAQ but has been ignored. In this paper, we propose a comprehensive definition of the weighted UAQ (WUAQ) problem with role-weighted-cardinality and permission-weighted-cardinality constraints. Moreover, we study the computational complexity of different subcases of WUAQ and show that many instances in each subcase are intractable. In particular, inspired by the idea of the genetic algorithm, we propose an algorithm to approximately solve an intractable subcase of the WUAQ problem. An important observation is that this algorithm can be easily modified to handle the other subcases of the WUAQ problem. The experimental results show the advantage of the proposed algorithm, which is especially fit for the case in which computational overhead is even more important than accuracy in a large-scale RBAC system.
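
    The genetic-algorithm idea can be sketched on a toy weighted instance: chromosomes select roles, and the fitness adds role weights plus a heavy penalty for each uncovered requested permission. The role/permission data and GA parameters below are illustrative, not the paper's algorithm:

```python
import random

# Toy WUAQ instance: each role is (permission set, weight).
ROLES = [({"p1", "p2"}, 3), ({"p2", "p3"}, 2), ({"p1"}, 1), ({"p3"}, 1)]
REQUEST = {"p1", "p2", "p3"}

def cost(bits):
    """Total weight of selected roles plus a heavy penalty per uncovered
    requested permission, so feasible solutions always beat infeasible ones."""
    covered = set().union(*(ROLES[i][0] for i, b in enumerate(bits) if b))
    weight = sum(ROLES[i][1] for i, b in enumerate(bits) if b)
    return weight + 100 * len(REQUEST - covered)

def evolve(generations=200, pop_size=30, seed=7):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in ROLES] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        elite = pop[:pop_size // 2]           # elitist truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, len(ROLES))          # one-point crossover
            child = [g ^ (rng.random() < 0.3)           # per-gene mutation
                     for g in a[:cut] + b[cut:]]
            children.append(child)
        pop = elite + children
    return min(pop, key=cost)

best = evolve()   # expect roles {r1, r2}: covers p1, p2, p3 with weight 3
```

    On this instance the minimum-weight cover has weight 3; the appeal of the approach is that the same loop scales to instances where exhaustive search is intractable, trading accuracy for bounded computational overhead.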

    Download PDF (700K)
  • Kenta NOMURA, Masami MOHRI, Yoshiaki SHIRAISHI, Masakatu MORII
    Article type: PAPER
    Subject area: Cryptographic Schemes
    2017 Volume E100.D Issue 8 Pages 1770-1779
    Published: August 01, 2017
    Released on J-STAGE: August 01, 2017
    JOURNAL FREE ACCESS

    We focus on the construction of a digital signature scheme for local broadcast, which allows devices with limited resources to securely transmit broadcast messages. A multi-group authentication scheme that enables a node to authenticate its membership to multiple verifiers by the sum of its secret keys has been proposed for such resource-limited devices. This paper presents a transformation that converts a multi-group authentication scheme into a multi-group signature scheme. We show that the multi-group signature scheme converted by our transformation is existentially unforgeable against chosen message attacks (EUF-CMA secure) in the random oracle model if the multi-group authentication scheme is secure against impersonation under passive attacks (IMP-PA secure). In the multi-group signature scheme, a sender can sign a message with the secret keys issued by multiple certification authorities, and the signature can validate the authenticity and integrity of the message to multiple verifiers. As a specific configuration, we show an example in which the multi-group signature scheme is obtained by converting an error-correcting-code-based multi-group authentication scheme.

    Download PDF (1043K)
  • Hua ZHANG, Shixiang ZHU, Xiao MA, Jun ZHAO, Zeng SHOU
    Article type: PAPER
    Subject area: Industrial Control System Security
    2017 Volume E100.D Issue 8 Pages 1780-1789
    Published: August 01, 2017
    Released on J-STAGE: August 01, 2017
    JOURNAL FREE ACCESS

    As advances in networking technology help to connect industrial control networks with the Internet, the threat from spammers, attackers and criminal enterprises has grown accordingly. However, traditional Network Intrusion Detection Systems make significant use of pattern matching to identify malicious behaviors and perform poorly at detecting zero-day exploits, in which a new attack is employed. In this paper, a novel method of anomaly detection in industrial control networks is proposed based on an RNN-GBRBM feature decoder. The method processes network packets and extracts high-quality features from manually selected raw features. A modified RNN-RBM is trained on normal traffic in order to learn the feature patterns of normal network behavior. The test traffic is then analyzed against the learned normal feature patterns using osPCA to measure the extent to which it resembles them. Moreover, we design a semi-supervised incremental updating algorithm in order to continuously improve the performance of the model. Experiments show that our method is more efficient at anomaly detection than traditional approaches for industrial control networks.

    Download PDF (1438K)
  • Sungmoon KWON, Hyunguk YOO, Taeshik SHON
    Article type: PAPER
    Subject area: Industrial Control System Security
    2017 Volume E100.D Issue 8 Pages 1790-1797
    Published: August 01, 2017
    Released on J-STAGE: August 01, 2017
    JOURNAL FREE ACCESS

    In the past, the security of industrial control systems was guaranteed by their obscurity. However, as the devices of industrial control systems became more varied and interaction between these devices became necessary, effective management systems for such networks emerged. This triggered the need for cyber-physical systems that connect industrial control system networks and external system networks. The standards for the protocols in industrial control systems describe security functions in detail, but many devices still use nonsecure communication because it is difficult to update existing equipment. Given this situation, a number of studies are being conducted to detect attacks against industrial control system protocols, but these studies consider only data payloads, without considering cases in which the availability of an industrial control system is compromised owing to packet reassembly failures. Therefore, with regard to the DNP3 protocol, which is widely used in industrial control systems, this paper describes attacks that can result in packet reassembly failures, proposes a countermeasure, and tests the proposed countermeasure by conducting actual attacks and recoveries. Detection over data payloads should be conducted only after ensuring the availability of an industrial control system by using this type of countermeasure.

    Download PDF (1793K)
Regular Section
  • Ryuta KAWANO, Hiroshi NAKAHARA, Seiichi TADE, Ikki FUJIWARA, Hiroki MA ...
    Article type: PAPER
    Subject area: Computer System
    2017 Volume E100.D Issue 8 Pages 1798-1806
    Published: August 01, 2017
    Released on J-STAGE: August 01, 2017
    JOURNAL FREE ACCESS

    Inter-switch networks for HPC systems and data centers can be improved by applying random shortcut topologies with a reduced number of hops. With minimal routing in such networks, however, deadlock-freedom is not guaranteed. Multiple Virtual Channels (VCs) are commonly used to avoid this problem. However, previous works do not provide good trade-offs between the number of required VCs and the time and memory complexities of the algorithm. In this work, a novel and fast algorithm, named ACRO, is proposed to endow arbitrary routing functions with deadlock-freedom while consuming a small number of VCs. A heuristic approach to reducing VCs is achieved with a hash table, which improves the scalability of the algorithm compared with our previous work. Moreover, experimental results show that ACRO can reduce the average number of VCs by up to 63% compared with a conventional algorithm of the same time complexity. Furthermore, ACRO reduces the time complexity by a factor of O(|N|·log|N|) compared with another conventional algorithm that requires almost the same number of VCs.

    Download PDF (1013K)
  • Kenji KANAZAWA, Tsutomu MARUYAMA
    Article type: PAPER
    Subject area: Computer System
    2017 Volume E100.D Issue 8 Pages 1807-1818
    Published: August 01, 2017
    Released on J-STAGE: August 01, 2017
    JOURNAL FREE ACCESS

    WalkSAT (WSAT) is one of the best-performing stochastic local search algorithms for the Boolean Satisfiability problem (SAT) and the Maximum Boolean Satisfiability problem (MaxSAT). WSAT is well suited to hardware acceleration because of its high inherent parallelism. Formal verification of digital circuits is one of the most important applications of SAT and MaxSAT. Structural knowledge, such as logic gates and their dependencies, can be derived from SAT/MaxSAT instances generated by formal verification of digital circuits, and such knowledge is useful for solving these instances efficiently. In this paper, we first discuss a heuristic that utilizes this structural knowledge when solving these problems with WSAT, and then show its implementation on an FPGA. The problem size in formal verification is typically very large, and most data have to be placed in off-chip DRAMs. In this situation, acceleration by the FPGA is limited by the throughput and access latency of the DRAMs. In our implementation, data are carefully mapped onto the on-chip memory banks and off-chip DRAMs so that most data in the off-chip DRAMs can be accessed continuously using burst reads. Furthermore, a variable-way cache memory composed of the on-chip memory banks is used to hide the DRAM access latency: it caches the head portion of each continuous read from the DRAMs and feeds it to the circuit until the remaining portion begins to arrive via burst read. We evaluate the performance of the proposed method under different configurations of the variable-way cache and degrees of processing parallelism, and discuss how much acceleration can be achieved.

    Download PDF (1071K)
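The base WSAT procedure that the paper accelerates can be sketched in a few lines. This is the textbook algorithm only, without the structural heuristic or the FPGA memory mapping that are the paper's contributions; the tiny CNF instance is illustrative.

```python
import random

def walksat(clauses, n_vars, p=0.5, max_flips=10000, seed=0):
    """Basic WalkSAT: clauses are lists of non-zero ints (DIMACS style),
    where literal v means variable v is true and -v means false."""
    rng = random.Random(seed)
    assign = [rng.choice([False, True]) for _ in range(n_vars + 1)]  # 1-indexed

    def sat(lit):
        return assign[abs(lit)] == (lit > 0)

    for _ in range(max_flips):
        unsat = [c for c in clauses if not any(sat(l) for l in c)]
        if not unsat:
            return assign                        # all clauses satisfied
        clause = rng.choice(unsat)
        if rng.random() < p:
            var = abs(rng.choice(clause))        # random-walk move
        else:
            # greedy move: flip the variable leaving the fewest unsatisfied clauses
            def cost(v):
                assign[v] = not assign[v]
                bad = sum(1 for c in clauses if not any(sat(l) for l in c))
                assign[v] = not assign[v]
                return bad
            var = min((abs(l) for l in clause), key=cost)
        assign[var] = not assign[var]
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3) and (x2 or x3)
cnf = [[1, 2], [-1, 3], [-2, -3], [2, 3]]
model = walksat(cnf, 3)
print(model is not None)
```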
  • Satoshi YAMANE, Ryosuke KONOSHITA, Tomonori KATO
    Article type: PAPER
    Subject area: Software Engineering
    2017 Volume E100.D Issue 8 Pages 1819-1826
    Published: August 01, 2017
    Released on J-STAGE: August 01, 2017
    JOURNAL FREE ACCESS

    Embedded systems have been widely used and have gradually become more complicated. It is important to ensure the safety of embedded software by software model checking. We have developed a verification system for embedded assembly programs. It generates an exact Kripke structure by exhaustively and dynamically simulating assembly programs, and simultaneously verifies it by model checking. In addition, we introduce undefined values to reduce the number of states in order to avoid state-space explosion.

    Download PDF (356K)
  • Weiwei XING, Shibo ZHAO, Shunli ZHANG, Yuanyuan CAI
    Article type: PAPER
    Subject area: Information Network
    2017 Volume E100.D Issue 8 Pages 1827-1836
    Published: August 01, 2017
    Released on J-STAGE: August 01, 2017
    JOURNAL FREE ACCESS

    Crowd modeling and simulation is an active research field that has drawn increasing attention from industry, academia and government recently. In this paper, we present a generic data-driven approach to generate crowd behaviors that can match the video data. The proposed approach is a bi-layer model to simulate crowd behaviors in pedestrian traffic in terms of exclusion statistics, parallel dynamics and social psychology. The bottom layer models the microscopic collision avoidance behaviors, while the top one focuses on the macroscopic pedestrian behaviors. To validate its effectiveness, the approach is applied to generate collective behaviors and re-create scenarios in the Informatics Forum, the main building of the School of Informatics at the University of Edinburgh. The simulation results demonstrate that the proposed approach is able to generate desirable crowd behaviors and offer promising prediction performance.

    Download PDF (1175K)
  • David KOCIK, Keiichi KANEKO
    Article type: PAPER
    Subject area: Dependable Computing
    2017 Volume E100.D Issue 8 Pages 1837-1843
    Published: August 01, 2017
    Released on J-STAGE: August 01, 2017
    JOURNAL FREE ACCESS

    The Möbius cube is a variant of the hypercube. Its advantage is that it connects the same number of nodes as a hypercube with almost half the diameter. We propose an algorithm that solves the node-to-node disjoint paths problem in n-Möbius cubes in time polynomial in n. We prove the correctness of the algorithm and show that its time complexity is O(n^2) and the maximum path length is 3n-5.

    Download PDF (520K)
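For readers unfamiliar with the topology, here is a sketch of the commonly used Möbius cube definition: along dimension i, flip bit i when bit i+1 is 0, and flip bits i down to 0 when it is 1, with the (nonexistent) bit n fixed by the 0- or 1-variant. The bit conventions below are one standard choice, not necessarily the exact one used in the paper.

```python
from collections import deque

def mobius_neighbors(x, n, variant=0):
    """Neighbors of node x in the n-dimensional variant-Möbius cube."""
    out = []
    for i in range(n):
        guide = variant if i == n - 1 else (x >> (i + 1)) & 1
        if guide == 0:
            out.append(x ^ (1 << i))              # flip bit i only
        else:
            out.append(x ^ ((1 << (i + 1)) - 1))  # flip bits i..0
    return out

def diameter(n, variant=0):
    """Exact diameter by BFS from every source (fine for small n)."""
    best = 0
    for src in range(1 << n):
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in mobius_neighbors(u, n, variant):
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        best = max(best, max(dist.values()))
    return best

print(diameter(4, 0), diameter(4, 1))  # vs. hypercube diameter 4 for n = 4
```

Each dimension's XOR mask has a distinct highest bit, so every node has exactly n distinct neighbors, and the guide bit is never changed by its own move, which makes the adjacency symmetric.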
  • Rachelle RIVERO, Richard LEMENCE, Tsuyoshi KATO
    Article type: PAPER
    Subject area: Artificial Intelligence, Data Mining
    2017 Volume E100.D Issue 8 Pages 1844-1851
    Published: August 01, 2017
    Released on J-STAGE: August 01, 2017
    JOURNAL FREE ACCESS

    With the huge influx of various data nowadays, extracting knowledge from them has become an interesting but tedious task among data scientists, particularly when the data come in heterogeneous form and have missing information. Many data completion techniques have been introduced, especially with the advent of kernel methods, through which one can represent heterogeneous data sets in a single form: as kernel matrices. However, among the many data completion techniques available in the literature, mutually completing several incomplete kernel matrices has not yet received much attention. In this paper, we present a new method, called the Mutual Kernel Matrix Completion (MKMC) algorithm, that tackles the problem of mutually inferring the missing entries of multiple kernel matrices by combining the notions of data fusion and kernel matrix completion, applied to biological data sets to be used for a classification task. We first introduce an objective function that is minimized by exploiting the EM algorithm, which in turn yields an estimate of the missing entries of the kernel matrices involved. The completed kernel matrices are then combined to produce a model matrix that can be used to further improve the obtained estimates. An interesting result of our study is that the E-step and the M-step are given in closed form, which makes our algorithm efficient in terms of time and memory. After completion, the kernel matrices are used to train an SVM classifier to test how well the relationships among the entries are preserved. Our empirical results show that the proposed algorithm outperforms traditional completion techniques in preserving the relationships among the data points and in accurately recovering the missing kernel matrix entries. Overall, MKMC offers a promising solution to the problem of mutual estimation of a number of relevant incomplete kernel matrices.

    Download PDF (531K)
  • Xiuze ZHOU, Shunxiang WU
    Article type: PAPER
    Subject area: Artificial Intelligence, Data Mining
    2017 Volume E100.D Issue 8 Pages 1852-1859
    Published: August 01, 2017
    Released on J-STAGE: August 01, 2017
    JOURNAL FREE ACCESS

    E-mails, which vary in length, are a special form of text. This variation in length increases the difficulty of text analysis: to analyze e-mail well, our models must handle both long and short e-mails. Unlike normal documents, short texts have some unique characteristics, such as data sparsity and ambiguity, making it difficult to obtain useful information from them; long and short texts therefore cannot be analyzed in the same manner, and we have to account for the characteristics of both. We present the Biterm Author Topic in the Sentences (BATS) model, which can discover the relevant topics of a corpus and accurately capture the relationship between the topics and the authors of e-mails. The Author Topic (AT) model learns from single words in a document, while BATS is modeled on word co-occurrence across the corpus. We assume that all words in a single sentence are generated from the same topic. Accordingly, our method uses word co-occurrence patterns at the sentence level only, rather than at the document or corpus level. Experiments on the Enron data set indicate that our proposed method achieves better performance on e-mails than the baseline methods. Moreover, our method analyzes long texts effectively and alleviates the data sparsity problems of short texts.

    Download PDF (959K)
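The sentence-level biterm extraction that distinguishes BATS from word-level models can be sketched as follows. Tokenization here is deliberately naive and the text is hypothetical; the model's inference machinery is omitted.

```python
from itertools import combinations
from collections import Counter

def sentence_biterms(doc):
    """Extract unordered word pairs (biterms) per sentence, reflecting the
    assumption that all words in one sentence share a topic."""
    biterms = Counter()
    for sentence in doc.split("."):          # naive sentence splitting
        words = sentence.lower().split()
        # every unordered pair of distinct words in the sentence is a biterm
        for a, b in combinations(sorted(set(words)), 2):
            biterms[(a, b)] += 1
    return biterms

bt = sentence_biterms("Send the report today. The report covers revenue.")
print(bt[("report", "the")])
```

Because pairs are only formed within a sentence, a short text still yields dense co-occurrence evidence without pulling in unrelated words from elsewhere in the document.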
  • Bin YANG, Yuliang LU, Kailong ZHU, Guozheng YANG, Jingwei LIU, Haibo Y ...
    Article type: PAPER
    Subject area: Pattern Recognition
    2017 Volume E100.D Issue 8 Pages 1860-1869
    Published: August 01, 2017
    Released on J-STAGE: August 01, 2017
    JOURNAL FREE ACCESS

    The rapid development of information technology has led to more and more high-dimensional datasets, making classification more difficult. However, not all features are useful for classification, and some may even lower classification accuracy. Feature selection, which aims to reduce the dimensionality of datasets, is a useful technique for solving classification problems. In this paper, we propose a modified bat algorithm (BA) for feature selection, called MBAFS, using an SVM. Several mechanisms are designed to avoid premature convergence. On the one hand, in order to maintain the diversity of bats, they are guided by a combination of a random bat and the global best bat. On the other hand, to enhance the ability to escape from local optima, MBAFS employs a mutation mechanism when the algorithm is trapped in a local optimum. Furthermore, the performance of MBAFS was tested on twelve benchmark datasets and compared with other BA-based algorithms and some well-known BPSO-based algorithms. Experimental results indicate that the proposed algorithm outperforms the other methods. The comparison also shows that MBAFS is competitive in terms of computational time.

    Download PDF (1889K)
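The two diversity mechanisms described in the abstract, guidance by a mix of a random bat and the global best plus bit mutation, can be sketched on a toy objective. The SVM-based fitness is replaced by a hypothetical score, and all parameters are illustrative rather than the paper's.

```python
import math
import random

RELEVANT = {0, 3, 5}          # hypothetical informative feature indices
N_FEAT = 8

def score(bits):
    """Toy stand-in for SVM cross-validation accuracy: reward relevant
    features, penalize irrelevant ones."""
    hits = sum(bits[i] for i in RELEVANT)
    extras = sum(bits[i] for i in range(N_FEAT) if i not in RELEVANT)
    return hits - 0.3 * extras

def mbafs_sketch(n_bats=15, iters=80, pm=0.05, seed=2):
    rng = random.Random(seed)
    bats = [[rng.randint(0, 1) for _ in range(N_FEAT)] for _ in range(n_bats)]
    vel = [[0.0] * N_FEAT for _ in range(n_bats)]
    best = max(bats, key=score)[:]
    for _ in range(iters):
        rand_bat = rng.choice(bats)               # diversity: random-bat guide
        for b in range(n_bats):
            f = rng.random()                      # random frequency in [0, 1]
            for j in range(N_FEAT):
                # guided by a mix of the global best and a random bat
                guide = best[j] if rng.random() < 0.5 else rand_bat[j]
                vel[b][j] += (bats[b][j] - guide) * f
                if rng.random() < abs(math.tanh(vel[b][j])):
                    bats[b][j] ^= 1               # V-shaped transfer flip
                if rng.random() < pm:
                    bats[b][j] ^= 1               # mutation against local optima
            if score(bats[b]) > score(best):
                best = bats[b][:]
    return best

best = mbafs_sketch()
print([i for i, v in enumerate(best) if v])
```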
  • Ning AN, Xiao-Guang ZHAO, Zeng-Guang HOU
    Article type: PAPER
    Subject area: Image Recognition, Computer Vision
    2017 Volume E100.D Issue 8 Pages 1870-1881
    Published: August 01, 2017
    Released on J-STAGE: August 01, 2017
    JOURNAL FREE ACCESS

    In this study, we address the problem of online RGB-D tracking, which is confronted with various challenges caused by deformation, occlusion, background clutter, and abrupt motion. Different trackers have different strengths and weaknesses, and thus a single tracker can only perform well in specific scenarios. We propose a 3D tracker-level fusion algorithm (TLF3D) which enhances the strengths of different trackers and suppresses their weaknesses to achieve robust tracking performance in various scenarios. The fusion result is generated from the outputs of the base trackers by optimizing an energy function that considers both 3D cube attraction and 3D trajectory smoothness. In addition, three complementary base RGB-D trackers with intrinsically different tracking components are proposed for the fusion algorithm. We perform extensive experiments on a large-scale RGB-D benchmark dataset. The evaluation results demonstrate the effectiveness of the proposed fusion algorithm and the superior performance of the proposed TLF3D tracker against state-of-the-art RGB-D trackers.

    Download PDF (5146K)
  • JinAn XU, Yufeng CHEN, Kuang RU, Yujie ZHANG, Kenji ARAKI
    Article type: PAPER
    Subject area: Natural Language Processing
    2017 Volume E100.D Issue 8 Pages 1882-1892
    Published: August 01, 2017
    Released on J-STAGE: August 01, 2017
    JOURNAL FREE ACCESS

    Named entity translation equivalent extraction plays a critical role in machine translation (MT) and cross-language information retrieval (CLIR). Traditional methods are often based on large-scale parallel or comparable corpora. However, the applicability of these studies is constrained, mainly because of the scarcity of parallel corpora of the required scale, especially for the Chinese-Japanese language pair. In this paper, we propose a method that considers the characteristics of Chinese and Japanese to automatically extract Chinese-Japanese named entity (NE) translation equivalents based on inductive learning (IL) from monolingual corpora. The method adopts the Chinese Hanzi and Japanese Kanji Mapping Table (HKMT) to calculate the similarity between NE instances in Japanese and Chinese. Then, we use IL to obtain partial translation rules for NEs by extracting the differing parts from highly similar NE instance pairs in Chinese and Japanese. Finally, a feedback process updates the Chinese-Japanese NE similarity and the rule sets. Experimental results show that our simple, efficient method overcomes the insufficiency of traditional methods, which depend heavily on bilingual resources. Compared with other methods, our method combines the language features of Chinese and Japanese with IL to automatically extract NE pairs. Using weakly correlated bilingual text sets and minimal additional knowledge to extract NE pairs effectively reduces the cost of building the corpus and the need for additional knowledge. Our method may help to build a large-scale Chinese-Japanese NE translation dictionary using monolingual corpora.

    Download PDF (2744K)
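The HKMT-based similarity step can be illustrated with a toy mapping table. The entries below are a tiny hypothetical sample, and the Dice-style set overlap is used only to show the idea; the real table and the paper's similarity measure are more elaborate.

```python
# A tiny, illustrative Hanzi-to-Kanji mapping (the real HKMT is much larger).
HKMT = {"东": "東", "京": "京", "学": "学", "图": "図", "书": "書", "馆": "館"}

def hk_similarity(zh, ja):
    """Dice-style similarity between a Chinese and a Japanese NE string:
    map each Hanzi to its Kanji variant, then compare character sets."""
    mapped = {HKMT.get(c, c) for c in zh}
    kanji = set(ja)
    if not mapped or not kanji:
        return 0.0
    return 2 * len(mapped & kanji) / (len(mapped) + len(kanji))

print(hk_similarity("东京图书馆", "東京図書館"))  # identical after mapping
```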
  • Hongjun ZHANG, Yuntian FENG, Wenning HAO, Gang CHEN, Dawei JIN
    Article type: PAPER
    Subject area: Natural Language Processing
    2017 Volume E100.D Issue 8 Pages 1893-1902
    Published: August 01, 2017
    Released on J-STAGE: August 01, 2017
    JOURNAL FREE ACCESS

    In recent years, deep learning has been widely applied to the relation extraction task. Such a method uses only word embeddings as network input and can model relations between target named entity pairs. However, it treats every relation mention equally, so it cannot effectively extract relations from a corpus with an enormous number of non-relations, which is the main reason why the performance of relation extraction is significantly lower than that of relation classification. This paper designs a deep reinforcement learning framework for relation extraction, which treats the relation extraction task as a two-step decision-making game. The method models relation mentions with a CNN and a Tree-LSTM, which calculate the initial state and the transition state of the game, respectively. In addition, we tackle the problem of an unbalanced corpus by designing a penalty function that increases the penalties for first-step decision-making errors. Finally, we use the Q-learning algorithm with value function approximation to learn the control policy π for the game. We conduct a series of experiments on the ACE2005 corpus, which show that the deep reinforcement learning framework can achieve state-of-the-art performance in the relation extraction task.

    Download PDF (1406K)
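The Q-learning-with-function-approximation component can be illustrated in isolation. A toy chain MDP with one-hot linear features stands in for the paper's CNN/Tree-LSTM state representations; everything here is a generic sketch, not the paper's setup.

```python
import random

# Toy chain MDP: states 0..4, actions 0 (left) / 1 (right), reward 1 at state 4.
N_STATES, N_ACTIONS, GOAL = 5, 2, 4

def features(s, a):
    """One-hot state-action features (tabular as a special case of linear FA)."""
    phi = [0.0] * (N_STATES * N_ACTIONS)
    phi[s * N_ACTIONS + a] = 1.0
    return phi

def q(w, s, a):
    return sum(wi * fi for wi, fi in zip(w, features(s, a)))

def train(episodes=300, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    w = [0.0] * (N_STATES * N_ACTIONS)
    for _ in range(episodes):
        s = 0
        while s != GOAL:
            # epsilon-greedy action selection
            a = rng.randrange(N_ACTIONS) if rng.random() < eps else \
                max(range(N_ACTIONS), key=lambda a_: q(w, s, a_))
            s2 = max(0, s - 1) if a == 0 else min(GOAL, s + 1)
            r = 1.0 if s2 == GOAL else 0.0
            target = r if s2 == GOAL else \
                r + gamma * max(q(w, s2, a_) for a_ in range(N_ACTIONS))
            td = target - q(w, s, a)
            for i, fi in enumerate(features(s, a)):
                w[i] += alpha * td * fi        # gradient step on the TD error
            s = s2
    return w

w = train()
policy = [max(range(N_ACTIONS), key=lambda a: q(w, s, a)) for s in range(GOAL)]
print(policy)
```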
  • Ying MA, Shunzhi ZHU, Yumin CHEN, Jingjing LI
    Article type: LETTER
    Subject area: Software Engineering
    2017 Volume E100.D Issue 8 Pages 1903-1906
    Published: August 01, 2017
    Released on J-STAGE: August 01, 2017
    JOURNAL FREE ACCESS

    A transfer learning method, called Kernel Canonical Correlation Analysis plus (KCCA+), is proposed for heterogeneous cross-company defect prediction. By combining kernel methods and transfer learning techniques, this method improves the performance of the predictor, giving it greater adaptability in nonlinearly separable scenarios. Experiments validate its effectiveness.

    Download PDF (193K)
  • Jaeho KIM, Jung Kyu PARK
    Article type: LETTER
    Subject area: Data Engineering, Web Information Systems
    2017 Volume E100.D Issue 8 Pages 1907-1910
    Published: August 01, 2017
    Released on J-STAGE: August 01, 2017
    JOURNAL FREE ACCESS

    The use of flash-memory-based storage devices is rapidly increasing, and user demands for high performance are also constantly growing. The performance of a flash storage device is greatly influenced by the cleaning operations of the Flash Translation Layer (FTL). Various studies have been conducted to lower the cost of cleaning operations. However, there are limits to how much the performance of flash storage can be improved with only the limited information available inside the storage device, without help from the host system. Recently, the SCSI, eMMC, and UFS standards have provided an interface for sending semantic information from the host system to the storage device. In this paper, we analyze the effects of semantic information on the performance and lifetime of flash storage devices. We evaluate the improvements through SA-FTL (Semantic-Aware Flash Translation Layer), which can take advantage of semantic information inside the storage device. Experiments show that SA-FTL improves the performance and lifetime of flash-based storage by up to 30% and 35%, respectively, compared to a simple page-level FTL.

    Download PDF (405K)
  • Younsoo PARK, Jungwoo CHOI, Young-Bin KWON, Jaehwa PARK, Ho-Hyun PARK
    Article type: LETTER
    Subject area: Information Network
    2017 Volume E100.D Issue 8 Pages 1911-1915
    Published: August 01, 2017
    Released on J-STAGE: August 01, 2017
    JOURNAL FREE ACCESS

    Remote data checking (RDC) is a scheme that allows clients to efficiently check the integrity of data stored at an untrusted server using spot-checking. Efforts have consistently been devoted to improving the efficiency of such RDC schemes because they involve some overhead. In this letter, a probabilistic attack model is adopted, in which an adversary corrupts exposed blocks in the network with a certain probability. An optimal spot-checking ratio is obtained that simultaneously guarantees the robustness of the scheme and minimizes the overhead.

    Download PDF (318K)
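The spot-checking analysis behind such schemes can be made concrete: with x corrupted blocks out of n, checking c random blocks (without replacement) detects corruption with probability 1 - C(n-x, c)/C(n, c), and the minimal c for a target probability follows directly. This is the standard sampling argument, not the letter's specific attack-model optimization.

```python
from math import comb

def detect_prob(n, x, c):
    """Probability that sampling c of n blocks hits at least one of the
    x corrupted blocks (hypergeometric, sampling without replacement)."""
    if c > n - x:
        return 1.0          # more checks than clean blocks: detection is certain
    return 1.0 - comb(n - x, c) / comb(n, c)

def min_checks(n, x, target=0.99):
    """Smallest number of spot-checks achieving the target detection probability."""
    for c in range(1, n + 1):
        if detect_prob(n, x, c) >= target:
            return c
    return n

# Classic rule of thumb: with 1% corruption, a few hundred checks suffice
# for 99% detection, independent of how large n grows.
print(min_checks(10000, 100))
```

The key property motivating spot-checking is visible here: the required number of checks depends on the corruption fraction and the target confidence, not on the total data size.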
  • Yan WANG, Long CHENG, Jian ZHANG
    Article type: LETTER
    Subject area: Information Network
    2017 Volume E100.D Issue 8 Pages 1916-1919
    Published: August 01, 2017
    Released on J-STAGE: August 01, 2017
    JOURNAL FREE ACCESS

    Wireless sensor networks (WSNs) have attracted many researchers in recent years. They can be widely used in areas such as surveillance, health care and agriculture. Location information is very important for WSN applications such as geographic routing, data fusion and tracking, so localization technology is one of the key technologies for WSNs. Since the computational complexity of traditional source localization is high, such localization methods cannot be run on a sensor node. In this paper, we first introduce a detection model based on the Neyman-Pearson criterion. This model considers the effects of the false alarm and missed alarm rates, so it is more realistic than the binary and probabilistic models. A localization method based on the affinity propagation algorithm is then proposed. Simulation results show that the proposed method provides high localization accuracy.

    Download PDF (404K)
  • Junsuk PARK, Nobuhiro SEKI, Keiichi KANEKO
    Article type: LETTER
    Subject area: Dependable Computing
    2017 Volume E100.D Issue 8 Pages 1920-1921
    Published: August 01, 2017
    Released on J-STAGE: August 01, 2017
    JOURNAL FREE ACCESS

    In topologies for interconnected nodes, it is desirable to have a low degree and a small diameter. For the same number of nodes, a dual-cube topology has almost half the degree of a hypercube while increasing the diameter by just one. Hence, it is a promising topology for the interconnection networks of massively parallel systems. We propose a stochastic fault-tolerant routing algorithm to find a non-faulty path from a source node to a destination node in a dual-cube.

    Download PDF (147K)
  • Kenji MATSUI, Toru TAMAKI, Bisser RAYTCHEV, Kazufumi KANEDA
    Article type: LETTER
    Subject area: Pattern Recognition
    2017 Volume E100.D Issue 8 Pages 1922-1924
    Published: August 01, 2017
    Released on J-STAGE: August 01, 2017
    JOURNAL FREE ACCESS

    We propose a feature for action recognition called Trajectory-Set (TS), built on top of the improved Dense Trajectory (iDT). The TS feature encodes only trajectories around densely sampled interest points, without any appearance features. Experimental results on the UCF50 action dataset demonstrate that TS is comparable to the state of the art and outperforms iDT, with an accuracy of 95.0% compared to 91.7% for iDT.

    Download PDF (492K)
  • Yuki SAITO, Shinnosuke TAKAMICHI, Hiroshi SARUWATARI
    Article type: LETTER
    Subject area: Speech and Hearing
    2017 Volume E100.D Issue 8 Pages 1925-1928
    Published: August 01, 2017
    Released on J-STAGE: August 01, 2017
    JOURNAL FREE ACCESS

    This paper proposes Deep Neural Network (DNN)-based Voice Conversion (VC) using input-to-output highway networks. VC is a speech synthesis technique that converts input features into output speech parameters, and DNN-based acoustic models for VC are used to estimate the output speech parameters from the input speech parameters. Given that the input and output are often in the same domain (e.g., cepstrum) in VC, this paper proposes VC using highway networks connected from input to output. The acoustic models predict the weighted spectral differentials between the input and output spectral parameters. This architecture not only alleviates the over-smoothing effects that degrade speech quality, but also effectively represents the characteristics of the spectral parameters. The experimental results demonstrate that the proposed architecture outperforms feed-forward neural networks in terms of the speech quality and speaker individuality of the converted speech.

    Download PDF (572K)
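One plausible reading of the input-to-output highway connection, a transform gate T(x) scaling a predicted differential H(x) that is added to the carried-through input, can be sketched as a single forward step. The weights, dimensions, and exact layer layout below are illustrative assumptions, not the paper's actual architecture.

```python
import math
import random

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def highway_vc_layer(x, Wh, bh, Wt, bt):
    """One input-to-output highway step for voice conversion:
    H(x) models the spectral differential, T(x) gates how much of it
    is applied, and the carry path passes the input cepstrum through:
        y = x + T(x) * H(x)
    """
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(Wh, bh)]                      # differential H(x)
    t = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(Wt, bt)]                      # transform gate T(x)
    return [xi + ti * hi for xi, ti, hi in zip(x, t, h)]

rng = random.Random(0)
dim = 4
x = [rng.uniform(-1, 1) for _ in range(dim)]
Wh = [[rng.uniform(-0.1, 0.1) for _ in range(dim)] for _ in range(dim)]
Wt = [[rng.uniform(-0.1, 0.1) for _ in range(dim)] for _ in range(dim)]
y = highway_vc_layer(x, Wh, [0.0] * dim, Wt, [-2.0] * dim)
print([round(a - b, 3) for a, b in zip(y, x)])  # gated differentials, small at init
```

With the gate bias negative, the layer starts close to an identity mapping, which is one reason highway-style connections are attractive when input and output lie in the same domain.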
  • Yu ZHOU, Leida LI, Ke GU, Zhaolin LU, Beijing CHEN, Lu TANG
    Article type: LETTER
    Subject area: Image Processing and Video Processing
    2017 Volume E100.D Issue 8 Pages 1929-1933
    Published: August 01, 2017
    Released on J-STAGE: August 01, 2017
    JOURNAL FREE ACCESS

    Depth-image-based-rendering (DIBR) is a popular technique for view synthesis. The rendering process mainly introduces artifacts around edges, which leads to degraded quality. This letter proposes a DIBR-synthesized image quality metric by measuring the Statistics of both Edge Intensity and Orientation (SEIO). The Canny operator is first used to detect edges. Then the gradient maps are calculated, based on which the intensity and orientation of the edge pixels are computed for both the reference and synthesized images. The distance between the two intensity histograms and that between the two orientation histograms are computed. Finally, the two distances are pooled to obtain the overall quality score. Experimental results demonstrate the advantages of the presented method.

    Download PDF (2830K)
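The SEIO pipeline, edge detection followed by edge-intensity and edge-orientation histograms and a pooled histogram distance, can be sketched with finite-difference gradients standing in for the Canny operator. The bin counts, threshold, and L1 distance are illustrative choices, not necessarily the letter's.

```python
import math

def seio_histograms(img, n_bins=8, thresh=0.5):
    """Edge-intensity and edge-orientation histograms from finite-difference
    gradients (a stand-in for the Canny-based edge maps in the letter)."""
    h, w = len(img), len(img[0])
    hist_i, hist_o = [0] * n_bins, [0] * n_bins
    max_mag = 0.0
    edges = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]
            gy = img[y + 1][x] - img[y - 1][x]
            mag = math.hypot(gx, gy)
            if mag > thresh:                              # keep edge pixels only
                edges.append((mag, math.atan2(gy, gx) % math.pi))
                max_mag = max(max_mag, mag)
    for mag, ang in edges:
        hist_i[min(int(mag / max_mag * n_bins), n_bins - 1)] += 1
        hist_o[min(int(ang / math.pi * n_bins), n_bins - 1)] += 1
    return hist_i, hist_o

def hist_distance(p, q):
    """L1 distance between normalized histograms; the letter pools the
    intensity and orientation distances into one quality score."""
    sp, sq = sum(p) or 1, sum(q) or 1
    return sum(abs(a / sp - b / sq) for a, b in zip(p, q))

# A vertical step edge: left half dark, right half bright.
ref = [[0.0] * 4 + [1.0] * 4 for _ in range(8)]
hi, ho = seio_histograms(ref)
print(sum(hi), hist_distance(ho, ho))
```

Comparing a synthesized view against its reference then reduces to computing these two histograms for each image and pooling the intensity and orientation distances.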
  • Hanhoon PARK, Jong-Il PARK
    Article type: LETTER
    Subject area: Image Processing and Video Processing
    2017 Volume E100.D Issue 8 Pages 1934-1937
    Published: August 01, 2017
    Released on J-STAGE: August 01, 2017
    JOURNAL FREE ACCESS

    Side-match vector quantization (SMVQ) was originally developed for image compression and is also useful for steganography. SMVQ requires creating its own state codebook for each block in both the encoding and decoding phases. Since the conventional method for state codebook generation is extremely time-consuming, this letter proposes a fast generation method. The proposed method is tens of times faster than the conventional one without loss of perceptual visual quality.

    Download PDF (1644K)
  • Hao GE, Feng YANG, Xiaoguang TU, Mei XIE, Zheng MA
    Article type: LETTER
    Subject area: Image Recognition, Computer Vision
    2017 Volume E100.D Issue 8 Pages 1938-1942
    Published: August 01, 2017
    Released on J-STAGE: August 01, 2017
    JOURNAL FREE ACCESS

    Recently, numerous methods have been proposed to tackle the problem of fine-grained image classification. However, few of them focus on the pre-processing step of image alignment. In this paper, we propose a new pre-processing method with the aim of reducing the variance among objects of the same class, so that the variance between objects of different classes becomes more significant. The proposed approach consists of four procedures. The “parts” of the objects are first located. After that, the rotation angle and the bounding box are obtained based on the spatial relationship of the “parts”. Finally, all the images are resized to similar sizes. The objects in the processed images possess translation, scale and rotation invariance. Experiments on the CUB-200-2011 and CUB-200-2010 datasets demonstrate that the proposed method can boost recognition performance when serving as a pre-processing step for several popular classification algorithms.

    Download PDF (967K)