IEICE Transactions on Information and Systems
Online ISSN : 1745-1361
Print ISSN : 0916-8532
Volume E99.D, Issue 3
Special Section on Foundations of Computer Science - Developments of the Theory of Algorithms and Computation -
• Keisuke TANAKA
2016 Volume E99.D Issue 3 Pages 558
Published: March 01, 2016
Released on J-STAGE: March 01, 2016
JOURNAL FREE ACCESS
• Hiroshi FUJIWARA, Atsushi MATSUDA, Toshihiro FUJITO
Article type: PAPER
2016 Volume E99.D Issue 3 Pages 559-566
Published: March 01, 2016
Released on J-STAGE: March 01, 2016
JOURNAL FREE ACCESS
We consider the problem of choosing between two price plans offered by a telecommunications company: a “pay-as-you-go” plan and a “flat-rate” plan. This problem is formulated as an online optimization problem extending the ski-rental problem and is analyzed via the competitive ratio. We give a lemma for easily calculating the competitive ratio and, based on the lemma, derive a family of optimal strategies for a realistic class of instances.
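The underlying trade-off can be illustrated with the classical break-even ski-rental strategy (a textbook sketch, not the paper's plan-specific strategies; the prices below are made-up parameters):

```python
def ski_rental_cost(usage_days, rent_per_day=1.0, buy_price=10.0):
    """Classic break-even strategy: pay-as-you-go until cumulative
    spending would reach the flat rate, then switch to the flat-rate plan."""
    cost = 0.0
    for day in range(usage_days):
        if cost + rent_per_day >= buy_price:
            return cost + buy_price          # switch to the flat-rate plan
        cost += rent_per_day                 # keep paying as we go
    return cost

def offline_opt(usage_days, rent_per_day=1.0, buy_price=10.0):
    """Offline optimum: whichever single plan is cheaper in hindsight."""
    return min(usage_days * rent_per_day, buy_price)
```

With these parameters the strategy never pays more than twice the offline optimum, the classical competitive-ratio bound the paper's analysis refines.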
• Hiroshi FUJIWARA, Takahiro SEKI, Toshihiro FUJITO
Article type: PAPER
2016 Volume E99.D Issue 3 Pages 567-574
Published: March 01, 2016
Released on J-STAGE: March 01, 2016
JOURNAL FREE ACCESS
We consider the following problem: unit weights arrive in an online manner, with the total number unknown, and upon each arrival we must decide where to place the weight on the unit circle in $\mathbb{R}^{2}$. The objective is to keep the center of mass of the placed weights as close to the origin as possible. We apply competitive analysis, defining the competitive difference as a performance measure. We first present an optimal strategy for placing unit weights which achieves a competitive difference of $\frac{1}{5}$. We next consider a variant in which the destination of each weight must be chosen from a set of positions that equally divide the unit circle. We give a simple strategy whose competitive difference is 0.35. Moreover, in the offline setting, several conditions for the center of mass to lie at the origin are derived.
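A naive online heuristic for this setting is to place each new weight diametrically opposite the current center of mass (a hedged sketch only; the paper's optimal strategy achieving $\frac{1}{5}$ is different and more involved):

```python
import math

def greedy_place(n):
    """Place n unit weights one at a time on the unit circle, each
    diametrically opposite the current center of mass (a naive greedy
    heuristic, not the paper's optimal strategy)."""
    placed = []
    cx, cy = 0.0, 0.0
    for i in range(n):
        if cx == 0.0 and cy == 0.0:
            angle = 0.0                       # no preference yet
        else:
            angle = math.atan2(-cy, -cx)      # opposite the center of mass
        x, y = math.cos(angle), math.sin(angle)
        placed.append((x, y))
        cx = (cx * i + x) / (i + 1)           # running center of mass
        cy = (cy * i + y) / (i + 1)
    return placed, math.hypot(cx, cy)
```

For n=2 the center returns to the origin, but for n=3 this heuristic leaves it at distance 1/3 from the origin, worse than the paper's 1/5 bound, which is exactly why a cleverer strategy is needed.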
• Kenji HASHIMOTO, Ryuta SAWADA, Yasunori ISHIHARA, Hiroyuki SEKI, Toru ...
Article type: PAPER
2016 Volume E99.D Issue 3 Pages 575-587
Published: March 01, 2016
Released on J-STAGE: March 01, 2016
JOURNAL FREE ACCESS
This paper discusses the decidability of determinacy and subsumption of tree transducers. For two tree transducers T1 and T2, T1 determines T2 if the output of T2 can be identified from the output of T1, that is, there is a partial function f such that [[T2]]=f∘[[T1]], where [[T1]] and [[T2]] are the tree transformation relations induced by T1 and T2, respectively. Also, T1 subsumes T2 if T1 determines T2 and the partial function f such that [[T2]]=f∘[[T1]] can be defined by a transducer in a designated class that T2 belongs to. In this paper, we show that determinacy is in coNEXPTIME for single-valued linear extended bottom-up tree transducers as the determiner class and single-valued bottom-up tree transducers as the determinee class. We also show that subsumption is in coNEXPTIME for these classes, and that a bottom-up tree transducer T3 such that [[T2]]=[[T3]]∘[[T1]] can be constructed if T1 subsumes T2.
• Shuichi INOKUCHI, Hitoshi FURUSAWA, Toshikazu ISHIDA, Yasuo KAWAHARA
Article type: PAPER
2016 Volume E99.D Issue 3 Pages 588-597
Published: March 01, 2016
Released on J-STAGE: March 01, 2016
JOURNAL FREE ACCESS
In this paper we present a novel treatment of cellular automata (CA) from an algebraic point of view. CA on monoids associated with Σ-algebras are introduced. Then an extension of Hedlund's theorem, which connects CA associated with Σ-algebras and continuous functions between prodiscrete topological spaces on the set of configurations, is discussed.
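For intuition, in the classical binary case a CA's global map is induced by a local rule, and Hedlund's theorem characterizes exactly such maps as the continuous shift-commuting ones. A minimal sketch (rule 90 on cyclic binary configurations, not the paper's Σ-algebra setting):

```python
def global_map(rule, config):
    """Apply a radius-1 local rule to a cyclic configuration; the induced
    global map is the kind of sliding-block (Hedlund) map the paper
    generalizes to the Sigma-algebra setting."""
    n = len(config)
    return [rule(config[(i - 1) % n], config[i], config[(i + 1) % n])
            for i in range(n)]

# Rule 90 on {0, 1}: new cell = XOR of its two neighbors.
xor_rule = lambda left, center, right: left ^ right
```

The global map commutes with the cyclic shift of configurations, the defining property in Hedlund's characterization.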
• Takehiro ITO, Hiroyuki NOOKA, Xiao ZHOU
Article type: PAPER
2016 Volume E99.D Issue 3 Pages 598-606
Published: March 01, 2016
Released on J-STAGE: March 01, 2016
JOURNAL FREE ACCESS
Suppose that we are given two vertex covers C0 and Ct of a graph G, together with an integer threshold k ≥ max{|C0|, |Ct|}. Then, the vertex cover reconfiguration problem is to determine whether there exists a sequence of vertex covers of G which transforms C0 into Ct such that each vertex cover in the sequence is of cardinality at most k and is obtained from the previous one by either adding or deleting exactly one vertex. This problem is PSPACE-complete even for planar graphs. In this paper, we first give a linear-time algorithm to solve the problem for even-hole-free graphs, which include several well-known graphs, such as trees, interval graphs and chordal graphs. We then give an upper bound on k for which any pair of vertex covers in a graph G has a desired sequence. Our upper bound is best possible in some sense.
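The reconfiguration rule is easy to state operationally; a brute-force breadth-first search over vertex covers (exponential, for tiny instances only, and unrelated to the paper's linear-time algorithm) makes the single-vertex move rule concrete:

```python
from collections import deque

def is_vertex_cover(cover, edges):
    """True if every edge has at least one endpoint in the cover."""
    return all(u in cover or v in cover for u, v in edges)

def reconfigurable(edges, c0, ct, k):
    """Brute-force BFS over vertex covers of size <= k, moving by a single
    vertex addition or deletion per step (tiny graphs only)."""
    vertices = {v for e in edges for v in e} | c0 | ct
    start, goal = frozenset(c0), frozenset(ct)
    seen, queue = {start}, deque([start])
    while queue:
        cur = queue.popleft()
        if cur == goal:
            return True
        for v in vertices:
            nxt = cur - {v} if v in cur else cur | {v}
            if (len(nxt) <= k and nxt not in seen
                    and is_vertex_cover(nxt, edges)):
                seen.add(nxt)
                queue.append(nxt)
    return False
```

On the path 1-2-3, transforming {1,3} into {2} requires temporarily growing to {1,2,3}, so it succeeds with k=3 but fails with k=2, illustrating the role of the threshold k.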
• Chuzo IWAMOTO, Yusuke KITAGAKI
Article type: PAPER
2016 Volume E99.D Issue 3 Pages 607-614
Published: March 01, 2016
Released on J-STAGE: March 01, 2016
JOURNAL FREE ACCESS
A Manhattan tower is a monotone orthogonal polyhedron lying in the halfspace z ≥ 0 such that (i) its intersection with the xy-plane is a simply connected orthogonal polygon, and (ii) the horizontal cross section at higher levels is nested in that for lower levels. Here, a monotone polyhedron meets each vertical line in a single segment or not at all. We study the computational complexity of finding the minimum number of guards which can observe the side and upper surfaces of a Manhattan tower. It is shown that the vertex-guarding, edge-guarding, and face-guarding problems for Manhattan towers are NP-hard.
• Morito OOMINE, Hiroshi NAGAMOCHI
Article type: PAPER
2016 Volume E99.D Issue 3 Pages 615-623
Published: March 01, 2016
Released on J-STAGE: March 01, 2016
JOURNAL FREE ACCESS
In the obnoxious facility game, with a set of agents in a space, we wish to design a mechanism, i.e., a decision-making procedure that determines the location of an undesirable facility based on the locations reported by the agents, where we do not know whether the location reported by an agent is where the agent actually exists. For a location of the facility, the benefit of each agent is defined to be the distance from the facility's location to where the agent exists. Given a mechanism, all agents are informed, before they report their locations, of how the mechanism uses the reported locations to determine the facility's location. Some agents may try to manipulate the decision by strategically misreporting their locations. To ensure fair decision-making, mechanisms should be designed so that no group of agents can obtain a larger benefit by misreporting their locations. A mechanism is called group strategy-proof if no subset of agents can form a group such that every member of the group can increase her benefit by misreporting her location jointly with the rest of the group. For a given mechanism, a point in the space is called a candidate if it can be output as the facility's location by the mechanism for some set of reported locations. In this paper, we consider the case where the given space is a tree metric, and we characterize the group strategy-proof mechanisms in terms of the distribution of all candidates in the tree metric. We prove that a group strategy-proof mechanism exists in the tree metric if and only if the tree has a point to which every candidate has the same distance.
• Katsuhisa YAMANAKA, Shin-ichi NAKANO
Article type: PAPER
2016 Volume E99.D Issue 3 Pages 624-629
Published: March 01, 2016
Released on J-STAGE: March 01, 2016
JOURNAL FREE ACCESS
In this paper, we consider the problem of generating uniformly random mosaic floorplans. We propose a polynomial-time algorithm that generates such floorplans with f faces. Two modified algorithms are created to meet additional criteria.
Regular Section
• Ryota SHIOYA, Hideki ANDO
Article type: PAPER
Subject area: Computer System
2016 Volume E99.D Issue 3 Pages 630-640
Published: March 01, 2016
Released on J-STAGE: March 01, 2016
JOURNAL FREE ACCESS
Out-of-order superscalar processors rename register numbers to remove false dependencies between instructions. The renaming logic is a high-cost module in a superscalar processor, and it consumes considerable energy. A renamed trace cache (RTC) was previously proposed for reducing the energy consumption of the renaming logic. An RTC caches and reuses renamed operands, so register renaming can be omitted on RTC hits. However, conventional RTCs suffer from several problems in performance, energy consumption, and hardware overhead. We propose a semi-global renamed trace cache (SGRTC) that caches only renamed operands that are a short distance from their producers outside traces, solving the problems of conventional RTCs. Evaluation results show that SGRTC achieves 64% lower energy consumption for renaming with a 0.2% performance overhead compared to a conventional processor.
• Hongda WANG, Jianchun XING, Juelong LI, Qiliang YANG, Xuewei ZHANG, De ...
Article type: PAPER
Subject area: Software Engineering
2016 Volume E99.D Issue 3 Pages 641-649
Published: March 01, 2016
Released on J-STAGE: March 01, 2016
JOURNAL FREE ACCESS
Web Services Business Process Execution Language (BPEL) has become the de facto standard for developing service-oriented workflow applications in open environments. The correctness and reliability of BPEL processes have attracted increasing concern. However, unique features of the BPEL language (e.g., dead path elimination (DPE) semantics and parallelism) raise significant difficulties, especially for path feasibility analysis of BPEL processes. Path feasibility analysis is the basis of BPEL testing, since it underlies test case generation. Because BPEL processes support both parallelism and DPE semantics, existing techniques cannot be directly applied to their path feasibility analysis. To address this problem, we present a novel technique for analyzing the path feasibility of BPEL processes. First, to handle the features mentioned above, we transform a BPEL process into an intermediate model, the BPEL control flow graph, which abstracts the execution flow of the process. Second, based on this abstraction, we symbolically encode every path of the process as a set of satisfiability formulas. Finally, we solve these formulas with Satisfiability Modulo Theories (SMT) solvers to obtain the feasible paths of the process. We illustrate the applicability and feasibility of our technique through a case study.
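The encode-and-solve step can be imitated in miniature: below, path conditions are Python predicates and brute-force enumeration over small variable domains stands in for the SMT solver (an illustrative sketch; the paper uses real SMT solving over symbolic formulas):

```python
from itertools import product

def path_feasible(conditions, var_domains):
    """Check whether some assignment satisfies every branch condition on a
    path, by brute-force enumeration (a stand-in for the SMT solving step)."""
    names = list(var_domains)
    for values in product(*(var_domains[n] for n in names)):
        env = dict(zip(names, values))
        if all(cond(env) for cond in conditions):
            return True   # this assignment is a witness: the path is feasible
    return False
```

A path guarded by x > 3 and then x < 6 is feasible, while one guarded by x > 3 and then x < 2 is not, which is exactly the distinction test-case generation needs.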
• Cong LIU, Jiujun CHENG, Yirui WANG, Shangce GAO
Article type: PAPER
Subject area: Software Engineering
2016 Volume E99.D Issue 3 Pages 650-660
Published: March 01, 2016
Released on J-STAGE: March 01, 2016
JOURNAL FREE ACCESS
Time performance optimization and resource conflict resolution are two important challenges in multi-project management. Compared with traditional project management, multi-project management usually suffers from limited and insufficient resources and tight, urgent deadlines for finishing all concurrent projects. In this case, time performance optimization of the global project management is badly needed. To the best of our knowledge, existing work seldom pays attention to formally modeling and analyzing multi-project management in an effort to eliminate resource conflicts and optimize the project execution time. This work proposes such a method based on PRT-Net, a Petri net-based formalism tailored to projects constrained by resources and time. The detailed modeling approach based on PRT-Net is first presented. Then, a resource conflict detection method with a corresponding algorithm is proposed. Next, priority criteria, including a key-activity priority strategy and a waiting-short priority strategy, are presented to resolve resource conflicts. Finally, we show how to construct a conflict-free PRT-Net by designing resource conflict resolution controllers. Experiments show that the proposed priority strategies make the execution time of the global multi-project schedule much shorter than execution without any strategy.
• Xin YANG, Norihiro YOSHIDA, Raula GAIKOVINA KULA, Hajimu IIDA
Article type: PAPER
Subject area: Software Engineering
2016 Volume E99.D Issue 3 Pages 661-670
Published: March 01, 2016
Released on J-STAGE: March 01, 2016
JOURNAL FREE ACCESS
Software peer review is regarded as one of the most important approaches to preserving software quality. Due to the distributed collaboration in Open Source Software (OSS) development, the review techniques and processes used in OSS environments differ from the traditional review method based on formal face-to-face meetings. Unlike related works, this study investigates the peer review processes of OSS projects from a social perspective, examining communication and interaction in peer review by using social network analysis (SNA). Moreover, the relationship between peer review contributors and their activities is studied. We propose an SNA-based approach, named Peer Review Social Network (PeRSoN), to evaluating contributors' activeness and social relationships. We evaluate our approach through an empirical case study in which 326,286 review comments and 1,745 contributors were extracted from three representative industrial OSS projects and analyzed. The results indicate that the social network structure significantly influences the actual activeness of contributors. Based on these results, we suggest that our approach can support project leaders in assigning review tasks, appointing reviewers, and other activities to improve current software processes.
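As a toy illustration of the SNA viewpoint, one can build the simplest possible review network from (reviewer, author) comment pairs and score each reviewer by how many distinct people they interacted with (a stand-in measure only; PeRSoN itself uses richer network metrics):

```python
def reviewer_degree_centrality(comments):
    """comments: list of (reviewer, author) pairs from review comments.
    Returns, per reviewer, the number of distinct authors they reviewed,
    i.e. out-degree in the review interaction network."""
    peers = {}
    for reviewer, author in comments:
        peers.setdefault(reviewer, set()).add(author)
    return {reviewer: len(authors) for reviewer, authors in peers.items()}
```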
• Katsuhisa MARUYAMA, Takayuki OMORI, Shinpei HAYASHI
Article type: PAPER
Subject area: Software Engineering
2016 Volume E99.D Issue 3 Pages 671-687
Published: March 01, 2016
Released on J-STAGE: March 01, 2016
JOURNAL FREE ACCESS
Change-aware development environments can automatically record fine-grained code changes to a program and allow programmers to replay the recorded changes in chronological order. However, programmers do not always need to replay all the code changes to investigate how a particular entity of the program has been changed, so they often eliminate code changes of no interest by manually skipping them during replay. This skipping action is an obstacle that makes many programmers hesitate to use existing replaying tools. This paper proposes a slicing mechanism that automatically removes such skipped code changes from the whole history of past code changes and extracts only those necessary to build a particular class member of a Java program. In this mechanism, fine-grained code changes are represented by edit operations recorded on the source code of a program, and dependencies among edit operations are formalized. The paper also presents a working tool that slices the operation history and replays the resulting slices. With this tool, programmers can avoid replaying edit operations that are nonessential to constructing the class members they want to understand. Experimental results show that the tool improves on conventional replaying tools with respect to the number of edit operations that must be examined, and on history filtering tools with respect to the accuracy of the edit operations to be replayed.
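The slicing idea can be sketched as a dependency closure over a recorded operation history. The tuple format below is invented for illustration and is not the paper's formalization of edit operations:

```python
def slice_history(ops, target):
    """ops: chronological list of (op_id, entity, dep_ids) tuples, where
    dep_ids are ids of earlier operations this one depends on. Returns the
    sub-history needed to rebuild `target`: a backward dependency closure."""
    needed, result = set(), []
    for op_id, entity, deps in reversed(ops):
        if entity == target or op_id in needed:
            needed.add(op_id)
            needed.update(deps)           # pull in transitive dependencies
            result.append((op_id, entity, deps))
    return list(reversed(result))         # restore chronological order
```

Operations on unrelated entities with no dependency link to the target drop out of the slice, which is what spares the programmer from skipping them by hand.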
• Halil ARSLAN, Sinan TÜNCEL
Article type: PAPER
Subject area: Information Network
2016 Volume E99.D Issue 3 Pages 688-694
Published: March 01, 2016
Released on J-STAGE: March 01, 2016
JOURNAL FREE ACCESS
The use of peer-to-peer (P2P) networks that let internet users share real-time data is becoming more and more popular. It is therefore necessary to identify the problems that arise during P2P communication and to develop proper solutions. One major problem is that clients behind devices that create private networks, such as network address translators (NAT) and firewalls, cannot be reached from the public network. Among the solutions proposed for this problem, Interactive Connectivity Establishment (ICE) and Real Time Media Flow Protocol (RTMFP) are the most preferred methods in the literature. These methods are more attractive than other NAT traversal mechanisms because they are independent of the internet infrastructure and are suitable for dynamic structures. However, they have some disadvantages. In this study, a new state-based end-to-end communication technique (SBN) for NAT traversal was designed and realized. The performance of the method was evaluated against three criteria (connectivity check delay, connection packet count, and bandwidth) and compared with ICE. The results show that the proposed SBN method achieves average improvements over ICE of 78% in connectivity check delay, 69% in the number of packets used, and 66% in bandwidth consumption.
• Daisuke KOTANI, Yasuo OKABE
Article type: PAPER
Subject area: Information Network
2016 Volume E99.D Issue 3 Pages 695-707
Published: March 01, 2016
Released on J-STAGE: March 01, 2016
JOURNAL FREE ACCESS
Protecting control planes in networking hardware from high-rate packets is a critical issue for networks in operation. One common approach in conventional networking hardware is to offload expensive functions onto hard-wired offload engines such as ASICs. This approach is inadequate for OpenFlow networks because it restricts some of the flexibility for network control that OpenFlow tries to provide. Therefore, we need a control plane protection mechanism in OpenFlow switches as a last resort that still preserves flexibility for network control. In this paper, we propose a mechanism to filter out Packet-In messages, which carry packets handled by the control plane in OpenFlow networks, without dropping those important for network control. Switches record the values of packet header fields before sending Packet-In messages and filter out packets that have the same values as the recorded ones. The controllers specify in advance the header fields whose values must be recorded, and the fields are selected based on controller design. We have implemented and evaluated the proposed mechanism on a prototype software switch, concluding that it dramatically reduces CPU load on switches while passing the Packet-In messages that are important for network control.
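A minimal sketch of the filtering rule, with illustrative header-field names (not actual OpenFlow match fields):

```python
class PacketInFilter:
    """Remember the values of controller-chosen header fields for packets
    already sent to the controller, and suppress repeats. A sketch of the
    proposed mechanism; field names here are illustrative only."""
    def __init__(self, tracked_fields):
        self.tracked = tracked_fields      # chosen in advance by the controller
        self.seen = set()

    def should_send_packet_in(self, headers):
        key = tuple(headers.get(f) for f in self.tracked)
        if key in self.seen:
            return False                   # same field values already reported
        self.seen.add(key)
        return True
```

Packets differing only in untracked fields are treated as repeats, which is how the filter sheds load without dropping the first (important) Packet-In of each class.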
• David KOCIK, Yuki HIRAI, Keiichi KANEKO
Article type: PAPER
Subject area: Dependable Computing
2016 Volume E99.D Issue 3 Pages 708-713
Published: March 01, 2016
Released on J-STAGE: March 01, 2016
JOURNAL FREE ACCESS
This paper proposes an algorithm that solves the node-to-set disjoint paths problem in an n-Möbius cube in time polynomial in n. It also proves the correctness of the algorithm and estimates the time complexity, O(n^4), and the maximum path length, 2n-1. A computer experiment is conducted for n=1,2,...,31 to measure the average performance of the algorithm. The results show that the average time complexity gradually approaches O(n^3) and that the maximum path lengths are not easily attained over the range of n in the experiment.
• Chendra Hadi SURYANTO, Kazuhiro FUKUI, Hideitsu HINO
Article type: PAPER
Subject area: Pattern Recognition
2016 Volume E99.D Issue 3 Pages 714-723
Published: March 01, 2016
Released on J-STAGE: March 01, 2016
JOURNAL FREE ACCESS
Many methods have been proposed for measuring the structural similarity between two protein folds. However, it is difficult to select the single best method for the classification task, as each method has its own strengths and weaknesses. Intuitively, combining multiple methods is one way to obtain optimal classification results. In this paper, by generalizing the concept of the large margin nearest neighbor (LMNN), we propose a method for combining multiple distance metrics from different types of protein structure comparison methods for the protein fold classification task. While LMNN is limited to Mahalanobis-based distance metric learning from a set of feature vectors of training data, the proposed method learns an optimal combination of metrics from a set of distance metrics by minimizing the distances between intra-class data and enlarging the distances between data of different classes. The main advantage of the proposed method is its ability to find optimal weight coefficients for combining many metrics, possibly including poor ones, avoiding the difficulty of selecting which metrics to include in the combination. The effectiveness of the proposed method is demonstrated in classification experiments on two public protein datasets, namely the Ding-Dubchak dataset and the ENZYMES dataset.
• Byung Ok KANG, Oh-Wook KWON
Article type: PAPER
Subject area: Speech and Hearing
2016 Volume E99.D Issue 3 Pages 724-730
Published: March 01, 2016
Released on J-STAGE: March 01, 2016
JOURNAL FREE ACCESS
We propose a new method of combining multiple acoustic models in Gaussian mixture model (GMM) spaces for robust speech recognition. Although large-vocabulary continuous speech recognition (LVCSR) systems are now widespread, they often make egregious recognition errors resulting from unavoidable mismatches in speaking style or environment between training and real conditions. To handle this problem, a multi-style training approach has conventionally been used to train a large acoustic model on a large speech database covering various speaking styles and environmental noise. In this work, by contrast, we combine multiple sub-models trained for different speaking styles or environmental noise into a large acoustic model by maximizing the log-likelihood of the sub-model states sharing the same phonetic context and position. The combined acoustic model is then used in a new target system that is robust to variation in speaking style and diverse environmental noise. Experimental results show that the proposed method significantly outperforms conventional methods in two tasks: non-native English speech recognition for second-language learning systems and noise-robust point-of-interest (POI) recognition for car navigation systems.
• Li TIAN, Qi JIA, Sei-ichiro KAMATA
Article type: PAPER
Subject area: Image Recognition, Computer Vision
2016 Volume E99.D Issue 3 Pages 731-738
Published: March 01, 2016
Released on J-STAGE: March 01, 2016
JOURNAL FREE ACCESS
In this study, we propose a simple yet general and powerful framework that integrates multiple global and local features by Product Sparse Coding (PSC) for image retrieval. In our framework, multiple global and local features are extracted from images and then transformed into Trimmed-Root (TR)-features. After that, the features are encoded into compact codes by PSC. Finally, a two-stage ranking strategy is proposed for indexing in retrieval. We make three major contributions in this study. First, we propose the TR representation of multiple image features and show that it offers better performance than the original features. Second, the features integrated by PSC are very compact and effective, with lower complexity than standard sparse coding. Finally, the two-stage ranking strategy balances efficiency and memory usage in storage. Experiments demonstrate that our compact image representation is superior to state-of-the-art alternatives for large-scale image retrieval.
• Tatsuro ORIKASA, Takayuki OKATANI
Article type: PAPER
Subject area: Computer Graphics
2016 Volume E99.D Issue 3 Pages 739-746
Published: March 01, 2016
Released on J-STAGE: March 01, 2016
JOURNAL FREE ACCESS
The depth-of-field limitation of our eyes causes out-of-focus blur in retinal images. The blur changes dynamically whenever we shift our gaze and the depth of the scene point we are looking at changes. This paper proposes an image display that reproduces retinal out-of-focus blur by using a stereoscopic display and eye trackers. Its purpose is to provide the viewer with more realistic visual experiences than conventional (stereoscopic) displays. Unlike previous similar systems that track only one of the viewer's eyes to estimate the gaze depth, the proposed system tracks both eyes individually using two eye trackers and estimates the gaze depth from the convergence angle calculated by triangulation. This provides several advantages over existing schemes, such as the ability to deal with scenes having multiple depths. We describe a detailed implementation of the proposed system and report an experiment conducted to examine its effectiveness. In the experiment, creating a scene having two depths using two LCD displays together with a half mirror, we examined how difficult it is for viewers to distinguish between the real scene and its virtual reproduction created by the proposed display system. The results of the experiment show the effectiveness of the proposed approach.
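The convergence-based depth estimate reduces to simple triangulation. A sketch under idealized assumptions (eyes on a horizontal baseline, angles measured from straight ahead with inward rotation positive, no calibration error):

```python
import math

def gaze_depth(ipd, left_angle, right_angle):
    """Estimate gaze depth from the two eyes' horizontal gaze angles
    (radians). Eyes sit at (-ipd/2, 0) and (+ipd/2, 0); the two gaze rays
    intersect at depth z satisfying tan(left) + tan(right) = ipd / z.
    An idealized sketch; the actual system calibrates per viewer."""
    denom = math.tan(left_angle) + math.tan(right_angle)
    if denom <= 0:
        return float('inf')                # rays parallel or diverging
    return ipd / denom
```

A target straight ahead at 0.5 m with a 64 mm interpupillary distance rotates each eye inward by atan(0.032 / 0.5), and the formula recovers the 0.5 m depth.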
• Seon-Ho SHIN, Jooyoung LEE, Jong-Hyun KIM, Ikkyun KIM, MyungKeun YOON
Article type: LETTER
Subject area: Fundamentals of Information Systems
2016 Volume E99.D Issue 3 Pages 747-750
Published: March 01, 2016
Released on J-STAGE: March 01, 2016
JOURNAL FREE ACCESS
We design a new hash table for high-speed networking that reduces main memory accesses even when the ratio of inserted items to the table size is high, at which point previous schemes no longer work. This improvement comes from a new design of a summary, called expanded keys, exploiting recent multiple hash functions and Bloom filter theories.
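The "summary" idea is in the spirit of a Bloom filter: a small bit array that can answer "definitely not present" without touching main memory. A minimal sketch (the paper's expanded-keys design is more elaborate):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: the kind of compact summary that lets a hash
    table lookup skip main memory when a key is definitely absent."""
    def __init__(self, nbits=1024, nhashes=4):
        self.nbits, self.nhashes = nbits, nhashes
        self.bits = 0                       # bit array packed into an int

    def _positions(self, key):
        for i in range(self.nhashes):       # derive k independent positions
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.nbits

    def add(self, key):
        for p in self._positions(key):
            self.bits |= 1 << p

    def might_contain(self, key):
        """False means definitely absent; True means probably present."""
        return all(self.bits >> p & 1 for p in self._positions(key))
```

False positives grow as the ratio of inserted items to table size rises, which is precisely the regime the paper's expanded-keys summary targets.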
• Keehang KWON, Jeongyoon SEO, Daeseong KANG
Article type: LETTER
Subject area: Software System
2016 Volume E99.D Issue 3 Pages 751-755
Published: March 01, 2016
Released on J-STAGE: March 01, 2016
JOURNAL FREE ACCESS
Adding versatile interactions to imperative programming languages such as C, Java, and Android is an essential task. Unfortunately, existing languages provide only limited constructs for user interaction. These constructs are usually in the form of unbounded quantification. For example, existing languages can take keyboard input from the user only via a read(x)/scan(x) statement. Note that the value of x is unbounded in the sense that x can take any value, so this statement is not useful for applications with bounded inputs. To support bounded choices, we propose new bounded-choice statements for user interaction. Each input device (keyboard, mouse, touchpad, ...) naturally requires its own bounded-choice statement. To keep things simple, however, we focus on a bounded-choice statement for the keyboard, kchoose, which allows more controlled and more guided participation from the user. We illustrate our idea via CBI, an extension of core C with a new bounded-choice statement for the keyboard.
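In a language with higher-order functions, the effect of a keyboard bounded-choice statement can be approximated by a loop that rejects out-of-range input (a sketch of the idea; the paper's kchoose is a language construct in CBI, not a library function):

```python
def kchoose(prompt, choices, read_input=input):
    """Bounded-choice keyboard read: keep asking until the user types one
    of the allowed values, so the result is guaranteed to lie in `choices`."""
    while True:
        answer = read_input(f"{prompt} {sorted(choices)}: ")
        if answer in choices:
            return answer                  # bounded: never an arbitrary value
```

Unlike an unbounded read(x), the caller can rely on the postcondition `result in choices`, which is the point of the bounded-choice construct.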
• Jae-Chul UM, Ki-Seong LEE, Chan-Gun LEE
Article type: LETTER
Subject area: Software Engineering
2016 Volume E99.D Issue 3 Pages 756-759
Published: March 01, 2016
Released on J-STAGE: March 01, 2016
JOURNAL FREE ACCESS
Software architecture recovery techniques are often adopted to derive a module view of software from its source code in case software architecture documents are unavailable or outdated. The module view is one of the most important perspectives of software architecture. In this paper, we propose a novel approach to derive a module view by adaptively integrating structural dependency and textual similarity. Our approach utilizes Newman modularity and Shannon information entropy to determine the appropriate weights of the dependencies during the integration. We apply our approach to various open-source projects and show the experimental results validating the effectiveness of the approach.
• Jaehyun HWANG, Cheol-Ho HONG, Hyo-Joong SUH
Article type: LETTER
Subject area: Information Network
2016 Volume E99.D Issue 3 Pages 760-762
Published: March 01, 2016
Released on J-STAGE: March 01, 2016
JOURNAL FREE ACCESS
This paper proposes a rate adjustment scheme for inbound data traffic on a virtualized host. Most prior studies on network virtualization have only focused on outbound traffic, yet many cloud applications rely on inbound traffic performance. The proposed scheme adjusts the inbound rates of virtual network interfaces dynamically in proportion to the bandwidth demands of the virtual machines.
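The proportional-adjustment idea can be sketched in a few lines; the function below divides a fixed inbound budget among virtual machines in proportion to their reported demands (an illustration, not the paper's actual controller):

```python
def adjust_inbound_rates(total_bw, demands):
    """Divide the host's inbound bandwidth among VMs in proportion to
    their demands. demands: dict of VM name -> demanded rate."""
    total_demand = sum(demands.values())
    if total_demand == 0:
        return {vm: 0.0 for vm in demands}
    if total_demand <= total_bw:
        return dict(demands)               # everyone fits; no throttling
    scale = total_bw / total_demand        # shrink all demands by one factor
    return {vm: d * scale for vm, d in demands.items()}
```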
• Yasutaka HATAKEYAMA, Takahiro OGAWA, Hironori IKEDA, Miki HASEYAMA
Article type: LETTER
Subject area: Artificial Intelligence, Data Mining
2016 Volume E99.D Issue 3 Pages 763-768
Published: March 01, 2016
Released on J-STAGE: March 01, 2016
JOURNAL FREE ACCESS
In this paper, we propose a method to estimate the most resource-consuming disease from electronic claim data based on Labeled Latent Dirichlet Allocation (Labeled LDA). The proposed method models each electronic claim, via its medical procedures, as a mixture of resource-consuming diseases. Thus, the most resource-consuming disease can be estimated automatically by applying Labeled LDA to the electronic claim data. Although the method is a simple scheme, it is the first attempt at estimating the most resource-consuming disease.
• Yi-Jia ZHANG, Zhong-Jian KANG, Xin-Ling GUO, Zhe-Ming LU
Article type: LETTER
Subject area: Artificial Intelligence, Data Mining
2016 Volume E99.D Issue 3 Pages 769-772
Published: March 01, 2016
Released on J-STAGE: March 01, 2016
JOURNAL FREE ACCESS
The power grid is one of the most important technological networks of our time and has been widely studied as a complex network. It has been developed for more than a century and has become an extremely large and seemingly robust system. Yet it is also extremely fragile, because some unexpected minimal failures may lead to sudden and massive blackouts. Many works have investigated the structural vulnerability of power grids from a topological point of view based on complex network theory. This Letter focuses on the structural vulnerability of the power grid under selective node removal. We propose a new node centrality, called overall information centrality (OIC), to guide node removal attacks. We test the effectiveness of our centrality in guiding node removal on several IEEE power grids. Simulation results show that, compared with other node centralities such as degree centrality (DC), betweenness centrality (BC), and closeness centrality (CC), OIC is more effective at guiding node removal and can destroy the power grid in fewer steps.
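The attack-simulation loop common to such studies can be sketched as follows, using degree centrality as the guiding score (DC stands in here because the abstract does not define OIC):

```python
def largest_component(adj, removed):
    """Size of the largest connected component, ignoring removed nodes.
    adj: dict of node -> list of neighbors."""
    seen, best = set(), 0
    for start in adj:
        if start in removed or start in seen:
            continue
        stack, comp = [start], 0
        seen.add(start)
        while stack:
            u = stack.pop()
            comp += 1
            for v in adj[u]:
                if v not in removed and v not in seen:
                    seen.add(v)
                    stack.append(v)
        best = max(best, comp)
    return best

def degree_attack(adj):
    """Remove nodes in order of (static) degree centrality and report the
    largest surviving component after each removal."""
    order = sorted(adj, key=lambda u: len(adj[u]), reverse=True)
    removed, sizes = set(), []
    for u in order:
        removed.add(u)
        sizes.append(largest_component(adj, removed))
    return sizes
```

The faster the component-size sequence collapses, the more effective the guiding centrality, which is the comparison the Letter makes between OIC, DC, BC, and CC.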
• Leida LI, Yu ZHOU, Jinjian WU, Jiansheng QIAN, Beijing CHEN
Article type: LETTER
Subject area: Image Processing and Video Processing
2016 Volume E99.D Issue 3 Pages 773-776
Published: March 01, 2016
Released on J-STAGE: March 01, 2016
JOURNAL FREE ACCESS
Image retouching is fundamental in photography and is widely used to improve the perceptual quality of low-quality images. Traditional image quality metrics are designed for degraded images, so they are limited in evaluating the quality of retouched images. This letter presents a RETouched Image QUality Evaluation (RETIQUE) algorithm that measures structure and color changes between the original and retouched images. Structure changes are measured by gradient similarity; colorfulness and saturation are utilized to measure color changes. The overall quality score of a retouched image is computed as a linear combination of gradient similarity and color similarity. The performance of RETIQUE is evaluated on the public Digitally Retouched Image Quality (DRIQ) database. Experimental results demonstrate that the proposed metric outperforms state-of-the-art metrics.
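The gradient-similarity term can be sketched in the usual SSIM-like form; the constant c and the pixel-wise pooling below are illustrative choices, not necessarily RETIQUE's exact formulation:

```python
def gradient_similarity(img1, img2, c=1e-4):
    """Mean gradient-magnitude similarity between two same-size grayscale
    images (lists of rows), using the SSIM-like term (2ab+c)/(a^2+b^2+c)
    per pixel. A sketch of the structure term only; RETIQUE also combines
    a color (colorfulness/saturation) term."""
    h, w = len(img1), len(img1[0])

    def grad(img, y, x):
        # Forward-difference gradient magnitude, zero at the borders.
        gx = img[y][x + 1] - img[y][x] if x + 1 < w else 0.0
        gy = img[y + 1][x] - img[y][x] if y + 1 < h else 0.0
        return (gx * gx + gy * gy) ** 0.5

    total = 0.0
    for y in range(h):
        for x in range(w):
            a, b = grad(img1, y, x), grad(img2, y, x)
            total += (2 * a * b + c) / (a * a + b * b + c)
    return total / (h * w)
```

Identical images score exactly 1.0, and any structural change pulls the score below 1, giving a bounded similarity suitable for linear combination with a color term.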
• Soongi HONG, Yoonsik CHOE, Yong-Goo KIM
Article type: LETTER
Subject area: Image Processing and Video Processing
2016 Volume E99.D Issue 3 Pages 777-780
Published: March 01, 2016
Released on J-STAGE: March 01, 2016
JOURNAL FREE ACCESS
In transcoding, it is well known that refinement of the motion vectors is critical to enhance the quality of transcoded video while significantly reducing transcoding complexity. This paper proposes a novel cost model to estimate the rate-distortion cost of motion vector composition in order to develop a reliable motion vector re-estimation method that has reasonable computation cost. Based on a statistical analysis of motion compensated prediction errors, we design a basic form of the proposed cost model as a function of distance from the optimal motion vector. Simulations with a transcoder employing the proposed cost model demonstrate a significant quality gain over representative video transcoding schemes with no complexity increase.