Special Section on Foundations of Computer Science — New Trends in Algorithms and Theory of Computation —
-
Takashi HORIYAMA
2013 Volume E96.D Issue 3 Pages 399
Published: March 01, 2013
Released on J-STAGE: March 01, 2013
-
Eishi CHIBA, Hiroshi FUJIWARA, Yoshiyuki SEKIGUCHI, Toshihide IBARAKI
Article type: PAPER
2013 Volume E96.D Issue 3 Pages 400-407
Published: March 01, 2013
Released on J-STAGE: March 01, 2013
Flat Panel Displays (FPDs) are manufactured using many pieces of different processing equipment arranged sequentially in a line. Although the constant inter-arrival time of glass substrates in the line (i.e., the tact time) should be kept as short as possible, the collision probability between glass substrates increases as the tact time decreases. Since glass substrates are expensive and fragile, collisions should be avoided. In this paper, we derive a closed-form formula for the approximate collision probability under a model in which the processing time on each piece of equipment is assumed to follow an Erlang distribution. We also compare numerical results of the closed-form formula with computer simulation results for the collision probability.
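As a rough sketch of how such a model can be checked by simulation (an illustrative Monte Carlo toy, not the paper's closed-form derivation; all parameter values below are invented), one can feed substrates into a tandem line every tact time units and count how often a substrate catches up with its predecessor:

```python
import numpy as np

def collision_probability(tact, stages, shape, rate, n_sub=10_000, seed=0):
    """Estimate the probability that a substrate collides somewhere in a
    tandem line: arrivals every `tact` time units, each stage's processing
    time Erlang(shape, rate), i.e. gamma with an integer shape parameter.
    A "collision" is counted when a substrate reaches a stage before the
    previous substrate has left it (no buffer between stages is assumed)."""
    rng = np.random.default_rng(seed)
    S = rng.gamma(shape, 1.0 / rate, size=(n_sub, stages))  # processing times
    depart = np.zeros(stages)      # departure time of the previous substrate
    collisions = 0
    for i in range(n_sub):
        t = i * tact               # arrival time at the first stage
        collided = False
        for j in range(stages):
            if t < depart[j]:      # previous substrate still occupies stage j
                collided = True
                t = depart[j]
            t += S[i, j]           # finish processing at stage j
            depart[j] = t
        collisions += collided
    return collisions / n_sub

print(collision_probability(tact=2.0, stages=5, shape=4, rate=2.5))
```

Decreasing `tact` in this sketch drives the estimated probability up, which is exactly the trade-off the paper quantifies analytically.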
-
Mingyu XIAO, Hiroshi NAGAMOCHI
Article type: PAPER
2013 Volume E96.D Issue 3 Pages 408-418
Published: March 01, 2013
Released on J-STAGE: March 01, 2013
Given a graph G = (V, E) together with a nonnegative integer requirement r: V → Z+ on the vertices, the annotated edge dominating set problem is to find a minimum set M ⊆ E such that each edge in E - M is adjacent to some edge in M, and M contains at least r(v) edges incident on each vertex v ∈ V. The annotated edge dominating set problem is a natural extension of the classical edge dominating set problem, in which the requirement on vertices is zero. The edge dominating set problem is an important graph problem and has been extensively studied. It is well known that the problem is NP-hard, even when the graph is restricted to a planar or bipartite graph with maximum degree 3. In this paper, we show that the annotated edge dominating set problem in graphs with maximum degree 3 can be solved in O*(1.2721^n) time and polynomial space, where n is the number of vertices in the graph. We also show that there is an O*(2.2306^k)-time polynomial-space algorithm to decide whether a graph with maximum degree 3 has an annotated edge dominating set of size k or not.
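A minimal checker for the definition, plus a brute-force solver for tiny graphs (the paper's contribution is the far faster O*(1.2721^n) branching algorithm, which this sketch does not attempt):

```python
import itertools

def is_annotated_eds(V, E, M, r):
    """M is an annotated edge dominating set of G=(V,E) iff every edge
    outside M shares an endpoint with some edge in M, and each vertex v
    has at least r[v] edges of M incident on it."""
    M = {frozenset(e) for e in M}
    E = {frozenset(e) for e in E}
    for e in E - M:                       # domination condition
        if not any(e & f for f in M):
            return False
    for v in V:                           # annotation (requirement) condition
        if sum(1 for f in M if v in f) < r.get(v, 0):
            return False
    return True

def min_annotated_eds(V, E, r):
    """Exhaustive search over all edge subsets, smallest first."""
    for k in range(len(E) + 1):
        for M in itertools.combinations(E, k):
            if is_annotated_eds(V, E, M, r):
                return set(M)
    return None

V = {1, 2, 3, 4}
E = [(1, 2), (2, 3), (3, 4)]
print(min_annotated_eds(V, E, {1: 1}))   # requirement forces an edge at vertex 1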
-
Hirotoshi HONMA, Kodai ABE, Yoko NAKAJIMA, Shigeru MASUYAMA
Article type: PAPER
2013 Volume E96.D Issue 3 Pages 419-425
Published: March 01, 2013
Released on J-STAGE: March 01, 2013
Let Gs = (Vs, Es) be a simple connected graph. A vertex v ∈ Vs is an articulation vertex if the deletion of v and its incident edges from Gs disconnects the graph into at least two connected components. Finding all articulation vertices of a given graph is called the articulation vertex problem. A vertex u ∈ Vs is called a hinge vertex if there exist two vertices x and y in Gs whose distance increases when u is removed. Finding all hinge vertices of a given graph is called the hinge vertex problem. These problems can be applied to improving the stability and robustness of communication network systems. In this paper, we propose linear-time algorithms for the articulation vertex problem and the hinge vertex problem on circular permutation graphs.
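Both definitions can be checked directly by brute force, which makes a useful reference point for the proposed linear-time algorithms (a definition-based networkx sketch, quadratic/cubic rather than linear):

```python
import itertools
import networkx as nx

def articulation_vertices(G):
    """Vertices whose deletion disconnects G (brute force from the definition)."""
    return {v for v in G
            if not nx.is_connected(nx.restricted_view(G, [v], []))}

def hinge_vertices(G):
    """Vertices whose deletion increases the distance between some pair x, y
    (a pair that becomes disconnected counts as distance infinity)."""
    dist = dict(nx.all_pairs_shortest_path_length(G))
    hinges = set()
    for u in G:
        d = dict(nx.all_pairs_shortest_path_length(nx.restricted_view(G, [u], [])))
        for x, y in itertools.combinations(set(G) - {u}, 2):
            if y not in d.get(x, {}) or d[x][y] > dist[x][y]:
                hinges.add(u)
                break
    return hinges

G = nx.cycle_graph(6)
G.add_edge(0, 3)                   # a chord: removing 0 or 3 stretches paths
print(articulation_vertices(G))    # set(): a 2-connected graph has none
print(hinge_vertices(G))           # {0, 3}
```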
-
Masashi KIYOMI, Toshiki SAITOH, Ryuhei UEHARA
Article type: PAPER
2013 Volume E96.D Issue 3 Pages 426-432
Published: March 01, 2013
Released on J-STAGE: March 01, 2013
The PREIMAGE CONSTRUCTION problem, posed by Kratsch and Hemaspaandra, naturally arose from the famous graph reconstruction conjecture and deals with the algorithmic aspects of the conjecture. We present an O(n^8)-time algorithm for PREIMAGE CONSTRUCTION on permutation graphs and an O(n^4(n+m))-time algorithm for PREIMAGE CONSTRUCTION on distance-hereditary graphs, where n is the number of graphs in the input and m is the number of edges in a preimage. Since each graph of the input has n-1 vertices and O(n^2) edges, the input size is O(n^3) (or O(nm)). There are polynomial-time isomorphism algorithms for permutation graphs and distance-hereditary graphs. However, the number of permutation (distance-hereditary) graphs obtained by adding a vertex to a permutation (distance-hereditary) graph is generally exponentially large, so exhaustively checking these graphs does not yield a polynomial-time algorithm. Reducing the number of preimage candidates is therefore the key point.
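For orientation, the input to PREIMAGE CONSTRUCTION is a deck, i.e., the multiset of one-vertex-deleted subgraphs of some unknown preimage; generating a deck (the easy, inverse direction of the problem) looks like this in networkx:

```python
import networkx as nx

def deck(G):
    """Return the deck of G: all one-vertex-deleted subgraphs."""
    return [nx.restricted_view(G, [v], []) for v in G]

G = nx.path_graph(4)                 # a preimage on 4 vertices
for i, card in enumerate(deck(G)):
    print(i, sorted(card.edges()))   # n cards, each on n-1 vertices
```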
-
Kotaro NAKAGAWA, Hiroki YAMAGUCHI
Article type: PAPER
2013 Volume E96.D Issue 3 Pages 433-442
Published: March 01, 2013
Released on J-STAGE: March 01, 2013
We show a phase transition of the first eigenvalue of random (c, d)-regular graphs, in which one vertex has degree c and all other vertices have degree d, for c > d. We investigate a reduction from the first eigenvalue analysis of a general (c, d)-regular graph to that of a tree, and prove that, for any fixed c and d, and for a graph G chosen uniformly at random from the set of all (c, d)-regular graphs with n vertices, the first eigenvalue of G is approximately $\max \{d,\, c / \sqrt{c-d+1} \}$ with high probability.
-
Yuichi ASAHIRO, Hiroshi ETO, Eiji MIYANO
Article type: PAPER
2013 Volume E96.D Issue 3 Pages 443-449
Published: March 01, 2013
Released on J-STAGE: March 01, 2013
Given a connected graph G = (V, E) on n vertices, the Maximum r-Regular Induced Connected Subgraph (r-MaxRICS) problem asks for a maximum-size subset of vertices S ⊆ V such that the induced subgraph G[S] is connected and r-regular. It is known that 2-MaxRICS and 3-MaxRICS are NP-hard. Moreover, 2-MaxRICS cannot be approximated within a factor of n^{1-ε} in polynomial time for any ε > 0 unless P = NP. In this paper, we show that r-MaxRICS is NP-hard for any fixed integer r ≥ 4. Furthermore, we show that for any fixed integer r ≥ 3, r-MaxRICS cannot be approximated within a factor of n^{1/6-ε} in polynomial time for any ε > 0 unless P = NP.
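The object being sought is easy to state in code; the exhaustive search below is exponential, consistent with the hardness results (a networkx sketch for tiny graphs only):

```python
import itertools
import networkx as nx

def is_r_rics(G, S, r):
    """Is G[S] a connected, r-regular induced subgraph?"""
    H = G.subgraph(S)
    return (len(S) > 0 and nx.is_connected(H)
            and all(d == r for _, d in H.degree()))

def max_rics_bruteforce(G, r):
    """Largest S with G[S] connected and r-regular, by exhaustive search."""
    for k in range(len(G), 0, -1):
        for S in itertools.combinations(G, k):
            if is_r_rics(G, S, r):
                return set(S)
    return None

# for r = 2 this is exactly the longest induced cycle
print(max_rics_bruteforce(nx.petersen_graph(), 2))
```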
-
Aleksandar SHURBEVSKI, Hiroshi NAGAMOCHI, Yoshiyuki KARUNO
Article type: PAPER
2013 Volume E96.D Issue 3 Pages 450-456
Published: March 01, 2013
Released on J-STAGE: March 01, 2013
In this paper, we consider a problem of simultaneously optimizing a sequence of graphs and a route which exhaustively visits the vertices from each pair of successive graphs in the sequence. This type of problem arises from the repetitive routing of grasp-and-delivery robots used in the production of printed circuit boards. The problem is formulated as follows. We are given a metric graph G* = (V*, E*), a set of m+1 disjoint subsets C_i ⊆ V* of vertices with |C_i| = n, i = 0, 1, ..., m, and a starting vertex s ∈ C_0. We seek a sequence π = (C_{i_1}, C_{i_2}, ..., C_{i_m}) of the subsets of vertices and a shortest walk P which visits all (m+1)n vertices in G* in such a way that, after starting from s, the walk alternately visits the vertices in C_{i_{k-1}} and C_{i_k}, for k = 1, 2, ..., m (i_0 = 0). Thus, P is a walk with m(2n-1) edges obtained by concatenating m alternating Hamiltonian paths between C_{i_{k-1}} and C_{i_k}, k = 1, 2, ..., m. In this paper, we show that an approximate sequence of subsets of vertices and an approximate walk with at most three times the optimal route length can be found in polynomial time.
-
Masaya NAKAHARA, Shirou MARUYAMA, Tetsuji KUBOYAMA, Hiroshi SAKAMOTO
Article type: PAPER
2013 Volume E96.D Issue 3 Pages 457-464
Published: March 01, 2013
Released on J-STAGE: March 01, 2013
A scalable pattern discovery method based on compression is proposed. A string is representable by a context-free grammar that derives the string deterministically. In this framework of grammar-based compression, the aim of the algorithm is to output as small a grammar as possible, and this optimization problem is approximately solvable. Among such approximation algorithms, the compressor based on edit-sensitive parsing (ESP) is especially suitable for detecting maximal common substrings as well as long frequent substrings. Based on ESP, we design a linear-time algorithm that finds all frequent patterns in a string approximately, and we prove several lower bounds that guarantee the length of the extracted patterns. We also examine the performance of our algorithm through experiments on biological sequences and other compressible real-world texts. Compared to other practical algorithms, our algorithm is faster and more scalable on large and repetitive strings.
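To fix the idea of grammar-based compression, here is a Re-Pair-style toy compressor that repeatedly replaces the most frequent adjacent symbol pair with a fresh nonterminal (a greedy scheme shown only for intuition; the paper's compressor is ESP, not this):

```python
from collections import Counter

def repair(text):
    """Greedy grammar-based compression: while some adjacent pair occurs
    at least twice, replace it everywhere by a new nonterminal."""
    seq, rules, next_id = list(text), {}, 0
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, freq = pairs.most_common(1)[0]
        if freq < 2:
            break
        nt = f"R{next_id}"
        next_id += 1
        rules[nt] = pair                  # grammar rule nt -> pair
        out, i = [], 0
        while i < len(seq):               # non-overlapping left-to-right scan
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(nt)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return seq, rules

seq, rules = repair("abracadabra abracadabra")
print(seq, rules)   # short start sequence + rules = the grammar
```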
-
Ji-Won HUH, Shuji ISOBE, Eisuke KOIZUMI, Hiroki SHIZUYA
Article type: PAPER
2013 Volume E96.D Issue 3 Pages 465-471
Published: March 01, 2013
Released on J-STAGE: March 01, 2013
In this paper, we investigate the relationship between length-decreasing self-reducibility and many-one-like reducibilities for partial multivalued functions. We show that if any parsimonious (many-one or metric many-one) complete function for NPMV (or NPMV_g) is length-decreasing self-reducible, then any function in NPMV (or NPMV_g) has a polynomial-time computable refinement. This result implies that there exists an NPMV (or NPMV_g)-complete function which is not length-decreasing self-reducible unless P = NP.
-
Keiichirou KUSAKARI
Article type: PAPER
2013 Volume E96.D Issue 3 Pages 472-480
Published: March 01, 2013
Released on J-STAGE: March 01, 2013
For simply-typed term rewriting systems (STRSs) and higher-order rewrite systems (HRSs) à la Nipkow, we proposed a method for proving termination, namely the static dependency pair method. The method combines the dependency pair method introduced for first-order rewrite systems with the notion of strong computability introduced for typed λ-calculi. It analyzes a static recursive structure based on definition dependency; by solving suitable constraints generated by the analysis, we can prove termination. In this paper, we extend the method to rewriting systems for functional programs (RFPs) with product, algebraic data, and ML-polymorphic types. Whereas the type system of STRSs contains only product and simple types and the type system of HRSs contains only simple types, our RFPs allow product types, type constructors (algebraic data types), and type variables (ML-polymorphic types). Hence, our RFPs are more representative of existing functional programs than STRSs and HRSs, and our result is a significant step toward applying theoretical rewriting techniques to actual problems, that is, to proving the termination of existing functional programs.
-
Kazuya HARAGUCHI, Hirotaka ONO
Article type: PAPER
2013 Volume E96.D Issue 3 Pages 481-488
Published: March 01, 2013
Released on J-STAGE: March 01, 2013
BlockSum, also known as KeisanBlock in Japanese, is a Latin-square-filling puzzle, like Sudoku. In this paper, we prove that deciding whether a given instance of BlockSum has a solution is NP-complete.
-
Yuya HIGASHIKAWA, Naoki KATOH
Article type: PAPER
2013 Volume E96.D Issue 3 Pages 489-497
Published: March 01, 2013
Released on J-STAGE: March 01, 2013
This paper considers online vertex exploration problems in a simple polygon, where, starting from a point inside the polygon, a searcher is required to visit all the polygon's vertices and finally return to the initial position as quickly as possible. Information about the polygon is given online: as the exploration proceeds, the searcher gains more information about the polygon. We give a 1.219-competitive algorithm for this problem. We also study the case of a rectilinear simple polygon, and give a 1.167-competitive algorithm.
-
Sung Kwon KIM
Article type: LETTER
2013 Volume E96.D Issue 3 Pages 498-501
Published: March 01, 2013
Released on J-STAGE: March 01, 2013
Given a tree T with edge lengths and edge weights, and a value B, the length-constrained heaviest path problem is to find a path in T with maximum path weight whose path length is at most B. We present a linear-time algorithm for the problem when the edge lengths are uniform, i.e., all one. This algorithm, with slight modification, can be used to find the heaviest path of length exactly B in T in linear time.
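A quadratic brute force for the unit-length case pins down the problem statement (not the letter's linear-time algorithm; the example tree and weights are invented):

```python
import networkx as nx

def heaviest_path_bruteforce(T, B, weight="weight"):
    """Max total weight over all paths in tree T with at most B edges,
    by enumerating the unique path between every vertex pair (O(n^2) pairs)."""
    best, best_path = float("-inf"), None
    for u in T:
        for v in T:
            path = nx.shortest_path(T, u, v)   # the unique tree path
            if len(path) - 1 <= B:
                w = sum(T[a][b][weight] for a, b in zip(path, path[1:]))
                if w > best:
                    best, best_path = w, path
    return best, best_path

T = nx.Graph()
T.add_weighted_edges_from([(0, 1, 5), (1, 2, -2), (1, 3, 4), (3, 4, 3)])
print(heaviest_path_bruteforce(T, B=2))        # (9, [0, 1, 3])
```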
-
Chuzo IWAMOTO, Yuta MUKAI, Yuichi SUMIDA, Kenichi MORITA
Article type: LETTER
2013 Volume E96.D Issue 3 Pages 502-505
Published: March 01, 2013
Released on J-STAGE: March 01, 2013
We study the computational complexity of the following two-player game. The instance is a graph G = (V, E), an initial vertex s ∈ V, and a target set T ⊆ V. A "cat" is initially placed on s. Player 1 chooses a vertex in the graph and removes it and its incident edges from the graph. Player 2 moves the cat from the current vertex to one of the adjacent vertices. Players 1 and 2 alternate removing a vertex and moving the cat, respectively. The game continues until either the cat reaches a vertex of T or the cat cannot be moved. Player 1 wins if and only if the cat cannot be moved before it reaches a vertex of T. It is shown that deciding whether player 1 has a forced win in the game on G is PSPACE-complete.
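The game is easy to state as a brute-force game-tree search (exponential, in line with PSPACE-completeness). The sketch below assumes player 1 may remove any remaining vertex except the cat's current one, a detail the abstract leaves open:

```python
from functools import lru_cache

def player1_wins(edges, s, targets):
    """Decide whether player 1 has a forced win; player 1 moves first."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    nodes = frozenset(adj)

    @lru_cache(maxsize=None)
    def p1_to_move(alive, cat):
        choices = alive - {cat}
        if not choices:                   # nothing removable: player 2 moves
            return p2_to_move(alive, cat)
        # player 1 wins if SOME removal leaves player 2 without a saving move
        return any(p2_to_move(alive - {v}, cat) for v in choices)

    @lru_cache(maxsize=None)
    def p2_to_move(alive, cat):
        moves = adj[cat] & alive
        if not moves:
            return True                   # cat is stuck: player 1 wins
        if moves & targets:
            return False                  # cat escapes to a target
        return all(p1_to_move(alive, v) for v in moves)

    return p1_to_move(nodes, s)

edges = [(0, 1), (0, 2), (1, 3), (2, 3)]
print(player1_wins(edges, s=0, targets=frozenset({3})))   # True: remove 3 first
```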
-
Shigeru AKAMATSU
2013 Volume E96.D Issue 3 Pages 506
Published: March 01, 2013
Released on J-STAGE: March 01, 2013
-
Masashi KOMORI, Hiroko KAMIDE, Satoru KAWAMURA, Chika NAGAOKA
Article type: PAPER
Subject area: Face Perception and Recognition
2013 Volume E96.D Issue 3 Pages 507-513
Published: March 01, 2013
Released on J-STAGE: March 01, 2013
This study investigated the relationship between social skills and facial asymmetry in facial expressions. Three-dimensional facial landmark data of facial expressions (neutral, happy, and angry) were obtained from Japanese participants (n = 62). Following the facial expression task, each participant completed the KiSS-18 (Kikuchi's Scale of Social Skills; Kikuchi, 2007). Using a generalized Procrustes analysis, faces and their mirror-reversed versions were represented as points on a hyperplane. The asymmetry of each individual face was defined as the Euclidean distance between the face and its mirror-reversed face on this plane. Subtracting the asymmetry level of an individual's neutral face from the asymmetry level of the face showing a target emotion defined the index of "expression asymmetry" for that emotion. Correlation coefficients between KiSS-18 scores and expression asymmetry scores were computed for both happy and angry expressions. Significant negative correlations between KiSS-18 scores and expression asymmetries were found for both expressions. The results indicate that symmetry in facial expressions increases with higher levels of social skills.
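A minimal sketch of such an asymmetry index, assuming landmarks arrive as (k, 3) arrays and that mirror reversal is an x-axis flip followed by re-pairing of left/right landmarks (the `mirror_index` permutation and all data below are made-up placeholders, not the study's):

```python
import numpy as np
from scipy.spatial import procrustes

def asymmetry(landmarks, mirror_index):
    """Procrustes disparity between a 3-D landmark configuration and its
    mirror image: reflect across the mid-sagittal plane, then relabel
    corresponding left/right landmark pairs."""
    mirrored = landmarks.copy()
    mirrored[:, 0] *= -1
    mirrored = mirrored[mirror_index]
    _, _, disparity = procrustes(landmarks, mirrored)
    return disparity

def expression_asymmetry(expr_lm, neutral_lm, mirror_index):
    """Asymmetry added by an expression, relative to the same person's neutral face."""
    return asymmetry(expr_lm, mirror_index) - asymmetry(neutral_lm, mirror_index)

rng = np.random.default_rng(0)
lm = rng.normal(size=(10, 3))        # toy landmark set
idx = np.arange(10)                  # identity pairing, for illustration only
print(expression_asymmetry(lm + 0.1 * rng.normal(size=(10, 3)), lm, idx))
```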
-
Naiwala P. CHANDRASIRI, Ryuta SUZUKI, Nobuyuki WATANABE, Hiroshi YAMAD ...
Article type: PAPER
Subject area: Face Perception and Recognition
2013 Volume E96.D Issue 3 Pages 514-522
Published: March 01, 2013
Released on J-STAGE: March 01, 2013
Face perception and recognition have attracted increasing attention in multidisciplinary fields such as engineering, psychology, and neuroscience, with advances in physical/physiological measurement and data analysis technologies. In this paper, our main interest is in building computational models of human face recognition based on psychological experiments. We especially focus on modeling the characteristics of human recognition of average faces in the dimension of distinctiveness. Psychological experiments were carried out to measure the distinctiveness of face images, and their results are explained by computer analysis of the images. Two psychological experiments were performed: 1) a classical experiment on distinctiveness rating and 2) a novel experiment on recognition of an average face. In the latter experiment, we examined how the average face of two face images was recognized by a human in a similarity test with respect to the original images used for the calculation of the average face. To explain the results of the psychological experiments, eigenface spaces were constructed based on Principal Component Analysis (PCA). Significant correlation was found between human and PCA-based computer recognition results. Emulation of human face recognition is one of the expected applications of this research.
-
Yikui ZHAI, Junying GAN, Jinwen LI, Junying ZENG, Ying XU
Article type: PAPER
Subject area: Face Perception and Recognition
2013 Volume E96.D Issue 3 Pages 523-530
Published: March 01, 2013
Released on J-STAGE: March 01, 2013
Driven by the security demands of societal development, real-time face recognition has been receiving more and more attention. In this paper, a real-time face recognition system based on Local Binary Patterns (LBP) plus Improved Biomimetic Pattern Recognition (BPR) is proposed. The system comprises three main steps: a real-time color face detection process, a feature extraction process, and a recognition process. First, a color face detector is proposed that performs face detection and eye alignment simultaneously; in the feature extraction step, the LBP method is adopted to eliminate the negative effect of lighting heterogeneity. Finally, an improved BPR method with selective-sampling construction is applied in the recognition step. Experiments on our own database, named the WYU Database, as well as the PUT and AR Databases, show that this real-time face recognition system works efficiently and achieves performance comparable with state-of-the-art systems.
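The LBP step is standard; a quick sketch with scikit-image (parameter values are illustrative, not the paper's):

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray, P=8, R=1.0):
    """Uniform LBP code image -> normalized histogram feature vector.
    LBP encodes local gray-level differences, which makes it fairly
    insensitive to monotonic illumination changes."""
    codes = local_binary_pattern(gray, P, R, method="uniform")
    n_bins = P + 2                       # uniform patterns + one "other" bin
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins))
    return hist / hist.sum()

gray = np.random.default_rng(0).integers(0, 256, size=(64, 64)).astype(np.uint8)
print(lbp_histogram(gray).round(3))
```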
-
Xi LI, Tomokazu TAKAHASHI, Daisuke DEGUCHI, Ichiro IDE, Hiroshi MURASE
Article type: PAPER
Subject area: Face Perception and Recognition
2013 Volume E96.D Issue 3 Pages 531-537
Published: March 01, 2013
Released on J-STAGE: March 01, 2013
This paper presents an approach to cross-pose face recognition by virtual view generation using an appearance-clustering-based local view transition model. Previously, the traditional global-pattern-based view transition model (VTM) method was extended to its local version, called LVTM, which learns the linear transformation of pixel values between frontal and non-frontal image pairs from training images using a partial image in a small region for each location, instead of transforming the entire image pattern. In this paper, we show that the accuracy of the appearance transition model and the recognition rate can be further improved by better exploiting the inherent linear relationship between frontal/non-frontal face image patch pairs. This is based on the observation that variations in appearance caused by pose are closely related to the corresponding 3D structure, and intuitively, frontal/non-frontal patch pairs from more similar local 3D face structures should have a stronger linear relationship. Thus, for each specific location, instead of learning a common transformation as in the LVTM, the corresponding local patches are first clustered based on an appearance similarity distance metric, and the transition models are then learned separately for each cluster. In the testing stage, each local patch of the input non-frontal probe image is transformed using the learned local view transition model corresponding to the most visually similar cluster. Experimental results on a real-world face dataset demonstrate the superiority of the proposed method in terms of recognition rate.
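Schematically, the cluster-then-learn idea for one patch location might look as follows (sklearn stand-ins with invented shapes and synthetic data; the paper's actual training data are frontal/non-frontal patch pairs):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# toy training data for ONE patch location: vectorized non-frontal patches
# and their frontal counterparts
nonfrontal = rng.normal(size=(200, 64))
frontal = nonfrontal @ rng.normal(size=(64, 64)) * 0.1 + rng.normal(size=(200, 64))

k = 4
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(nonfrontal)
# one linear transition model per appearance cluster
models = [LinearRegression().fit(nonfrontal[km.labels_ == c],
                                 frontal[km.labels_ == c]) for c in range(k)]

# testing: route a probe patch to its most similar cluster's model
probe = rng.normal(size=(1, 64))
cluster = km.predict(probe)[0]
virtual_frontal_patch = models[cluster].predict(probe)
print(cluster, virtual_frontal_patch.shape)
```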
-
Dong-Ju KIM, Sang-Heon LEE, Myoung-Kyu SHON
Article type: PAPER
Subject area: Face Perception and Recognition
2013 Volume E96.D Issue 3 Pages 538-549
Published: March 01, 2013
Released on J-STAGE: March 01, 2013
This paper proposes a novel face recognition approach using a centralized gradient pattern image and image-covariance-based facial feature extraction algorithms, i.e., a two-dimensional principal component analysis and an alternative two-dimensional principal component analysis. The centralized gradient pattern image is obtained by an AND operation of a modified center-symmetric local binary pattern image and a modified local directional pattern image, and it is then utilized as the input image for the facial feature extraction based on image covariance. To verify the proposed face recognition method, a performance evaluation was carried out using various recognition algorithms on the Yale B, extended Yale B, and CMU-PIE illumination databases. The experimental results show that the proposed method achieves the best recognition accuracy compared to the other approaches, and we confirmed that the proposed approach is robust to illumination variation.
-
Wei ZHOU, Sei-ichiro KAMATA
Article type: PAPER
Subject area: Face Perception and Recognition
2013 Volume E96.D Issue 3 Pages 550-558
Published: March 01, 2013
Released on J-STAGE: March 01, 2013
Linear Discriminant Analysis (LDA) is a well-known feature extraction method for supervised subspace learning in statistical pattern recognition. In this paper, a novel method of LDA based on a new L1-norm optimization technique, together with its variants, is proposed. The conventional LDA is sensitive to the presence of outliers, since it uses the L2-norm to measure the between-class and within-class distances. In addition, the conventional LDA often suffers from the so-called small sample size (3S) problem, since in many applications, such as face recognition, the number of samples is smaller than the dimension of the feature space. Based on the L1-norm, the proposed methods have several advantages: first, they are robust to outliers, because the L1-norm they utilize is less sensitive to outliers; second, they have no 3S problem; third, they are invariant to rotations. The proposed methods are capable of substantially reducing the influence of outliers, resulting in robust classification. Performance assessment in face recognition applications shows that the proposed approaches are more effective at addressing the outlier issue than traditional ones.
-
Mariko HIROKAWA, Yoshimitsu KUROKI
Article type: LETTER
Subject area: Face Perception and Recognition
2013 Volume E96.D Issue 3 Pages 559-561
Published: March 01, 2013
Released on J-STAGE: March 01, 2013
PCA-L1 (principal component analysis based on L1-norm maximization) is an approximate solution of L1-PCA (PCA based on the L1-norm) and is robust against outliers compared with traditional PCA. However, the more dimensions the feature space has, the more calculation time PCA-L1 consumes. This paper focuses on the initialization procedure of the PCA-L1 algorithm and proposes a fast variant of PCA-L1 using Gram-Schmidt orthogonalization. Experimental results on face recognition show that the proposed method works faster than conventional PCA-L1 without a decrease in recognition accuracy.
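For reference, classical Gram-Schmidt orthogonalization in a few lines of numpy (where exactly this plugs into PCA-L1's initialization is the letter's contribution and is not reproduced here):

```python
import numpy as np

def gram_schmidt(vectors, eps=1e-12):
    """Orthonormalize the rows of `vectors` by classical Gram-Schmidt."""
    basis = []
    for v in vectors:
        w = v - sum(np.dot(v, b) * b for b in basis)   # remove projections
        norm = np.linalg.norm(w)
        if norm > eps:                                  # skip dependent vectors
            basis.append(w / norm)
    return np.array(basis)

Q = gram_schmidt(np.random.default_rng(0).normal(size=(4, 6)))
print(np.allclose(Q @ Q.T, np.eye(len(Q))))            # True: orthonormal rows
```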
-
Akisato KIMURA, Ryo YONETANI, Takatsugu HIRAYAMA
Article type: INVITED SURVEY PAPER
Subject area: Image Recognition, Computer Vision
2013 Volume E96.D Issue 3 Pages 562-578
Published: March 01, 2013
Released on J-STAGE: March 01, 2013
We humans are easily able to instantaneously detect the regions in a visual scene that are most likely to contain something of interest. Exploiting this pre-selection mechanism, called visual attention, for image and video processing systems would make them more sophisticated and therefore more useful. This paper briefly describes various computational models of human visual attention and their development, as well as related psychophysical findings. In particular, our objective is to carefully distinguish several types of studies related to human visual attention and saliency as a measure of attentiveness, and to provide a taxonomy from several viewpoints, such as the main objective, the use of additional cues, and mathematical principles. This survey finally discusses possible future directions for research into human visual attention and saliency computation.
-
Pei-Chun LIN, Junzo WATADA, Berlin WU
Article type: PAPER
Subject area: Fundamentals of Information Systems
2013 Volume E96.D Issue 3 Pages 579-588
Published: March 01, 2013
Released on J-STAGE: March 01, 2013
The objective of our research is to build a statistical test that can evaluate the different risks of a portfolio selection model with fuzzy data. The central points and radii of fuzzy numbers are used to determine the portfolio selection model, and we statistically evaluate the best return by a fuzzy statistical test. Empirical studies are presented to illustrate the risk evaluation of the portfolio selection model with interval values. We conclude that the fuzzy statistical test enables us to evaluate a stable expected return and low-risk investment with different choices for k, which indicates the risk level. The results of numerical examples show that our method is suitable for short-term investments.
-
Kyongjin JO, Seon Wook KIM, Jong-Kook KIM
Article type: PAPER
Subject area: Fundamentals of Information Systems
2013 Volume E96.D Issue 3 Pages 589-600
Published: March 01, 2013
Released on J-STAGE: March 01, 2013
The size and complexity of software in computer systems, and even in consumer electronics, is drastically and continuously increasing, and compilation time is increasing with it. For example, compiling the platform software of some mobile phones takes several hours. In order to reduce the compilation time, this paper proposes a Distributed Scalable Compilation Tool, called DiSCo, where full compilation passes, such as preprocessing, compilation, and even linking, are performed at remote machines, i.e., in parallel. To the best of our knowledge, DiSCo is the first distributed compiler to support complete distributed processing in all the compilation passes. We use an extensive dependency analysis in parsing compilation commands to exploit higher command-level parallelism, and we apply a file caching method and a network-drive protocol to reduce the remote compilation overhead and simplify the implementation. Lastly, we minimize load imbalance and remote machine management overhead with a heuristic static scheduling method that predicts compilation times and accounts for the overheads incurred by the compilation process. Our evaluation using four large mobile applications and eight GNU applications shows that the performance of DiSCo is scalable and close to that of profile-based scheduling.
-
Hung K. NGUYEN, Peng CAO, Xue-Xiang WANG, Jun YANG, Longxing SHI, Min ...
Article type: PAPER
Subject area: Computer System
2013 Volume E96.D Issue 3 Pages 601-615
Published: March 01, 2013
Released on J-STAGE: March 01, 2013
REMUS-II (REconfigurable MUltimedia System 2) is a coarse-grained dynamically reconfigurable computing system for multimedia and communication baseband processing. This paper proposes a real-time H.264 baseline profile encoder on REMUS-II. First, we propose an overall flow for mapping algorithms onto the REMUS-II platform and illustrate it by implementing the H.264 encoder. Second, parallelization and pipelining techniques are employed to fully exploit the abundant computing resources of REMUS-II, thus increasing the total computing throughput and coping with the high computational complexity of the H.264 encoder. In addition, data-reuse schemes are used to increase the data-reuse ratio and therefore reduce the required data bandwidth. Third, we propose a scheduling scheme to manage run-time reconfiguration of the system; the scheduler is also responsible for synchronizing the data communication between tasks and handling conflicts between hardware resources. Experimental results show that the REMUS-MB (the REMUS-II version for mobile applications) system can perform real-time H.264/AVC baseline profile encoding. The encoder can encode CIF@30fps video sequences with two reference frames and a maximum search range of [-16, 15]. The implementation can thereby be applied to handheld devices targeted at mobile multimedia applications. The REMUS-MB platform is designed and synthesized using a TSMC 65 nm low-power technology. The die size of REMUS-MB is 13.97 mm^2, and it consumes, on average, about 100 mW while working at 166 MHz. To the best of our knowledge, this is the first implementation of an H.264 encoding algorithm on a coarse-grained dynamically reconfigurable computing system reported in the literature.
-
Wei GE, Zhi QI, Yue DU, Lu MA, Longxing SHI
Article type: PAPER
Subject area: Computer System
2013 Volume E96.D Issue 3 Pages 616-623
Published: March 01, 2013
Released on J-STAGE: March 01, 2013
Coarse-Grained Reconfigurable Architectures (CGRAs) have been proposed as a new choice for enhancing parallel processing capability. Data transfer throughput between the Reconfigurable Cell Array (RCA) and on-chip local memory is usually the main performance bottleneck of CGRAs. To relieve this stress, we propose a novel data transfer strategy, called Heuristic Data Prefetch and Reuse (HDPR), for the first time in the case of explicit CGRAs. The HDPR strategy provides not only a flexible data access schedule but also the high data throughput needed to realize fast pipelined implementations of various loop kernels. To improve data utilization efficiency, a dual-bank, cache-like data reuse structure is proposed. Furthermore, a heuristic data prefetch is introduced to decrease the data access latency. Experimental results demonstrate that, compared with conventional explicit data transfer strategies, our work achieves a significant speedup of, on average, 1.73 times, at the expense of only a 5.86% increase in area.
-
Song-Hyon KIM, Inchul SONG, Kyong-Ha LEE, Yoon-Joon LEE
Article type: PAPER
Subject area: Data Engineering, Web Information Systems
2013 Volume E96.D Issue 3 Pages 624-633
Published: March 01, 2013
Released on J-STAGE: March 01, 2013
Subgraph matching is a fundamental operation for querying graph-structured data. Due to potential errors and noise in real-world graph data, exact subgraph matching is sometimes inappropriate in practice. In this paper, we consider an approximate subgraph matching model that allows missing edges; based on this model, approximate subgraph matching finds all occurrences of a given query graph in a database graph, allowing missing edges. A straightforward approach is to first generate the query subgraphs of a given query graph by deleting edges, and then perform exact subgraph matching for each query subgraph. In this paper, we propose a sharing-based approach to approximate subgraph matching, called SASUM. Our method is based on the fact that query subgraphs are highly overlapped; due to this overlapping nature, the matches of a query subgraph can be computed from the matches of a smaller query subgraph, which reduces the number of query subgraphs that require expensive exact subgraph matching. Our method uses a lattice framework to identify sharing opportunities between query subgraphs. To further reduce the number of graphs that need exact subgraph matching, SASUM generates small base graphs that are shared by query subgraphs and chooses the minimum number of base graphs whose matches are used to derive the matching results of all query subgraphs. A comprehensive set of experiments shows that our approach outperforms the state-of-the-art approach by orders of magnitude in terms of query execution time.
-
Gang WANG, Li ZHANG, Yonggang HUANG, Yan SUN
Article type: PAPER
Subject area: Data Engineering, Web Information Systems
2013 Volume E96.D Issue 3 Pages 634-642
Published: March 01, 2013
Released on J-STAGE: March 01, 2013
How a web service stands out among functionally similar services is a key concern for service providers. QoS is a distinct and decisive factor in service selection among functionally similar services; therefore, how to design services that meet customers' QoS requirements is an urgent problem for service providers. This paper proposes an approach using QFD (Quality Function Deployment), a quality methodology, to transfer services' QoS requirements into characteristics of service design attributes. Fuzzy sets are utilized to deal with subjective and vague assessments, such as the importance of QoS properties. A TCI (Technical Competitive Index) is defined to compare the technical competitive capacity of a web service with those of other functionally similar services in terms of QoS. Optimized target values of the service design attributes are determined by a GA (Genetic Algorithm), so as to make the technical performance of the improved service higher than that of any rival service product with the lowest improvement effort. Finally, we evaluate candidate improvement solutions on cost-effectiveness. As the output of the QFD process, the optimization targets and the priority order of the service design attributes can be used as an important basis for developing and improving service products.
-
Xiangxu MENG, Xiaodong WANG, Xinye LIN
Article type: PAPER
Subject area: Data Engineering, Web Information Systems
2013 Volume E96.D Issue 3 Pages 643-654
Published: March 01, 2013
Released on J-STAGE: March 01, 2013
GPS trajectory databases serve as the basis for many intelligent applications that need to extract trajectories for further processing or mining. For such tasks, methods based on spatio-temporal range queries, which find all sub-trajectories within a given spatial extent and time interval, are commonly used. However, the history trajectory indexes of such methods suffer from two problems. First, temporal and spatial factors are not considered simultaneously, resulting in low performance when processing spatio-temporal queries. Second, the efficiency of the indexes is sensitive to query size: query performance changes dramatically as the query size changes. This paper proposes a workload-aware Adaptive OcTree-based Trajectory clustering Index (ATTI) aimed at optimizing trajectory storage and index performance. The contributions are threefold. First, the distribution and time delay of trajectory storage are introduced into the cost model of spatio-temporal range queries. Second, the distribution of the spatial division is dynamically adjusted based on the GPS update workload. Third, a query-workload-adaptive mechanism is proposed based on a virtual OcTree forest. A wide range of experiments was carried out on the Microsoft GeoLife dataset, and the results show that the query delay of ATTI can be about 50% shorter than that of the nested index.
-
Yong-Jun YOU, Sung-Do CHI, Jae-Ick KIM
Article type: PAPER
Subject area: Information Network
2013 Volume E96.D Issue 3 Pages 655-662
Published: March 01, 2013
Released on J-STAGE: March 01, 2013
This paper proposes an agent-based modeling and simulation methodology for analyzing the tactical and operational effectiveness of a warfare environment. To do this, we adopt an advanced agent modeling principle, HEAP (Hierarchical Encapsulation and Abstraction Principle), as well as a hierarchical modeling and simulation framework, SES/MB (System Entity Structure/Model Base). The proposed methodology is differentiated from other conventional agent-based defense M&S approaches in that (i) it supports an intelligent hierarchical multi-agent architecture, and (ii) it provides an efficient mechanism for analyzing the strategic and operational effectiveness of a warfare environment involving multiple platforms. The proposed methodology is successfully applied to a two-on-two warship warfare simulation for analyzing tactical effectiveness.
-
Xu LI, Kai LU, Xiaoping WANG, Bin DAI, Xu ZHOU
Article type: PAPER
Subject area: Dependable Computing
2013 Volume E96.D Issue 3 Pages 663-672
Published: March 01, 2013
Released on J-STAGE: March 01, 2013
Existing large-scale systems suffer from various hardware/software failures, motivating research on fault-tolerance techniques. Checkpoint-restart techniques are widely applied fault-tolerance approaches, especially in scientific computing systems, but checkpointing overhead largely influences overall system performance. Recently, emerging byte-addressable persistent memory technologies, such as phase change memory (PCM), have made it possible to implement checkpointing at arbitrary data granularity. However, the impact of data granularity on checkpointing cost has not been fully addressed. In this paper, we investigate how data granularity influences the performance of a checkpoint system. Further, we design and implement a high-performance checkpoint system named AG-ckpt, a hybrid-granularity incremental checkpointing scheme based on (1) low-cost modified-memory detection and (2) fine-grained memory duplication. Moreover, we formalize the performance-granularity relationship of checkpointing systems through a mathematical model and obtain the optimum solutions. We conduct experiments on several typical benchmarks to verify the performance gain of our design. Compared to conventional incremental checkpointing, the results show that AG-ckpt can reduce the checkpoint data amount by up to 50% and provides a speedup of 1.2x-1.3x in checkpoint efficiency.
-
Hiroshi TAKENOUCHI, Masataka TOKUMARU, Noriaki MURANAKA
Article type: PAPER
Subject area: Human-computer Interaction
2013 Volume E96.D Issue 3 Pages 673-680
Published: March 01, 2013
Released on J-STAGE: March 01, 2013
We present an Interactive Tabu Search (ITS) algorithm to reduce the evaluation load on Interactive Evolutionary Computation (IEC) users. Most previous IEC studies used an evaluation interface that requires users to provide evaluation values for all candidate solutions; the user's burden with such an interface is large. We therefore propose ITS, in which users simply choose their favorite among the presented candidate solutions. Tabu Search (TS) is a recognized optimization technique, and ITS evaluation is simpler than Interactive Genetic Algorithm (IGA) evaluation, in which users provide evaluation values for all candidate solutions; ITS is thus effective in reducing the user evaluation load. We evaluated the performance of the proposed ITS against a Normal IGA (NIGA), a conventional IGA with 10-stage evaluation, using a numerical simulation with an evaluation agent that imitates human preferences (Kansei). We also implemented an ITS evaluation for a running-shoes-design system and examined its effectiveness through an experiment with real users. The simulation results showed that the evolutionary performance of ITS is better than that of NIGA. In addition, in an evaluation experiment with 21 subjects in their 20s, the satisfaction levels for the candidates generated by ITS and NIGA were approximately equal, and it was easier for the test subjects to evaluate candidate solutions with ITS than with NIGA.
-
Cheng CHENG, Bilan ZHU, Masaki NAKAGAWA
Article type: PAPER
Subject area: Pattern Recognition
2013 Volume E96.D Issue 3 Pages 681-689
Published: March 01, 2013
Released on J-STAGE: March 01, 2013
This paper presents an approach based on character recognition for searching for keywords in on-line handwritten Japanese text. It employs an on-line character classifier and an off-line classifier, or a combined classifier, to produce recognition candidates, and it searches for keywords in the lattice of candidates. It integrates scores for recognizing individual characters and their geometric context, using quadratic discriminant function (QDF) or support vector machine (SVM) models to evaluate the geometric features of individual characters and the relationships between characters. This paper also presents an approach based on feature matching that employs on-line or off-line features. We evaluated three recognition-based methods, two feature-matching-based methods, and ideal cases of the latter, and conclude that the approach based on character recognition outperforms that based on feature matching.
-
Trung Hieu BUI, Eitaku NOBUYAMA, Takeshi SAITOH
Article type: PAPER
Subject area: Pattern Recognition
2013 Volume E96.D Issue 3 Pages 690-698
Published: March 01, 2013
Released on J-STAGE: March 01, 2013
Estimating a proper location of the vanishing point from a single road image without any known camera parameters is a challenging problem, due to the limited information in the input image. Most edge-based methods for vanishing point detection only work well for structured roads with clearly painted lines or distinct boundaries, and usually fail on unstructured roads lacking sharply defined, smoothly curving edges. To overcome this limitation, texture-based methods for vanishing point detection have been widely published. These methods typically calculate the texture orientation at every pixel of the road image using directional filter banks, such as the Gabor wavelet filter, and seek the vanishing point by a voting scheme. A local adaptive soft voting method for obtaining the vanishing point was proposed in a previous study. Although this method is more effective and faster than earlier texture-based methods, its computational cost is still high due to the large number of scanned pixels, and it produces estimation errors in some images for which the radius of the proposed half-disk voting region is not large enough. The goal of this paper is to reduce the computational cost and improve the performance of the algorithm. We therefore propose a novel local soft voting method in which the number of scanned pixels is much reduced and a new vanishing point candidate region is introduced to improve the estimation accuracy. The proposed method has been implemented and tested on 1000 road images containing large variations in color, texture, lighting condition, and surrounding environment. The experimental results demonstrate that this new voting method is both efficient and effective in detecting the vanishing point from a single road image, and requires much less computational cost than the previous voting method.
-
Santi NURATCH, Panuthat BOONPRAMUK, Chai WUTIWIWATCHAI
Article type: PAPER
Subject area: Speech and Hearing
2013 Volume E96.D Issue 3 Pages 699-707
Published: March 01, 2013
Released on J-STAGE: March 01, 2013
This paper presents a new technique for smoothing speech feature vectors for text-independent speaker verification using an adaptive band-pass IIR filter. The filter is designed by considering the probability density of the modulation-frequency components of an M-dimensional feature vector; each dimension of the feature vector is processed and filtered separately. The initial filter parameters, the low-cutoff and high-cutoff frequencies, are first determined from the global mean of the probability densities computed from all feature vectors of a given speech utterance. The cutoff frequencies are then adapted over time, i.e., at every frame vector, in both the low-frequency and high-frequency bands, based on the global mean and the standard deviation of the feature vectors. The filtered feature vectors are used in an SVM-GMM supervector speaker verification system. The NIST Speaker Recognition Evaluation 2006 (SRE06) core test is used for evaluation. Experimental results show that the proposed technique clearly outperforms a baseline system using a conventional RelAtive SpecTrA (RASTA) filter.
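The filtering step can be sketched with a fixed Butterworth band-pass standing in for the paper's adaptive IIR design (the cutoff values and frame rate below are invented):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_features(feats, low_hz, high_hz, frame_rate, order=2):
    """Band-pass filter each dimension of a (frames, dims) feature matrix
    along the time axis, i.e., in the modulation-frequency domain."""
    b, a = butter(order, [low_hz, high_hz], btype="bandpass", fs=frame_rate)
    return filtfilt(b, a, feats, axis=0)   # zero-phase filtering per dimension

frames = np.random.default_rng(0).normal(size=(500, 13))   # e.g. 13 MFCCs
smoothed = bandpass_features(frames, low_hz=0.5, high_hz=15.0, frame_rate=100.0)
print(smoothed.shape)
```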
-
Wyllian B. da SILVA, Keiko V. O. FONSECA, Alexandre de A. P. POHL
Article type: PAPER
Subject area: Image Processing and Video Processing
2013 Volume E96.D Issue 3 Pages 708-718
Published: March 01, 2013
Released on J-STAGE: March 01, 2013
A simple and efficient reduced-reference video quality assessment method based on the activity difference of DCT coefficients is proposed. The method provides better accuracy, monotonicity, and more consistent predictions than the full-reference PSNR metric, and results comparable with those of the full-reference SSIM. It also shows improved performance over a similar video quality technique based on pixel luminance differences computed in the spatial domain.
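One plausible reading of an activity-difference feature (block-DCT AC energy compared between reference and processed frames; the block size and pooling below are invented for the sketch, not taken from the paper):

```python
import numpy as np
from scipy.fft import dctn

def block_activity(frame, block=8):
    """Sum of |AC DCT coefficients| per 8x8 block: a texture-activity map."""
    h, w = frame.shape
    acts = []
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            c = dctn(frame[y:y + block, x:x + block], norm="ortho")
            acts.append(np.abs(c).sum() - abs(c[0, 0]))   # drop the DC term
    return np.array(acts)

def activity_difference(ref, dist):
    """Mean absolute difference of block activities (lower = more similar)."""
    return np.abs(block_activity(ref) - block_activity(dist)).mean()

rng = np.random.default_rng(0)
ref = rng.normal(size=(64, 64))
print(activity_difference(ref, ref + 0.1 * rng.normal(size=(64, 64))))
```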
-
Zhi DENG, Huaxi GU, Yingtang YANG, Hua YOU
Article type: LETTER
Subject area: Computer System
2013 Volume E96.D Issue 3 Pages 719-722
Published: March 01, 2013
Released on J-STAGE: March 01, 2013
In this paper, an energy- and traffic-balance-aware algorithm for mapping IP cores to network nodes is proposed for application-specific Network-on-Chip (NoC). A multi-objective optimization model is set up by considering the NoC architecture and addressed by the proposed mapping algorithm, which decomposes the mapping optimization into a number of scalar subproblems solved simultaneously. To show the performance of the proposed algorithm, application-specific benchmarks are applied in the simulation. The experimental results demonstrate that the algorithm has advantages in energy consumption and traffic balance over other algorithms.
-
Myung-Ho PARK, Ki-Gon NAM, Jin Seok KIM, Dae Hyun YUM, Pil Joong LEE
Article type: LETTER
Subject area: Information Network
2013 Volume E96.D Issue 3 Pages 723-726
Published: March 01, 2013
Released on J-STAGE: March 01, 2013
With the increased deployment of wireless sensor networks (WSNs) in location-based services, the need for accurate localization of sensor nodes is gaining importance. Sensor nodes in a WSN localize themselves with the help of anchors that know their own positions. Some anchors may be malicious and provide incorrect information to the sensor nodes, in which case accurate localization of a sensor node may be severely affected. In this study, we propose a secure and lightweight localization method in which uncertainties in the distances estimated between the anchors and a sensor node are taken into account to improve localization accuracy; that is, we minimize the weighted sum of the squared residuals. Simulation results show that our method is very effective for accurate localization of sensor nodes: it can accurately localize a sensor node in the presence of malicious anchors, and it is computationally efficient.
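The weighted least-squares step might look like this (a scipy sketch; the weighting rule below is a placeholder, not the letter's uncertainty model):

```python
import numpy as np
from scipy.optimize import least_squares

def localize(anchors, dists, weights):
    """Estimate a node position by minimizing sum_i w_i * (||x - a_i|| - d_i)^2."""
    def residuals(x):
        return np.sqrt(weights) * (np.linalg.norm(anchors - x, axis=1) - dists)
    x0 = anchors.mean(axis=0)                 # start from the anchor centroid
    return least_squares(residuals, x0).x

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 4.0])
dists = np.linalg.norm(anchors - true_pos, axis=1)
dists[3] += 5.0                               # one malicious anchor lies...
weights = np.array([1.0, 1.0, 1.0, 0.1])      # ...and is down-weighted
print(localize(anchors, dists, weights))      # close to (3, 4)
```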
-
Jia-Rui LIU, Shi-Ze GUO, Zhe-Ming LU, Fa-Xin YU, Hui LI
Article type: LETTER
Subject area: Information Network
2013 Volume E96.D Issue 3 Pages 727-730
Published: March 01, 2013
Released on J-STAGE: March 01, 2013
In complex network analysis, there are various measures that characterize the centrality of each node within a graph and thereby its relative importance. The more central a node is in a network, the more significant it is in the spread of infection. As one of the important extensions of shortest-path-based betweenness centrality, flow betweenness centrality is defined as the degree to which each node contributes to the sum of maximum flows between all pairs of nodes. One drawback of flow betweenness centrality is its rather high time complexity. This Letter proposes an approximate method for calculating flow betweenness centrality and provides experimental results as evidence.
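Exact flow betweenness needs a maximum flow for every vertex pair, which explains the high time complexity; a generic cheap surrogate is to sample pairs (this sampling sketch is one standard approximation idea, not necessarily the Letter's method):

```python
import random
import networkx as nx

def approx_flow_betweenness(G, samples=100, seed=0):
    """Approximate flow betweenness by sampling source-target pairs and
    accumulating, for each intermediate node, the flow routed through it
    in one maximum flow (unit edge capacities)."""
    rng = random.Random(seed)
    nx.set_edge_attributes(G, 1, "capacity")
    score = dict.fromkeys(G, 0.0)
    nodes = list(G)
    for _ in range(samples):
        s, t = rng.sample(nodes, 2)
        _, flow = nx.maximum_flow(G, s, t)
        for v in G:
            if v not in (s, t):
                score[v] += sum(flow[u].get(v, 0) for u in flow)  # inflow at v
    return score

G = nx.karate_club_graph()
scores = approx_flow_betweenness(G)
print(max(scores, key=scores.get))   # likely a hub such as node 0 or 33
```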
-
Zhe WANG, Kai HU, Baolin YIN
Article type: LETTER
Subject area: Information Network
2013 Volume E96.D Issue 3 Pages 731-734
Published: March 01, 2013
Released on J-STAGE: March 01, 2013
We propose a novel network traffic matrix decomposition method named Stable Principal Component Pursuit with Frequency-Domain Regularization (SPCP-FDR), which improves the Stable Principal Component Pursuit (SPCP) method by using a frequency-domain noise regularization function. An experiment demonstrates the feasibility of this new decomposition method.
-
Yunja CHOI
Article type: LETTER
Subject area: Dependable Computing
2013 Volume E96.D Issue 3 Pages 735-738
Published: March 01, 2013
Released on J-STAGE: March 01, 2013
An automotive operating system is typical safety-critical software and therefore requires extensive analysis with respect to its effect on system safety. Our earlier work [1] reported a systematic model checking approach for checking the safety properties of the OSEK/VDX-based operating system Trampoline. This article reports a further performance improvement using embedded C constructs for efficient verification of the Trampoline model developed in the earlier work. Experiments show that the use of embedded C constructs greatly reduces verification costs.
-
Zhi LIU, Zhaocai SUN, Hongjun WANG
Article type: LETTER
Subject area: Artificial Intelligence, Data Mining
2013 Volume E96.D Issue 3 Pages 739-741
Published: March 01, 2013
Released on J-STAGE: March 01, 2013
In this study, a novel forest method based on specific random trees (SRT) is proposed for multiclass classification problems. Each proposed SRT is built for one specific class and decides whether a sample belongs to that class; the forest makes the final classification decision by ensembling all the specific trees. Compared with the original random forest, our method has higher strength but lower correlation and a lower upper error bound. Experimental results on 10 different public datasets demonstrate the efficiency of the proposed method.
-
Ildefons MAGRANS DE ABRIL, Masashi SUGIYAMA
Article type: LETTER
Subject area: Artificial Intelligence, Data Mining
2013 Volume E96.D Issue 3 Pages 742-745
Published: March 01, 2013
Released on J-STAGE: March 01, 2013
This letter presents the ideas and methods of the winning solution* for the Kaggle Algorithmic Trading Challenge. This analysis challenge took place between 11th November 2011 and 8th January 2012, and 264 competitors submitted solutions. The objective of this competition was to develop empirical predictive models to explain stock market prices following a liquidity shock. The winning system builds upon the optimal composition of several models and a feature extraction and selection strategy. We used Random Forest as a modeling technique to train all sub-models as a function of an optimal feature set. The modeling approach can cope with highly complex data having low Maximal Information Coefficients between the dependent variable and the feature set, and provides a feature ranking metric which we used in our feature selection algorithm.
-
Jung-In LEE, Jeung-Yoon CHOI, Hong-Goo KANG
Article type: LETTER
Subject area: Speech and Hearing
2013 Volume E96.D Issue 3 Pages 746-749
Published: March 01, 2013
Released on J-STAGE: March 01, 2013
Refinement methods for landmark detection and extraction of articulator-free features for a knowledge-based speech recognition system are described. Sub-band energy difference profiles are used to detect landmarks, with additional parameters used to improve accuracy. For articulator-free feature extraction, duration, relative energy, and silence detection are additionally used to find [continuant] and [strident] features. Vowel, obstruent, and sonorant consonant landmarks, and the locations of voicing onsets and offsets, are detected within a unified framework with 85% accuracy overall. Additionally, 75% and 79% of [continuant] and [strident] features, respectively, are detected from landmarks.
-
Zhongyuan LAI, Wenyu LIU, Fan ZHANG, Guang CHENG
Article type: LETTER
Subject area: Image Processing and Video Processing
2013 Volume E96.D Issue 3 Pages 750-753
Published: March 01, 2013
Released on J-STAGE: March 01, 2013
In this paper, we present a perceptual distortion measure (PDM) for polygon-based shape coding. We model the PDM as the salience of the relevance triangle, and express the PDM using three properties derived from the salience of visual parts. Performance analysis and experimental results show that our proposal can improve the quality of the shape reconstruction when the object contour has sharp protrusions.
-
Sangwoo AHN, Jongwha CHONG
Article type: LETTER
Subject area: Image Processing and Video Processing
2013 Volume E96.D Issue 3 Pages 754-757
Published: March 01, 2013
Released on J-STAGE: March 01, 2013
In this letter, we propose a novel search approach to blur kernel estimation for defocused image restoration. An adaptive binary search on consensus, based on binary search and random sample consensus (RANSAC), is the main contribution of our research. Moreover, an evaluating function that uses a histogram of the gradient distribution is proposed for assessing restored images. Simulations on an image benchmark dataset show that the proposed algorithm estimates blur kernels, on average, 15.14% more accurately than other defocused image restoration algorithms.
-
Jinjun KUANG, Yi CHAI
Article type: LETTER
Subject area: Image Recognition, Computer Vision
2013 Volume E96.D Issue 3 Pages 758-761
Published: March 01, 2013
Released on J-STAGE: March 01, 2013
This paper presents a novel scale-rotation invariant generative model (SRIGM) and a kernel sparse representation classification (KSRC) method for scene categorization. Recently, sparse representation classification (SRC) methods have been highly successful in a number of image processing tasks. Despite its popularity, the SRC framework lacks the ability to handle multi-class data with high inter-class similarity or high intra-class variation. The kernel random coordinate descent (KRCD) algorithm is proposed for l1 minimization in the kernel space under the KSRC framework, allowing the proposed method to obtain satisfactory classification accuracy when inter-class similarity is high. The training samples are partitioned at multiple scales and rotated at different resolutions to create a generative model that is invariant to scale and rotation changes. This model enables the KSRC framework to overcome the high intra-class variation problem in scene categorization. The experimental results show that the proposed method obtains more stable performance than other existing state-of-the-art scene categorization methods.