-
Masaya SHIMAKAWA, Shigeki HAGIHARA, Naoki YONEZAKI
Article type: PAPER
Subject area: Fundamentals of Information Systems
2013 Volume E96.D Issue 10 Pages 2187-2193
Published: October 01, 2013
Released on J-STAGE: October 01, 2013
Many fatal accidents involving safety-critical reactive systems have occurred in unexpected situations, which were not considered during the design and test phases of system development. To prevent such accidents, reactive systems should be designed to respond appropriately to any request from an environment at any time. Verifying this property during the specification phase reduces the development costs of safety-critical reactive systems. This property of a specification is commonly known as realizability. The complexity of the realizability problem is 2EXPTIME-complete. We have introduced the concept of strong satisfiability, which is a necessary condition for realizability. Many practical unrealizable specifications are also strongly unsatisfiable. In this paper, we show that the complexity of the strong satisfiability problem is EXPSPACE-complete. This means that strong satisfiability offers the advantage of lower complexity for analysis, compared to realizability. Moreover, we show that the strong satisfiability problem remains EXPSPACE-complete even when only formulae with a temporal depth of at most 2 are allowed.
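For reference, the distinction between the two properties can be written in the standard notation of the reactive-synthesis literature (a sketch in our own notation, not the paper's: φ is a specification over environment propositions X and system propositions Y, and f(ã) abbreviates the output sequence obtained by applying the strategy f to each prefix of ã):

```latex
% Realizability: one causal strategy f works against every input sequence.
\exists f : (2^{X})^{+} \to 2^{Y}\;\; \forall \tilde{a} \in (2^{X})^{\omega}:\;
  (\tilde{a}, f(\tilde{a})) \models \varphi

% Strong satisfiability: for every input sequence, some output sequence
% satisfies \varphi. Swapping the quantifiers drops the causality
% requirement, which is why this is a necessary but weaker condition.
\forall \tilde{a} \in (2^{X})^{\omega}\;\; \exists \tilde{b} \in (2^{Y})^{\omega}:\;
  (\tilde{a}, \tilde{b}) \models \varphi
```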
-
Shuai MU, Dongdong LI, Yubei CHEN, Yangdong DENG, Zhihua WANG
Article type: PAPER
Subject area: Computer System
2013 Volume E96.D Issue 10 Pages 2194-2207
Published: October 01, 2013
Released on J-STAGE: October 01, 2013
By exploiting data-level parallelism, Graphics Processing Units (GPUs) have become a high-throughput, general-purpose computing platform. Many real-world applications, however, especially those following a stream-processing pattern, feature interleaved task-pipeline and data parallelism. Current GPUs are ill-equipped for such applications due to insufficient usage of computing resources and/or excessive off-chip memory traffic. In this paper, we focus on microarchitectural enhancements that enable task-pipelined execution of data-parallel kernels on GPUs. We propose an efficient adaptive dynamic scheduling mechanism and a moderately modified L2 design. With minor hardware overhead, our techniques orchestrate both task-pipeline and data parallelism in a unified manner. Simulation results on real-world applications, obtained with a cycle-accurate simulator, show that the proposed GPU microarchitecture improves computing throughput by 18% and reduces overall accesses to off-chip GPU memory by 13%.
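The abstract does not detail the scheduling mechanism; purely as an illustration of adaptive dynamic scheduling between pipeline stages, here is a minimal software sketch in which compute units are reassigned in proportion to per-stage queue occupancy (the class, its names, and the proportional rule are our assumptions, not the paper's design):

```python
from collections import deque

class PipelineScheduler:
    """Toy adaptive scheduler: compute units (SMs) are periodically
    reassigned to kernel stages in proportion to the work queued
    at each stage, so a congested stage gets more throughput."""

    def __init__(self, num_sms, stages):
        self.num_sms = num_sms
        self.stages = stages                        # ordered kernel stages
        self.queues = {s: deque() for s in stages}  # inter-stage work queues

    def assign_sms(self):
        pending = {s: len(self.queues[s]) for s in self.stages}
        total = sum(pending.values()) or 1
        # Give every non-empty stage at least one SM so the pipeline drains.
        alloc = {s: max(1 if pending[s] else 0,
                        round(self.num_sms * pending[s] / total))
                 for s in self.stages}
        while sum(alloc.values()) > self.num_sms:   # fix rounding overflow
            alloc[max(alloc, key=alloc.get)] -= 1
        return alloc
```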
-
Chun-Hung CHEN, Yuan-Liang TANG, Wen-Shyong HSIEH
Article type: PAPER
Subject area: Information Network
2013 Volume E96.D Issue 10 Pages 2208-2214
Published: October 01, 2013
Released on J-STAGE: October 01, 2013
Digital watermarking techniques have been used to assert ownership of digital images. The ownership information is embedded in an image as a watermark so that the owner of the image can be identified. However, many types of attacks have been used in attempts to break or remove embedded watermarks, so a watermark should be very robust against attacks of all kinds. Among them, the print-and-scan (PS) attack is especially challenging because it not only alters pixel values but also changes the positions of the original pixels. In this paper, we propose a watermarking system operating in the discrete cosine transform (DCT) domain. The polarities of the DCT coefficients are modified for watermark embedding. This is done by considering the properties of DCT coefficients under the PS attack. The proposed system maintains image quality after watermarking, and the embedded watermark is very robust against the PS attack as well.
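As a sketch of the general idea of polarity-based DCT embedding (the coefficient pair, margin, and enforcement rule below are generic illustration choices, not the paper's PS-attack-aware scheme):

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed_bit(block, bit, c1=(3, 1), c2=(1, 3), margin=5.0):
    """Embed one bit in an 8x8 block by forcing the polarity (relative
    order) of two mid-band DCT coefficients; generic illustration."""
    D = dctn(block.astype(float), norm='ortho')
    a, b = D[c1], D[c2]
    if bit == 1 and a <= b:            # enforce D[c1] > D[c2] for bit 1
        D[c1], D[c2] = b + margin, a
    elif bit == 0 and a >= b:          # enforce D[c1] < D[c2] for bit 0
        D[c1], D[c2] = b - margin, a
    return idctn(D, norm='ortho')

def extract_bit(block, c1=(3, 1), c2=(1, 3)):
    D = dctn(block.astype(float), norm='ortho')
    return 1 if D[c1] > D[c2] else 0
```

The margin trades robustness against image quality: a larger gap survives more PS distortion but perturbs the block more.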
-
Cuiyin LIU, Shu-qing CHEN, Qiao FU
Article type: PAPER
Subject area: Image Processing and Video Processing
2013 Volume E96.D Issue 10 Pages 2215-2223
Published: October 01, 2013
Released on J-STAGE: October 01, 2013
In this paper, an efficient multi-modal medical image fusion approach is proposed based on local feature contrast and a bilateral sharpness criterion in the nonsubsampled contourlet transform (NSCT) domain. Compared with other multiscale decomposition tools, the nonsubsampled contourlet transform not only eliminates the “block effect” and the “pseudo effect”, but also represents the source image in multiple directions and captures its geometric structure in the transform domain. When used in a fusion algorithm, these advantages of NSCT help retain more visual information in the fused image and improve the fusion quality. To further improve the robustness of the fusion algorithm and the quality of the fused image, two selection rules are considered. First, a new bilateral sharpness criterion, which exploits both strength and phase coherence, is proposed to select the lowpass coefficients. Second, a modified SML (sum-modified-Laplacian) is introduced into the local contrast measurement; it matches the human visual system and extracts more useful detail from the source images. Experimental results demonstrate that the proposed method outperforms conventional fusion algorithms in terms of both visual quality and objective evaluation criteria.
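The standard SML referred to above is straightforward to compute; a minimal version follows (the paper's "modified SML" may differ in details such as the step size or a threshold):

```python
import numpy as np

def sml(img, window=3, step=1):
    """Sum-modified-Laplacian: ML(i,j) = |2I(i,j)-I(i-s,j)-I(i+s,j)|
    + |2I(i,j)-I(i,j-s)-I(i,j+s)|, summed over a local window."""
    I = img.astype(float)
    s = step
    ml = np.zeros_like(I)
    ml[s:-s, s:-s] = (
        np.abs(2 * I[s:-s, s:-s] - I[:-2*s, s:-s] - I[2*s:, s:-s]) +
        np.abs(2 * I[s:-s, s:-s] - I[s:-s, :-2*s] - I[s:-s, 2*s:])
    )
    r = window // 2                      # box-sum ML over the window
    pad = np.pad(ml, r, mode='edge')
    out = np.zeros_like(I)
    for di in range(-r, r + 1):
        for dj in range(-r, r + 1):
            out += pad[r + di : r + di + I.shape[0],
                       r + dj : r + dj + I.shape[1]]
    return out
```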
-
Deshan CHEN, Atsushi MIYAMOTO, Shun'ichi KANEKO
Article type: PAPER
Subject area: Image Recognition, Computer Vision
2013 Volume E96.D Issue 10 Pages 2224-2234
Published: October 01, 2013
Released on J-STAGE: October 01, 2013
This paper describes a robust three-dimensional (3D) surface reconstruction method that automatically eliminates shadowing errors. To model the shadowing effect, a new shadowing compensation model based on the angle distribution of backscattered electrons is introduced and then adjusted for several practical factors. The proposed iterative shadowing compensation method, which alternates between compensating the image intensities and modifying the corresponding 3D surface, effectively provides both an accurate 3D surface and compensated shadowless images after convergence.
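Schematically, the iteration described above has the following shape (a structural skeleton only; the two sub-steps are dummy placeholders, since the compensation model itself is the paper's contribution):

```python
import numpy as np

def estimate_surface(imgs):
    # Placeholder for surface estimation from detector intensities.
    return imgs.mean(axis=0)

def compensate_shadowing(imgs, surface):
    # Placeholder for the angle-distribution-based compensation model.
    return imgs

def reconstruct(imgs, max_iter=20, tol=1e-3):
    """Alternate between compensating image intensities and modifying
    the 3D surface until the surface estimate stops changing."""
    surface, comp = estimate_surface(imgs), imgs
    for _ in range(max_iter):
        comp = compensate_shadowing(imgs, surface)
        new_surface = estimate_surface(comp)
        if np.abs(new_surface - surface).mean() < tol:
            break
        surface = new_surface
    return surface, comp
```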
-
Rong HUANG, Palaiahnakote SHIVAKUMARA, Yaokai FENG, Seiichi UCHIDA
Article type: PAPER
Subject area: Image Recognition, Computer Vision
2013 Volume E96.D Issue 10 Pages 2235-2244
Published: October 01, 2013
Released on J-STAGE: October 01, 2013
To handle the variety of scene characters, we propose a cooperative multiple-hypothesis framework consisting of an image operator set module, an Optical Character Recognition (OCR) module, and an integration module. Multiple image operators, activated by multiple parameters, probe suspected character regions. The OCR module is then applied to each suspected region and returns multiple candidates with weight values for later integration. Without the aid of heuristic rules that impose constraints on segmentation area, aspect ratio, color consistency, text line orientation, etc., the integration module automatically prunes redundant detections/recognitions and recovers missing ones. The proposed framework bridges the gap between scene character detection and recognition, in the sense that a practical OCR engine is effectively leveraged for result refinement. In addition, the proposed method performs detection and recognition at the character level, which enables it to deal with special scenarios such as a single character, text in arbitrary orientations, or text along curves. We perform experiments on the benchmark ICDAR 2011 Robust Reading Competition dataset, which includes a text localization task and a word recognition task. The quantitative results demonstrate that multiple hypotheses outperform a single hypothesis and are comparable with state-of-the-art methods in terms of recall, precision, F-measure, character recognition rate, total edit distance, and word recognition rate. Moreover, two additional experiments confirm the simplicity of parameter setting in the proposed method.
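In outline, the detection/recognition flow might look as follows (a sketch under our assumptions: the operators and the OCR scorer are caller-supplied callables, and the integration step is reduced to plain weight-ordered non-maximum suppression, which is simpler than the paper's integration module):

```python
def detect_and_recognize(image, operators, ocr, overlap=0.5):
    """Each operator proposes candidate regions (x, y, w, h); OCR
    returns (label, weight) per region; overlapping candidates are
    merged greedily, keeping the best-weighted hypothesis."""
    candidates = []
    for op in operators:                       # hypothesis generation
        for region in op(image):
            label, weight = ocr(image, region)
            candidates.append((weight, region, label))
    candidates.sort(key=lambda c: c[0], reverse=True)
    kept = []
    for w, r, lab in candidates:               # greedy weighted merge
        if all(iou(r, kr) < overlap for _, kr, _ in kept):
            kept.append((w, r, lab))
    return kept

def iou(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0
```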
-
Nattapong TONGTEP, Thanaruk THEERAMUNKONG
Article type: PAPER
Subject area: Natural Language Processing
2013 Volume E96.D Issue 10 Pages 2245-2256
Published: October 01, 2013
Released on J-STAGE: October 01, 2013
Automated or semi-automated annotation is a practical solution for large-scale corpus construction. However, special characteristics of the Thai language, such as the lack of word-boundary and sentence-boundary markers, raise several issues in automatic corpus annotation. This paper presents a multi-stage annotation framework comprising two stages of chunking and three stages of tagging. The two chunking stages are pattern-matching-based named entity (NE) extraction and dictionary-based word segmentation, while the three succeeding tagging stages are dictionary-, pattern-, and statistical-based tagging. Applying heuristics of ambiguity priority, NE extraction is performed first on the original text using a set of patterns, in the order of pattern ambiguity. Next, the remaining text is segmented into words with a dictionary. The obtained chunks are then tagged with named entity types or parts of speech (PoS) using dictionaries, patterns, and statistics. Focusing on the reduction of human intervention in corpus construction, our experimental results show that the dictionary-based tagging process assigns unique tags to 64.92% of the words, leaving 24.14% unknown words and 10.94% ambiguously tagged words. The pattern-based tagging then reduces the unknown words to 13.34%, while the statistical-based tagging resolves the ambiguously tagged words down to 3.01%.
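For the dictionary-based segmentation stage, a common baseline is greedy longest matching; a minimal sketch (the paper's segmenter may use a different matching strategy):

```python
def segment(text, dictionary, max_len=20):
    """Greedy longest-match word segmentation. Characters not covered
    by any dictionary entry fall through as single-character chunks,
    to be resolved by the later tagging stages."""
    words, i = [], 0
    while i < len(text):
        for j in range(min(len(text), i + max_len), i, -1):
            if text[i:j] in dictionary:    # longest dictionary hit wins
                words.append(text[i:j])
                i = j
                break
        else:                              # no hit: emit one character
            words.append(text[i])
            i += 1
    return words

# e.g. segment("ไปโรงเรียน", {"ไป", "โรงเรียน"}) -> ["ไป", "โรงเรียน"]
```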
-
Hirofumi TSUZUKI, Mauricio KUGLER, Susumu KUROYANAGI, Akira IWATA
Article type: PAPER
Subject area: Biocybernetics, Neurocomputing
2013 Volume E96.D Issue 10 Pages 2257-2265
Published: October 01, 2013
Released on J-STAGE: October 01, 2013
This paper presents a Complex-Valued Neural Network-based sound localization method. The proposed approach uses two microphones to localize sound sources over the whole horizontal plane. Time delay and amplitude difference are used to generate a set of features, which are then classified by a Complex-Valued Multi-Layer Perceptron. The advantage of using complex values is that the amplitude information can naturally mask the phase information. The proposed method is analyzed experimentally with regard to the spectral characteristics of the target sounds and its tolerance to noise. The results emphasize and confirm the advantages of using Complex-Valued Neural Networks for the sound localization problem in comparison to the traditional Real-Valued Neural Network model.
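One natural way to realize "amplitude as magnitude, delay as phase" features and a complex-valued forward pass is sketched below (our illustration; the paper's exact feature extraction and network differ in detail):

```python
import numpy as np

def complex_features(left, right, fs, bands):
    """Per frequency band, encode the interaural cues as one complex
    number: magnitude = level ratio, phase = phase (delay) difference."""
    L, R = np.fft.rfft(left), np.fft.rfft(right)
    feats = []
    for f0 in bands:
        k = min(int(round(f0 * len(left) / fs)), len(L) - 1)
        level = np.abs(L[k]) / (np.abs(R[k]) + 1e-12)
        delay = np.angle(L[k]) - np.angle(R[k])
        feats.append(level * np.exp(1j * delay))
    return np.array(feats)

def cvmlp_forward(x, W1, b1, W2, b2):
    """One-hidden-layer complex-valued MLP with the common 'split'
    activation: tanh applied to real and imaginary parts separately."""
    act = lambda z: np.tanh(z.real) + 1j * np.tanh(z.imag)
    return act(W2 @ act(W1 @ x + b1) + b2)
```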
-
Kazuto OSHIMA
Article type: LETTER
Subject area: Fundamentals of Information Systems
2013 Volume E96.D Issue 10 Pages 2266-2267
Published: October 01, 2013
Released on J-STAGE: October 01, 2013
In the Knill-Laflamme-Milburn (KLM) scheme, quantum teleportation is nearly deterministically carried out with linear optics. To reconstruct an original quantum state, however, a phase shift is required for an output state. We exhibit a proper phase shift to complete quantum teleportation.
-
Junya KAIDA, Yuko HARA-AZUMI, Takuji HIEDA, Ittetsu TANIGUCHI, Hiroyuk ...
Article type: LETTER
Subject area: Computer System
2013 Volume E96.D Issue 10 Pages 2268-2271
Published: October 01, 2013
Released on J-STAGE: October 01, 2013
This paper studies the static mapping of multiple applications on embedded many-core SoCs. The mapping techniques proposed in this paper take into account both inter-application and intra-application parallelism in order to fully utilize the potential parallelism of the many-core architecture. Two approaches are proposed for static mapping: one approach is based on integer linear programming and the other is based on a greedy algorithm. Experiments show the effectiveness of the proposed techniques.
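For flavor, the greedy side of such a mapper can be as simple as list scheduling (this sketch is plain least-loaded assignment; the paper's greedy algorithm and ILP formulation additionally model intra-application parallelism, which this ignores):

```python
def greedy_map(tasks, num_cores):
    """Assign each task (heaviest first) to the currently least-loaded
    core -- the classic LPT heuristic for load balancing."""
    loads = [0.0] * num_cores
    mapping = {}
    for name, load in sorted(tasks.items(), key=lambda t: -t[1]):
        core = loads.index(min(loads))
        mapping[name] = core
        loads[core] += load
    return mapping, loads

# e.g. greedy_map({"fft": 5, "dct": 3, "huff": 2, "io": 1}, 2)
#      -> ({'fft': 0, 'dct': 1, 'huff': 1, 'io': 0}, [6.0, 5.0])
```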
-
Donghai TIAN, Mo CHEN, Changzhen HU, Xuanya LI
Article type: LETTER
Subject area: Software System
2013 Volume E96.D Issue 10 Pages 2272-2276
Published: October 01, 2013
Released on J-STAGE: October 01, 2013
As more and more software vulnerabilities are exposed, shellcode has become very popular in recent years. It is widely used by attackers to exploit vulnerabilities and then hijack a program's execution. Previous solutions suffer from limitations: 1) methods based on static analysis may fail to detect shellcode that uses obfuscation techniques; 2) methods based on dynamic analysis can impose considerable performance overhead. In this paper, we propose Lemo, an efficient shellcode detection system. Our system is compatible with commodity hardware and operating systems, which eases deployment. To improve performance, our system makes use of multi-core technology. The experiments show that our system can detect shellcode efficiently.
-
Jeonggon LEE, Bum-Soo KIM, Mi-Jung CHOI, Yang-Sae MOON
Article type: LETTER
Subject area: Data Engineering, Web Information Systems
2013 Volume E96.D Issue 10 Pages 2277-2281
Published: October 01, 2013
Released on J-STAGE: October 01, 2013
Histogram sequences represent high-dimensional time-series converted from images by space-filling curves (SFCs). To overcome the high dimensionality of histogram sequences (e.g., 10^6 dimensions for a 1024×1024 image), we often use lower-dimensional transformations, but the tightness of their lower bounds is highly affected by the type of SFC. In this paper we attack the challenging problem of evaluating which SFC performs better when a lower-dimensional transformation is applied to histogram sequences. For this, we first present the concept of spatial locality and propose the spatial locality preservation metric (SLPM in short). We then evaluate five well-known SFCs from the perspective of SLPM and verify that the evaluation result concurs with the actual transformation performance. Finally, we empirically validate the accuracy of SLPM by showing that the Hilbert order, which has the highest SLPM, also yields the best performance in k-NN (k-nearest neighbors) search.
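The abstract does not define SLPM itself; for intuition, here is one plausible locality measure over an SFC ordering (our assumption, not necessarily the authors' metric), demonstrated with the Z-order (Morton) curve:

```python
import numpy as np

def morton_order(n_bits):
    """Grid coordinates of a 2^b x 2^b image visited in Z-order."""
    def de_interleave(code):
        x = y = 0
        for i in range(n_bits):
            x |= ((code >> (2 * i)) & 1) << i
            y |= ((code >> (2 * i + 1)) & 1) << i
        return x, y
    n = 1 << n_bits
    return [de_interleave(c) for c in range(n * n)]

def locality_score(order, window=4):
    """Average image-space distance between points within 'window'
    steps of each other on the curve; lower = better locality."""
    pts = np.array(order, dtype=float)
    dists = [np.linalg.norm(pts[k:] - pts[:-k], axis=1).mean()
             for k in range(1, window + 1)]
    return float(np.mean(dists))
```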
-
Janya SAINUI, Masashi SUGIYAMA
Article type: LETTER
Subject area: Artificial Intelligence, Data Mining
2013 Volume E96.D Issue 10 Pages 2282-2285
Published: October 01, 2013
Released on J-STAGE: October 01, 2013
Mutual information (MI) is a standard measure of statistical dependence of random variables. However, due to the log function and the ratio of probability densities included in MI, it is sensitive to outliers. On the other hand, the L2-distance variant of MI, called quadratic MI (QMI), tends to be robust against outliers because QMI is just the integral of the squared difference between the joint density and the product of the marginals. In this paper, we propose a kernel least-squares QMI estimator called least-squares QMI (LSQMI) that directly estimates the density difference without estimating each density. A notable advantage of LSQMI is that its solution can be computed analytically and efficiently just by solving a system of linear equations. We then apply LSQMI to dependence-maximization clustering, and demonstrate its usefulness experimentally.
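The two quantities contrasted above are, in standard form:

```latex
\mathrm{MI}(X;Y)  = \iint p(x,y)\,\log\frac{p(x,y)}{p(x)\,p(y)}\,\mathrm{d}x\,\mathrm{d}y,
\qquad
\mathrm{QMI}(X;Y) = \iint \bigl(p(x,y)-p(x)\,p(y)\bigr)^{2}\,\mathrm{d}x\,\mathrm{d}y .
```

LSQMI models the density difference d(x,y) = p(x,y) − p(x)p(y) directly, e.g., by a kernel expansion whose coefficients are fitted by least squares, which is what reduces the estimator to a linear system.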
-
Yun JIN, Peng SONG, Wenming ZHENG, Li ZHAO, Minghai XIN
Article type: LETTER
Subject area: Speech and Hearing
2013 Volume E96.D Issue 10 Pages 2286-2289
Published: October 01, 2013
Released on J-STAGE: October 01, 2013
In this paper, a two-layer Multiple Kernel Learning (MKL) scheme for speaker-independent speech emotion recognition is presented. In the first layer, MKL is used for feature selection: the training samples are separated into n groups according to some rules, and each group is used for feature selection to obtain n sparse feature subsets; the intersection and the union of all feature subsets are the results of our feature selection methods. In the second layer, MKL is used again for speech emotion classification with the selected features. To evaluate the effectiveness of the proposed two-layer MKL scheme, we compare it with state-of-the-art results and show that it yields a large gain in performance. Furthermore, another experiment comparing our feature selection method with other popular ones confirms its effectiveness.
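The combination step in the first layer amounts to set intersection and union over the n selected subsets; concretely (illustrative only):

```python
def select_features(subsets):
    """Combine the n sparse feature subsets from group-wise MKL
    feature selection: intersection = features every group keeps,
    union = features any group keeps."""
    inter = set.intersection(*map(set, subsets))
    union = set.union(*map(set, subsets))
    return inter, union

# e.g. select_features([{1, 2, 3}, {2, 3, 5}, {2, 3, 4}])
#      -> ({2, 3}, {1, 2, 3, 4, 5})
```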
-
Guojun LIN, Mei XIE, Ling MAO
Article type: LETTER
Subject area: Image Recognition, Computer Vision
2013 Volume E96.D Issue 10 Pages 2290-2293
Published: October 01, 2013
Released on J-STAGE: October 01, 2013
For face recognition with a single training image per person, Collaborative Representation based Classification (CRC) has significantly lower complexity than Extended Sparse Representation based Classification (ESRC), but also lower recognition rates. To combine the advantages of CRC and ESRC, we propose Extended Collaborative Representation based Classification (ECRC) for face recognition with a single training image per person. ECRC constructs an auxiliary intraclass variant dictionary to represent the possible variation between the testing and training images. Experimental results show that ECRC outperforms the compared methods in terms of both high recognition rates and low computational complexity.
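A minimal sketch of the classification rule, following the standard CRC recipe with an appended variant dictionary (our reading of the abstract; the authors' construction of the variant dictionary is not reproduced here):

```python
import numpy as np

def ecrc_classify(y, gallery, labels, variants, lam=1e-3):
    """Ridge-regularized collaborative representation over
    [gallery | variants]; classify by smallest class-wise residual,
    letting the shared variant atoms absorb intraclass variation."""
    D = np.hstack([gallery, variants])             # columns are atoms
    x = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ y)
    xg, xv = x[:gallery.shape[1]], x[gallery.shape[1]:]
    best, best_res = None, np.inf
    for c in set(labels):
        idx = [i for i, l in enumerate(labels) if l == c]
        res = np.linalg.norm(y - gallery[:, idx] @ xg[idx]
                               - variants @ xv)
        if res < best_res:
            best, best_res = c, res
    return best
```

As in CRC, the whole representation step reduces to one linear solve, which is where the complexity advantage over sparse (L1) coding comes from.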
-
Linfeng XU, Liaoyuan ZENG, Zhengning WANG
Article type: LETTER
Subject area: Image Recognition, Computer Vision
2013 Volume E96.D Issue 10 Pages 2294-2297
Published: October 01, 2013
Released on J-STAGE: October 01, 2013
In this letter, we use the saliency maps produced by several bottom-up methods to learn a model that generates a combined bottom-up saliency map. To take top-down image semantics into account, we use the high-level features of objectness and background probability to learn a top-down saliency map. The bottom-up and top-down maps are then combined through a two-layer structure. Quantitative experiments demonstrate that the proposed method and features are effective in predicting human fixations.
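In outline, the two layers could be realized as below (a sketch under our assumptions: least-squares weights for the bottom-up layer and a fixed blending weight alpha, neither of which is specified in the abstract):

```python
import numpy as np

def learn_bottom_up_weights(maps, fixation):
    """Fit linear weights over several bottom-up saliency maps
    against a ground-truth fixation map (pixels x methods design)."""
    X = np.stack([m.ravel() for m in maps], axis=1)
    w, *_ = np.linalg.lstsq(X, fixation.ravel(), rcond=None)
    return w

def combine(maps, w, top_down, alpha=0.5):
    """Blend the learned bottom-up map with the top-down map built
    from objectness/background cues; normalize to [0, 1]."""
    bottom_up = sum(wi * m for wi, m in zip(w, maps))
    s = (1 - alpha) * bottom_up + alpha * top_down
    return (s - s.min()) / (np.ptp(s) + 1e-12)
```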