-
Ittetsu TANIGUCHI, Junya KAIDA, Takuji HIEDA, Yuko HARA-AZUMI, Hiroyuk ...
Article type: PAPER
Subject area: Fundamentals of Information Systems
2014 Volume E97.D Issue 11 Pages 2827-2834
Published: 2014
Released on J-STAGE: November 01, 2014
This paper studies mapping techniques for multiple applications on embedded many-core SoCs. The proposed mapping techniques are static, meaning the mapping is decided at design time. They take into account both inter-application and intra-application parallelism in order to fully exploit the potential parallelism of the many-core architecture. Additionally, the proposed static mapping supports dynamic application switching, so that the applications mapped onto the same cores can be switched at runtime. Two approaches to static mapping are proposed: one based on integer linear programming and the other on a greedy algorithm. Experimental results show the effectiveness of the proposed techniques.
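The abstract does not give pseudocode; as a rough, hypothetical illustration of the greedy flavor of static mapping (not the authors' algorithm), the sketch below assigns tasks from several applications to cores at design time, heaviest task first onto the currently least-loaded core:

```python
# Hypothetical greedy static-mapping sketch (not the paper's algorithm):
# place tasks from multiple applications, heaviest first, on the least-loaded
# core, exploiting both inter- and intra-application parallelism at design time.
import heapq

def greedy_static_mapping(task_loads, num_cores):
    """task_loads: dict task_id -> estimated execution load."""
    cores = [(0.0, c) for c in range(num_cores)]   # min-heap of (accumulated_load, core_id)
    heapq.heapify(cores)
    mapping = {}
    for task, load in sorted(task_loads.items(), key=lambda kv: -kv[1]):
        core_load, core_id = heapq.heappop(cores)
        mapping[task] = core_id
        heapq.heappush(cores, (core_load + load, core_id))
    return mapping

# Example: two applications, each with a few parallel tasks, mapped onto 4 cores.
tasks = {"appA.t0": 5.0, "appA.t1": 3.0, "appB.t0": 4.0, "appB.t1": 2.0, "appB.t2": 1.0}
print(greedy_static_mapping(tasks, num_cores=4))
```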
-
Ye GAO, Masayuki SATO, Ryusuke EGAWA, Hiroyuki TAKIZAWA, Hiroaki KOBAY ...
Article type: PAPER
Subject area: Computer System
2014 Volume E97.D Issue 11 Pages 2835-2843
Published: 2014
Released on J-STAGE: November 01, 2014
Vector processors have significant advantages for next-generation multimedia applications (MMAs). One advantage is that vector processors can achieve high data transfer performance by using a high-bandwidth memory sub-system, resulting in high sustained computing performance. However, a high-bandwidth memory sub-system usually incurs enormous costs in terms of chip area, power, and energy consumption. These costs are prohibitive for commodity computer systems, which are the main execution platform of MMAs. This paper proposes MVP-cache, a new multi-banked cache memory for commodity computer systems, in order to expand the potential of vector architectures on MMAs. Unlike conventional multi-banked cache memories, which employ one tag array and one data array per sub-cache, MVP-cache associates one tag array with multiple independent data arrays of small-sized cache lines. In this way, MVP-cache reduces static power consumption in its tag arrays. MVP-cache can also achieve high efficiency on short vector data transfers because the data transfers of each data array are controlled independently, improving the flexibility of data transfers.
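As a toy model of the organizational idea only (my own simplification, not the paper's design), the sketch below shows one tag entry covering several small, independently accessible data arrays, so a short transfer touches only the banks it needs:

```python
# Toy model of the MVP-cache idea: a single tag covers a group of small,
# independent data arrays; a lookup or fill activates only one of them.
class MVPCacheSet:
    def __init__(self, num_data_arrays, line_bytes):
        self.tag = None                          # one tag for the whole group
        self.data = [None] * num_data_arrays     # independent small data arrays
        self.line_bytes = line_bytes

    def lookup(self, tag, array_idx):
        """Return the requested small line on a hit, else None (miss)."""
        if self.tag == tag:
            return self.data[array_idx]          # only one bank is activated
        return None

    def fill(self, tag, array_idx, line):
        """Fill a single data array; siblings under the same tag stay untouched."""
        if self.tag != tag:
            self.tag = tag
            self.data = [None] * len(self.data)  # conceptual group replacement
        self.data[array_idx] = line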
-
Ilhoon SHIN
Article type: PAPER
Subject area: Software System
2014 Volume E97.D Issue 11 Pages 2844-2851
Published: 2014
Released on J-STAGE: November 01, 2014
NAND-based block devices such as memory cards and solid-state drives embed a flash translation layer (FTL) to emulate the standard block device interface and its features. The overall performance of these devices is determined mainly by the efficiency of the FTL scheme, so intensive research has been performed to improve the average performance of FTL schemes. However, worst-case performance has rarely been considered. The present study aims to improve the worst-case performance without degrading the average performance. The central idea is to distribute the garbage collection cost, which is the main source of performance fluctuations, over multiple requests. The proposed scheme comprises three modules: i) anticipated partial log block merging, which distributes the garbage collection time; ii) reclaiming clean pages by moving valid pages, which bounds the worst-case garbage collection time instead of performing repeated block merges; and iii) victim selection based on the valid page count in a victim log block and the required clean page count, which avoids subsequent garbage collections. A trace-driven simulation showed that the worst-case performance was improved by up to 1,300% with the proposed garbage collection scheme, while the average performance remained similar to that of the original scheme. This improvement was achieved without additional memory overhead.
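In the spirit of module (iii), a hedged sketch of a valid-page-count based victim selection policy follows; the bookkeeping and thresholds here are illustrative, not the paper's exact FTL data structures:

```python
# Illustrative victim selection: prefer a victim log block that reclaims enough
# clean pages to satisfy the pending request while moving the fewest valid pages.
def select_victim(log_blocks, pages_per_block, required_clean_pages):
    """log_blocks: list of dicts like {"id": 0, "valid_pages": 12}."""
    def reclaimable(block):
        return pages_per_block - block["valid_pages"]
    candidates = [b for b in log_blocks if reclaimable(b) >= required_clean_pages]
    pool = candidates if candidates else log_blocks
    # Fewest valid pages to copy means the cheapest merge now, and a victim that
    # yields enough clean pages avoids an immediate follow-up garbage collection.
    return min(pool, key=lambda b: b["valid_pages"])

blocks = [{"id": 0, "valid_pages": 50},
          {"id": 1, "valid_pages": 10},
          {"id": 2, "valid_pages": 60}]
print(select_victim(blocks, pages_per_block=64, required_clean_pages=16))  # -> block 1
```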
-
Md-Mizanur RAHOMAN, Ryutaro ICHISE
Article type: PAPER
Subject area: Data Engineering, Web Information Systems
2014 Volume E97.D Issue 11 Pages 2852-2862
Published: 2014
Released on J-STAGE: November 01, 2014
Keyword-based linked data information retrieval is an easy choice for general-purpose users, but the implementation of such an approach is a challenge because mere keywords do not hold semantic information. Some studies have incorporated templates in an effort to bridge this gap, but most such approaches have proven ineffective because of inefficient template management. Because linked data can be presented in a structured format, we can assume that the data's internal statistics can be used to effectively influence template management. In this work, we explore the use of this influence for template creation, ranking, and scaling. Then, we demonstrate how our proposal for automatic linked data information retrieval can be used alongside familiar keyword-based information retrieval methods, and can also be incorporated alongside other techniques, such as ontology inclusion and sophisticated matching, in order to achieve increased levels of performance.
-
Chunlu WANG, Chenye QIU, Xingquan ZUO, Chuanyi LIU
Article type: PAPER
Subject area: Artificial Intelligence, Data Mining
2014 Volume E97.D Issue 11 Pages 2863-2871
Published: 2014
Released on J-STAGE: November 01, 2014
Reducing accident severity is an effective way to improve road safety. The literature on accident severity analysis has two main shortcomings: first, most studies use classification accuracy to measure classifier quality, which is not appropriate for unbalanced datasets; second, the results are not easy for users to interpret. To address these drawbacks, a novel multi-objective particle swarm optimization (MOPSO) method is proposed to identify the contributing factors that affect accident severity. By employing the Pareto dominance concept, a set of Pareto optimal rules is obtained by MOPSO automatically, without any pre-defined thresholds or variables. These rules are then used to form an unordered classifier. Accident data from Beijing between 2008 and 2010 are used to build the model, and the proposed approach is compared with several rule learning algorithms. The results show that the proposed approach can generate a set of accurate and comprehensible rules that indicate the relationship between risk factors and accident severity.
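For reference, the Pareto dominance test that underlies this kind of rule selection is shown below; the rule encoding and the MOPSO update itself are omitted, and the two objectives (e.g., sensitivity and specificity of a rule) are assumptions for illustration:

```python
# Pareto dominance over rule scores to be maximized, and the non-dominated front.
def dominates(a, b):
    """True if score vector a Pareto-dominates b (maximization)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(scores):
    """Keep only the non-dominated rule scores."""
    return [s for s in scores if not any(dominates(t, s) for t in scores if t is not s)]

scores = [(0.9, 0.4), (0.7, 0.7), (0.6, 0.8), (0.5, 0.5)]
print(pareto_front(scores))   # (0.5, 0.5) is dominated by (0.7, 0.7) and drops out
```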
-
Weixun GAO, Qiying CAO, Yao QIAN
Article type: PAPER
Subject area: Speech and Hearing
2014 Volume E97.D Issue 11 Pages 2872-2880
Published: 2014
Released on J-STAGE: November 01, 2014
In this paper, we use neural networks (NNs) for cross-dialectal (Mandarin-Shanghainese) voice conversion using bi-dialectal speakers' recordings. The system employs a nonlinear mapping function, trained on parallel Mandarin features of the source and target speakers, to convert the source speaker's Shanghainese features to those of the target speaker. This study investigates three training aspects: a) frequency warping, which is assumed to be language independent; b) pre-training, which drives the weights to a better starting point than random initialization and can be regarded as unsupervised feature learning; and c) sequence training, which minimizes sequence-level errors and matches the objectives used in training and conversion. Experimental results show that the performance of cross-dialectal voice conversion is close to that of intra-dialectal conversion. This benefit likely comes from the strong learning capabilities of NNs, e.g., exploiting feature correlations between fundamental frequency (F0) and spectrum. The objective measures, log spectral distortion (LSD) and root mean squared error (RMSE) of F0, both show that pre-training and sequence training outperform frame-level mean square error (MSE) training. The naturalness of the converted Shanghainese speech and the similarity between the converted Shanghainese speech and the target Mandarin speech are significantly improved.
-
Woo KYEONG SEONG, Ji HUN PARK, Hong KOOK KIM
Article type: PAPER
Subject area: Speech and Hearing
2014 Volume E97.D Issue 11 Pages 2881-2887
Published: 2014
Released on J-STAGE: November 01, 2014
Dysarthric speech results from damage to the central nervous system involving the articulator, and is mainly characterized by poor articulation due to irregular sub-glottal pressure, loudness bursts, phoneme elongation, and unexpected pauses during utterances. Since dysarthric speakers have physical disabilities due to the impairment of their nervous system, they cannot easily control electronic devices. For this reason, automatic speech recognition (ASR) can be a convenient interface for dysarthric speakers to control electronic devices. However, the performance of dysarthric ASR degrades severely in the presence of background noise. Thus, in this paper, we propose a noise reduction method that improves the performance of dysarthric ASR. The proposed method selectively applies either a Wiener filtering algorithm or a Kalman filtering algorithm according to the result of voiced/unvoiced classification. The performance of the proposed method is then compared to a conventional Wiener filtering method in terms of ASR accuracy.
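A minimal sketch of the frame-wise switching idea follows. The voicing rule (energy plus zero-crossing rate), the pairing of filter to frame type, and the placeholder filter functions are all assumptions for illustration; they are not the paper's classifier or enhancement algorithms.

```python
# Route each frame to one of two noise-reduction filters by a crude voicing test.
import numpy as np

def is_voiced(frame, energy_thresh=0.01, zcr_thresh=0.25):
    """Illustrative voicing decision: high energy and low zero-crossing rate."""
    energy = np.mean(frame ** 2)
    zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0
    return energy > energy_thresh and zcr < zcr_thresh

def enhance(signal, frame_len, wiener_filter, kalman_filter):
    """wiener_filter / kalman_filter are placeholder callables taking one frame."""
    out = np.array(signal, dtype=float)
    for start in range(0, len(signal) - frame_len + 1, frame_len):
        frame = out[start:start + frame_len]
        filt = wiener_filter if is_voiced(frame) else kalman_filter   # arbitrary pairing
        out[start:start + frame_len] = filt(frame)
    return out
```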
-
Guanwen ZHANG, Jien KATO, Yu WANG, Kenji MASE
Article type: PAPER
Subject area: Image Processing and Video Processing
2014 Volume E97.D Issue 11 Pages 2888-2902
Published: 2014
Released on J-STAGE: November 01, 2014
There are two intrinsic issues in multiple-shot person re-identification: (1) large differences in camera view, illumination, and non-rigid deformation of posture make the intra-class variance even larger than the inter-class variance; (2) only a small amount of training data is available for learning tasks in a realistic re-identification scenario. In our previous work, we proposed a local distance comparison framework to deal with the first issue. In this paper, to deal with the second issue, i.e., to derive a reliable distance metric from limited training data, we propose an adaptive learning method that learns an adaptive distance metric, integrating prior knowledge learned from a large existing auxiliary dataset with task-specific information extracted from a much smaller training dataset. Experimental results on several public benchmark datasets show that, combined with the local distance comparison framework, our adaptive learning method is superior to conventional approaches.
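One simple way to picture such adaptation (assumptions mine, not the paper's exact rule) is to blend a Mahalanobis-style metric learned on the auxiliary dataset with one estimated from the small task-specific set:

```python
# Sketch: adapted metric as a convex combination of a prior metric (auxiliary
# data) and a task-specific metric (small training set); alpha is hypothetical.
import numpy as np

def adaptive_distance(x, y, M_prior, M_task, alpha=0.7):
    M = alpha * M_prior + (1.0 - alpha) * M_task   # adapted metric
    d = x - y
    return float(d @ M @ d)

# Toy usage with 3-dimensional features and arbitrary PSD matrices.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)); M_prior = A @ A.T
B = rng.standard_normal((3, 3)); M_task = B @ B.T
x, y = rng.standard_normal(3), rng.standard_normal(3)
print(adaptive_distance(x, y, M_prior, M_task))
```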
-
Yun SHEN, Yitong LIU, Jing LIU, Hongwen YANG, Dacheng YANG
Article type: PAPER
Subject area: Image Processing and Video Processing
2014 Volume E97.D Issue 11 Pages 2903-2911
Published: 2014
Released on J-STAGE: November 01, 2014
In this paper, we design an Unequal Error Protection (UEP) rateless code with a special coding graph and use it to build HASUR, a novel HTTP adaptive streaming scheme based on UEP rateless codes. The designed UEP rateless code provides high diversity in decoding probability and priority for data at different importance levels, with an overhead smaller than 0.27. By adopting this UEP rateless channel coding together with scalable video source coding, HASUR ensures that symbols providing the basic quality are decoded first, guaranteeing a fluent playback experience. It also provides multiple layers to deliver the most suitable quality under fluctuating bandwidth and packet loss rate (PLR) without estimating them in advance. We evaluate HASUR against alternative solutions. Simulation results show that HASUR provides higher video quality and adapts better to bandwidth and PLR than two other commercial schemes under end-to-end transmission.
-
Wenji YANG, Wei HUANG, Shanxue CHEN
Article type: PAPER
Subject area: Image Processing and Video Processing
2014 Volume E97.D Issue 11 Pages 2912-2918
Published: 2014
Released on J-STAGE: November 01, 2014
Arterial spin labeling (ASL) is a non-invasive magnetic resonance imaging (MRI) method that can provide direct and quantitative measurements of cerebral blood flow (CBF) in scanned patients. ASL can be utilized as an imaging modality to detect Alzheimer's disease (AD), as brain atrophy in AD patients is revealed by low CBF values in certain brain regions. However, partial volume effects (PVE), which are mainly caused by signal cross-contamination due to voxel heterogeneity and the limited spatial resolution of ASL images, often prevent CBF from being precisely measured with ASL. In this study, a novel PVE correction method based on pixel-wise voxels in ASL images is proposed; it handles well the blurring and loss of brain detail that affect conventional PVE correction methods. Dozens of comparison experiments and statistical analyses also suggest that the proposed method is superior to other PVE correction methods in AD diagnosis based on real patient data.
-
Sunghoon JUNG, Minhwan KIM
Article type: PAPER
Subject area: Image Recognition, Computer Vision
2014 Volume E97.D Issue 11 Pages 2919-2934
Published: 2014
Released on J-STAGE: November 01, 2014
This paper proposes a novel method for determining a three-dimensional (3D) bounding box to estimate the pose (position and orientation) and size of a 3D object corresponding to a segmented object region in an image acquired by a single calibrated camera. The method is designed to work on an object on the ground and to determine a bounding box aligned to the direction of the object, thereby reducing the number of degrees of freedom in localizing the bounding box from 9 to 5. Observations on the structural properties of object regions back-projected onto the ground are presented, which are useful for determining the object points expected to lie on the ground. A suitable base is then estimated from the expected on-ground object points by applying an assumption of bilateral symmetry to them. A bounding box with this base is finally constructed by determining its height such that the back-projection of the constructed box onto the ground minimally encloses the back-projection of the given object region. Through experiments with 3D-modelled objects and real objects, we found that a bounding box aligned to the dominant direction estimated from edges with a common direction looks natural, and that the accuracy of the pose and size is sufficient for localizing actual on-ground objects in an industrial working space. The proposed method is expected to be useful in the fields of smart surveillance and autonomous navigation.
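The back-projection step relies on standard calibrated-camera geometry. The sketch below shows only that generic building block, intersecting a pixel ray with the ground plane z = 0; it is not the paper's full 5-DoF box search, and K, R, t are the usual intrinsic and extrinsic parameters:

```python
# Back-project an image point onto the ground plane z = 0 for a calibrated
# pinhole camera with intrinsics K and extrinsics (R, t).
import numpy as np

def backproject_to_ground(u, v, K, R, t):
    """Return the 3D point on z = 0 seen at pixel (u, v)."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray direction in camera frame
    ray_world = R.T @ ray_cam                             # rotate into world frame
    C = -R.T @ t                                          # camera center in world frame
    s = -C[2] / ray_world[2]                              # scale so the point lands on z = 0
    return C + s * ray_world
```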
-
Wei LI, Masayuki MUKUNOKI, Yinghui KUANG, Yang WU, Michihiko MINOH
Article type: PAPER
Subject area: Image Recognition, Computer Vision
2014 Volume E97.D Issue 11 Pages 2935-2946
Published: 2014
Released on J-STAGE: November 01, 2014
Re-identifying the same person in different images is a distinct challenge for visual surveillance systems. Building an accurate correspondence between highly variable images requires a suitable dissimilarity measure. To date, most existing measures have used an adapted distance based on a learned metric. Unfortunately, real-world human image data, which tends to show large intra-class variations and small inter-class differences, continues to prevent these measures from achieving satisfactory re-identification performance. Recognizing that the neighboring distribution can provide additional useful information to help handle the deviation of the samples to be measured, we propose a novel dissimilarity measure based on neighborhood-wise relative information, which transfers the effectiveness of well-distributed samples to badly-distributed samples so that intra-class dissimilarities become smaller than inter-class dissimilarities in a learned discriminative space. The effectiveness of this method is demonstrated by explanation and experimentation.
-
Kazuya TAKAGI, Satoshi KONDO, Kensuke NAKAMURA, Mitsuyoshi TAKIGUCHI
Article type: PAPER
Subject area: Biological Engineering
2014 Volume E97.D Issue 11 Pages 2947-2954
Published: 2014
Released on J-STAGE: November 01, 2014
One of the major applications of contrast-enhanced ultrasound (CEUS) is lesion classification. After contrast agents are administered, it is possible to identify a lesion type from its enhancement pattern. However, CEUS image reading is not easy because there are various types of enhancement patterns even for the same type of lesion, and clear classification criteria have not yet been defined. Some studies have used conventional time intensity curves (TICs), which show the vessel dynamics of a lesion. It is possible to predict lesion type from TIC parameters such as the coefficients obtained by curve fitting, peak intensity, flow rate, and time to peak. However, these parameters do not always provide sufficient accuracy. In this paper, we introduce 1D Haar-like features that describe intensity changes in a TIC and adopt the AdaBoost machine learning technique, which makes it easy to understand which features are useful. Hyperparameters of the weak classifiers, e.g., the step size of the Haar-like filter length and the threshold on the filter output, are optimized by searching for the parameters that give the best accuracy. We evaluate the proposed method using 36 focal splenic lesions in canines, 16 of which were benign and 20 malignant. The accuracies were 91.7% (33/36) when inspected by an experienced veterinarian, 75.0% (27/36) with linear discriminant analysis (LDA) using three conventional TIC parameters (time to peak, area under the curve, and peak intensity), and 91.7% (33/36) with the proposed method. A McNemar test between the proposed method and LDA gives a p-value of less than 0.05, showing that the difference between the proposed method and the conventional TIC analysis using LDA is statistically significant.
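A 1D Haar-like feature on a TIC is essentially the difference between the mean intensities of two adjacent windows, which responds to local rises or falls of enhancement. The sketch below enumerates such responses as a weak-learner pool; the exact window placements and thresholds tuned in the paper are not reproduced:

```python
# 1D Haar-like responses over a time-intensity curve (TIC).
import numpy as np

def haar_1d(tic, start, length):
    """Two-rectangle 1D Haar-like response at `start` with window size `length`."""
    left = tic[start:start + length]
    right = tic[start + length:start + 2 * length]
    return float(np.mean(right) - np.mean(left))

def haar_feature_pool(tic, lengths):
    """All (position, length) responses -- the pool a booster like AdaBoost selects from."""
    feats = []
    for L in lengths:
        for s in range(0, len(tic) - 2 * L + 1):
            feats.append(((s, L), haar_1d(tic, s, L)))
    return feats
```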
-
Akihiro FUJII, Osni MARQUES
Article type: LETTER
Subject area: Computer System
2014 Volume E97.D Issue 11 Pages 2955-2958
Published: 2014
Released on J-STAGE: November 01, 2014
Communication costs have become a performance bottleneck in many applications, and are a major issue for high performance computing on massively parallel machines. This paper proposes a halo exchange method for unstructured sparse matrix-vector products within the algebraic multigrid method, and evaluates it on a supercomputer with mesh/torus networks. In our numerical tests with a Poisson problem, the proposed method accelerates the linear solver by more than 14 times on 23,040 cores.
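For context, the "halo" of a distributed sparse matrix-vector product is the set of off-process vector entries a rank must receive before computing its local rows. The sketch below only identifies that set; how the exchange is scheduled on a mesh/torus network, which is the letter's contribution, is not reproduced:

```python
# Which columns does my partition reference that other ranks own?
import numpy as np
from scipy.sparse import csr_matrix

def halo_indices(local_rows_csr, owned_cols):
    """Columns referenced by my local rows but owned by other ranks."""
    referenced = set(local_rows_csr.indices.tolist())
    return sorted(referenced - set(owned_cols))

# A rank owning rows/cols {0, 1} of a 4x4 matrix also needs x[2] and x[3].
A_local = csr_matrix(np.array([[ 4.0, -1.0, -1.0,  0.0],
                               [-1.0,  4.0,  0.0, -1.0]]))
print(halo_indices(A_local, owned_cols=[0, 1]))   # -> [2, 3]
```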
-
Yoji YAMATO, Naoko SHIGEMATSU, Norihiro MIURA
Article type: LETTER
Subject area: Software Engineering
2014 Volume E97.D Issue 11 Pages 2959-2962
Published: 2014
Released on J-STAGE: November 01, 2014
In this paper, we evaluate a method of agile software development for the development of a carrier Cloud service platform. Agile software development is generally said to be suitable for small-scale development, but we adopted it for a project with more than 30 members. When adopting agile development, we enabled automatic regression tests for each iteration so that we could start our Cloud service sufficiently fast. We compared and evaluated software reliability growth curves, regression test effort, and bug causes against waterfall development.
-
Jangyoung KIM
Article type: LETTER
Subject area: Information Network
2014 Volume E97.D Issue 11 Pages 2963-2966
Published: 2014
Released on J-STAGE: November 01, 2014
This paper presents a prediction model based on historical data to achieve optimal values of pipelining, concurrency, and parallelism (PCP) for GridFTP data transfers in Cloud systems. Setting the correct values for these three parameters is crucial for achieving high throughput in end-to-end data movement. However, predicting and setting the optimal values for these parameters is a challenging task, especially under shared and non-predictive network conditions. Several factors can affect the optimal values of these parameters, such as background network traffic, available bandwidth, Round-Trip Time (RTT), TCP buffer size, and file size. Existing models either fail to provide accurate predictions or come with very high prediction overheads. The author shows that the new model based on historical data can achieve high accuracy with low overhead.
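A hedged sketch of history-based parameter prediction follows: reuse the PCP triple that performed best among past transfers whose conditions most resemble the current one. The letter's actual model may be more elaborate; the distance normalization and feature set here are my assumptions.

```python
# Pick the best-performing PCP setting among the k most similar past transfers.
import math

def predict_pcp(history, rtt_ms, bw_mbps, file_mb, k=3):
    """history: list of (rtt_ms, bw_mbps, file_mb, (p, c, pp), throughput_mbps)."""
    def dist(h):
        return math.sqrt(((h[0] - rtt_ms) / 100.0) ** 2 +
                         ((h[1] - bw_mbps) / 1000.0) ** 2 +
                         ((h[2] - file_mb) / 1024.0) ** 2)
    nearest = sorted(history, key=dist)[:k]
    return max(nearest, key=lambda h: h[4])[3]   # setting with the best throughput

history = [(80, 1000, 512, (4, 2, 4), 620.0),
           (75, 1000, 600, (8, 4, 2), 710.0),
           (10,  100,  64, (2, 1, 1),  90.0)]
print(predict_pcp(history, rtt_ms=78, bw_mbps=1000, file_mb=550))   # -> (8, 4, 2)
```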
-
Sungwon LEE, Dongkyun KIM
Article type: LETTER
Subject area: Information Network
2014 Volume E97.D Issue 11 Pages 2967-2970
Published: 2014
Released on J-STAGE: November 01, 2014
In typical end-to-end recovery protocols, an ACK segment is delivered to the source node over a single path, so ACK loss requires the source to retransmit the corresponding data packet. However, in underwater wireless sensor networks, which prefer flooding-based routing protocols, the source node has redundant chances to receive the ACK segment, since multiple copies of the ACK segment can arrive at the source node along multiple paths. Because existing retransmission timeout (RTO) calculation algorithms do not consider these inherent features of the underlying routing protocols, spurious packet retransmissions are unavoidable. Hence, in this letter, we propose a new ACK loss-aware RTO calculation algorithm, which utilizes statistical ACK arrival times and the ACK loss rate, in order to reduce such retransmissions.
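The sketch below is one plausible way to combine the two statistics, not the letter's exact formula: smooth the ACK arrival times as in a conventional RTO estimator, then inflate the timeout when the measured ACK loss rate is low (since more redundant ACK copies tend to arrive, a later copy may still be in flight and waiting avoids a spurious retransmission).

```python
# Illustrative ACK loss-aware RTO estimator (weights and adjustment are mine).
class AckAwareRTO:
    def __init__(self, alpha=0.125, beta=0.25):
        self.alpha, self.beta = alpha, beta
        self.srtt = None          # smoothed ACK arrival time
        self.rttvar = 0.0

    def update(self, ack_rtt, ack_loss_rate):
        if self.srtt is None:
            self.srtt, self.rttvar = ack_rtt, ack_rtt / 2.0
        else:
            self.rttvar = (1 - self.beta) * self.rttvar + self.beta * abs(self.srtt - ack_rtt)
            self.srtt = (1 - self.alpha) * self.srtt + self.alpha * ack_rtt
        base = self.srtt + 4.0 * self.rttvar
        # The lower the ACK loss rate, the more patient the timeout can be.
        return base * (1.0 + (1.0 - ack_loss_rate))

rto = AckAwareRTO()
print(rto.update(ack_rtt=2.5, ack_loss_rate=0.1))   # toy numbers, in seconds
```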
-
Chanho JUNG
Article type: LETTER
Subject area: Image Processing and Video Processing
2014 Volume E97.D Issue 11 Pages 2971-2973
Published: 2014
Released on J-STAGE: November 01, 2014
Integrating a visual attention (VA) model into an objective image quality metric is a rapidly evolving area in modern image quality assessment (IQA) research because of the significant opportunities the VA information presents. So far, the literature has suggested using either a task-free saliency map or a quality-task saliency map for integration into the quality metric. This paper presents a hybrid integration approach that takes advantage of both saliency maps. We compare our hybrid integration scheme with existing integration schemes using simple quality metrics. Results show that the proposed method achieves better prediction accuracy than the previous techniques.
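A minimal sketch of saliency-weighted quality pooling with a hybrid map is shown below; the blending weight and the use of plain squared error as the local distortion are assumptions for illustration, not the paper's metric:

```python
# Pool a per-pixel distortion map with a hybrid (task-free + quality-task) saliency map.
import numpy as np

def hybrid_weighted_mse(ref, dist, sal_task_free, sal_quality_task, w=0.5):
    sal = w * sal_task_free + (1.0 - w) * sal_quality_task   # hybrid saliency map
    sal = sal / (sal.sum() + 1e-12)                           # normalize to pooling weights
    err = (ref.astype(float) - dist.astype(float)) ** 2       # simple local distortion
    return float((sal * err).sum())                           # saliency-pooled MSE
```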
-
Haoqi XIONG, Jingjing GAO, Chongjin ZHU, Yanling LI, Shu ZHANG, Mei XI ...
Article type: LETTER
Subject area: Biological Engineering
2014 Volume E97.D Issue 11 Pages 2974-2978
Published: 2014
Released on J-STAGE: November 01, 2014
MR image segmentation is a challenging problem because of intensity inhomogeneity. Many existing methods do not achieve the expected segmentation, and their implementations are usually complicated. We therefore interleave an extended Otsu segmentation with bias field estimation in an energy minimization framework. With the proposed method, the optimal segmentation and the bias field estimate are obtained simultaneously through reciprocal iteration. Applied to synthetic and real images, our method not only produces the required classification but also proves superior to the baseline methods according to a performance analysis based on JS metrics.
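For reference, a plain Otsu threshold on an intensity histogram is shown below; this is only the classical building block that the letter extends and interleaves with bias field estimation (the bias field part is not shown):

```python
# Classical Otsu thresholding: maximize between-class variance over the histogram.
import numpy as np

def otsu_threshold(image, bins=256):
    hist, edges = np.histogram(image.ravel(), bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    best_t, best_var = centers[0], -1.0
    for k in range(1, bins):
        w0, w1 = p[:k].sum(), p[k:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (p[:k] * centers[:k]).sum() / w0
        mu1 = (p[k:] * centers[k:]).sum() / w1
        between = w0 * w1 * (mu0 - mu1) ** 2      # between-class variance
        if between > best_var:
            best_var, best_t = between, centers[k]
    return best_t
```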