-
Ken-ichiro MORIDOMI, Kohei HATANO, Eiji TAKIMOTO
Article type: PAPER
Subject area: Fundamentals of Information Systems
2018 Volume E101.D Issue 6 Pages 1511-1520
Published: June 01, 2018
Released on J-STAGE: June 01, 2018
We consider online linear optimization over symmetric positive semi-definite matrices, which has various applications including online collaborative filtering. The problem is formulated as a repeated game between the algorithm and the adversary, where in each round t the algorithm and the adversary choose matrices Xt and Lt, respectively, and then the algorithm suffers a loss given by the Frobenius inner product of Xt and Lt. The goal of the algorithm is to minimize the cumulative loss. We can employ a standard framework called Follow the Regularized Leader (FTRL) for designing algorithms, where we need to choose an appropriate regularization function to obtain a good performance guarantee. We show that the log-determinant regularization works better than other popular regularization functions in the case where the loss matrices Lt are all sparse. Using this property, we show that our algorithm achieves an optimal performance guarantee for online collaborative filtering. The technical contribution of the paper is a new technique for deriving performance bounds that exploits the strong convexity of the log-determinant with respect to the loss matrices, whereas in previous analyses strong convexity is defined with respect to a norm. Intuitively, skipping the norm analysis yields the improved bound. Moreover, we apply our method to online linear optimization over vectors and show that FTRL with the Burg entropy regularizer, the analogue of the log-determinant regularizer in the vector case, works well.
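Below is a minimal Python sketch of the FTRL scheme described in the abstract, restricted to the simplest unconstrained setting in which the log-determinant regularizer has a closed-form minimizer; the learning rate, the ridge term, and the rank-1 sparse loss matrices are illustrative assumptions, not the paper's exact formulation.

import numpy as np

def ftrl_logdet(losses, eta=0.1, ridge=0.1):
    """Follow the Regularized Leader with a log-determinant regularizer.

    At round t the play is X_t = argmin_X <X, S_{t-1}> - (1/eta) * log det X,
    whose unconstrained minimizer over positive definite X is
    X_t = (1/eta) * inv(S_{t-1}); a small ridge keeps S_{t-1} invertible.
    """
    d = losses[0].shape[0]
    S = ridge * np.eye(d)              # cumulative loss matrix, kept positive definite
    total_loss = 0.0
    for L in losses:
        X = np.linalg.inv(S) / eta     # FTRL play for the current round
        total_loss += np.trace(X.T @ L)  # Frobenius inner product <X, L>
        S += L                         # accumulate the adversary's loss matrix
    return total_loss

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Sparse rank-1 PSD losses (an assumption that keeps S invertible in this sketch).
    losses = []
    for _ in range(100):
        v = np.zeros(5)
        v[rng.integers(0, 5, size=2)] = rng.uniform(-1, 1, size=2)
        losses.append(np.outer(v, v))
    print("cumulative loss:", ftrl_logdet(losses))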
-
Naoki FUJIEDA, Kiyohiro SATO, Ryodai IWAMOTO, Shuichi ICHIKAWA
Article type: PAPER
Subject area: Computer System
2018 Volume E101.D Issue 6 Pages 1521-1531
Published: June 01, 2018
Released on J-STAGE: June 01, 2018
Instruction set randomization (ISR) is a cost-effective obfuscation technique that modifies or enhances the relationship between instructions and machine languages. An Instruction Register File (IRF), a list of frequently used instructions, can be used for ISR by providing a means of indirect access to them. This study examines an IRF that integrates a positional register, which was proposed as a supplementary unit of the IRF, for the sake of tamper resistance. According to our evaluation, with a new design for the contents of the positional register, the measure of tamper resistance increased by up to 8.2%, which corresponds to a 32.2% increase in the size of the IRF. The logic elements added by the positional register amounted to 3.5% of the baseline processor.
-
Takuya KOJIMA, Naoki ANDO, Hayate OKUHARA, Ng. Anh Vu DOAN, Hideharu A ...
Article type: PAPER
Subject area: Computer System
2018 Volume E101.D Issue 6 Pages 1532-1540
Published: June 01, 2018
Released on J-STAGE: June 01, 2018
Variable Pipeline Cool Mega Array (VPCMA) is a low-power Coarse Grained Reconfigurable Architecture (CGRA) based on the concept of CMA (Cool Mega Array). It provides a pipeline structure in the PE array that can be configured to fit the target algorithms and the required performance. VPCMA also uses the Silicon On Thin Buried oxide (SOTB) technology, a type of Fully Depleted Silicon On Insulator (FDSOI), so its body bias voltage can be controlled to balance performance and leakage power. In this paper, we study the optimization of the VPCMA body bias while simultaneously considering its variable pipeline structure. Through evaluations, we observe that it is possible to achieve average energy reductions, for the studied applications, of 17.75% and 10.49% compared to the zero-bias case (no body bias control) and the uniform case (a single bias for the whole PE array), respectively, while respecting performance constraints. Besides, with appropriate body bias control it is possible to extend the achievable performance, hence enabling broader trade-off analyses between consumption and performance. By considering the dynamic power as well as the static power, a more appropriate pipeline structure and body bias voltage can be obtained. In addition, when VDD control is integrated, higher performance can be achieved with a steady increase of the power. These promising results show that an adequate optimization technique for body bias control that simultaneously considers pipeline structures can not only enable greater power reduction than previous methods, but also allow more trade-off analysis possibilities.
-
Hyun Seung SON, R. Young Chul KIM
Article type: PAPER
Subject area: Software Engineering
2018 Volume E101.D Issue 6 Pages 1541-1551
Published: June 01, 2018
Released on J-STAGE: June 01, 2018
Traditional tests are planned and designed in the early stages, but test cases can only be executed after the source code has been implemented. Because of this time gap between the design stage and the testing stage, a software design error may be found too late. To solve this problem, this paper proposes a virtual pre-testing process. The virtual pre-testing process can find software and testing errors before the development stage by automatically generating and executing test cases with modeling and simulation (M&S) in a virtual environment. The first part of this method creates test cases from a state transition tree derived from a state diagram, covering state, transition, instruction-pair, and all-path coverage. The second part models and simulates a virtual target, which is then pre-tested with the generated test cases. In other words, the generated test cases are automatically transformed into an event list, which is executed against the simulated target within the virtual environment. As a result, design and test errors can be found at the early stages of the development cycle, which in turn reduces development time and cost as much as possible.
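As a rough illustration of the first step (deriving test cases from a state diagram), the Python sketch below enumerates transition paths of a small state machine; the example states, events, and the simple loop-coverage rule are hypothetical, not the authors' exact tree construction.

from collections import defaultdict

def all_transition_paths(transitions, start, max_depth=10):
    """Enumerate transition sequences (test cases) from a state diagram.

    Each test case is a list of (state, event, next_state) triples; a path ends
    at a state with no unused outgoing transition or at the depth limit.
    """
    out = defaultdict(list)
    for src, event, dst in transitions:
        out[src].append((src, event, dst))

    paths = []

    def walk(state, path, used):
        nexts = [t for t in out[state] if t not in used]
        if not nexts or len(path) >= max_depth:
            paths.append(list(path))
            return
        for t in nexts:
            walk(t[2], path + [t], used | {t})

    walk(start, [], set())
    return paths

if __name__ == "__main__":
    transitions = [
        ("Idle", "start", "Running"),
        ("Running", "pause", "Paused"),
        ("Paused", "resume", "Running"),
        ("Running", "stop", "Idle"),
    ]
    for case in all_transition_paths(transitions, "Idle"):
        print(" -> ".join(f"{s}--{e}-->{d}" for s, e, d in case))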
-
Yong WANG, Xiaoran DUAN, Xiaodong YANG, Yiquan ZHANG, Xiaosong ZHANG
Article type: PAPER
Subject area: Data Engineering, Web Information Systems
2018 Volume E101.D Issue 6 Pages 1552-1561
Published: June 01, 2018
Released on J-STAGE: June 01, 2018
Geosocial networking allows users to interact with respect to their current locations, which enables a group of users to determine where to meet. This calls for techniques that support processing of Multiple-user Location-based Keyword (MULK) queries, which return a set of Point-of-Interests (POIs) that are 'close' to the locations of the users in a group and can provide them with potential options at the lowest expense (e.g., minimizing travel distance). In this paper, we formalize the MULK query and propose a dynamic programming-based algorithm to find the optimal result set. Further, we design two approximation algorithms to improve MULK query processing efficiency. The experimental evaluations show that our solutions are feasible and efficient under various parameter settings.
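The paper's dynamic program and approximation algorithms are not reproduced here, but the following brute-force Python sketch illustrates the query semantics under a simplified objective: pick the single POI whose keywords cover the query and whose total travel distance to the group is minimal. The names, the Euclidean-distance cost, and the single-POI result are illustrative assumptions.

import math

def mulk_brute_force(users, pois, keywords):
    """Return the POI minimizing total travel distance over all users,
    among POIs whose keyword set covers the query keywords.

    users: list of (x, y); pois: list of (x, y, set_of_keywords).
    """
    best, best_cost = None, math.inf
    for (px, py, kw) in pois:
        if not keywords <= kw:          # must cover every query keyword
            continue
        cost = sum(math.hypot(px - ux, py - uy) for ux, uy in users)
        if cost < best_cost:
            best, best_cost = (px, py, kw), cost
    return best, best_cost

if __name__ == "__main__":
    users = [(0, 0), (4, 0), (2, 3)]
    pois = [(2, 1, {"coffee", "wifi"}), (5, 5, {"coffee"}), (1, 1, {"wifi"})]
    print(mulk_brute_force(users, pois, {"coffee"}))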
-
Ju Yong CHANG, Yong Seok HEO
Article type: PAPER
Subject area: Pattern Recognition
2018 Volume E101.D Issue 6 Pages 1562-1571
Published: June 01, 2018
Released on J-STAGE: June 01, 2018
We present a new action classification method for skeletal sequence data. The proposed method is based on simple nonparametric feature matching without a learning process. We first augment the training dataset to implicitly construct an exponentially increasing number of training sequences, which can be used to improve the generalization power of the proposed action classifier. These augmented training sequences are matched to the test sequence with a relaxed dynamic time warping (DTW) technique. Our relaxed formulation allows the proposed method to run faster and more efficiently than the conventional DTW-based method on a non-augmented dataset. Experimental results show that the proposed approach produces effective action classification results on real datasets of various scales.
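A compact Python sketch of the nonparametric matching idea: standard dynamic time warping between frame-feature sequences followed by nearest-neighbor classification. The paper's relaxation and data augmentation are omitted, and the per-frame Euclidean cost and synthetic sequences are assumptions.

import numpy as np

def dtw_distance(a, b):
    """Classic DTW between two sequences of frame vectors (T1 x D, T2 x D)."""
    t1, t2 = len(a), len(b)
    D = np.full((t1 + 1, t2 + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, t1 + 1):
        for j in range(1, t2 + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[t1, t2]

def classify(test_seq, train_seqs, train_labels):
    """1-nearest-neighbor action classification under DTW distance."""
    dists = [dtw_distance(test_seq, s) for s in train_seqs]
    return train_labels[int(np.argmin(dists))]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    walk = [rng.standard_normal((30, 6)) + 1.0 for _ in range(3)]
    jump = [rng.standard_normal((25, 6)) - 1.0 for _ in range(3)]
    train, labels = walk + jump, ["walk"] * 3 + ["jump"] * 3
    test = rng.standard_normal((28, 6)) + 1.0
    print(classify(test, train, labels))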
-
Jinwei WANG, Huazhi SUN
Article type: PAPER
Subject area: Pattern Recognition
2018 Volume E101.D Issue 6 Pages 1572-1580
Published: June 01, 2018
Released on J-STAGE: June 01, 2018
Automatically recognizing pain and estimating pain intensity is an emerging research area with promising applications in medicine and healthcare; it plays a crucial role in the diagnosis and treatment of patients who have a limited ability to communicate verbally, and it remains a challenge in pattern recognition. Recently, deep learning has achieved impressive results in many domains. However, deep architectures require a significant amount of labeled data for training, and they may fail to outperform conventional handcrafted features when data are insufficient, which is also the problem faced by pain detection. Furthermore, recent studies show that handcrafted features may provide information complementary to deep-learned features; hence, combining these features may result in improved performance. Motivated by the above considerations, in this paper we propose a method based on the combination of deep spatiotemporal and handcrafted features for pain intensity estimation. We use C3D, a deep 3-dimensional convolutional network that takes a continuous sequence of video frames as input, to extract spatiotemporal facial features; C3D models the appearance and motion of videos simultaneously. For the handcrafted features, we extract geometric information by computing the distance between normalized facial landmarks in each frame and those of the mean face shape, and we extract appearance information using histogram of oriented gradients (HOG) features around normalized facial landmarks in each frame. Two levels of SVRs are trained using the spatiotemporal, geometric, and appearance features to obtain the estimation results. We tested the proposed method on the UNBC-McMaster shoulder pain expression archive database and obtained experimental results that outperform the current state of the art.
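A small Python sketch of the handcrafted geometric feature described above: per-frame distances between normalized facial landmarks and the landmarks of a mean face shape. The 68-point layout, the eye-landmark indices, and the normalization by inter-ocular distance are assumptions for illustration; the C3D and HOG branches and the SVR stages are not shown.

import numpy as np

def normalize_landmarks(pts, left_eye_idx=0, right_eye_idx=1):
    """Center landmarks and scale by inter-ocular distance (assumed eye indices)."""
    pts = pts - pts.mean(axis=0)
    iod = np.linalg.norm(pts[left_eye_idx] - pts[right_eye_idx])
    return pts / max(iod, 1e-8)

def geometric_feature(frame_landmarks, mean_shape):
    """Distance of each normalized landmark to the mean face shape (one frame)."""
    a = normalize_landmarks(frame_landmarks)
    b = normalize_landmarks(mean_shape)
    return np.linalg.norm(a - b, axis=1)    # one distance per landmark

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    mean_shape = rng.uniform(0, 1, size=(68, 2))         # hypothetical 68-point layout
    frame = mean_shape + 0.02 * rng.standard_normal((68, 2))
    print(geometric_feature(frame, mean_shape).shape)    # (68,)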
-
Ryo MASUMURA, Taichi ASAMI, Takanobu OBA, Hirokazu MASATAKI, Sumitaka ...
Article type: PAPER
Subject area: Speech and Hearing
2018 Volume E101.D Issue 6 Pages 1581-1590
Published: June 01, 2018
Released on J-STAGE: June 01, 2018
This paper proposes a novel domain adaptation method that can utilize out-of-domain text resources and partially domain-matched text resources in language modeling. A major problem in domain adaptation is that it is hard to obtain adequate adaptation effects from out-of-domain text resources. To tackle this problem, our idea is to carry out model merging in a latent variable space created from latent words language models (LWLMs). The latent variables in LWLMs are represented as specific words selected from the observed word space, so LWLMs can share a common latent variable space. This enables flexible mixture modeling that takes the latent variable space into account. This paper presents two types of mixture modeling, i.e., LWLM mixture models and LWLM cross-mixture models. The LWLM mixture models perform mixture modeling in the latent word space to mitigate the domain mismatch problem. Furthermore, in the LWLM cross-mixture models, LMs individually constructed from partially matched text resources are split into two element models, each of which can be subjected to mixture modeling. For both approaches, this paper also describes methods to optimize the mixture weights using a validation data set. Experiments show that mixing in the latent word space achieves performance improvements for both the target domain and out-of-domain data compared with mixing in the observed word space.
-
Aiying ZHANG, Chongjia NI
Article type: PAPER
Subject area: Speech and Hearing
2018 Volume E101.D Issue 6 Pages 1591-1604
Published: June 01, 2018
Released on J-STAGE: June 01, 2018
Automatic speech recognition (ASR) and keyword search (KWS) have increasingly found their way into our everyday lives, and their success can be attributed to many factors, among which the large amount of speech data used for acoustic modeling is key. However, it is difficult and time-consuming to acquire large amounts of transcribed speech data for some languages, especially low-resource languages. Under low-resource conditions, it therefore becomes important which data are transcribed for acoustic modeling in order to improve the performance of ASR and KWS. Acoustic data can be used for acoustic modeling in two ways: using target-language data, or using large amounts of data from other source languages for cross-lingual transfer. In this paper, we propose approaches for efficiently selecting acoustic data for acoustic modeling. For target-language data, we propose a submodular-based unsupervised data selection approach, which selects more informative and representative utterances for manual transcription. For data from other source languages, we propose a submodular multilingual data selection approach based on utterances highly misclassified as the target language, and a knowledge-based group multilingual data selection approach. Using the selected multilingual data to train a multilingual deep neural network for cross-lingual transfer improves the ASR and KWS performance of the target language. Our proposed multilingual data selection approach also outperforms a language-identification-based multilingual data selection approach. In this paper, we further analyze and compare the influence of the language factor and the acoustic factor on ASR and KWS performance. The influence of different amounts of target-language data on ASR and KWS performance under monolingual and cross-lingual conditions is also compared and analyzed, and several significant conclusions are drawn.
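As a generic illustration of submodular data selection, the Python sketch below greedily maximizes a facility-location objective over utterance feature vectors, which favors informative and representative items under a budget. The synthetic features and cosine similarity are assumptions; the paper's exact submodular functions and multilingual selection criteria are not reproduced.

import numpy as np

def greedy_facility_location(features, budget):
    """Greedily pick `budget` utterances maximizing sum_i max_{j in S} sim(i, j).

    The facility-location function is monotone submodular, so the greedy
    solution is within a (1 - 1/e) factor of the optimum.
    """
    norm = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = norm @ norm.T                     # cosine similarity between utterances
    n = len(features)
    selected, best_cover = [], np.zeros(n)
    for _ in range(budget):
        # Marginal gain of adding each remaining candidate.
        gains = np.maximum(sim, best_cover).sum(axis=1) - best_cover.sum()
        gains[selected] = -np.inf
        j = int(np.argmax(gains))
        selected.append(j)
        best_cover = np.maximum(best_cover, sim[j])
    return selected

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    feats = rng.standard_normal((200, 20))  # e.g. per-utterance average acoustic features
    print(greedy_facility_location(feats, budget=5))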
-
RISNANDAR, Masayoshi ARITSUGI
Article type: PAPER
Subject area: Image Processing and Video Processing
2018 Volume E101.D Issue 6 Pages 1605-1620
Published: June 01, 2018
Released on J-STAGE: June 01, 2018
New deblocking (blocking artifact reduction) algorithms based on a nonlinear adaptive soft-threshold anisotropic filter in the wavelet domain are proposed. Our deblocking algorithm uses soft-thresholding, adaptive wavelet directions, an adaptive anisotropic filter, and estimation. The novelties of this paper are an adaptive soft threshold for deblocking and an optimal intersection of confidence intervals (OICI) method for deblocking artifact estimation. The soft-threshold values adapt to the different thresholds of flat areas, texture areas, and blocking artifacts. The OICI is a reconstruction technique for the estimated deblocking artifact that improves the acceptable quality level of the estimate and reduces the execution time of the estimation compared to other methods. Our adaptive OICI method outperforms other adaptive deblocking methods. Our deblocking artifact estimation algorithms achieve up to 98% improvement in MSE, up to 89% in RMSE, and up to 99% in MAE, and we also obtained up to a 77.98% reduction in the computational time of deblocking artifact estimation compared to other methods. We further evaluated shift-and-add algorithms using Euler++ (E++) and Runge-Kutta of order 4++ (RK4++) algorithms, which iterate one step of an ordinary differential equation integration method. Experimental results showed that our E++ and RK4++ algorithms reduce the computational time of shift-and-add, with RK4++ outperforming E++.
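For readers unfamiliar with the core operation, the Python sketch below applies plain soft-thresholding to wavelet detail coefficients of an image using PyWavelets; the wavelet, decomposition level, and fixed threshold are illustrative constants, whereas the paper adapts the threshold to flat, texture, and blocking-artifact regions and adds anisotropic filtering and OICI estimation.

import numpy as np
import pywt

def soft_threshold_denoise(img, wavelet="db4", level=2, thr=10.0):
    """Soft-threshold the detail coefficients of a 2-D wavelet decomposition."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    new_coeffs = [coeffs[0]]                     # keep the approximation band
    for (cH, cV, cD) in coeffs[1:]:              # shrink each detail band
        new_coeffs.append(tuple(pywt.threshold(c, thr, mode="soft")
                                for c in (cH, cV, cD)))
    rec = pywt.waverec2(new_coeffs, wavelet)
    return rec[: img.shape[0], : img.shape[1]]   # crop possible boundary padding

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    img = np.tile(np.linspace(0, 255, 64), (64, 1))       # smooth test image
    degraded = img + 8.0 * rng.standard_normal(img.shape) # stand-in for artifacts
    print(np.abs(soft_threshold_denoise(degraded) - img).mean())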
-
Min WANG, Shudao ZHOU
Article type: PAPER
Subject area: Image Processing and Video Processing
2018 Volume E101.D Issue 6 Pages 1621-1628
Published: June 01, 2018
Released on J-STAGE: June 01, 2018
This paper proposes an image denoising method using singular value decomposition (SVD) with block-rotation-based operations in the wavelet domain. First, we decompose a noisy image into sub-blocks and use the single-level discrete 2-D wavelet transform to decompose each sub-block into a low-frequency part and high-frequency parts. Then, we use SVD and rotation-based SVD with the rank-1 approximation to filter the noise in the different high-frequency parts and obtain the denoised sub-blocks. Finally, we reconstruct each sub-block from the low-frequency part and the filtered high-frequency parts by the inverse wavelet transform, and reassemble the denoised sub-blocks to obtain the final denoised image. Experiments show the effectiveness of this method compared with relevant methods.
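A stripped-down Python sketch of the rank-1 SVD filtering idea, applied directly to image sub-blocks without the wavelet decomposition or the rotation-based variant described in the paper; the block size and the test image are illustrative.

import numpy as np

def rank1_block_denoise(img, block=8):
    """Replace each block by its rank-1 SVD approximation (keeps the dominant structure)."""
    h, w = img.shape
    out = img.copy()
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            B = img[i:i + block, j:j + block]
            U, s, Vt = np.linalg.svd(B, full_matrices=False)
            out[i:i + block, j:j + block] = s[0] * np.outer(U[:, 0], Vt[0])
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    clean = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64)) * 255
    noisy = clean + 10 * rng.standard_normal(clean.shape)
    print("noisy MAE:", np.abs(noisy - clean).mean(),
          "rank-1 MAE:", np.abs(rank1_block_denoise(noisy) - clean).mean())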
-
Muneki YASUDA, Junpei WATANABE, Shun KATAOKA, Kazuyuki TANAKA
Article type: PAPER
Subject area: Image Processing and Video Processing
2018 Volume E101.D Issue 6 Pages 1629-1639
Published: June 01, 2018
Released on J-STAGE: June 01, 2018
In this paper, we consider Bayesian image denoising based on a Gaussian Markov random field (GMRF) model, for which we propose a new algorithm. Our method can solve Bayesian image denoising problems, including hyperparameter estimation, in O(n) time, where n is the number of pixels in a given image. In terms of the order of computational time, this is a state-of-the-art algorithm for the present problem setting. Moreover, the results of our numerical experiments show that our method is in fact effective in practice.
-
Yo UMEKI, Taichi YOSHIDA, Masahiro IWAHASHI
Article type: PAPER
Subject area: Image Processing and Video Processing
2018 Volume E101.D Issue 6 Pages 1640-1647
Published: June 01, 2018
Released on J-STAGE: June 01, 2018
In this paper, we propose a method of salient object detection based on distributed seeds and a co-propagation of seed information. Salient object detection is a technique that estimates objects important to humans by calculating saliency values of pixels. Previous salient object detection methods often produce incorrect saliency values near salient objects in images containing several objects, a phenomenon called the leakage of saliencies. Therefore, a method based on co-propagation, the scale-invariant feature transform, the high-dimensional color transform, and machine learning is proposed to reduce the leakage. First, the proposed method estimates regions clearly located in salient objects and in the background; the resulting seeds are distributed over the image. Next, the saliency information of the seeds is propagated simultaneously, which is referred to as co-propagation. The proposed method can reduce the leakage produced by the above methods because the information co-propagated from the two kinds of seeds collides near the object boundary. Experiments show that the proposed method significantly outperforms state-of-the-art methods in mean absolute error and F-measure, which perceptually reduces the leakage.
-
Naoyuki AWANO, Kana MOROHOSHI
Article type: PAPER
Subject area: Image Recognition, Computer Vision
2018 Volume E101.D Issue 6 Pages 1648-1656
Published: June 01, 2018
Released on J-STAGE: June 01, 2018
Most people are concerned about their appearance, and the easiest way to change one's appearance is to change one's hairstyle. However, for anyone but a professional hairstylist, it is difficult to judge objectively which hairstyle is suitable. Oval faces are generally said to be the ideal facial shape in terms of suitability to various hairstyles. Meanwhile, the field of visual perception (FVP), proposed recently in cognitive science, has attracted attention as a model to represent the visual perception phenomenon. Moreover, a computation model for digital images has been proposed, and it is expected to be used in the quantitative evaluation of sensibility and sensitivity, called “kansei.” Quantitative evaluation of the “goodness of patterns” and the “strength of impressions” by evaluating distributions of the field has been reported. However, it is unknown whether this evaluation method can be generalized to various subjects, because it has only been applied to research subjects such as characters, text, and simple graphics. In this study, we apply the FVP to facial images with various hairstyles for the first time and verify whether it has the potential to evaluate impressions of female faces. Specifically, we verify whether the impressions of facial images that combine various facial shapes and female hairstyles can be represented using the FVP. We prepare many combinational images of facial shapes and hairstyles and conduct a psychological experiment to evaluate their impressions. Moreover, we compute the FVP of each image and propose a novel evaluation method based on analyzing the distributions. The conventional and proposed evaluation values correlated with the psychological evaluation values after normalization, demonstrating the effectiveness of the FVP as an image feature for evaluating faces.
-
Yuto KUROSAKI, Masayoshi OHTA, Hidetaka ITO, Hiroomi HIKAWA
Article type: PAPER
Subject area: Biocybernetics, Neurocomputing
2018 Volume E101.D Issue 6 Pages 1657-1665
Published: June 01, 2018
Released on J-STAGE: June 01, 2018
This paper discusses the effect of pre-grouping on vector classification based on the self-organizing map (SOM). The SOM is an unsupervised learning neural network and is used to form clusters of vectors using its topology-preserving nature. The use of SOMs in practical applications, however, may pose difficulties in achieving high recognition accuracy; for example, in image recognition, accuracy is degraded by variations in lighting conditions. This paper considers the effect of pre-grouping feature vectors in such applications. The proposed pre-grouping functionality is also based on the SOM and is introduced into a new parallel configuration of the previously proposed SOM-Hebb classifiers. The overall system is implemented and applied to position identification from images obtained in indoor and outdoor settings. The system first groups images according to a rough representation of their brightness profiles, and then assigns each SOM-Hebb classifier in the parallel configuration to one of the groups. The recognition parameters of each classifier are tuned for the vectors belonging to its group. A comparison between the recognition systems with and without the grouping shows that the grouping can improve recognition accuracy.
-
Mohamad Sabri bin SINAL, Eiji KAMIOKA
Article type: PAPER
Subject area: Biological Engineering
2018 Volume E101.D Issue 6 Pages 1666-1676
Published: June 01, 2018
Released on J-STAGE: June 01, 2018
Automatic detection of heart cycle abnormalities in long-duration ECG data is a crucial technique for diagnosing heart diseases at an early stage. Concretely, the paroxysmal stage of atrial fibrillation rhythms (ParAF) must be discriminated from normal sinus rhythms (NS). The two waveforms in ECG data are very similar, and thus it is difficult to detect the paroxysmal stage of atrial fibrillation completely. Previous studies have tried to solve this issue, and some of them achieved the discrimination with a high degree of accuracy; however, their accuracies do not reach 100%, and no research has achieved it on long-duration, e.g. 12-hour, ECG data. In this study, a new mechanism to tackle these issues is proposed: the “Door-to-Door” algorithm is introduced to accurately and quickly detect significant peaks of the heart cycle in 12 hours of ECG data and to discriminate obvious ParAF rhythms from NS rhythms. In addition, a quantitative method using an artificial neural network (ANN), which discriminates unobvious ParAF rhythms from NS rhythms, is investigated. The performance evaluation revealed that the Door-to-Door algorithm achieves an accuracy of 100% in detecting the significant peaks of the heart cycle in 17 NS ECG recordings. In addition, it was verified that the ANN-based method achieves an accuracy of 100% in discriminating the paroxysmal stage of 15 atrial fibrillation recordings from the 17 NS recordings. Furthermore, it was confirmed that the computational time of the proposed mechanism is less than half that of the previous study. From these achievements, we conclude that the proposed mechanism can practically be used to diagnose heart diseases at an early stage.
-
Wenjie YU, Xunbo LI, Zhi ZENG, Xiang LI, Jian LIU
Article type: LETTER
Subject area: Fundamentals of Information Systems
2018 Volume E101.D Issue 6 Pages 1677-1681
Published: June 01, 2018
Released on J-STAGE: June 01, 2018
In this paper, the problem of extending the lifetime of wireless sensor networks (WSNs) with redundant sensor nodes deployed in 3D vegetation-covered fields is modeled, including the communication models, the network model, and the energy model. Generally, such a problem cannot be solved directly by a conventional method. Here we propose an Artificial Bee Colony (ABC) based optimal grouping algorithm (ABC-OG) to solve it. The main contribution of the algorithm is to find the optimal number of feasible subsets (FSs) of the WSN and assign them to work in rotation. It is verified that reasonably grouping sensors into FSs evens out the network energy consumption and prolongs the lifetime of the network. To further verify the effectiveness of ABC-OG, two other algorithms are included for comparison. The experimental results show that the proposed ABC-OG algorithm provides better optimization performance.
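The ABC metaheuristic itself is not reproduced, but the greedy Python sketch below illustrates the underlying idea of grouping redundant sensors into disjoint feasible subsets, each covering all monitoring targets, that can be activated in rotation; the disk sensing model, the radius, and the random deployment are illustrative assumptions.

import numpy as np

def greedy_feasible_subsets(sensors, targets, radius):
    """Partition sensors into disjoint subsets, each covering every target.

    Returns a list of subsets (lists of sensor indices); more subsets means a
    longer network lifetime when the subsets are activated in rotation.
    """
    if not targets:
        return []
    cover = {i: {j for j, t in enumerate(targets)
                 if np.linalg.norm(np.asarray(s) - np.asarray(t)) <= radius}
             for i, s in enumerate(sensors)}
    remaining, subsets = set(cover), []
    while True:
        uncovered, chosen, pool = set(range(len(targets))), [], set(remaining)
        while uncovered:
            best = max(pool, key=lambda i: len(cover[i] & uncovered), default=None)
            if best is None or not cover[best] & uncovered:
                return subsets           # no further full cover is possible
            chosen.append(best)
            uncovered -= cover[best]
            pool.discard(best)
        subsets.append(chosen)
        remaining -= set(chosen)

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    sensors = rng.uniform(0, 10, size=(40, 2)).tolist()
    targets = rng.uniform(0, 10, size=(5, 2)).tolist()
    print(greedy_feasible_subsets(sensors, targets, radius=4.0))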
-
Liang CHEN, Dongyi CHEN, Xiao CHEN
Article type: LETTER
Subject area: Computer System
2018 Volume E101.D Issue 6 Pages 1682-1685
Published: June 01, 2018
Released on J-STAGE: June 01, 2018
Operations such as text entry and zooming are simple and frequently used on mobile touch devices. However, these operations are far from being perfectly supported. In this paper, we present our prototype, BackAssist, which takes advantage of back-of-device input to augment front-of-device touch interaction. Furthermore, we present the results of a user study that evaluates whether users can master the back-of-device control of BackAssist. The results show that the back-of-device control can be easily grasped and used by ordinary smartphone users. Finally, we present two BackAssist-supported applications - a virtual keyboard application and a map application. Users who tried out the two applications gave positive feedback on the BackAssist-supported augmentation.
-
Sinh-Ngoc NGUYEN, Van-Quyet NGUYEN, Giang-Truong NGUYEN, JeongNyeo KIM ...
Article type: LETTER
Subject area: Information Network
2018 Volume E101.D Issue 6 Pages 1686-1690
Published: June 01, 2018
Released on J-STAGE: June 01, 2018
Distributed Reflective Denial of Service (DRDoS) attacks have gained huge popularity and become a major factor in a number of massive cyber-attacks. Usually, the attackers launch this kind of attack with a small volume of requests that generates a large volume of attack traffic aimed at the victim, using IP spoofing from legitimate hosts. There have been several approaches, such as static-threshold-based and confirmation-based approaches, focusing on DRDoS attack detection at the victim's side. However, these approaches have significant disadvantages: (1) they are only passive defences after the attack, and (2) it is hard to trace back the attackers. To address this problem, considerable attention has been paid to detecting DRDoS attacks at the source side. Because the existing proposals in this direction are considered ineffective against small volumes of attack traffic, there is still room for improvement. In this paper, we propose a novel method to detect DRDoS attack request traffic on SDN (Software Defined Network)-enabled gateways at the source side of the attack traffic. Our method adjusts the sampling rate and provides a traffic-aware adaptive threshold, along with a margin, based on analysing the traffic observed behind the gateways. Experimental results show that the proposed method is a promising solution for detecting DRDoS attack requests at the source side.
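An illustrative Python sketch of a traffic-aware adaptive threshold: an exponentially weighted moving average of the per-interval request count, plus a margin proportional to the observed deviation, flags suspicious intervals. The smoothing factor, margin width, and synthetic traffic are assumptions, and the paper's SDN sampling-rate adjustment is not modeled.

import numpy as np

def adaptive_threshold_detect(counts, alpha=0.1, k=3.0):
    """Flag intervals whose request count exceeds mean + k * deviation (EWMA-based)."""
    mean, dev = counts[0], 0.0
    flags = []
    for c in counts:
        threshold = mean + k * dev + 1.0   # small constant margin for quiet links
        flags.append(c > threshold)
        if not flags[-1]:                  # only learn from traffic deemed normal
            dev = (1 - alpha) * dev + alpha * abs(c - mean)
            mean = (1 - alpha) * mean + alpha * c
    return flags

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    counts = rng.poisson(20, size=200).astype(float)
    counts[120:140] += 200                 # injected reflective-request burst
    flags = adaptive_threshold_detect(counts)
    print("flagged intervals:", [i for i, f in enumerate(flags) if f][:10])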
-
Shaojun ZHANG, Julong LAN, Chao QI, Penghao SUN
Article type: LETTER
Subject area: Information Network
2018 Volume E101.D Issue 6 Pages 1691-1693
Published: June 01, 2018
Released on J-STAGE: June 01, 2018
Distributed control plane architectures have been employed in software-defined data center networks to improve the scalability of the control plane. However, since the flow space is partitioned by assigning switches to different controllers, the network topology is also partitioned and the rule setup process has to invoke multiple controllers. Besides, control load balancing based on switch migration is heavyweight. In this paper, we propose a lightweight load partition method that decouples the flow space from the network topology. The flow space is partitioned with hosts rather than switches as carriers, which supports fine-grained and lightweight load balancing. Moreover, switches no longer need to be assigned to different controllers; we keep all of them controlled by every controller, so each flow request can be processed by exactly one controller in a centralized style. Evaluations show that our scheme reduces rule setup costs and achieves lightweight load balancing.
-
Liaoruo HUANG, Qingguo SHEN, Zhangkai LUO
Article type: LETTER
Subject area: Information Network
2018 Volume E101.D Issue 6 Pages 1694-1698
Published: June 01, 2018
Released on J-STAGE: June 01, 2018
Bandwidth reservation is an important way to guarantee deterministic end-to-end service quality. However, with the traditional bandwidth reservation mechanism, the allocated bandwidth is by default the same at every link, without considering each link's available resources, which may lead to unbalanced resource utilization and limit the number of user connections that the network can accommodate. In this paper, we propose a non-uniform bandwidth reservation method that further balances the resource utilization of the network by optimizing the reserved bandwidth at each link according to its link load. Furthermore, to implement the proposed method, we devise a flexible and automatic bandwidth reservation mechanism based on the OpenFlow meter table. Simulations show that our method achieves better load-balancing performance and allows the network to accommodate more user connections than the traditional methods in most application scenarios.
-
Jinho AHN
Article type: LETTER
Subject area: Dependable Computing
2018 Volume E101.D Issue 6 Pages 1699-1702
Published: June 01, 2018
Released on J-STAGE: June 01, 2018
In this paper, we present a hybrid message logging protocol consisting of three modules for two-level hierarchical and distributed architectures to address the drawbacks of sender-based message logging. The first module reduces the number of in-group control messages, and the remaining modules reduce the number of inter-group control messages while localizing recovery. In addition, the protocol can distribute the load of logging and keeping inter-group messages among group members as evenly as possible. The simulation results show that the proposed protocol considerably outperforms the traditional protocol in terms of message logging overhead and scalability.
-
Dongdong GUAN, Xiaoan TANG, Li WANG, Junda ZHANG
Article type: LETTER
Subject area: Pattern Recognition
2018 Volume E101.D Issue 6 Pages 1703-1706
Published: June 01, 2018
Released on J-STAGE: June 01, 2018
Synthetic aperture radar (SAR) image classification is a popular yet challenging research topic in the field of SAR image interpretation. This paper presents a new classification method based on an extreme learning machine (ELM) and superpixel-guided composite kernels (SGCK). By introducing the generalized likelihood ratio (GLR) similarity, a modified simple linear iterative clustering (SLIC) algorithm is first developed to generate superpixels for the SAR image. Instead of using a fixed-size region, the shape-adaptive superpixel is used to exploit the spatial information, which is effective for classifying pixels in detailed and near-edge regions. Following the composite-kernel framework, the SGCK is constructed based on the spatial information and the backscatter intensity information. Finally, the SGCK is incorporated into an ELM classifier. Experimental results on both simulated and real SAR images demonstrate that the proposed framework is superior to some traditional classification methods.
-
Duc V. NGUYEN, Huyen T. T. TRAN, Nam PHAM NGOC, Truong Cong THANG
Article type: LETTER
Subject area: Image Processing and Video Processing
2018 Volume E101.D Issue 6 Pages 1707-1710
Published: June 01, 2018
Released on J-STAGE: June 01, 2018
In this letter, we propose a solution for managing multiple adaptive streaming clients running on different devices in a wireless home network. Our solution consists of two main aspects: a manager that determines bandwidth allocated for each client and a client-based throughput control mechanism that regulates the video traffic throughput of each client. The experimental results using a real test-bed show that our solution is able to effectively improve the quality for concurrent streaming clients.
-
Hyunhak SHIN, Bonhwa KU, Wooyoung HONG, Hanseok KO
Article type: LETTER
Subject area: Image Recognition, Computer Vision
2018 Volume E101.D Issue 6 Pages 1711-1714
Published: June 01, 2018
Released on J-STAGE: June 01, 2018
Most conventional research on target motion analysis (TMA) based on least squares (LS) has focused on performing asymptotically unbiased estimation with inaccurate measurements. However, such research may often yield inaccurate estimation results when only a small set of measurement data is used. In this paper, we propose a TMA method that is accurate even with a small set of bearing measurements. First, a subset of measurements is selected by a random sample consensus (RANSAC) algorithm. Then, LS is applied to the selected subset to estimate the target motion. Finally, to increase accuracy, the target motion estimate is refined through a bias compensation algorithm. Simulation results verify the effectiveness of the proposed method.
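A Python sketch of the RANSAC-plus-LS idea for bearings-only TMA, using the classical pseudo-linear least-squares estimator of a constant-velocity target (initial position and velocity); the observer track, noise level, RANSAC parameters, and the omission of the paper's bias-compensation step are all assumptions.

import numpy as np

def pls_estimate(times, bearings, obs_pos):
    """Pseudo-linear LS: solve for [x0, vx, y0, vy] from
    sin(b)*(x0 + vx*t - ox) - cos(b)*(y0 + vy*t - oy) = 0."""
    s, c = np.sin(bearings), np.cos(bearings)
    A = np.column_stack([s, s * times, -c, -c * times])
    b = s * obs_pos[:, 0] - c * obs_pos[:, 1]
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol

def ransac_tma(times, bearings, obs_pos, iters=200, thresh=0.02, seed=0):
    """Select a consensus subset of bearings, then refit LS on the inliers."""
    rng = np.random.default_rng(seed)
    best, best_inliers = None, 0
    for _ in range(iters):
        idx = rng.choice(len(times), size=4, replace=False)
        cand = pls_estimate(times[idx], bearings[idx], obs_pos[idx])
        dx = cand[0] + cand[1] * times - obs_pos[:, 0]
        dy = cand[2] + cand[3] * times - obs_pos[:, 1]
        resid = np.abs(np.angle(np.exp(1j * (np.arctan2(dy, dx) - bearings))))
        inliers = resid < thresh
        if inliers.sum() > best_inliers:
            best_inliers = inliers.sum()
            best = pls_estimate(times[inliers], bearings[inliers], obs_pos[inliers])
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    t = np.linspace(0, 60, 40)
    obs = np.column_stack([0.5 * t, 10.0 * np.sin(0.15 * t)])  # weaving observer for observability
    true = np.array([100.0, -1.0, 50.0, 0.5])                  # x0, vx, y0, vy
    bearing = np.arctan2(true[2] + true[3] * t - obs[:, 1],
                         true[0] + true[1] * t - obs[:, 0])
    bearing += 0.005 * rng.standard_normal(len(t))
    bearing[::7] += 0.5                                        # a few gross outliers
    print("estimate:", ransac_tma(t, bearing, obs), "truth:", true)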
-
Li XU, Bing LUO, Zheng PEI
Article type: LETTER
Subject area: Image Recognition, Computer Vision
2018 Volume E101.D Issue 6 Pages 1715-1719
Published: June 01, 2018
Released on J-STAGE: June 01, 2018
In this paper, we propose a boundary-aware superpixel segmentation method, which quickly and accurately extracts superpixels within a non-iterative framework. The basic idea is to construct a minimum spanning tree (MST) based on structure edges to measure the local similarity among pixels, and then label each pixel with the index of its shortest-path seed. Specifically, we first construct the MST on the original pixels with a boundary feature to calculate the similarity of adjacent pixels. The geodesic distance between pixels can then be obtained exactly with two rounds of tree recursion, and each pixel's label is determined as the index of its shortest-path seed. Experimental results on the BSD500 segmentation benchmark demonstrate that the proposed method obtains the best performance compared with seven state-of-the-art methods. Especially in the low-density situation, our method can obtain boundary-aware oversegmentation regions.
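A rough Python sketch of the shortest-path-over-MST labeling idea using SciPy's graph routines: build a 4-connected grid graph whose edge weights reflect intensity differences (a stand-in for the structure-edge feature), take its minimum spanning tree, run Dijkstra from grid-placed seeds over the tree, and label each pixel by its nearest seed. The paper's two-round tree recursion is replaced here by a generic shortest-path call, and the seed placement is an assumption.

import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, dijkstra

def mst_superpixels(img, n_seeds_per_axis=4):
    h, w = img.shape
    idx = np.arange(h * w).reshape(h, w)
    rows, cols, weights = [], [], []
    # 4-connected grid graph; edge weight = absolute intensity difference.
    for (a, b) in [(idx[:, :-1], idx[:, 1:]), (idx[:-1, :], idx[1:, :])]:
        rows.append(a.ravel())
        cols.append(b.ravel())
        weights.append(np.abs(img.ravel()[a.ravel()] - img.ravel()[b.ravel()]) + 1e-6)
    g = coo_matrix((np.concatenate(weights),
                    (np.concatenate(rows), np.concatenate(cols))), shape=(h * w, h * w))
    mst = minimum_spanning_tree(g)                        # keep only tree edges
    ys = np.linspace(0, h - 1, n_seeds_per_axis).astype(int)
    xs = np.linspace(0, w - 1, n_seeds_per_axis).astype(int)
    seeds = [idx[y, x] for y in ys for x in xs]
    dist = dijkstra(mst, directed=False, indices=seeds)   # geodesic distance on the tree
    return np.argmin(dist, axis=0).reshape(h, w)          # nearest seed per pixel

if __name__ == "__main__":
    img = np.zeros((32, 32))
    img[:, 16:] = 1.0                  # two flat regions separated by a strong edge
    labels = mst_superpixels(img)
    print(np.unique(labels).size, "superpixels")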