-
Toshiaki FUJII
2018 Volume E101.D Issue 9 Pages
2178
Published: September 01, 2018
Released on J-STAGE: September 01, 2018
JOURNAL
FREE ACCESS
-
Minsu KIM, Kunwoo LEE, Katsuhiko GONDOW, Jun-ichi IMURA
Article type: PAPER
2018 Volume E101.D Issue 9 Pages
2179-2189
Published: September 01, 2018
Released on J-STAGE: September 01, 2018
JOURNAL
FREE ACCESS
The main purpose of Codemark is to distribute digital contents using offline media. Because of this design, Codemark cannot be used on digital images; it is highly robust only on printed images. This paper presents a new color code called Robust Index Code (RIC for short), which targets digital images and is highly robust to JPEG compression and resizing. RIC embeds a remote database index into a digital image so that users can reach any digital content. Experimental results obtained with our implemented RIC encoder and decoder show that the proposed codemark is highly robust to JPEG compression and resizing: the embedded database indexes can be extracted with 100% accuracy from images compressed down to 30%. In conclusion, by embedding database-access indexes into digital images, RIC can deliver every type of digital product, realizing a superdistribution system based on digital images. RIC therefore has potential for new Internet image services, since any image encoded with RIC provides access to the original product anywhere.
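The abstract does not detail the encoder, but robustness to lossy processing generally rests on redundancy. The sketch below is a hypothetical illustration of that idea, not the paper's actual RIC scheme: a database index is repeated many times (the `copies` and `bits` parameters are made up), and decoding majority-votes each bit position so a minority of corrupted symbols cannot change the result.

```python
from collections import Counter

def encode_index(index: int, copies: int = 16, bits: int = 32) -> list[int]:
    """Repeat the index bit pattern many times so that lossy processing
    can corrupt some copies without destroying the payload."""
    pattern = [(index >> i) & 1 for i in range(bits)]
    return pattern * copies

def decode_index(symbols: list[int], copies: int = 16, bits: int = 32) -> int:
    """Majority-vote each bit position across all copies."""
    index = 0
    for i in range(bits):
        votes = Counter(symbols[c * bits + i] for c in range(copies))
        if votes[1] > votes[0]:
            index |= 1 << i
    return index

# Toy corruption model standing in for JPEG loss: flip every 7th symbol.
payload = encode_index(123456)
corrupted = [1 - b if pos % 7 == 0 else b for pos, b in enumerate(payload)]
```

With 16 copies per bit and at most 3 flips per bit position in this toy model, the majority vote still recovers the index exactly.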
-
Yusuke YAGI, Keita TAKAHASHI, Toshiaki FUJII, Toshiki SONODA, Hajime N ...
Article type: PAPER
2018 Volume E101.D Issue 9 Pages
2190-2200
Published: September 01, 2018
Released on J-STAGE: September 01, 2018
JOURNAL
FREE ACCESS
A light field, which is often understood as a set of dense multi-view images, has been utilized in various 2D/3D applications. Efficient light field acquisition using a coded aperture camera is the target problem considered in this paper. Specifically, the entire light field, which consists of many images, should be reconstructed from only a few images captured through different aperture patterns. In previous work, this problem has often been discussed in the context of compressed sensing (CS), where sparse representations on a pre-trained dictionary or basis are explored to reconstruct the light field. In contrast, we formulate this problem from the perspective of principal component analysis (PCA) and non-negative matrix factorization (NMF), where only a small number of basis vectors are selected in advance based on an analysis of the training dataset. From this formulation, we derive optimal non-negative aperture patterns and a straightforward reconstruction algorithm. Even though our method is based on conventional techniques, it has proven to be more accurate and much faster than a state-of-the-art CS-based method.
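As a toy linear-algebra sketch of the basis-projection formulation (all dimensions, the random basis, and the random aperture patterns below are made up; the paper's optimal patterns and algorithm are not reproduced), a light field lying in the span of a few pre-selected basis vectors can be recovered from multiplexed shots by least squares:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: V views of P pixels each, K basis vectors, M shots.
V, P, K, M = 8, 50, 3, 4

B = rng.normal(size=(V * P, K))      # stand-in for a PCA/NMF-trained basis
coeff_true = rng.normal(size=K)
x_true = B @ coeff_true              # ground-truth light field in the span

# Each shot sums the V views weighted by a non-negative aperture pattern.
A_aperture = rng.uniform(0.0, 1.0, size=(M, V))
Phi = np.zeros((M * P, V * P))       # sensing operator, per-pixel weighting
for m in range(M):
    for v in range(V):
        Phi[m * P:(m + 1) * P, v * P:(v + 1) * P] = A_aperture[m, v] * np.eye(P)

y = Phi @ x_true                     # the few captured coded images

# Reconstruction: solve for the K coefficients, then expand on the basis.
coeff_hat, *_ = np.linalg.lstsq(Phi @ B, y, rcond=None)
x_hat = B @ coeff_hat
```

Because the unknown is only a K-vector rather than the full light field, the reconstruction is a small dense least-squares problem, which is why such formulations can be much faster than iterative sparse recovery.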
-
Yu CHEN, Jing XIAO, Liuyi HU, Dan CHEN, Zhongyuan WANG, Dengshi LI
Article type: PAPER
2018 Volume E101.D Issue 9 Pages
2201-2208
Published: September 01, 2018
Released on J-STAGE: September 01, 2018
JOURNAL
FREE ACCESS
Saliency detection for videos has received great attention and been extensively studied in recent years. However, varied visual scenes with complicated motion lead to noticeable background noise and non-uniform highlighting of foreground objects. In this paper, we propose a video saliency detection model using spatio-temporal cues. In the spatial domain, the location of the foreground region is utilized as a spatial cue to constrain the accumulation of contrast for background regions. In the temporal domain, the spatial distribution of motion-similar regions is adopted as a temporal cue to further suppress background noise. Moreover, a temporal prediction method based on backward matching is developed to adjust the temporal saliency according to the corresponding prediction from the previous frame, thus enforcing consistency along the time axis. Performance evaluation on several popular benchmark data sets validates that our approach outperforms existing state-of-the-art methods.
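The temporal consistency step can be pictured as blending each frame's saliency map with a prediction pulled from the previous frame along per-pixel backward displacements. The sketch below is a toy stand-in for the paper's backward-matching prediction (the blending weight `alpha` and the flow representation are hypothetical):

```python
def temporally_smoothed_saliency(current, previous, flow, alpha=0.7):
    """Blend the current frame's saliency map (2D list) with a value
    fetched from the previous frame at the backward-displaced position."""
    h, w = len(current), len(current[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dy, dx = flow[y][x]              # backward displacement (dy, dx)
            sy, sx = y + dy, x + dx
            pred = previous[sy][sx] if 0 <= sy < h and 0 <= sx < w else 0.0
            out[y][x] = alpha * current[y][x] + (1 - alpha) * pred
    return out
```

With zero flow this reduces to a simple exponential smoothing along the time axis, which is the intuition behind enforcing temporal consistency.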
-
Renjie WU, Sei-ichiro KAMATA
Article type: PAPER
2018 Volume E101.D Issue 9 Pages
2209-2219
Published: September 01, 2018
Released on J-STAGE: September 01, 2018
JOURNAL
FREE ACCESS
In recent years, deep learning based approaches have substantially improved the performance of face recognition. Most existing deep learning techniques work well but neglect effective utilization of face correlation information. The resulting performance loss is noteworthy for personal appearance variations caused by factors such as illumination, pose, occlusion, and misalignment. We believe that face correlation information should be introduced to solve this performance problem originating from intra-personal variations. Recently, graph deep learning approaches have emerged for representing structured graph data, and a graph is a powerful tool for representing the complex information of a face image. In this paper, we survey recent research related to the graph structure of Convolutional Neural Networks and try to devise a definition of the graph structure underlying compressed sensing and deep learning. We then explain two properties of our graph: sparsity and depth. Sparsity is advantageous because sparse features are more likely to be linearly separable and more robust; depth means that the graph defines a multi-resolution, multi-channel learning process. We expect a sparse graph based deep neural network to make similar objects attract each other and different objects repel each other more effectively, akin to better sparse multi-resolution clustering. Based on this concept, we propose a sparse graph representation based on face correlation information that is embedded via sparse reconstruction and deep learning within an irregular domain. The resulting classification is remarkably robust. The proposed method achieves high recognition rates of 99.61% (94.67%) on the benchmark LFW (YTF) facial evaluation database.
-
Yitong LIU, Wang TIAN, Yuchen LI, Hongwen YANG
Article type: LETTER
2018 Volume E101.D Issue 9 Pages
2220-2223
Published: September 01, 2018
Released on J-STAGE: September 01, 2018
JOURNAL
FREE ACCESS
High Efficiency Video Coding (HEVC) offers better coding efficiency than H.264/AVC. However, this performance enhancement comes with increased computational complexity, which is mainly introduced by the quadtree-based coding tree unit (CTU). In this paper, an early termination algorithm for coding units (CUs) based on an AdaBoost classifier is proposed to accelerate the search for the best CTU partition. Experimental results indicate that our method saves 39% of the computational complexity on average at the cost of increasing the Bjontegaard-Delta rate (BD-rate) by 0.18.
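An early-termination classifier of this kind predicts, from a few CU features, whether further quadtree splitting can be skipped. The following is a minimal self-contained AdaBoost over decision stumps (the single "texture variance" feature and all training values are hypothetical, not the paper's feature set):

```python
import math

def train_stump(X, y, w):
    """Exhaustively pick the weighted-error-minimizing threshold stump."""
    best = None
    for f in range(len(X[0])):
        for thr in sorted({row[f] for row in X}):
            for sign in (1, -1):
                err = sum(wi for row, yi, wi in zip(X, y, w)
                          if (sign if row[f] >= thr else -sign) != yi)
                if best is None or err < best[0]:
                    best = (err, f, thr, sign)
    return best

def adaboost(X, y, rounds=5):
    """Minimal AdaBoost; labels must be +1 (terminate early) or -1 (split)."""
    n = len(X)
    w = [1.0 / n] * n
    model = []
    for _ in range(rounds):
        err, f, thr, sign = train_stump(X, y, w)
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * math.log((1 - err) / err)
        model.append((alpha, f, thr, sign))
        # Reweight: misclassified samples gain weight for the next round.
        w = [wi * math.exp(-alpha * yi * (sign if row[f] >= thr else -sign))
             for row, yi, wi in zip(X, y, w)]
        total = sum(w)
        w = [wi / total for wi in w]
    return model

def predict(model, row):
    """Sign of the weighted stump votes; +1 means 'terminate early'."""
    score = sum(a * (sign if row[f] >= thr else -sign)
                for a, f, thr, sign in model)
    return 1 if score >= 0 else -1

# Toy training data: one hypothetical feature per CU (e.g. texture variance).
X = [[1.0], [2.0], [3.0], [10.0], [11.0], [12.0]]
y = [1, 1, 1, -1, -1, -1]
model = adaboost(X, y, rounds=3)
```

In a real encoder the prediction would gate the recursive CU split search, trading a small BD-rate increase for the saved partition evaluations.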
-
Ying SONG, Xia ZHAO, Bo WANG, Yuzhong SUN
Article type: PAPER
Subject area: Fundamentals of Information Systems
2018 Volume E101.D Issue 9 Pages
2224-2234
Published: September 01, 2018
Released on J-STAGE: September 01, 2018
JOURNAL
FREE ACCESS
High energy cost is a major challenge faced by current data centers, in which computing energy and cooling energy are the main contributors. Consolidating workload onto fewer servers decreases the computing energy; however, it may create thermal hotspots, which typically consume greater cooling energy. A tradeoff between decreasing computing energy and decreasing cooling energy is therefore necessary for energy saving. In this paper, we propose a minimized-total-energy virtual machine (VM for short) migration model called C2vmMap based on an efficient tradeoff between computing and cooling energies, with respect to two relationships: one between resource utilization and computing power, and the other among resource utilization, the inlet and outlet temperatures of servers, and the cooling power. To solve the above model online with better scalability, we propose a VM migration algorithm called C2vmMap_heur that decreases the total energy of a data center at run-time. We evaluate C2vmMap_heur under various workload scenarios. Experimental results on real servers show that C2vmMap_heur reduces energy by up to 40.43% compared with a non-migration load-balancing algorithm, and saves up to 3x energy compared with an existing VM migration algorithm.
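The computing/cooling tradeoff can be made concrete with a toy model (all power coefficients below are hypothetical, not the paper's fitted relationships): each active server pays an idle cost, so consolidation saves computing power, but a superlinear cooling term penalizes hotspots, so heavily loaded VMs are better left spread out. A greedy single-move migration step over this model:

```python
def total_power(placement, n_servers):
    """placement: list of (vm_utilization, server) pairs.
    Hypothetical model: 100 W idle + 100 W/util per active server
    (computing) plus a superlinear 200*u^2 hotspot term (cooling)."""
    utils = [0.0] * n_servers
    for vm_util, server in placement:
        utils[server] += vm_util
    computing = sum(100.0 + 100.0 * u for u in utils if u > 0)
    cooling = sum(200.0 * u * u for u in utils)
    return computing + cooling

def greedy_migrate(placement, n_servers):
    """Try every single-VM move; apply the one lowering total power most."""
    best_power, best_placement = total_power(placement, n_servers), None
    for i, (vm_util, src) in enumerate(placement):
        for dst in range(n_servers):
            if dst == src:
                continue
            trial = list(placement)
            trial[i] = (vm_util, dst)
            p = total_power(trial, n_servers)
            if p < best_power:
                best_power, best_placement = p, trial
    return best_placement if best_placement is not None else placement
```

Under this model, two lightly loaded VMs consolidate onto one server (the saved idle power dominates), while two heavily loaded VMs stay apart (the quadratic cooling term dominates), which is exactly the tradeoff the abstract describes.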
-
Shin-ichi NAKAYAMA, Shigeru MASUYAMA
Article type: PAPER
Subject area: Fundamentals of Information Systems
2018 Volume E101.D Issue 9 Pages
2235-2246
Published: September 01, 2018
Released on J-STAGE: September 01, 2018
JOURNAL
FREE ACCESS
Given a graph G=(V,E), where V and E are the vertex and edge sets, together with a subset VNT of vertices called a non-terminal set, a spanning tree with non-terminal set VNT is a connected and acyclic spanning subgraph of G that contains all the vertices of V and in which no vertex of the non-terminal set is a leaf. Finding a spanning tree with non-terminal set VNT on general graphs in which each edge has unit weight is known to be NP-hard. In this paper, we show that if G is an interval graph, a spanning tree with non-terminal set VNT can be found in linear time when each edge has unit weight.
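To make the problem definition concrete, here is a generic exponential brute force over edge subsets (nothing like the paper's linear-time interval-graph algorithm; usable only on tiny graphs): a subset of n-1 acyclic edges is a spanning tree, and it is accepted only if every non-terminal vertex has degree at least two.

```python
from itertools import combinations

def find_spanning_tree_with_nonterminals(n, edges, vnt):
    """Return edges of a spanning tree of an n-vertex graph in which no
    vertex of vnt is a leaf, or None if no such tree exists."""
    for subset in combinations(edges, n - 1):
        # Union-find: n-1 edges with no cycle form a spanning tree.
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        acyclic = True
        for u, v in subset:
            ru, rv = find(u), find(v)
            if ru == rv:
                acyclic = False
                break
            parent[ru] = rv
        if not acyclic:
            continue
        degree = [0] * n
        for u, v in subset:
            degree[u] += 1
            degree[v] += 1
        if all(degree[w] >= 2 for w in vnt):
            return list(subset)
    return None
```

For example, on the 4-cycle with chords {(0,1),(1,2),(2,3),(0,3),(1,3)} and VNT = {1,2}, the path 0-1-2-3 qualifies; with VNT equal to all vertices no tree can qualify, since every tree has at least two leaves.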
-
Satoshi IMAMURA, Yuichiro YASUI, Koji INOUE, Takatsugu ONO, Hiroshi SA ...
Article type: PAPER
Subject area: Computer System
2018 Volume E101.D Issue 9 Pages
2247-2257
Published: September 01, 2018
Released on J-STAGE: September 01, 2018
JOURNAL
FREE ACCESS
The power consumption of server platforms has been increasing as the amount of hardware resources equipped on them grows. In particular, the capacity of DRAM continues to grow, and it is not rare that DRAM consumes more power than the processors on modern servers. Therefore, reducing DRAM energy consumption is a critical challenge in reducing system-level energy consumption. Although it is well known that improving row buffer locality (RBL) and bank-level parallelism (BLP) is effective in reducing DRAM energy consumption, our preliminary evaluation on a real server demonstrates that RBL is generally low across 15 multithreaded benchmarks. In this paper, we investigate the memory access patterns of these benchmarks using a simulator and observe that cache-line-grained channel interleaving schemes, which are widely applied to modern servers with multiple memory channels, hurt the RBL that each of the benchmarks potentially possesses. To address this problem, we focus on a row-grained channel interleaving scheme and compare it with three cache-line-grained schemes. Our evaluation shows that it reduces the DRAM energy consumption by 16.7%, 12.3%, and 5.5% on average (up to 34.7%, 28.2%, and 12.0%) compared to the other schemes, respectively.
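The difference between the two interleaving granularities comes down to the address-to-channel mapping function. The sketch below (sizes are hypothetical, and the row-buffer model is deliberately simplified to a single open row per channel) counts row-buffer hits for a sequential scan under both mappings:

```python
LINE = 64        # cache line size in bytes
ROW = 8192       # bytes of one DRAM row per channel (hypothetical)
CHANNELS = 4

def channel_line_interleave(addr):
    """Consecutive cache lines map to consecutive channels."""
    return (addr // LINE) % CHANNELS

def channel_row_interleave(addr):
    """Consecutive rows map to consecutive channels."""
    return (addr // ROW) % CHANNELS

def row_buffer_hits(addrs, channel_of):
    """Count accesses that find their row already open in their channel."""
    open_row = {}
    hits = 0
    for a in addrs:
        ch, row = channel_of(a), a // ROW
        if open_row.get(ch) == row:
            hits += 1
        open_row[ch] = row
    return hits

# A single sequential scan over four rows' worth of data.
stream = list(range(0, 4 * ROW, LINE))
```

Even in this toy single-stream model, row-grained interleaving closes a row only once per row (4 misses over the scan) while line-grained interleaving pays a miss per channel at every row boundary (16 misses), so row-grained interleaving yields strictly more row-buffer hits.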
-
Koya MITSUZUKA, Michihiro KOIBUCHI, Hideharu AMANO, Hiroki MATSUTANI
Article type: PAPER
Subject area: Computer System
2018 Volume E101.D Issue 9 Pages
2258-2268
Published: September 01, 2018
Released on J-STAGE: September 01, 2018
JOURNAL
FREE ACCESS
In parallel processing applications, a few worker nodes called “stragglers”, which execute their tasks significantly more slowly than the others, increase the execution time of the job. In this paper, we propose a network-switch-based straggler handling system to mitigate the burden on the compute nodes. We also propose how to offload straggler detection and the computation of their results to the network switch with no additional communication between worker nodes. We introduce several approximation techniques for the proxy computation and response at the switch; our switch is thus called “ApproxSW.” Simulation results show that the proposed approximation based on task similarity achieves the best accuracy in terms of the quality of the generated Map outputs. We also analyze how to suppress unnecessary proxy computation by ApproxSW. We implemented ApproxSW on a NetFPGA-SUME board that has four 10 Gbit Ethernet (10GbE) interfaces and a Virtex-7 FPGA. Experimental results show that the ApproxSW functions do not degrade the original 10GbE switch performance.
-
Takashi WATANABE, Akito MONDEN, Zeynep YÜCEL, Yasutaka KAMEI, Shuji MO ...
Article type: PAPER
Subject area: Software Engineering
2018 Volume E101.D Issue 9 Pages
2269-2278
Published: September 01, 2018
Released on J-STAGE: September 01, 2018
JOURNAL
FREE ACCESS
Association rule mining discovers relationships among variables in a data set and represents them as rules. These rules are often expected to have predictive ability, that is, to be able to predict future events, but commonly used rule interestingness measures, such as support and confidence, do not directly assess their predictive power. This paper proposes a cross-validation-based metric that quantifies the predictive power of such rules for characterizing software defects. The results of evaluating this metric experimentally on four open-source data sets (Mylyn, NetBeans, Apache Ant, and jEdit) show that it can improve rule prioritization performance over conventional metrics (support, confidence, and odds ratio) by 72.8% for Mylyn, 15.0% for NetBeans, 10.5% for Apache Ant, and 0% for jEdit in terms of the SumNormPre(100) precision criterion. This suggests that the proposed metric provides better rule prioritization performance than conventional metrics and at least similar performance even in the worst case.
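The core contrast (confidence measured once on the whole data set versus a cross-validated estimate on held-out folds) can be sketched as follows. This is an illustrative simplification, not the paper's SumNormPre-based metric; the defect data and attribute names are hypothetical:

```python
def confidence(data, lhs, rhs):
    """Estimate P(rhs | lhs) over a list of transactions (dicts)."""
    matches = [t for t in data if all(t.get(k) == v for k, v in lhs.items())]
    if not matches:
        return 0.0
    hits = [t for t in matches if all(t.get(k) == v for k, v in rhs.items())]
    return len(hits) / len(matches)

def cv_predictive_power(data, lhs, rhs, folds=4):
    """Average the rule's precision over held-out folds, so the score
    reflects how the rule behaves on data it was not mined from."""
    scores = []
    for f in range(folds):
        held_out = [t for i, t in enumerate(data) if i % folds == f]
        scores.append(confidence(held_out, lhs, rhs))
    return sum(scores) / folds

# Hypothetical defect data: high-complexity modules are always defective.
data = ([{'complexity': 'high', 'defect': 1}] * 8 +
        [{'complexity': 'low', 'defect': 0}] * 8)
```

Ranking candidate rules by such a held-out score rather than by in-sample support/confidence is the prioritization idea the abstract evaluates.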
-
Wiradee IMRATTANATRAI, Makoto P. KATO, Katsumi TANAKA, Masatoshi YOSHI ...
Article type: PAPER
Subject area: Data Engineering, Web Information Systems
2018 Volume E101.D Issue 9 Pages
2279-2290
Published: September 01, 2018
Released on J-STAGE: September 01, 2018
JOURNAL
FREE ACCESS
This paper proposes methods of finding a ranked list of entities for a given query (e.g. “Kennin-ji”, “Tenryu-ji”, or “Kinkaku-ji” for the query “ancient zen buddhist temples in kyoto”) by leveraging different types of modifiers in the query through identifying corresponding properties (e.g. established date and location for the modifiers “ancient” and “kyoto”, respectively). While most major search engines provide the entity search functionality that returns a list of entities based on users' queries, entities are neither presented for a wide variety of search queries, nor in the order that users expect. To enhance the effectiveness of entity search, we propose two entity ranking methods. Our first proposed method is a Web-based entity ranking that directly finds relevant entities from Web search results returned in response to the query as a whole, and propagates the estimated relevance to the other entities. The second proposed method is a property-based entity ranking that ranks entities based on properties corresponding to modifiers in the query. To this end, we propose a novel property identification method that identifies a set of relevant properties based on a Support Vector Machine (SVM) using our seven criteria that are effective for different types of modifiers. The experimental results showed that our proposed property identification method could predict more relevant properties than using each of the criteria separately. Moreover, we achieved the best performance for returning a ranked list of relevant entities when using the combination of the Web-based and property-based entity ranking methods.
-
Yi LIU, Qingkun MENG, Xingtong LIU, Jian WANG, Lei ZHANG, Chaojing TAN ...
Article type: PAPER
Subject area: Information Network
2018 Volume E101.D Issue 9 Pages
2291-2297
Published: September 01, 2018
Released on J-STAGE: September 01, 2018
JOURNAL
FREE ACCESS
Electronic payment protocols provide secure service for electronic commerce transactions and protect private information from malicious entities in a network. Formal methods have been introduced to verify the security of electronic payment protocols; however, these methods concentrate on the accountability and fairness of the protocols, without considering the impact caused by timeliness. To make up for this deficiency, we present a formal method to analyze the security properties of electronic payment protocols, namely, accountability, fairness and timeliness. We add a concise time expression to an existing logical reasoning method to represent the event time and extend the time characteristics of the logical inference rules. Then, the Netbill protocol is analyzed with our formal method, and we find that the fairness of the protocol is not satisfied due to the timeliness problem. The results illustrate that our formal method can analyze the key properties of electronic payment protocols. Furthermore, it can be used to verify the time properties of other security protocols.
-
Yuehua WANG, Zhinong ZHONG, Anran YANG, Ning JING
Article type: PAPER
Subject area: Artificial Intelligence, Data Mining
2018 Volume E101.D Issue 9 Pages
2298-2306
Published: September 01, 2018
Released on J-STAGE: September 01, 2018
JOURNAL
FREE ACCESS
Review rating prediction is an important problem in machine learning and data mining areas and has attracted much attention in recent years. Most existing methods for review rating prediction on Location-Based Social Networks only capture the semantics of texts, but ignore user information (social links, geolocations, etc.), which makes them less personalized and brings down the prediction accuracy. For example, a user's visit to a venue may be influenced by their friends' suggestions or the travel distance to the venue. To address this problem, we develop a review rating prediction framework named TSG by utilizing users' review Text, Social links and the Geolocation information with machine learning techniques. Experimental results demonstrate the effectiveness of the framework.
-
Hang CUI, Shoichi HIRASAWA, Hiroaki KOBAYASHI, Hiroyuki TAKIZAWA
Article type: PAPER
Subject area: Artificial Intelligence, Data Mining
2018 Volume E101.D Issue 9 Pages
2307-2314
Published: September 01, 2018
Released on J-STAGE: September 01, 2018
JOURNAL
FREE ACCESS
Sparse matrix-vector multiplication (SpMV) is a computational kernel widely used in many applications. Because of its importance, many different implementations have been proposed to accelerate this kernel. The performance characteristics of those SpMV implementations differ considerably, and it is essentially difficult to select the best-performing implementation for a given sparse matrix without performance profiling. One existing approach to this best-code selection problem uses manually predefined features and a machine learning model. However, it is generally hard to manually define features that fully express the characteristics of the original sparse matrix necessary for code selection, and some information is lost in the process. This paper therefore presents an effective deep learning mechanism for selecting the SpMV code best suited to a given sparse matrix. Instead of manually predefined features, a feature image and a deep learning network are used to map each sparse matrix to the implementation expected to have the best performance, in advance of the execution. The benefits of the proposed mechanism are discussed in terms of prediction accuracy and performance. According to the evaluation, the proposed mechanism selects an optimal or suboptimal implementation for an unseen sparse matrix in the test data set in most cases. These results demonstrate that, by using deep learning, a whole sparse matrix can be used to predict the best implementation, and the prediction accuracy achieved by the proposed mechanism is higher than that achieved with predefined features.
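A "feature image" of a sparse matrix can be built by downsampling its nonzero pattern into a fixed-size grid, which a CNN can then consume regardless of the matrix's original dimensions. The sketch below shows one plausible construction (the paper's exact image definition may differ; the 8x8 size is a made-up choice):

```python
def feature_image(rows, cols, shape, size=8):
    """Downsample the nonzero pattern of a sparse matrix (COO index
    lists) into a size-by-size grid of nonzero counts."""
    n_rows, n_cols = shape
    img = [[0] * size for _ in range(size)]
    for r, c in zip(rows, cols):
        img[r * size // n_rows][c * size // n_cols] += 1
    return img
```

For a 64x64 diagonal matrix, every nonzero lands in a diagonal cell of the 8x8 grid, so the image preserves the matrix's diagonal structure (the kind of pattern that distinguishes, say, DIA-friendly from CSR-friendly matrices).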
-
Zhi-xiong XU, Lei CAO, Xi-liang CHEN, Chen-xi LI, Yong-liang ZHANG, Ju ...
Article type: PAPER
Subject area: Artificial Intelligence, Data Mining
2018 Volume E101.D Issue 9 Pages
2315-2322
Published: September 01, 2018
Released on J-STAGE: September 01, 2018
JOURNAL
FREE ACCESS
The commonly used Deep Q Network is known to overestimate action values under certain conditions, and such overestimations have been shown to harm performance, possibly causing instability and divergence of learning. In this paper, we present the Deep Sarsa and Q Networks (DSQN) algorithm, which can be considered an enhancement of the Deep Q Network algorithm. First, the DSQN algorithm takes advantage of the experience replay and target network techniques of Deep Q Networks to improve the stability of the neural networks. Second, a double estimator is utilized in Q-learning to reduce overestimation. In particular, we introduce Sarsa learning into Deep Q Networks to further remove overestimation. Finally, the DSQN algorithm is evaluated on the cart-pole balancing, mountain car, and lunar lander control tasks from the OpenAI Gym. The empirical results show that the proposed method leads to reduced overestimation, a more stable learning process, and improved performance.
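The two ideas the abstract combines, a double estimator and a Sarsa-style on-policy target, can be shown in tabular form (the paper's method is a deep network; this toy update, its hyperparameters, and the terminal-transition demo are illustrative only):

```python
import random
from collections import defaultdict

def dsqn_style_update(Q1, Q2, s, a, r, s2, a2, done, alpha=0.5, gamma=0.9):
    """One tabular update: a coin flip picks which table to update and
    the twin table supplies the bootstrap (double estimator), while the
    target uses the action a2 actually taken in s2 (Sarsa-style)."""
    if random.random() < 0.5:
        bootstrap = 0.0 if done else Q2[(s2, a2)]
        Q1[(s, a)] += alpha * (r + gamma * bootstrap - Q1[(s, a)])
    else:
        bootstrap = 0.0 if done else Q1[(s2, a2)]
        Q2[(s, a)] += alpha * (r + gamma * bootstrap - Q2[(s, a)])

random.seed(0)
Q1, Q2 = defaultdict(float), defaultdict(float)
for _ in range(200):
    # Repeated one-step terminal transition with reward 1: both tables
    # should converge to the true value 1.0 without overshooting it.
    dsqn_style_update(Q1, Q2, 's', 'a', 1.0, None, None, done=True)
```

Because neither table ever bootstraps from its own maximum, the systematic positive bias of the single-estimator max target is avoided.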
-
Yue TAN, Wei LIU, Zhenyu YANG, Xiaoni DU, Zongtian LIU
Article type: PAPER
Subject area: Artificial Intelligence, Data Mining
2018 Volume E101.D Issue 9 Pages
2323-2333
Published: September 01, 2018
Released on J-STAGE: September 01, 2018
JOURNAL
FREE ACCESS
Event-centered information integration is regarded as one of the most pressing issues in improving disaster emergency management. Ontologies play an increasingly important role in emergency information integration and make emergency reasoning possible. However, developing an event ontology for disaster emergencies is a laborious and difficult task because of the increasing scale and complexity of emergencies. An ontology pattern is a modeling solution to recurrent ontology design problems that can improve the efficiency of ontology development through reuse. By studying the characteristics of numerous emergencies, this paper proposes a generic ontology pattern for emergency system modeling. Based on the emergency ontology pattern, a set of reasoning rules for emergency-evolution, emergency-solution, and emergency-resource-utilization reasoning is proposed to conduct emergency knowledge reasoning.
-
Peerasak INTARAPAIBOON, Thanaruk THEERAMUNKONG
Article type: PAPER
Subject area: Artificial Intelligence, Data Mining
2018 Volume E101.D Issue 9 Pages
2334-2345
Published: September 01, 2018
Released on J-STAGE: September 01, 2018
JOURNAL
FREE ACCESS
Multi-slot information extraction, also known as frame extraction, is the task of identifying several related entities simultaneously. Most research on this task applies IE patterns (rules) to extract related entities from unstructured documents. An important obstacle to success in this task is not knowing where the text portions containing the information of interest are located. The problem is more complicated for languages with sentence boundary ambiguity, such as Thai. Applying IE rules to all plausible text portions can mitigate this obstacle, but it raises another problem: incorrect (unwanted) extractions. This paper presents a method for removing these incorrect extractions. In the method, extractions are represented as intuitionistic fuzzy sets (IFSs), and a similarity measure for IFSs is used to calculate the distance between the IFS of an unclassified extraction and that of each already-classified extraction. The concept of k nearest neighbors is adopted to decide whether the unclassified extraction is correct. In experiments on various domains, the proposed technique improves extraction precision while satisfactorily preserving recall.
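An IFS represents each feature as a (membership, non-membership) pair, and a standard normalized Hamming similarity plus k-NN voting gives the filtering step its shape. The sketch below uses one common similarity measure and entirely hypothetical feature values; the paper's specific measure and features may differ:

```python
def ifs_similarity(A, B):
    """A, B: lists of (membership, non-membership) pairs.
    Normalized Hamming similarity for intuitionistic fuzzy sets."""
    d = sum(abs(m1 - m2) + abs(v1 - v2)
            for (m1, v1), (m2, v2) in zip(A, B))
    return 1.0 - d / (2 * len(A))

def knn_is_correct(candidate, labeled, k=3):
    """Majority vote over the k most similar already-classified
    extractions; True means the candidate extraction is kept."""
    ranked = sorted(labeled, reverse=True,
                    key=lambda item: ifs_similarity(candidate, item[0]))
    votes = [label for _, label in ranked[:k]]
    return votes.count(True) > votes.count(False)

# Hypothetical feature IFSs for already-classified extractions.
labeled = [
    ([(0.90, 0.05), (0.80, 0.10)], True),
    ([(0.85, 0.10), (0.90, 0.05)], True),
    ([(0.95, 0.00), (0.85, 0.10)], True),
    ([(0.10, 0.80), (0.20, 0.70)], False),
    ([(0.15, 0.75), (0.10, 0.80)], False),
]
```

A candidate whose IFS resembles the correct cluster is kept, and one resembling the incorrect cluster is discarded, which is how the filter raises precision without rejecting well-supported extractions.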
-
Ryo IWAKI, Hiroki YOKOYAMA, Minoru ASADA
Article type: PAPER
Subject area: Artificial Intelligence, Data Mining
2018 Volume E101.D Issue 9 Pages
2346-2355
Published: September 01, 2018
Released on J-STAGE: September 01, 2018
JOURNAL
FREE ACCESS
The step size is a parameter of fundamental importance in learning algorithms, particularly for natural policy gradient (NPG) methods. We derive an upper bound for the step size in incremental NPG estimation and propose an adaptive step size that implements the derived upper bound. The proposed adaptive step size guarantees that an updated parameter does not overshoot the target, which is achieved by weighting the learning samples according to their relative importance. We also provide tight upper and lower bounds for the step size, though they are not suitable for incremental learning. We confirm the usefulness of the proposed step size on classical benchmarks. To the best of our knowledge, this is the first adaptive step size method for NPG estimation.
-
Warunya WUNNASRI, Jaruwat PAILAI, Yusuke HAYASHI, Tsukasa HIRASHIMA
Article type: PAPER
Subject area: Educational Technology
2018 Volume E101.D Issue 9 Pages
2356-2367
Published: September 01, 2018
Released on J-STAGE: September 01, 2018
JOURNAL
FREE ACCESS
Collaborative learning is an active teaching and learning strategy, in which learners who give each other elaborated explanations can learn most. However, it is difficult for learners to explain their own understanding elaborately in collaborative learning. In this study, we propose a collaborative use of a Kit-Build concept map (KB map) called “Reciprocal KB map”. In a Reciprocal KB map for a pair discussion, at first, the two participants make their own concept maps expressing their comprehension. Then, they exchange the components of their maps and request each other to reconstruct their maps by using the components. The differences between the original map and the reconstructed map are diagnosed automatically as an advantage of the KB map. Reciprocal KB map is expected to encourage pair discussion to recognize the understanding of each other and to create an effective discussion. In an experiment reported in this paper, Reciprocal KB map was used for supporting a pair discussion and was compared with a pair discussion which was supported by a traditional concept map. Nineteen pairs of university students were requested to use the traditional concept map in their discussion, while 20 pairs of university students used Reciprocal KB map for discussing the same topic. The results of the experiment were analyzed using three metrics: a discussion score, a similarity score, and questionnaires. The discussion score, which investigates the value of talk in discussion, demonstrates that Reciprocal KB map can promote more effective discussion between the partners compared to the traditional concept map. The similarity score, which evaluates the similarity of the concept maps, demonstrates that Reciprocal KB map can encourage the pair of partners to understand each other better compared to the traditional concept map. 
Last, the questionnaires illustrate that Reciprocal KB map can support the pair of partners to collaborate in the discussion smoothly and that the participants accepted this method for sharing their understanding with each other. These results suggest that Reciprocal KB map is a promising approach for encouraging pairs of partners to understand each other and to promote the effective discussions.
-
Motoharu SONOGASHIRA, Masaaki IIYAMA, Michihiko MINOH
Article type: PAPER
Subject area: Image Processing and Video Processing
2018 Volume E101.D Issue 9 Pages
2368-2380
Published: September 01, 2018
Released on J-STAGE: September 01, 2018
JOURNAL
FREE ACCESS
Vignetting is a common type of image degradation that makes peripheral parts of an image darker than the central part. Single-image devignetting aims to remove undesirable vignetting from an image without resorting to calibration, thereby providing high-quality images required for a wide range of applications. Previous studies into single-image devignetting have focused on the estimation of vignetting functions under the assumption that degradation other than vignetting is negligible. However, noise in real-world observations remains unremoved after inversion of vignetting, and prevents stable estimation of vignetting functions, thereby resulting in low quality of restored images. In this paper, we introduce a methodology of image restoration based on variational Bayes (VB) to devignetting, aiming at high-quality devignetting in the presence of noise. Through VB inference, we jointly estimate a vignetting function and a latent image free from both vignetting and noise, using a general image prior for noise removal. Compared with state-of-the-art methods, the proposed VB approach to single-image devignetting maintains effectiveness in the presence of noise, as we demonstrate experimentally.
-
Keisuke NONAKA, Houari SABIRIN, Jun CHEN, Hiroshi SANKOH, Sei NAITO
Article type: PAPER
Subject area: Image Processing and Video Processing
2018 Volume E101.D Issue 9 Pages
2381-2391
Published: September 01, 2018
Released on J-STAGE: September 01, 2018
JOURNAL
FREE ACCESS
A free-viewpoint application has been developed that yields an immersive user experience. One simple free-viewpoint approach, the “billboard method”, is suitable for displaying a synthesized 3D view on a mobile device, but it suffers from the limitation that a billboard can be placed at only one position in the world. This gives users an unacceptable impression when the object being shot is located at multiple points. To solve this problem, we propose an optimal deformation of the billboard. The deformation is designed as a mapping of grid points in the input billboard silhouette that produces an optimal silhouette from an accurate voxel model of the object. We formulate and solve this procedure as a nonlinear optimization problem based on a grid-point constraint and some a priori information. Our results show that the proposed method generates a synthesized virtual image with a natural appearance and a better objective score in terms of silhouette and structural similarity.
-
Su LIU, Xingguang GENG, Yitao ZHANG, Shaolong ZHANG, Jun ZHANG, Yanbin ...
Article type: PAPER
Subject area: Biological Engineering
2018 Volume E101.D Issue 9 Pages
2392-2400
Published: September 01, 2018
Released on J-STAGE: September 01, 2018
JOURNAL
FREE ACCESS
The quality of edge detection depends on the detection angle, scale, and threshold. Many algorithms improve edge detection quality through rules about detection angles; however, these algorithms do not provide rules for detecting edges at arbitrary angles, so they simply use different numbers of angles without indicating an optimal number. In this paper, a novel edge detection algorithm is proposed that detects edges at arbitrary angles, and an optimized number of angles for the algorithm is introduced. The algorithm combines singularity detection using the Gaussian wavelet transform with edge detection in arbitrary directions, and comprises five steps: 1) an image is divided into pixel lines at a certain angle in the range from 45° to 90° according to the decomposition rules of this paper; 2) singularities of the pixel lines are detected and form an edge image at that angle; 3) the edge images at different angles are combined into a final edge image; 4) the detection angles in the range from 45° to 90° are extended to the range from 0° to 360°; and 5) an optimized number of angles for the algorithm is proposed. With the optimized number of angles, the algorithm shows better performance.
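Steps 1 and 2 can be illustrated for the simplest oblique angle, 45°, where the pixel lines are the image's anti-diagonals. The singularity detector below is a crude second-difference stand-in for the paper's Gaussian wavelet transform, and the threshold is a made-up value:

```python
def pixel_lines_at_45(image):
    """Decompose a 2D list into its anti-diagonal pixel lines (angle 45°);
    other angles in the 45°-90° range would use different step patterns."""
    h, w = len(image), len(image[0])
    return [[image[y][s - y] for y in range(h) if 0 <= s - y < w]
            for s in range(h + w - 1)]

def singularities(line, threshold=0.5):
    """Flag positions with a large discrete second difference, a crude
    stand-in for wavelet-based singularity detection along the line."""
    return [i for i in range(1, len(line) - 1)
            if abs(line[i - 1] - 2 * line[i] + line[i + 1]) > threshold]
```

Marking the flagged positions of every line back into the image plane yields the edge image for that angle (step 2); repeating over several angles and merging gives the final edge image (step 3).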
-
Chao TANG, Huaxi GU, Kun WANG
Article type: LETTER
Subject area: Computer System
2018 Volume E101.D Issue 9 Pages
2401-2403
Published: September 01, 2018
Released on J-STAGE: September 01, 2018
JOURNAL
FREE ACCESS
Optical interconnects are a promising candidate for networks-on-chip. As key elements of a network-on-chip, routers greatly affect the performance of the whole system. In this letter, we propose a new router architecture, Waffle, based on compact 2×2 hybrid photonic-plasmonic switching elements. We also design an optimized architecture, Waffle-XY, for networks employing the XY routing algorithm. Both Waffle and Waffle-XY are strictly non-blocking architectures and can be employed in popular mesh-like networks. Theoretical analysis shows that Waffle and Waffle-XY outperform several representative routers.
-
Joon-Young PAIK, Rize JIN, Tae-Sun CHUNG
Article type: LETTER
Subject area: Data Engineering, Web Information Systems
2018 Volume E101.D Issue 9 Pages
2404-2408
Published: September 01, 2018
Released on J-STAGE: September 01, 2018
JOURNAL
FREE ACCESS
In terms of system reliability, data recovery is a crucial capability; without it, valuable data can be permanently lost. This paper aims to improve data recovery in flash-based storage devices, which exhibit extremely poor data recoverability. To this end, we focus on garbage collection, which determines the life span of data that users are likely to request to recover. A new garbage collection mechanism that is aware of data recovery is proposed. First, deleted or overwritten data are categorized into shallow invalid data and deep invalid data, based on the likelihood of a data recovery request. Second, the proposed mechanism selects the victim area for reclaiming free space while taking into account shallow invalid data, which have a high likelihood of recovery requests. Our proposal prevents shallow invalid data from being eliminated during garbage collection. The experimental results show that our garbage collection mechanism can improve data recovery with only minor performance degradation.
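The victim-selection idea can be sketched as follows: classify invalid pages as shallow or deep, then prefer victim blocks that free space without erasing shallow invalid pages. This is a minimal sketch under stated assumptions; the time-based shallow/deep threshold, the page/block dictionaries, and the scoring weight are all illustrative stand-ins, not the paper's mechanism.

```python
SHALLOW_WINDOW = 3600.0  # seconds; assumed recency threshold for "shallow"

def classify_invalid(page, now):
    # Shallow invalid: deleted or overwritten recently, so a user recovery
    # request is still likely; deep invalid otherwise.
    return "shallow" if now - page["invalidated_at"] < SHALLOW_WINDOW else "deep"

def select_victim(blocks, now):
    # Pick the block that frees the most space (many invalid pages) while
    # erasing the fewest shallow invalid pages, which would destroy
    # still-recoverable data. The weight of 2 is an assumption.
    def score(block):
        invalid = sum(1 for p in block["pages"] if not p["valid"])
        shallow = sum(1 for p in block["pages"]
                      if not p["valid"] and classify_invalid(p, now) == "shallow")
        return invalid - 2 * shallow
    return max(blocks, key=score)
```

Given two blocks with equal amounts of invalid data, this policy reclaims the one whose invalid pages are deep, deferring the erase of recently invalidated data.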
-
Zhi-xiong XU, Lei CAO, Xi-liang CHEN, Chen-xi LI
Article type: LETTER
Subject area: Artificial Intelligence, Data Mining
2018 Volume E101.D Issue 9 Pages
2409-2412
Published: September 01, 2018
Released on J-STAGE: September 01, 2018
JOURNAL
FREE ACCESS
To address the trade-off between exploration and exploitation in deep reinforcement learning, this paper proposes a reward-based exploration strategy combined with Softmax action selection (RBE-Softmax) as a dynamic exploration strategy to guide the agent's learning. The advantage of the proposed method is that characteristics of the agent's learning process are used to adapt the exploration parameters online, so the agent can select potentially optimal actions more effectively. The proposed method is evaluated on discrete and continuous control tasks in OpenAI Gym, and the empirical results show that RBE-Softmax yields statistically significant improvements in the performance of deep reinforcement learning algorithms.
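The core idea, Softmax action selection with a reward-adapted exploration parameter, can be sketched as below. The specific adaptation rule (temperature shrinking as recent reward approaches the best observed reward) is an illustrative assumption, not the paper's exact formula; `tau_max`, `tau_min`, and the reward-normalization scheme are likewise assumptions.

```python
import math
import random

def softmax_probs(q_values, tau):
    # Numerically stable softmax over Q-values with temperature tau.
    m = max(q_values)
    exps = [math.exp((q - m) / tau) for q in q_values]
    s = sum(exps)
    return [e / s for e in exps]

def rbe_softmax_action(q_values, recent_reward, r_min, r_max,
                       tau_max=1.0, tau_min=0.05, rng=random):
    # Reward-based exploration: the closer the recent reward is to the
    # best reward seen so far, the lower the temperature, shifting the
    # agent from exploration toward exploitation online.
    frac = (recent_reward - r_min) / (r_max - r_min + 1e-8)
    tau = tau_max - (tau_max - tau_min) * min(max(frac, 0.0), 1.0)
    probs = softmax_probs(q_values, tau)
    r = rng.random()
    acc = 0.0
    for a, p in enumerate(probs):
        acc += p
        if r <= acc:
            return a, tau
    return len(q_values) - 1, tau
```

Early in training (low rewards) the temperature stays high and actions are sampled nearly uniformly; as rewards rise, selection concentrates on the highest-valued action.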
-
Nii L. SOWAH, Qingbo WU, Fanman MENG, Liangzhi TANG, Yinan LIU, Linfen ...
Article type: LETTER
Subject area: Pattern Recognition
2018 Volume E101.D Issue 9 Pages
2413-2416
Published: September 01, 2018
Released on J-STAGE: September 01, 2018
JOURNAL
FREE ACCESS
In this paper, we improve the accuracy of existing tracklet generation methods by repairing tracklets based on quality evaluation and detection propagation. Starting from object detections, we generate tracklets using three existing methods. We then perform co-tracklet quality evaluation to score each tracklet and select good tracklets based on their scores. A detection propagation method is designed to transfer detections from good tracklets to bad ones so as to repair the bad tracklets. Tracklet quality evaluation in our method is implemented through intra-tracklet detection consistency and inter-tracklet detection completeness. Two propagation methods, global propagation and local propagation, are defined to achieve more accurate tracklet propagation. We demonstrate the effectiveness of the proposed method on the MOT15 dataset.
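One of the two quality cues, intra-tracklet detection consistency, can be sketched as the mean overlap between detections in consecutive frames: smooth tracklets score high, jumpy ones low. This is a minimal sketch assuming axis-aligned boxes and IoU as the consistency measure; the paper's actual scoring and the inter-tracklet completeness term are not reproduced here.

```python
def iou(a, b):
    # Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def intra_consistency(tracklet):
    # Mean IoU between detections in consecutive frames of one tracklet.
    if len(tracklet) < 2:
        return 1.0
    scores = [iou(tracklet[i], tracklet[i + 1])
              for i in range(len(tracklet) - 1)]
    return sum(scores) / len(scores)
```

Tracklets scoring below a threshold would be flagged as bad and become targets for detection propagation from overlapping good tracklets.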
-
Maoxi LI, Qingyu XIANG, Zhiming CHEN, Mingwen WANG
Article type: LETTER
Subject area: Natural Language Processing
2018 Volume E101.D Issue 9 Pages
2417-2421
Published: September 01, 2018
Released on J-STAGE: September 01, 2018
JOURNAL
FREE ACCESS
The state-of-the-art neural quality estimation (QE) model for machine translation consists of two sub-networks that are tuned separately: a bidirectional recurrent neural network (RNN) encoder-decoder trained for neural machine translation, called the predictor, and an RNN trained for sentence-level QE, called the estimator. We propose to combine the two sub-networks into a single neural network, called the unified neural network. During training, the bidirectional RNN encoder-decoder is initialized and pre-trained on a bilingual parallel corpus, and then the networks are trained jointly to minimize the mean absolute error over the QE training samples. Compared with the predictor-estimator approach, the unified neural network yields parameters that are better suited to the QE task. Experimental results on the benchmark dataset of the WMT17 sentence-level QE shared task show that the proposed unified neural network consistently outperforms the predictor-estimator approach and significantly outperforms the other baseline QE approaches.
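The joint training objective named in the abstract, minimizing mean absolute error over QE samples, can be illustrated in miniature with a subgradient step. This is a toy stand-in: a linear model replaces the full predictor-estimator RNN stack, and the features, learning rate, and data are assumptions; in the unified network every parameter of both sub-networks would receive such an update.

```python
def mae(preds, targets):
    # Mean absolute error: the QE training objective.
    return sum(abs(p - t) for p, t in zip(preds, targets)) / len(preds)

def predict(w, b, x):
    # Stand-in for the estimator: a linear map over predictor features.
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def joint_step(w, b, batch, lr=0.05):
    # One subgradient descent step on MAE over a batch of
    # (features, QE score) pairs; d|e|/dw = sign(e) * x.
    gw = [0.0] * len(w)
    gb = 0.0
    for x, y in batch:
        s = 1.0 if predict(w, b, x) > y else -1.0
        for i, xi in enumerate(x):
            gw[i] += s * xi / len(batch)
        gb += s / len(batch)
    return [wi - lr * gi for wi, gi in zip(w, gw)], b - lr * gb
```

Iterating `joint_step` drives the MAE down on the training pairs, which is the sense in which joint tuning adapts all parameters to the QE task rather than freezing the pre-trained predictor.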