-
Tsutomu YOSHINAGA
2014 Volume E97.D Issue 12 Pages 2982-2983
Published: 2014
Released on J-STAGE: December 01, 2014
-
Ryo HAMAMOTO, Chisa TAKANO, Kenji ISHIDA, Masaki AIDA
Article type: PAPER
Subject area: Wireless Network
2014 Volume E97.D Issue 12 Pages 2984-2994
Published: 2014
Released on J-STAGE: December 01, 2014
Mobile ad hoc networks (MANETs) consist of mobile terminals that connect directly with one another to communicate without a network infrastructure, such as base stations or access points of wireless local area networks (LANs) connected to wired backbone networks. Large-scale disasters such as tsunamis and earthquakes can cause serious damage to life and property, as well as to network infrastructure; MANETs, however, can function even after severe disasters have destroyed the regular network infrastructure. We have proposed an autonomous decentralized structure formation technology based on local interaction, and have applied it to implement autonomous decentralized clustering on MANETs. This method is known to configure clusters that reflect the network condition, such as residual battery power and the degree of each node. However, the effect of clusters that reflect the network condition has not been evaluated. In this study, we configure clusters using our method, the back-diffusion method, and a bio-inspired method, a kind of autonomous decentralized clustering that cannot reflect the network condition. We also clarify the importance of clustering that reflects the network condition with regard to power consumption and data transfer efficiency.
-
Qian ZHAO, Yukikazu NAKAMOTO
Article type: PAPER
Subject area: Wireless Network
2014 Volume E97.D Issue 12 Pages 2995-3006
Published: 2014
Released on J-STAGE: December 01, 2014
Wireless sensor networks (WSNs) consist of numerous wireless sensor nodes, each embedding a tiny communication device that enables the nodes to communicate with each other or with the base station. In this paper, we investigate the problem that communication distance must be considered when minimizing wireless communication energy, since energy consumption is proportional to the 2nd to the 6th power of the distance. A further problem is the non-uniform energy drain effect present in most topologies; known as the energy hole problem, it can result in premature termination of the entire network. To address these problems, we first propose a communication routing algorithm that mitigates the energy hole problem to the maximum extent possible while minimizing wireless communication energy by generating an energy-efficient spanning tree. This algorithm is beneficial for network lifetimes defined by a high node termination percentage. For WSNs in which the energy hole problem is critical, we propose two route switching algorithms to solve it; these are beneficial for network lifetimes defined by a low node termination percentage. Simulation results show that these algorithms avoid the energy hole problem and thereby extend the lifetime of WSNs to more than 3 to 6 times that of networks using direct transmission, in 20-node and 50-node networks, when the lifetime of a WSN is defined by 1% of the nodes being terminated.
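Since the abstract describes routing over an energy-efficient spanning tree, with radio energy growing as the 2nd to 6th power of distance, a rough illustration may help. Below is a minimal sketch (not the paper's algorithm; the node positions, alpha, and cost model are assumptions) of Prim's algorithm with edge weight distance**alpha:

```python
# Sketch: energy-aware spanning tree via Prim's algorithm.
# Edge cost d**alpha mimics radio energy growing with distance.
import heapq
import math

def energy_spanning_tree(nodes, alpha=2.0, root=0):
    """Prim's algorithm with edge weight distance**alpha; returns parent[]."""
    n = len(nodes)
    parent = [None] * n
    in_tree = [False] * n
    heap = [(0.0, root, root)]          # (cost, node, tentative parent)
    while heap:
        cost, u, p = heapq.heappop(heap)
        if in_tree[u]:
            continue
        in_tree[u] = True
        parent[u] = p
        for v in range(n):
            if not in_tree[v]:
                d = math.dist(nodes[u], nodes[v])
                heapq.heappush(heap, (d ** alpha, v, u))
    return parent

nodes = [(0, 0), (1, 2), (3, 1), (4, 4), (2, 3)]   # hypothetical positions
print(energy_spanning_tree(nodes))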
-
Zhenwei DING, Yusuke OMORI, Ryoichi SHINKUMA, Tatsuro TAKAHASHI
Article type: PAPER
Subject area: Wireless Network
2014 Volume E97.D Issue 12 Pages 3007-3015
Published: 2014
Released on J-STAGE: December 01, 2014
Simulating the mobility of mobile devices has always been an important issue for wireless networks, because mobility must be taken into account in various situations. Researchers have been trying for many years to improve the accuracy and flexibility of mobility models. Although recent progress in designing mobility models based on social graphs has enhanced their performance and made them more convenient to use, we believe the accuracy and flexibility of mobility models can be further improved by taking a more integrated structure as the input. In this paper, we propose a new way of designing mobility models on the basis of a relational graph [1], a graph depicting the relations among objects, e.g., between people, and between people and places. Moreover, novel mobility features are introduced in the proposed model to provide social, spatial, and temporal properties. It was demonstrated by simulation that these measures can generate results similar to real mobility data.
-
Hui JING, Hitoshi AIDA
Article type: PAPER
Subject area: Wireless Network
2014 Volume E97.D Issue 12 Pages 3016-3024
Published: 2014
Released on J-STAGE: December 01, 2014
As one of the most widely investigated topics in wireless sensor networks (WSNs), multihop networking is increasingly developed and applied to achieve energy-efficient communication and to enhance transmission reliability. To analyze the energy-efficiency metric accurately and realistically, we first measure the energy dissipation of each node state and establish a practical energy consumption model for a WSN. Based on an analytical model of connectivity, Gaussian approximations of the experimental connection probability are derived for the optimization problem on energy efficiency. Moreover, to integrate the experimental results with theory, we propose a methodology for multihop wireless sensor networks that maximizes efficiency by nonlinear programming, considering energy consumption and the total quantity of data sensed and delivered to the base station. Furthermore, we present quantitative evaluations for various wireless sensor networks with respect to energy efficiency and network configuration, in view of connectivity, data length, maximum number of hops, and total number of nodes. Consequently, the realistic analysis can be used in practical applications, especially in self-organizing sensor networks. The analysis also shows correlations between efficiency and the maximum number of hops; that is, multihop systems with several hops can accommodate enough devices for ordinary applications. The contribution that distinguishes this paper from others is that our model and analysis are derived from experiments, so the results of the analysis and the proposal can be conveniently applied to actual networks.
-
Yasuaki YUJI, Satoshi FUJITA
Article type: PAPER
Subject area: Network
2014 Volume E97.D Issue 12 Pages 3025-3032
Published: 2014
Released on J-STAGE: December 01, 2014
This paper proposes a method to reduce playback suspension in a Video-on-Demand system based on Peer-to-Peer technology (P2P VoD). Our contribution is twofold. The first is a hierarchical P2P architecture with the notion of dynamic swarms. A swarm is a group of peers with similar playback positions; swarms are connected by an overlay so that requested pieces are forwarded from one swarm to another in a bucket-brigade manner, with the forwarding of pieces regulated by the super-peer (SP) of each swarm. The second contribution is a matchmaking scheme between requests and uploaders. The simulation results indicate that the proposed scheme reduces the total waiting time of a randomized scheme by 24% and the load of the media server by 76%.
-
Taishi NAKASHIMA, Satoshi FUJITA
Article type: PAPER
Subject area: Network
2014 Volume E97.D Issue 12 Pages 3033-3040
Published: 2014
Released on J-STAGE: December 01, 2014
This paper proposes a consistency maintenance scheme for P2P file sharing systems. The basic idea of the proposed scheme is to construct a static tree for each shared file to efficiently propagate update information to all replica peers. The link to the root of each tree is acquired by referring to a Chord ring which stores the mapping from the set of shared files to the set of tree roots. The performance of the scheme is evaluated by simulation. The simulation results indicate that: 1) it reduces the number of messages of Li's scheme by 54%; 2) it reduces the propagation delay of that scheme by more than 10%; and 3) the increase in delay due to peer churn is effectively bounded, provided that the percentage of leaving peers is less than 40%.
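As a rough illustration of the lookup structure the abstract mentions, here is a minimal sketch (an assumption-laden toy, not the paper's implementation) of a Chord-style ring mapping each shared file to the peer responsible for its tree root via consistent hashing:

```python
# Sketch: Chord-style consistent hashing from file names to tree-root peers.
import hashlib
from bisect import bisect_left

def chord_id(key: str, bits: int = 16) -> int:
    """Hash a key onto the 2**bits identifier ring."""
    return int(hashlib.sha1(key.encode()).hexdigest(), 16) % (1 << bits)

# Hypothetical peer names; their hashed IDs form the ring.
peers = sorted(chord_id(f"peer-{i}") for i in range(8))

def successor(key: str) -> int:
    """Peer responsible for key: first peer clockwise from the key's ID."""
    i = bisect_left(peers, chord_id(key))
    return peers[i % len(peers)]       # wrap around the ring

print(successor("shared-file.iso"))    # tree-root lookup for this file
```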
-
Makoto SUGIHARA
Article type: PAPER
Subject area: Network
2014 Volume E97.D Issue 12 Pages 3041-3051
Published: 2014
Released on J-STAGE: December 01, 2014
Industrial applications such as automotive ones require a cheap communication mechanism that delivers messages from node to node by their deadlines. This paper presents a design paradigm in which we optimize both the assignment of network nodes to buses and the slot multiplexing of a FlexRay network system under hard real-time constraints, so that the cost of the wire harness for the FlexRay network system is minimized. We formulate the cost minimization problem as a non-linear model and developed a network synthesis tool based on simulated annealing. Our experimental results show that our design paradigm achieved a 50.0% lower cost than a previously proposed approach for a virtual cost model.
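The abstract mentions a synthesis tool based on simulated annealing. The following is a minimal sketch of simulated annealing over node-to-bus assignments; the toy cost() (bus-load balance) and all parameters are placeholders for the paper's wire-harness cost model and deadline constraints:

```python
# Sketch: simulated annealing over node-to-bus assignments.
import math
import random

N_NODES, N_BUSES = 10, 3

def cost(assign):
    # Hypothetical stand-in for the wire-harness cost model: favor balanced
    # bus loads (the real model also enforces message deadlines).
    loads = [assign.count(b) for b in range(N_BUSES)]
    return sum(l * l for l in loads)

def anneal(temp=10.0, cooling=0.995, steps=5000):
    assign = [random.randrange(N_BUSES) for _ in range(N_NODES)]
    best, best_cost = assign[:], cost(assign)
    for _ in range(steps):
        cand = assign[:]
        cand[random.randrange(N_NODES)] = random.randrange(N_BUSES)
        delta = cost(cand) - cost(assign)
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            assign = cand                          # accept the move
            if cost(assign) < best_cost:
                best, best_cost = assign[:], cost(assign)
        temp *= cooling                            # cool the temperature
    return best, best_cost

print(anneal())
```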
-
Akihiko KASAGI, Koji NAKANO, Yasuaki ITO
Article type: PAPER
Subject area: GPU
2014 Volume E97.D Issue 12 Pages 3052-3062
Published: 2014
Released on J-STAGE: December 01, 2014
The Hierarchical Memory Machine (HMM) is a theoretical parallel computing model that captures the essence of computation on CUDA-enabled GPUs. The offline permutation is a task to copy numbers stored in an array a of size n to an array b of the same size along a permutation P given in advance. A conventional algorithm can complete the offline permutation by executing b[p[i]] ← a[i] for all i in parallel, where an array p stores the permutation P. We first present that the conventional algorithm runs $D_w(P)+2{n\over w}+3L-3$ time units using n threads on the HMM with width w and latency L, where $D_w(P)$ is the distribution of P. We next show that important regular permutations including transpose, shuffle, and bit-reversal permutations run $2{n\over w}+2{n\over kw}+2L-2$ time units on the HMM with k DMMs. We have implemented permutation algorithms for these regular permutations on the GeForce GTX 680 GPU. The experimental results show that these algorithms run much faster than the conventional algorithm. We also present an offline permutation algorithm for any permutation running in $16{n\over w}+16{n\over kw}+16L-16$ time units on the HMM with k DMMs. Quite surprisingly, our offline permutation algorithm on the GPU achieves better performance than the conventional algorithm in random permutation, although the running time has a large constant factor. We can say that the experimental results provide a good example of GPU computation showing that a complicated but ingenious implementation with a larger constant factor in computing time can outperform a much simpler conventional algorithm.
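As a concrete illustration of the conventional algorithm b[p[i]] ← a[i] and one of the regular permutations named above, here is a minimal sequential sketch (not the paper's GPU kernels):

```python
# Sketch: offline permutation and the bit-reversal permutation.
import numpy as np

def permute(a: np.ndarray, p: np.ndarray) -> np.ndarray:
    b = np.empty_like(a)
    b[p] = a                     # b[p[i]] <- a[i] for all i "in parallel"
    return b

def bit_reversal(log_n: int) -> np.ndarray:
    """p[i] = i with its log_n-bit binary representation reversed."""
    n = 1 << log_n
    return np.array([int(format(i, f"0{log_n}b")[::-1], 2) for i in range(n)])

a = np.arange(8)
print(permute(a, bit_reversal(3)))   # [0 4 2 6 1 5 3 7]
```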
-
Duhu MAN, Koji NAKANO, Yasuaki ITO
Article type: PAPER
Subject area: GPU
2014 Volume E97.D Issue 12 Pages 3063-3071
Published: 2014
Released on J-STAGE: December 01, 2014
The Hierarchical Memory Machine (HMM) is a theoretical parallel computing model that captures the essence of computing on CUDA-enabled GPUs. The approximate string matching (ASM) for two strings X and Y of length m and n is a task to find a substring of Y most similar to X. The main contribution of this paper is to show an optimal parallel algorithm for the approximate string matching on the HMM and implement it on the GeForce GTX 580 GPU. Our algorithm runs in $O({n\over w}+{mn\over dw}+{nL\over p}+{mnl\over p})$ time units on the HMM with p threads, d streaming processors, memory bandwidth w, global memory access latency L, and shared memory access latency l. We also show that the lower bound of the computing time is $\Omega({n\over w}+{mn\over dw}+{nL\over p}+{mnl\over p})$ time units. Thus, our algorithm for the approximate string matching is time optimal. Further, we implemented our algorithm on the GeForce GTX 580 GPU and evaluated the performance. The experimental results show that the ASM of two strings of 1024 and 4M (=2^22) characters can be done in 419.6ms, while the sequential algorithm can compute it in 27720ms. Thus, our implementation on the GPU attains a speedup factor of 66.1 over the single CPU implementation.
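For reference, the sequential baseline can be sketched as a semi-global edit-distance DP (an assumption about the similarity measure, which the paper defines precisely): row 0 is all zeros so a match may start anywhere in Y, and the minimum of the last row gives the best end position.

```python
# Sketch: sequential approximate string matching via semi-global edit distance.
def asm(x: str, y: str) -> int:
    m, n = len(x), len(y)
    prev = [0] * (n + 1)                 # row 0: a match may start anywhere
    for i in range(1, m + 1):
        cur = [i] + [0] * n              # column 0: all of X must be matched
        for j in range(1, n + 1):
            cur[j] = min(prev[j] + 1,                          # deletion
                         cur[j - 1] + 1,                       # insertion
                         prev[j - 1] + (x[i - 1] != y[j - 1])) # substitution
        prev = cur
    return min(prev)                     # best score over all end positions

print(asm("survey", "surgery"))          # 2: one substitution, one insertion
```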
-
Kai HUANG, Min YU, Xiaomeng ZHANG, Dandan ZHENG, Siwen XIU, Rongjie YA ...
Article type: PAPER
Subject area: Architecture
2014 Volume E97.D Issue 12 Pages 3072-3082
Published: 2014
Released on J-STAGE: December 01, 2014
The increasing complexity of embedded applications and the prevalence of multiprocessor system-on-chip (MPSoC) designs pose a great challenge for designers: how to achieve performance and programmability simultaneously in embedded systems. Automatic multithreaded code generation methods that take account of performance optimization techniques can be an effective solution. In this paper, we consider the issue of increasing processor utilization and reducing communication cost during multithreaded code generation from Simulink models to improve system performance. We propose a combination of three-layered multithreaded software with Integer Linear Programming (ILP) based design-time mapping and scheduling policies to obtain optimal performance. The hierarchical software with a thread layer increases processor usage, while the mapping and scheduling policies formulate a group of integer linear programming formulations to minimize communication cost as well as to maximize performance. Experimental results demonstrate the advantages of the proposed techniques in terms of performance improvement.
-
Yukihiro SASAGAWA, Jun YAO, Yasuhiko NAKASHIMA
Article type: PAPER
Subject area: Architecture
2014 Volume E97.D Issue 12 Pages 3083-3091
Published: 2014
Released on J-STAGE: December 01, 2014
Razor Flip-Flop (FF) combines well with the dynamic voltage scaling (DVS) technique to achieve high energy efficiency. We previously proposed the RazorProtector scheme, which uses a redundant data-path to provide very fast recovery for a Razor-FF based processor under a very high IR-drop zone. In this paper, we propose a dynamic method that adjusts the redundancy level to fit, at a fine granularity, both program behavior and processor manufacturing variations, so as to achieve optimal power saving. We design an online tuning method that adjusts the redundancy level according to the most relevant parameters: ILP (Instruction Level Parallelism) and DCF (Delay Criticality Factor). Our simulation results show that, under a workload suite with different behaviors, the adaptive redundancy achieves better Energy Delay Product (EDP) reduction than any static control. Compared to the traditional application of Razor-FF and DVS, our proposed dynamic control achieves an EDP reduction of 56% on average for the workloads we studied.
-
Jun YAO, Yasuhiko NAKASHIMA, Naveen DEVISETTI, Kazuhiro YOSHIMURA, Tak ...
Article type: PAPER
Subject area: Architecture
2014 Volume E97.D Issue 12 Pages 3092-3100
Published: 2014
Released on J-STAGE: December 01, 2014
General-purpose many-core architectures (MCAs) such as GPGPUs have recently been widely used to continue performance scaling as the continuous increase in working frequency approaches manufacturing limitations. However, both general-purpose MCAs and their building block, the general-purpose processor (GPP), lack the tuning capability to boost energy efficiency for individual applications, especially computation-intensive applications. As an alternative to the above MCA platforms, we propose in this paper our LAPP (Linear Array Pipeline) architecture, which takes a special-purpose reconfigurable structure for optimal MIPS/W while keeping backward binary compatibility, a feature absent from most special-purpose hardware. More specifically, we use a general-purpose VLIW processor, interpreting a commercial VLIW ISA, as the baseline frontend to provide the backward binary compatibility. We also extend the functional unit (FU) stage into an FU array that forms a reconfigurable backend for efficient execution of program hotspots to exploit parallelism. The hardware modules in this general-purpose reconfigurable architecture are locally zoned into several groups so that suitable low-power techniques can be applied according to each module's hardware features. Our results show that, under comparable performance, the tightly coupled general/special-purpose hardware, based on a 180nm cell library, achieves 10.8 times the MIPS/W of an MCA with the same technology features. When a 65nm technology node is assumed, a similar 9.4x MIPS/W gain can be achieved by the LAPP without changing program binaries.
-
Asahi TAKAOKA, Satoshi TAYU, Shuichi UENO
Article type: PAPER
Subject area: Fundamentals of Information Systems
2014 Volume E97.D Issue 12 Pages 3101-3109
Published: 2014
Released on J-STAGE: December 01, 2014
An orthogonal ray graph is an intersection graph of horizontal and vertical rays (closed half-lines) in the plane. Such a graph is 3-directional if every vertical ray has the same direction, and 2-directional if every vertical ray has the same direction and every horizontal ray has the same direction. We derive some structural properties of orthogonal ray graphs, and based on these properties, we introduce polynomial-time algorithms that solve the dominating set problem, the induced matching problem, and the strong edge coloring problem for these graphs. We show that for 2-directional orthogonal ray graphs, the dominating set problem can be solved in $O(n^2 \log^5 n)$ time, the weighted dominating set problem can be solved in $O(n^4 \log n)$ time, and the number of dominating sets of a fixed size can be computed in $O(n^6 \log n)$ time, where n is the number of vertices in the graph. We also show that for 2-directional orthogonal ray graphs, the weighted induced matching problem and the strong edge coloring problem can be solved in $O(n^2 + m \log n)$ time, where m is the number of edges in the graph. Moreover, we show that for 3-directional orthogonal ray graphs, the induced matching problem can be solved in $O(m^2)$ time, the weighted induced matching problem can be solved in $O(m^4)$ time, and the strong edge coloring problem can be solved in $O(m^3)$ time. We finally show that the weighted induced matching problem can be solved in $O(m^6)$ time for orthogonal ray graphs.
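As a small illustration of the objects involved, the following sketch (not from the paper) tests whether a horizontal and a vertical ray intersect under a 2-directional convention, assuming all horizontal rays point right and all vertical rays point up:

```python
# Sketch: intersection test for rays in a 2-directional orthogonal ray graph.
def rays_intersect(h_origin, v_origin):
    """h_origin=(hx, hy) is a rightward ray; v_origin=(vx, vy) is an upward ray."""
    hx, hy = h_origin
    vx, vy = v_origin
    # The only possible crossing point is (vx, hy); it lies on both rays
    # iff it is to the right of hx and above vy.
    return vx >= hx and hy >= vy

print(rays_intersect((0, 3), (2, 1)))    # True: the rays cross at (2, 3)
print(rays_intersect((0, 3), (-1, 1)))   # False: crossing would need x = -1
```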
-
Yuya KORA, Kyohei YAMAGUCHI, Hideki ANDO
Article type: PAPER
Subject area: Computer System
2014 Volume E97.D Issue 12 Pages 3110-3123
Published: 2014
Released on J-STAGE: December 01, 2014
Single-thread performance has not improved much over the past few years, despite an ever-increasing transistor budget. One of the reasons for this is the speed gap between the processor and main memory, known as the memory wall. A promising way to overcome this memory wall is aggressive out-of-order execution that extensively enlarges the instruction window resources to exploit memory-level parallelism (MLP). However, simply enlarging the window resources lengthens the clock cycle time. Although pipelining the resources solves this problem, it in turn prevents instruction-level parallelism (ILP) from being exploited, because issuing instructions then requires multiple clock cycles. This paper proposes a dynamic scheme that adaptively resizes the instruction window based on the predicted available parallelism, either ILP or MLP. Specifically, if the scheme predicts that MLP is available during execution, the instruction window is enlarged and the window resources are pipelined, thereby exploiting MLP. Conversely, if the scheme predicts that less MLP is available, that is, that ILP is exploitable for improved performance, the instruction window is shrunk and the window resources are de-pipelined, thereby exploiting ILP. Our evaluation results using the SPEC2006 benchmark programs show that the proposed scheme achieves nearly the best performance possible with fixed-size resources. On average, our scheme realizes a performance improvement of 21% over a conventional processor, at an additional cost of only 6% of the conventional processor core area, or 3% of the entire processor chip. The evaluation results also show 8% better energy efficiency in terms of 1/EDP (inverse energy-delay product).
-
Antoine TROUVÉ, Arnaldo J. CRUZ, Dhouha BEN BRAHIM, Hiroki FUKUYAMA, K ...
Article type: PAPER
Subject area: Software System
2014 Volume E97.D Issue 12 Pages 3124-3132
Published: 2014
Released on J-STAGE: December 01, 2014
Basic block vectorization consists in realizing instruction-level parallelism inside basic blocks in order to generate SIMD instructions and thus speed up data processing. It is, however, problematic, because the vectorized program may actually be slower than the original one. It would therefore be useful to predict beforehand whether vectorization will actually produce any speedup. This paper proposes to do so by expressing vectorization profitability as a classification problem and predicting it with a machine learning technique called the support vector machine (SVM). It considers three compilers (icc, gcc and LLVM) and a benchmark suite of 151 loops, unrolled with factors ranging from 1 to 20. The paper further proposes a technique that combines the results of two SVMs to reach 99% accuracy for all three compilers. Moreover, by correctly predicting unprofitable vectorizations, the technique presented in this paper provides speedups of up to 2.16 times, 2.47 times and 3.83 times for icc, gcc and LLVM, respectively (9%, 18% and 56% on average). It also lowers to less than 1% the probability of the compiler generating a slower program with vectorization turned on (from more than 25% for the compilers alone).
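A minimal sketch of the classification setup follows, using scikit-learn with hypothetical loop features; the paper's actual feature set, training data, and two-SVM combination differ:

```python
# Sketch: vectorization profitability as binary SVM classification.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical feature vectors: [unroll_factor, n_loads, n_stores, n_arith]
X = np.array([[4, 8, 4, 16], [1, 2, 1, 3], [20, 40, 20, 80], [8, 6, 2, 12]])
y = np.array([1, 0, 0, 1])           # 1 = vectorization was profitable

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, y)
print(clf.predict([[2, 4, 2, 6]]))   # predicted profitability for a new loop
```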
-
Sewoog KIM, Dongwoo KANG, Jongmoo CHOI
Article type: PAPER
Subject area: Software System
2014 Volume E97.D Issue 12 Pages 3133-3141
Published: 2014
Released on J-STAGE: December 01, 2014
As virtualization technology becomes the core ingredient of recent promising IT infrastructures such as utility computing and cloud computing, accurate analysis of the internal behavior of virtual machines becomes more and more important. In this paper, we first propose a novel I/O fairness analysis tool for virtualization systems that is fine-grained, multimodal, and multidimensional. Then, using the tool, we observe various I/O behaviors in our experimental XEN-based virtualization system. Our observations disclose that 1) I/O fairness among virtual machines is broken frequently even though each virtual machine requests the same amount of I/O, 2) the unfairness is caused by an intricate combination of factors including I/O scheduling, CPU scheduling, and interactions between the I/O control domain and virtual machines, and 3) some mechanisms, especially the CFQ (Completely Fair Queuing) I/O scheduler that supports fairness reasonably well in a non-virtualized system, do not work well in a virtualization system due to virtualization-unawareness. These observations drive us to design a new virtualization-aware I/O scheduler for enhancing I/O fairness. It gives scheduling opportunities to asynchronous I/Os in a controlled manner so that it can avoid the unfairness caused by priority inversion between low-priority asynchronous I/Os and high-priority synchronous I/Os. Experimental results from a real implementation show that our proposal enhances I/O fairness, reducing the standard deviation of finishing times among virtual machines from 4.5 to 1.2.
-
Tomohiro WARASHINA, Kazuo AOYAMA, Hiroshi SAWADA, Takashi HATTORI
Article type: PAPER
Subject area: Data Engineering, Web Information Systems
2014 Volume E97.D Issue 12 Pages 3142-3154
Published: 2014
Released on J-STAGE: December 01, 2014
This paper presents an efficient method using Hadoop MapReduce for constructing a K-nearest neighbor graph (K-NNG) from a large-scale data set. The K-NNG has been utilized as a data structure for data analysis techniques in various applications. If we are to apply such techniques to a large-scale data set, it is desirable to develop an efficient K-NNG construction method. We focus on NN-Descent, which is a recently proposed method that efficiently constructs an approximate K-NNG. NN-Descent is implemented on a shared-memory system with OpenMP-based parallelization, and its extension to the Hadoop MapReduce framework is implied for larger data sets that a shared-memory system has difficulty dealing with. However, a simple extension to the Hadoop MapReduce framework is impractical, since it requires extremely high system performance because of the high memory consumption and the low data transmission efficiency of MapReduce jobs. The proposed method relaxes this requirement by improving the MapReduce jobs, employing an appropriate key-value pair format and an efficient sampling strategy. Experiments on large-scale data sets demonstrate that the proposed method both works efficiently and is scalable in terms of data size, the number of machine nodes, and the graph structural parameter K.
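For contrast with the approximate construction described above, a brute-force K-NNG (exact, quadratic in the number of points, and only viable for small data) can be sketched as follows:

```python
# Sketch: brute-force K-nearest neighbor graph construction.
import numpy as np

def knn_graph(points: np.ndarray, k: int) -> list[list[int]]:
    """Return, for each point, the indices of its k nearest neighbors."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # a point is not its own neighbor
    return [list(np.argsort(row)[:k]) for row in d]

rng = np.random.default_rng(0)
graph = knn_graph(rng.random((100, 8)), k=5)   # 100 points in 8 dimensions
print(graph[0])                                 # neighbors of point 0
```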
-
Yoon Hak KIM
Article type: PAPER
Subject area: Information Network
2014 Volume E97.D Issue 12 Pages 3155-3162
Published: 2014
Released on J-STAGE: December 01, 2014
In this paper, we consider distributed estimation where the measurement at each of the distributed sensor nodes is quantized before being transmitted to a fusion node which produces an estimate of the parameter of interest. Since each quantized measurement can be linked to a region where the parameter is found, aggregating the information obtained from multiple nodes corresponds to generating intersections between the regions. Thus, we develop estimation algorithms that seek to find the intersection region with the maximum likelihood rather than the parameter itself. Specifically, we propose two practical techniques that facilitate fast search with significantly reduced complexity and apply the proposed techniques to a system where an acoustic amplitude sensor model is employed at each node for source localization. Our simulation results show that our proposed algorithms achieve good performance with reasonable complexity as compared with the minimum mean squared error (MMSE) and the maximum likelihood (ML) estimators.
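A minimal sketch of the region-intersection idea follows; the sensor positions, the distance bands standing in for quantized amplitude readings, and the grid search are all illustrative assumptions, not the paper's algorithm:

```python
# Sketch: each sensor's quantized reading confines the source to a ring
# (distance band) around that sensor; intersecting the rings on a grid
# approximates the region where the source lies.
import numpy as np

sensors = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])
bands = [(3.0, 5.0), (5.0, 7.0), (4.0, 6.0)]   # hypothetical distance bands

xs, ys = np.meshgrid(np.linspace(0, 10, 201), np.linspace(0, 10, 201))
feasible = np.ones_like(xs, dtype=bool)
for (sx, sy), (lo, hi) in zip(sensors, bands):
    dist = np.hypot(xs - sx, ys - sy)
    feasible &= (lo <= dist) & (dist <= hi)    # intersect this sensor's ring

pts = np.argwhere(feasible)
if len(pts):
    i, j = pts.mean(axis=0).astype(int)        # centroid of the intersection
    print("estimate:", xs[i, j], ys[i, j])
```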
-
Xun SHAO, Go HASEGAWA, Yoshiaki TANIGUCHI, Hirotaka NAKANO
Article type: PAPER
Subject area: Information Network
2014 Volume E97.D Issue 12 Pages 3163-3170
Published: 2014
Released on J-STAGE: December 01, 2014
As an interdomain routing protocol, BGP is fairly simple and allows a variety of policies based on ISPs' preferences. However, recent studies show that BGP routes are often non-optimal in end-to-end performance, due to technological and economic reasons. To obtain improved end-to-end performance, overlay routing, which can change traffic routing in the application layer, has gained attention. However, overlay routing often violates BGP routing policies and harms ISPs' interests. To exploit the advantage of overlays for improving end-to-end performance while overcoming their disadvantages, we propose a novel interdomain overlay structure in which overlay nodes are operated by ISPs within an ISP alliance. Traffic between ISPs within the alliance can be routed by overlay routing, while other traffic is still routed by BGP. Because economic structure plays a very important role in interdomain routing, we also propose an effective and fair charging and pricing scheme within the ISP alliance, in correspondence with the overlay routing structure. Finally, we give a simple pricing algorithm with which ISPs can find the optimal prices in practice. By mathematical analysis and numerical experiments, we show the correctness and convergence of the pricing algorithm.
-
Satoshi HASHIMOTO, Takahiro TANAKA, Kazuaki AOKI, Kinya FUJITA
Article type: PAPER
Subject area: Human-computer Interaction
2014 Volume E97.D Issue 12 Pages 3171-3180
Published: 2014
Released on J-STAGE: December 01, 2014
Frequently interrupting someone who is busy decreases his or her productivity. To minimize this risk, a number of interruptibility estimation methods based on PC activity, such as typing or mouse clicks, have been developed. However, these estimation methods do not take into account the effect of conversations on the interruptibility of office workers engaged in intellectual activities such as scientific research. This study proposes an interruptibility estimation method that takes the conversation status into account. Two conversation indices, “In conversation” and “End of conversation”, were used in a method that we developed based on our analysis of 50 hours' worth of recorded activity. Experiments using the conversation status as judged by the Wizard-of-Oz method demonstrated that the estimation accuracy can be improved by the two indices. Furthermore, an automatic conversation status recognition system was developed to replace the Wizard-of-Oz procedure. The results of using it for interruptibility estimation suggest the effectiveness of the automatically recognized conversation status.
-
Kenichiro FUKUSHI, Itsuo KUMAZAWA
Article type: PAPER
Subject area: Image Recognition, Computer Vision
2014 Volume E97.D Issue 12 Pages 3181-3191
Published: 2014
Released on J-STAGE: December 01, 2014
In this paper, we present a computer vision-based human tracking system with multiple stereo cameras. Many widely used methods, such as the KLT-tracker, update the trackers “frame-to-frame,” so that features extracted from one frame are utilized to update their current state. In contrast, we propose a novel optimization technique for the “multi-frame” approach that computes resultant trajectories directly from video sequences, in order to achieve high-level robustness against severe occlusion, which is known to be a challenging problem in computer vision. We developed a heuristic optimization technique to estimate human trajectories, instead of using dynamic programming (DP) or an iterative approach, which makes our method sufficiently efficient to operate in real time. Six video sequences in which one to six people walk in a narrow laboratory space were processed using our system. The results confirm that our system is capable of tracking cluttered scenes in which severe occlusion occurs and people are frequently in close proximity to each other. Moreover, only minimal information, rather than full camera images, needs to be communicated over the network for tracking; hence, commonly used network devices are sufficient for constructing our tracking system.
-
Wa SI, Xun PAN, Harutoshi OGAI, Katsumi HIRAI, Noriyoshi YAMAUCHI, Tan ...
Article type: PAPER
Subject area: Biocybernetics, Neurocomputing
2014 Volume E97.D Issue 12 Pages 3192-3200
Published: 2014
Released on J-STAGE: December 01, 2014
This paper presents an illumination modeling method for lighting control that can model the illumination distribution inside office buildings. The algorithm uses data from illumination sensors to train Radial Basis Function Neural Networks (RBFNNs), which can be used to calculate 1) the illuminance contribution from each luminaire to different positions in the office and 2) the natural illuminance distribution inside the office. This method can provide detailed illumination contributions from both artificial and natural light sources for lighting control algorithms while using a small number of sensors. Simulations with DIALux demonstrate the feasibility and accuracy of the modeling method.
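A minimal sketch of an RBF network fit of the kind described follows; the synthetic positions and illuminance values are assumptions, whereas the paper trains on real sensor data:

```python
# Sketch: RBF network mapping (x, y) office positions to illuminance,
# with Gaussian kernels and a least-squares solve for the output weights.
import numpy as np

def rbf_design(X, centers, width):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width**2))

rng = np.random.default_rng(1)
X = rng.uniform(0, 10, (200, 2))                 # hypothetical sensor positions
y = 500 * np.exp(-((X - 5) ** 2).sum(1) / 20)    # synthetic illuminance (lx)

centers = rng.uniform(0, 10, (25, 2))            # RBF centers
Phi = rbf_design(X, centers, width=2.0)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)      # train output-layer weights

query = np.array([[5.0, 5.0]])
print(rbf_design(query, centers, 2.0) @ w)       # predicted illuminance
```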
-
Toshiaki OKABE, Kazuhiro HOTTA
Article type: PAPER
Subject area: Biological Engineering
2014 Volume E97.D Issue 12 Pages 3201-3209
Published: 2014
Released on J-STAGE: December 01, 2014
This paper proposes an automatic error correction method for melanosome tracking. Melanosomes in intracellular images are currently tracked manually when investigating diseases, so an automatic tracking method is desirable. We detect all melanosome candidates by SIFT with two different parameter settings. Of course, SIFT also detects non-melanosomes, so we use a 4-valued difference image (4-VDimage) to eliminate non-melanosome candidates. After tracking a melanosome from frame t to t+1, we re-track melanosomes with low confidence backwards, from t+1 to t. If the results from t to t+1 and from t+1 to t differ, we judge that the initial tracking result is a failure, eliminate the melanosome as a candidate, and carry out re-tracking. Experiments demonstrate that our method can correct errors and improve accuracy.
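The forward-backward consistency check can be sketched as follows; this is a minimal illustration with nearest-neighbor matching standing in for the paper's tracker:

```python
# Sketch: forward-backward consistency check. A track from frame t to t+1
# is re-tracked backward; if the backward result does not return near the
# starting position, the forward match is judged a failure.
import numpy as np

def nearest(pos, candidates):
    """Index of the candidate position closest to pos."""
    return int(np.argmin(np.linalg.norm(candidates - pos, axis=1)))

def forward_backward_ok(p_t, cands_t, cands_t1, tol=2.0):
    fwd = cands_t1[nearest(p_t, cands_t1)]       # track t -> t+1
    back = cands_t[nearest(fwd, cands_t)]        # re-track t+1 -> t
    return np.linalg.norm(back - p_t) <= tol     # consistent round trip?

cands_t = np.array([[10.0, 10.0], [30.0, 5.0]])
cands_t1 = np.array([[11.0, 10.5], [29.0, 6.0]])
print(forward_backward_ok(np.array([10.0, 10.0]), cands_t, cands_t1))
```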
-
Junping DENG, Xian-Hua HAN, Yen-Wei CHEN, Gang XU, Yoshinobu SATO, Mas ...
Article type: PAPER
Subject area: Biological Engineering
2014 Volume E97.D Issue 12 Pages 3210-3221
Published: 2014
Released on J-STAGE: December 01, 2014
Chronic liver disease is a major worldwide health problem, and the diagnosis and staging of chronic liver diseases is an important issue. In this paper, we propose a quantitative method of analyzing local morphological changes for accurate and practical computer-aided diagnosis of cirrhosis. Our method is based on sparse and low-rank matrix decomposition, since the matrix of liver shapes can be decomposed into two parts: a low-rank matrix, which can be considered similar to that of a normal liver, and a sparse error term that represents the local deformation. Compared with the previous global morphological analysis strategy based on the statistical shape model (SSM), our proposed method improves the accuracy of both normal and abnormal classifications. We also propose using the norm of the sparse error term as a simple measure for classification as normal or abnormal. The experimental results of the proposed method are better than those of the state-of-the-art SSM-based methods.
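A minimal sketch of a sparse-plus-low-rank split follows; it is a crude alternating heuristic built from singular value thresholding and soft thresholding, not the paper's solver, and the synthetic data is an assumption:

```python
# Sketch: decompose M ~ L + S, with L low-rank ("normal" shape component)
# and S sparse (local deformation); the L1 norm of S serves as a simple
# abnormality measure, echoing the measure proposed in the paper.
import numpy as np

def soft(X, tau):
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def sparse_low_rank(M, lam=None, tau=1.0, iters=100):
    if lam is None:
        lam = 1.0 / np.sqrt(max(M.shape))
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(iters):
        L = svt(M - S, tau)            # low-rank part
        S = soft(M - L, lam * tau)     # sparse part
    return L, S

rng = np.random.default_rng(2)
M = rng.random((20, 5)) @ rng.random((5, 30))   # synthetic low-rank "shapes"
M[3, 4] += 5.0                                  # one local deformation
L, S = sparse_low_rank(M)
print("sparse-term L1 norm:", np.abs(S).sum())
```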
-
Junpyo JEON, Hyoung-Muk LIM, Hyuncheol PARK, Hyoung-Kyu SONG
Article type: LETTER
Subject area: Fundamentals of Information Systems
2014 Volume E97.D Issue 12 Pages 3222-3225
Published: 2014
Released on J-STAGE: December 01, 2014
Cooperative communication has been proposed to overcome the disadvantages of the multiple-input multiple-output (MIMO) technique without using extra antennas. In an orthogonal frequency division multiple access (OFDMA) system, a cooperative scheme in which users share their allocated sub-channels has been proposed, instead of a MIMO system, to improve throughput. However, cooperative communication suffers from decreased throughput because users must exchange information with each other to improve reliability. In this letter, a modified cooperative transmission scheme is proposed to improve reliability in the fading channel; it addresses the problem that BER performance depends on errors in the first phase, in which the two users exchange their information.
-
Shunzhi ZHU, Ying MA, Weiwei PAN, Xiatian ZHU, Guangchun LUO
Article type: LETTER
Subject area: Pattern Recognition
2014 Volume E97.D Issue 12 Pages 3226-3229
Published: 2014
Released on J-STAGE: December 01, 2014
A Balanced Neighborhood Classifier (BNEC) is proposed for class-imbalanced data. This method is not only well positioned to capture the class distribution information, but also has the merits of high fitting performance and simplicity. Experiments on both synthetic and real data sets show its effectiveness.
-
Masanori MORISE, Satoshi TSUZUKI, Hideki BANNO, Kenji OZAWA
Article type: LETTER
Subject area: Speech and Hearing
2014 Volume E97.D Issue 12 Pages 3230-3233
Published: 2014
Released on J-STAGE: December 01, 2014
This research deals with muffled speech as the evaluation target and introduces a criterion for evaluating the auditory impression of muffled speech. The criterion focuses on the vocal tract area function (VTAF) and uses the temporal differentiation of this function to track the temporal variation of the shape of the mouth. The experimental results indicate that the proposed criterion can evaluate the auditory impression in a manner consistent with subjective impressions.
-
Jie GUO, Bin SONG, Fang TIAN, Haixiao LIU, Hao QIN
Article type: LETTER
Subject area: Image Processing and Video Processing
2014 Volume E97.D Issue 12 Pages 3234-3235
Published: 2014
Released on J-STAGE: December 01, 2014
For compressed sensing, to address problems which do not involve reconstruction, a correlation analysis between measurements and the transform coefficients is proposed. It is shown that there is a linear relationship between them, which indicates that we can abstract the inner property of images directly in the measurement domain.
-
Zihan YU, Kiichi URAHAMA
Article type: LETTER
Subject area: Image Processing and Video Processing
2014 Volume E97.D Issue 12 Pages 3236-3238
Published: 2014
Released on J-STAGE: December 01, 2014
We propose an unsharp-masking technique that preserves the hue of colors in images. This method magnifies the contrast of colors and spatially sharpens textures in images, with an adaptively controlled contrast magnification ratio. We show by experiments that this method enhances the color tone of photographs while preserving their perceptual scene depth.
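A minimal sketch of hue-preserving sharpening follows (an assumption about the mechanism; the adaptive control of the magnification ratio described above is not reproduced). Scaling R, G, and B by a common per-pixel gain keeps their ratios, and hence the hue, unchanged:

```python
# Sketch: unsharp masking applied to intensity only, with the resulting
# per-pixel gain applied equally to R, G and B to preserve hue.
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_preserve_hue(rgb, amount=1.0, sigma=2.0):
    rgb = rgb.astype(np.float64)
    intensity = rgb.mean(axis=2)                   # simple intensity channel
    blurred = gaussian_filter(intensity, sigma)
    sharp = intensity + amount * (intensity - blurred)
    gain = sharp / np.maximum(intensity, 1e-6)     # per-pixel magnification
    out = rgb * gain[..., None]                    # same ratio for R, G, B
    return np.clip(out, 0, 255).astype(np.uint8)

img = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)
print(unsharp_preserve_hue(img).shape)
```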
-
Xue CHEN, Chunheng WANG, Baihua XIAO, Song GAO
Article type: LETTER
Subject area: Image Recognition, Computer Vision
2014 Volume E97.D Issue 12 Pages 3239-3243
Published: 2014
Released on J-STAGE: December 01, 2014
This paper proposes to obtain high-level, domain-robust representations for cross-view face recognition. Specifically, we introduce Convolutional Deep Belief Networks (CDBNs) as the feature learning model, and a CDBN-based interpolating path between the source and target views is built to model the correlation of cross-view data. The promising results outperform other state-of-the-art methods.
-
Lifeng HE, Xiao ZHAO, Bin YAO, Yun YANG, Yuyan CHAO
Article type: LETTER
Subject area: Image Recognition, Computer Vision
2014 Volume E97.D Issue 12 Pages 3244-3247
Published: 2014
Released on J-STAGE: December 01, 2014
This paper proposes an efficient two-scan labeling algorithm for binary hexagonal images. Unlike conventional labeling algorithms, which process pixels one by one in the first scan, our algorithm processes pixels two by two. We show that, using our algorithm, a smaller number of pixels need to be checked. Experimental results demonstrate that our method is more efficient than the algorithm extended straightforwardly from the corresponding labeling algorithm for rectangular binary images.
-
Huaxin XIAO, Yu LIU, Wei WANG, Maojun ZHANG
Article type: LETTER
Subject area: Image Recognition, Computer Vision
2014 Volume E97.D Issue 12 Pages 3248-3251
Published: 2014
Released on J-STAGE: December 01, 2014
In consideration of the image noise captured by photoelectric cameras at nighttime, a robust motion detection algorithm based on sparse representation is proposed in this study. A universal dictionary for arbitrary scenes is presented. Experiments on both real and synthetic data demonstrate the robustness of the proposed approach.