-
Ce YU, Xiang CHEN, Chunyu WANG, Hutong WU, Jizhou SUN, Yuelei LI, Xiao ...
Article type: PAPER
Subject area: Fundamentals of Information Systems
2015 Volume E98.D Issue 10 Pages 1727-1735
Published: October 01, 2015
Released on J-STAGE: October 01, 2015
Multi-agent based simulation has been widely used in behavioral finance, and several single-process simulation platforms with Agent-Based Modeling (ABM) have been proposed. However, traditional stock market simulations on single-process computers are limited by computing capability, since financial researchers need ever larger numbers of agents and ever more rounds to evolve agents' intelligence and obtain more useful data. This paper introduces a distributed multi-agent simulation platform, named PSSPAM, for stock market simulation, focusing on large-scale parallel agents, the communication system, and simulation scheduling. A logical architecture for distributed artificial stock market simulation is proposed, containing four loosely coupled modules: the agent module, the market module, the communication system, and the user interface. Agents, each with customizable trading strategies inside, are deployed to multiple computing nodes. Agents exchange messages with each other and with the market, based on a customizable network topology, through a uniform communication system. With a large number of agent threads, a round scheduling strategy is used during the simulation, and a worker pool is applied in the market module. Financial researchers can design their own financial models and run simulations through the user interface, without having to care about the complexity of parallelization and related problems. Two groups of experiments are conducted, one with internal communication between agents and the other without, to verify that PSSPAM is compatible with data from Euronext-NYSE. The platform shows fair scalability and performance under different parallelism configurations.
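As a rough illustration of the two scheduling ideas named above, the sketch below pairs a round barrier for agent threads with a worker pool draining a shared order queue. It is a minimal assumption-laden analogue in Python, not PSSPAM's actual implementation; all names (order_queue, market_worker, the agent loop) are invented for illustration.

```python
# Sketch only: round scheduling via a barrier + a market-side worker pool.
import threading, queue

order_queue = queue.Queue()

def market_worker():
    """Worker-pool thread: drain orders submitted by agents during a round."""
    while True:
        order = order_queue.get()
        if order is None:          # poison pill ends the worker
            break
        # ... match the order against the order book here ...
        order_queue.task_done()

def agent(agent_id, barrier, rounds):
    """Agent thread: one trading decision per round, then sync at the barrier."""
    for r in range(rounds):
        order_queue.put((agent_id, r))   # submit this round's order
        barrier.wait()                   # round scheduling: all agents sync

N_AGENTS, N_WORKERS, ROUNDS = 8, 2, 3
barrier = threading.Barrier(N_AGENTS)
workers = [threading.Thread(target=market_worker) for _ in range(N_WORKERS)]
agents = [threading.Thread(target=agent, args=(i, barrier, ROUNDS))
          for i in range(N_AGENTS)]
for t in workers + agents:
    t.start()
for t in agents:
    t.join()
order_queue.join()                       # wait until all orders are matched
for _ in workers:
    order_queue.put(None)
for t in workers:
    t.join()
```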
-
Atsuki NAGAO, Kazuhisa SETO, Junichi TERUYAMA
Article type: PAPER
Subject area: Fundamentals of Information Systems
2015 Volume E98.D Issue 10 Pages 1736-1743
Published: October 01, 2015
Released on J-STAGE: October 01, 2015
We propose efficient algorithms for Sorting k-Sets in Bins. The Sorting k-Sets in Bins problem can be described as follows. We are given n numbered bins, each containing k balls; the balls in the i-th bin are numbered n-i+1. We may only swap balls between adjacent bins, and the task is to move every ball to the bin bearing its number. For this problem, we give an efficient greedy algorithm with $\frac{k+1}{4}n^2+O(k+n)$ swaps and provide a detailed analysis for k=3. In addition, we give a more efficient recursive algorithm using $\frac{15}{16}n^2+O(n)$ swaps for k=3.
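For readers who want to experiment with the problem, the following Python sketch implements a deliberately naive baseline: it finalizes bins from right to left, walking each ball to its bin one adjacent swap at a time. It is a correctness illustration using O(kn^2) swaps, not the paper's $\frac{k+1}{4}n^2+O(k+n)$ greedy algorithm or its $\frac{15}{16}n^2+O(n)$ recursive one.

```python
def sort_k_sets_in_bins(n, k):
    """Naive baseline sorter; returns the number of adjacent swaps used."""
    bins = [[n - j] * k for j in range(n)]   # bin j+1 holds k balls numbered n-j
    swaps = 0
    for m in range(n, 1, -1):                # finalize bins right to left
        while bins[m - 1].count(m) < k:
            # rightmost bin strictly left of bin m still holding a ball numbered m
            i = max(j for j in range(m - 1) if m in bins[j])
            bins[i].remove(m)                # take one ball numbered m ...
            t = next(t for t in range(len(bins[i + 1])) if bins[i + 1][t] != m)
            bins[i].append(bins[i + 1].pop(t))   # ... and swap it one bin right
            bins[i + 1].append(m)
            swaps += 1
    assert all(b == [j + 1] * k for j, b in enumerate(bins))
    return swaps
```

The final assertion verifies the target layout, so the function can double as a correctness oracle when testing more economical swap strategies.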
-
Koji HASEBE, Jumpei OKOSHI, Kazuhiko KATO
Article type: PAPER
Subject area: Software System
2015 Volume E98.D Issue 10 Pages 1744-1754
Published: October 01, 2015
Released on J-STAGE: October 01, 2015
We present a power-saving method for large-scale storage systems of cloud data-sharing services, particularly those providing media (video and photograph) sharing. The idea behind our method is to periodically rearrange stored data in a disk array so that the workload is skewed toward a small subset of disks, while the other disks can be sent to standby mode. This idea is borrowed from the Popular Data Concentration (PDC) technique, but to avoid an increase in response time caused by accesses to disks in standby mode, we introduce a function that predicts the future access frequencies of uploaded files. This function exploits the correlation of potential future accesses with the combination of the elapsed time since upload and the total number of past accesses. We obtain this function through statistical analysis of the real access patterns of 50,000 randomly selected publicly available photographs on Flickr over 7,000 hours (around 10 months). Moreover, to adapt to a constant massive influx of data, we propose a mechanism that effectively packs the continuously uploaded data into the disk array of a PDC-based storage system. To evaluate the effectiveness of our method, we measured its performance in simulations and in a prototype implementation. We observed that our method consumed 12.2% less energy than the static configuration (in which all disks are in active mode), while maintaining a preferred response time, with only 0.23% of the total accesses involving disks in standby mode.
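A hedged sketch of the placement policy described above: score each file's expected future accesses from its age and past access count, then pack hot files onto the leading disks so the trailing disks can enter standby. The scoring formula and all names here are illustrative assumptions, not the paper's fitted prediction function.

```python
def predicted_accesses(elapsed_hours, past_accesses):
    # Assumed shape only: popularity decays with age, grows with observed demand.
    return past_accesses / (1.0 + elapsed_hours)

def place_files(files, n_disks, disk_capacity):
    """files: list of (name, elapsed_hours, past_accesses, size)."""
    ranked = sorted(files, key=lambda f: predicted_accesses(f[1], f[2]),
                    reverse=True)
    disks, d, used = [[] for _ in range(n_disks)], 0, 0
    for name, age, hits, size in ranked:   # fill disk 0 first, then disk 1, ...
        if used + size > disk_capacity:
            d, used = d + 1, 0
            if d == n_disks:               # out of space: stop placing
                break
        disks[d].append(name)
        used += size
    return disks                           # trailing disks hold only cold data
```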
-
Xue LEI, Wei HUANG, Wenqing FAN, Yixian YANG
Article type: PAPER
Subject area: Software System
2015 Volume E98.D Issue 10 Pages 1755-1764
Published: October 01, 2015
Released on J-STAGE: October 01, 2015
Dynamic analysis is fragile and insufficient for finding hidden paths in environment-intensive programs. By analyzing a broad spectrum of different concolic testing systems, we conclude that many of them either cannot handle programs that interact with the environment or require a complete working model of it. This paper addresses the problem by automatically identifying and modifying the outputs of data input interface functions (DIIFs). The approach is based on fine-grained taint analysis for detecting and updating the data that interacts with the environment, so as to generate new sets of inputs that exercise hidden paths. Moreover, we developed a prototype and conducted extensive experiments on a set of complex, environment-intensive programs. The results demonstrate that our approach can identify DIIFs precisely and effectively discover hidden paths.
-
Haitao ZHANG, Toshiaki AOKI, Yuki CHIBA
Article type: PAPER
Subject area: Software System
2015 Volume E98.D Issue 10 Pages 1765-1776
Published: October 01, 2015
Released on J-STAGE: October 01, 2015
OSEK/VDX, a standard for automotive operating systems, has been widely adopted by many manufacturers to design and develop vehicle-mounted OSs. With the increasing functionality of vehicles, more and more complex applications are being developed on top of the OSEK/VDX OS. However, ensuring the reliability of the developed applications is becoming a challenge for developers. To this end, model checking, as an exhaustive technique, can be applied to discover subtle errors during the development process. Many model checkers have been successfully applied to verify sequential software and general multi-threaded software. However, it is hard to directly use existing model checkers to precisely verify OSEK/VDX applications, since their execution characteristics differ from those of sequential and general multi-threaded software. In this paper, we describe and develop an approach that translates OSEK/VDX applications into sequential programs so that existing model checkers can precisely verify them. The value of our approach is that it can serve as a front-end translator enabling existing model checkers to verify OSEK/VDX applications.
-
Yingxu LAI, Wenwen ZHANG, Zhen YANG
Article type: PAPER
Subject area: Software System
2015 Volume E98.D Issue 10 Pages 1777-1787
Published: October 01, 2015
Released on J-STAGE: October 01, 2015
Current software behavior models lack the ability to conduct semantic analysis. We propose a new model that detects abnormal behaviors based on a function semantic tree. First, a software behavior model in terms of a state graph and software functions is developed. Next, anomaly detection based on the model is conducted in two main steps: calculating the deviation density of suspicious behaviors by comparison with the state graph, and checking function sequences against function semantic rules. Deviation density, with a deviation factor and period division, detects control-flow attacks well. In addition, with the help of semantic analysis, function semantic rules can accurately detect application-layer attacks that traditional approaches fail to catch. Finally, a case study of RSS software illustrates how our approach works. The case study and a contrast experiment show that our model has strong expressivity and detection ability, outperforming traditional behavior models.
-
Peng CHENG, Ivan LEE, Jeng-Shyang PAN, Chun-Wei LIN, John F. RODDICK
Article type: PAPER
Subject area: Artificial Intelligence, Data Mining
2015 Volume E98.D Issue 10 Pages 1788-1798
Published: October 01, 2015
Released on J-STAGE: October 01, 2015
Association rule mining is a powerful data mining tool that can discover unknown patterns from large volumes of data. However, people often face the risk of disclosing sensitive information when data is shared with different organizations, as association rule mining may be improperly used to find sensitive patterns that the owner is unwilling to disclose. One of the great challenges in association rule mining is how to protect the confidentiality of sensitive patterns when data is released. Association rule hiding refers to sanitizing a database so that certain sensitive association rules cannot be mined from the released database. In this study, we propose a new method that hides sensitive rules by removing some items in a database, so as to reduce the support or confidence levels of the sensitive rules below specified thresholds. Based on the information about positive border rules and negative border rules contained in transactions, the proposed method chooses suitable candidates for modification, aiming to reduce the side effects and the degree of data distortion. Comparative experiments on real and synthetic datasets demonstrate that the proposed method can hide sensitive rules with much fewer side effects and database modifications.
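The following minimal sketch illustrates the basic hiding operation described above: deleting items until a single rule A -> B falls below the support or confidence threshold. It ignores the paper's border-based candidate selection and picks an arbitrary victim transaction, so it shows the mechanism only, not the side-effect minimization.

```python
def hide_rule(transactions, A, B, min_sup, min_conf):
    """transactions: list of item sets; hides the rule A -> B in place."""
    A, B = set(A), set(B)
    while True:
        n = len(transactions)
        sup_A = sum(A <= t for t in transactions)        # transactions with A
        sup_AB = sum(A | B <= t for t in transactions)   # transactions with A and B
        if sup_AB / n < min_sup or (sup_A and sup_AB / sup_A < min_conf):
            return transactions                          # rule is now hidden
        # pick any transaction supporting A|B and drop one item of B from it
        victim = next(t for t in transactions if A | B <= t)
        victim.discard(next(iter(B & victim)))

# usage: hide_rule([{1, 2, 3}, {1, 3}, {2, 3}], A={1}, B={3},
#                  min_sup=0.4, min_conf=0.6)
```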
-
Miquel ESPI, Masakiyo FUJIMOTO, Tomohiro NAKATANI
Article type: PAPER
Subject area: Speech and Hearing
2015 Volume E98.D Issue 10 Pages 1799-1807
Published: October 01, 2015
Released on J-STAGE: October 01, 2015
We present a method for recognizing acoustic events in conversation scenarios, where speech usually overlaps with other acoustic events. While speech is usually considered the most informative acoustic event in a conversation scene, it does not always carry all the information. Non-speech events, such as a door knock, steps, or keyboard typing, can reveal aspects of the scene that speakers miss or avoid mentioning. Moreover, robustly detecting these events could further support speech enhancement and recognition systems by providing useful cues about the surrounding scenario and noise. In acoustic event detection, state-of-the-art techniques are typically based on derived features (e.g., MFCCs or Mel filter banks), which have successfully parameterized the spectrogram of speech but lose resolution and detail when targeting other kinds of events. In this paper, we propose a method that learns features in an unsupervised manner from high-resolution spectrogram patches (a patch being a certain number of consecutive frame features stacked together) and integrates them within a deep neural network framework to detect and classify acoustic events. Superiority over both previous works in the field and similar approaches based on derived features has been assessed by statistical measures and an evaluation on the CHIL2007 corpus, an annotated database of seminar recordings.
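As an illustration of the patch construction mentioned above, the sketch below stacks a window of consecutive spectrogram frames into one high-dimensional feature vector per position; the context length of 15 frames is an arbitrary assumption, not the paper's setting.

```python
import numpy as np

def spectrogram_patches(spec, context=15):
    """spec: (n_frames, n_bins) magnitude spectrogram.
    Returns (n_frames - context + 1, context * n_bins) stacked patches."""
    n_frames, n_bins = spec.shape
    assert n_frames >= context, "need at least `context` frames"
    return np.stack([spec[t:t + context].reshape(-1)
                     for t in range(n_frames - context + 1)])
```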
-
Chung-Chien HSU, Kah-Meng CHEONG, Tai-Shih CHI, Yu TSAO
Article type: PAPER
Subject area: Speech and Hearing
2015 Volume E98.D Issue 10 Pages 1808-1817
Published: October 01, 2015
Released on J-STAGE: October 01, 2015
This paper proposes a voice activity detection (VAD) algorithm based on an energy-related feature of the frequency modulation of harmonics. A multi-resolution spectro-temporal analysis framework, originally developed to extract texture features of an audio signal from its Fourier spectrogram, is used to extract frequency modulation features of the speech signal. The proposed algorithm labels the voice-active segments of the speech signal by comparing the energy-related feature of the frequency modulation of harmonics with a threshold. The proposed VAD is then implemented on a Texas Instruments (TI) digital signal processor (DSP) platform for real-time operation. Simulations conducted on the DSP platform demonstrate that the proposed VAD performs significantly better than three standard VADs, ITU-T G.729B, ETSI AMR1, and AMR2, in non-stationary noise, in terms of receiver operating characteristic (ROC) curves and the recognition rates of a practical distributed speech recognition (DSR) system.
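The core decision rule stated above reduces to a per-frame threshold comparison. The sketch below shows that rule, with a hangover smoothing stage added as a common VAD convention; the feature computation itself is left abstract, and the hangover is our assumption rather than a detail taken from the paper.

```python
import numpy as np

def vad_labels(feature_per_frame, threshold, hangover=5):
    """feature_per_frame: 1-D array of the energy-related harmonic-modulation
    feature; returns a boolean voice-activity label per frame."""
    active = np.asarray(feature_per_frame) > threshold
    # keep frames active for `hangover` frames after the last detection
    out, count = np.zeros(active.shape, dtype=bool), 0
    for t, a in enumerate(active):
        count = hangover if a else max(count - 1, 0)
        out[t] = count > 0
    return out
```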
-
Shuji SAKAI, Koichi ITO, Takafumi AOKI, Takafumi WATANABE, Hiroki UNTE ...
Article type: PAPER
Subject area: Image Recognition, Computer Vision
2015 Volume E98.D Issue 10 Pages 1818-1828
Published: October 01, 2015
Released on J-STAGE: October 01, 2015
The window matching used to estimate 3D points is the most serious factor affecting the accuracy, robustness, and computational cost of Multi-View Stereo (MVS) algorithms. Most existing MVS algorithms employ window matching based on Normalized Cross-Correlation (NCC) to estimate the depth of a 3D point. NCC-based window matching estimates the displacement between matching windows with sub-pixel accuracy by linear/cubic interpolation, which does not accurately represent the sub-pixel values of the matching windows. This paper proposes a highly accurate window matching technique for MVS that uses Phase-Only Correlation (POC) with geometric correction. The accurate sub-pixel displacement between two matching windows can be estimated by fitting the analytical correlation peak model of the POC function. The proposed method also corrects the geometric transformations of matching windows by taking the 3D shape of the target object into consideration. The proposed geometric correction makes it possible to achieve accurate 3D reconstruction from multi-view images even for images with large transformations. In a set of experiments, the proposed method demonstrates more accurate 3D reconstruction from multi-view images than conventional methods.
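For orientation, the following sketch shows plain phase-only correlation between two windows: normalizing the cross-power spectrum to unit magnitude leaves only phase, and the peak of the resulting correlation surface gives the integer displacement. The paper's analytical peak-model fitting for sub-pixel accuracy and its geometric correction are not reproduced here.

```python
import numpy as np

def poc_shift(f, g):
    """Estimate the integer displacement of g relative to f (same-size 2-D arrays)."""
    R = np.fft.fft2(f) * np.conj(np.fft.fft2(g))
    R /= np.maximum(np.abs(R), 1e-12)        # keep phase only
    r = np.real(np.fft.ifft2(R))             # POC surface
    dy, dx = np.unravel_index(np.argmax(r), r.shape)
    h, w = r.shape                           # map peak location to a signed shift
    return (dy - h if dy > h // 2 else dy), (dx - w if dx > w // 2 else dx)
```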
-
Hung-Tsai WU, Yi-Ting WU, Wen-Whei CHANG
Article type: PAPER
Subject area: Image Recognition, Computer Vision
2015 Volume E98.D Issue 10 Pages 1829-1837
Published: October 01, 2015
Released on J-STAGE: October 01, 2015
In wireless telecardiology applications, electrocardiogram (ECG) signals are often represented in compressed format for efficient transmission and storage. Biometrics based on compressed ECG enables faster person identification, as it bypasses full decompression. This study presents a new method that combines ECG biometrics with data compression within a common JPEG2000 framework. To this end, an ECG signal is treated as an image, and the JPEG2000 standard is applied for data compression. Features relating to ECG morphology and heartbeat intervals are computed directly from the compressed ECG. Different classification approaches are used for person identification. Experiments on standard ECG databases demonstrate the validity of the proposed system for biometric identification, with high accuracy on both healthy and diseased subjects.
-
Masaki AZUMA, Hiroomi HIKAWA
Article type: PAPER
Subject area: Biocybernetics, Neurocomputing
2015 Volume E98.D Issue 10 Pages 1838-1846
Published: October 01, 2015
Released on J-STAGE: October 01, 2015
Neural networks are widely used in various fields due to their superior learning abilities. This paper proposes a hardware winner-take-all neural network (WTANN) that employs a new winner-take-all (WTA) circuit with phase-modulated pulse signals and digital phase-locked loops (DPLLs). The system uses DPLLs as computing elements, so all input values are expressed by the phases of rectangular signals. The proposed WTA circuit employs a simple winner search circuit. The proposed WTANN architecture is described in very high speed integrated circuit (VHSIC) hardware description language (VHDL), and its feasibility was tested and verified through simulations and experiments. Conventional WTA takes a global winner search approach, in which vector distances are collected from all neurons and compared. In contrast, the WTA in the proposed system is carried out locally by a winner search circuit distributed among the neurons. Therefore, no global communication channels with a wide bandwidth between the winner search module and each neuron are required. Furthermore, the proposed WTANN can easily be scaled up, merely by increasing the number of neurons. The circuit size and speed were evaluated by applying the VHDL description to a logic synthesis tool and through experiments on a field-programmable gate array (FPGA). Vector classifications with the WTANN on two data sets, Iris and Wine, were carried out in VHDL simulations. The results revealed that the proposed WTANN achieved valid learning.
-
Jiasen HUANG, Junyan REN, Wei LI
Article type: LETTER
Subject area: Fundamentals of Information Systems
2015 Volume E98.D Issue 10 Pages 1847-1851
Published: October 01, 2015
Released on J-STAGE: October 01, 2015
Sparse Matrix-Vector Multiplication (SpMxV) is widely used in many high-performance computing applications, including information retrieval, medical imaging, and economic modeling. To eliminate the overhead of zero padding in SpMxV, prior works have focused on partitioning a sparse matrix into row vector sets (RVSs) or sub-matrices. However, performance was still degraded by the sparsity pattern of the matrix. In this letter, we propose a heuristic, called recursive merging, which uses a greedy approach to recursively merge the rows of nonzeros in a matrix into RVSs, such that each included set is ensured to be a locally optimal solution. For ten uneven benchmark matrices from the University of Florida Sparse Matrix Collection, our partitioning algorithm is always the method with the highest mean density (over 96%) and the lowest average relative difference in computing power (below 0.07%).
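As one concrete reading of merging rows into dense RVSs, the sketch below packs rows into fixed-width sets by first-fit-decreasing on their nonzero counts, so that each set is as full (dense) as possible. Both the packing rule and the width parameter are our assumptions for illustration; the letter's recursive merging procedure may differ.

```python
def greedy_merge(row_nnz, width):
    """row_nnz: nonzeros per row (each assumed <= width).
    Packs rows into RVSs of `width` slots; returns (row_ids, density) per set."""
    sets = []                                   # each entry: [free_slots, row_ids]
    for r in sorted(range(len(row_nnz)), key=lambda r: -row_nnz[r]):
        s = next((s for s in sets if row_nnz[r] <= s[0]), None)
        if s is None:                           # no set fits: open a new one
            s = [width, []]
            sets.append(s)
        s[0] -= row_nnz[r]
        s[1].append(r)
    return [(rows, 1 - free / width) for free, rows in sets]
```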
-
Hideo FUJIWARA, Katsuya FUJIWARA
Article type: LETTER
Subject area: Dependable Computing
2015 Volume E98.D Issue 10 Pages 1852-1855
Published: October 01, 2015
Released on J-STAGE: October 01, 2015
In our previous work [12], [13], we introduced generalized feed-forward shift registers (GF2SRs, for short) and applied them to secure and testable scan design, where we considered the security problem from the viewpoint of the complexity of identifying the structure of GF2SRs. Although the proposed scan design is secure in the sense that the structure of a GF2SR cannot be identified from the primary input/output relation alone, it may not be secure if part of the contents of the circuit leaks out. In this paper, we introduce a more secure concept, called strong security, such that no internal state of strongly secure circuits leaks out, and we present how to design such strongly secure GF2SRs.
-
Peng CHENG, Chun-Wei LIN, Jeng-Shyang PAN, Ivan LEE
Article type: LETTER
Subject area: Artificial Intelligence, Data Mining
2015 Volume E98.D Issue 10 Pages 1856-1860
Published: October 01, 2015
Released on J-STAGE: October 01, 2015
Sharing data may bring the risk of disclosing the sensitive knowledge contained in it. Usually, the data owner may choose to sanitize the data, modifying some items in it to hide sensitive knowledge prior to sharing. This paper focuses on protecting sensitive knowledge in the form of frequent itemsets by data sanitization. The sanitization process may cause side effects, i.e., data distortion and damage to the non-sensitive frequent itemsets. How to minimize these side effects is a challenging problem faced by the research community, and there is in fact a trade-off when trying to minimize both side effects simultaneously. In view of this, we propose a data sanitization method based on evolutionary multi-objective optimization (EMO). This method can hide specified sensitive itemsets completely while minimizing the accompanying side effects. Experiments on real datasets show that the proposed approach is very effective in performing the hiding task with less damage to the original data and non-sensitive knowledge.
-
Sangmin PARK, Jinsung BYUN, Byeongkwan KANG, Daebeom JEONG, Beomseok L ...
Article type: LETTER
Subject area: Office Information Systems, e-Business Modeling
2015 Volume E98.D Issue 10 Pages 1861-1865
Published: October 01, 2015
Released on J-STAGE: October 01, 2015
This letter introduces an Energy-Aware LED Light System (EA-LLS) that provides adequate illumination to users according to an analysis of the sun's position, the user's movement, and various environmental factors, without sun illumination detection sensors. The letter presents the underlying algorithms and usage scenarios. We propose an EA-LLS that offers not only on/off and dimming control, but also dimming control based on daylight, space, and user behavior analysis.
-
Zhong ZHANG, Shuang LIU, Zhiwei ZHANG
Article type: LETTER
Subject area: Pattern Recognition
2015 Volume E98.D Issue 10 Pages 1866-1870
Published: October 01, 2015
Released on J-STAGE: October 01, 2015
Sparsity-based methods have recently been applied to abnormal event detection and have achieved impressive results. However, most such methods suffer from the curse of dimensionality; furthermore, they take no account of the relationship among coefficient vectors. In this paper, we propose a novel method called consistent sparse representation (CSR) to overcome these drawbacks. We first reconstruct each feature in the space spanned by the clustering centers of the training features, so as to reduce the dimensionality of the features while preserving the neighboring structure. Then, a consistency regularization is added to the sparse representation model, which explicitly considers the relationship of the coefficient vectors. Our method is verified on two challenging databases (the UCSD Ped1 database and the Subway database), and the experimental results demonstrate that it obtains better results than previous methods in abnormal event detection.
-
Kun CHEN, Yuehua LI, Xingjian XU, Yuanjiang LI
Article type: LETTER
Subject area: Pattern Recognition
2015 Volume E98.D Issue 10 Pages 1871-1874
Published: October 01, 2015
Released on J-STAGE: October 01, 2015
In this paper, we first propose ten new discrimination features for SAR images in the moving and stationary target acquisition and recognition (MSTAR) database. The Ada_MCBoost algorithm is then proposed to classify multiclass SAR targets. In the new algorithm, we introduce a novel large-margin loss function to design a multiclass classifier directly, instead of decomposing the multiclass problem into a set of binary ones through the error-correcting output codes (ECOC) method. Finally, experiments show that the new features are helpful for SAR target discrimination, and the new algorithm achieves better recognition performance than three comparison methods.
-
Changhong CHEN, Hehe DOU, Zongliang GAN
Article type: LETTER
Subject area: Pattern Recognition
2015 Volume E98.D Issue 10 Pages 1875-1878
Published: October 01, 2015
Released on J-STAGE: October 01, 2015
Collective activity recognition plays an important role in high-level video analysis. Most current feature representations rely on contextual information extracted from the behaviour of nearby people, so every person must be detected and his or her pose estimated. After feature extraction, hierarchical graphical models are typically employed to model the spatio-temporal patterns of individuals and their interactions, which cannot avoid complex preprocessing and inference operations. To overcome these drawbacks, we present a new feature representation method, called the attribute-based spatio-temporal (AST) descriptor. First, two types of information, spatio-temporal (ST) features and attribute features, are exploited. Attribute-based features are manually specified, and an attribute classifier is trained to model the relationship between the ST features and attribute-based features, according to which the attribute features are updated. Then, the ST features, the attribute features, and the relationships between the attributes are combined to form the AST descriptor. An objective classifier can be specified on the AST descriptor, and the weight parameters of the classifier are used for recognition. Experiments on standard collective activity benchmarks show the effectiveness of the proposed descriptor.
-
Young-Seok CHOI
Article type: LETTER
Subject area: Speech and Hearing
2015 Volume E98.D Issue 10 Pages 1879-1883
Published: October 01, 2015
Released on J-STAGE: October 01, 2015
In this letter, a new subband adaptive filter (SAF) that is robust against impulsive noise in system identification is presented. To address the vulnerability to impulsive noise of adaptive filters based on the L2-norm optimization criterion, the robust SAF (R-SAF) is derived from the L1-norm optimization criterion with a constraint on the energy of the weight update. Minimizing the L1-norm of the a posteriori error in each subband, with a constraint on minimum disturbance, gives rise to robustness against impulsive noise as well as capable convergence performance. Simulation results clearly demonstrate that the proposed R-SAF outperforms classical adaptive filtering algorithms when impulsive noise as well as background noise is present.
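Read as a constrained optimization, the criterion stated above can be paraphrased as follows, where $\mathbf{w}_k$ denotes the adaptive filter weights, $e_{i,k}^{\mathrm{post}}$ the a posteriori error of subband $i$ of $N$, and $\delta$ the bound on the weight-update energy (a paraphrase of the abstract's wording, not the letter's exact formulation):

$$\min_{\mathbf{w}_{k+1}} \sum_{i=1}^{N} \bigl| e_{i,k}^{\mathrm{post}} \bigr| \quad \text{subject to} \quad \bigl\| \mathbf{w}_{k+1} - \mathbf{w}_{k} \bigr\|_2^2 \le \delta .$$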
-
Kugjin YUN, Won-sik CHEONG, Kyuheon KIM
Article type: LETTER
Subject area: Image Processing and Video Processing
2015 Volume E98.D Issue 10 Pages 1884-1887
Published: October 01, 2015
Released on J-STAGE: October 01, 2015
Recently, standards organizations such as ATSC, DVB, and TTA have been working to design various immersive media broadcasting services, such as hybrid network-based 3D video, UHD video, and multiple views. This letter focuses on providing a new synchronization and transport system target decoder (T-STD) model for 3D video distribution over heterogeneous transmission protocols in a hybrid network environment, where a broadcasting network and a broadband (IP) network are combined. The experimental results show that the proposed technology can successfully serve as a core element for synchronization and the T-STD model in hybrid network-based 3D broadcasting, and that it can also serve as a base technique for various IP-associated hybrid broadcasting services.
-
Su-hyun LEE, Yong-jin JEONG
Article type: LETTER
Subject area: Image Processing and Video Processing
2015 Volume E98.D Issue 10 Pages 1888-1891
Published: October 01, 2015
Released on J-STAGE: October 01, 2015
An integral image stores at each position the sum of the input image pixel values above and to the left of it. It is mainly used to speed up box filter operations, such as Haar-like features. However, the large memory capacity required for integral image data can be an obstacle in an embedded environment with limited hardware. A previous study [5] reduced the size of the integral image memory using a 2×2 block structure with additional calculations. This can easily be extended to an n×n block structure for further reduction, but it requires even more additional calculations. In this paper, we propose a new block structure for the integral image that modifies the location of the reference pixel in the block. It requires far fewer additional calculations, by reducing the number of memory accesses, while keeping the same amount of memory as the original block structure.
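For context, the sketch below shows the standard integral image and the four-lookup box sum it accelerates. The letter's contribution concerns how these values are stored (the block structure and reference-pixel location), which is not reproduced here.

```python
import numpy as np

def integral_image(img):
    """ii[r, c] = sum of img[0..r, 0..c]."""
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img over the inclusive box [r0..r1] x [c0..c1] in O(1)."""
    s = ii[r1, c1]
    if r0 > 0:
        s -= ii[r0 - 1, c1]
    if c0 > 0:
        s -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        s += ii[r0 - 1, c0 - 1]
    return s
```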
-
Yuta KANUKI, Naoya OHTA
Article type: LETTER
Subject area: Image Recognition, Computer Vision
2015 Volume E98.D Issue 10 Pages 1892-1895
Published: October 01, 2015
Released on J-STAGE: October 01, 2015
Recently, cameras have been mounted on cars to assist drivers. These cameras often have severe radial distortion because of their wide view angle, and it is sometimes necessary to compensate for it in a fully automatic way in the field. We previously proposed such a method, which uses the entropy of the histogram of oriented gradients (HOG) to evaluate the goodness of the compensation. Its performance was satisfactory, but the computational burden was too heavy for drive assistance devices. In this report, we discuss how to speed up the algorithm and obtain a new, lightweight algorithm feasible for such devices. We also show more comprehensive performance evaluation results than those in the previous reports.
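A hedged sketch of the evaluation loop implied above: for each candidate radial-distortion coefficient, undistort the image, build a gradient-orientation histogram, and score it by entropy. Treating the minimum entropy as the best compensation is our reading of "goodness"; the report's exact HOG construction, parameter search, and speed-up are not shown, and the undistort function is assumed to be supplied by the caller.

```python
import numpy as np

def orientation_entropy(gray):
    """Entropy of the magnitude-weighted gradient-orientation histogram."""
    gy, gx = np.gradient(gray.astype(float))
    ang = np.arctan2(gy, gx)
    hist, _ = np.histogram(ang, bins=36, weights=np.hypot(gx, gy))
    p = hist / max(hist.sum(), 1e-12)
    return -np.sum(p[p > 0] * np.log(p[p > 0]))

def best_distortion(gray, undistort, candidates):
    """undistort(gray, kappa) -> image; returns the entropy-minimizing kappa."""
    return min(candidates, key=lambda k: orientation_entropy(undistort(gray, k)))
```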