-
Shin-ichi MINATO
Article type: INVITED SURVEY PAPER
Subject area: Fundamentals of Information Systems
2013 Volume E96.D Issue 7 Pages 1419-1429
Published: July 01, 2013
Discrete structures are foundational material for computer science and mathematics; they are related to set theory, symbolic logic, inductive proof, graph theory, combinatorics, probability theory, etc. Many problems solved by computers can be decomposed into discrete structures using simple primitive algebraic operations. It is very important to represent discrete structures compactly and to efficiently execute tasks such as equivalency/validity checking, analysis of models, and optimization. Recently, BDDs (Binary Decision Diagrams) and ZDDs (Zero-suppressed BDDs) have attracted a great deal of attention, because they efficiently represent and manipulate large-scale combinational logic data, which are the basic discrete structures in various fields of application. Although a quarter of a century has passed since Bryant's first idea, there are still many interesting and exciting research topics related to BDDs and ZDDs. BDD/ZDD techniques are based on in-memory data processing and enjoy the advantage of random access memory. Recent commodity PCs are equipped with gigabytes of main memory, and we can now solve large-scale problems which used to be impossible due to memory shortage. Thus, especially since 2000, the scope of BDD/ZDD methods has broadened. This survey paper describes the history of, and recent research activity pertaining to, techniques related to BDDs and ZDDs.
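As a quick illustration of the node sharing and zero-suppression that make ZDDs compact, here is a minimal toy sketch (our illustration, not code from the survey; the `ZDD` class is hypothetical, following the standard textbook reduction rules):

```python
# Toy sketch of ZDD construction: hash-consing shares identical subgraphs,
# and the zero-suppression rule deletes nodes whose 1-edge points to the
# empty family. Illustrative only; real BDD/ZDD packages add variable
# ordering and operation caches.
class ZDD:
    def __init__(self):
        self.unique = {}                    # hash-consing (unique) table

    def node(self, var, lo, hi):
        if hi == 0:                         # zero-suppression rule
            return lo
        key = (var, lo, hi)
        return self.unique.setdefault(key, key)

    def singleton(self, var):
        return self.node(var, 0, 1)         # the family {{var}}

z = ZDD()
fam = z.node('a', z.singleton('b'), 1)      # the family {{a}, {b}}
same = z.node('a', z.singleton('b'), 1)
print(fam is same)                          # True: nodes are shared
```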
-
Yufei LIN, Xuejun YANG, Xinhai XU, Xiaowei GUO
Article type: PAPER
Subject area: Computer System
2013 Volume E96.D Issue 7 Pages 1430-1442
Published: July 01, 2013
Scaling up the system size has been the common approach to achieving high performance in parallel computing. However, designing and implementing a large-scale parallel system can be very costly in terms of money and time. When building a target system, it is desirable to initially build a smaller version by using processing nodes with the same architecture as those in the target system. This allows us to achieve efficient and scalable prediction by using the smaller system to predict the performance of the target system. Such scalability prediction is critical because it enables system designers to evaluate different design alternatives so that a certain performance goal can be successfully achieved. As the de facto standard for writing parallel applications, MPI is widely used in large-scale parallel computing. By categorizing the discrete-event simulation methods for MPI programs and analyzing the characteristics of scalability prediction, we propose a novel simulation method, called virtual-actual combined execution-driven (VACED) simulation, to achieve scalable prediction for MPI programs. The basic idea is to predict the execution time of an MPI program on a target machine by running it on a smaller system, so that we can predict its communication time by virtual simulation and obtain its sequential computation time by actual execution. We introduce a model for VACED simulation as well as the design and implementation of VACED-SIM, a lightweight simulator based on fine-grained activity and event definitions. We have validated our approach on a sub-system of Tianhe-1A. Our experimental results show that VACED-SIM exhibits higher accuracy and efficiency than MPI-SIM. In particular, for a target system with 1024 cores, the relative errors of VACED-SIM are less than 10% and the slowdowns are close to 1.
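To make the virtual-actual split concrete, here is a hedged, highly simplified sketch of the idea (not the authors' simulator; the latency/bandwidth model and all names are illustrative assumptions): compute phases are timed by actual execution on the host system, while communication phases are advanced by a virtual model of the target network.

```python
# Hedged sketch of virtual-actual combined prediction: actual execution
# supplies sequential compute times, a virtual network model supplies
# communication times. The model parameters below are assumptions.
import time

def model_comm_time(nbytes, latency=2e-6, bandwidth=5e9):
    """Virtual simulation: simple latency/bandwidth model of the target."""
    return latency + nbytes / bandwidth

def run_rank(compute_phases, message_sizes):
    """Predict one rank's execution time on the target machine."""
    predicted = 0.0
    for work, nbytes in zip(compute_phases, message_sizes):
        t0 = time.perf_counter()
        work()                                  # actual execution
        predicted += time.perf_counter() - t0   # measured compute time
        predicted += model_comm_time(nbytes)    # simulated comm time
    return predicted

# Toy usage: three compute phases, each followed by a 1 MB message.
phases = [lambda: sum(i * i for i in range(100000))] * 3
print(f"predicted time: {run_rank(phases, [2**20] * 3):.4f} s")
```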
-
Yong-Jin PARK, Woo-Chan PARK, Jun-Hyun BAE, Jinhong PARK, Tack-Don HAN
Article type: PAPER
Subject area: Computer System
2013 Volume E96.D Issue 7 Pages 1443-1448
Published: July 01, 2013
In this paper, we propose an area- and speed-effective fixed-point pipelined divider that reduces the bit-width of the division unit to fit a mobile rendering processor. To decide the bit-width of the division unit, error analysis was carried out in various ways. As a result, when the original bit-width was 31 bits, the proposed method reduced the bit-width to 24 bits and reduced the area by 42%, with a maximum error of 0.00001%.
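For readers who want to reproduce the flavor of such a bit-width error analysis, here is a hedged sketch (our construction, not the paper's methodology; operand normalization to [1, 2) and random sampling are assumptions): truncate the reciprocal of a normalized operand to a given number of fractional bits and measure the worst relative error.

```python
# Hedged sketch of a bit-width error analysis: quantize 1/b to a fixed
# number of fractional bits and measure worst-case relative error over
# normalized operands. With 24 fractional bits this lands near 1e-7,
# i.e. on the order of the 0.00001% the paper reports.
import random

def quantized_reciprocal(b, frac_bits):
    """Truncate 1/b to `frac_bits` fractional bits (fixed-point)."""
    scale = 1 << frac_bits
    return int(scale / b) / scale

def max_relative_error(frac_bits, trials=200000, seed=0):
    rng = random.Random(seed)
    worst = 0.0
    for _ in range(trials):
        b = 1.0 + rng.random()                     # normalized operand in [1, 2)
        q = quantized_reciprocal(b, frac_bits)
        worst = max(worst, abs(q - 1.0 / b) * b)   # relative error
    return worst

for bits in (24, 31):
    print(f"{bits} fractional bits: worst relative error {max_relative_error(bits):.2e}")
```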
-
Xu BAI, Michitaka KAMEYAMA
Article type: PAPER
Subject area: Computer System
2013 Volume E96.D Issue 7 Pages 1449-1456
Published: July 01, 2013
A multiple-valued data transfer scheme using X-net is proposed to realize a compact bit-serial reconfigurable VLSI (BS-RVLSI). In the multiple-valued data transfer scheme using X-net, two binary data values can be transferred from two adjacent cells to one common adjacent cell simultaneously at each “X” intersection. One cell, composed of a logic block and a switch block, is connected to four adjacent cross points by four one-bit switches, so that the complexity of the switch block is reduced to 50% in comparison with the cell of a BS-RVLSI using an eight-nearest-neighbor mesh network (8-NNM). In the logic block, threshold logic circuits perform threshold operations, and their binary dual-rail voltage outputs enter a binary logic module which can be programmed to realize an arbitrary two-variable binary function or a bit-serial adder. As a result, the configuration memory count and transistor count of the proposed multiple-valued cell are reduced to 34% and 58%, respectively, in comparison with those of an equivalent CMOS cell. Moreover, its power consumption for an arbitrary two-variable binary function is reduced to 67% at 800 MHz under the same delay time.
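The superposition idea can be illustrated in software terms with the following toy sketch (ours; the actual scheme is defined at the circuit level in current-mode logic, and the weighted encoding and thresholds below are illustrative assumptions): two bits share one wire as a four-valued level, and threshold operations at the receiving cell recover both bits.

```python
# Toy sketch: two binary values superposed on one wire as a multiple-valued
# level (one bit weighted x2), recovered by a threshold operation plus a
# residue. Illustrative assumption, not the paper's circuit.
def superpose(bit_a, bit_b):
    return 2 * bit_a + bit_b          # one four-valued signal on the wire

def recover(level):
    bit_a = 1 if level >= 2 else 0    # threshold operation at level 2
    bit_b = level - 2 * bit_a         # residue gives the second bit
    return bit_a, bit_b

for a in (0, 1):
    for b in (0, 1):
        assert recover(superpose(a, b)) == (a, b)
print("all four combinations recovered")
```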
-
Eunji PAK, Sang-Hoon KIM, Jaehyuk HUH, Seungryoul MAENG
Article type: PAPER
Subject area: Computer System
2013 Volume E96.D Issue 7 Pages 1457-1466
Published: July 01, 2013
Although shared caches allow the dynamic allocation of limited cache capacity among cores, traditional LRU replacement policies often cannot prevent negative interference among cores. To address the contention problem in shared caches, cache partitioning and application scheduling techniques have been extensively studied. Partitioning explicitly determines the cache capacity for each core to maximize the overall throughput. Application scheduling by the operating system, on the other hand, groups the least interfering applications for each shared cache when multiple shared caches exist in a system. Although application scheduling can mitigate the contention problem without any extra hardware support, its effect can be limited for some severe contentions. This paper proposes a low-cost solution based on application scheduling with a simple cache insertion control. Instead of using a full hardware-based cache partitioning mechanism, the proposed technique mostly relies on application scheduling. It selectively uses LRU insertion to the shared caches, which can be added to current commercial processor designs with negligible hardware changes. For the completeness of the cache interference evaluation, this paper examines all possible mixes from a set of applications, instead of just a few selected mixes. The evaluation shows that the proposed technique can mitigate the cache contention problem effectively, coming close to ideal scheduling and partitioning.
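A minimal sketch of what LRU insertion means, as we read it (our illustration, not the paper's simulator; set size and the per-application flag are assumptions): missed lines of a polluting application are inserted at the LRU end of the recency stack instead of the MRU end, so they are evicted first unless reused.

```python
# One set of a set-associative shared cache with selective LRU insertion.
from collections import deque

class CacheSet:
    def __init__(self, ways=8):
        self.ways = ways
        self.lines = deque()              # left = LRU end, right = MRU end

    def access(self, tag, lru_insert=False):
        if tag in self.lines:             # hit: promote to MRU
            self.lines.remove(tag)
            self.lines.append(tag)
            return True
        if len(self.lines) == self.ways:  # miss: evict the LRU line
            self.lines.popleft()
        if lru_insert:
            self.lines.appendleft(tag)    # LRU insertion: evict-first slot
        else:
            self.lines.append(tag)        # normal MRU insertion
        return False

s = CacheSet(ways=4)
for tag in "abcd":
    s.access(tag)
s.access("x", lru_insert=True)   # streaming app: inserted at LRU position
s.access("e")                    # "x" is the first victim, not "b"
print(list(s.lines))
```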
-
Yong XIE, Gang ZENG, Yang CHEN, Ryo KURACHI, Hiroaki TAKADA, Renfa LI
Article type: PAPER
Subject area: Software System
2013 Volume E96.D Issue 7 Pages 1467-1477
Published: July 01, 2013
In modern automobiles, the Controller Area Network (CAN) is widely used in different electronic subsystems, which are interconnected by gateways. While a gateway is necessary to integrate different electronic subsystems, it complicates the analysis of the Worst Case Response Time (WCRT) of CAN messages, which is critical from the safety point of view. In this paper, we first analyze the challenges for WCRT analysis of messages in gateway-interconnected CANs. Then, based on an existing WCRT analysis method for a single CAN, we propose a new WCRT analysis method for non-gateway messages that uses two new definitions to analyze the interfering delay of sporadically arriving gateway messages. Furthermore, for gateway messages, we adopt a division approach in which the end-to-end WCRT analysis of a gateway message is transformed into a situation similar to that of non-gateway messages. Finally, the proposed method is extended to CANs with different bandwidths. The proposed method is proved to be safe, and experimental results demonstrate its effectiveness in comparison with a full space-searching-based simulator and through application to a real message set.
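For context, the classical single-bus response-time iteration that such analyses extend can be sketched as follows (a textbook fixed-point formulation with jitter omitted for brevity; this is not the paper's gateway-aware method):

```python
# Classical CAN WCRT iteration: response time = blocking by one
# lower-priority frame + queuing interference from higher-priority
# frames + own transmission time.
import math

def wcrt(C, T, B, bit_time):
    """C[i], T[i]: transmission time and period of frame i (sorted by
    descending priority); B[i]: worst blocking by lower-priority frames."""
    R = []
    for i in range(len(C)):
        w = B[i]
        while True:
            w_next = B[i] + sum(
                math.ceil((w + bit_time) / T[k]) * C[k] for k in range(i))
            if w_next == w:
                break
            w = w_next
            if w + C[i] > T[i]:      # exceeds period: deemed unschedulable
                break
        R.append(w + C[i])
    return R

# Toy frame set, times in microseconds (~130 us per 8-byte frame at 1 Mbps).
C = [130, 130, 130]
T = [1000, 2000, 5000]
B = [130, 130, 0]                    # lowest-priority frame is never blocked
print(wcrt(C, T, B, bit_time=1))     # worst-case response times
```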
-
Mi-Young CHOI, Chang-Joo MOON, Doo-Kwon BAIK
Article type: PAPER
Subject area: Data Engineering, Web Information Systems
2013 Volume E96.D Issue 7 Pages 1478-1488
Published: July 01, 2013
The Semantic Web uses RDF/RDFS, which enables a machine to understand web data without human interference. But most web data is not available in RDF/RDFS documents, because it is still stored in databases. It is much more favorable to use data stored in a database to build the Semantic Web. This paper proposes an enhanced relational RDF/RDFS interoperable data model (ER2iDM) and a transformation procedure from the relational data model (RDM) to RDF/RDFS based on ER2iDM. ER2iDM is a data model that plays the role of an intermediary between RDM and RDF/RDFS during the transformation procedure. The data and schema information in the database are migrated to ER2iDM according to the proposed translation procedures without loss of the meaning of entities, relationships, and data. The RDF/RDFS generation tool automatically produces an RDF/RDFS XML document from ER2iDM. Unlike existing studies, the proposed ER2iDM and transformation procedure provide detailed guidelines for transformation from RDM to RDF/RDFS; therefore, we can more efficiently build up the Semantic Web using data stored in databases.
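To indicate the general direction of such a transformation, here is a toy sketch (ours; ER2iDM's actual translation rules are far more detailed): one relational row becomes a typed resource plus one triple per non-key column, under a hypothetical namespace.

```python
# Toy RDM-to-RDF mapping: table -> class, column -> property, row ->
# resource with a primary-key-based URI. Namespace and rules are
# illustrative assumptions, not the paper's ER2iDM procedure.
BASE = "http://example.org/db/"          # hypothetical namespace

def row_to_triples(table, pk, row):
    """Yield (subject, predicate, object) triples for one relational row."""
    subject = f"{BASE}{table}/{row[pk]}"
    yield (subject, "rdf:type", f"{BASE}schema/{table}")
    for column, value in row.items():
        if column != pk:
            yield (subject, f"{BASE}schema/{table}#{column}", repr(value))

row = {"id": 7, "title": "Semantic Web", "year": 2013}
for t in row_to_triples("Book", "id", row):
    print(t)
```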
-
Hamidreza TAVAKOLI, Majid NADERI
Article type: PAPER
Subject area: Information Network
2013 Volume E96.D Issue 7 Pages 1489-1494
Published: July 01, 2013
Optimizing the lifetime of a wireless sensor network has received considerable attention in recent years. In this paper, exploiting the feasibility and simplicity of grid-based clustering and routing schemes, we investigate optimizing the lifetime of a two-dimensional wireless sensor network; how to determine the optimal grid sizes so as to prolong network lifetime thus becomes an important problem. First, we propose a model for the lifetime of a grid in the equal-grid model. We also allow nodes to transfer packets to a grid that is two or more grids away, in order to investigate the trade-off between traffic and transmission energy consumption. After developing the model for an adjustable-grid scenario, we derive the optimal grid dimensions that optimize the lifetime of the network. The results show that if radio ranges are adjusted appropriately, the network lifetime in the adjustable-grid model is prolonged compared with the best case of the equal-grid model.
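The trade-off being optimized can be illustrated numerically with a generic first-order radio model (our stand-in, not the paper's derivation): larger grids cost more transmission energy per hop, while smaller grids multiply the traffic relayed by the grids near the sink.

```python
# Generic sketch of the grid-sizing trade-off: sweep the grid side length
# and pick the one minimizing the bottleneck grid's energy per round.
# Radio constants are typical first-order-model values, assumed here.
def busiest_grid_load(L, g, elec=50e-9, amp=1.3e-15):
    grids_per_side = L / g
    total_grids = grids_per_side ** 2
    relayed_packets = total_grids            # ~1 packet per grid per round
    energy_per_tx = elec + amp * g ** 4      # d^4 path loss over range ~ g
    return relayed_packets * energy_per_tx   # J per round at the bottleneck

L = 200.0                                    # field side length (m)
best = min((busiest_grid_load(L, g), g) for g in range(10, 101, 5))
print(f"best grid side ~ {best[1]} m (bottleneck energy {best[0]:.2e} J/round)")
```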
-
Lankeshwara MUNASINGHE, Ryutaro ICHISE
Article type: PAPER
Subject area: Artificial Intelligence, Data Mining
2013 Volume E96.D Issue 7 Pages 1495-1502
Published: July 01, 2013
Link prediction in social networks, such as friendship networks and coauthorship networks, has recently attracted a great deal of attention. There have been numerous attempts to address the problem of link prediction through diverse approaches. In the present paper, we focus on predicting links in social networks using information flow via active links. The information flow heavily depends on link activeness: links become active if interactions happen frequently and recently with respect to the current time. The time stamps of the interactions or links therefore provide vital information for determining the activeness of the links. We introduce a new algorithm, referred to as T_Flow, that captures the important aspects of information flow via active links in social networks. We tested T_Flow on two social network data sets: a data set extracted from the Facebook friendship network and a coauthorship network data set extracted from ePrint archives. We compared the link prediction performance of T_Flow with that of the previous method, PropFlow. The results revealed a notable improvement in link prediction for the Facebook data and a significant improvement for the coauthorship data.
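To convey the flavor of flow-based prediction with link activeness, here is a hedged sketch in the spirit of PropFlow-style propagation with an exponential time-decay weight (our simplification; the exact T_Flow weighting is defined in the paper):

```python
# Propagate a unit of "information flow" from a source for a few steps,
# weighting each link by an activeness factor that decays with the age of
# its last interaction. Decay rate and depth are assumptions.
import math

def flow_scores(adj, source, now, decay=0.1, max_depth=3):
    """adj: {u: {v: last_interaction_time}}. Returns flow reaching nodes."""
    scores = {source: 1.0}
    frontier = {source: 1.0}
    for _ in range(max_depth):
        nxt = {}
        for u, flow in frontier.items():
            weights = {v: math.exp(-decay * (now - t))
                       for v, t in adj.get(u, {}).items()}
            total = sum(weights.values())
            for v, w in weights.items():
                share = flow * w / total if total else 0.0
                nxt[v] = nxt.get(v, 0.0) + share
                scores[v] = scores.get(v, 0.0) + share
        frontier = nxt
    return scores        # higher score = more likely future link to source

adj = {"a": {"b": 9, "c": 2}, "b": {"d": 8}, "c": {"d": 1}}
print(flow_scores(adj, "a", now=10))
```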
-
Soma SHIRAISHI, Yaokai FENG, Seiichi UCHIDA
Article type: PAPER
Subject area: Pattern Recognition
2013 Volume E96.D Issue 7 Pages 1503-1512
Published: July 01, 2013
This paper proposes a new part-based approach for skew estimation of document images. The proposed method first estimates skew angles on rather small areas, namely the local parts of characters, and subsequently determines the global skew angle by aggregating those local estimations. A local skew estimation on a part of a skewed character is performed by finding an identical part among prepared upright character images and calculating the angular difference. Specifically, a keypoint detector (e.g., SURF) is used to determine the local parts of characters, and once the parts are described as feature vectors, a nearest-neighbor search is conducted in the instance database to identify the parts. A local skew estimate is then acquired by calculating the difference between the dominant brightness-gradient angles of the parts. After the local skew estimation, the global skew angle is estimated by majority voting over those local estimations, disregarding noisy estimates. Our experiments show that the proposed method is more robust to short and sparse text lines and to non-text backgrounds in document images than conventional methods.
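The aggregation step can be sketched as follows (our illustration; the SURF-based part matching that produces the local estimates is elided, and the bin width is an assumption):

```python
# Majority voting over local skew estimates: coarse angle histogram picks
# the dominant bin, then the winning bin is averaged for a refined value;
# outlier estimates fall into minority bins and are discarded.
def global_skew(local_angles_deg, bin_width=1.0):
    bins = {}
    for a in local_angles_deg:                      # coarse voting
        bins.setdefault(round(a / bin_width), []).append(a)
    winner = max(bins.values(), key=len)            # majority bin
    return sum(winner) / len(winner)                # refined estimate

estimates = [2.1, 1.9, 2.0, 2.2, -14.0, 2.05, 37.0]  # two noisy outliers
print(f"estimated skew: {global_skew(estimates):.2f} deg")
```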
-
Wittawat JITKRITTUM, Hirotaka HACHIYA, Masashi SUGIYAMA
Article type: PAPER
Subject area: Pattern Recognition
2013 Volume E96.D Issue 7 Pages 1513-1524
Published: July 01, 2013
Feature selection is a technique to screen out less important features. Many existing supervised feature selection algorithms use redundancy and relevancy as the main criteria to select features. However, feature interaction, potentially a key characteristic in real-world problems, has not received much attention. As an attempt to take feature interaction into account, we propose l1-LSMI, an l1-regularization-based algorithm that maximizes a squared-loss variant of mutual information between selected features and outputs. Numerical results show that l1-LSMI performs well in handling redundancy, detecting non-linear dependency, and considering feature interaction.
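For reference, the squared-loss variant of mutual information in question is the standard Pearson-divergence form below; l1-LSMI maximizes an empirical estimate of it under an l1 constraint on the feature weights (the estimator details are in the paper):

```latex
\mathrm{SMI}(X, Y) \;=\; \frac{1}{2} \iint
  \left( \frac{p(\boldsymbol{x}, y)}{p(\boldsymbol{x})\, p(y)} - 1 \right)^{\!2}
  p(\boldsymbol{x})\, p(y)\, \mathrm{d}\boldsymbol{x}\, \mathrm{d}y
```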
-
Yinqiang ZHENG, Shigeki SUGIMOTO, Masatoshi OKUTOMI
Article type: PAPER
Subject area: Image Recognition, Computer Vision
2013 Volume E96.D Issue 7 Pages 1525-1535
Published: July 01, 2013
We propose an accurate and scalable solution to the perspective-n-point problem, referred to as ASPnP. Our main idea is to estimate the orientation and position parameters by directly minimizing a properly defined algebraic error. By using a novel quaternion representation of the rotation, our solution is immune to any parametrization degeneracy. To obtain the global optimum, we use the Gröbner basis technique to solve the polynomial system derived from the first-order optimality condition. The main advantages of our proposed solution lie in its accuracy and scalability. Extensive experimental results, with both synthetic and real data, demonstrate that our proposed solution has better accuracy than state-of-the-art noniterative solutions. More importantly, by exploiting vectorization operations, the computational cost of our ASPnP solution is almost constant, independent of the number of point correspondences n over the wide range from 4 to 1000. In our experiment settings, the ASPnP solution takes about 4 milliseconds, making it well suited for real-time applications with a drastically varying number of 3D-to-2D point correspondences.
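For reference, the reason a quaternion parametrization leads to a polynomial system is visible in the familiar quaternion-to-rotation map below (a standard identity; the paper's specific degeneracy-free quaternion representation is detailed therein):

```latex
% Rotation from quaternion q = (a, b, c, d): every entry of R(q) is a
% polynomial in q, so the first-order optimality conditions of an
% algebraic error form a polynomial system amenable to Groebner bases.
R(q) = \frac{1}{a^2+b^2+c^2+d^2}
\begin{pmatrix}
a^2+b^2-c^2-d^2 & 2(bc-ad) & 2(bd+ac) \\
2(bc+ad) & a^2-b^2+c^2-d^2 & 2(cd-ab) \\
2(bd-ac) & 2(cd+ab) & a^2-b^2-c^2+d^2
\end{pmatrix}
```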
-
Zezhong LI, Hideto IKEDA, Junichi FUKUMOTO
Article type: PAPER
Subject area: Natural Language Processing
2013 Volume E96.D Issue 7 Pages 1536-1543
Published: July 01, 2013
In most phrase-based statistical machine translation (SMT) systems, the translation model relies on word alignment, which serves as a constraint for the subsequent building of a phrase table. Word alignment is usually inferred by GIZA++, which implements all the IBM models and the HMM model in the framework of Expectation Maximization (EM). In this paper, we present a fully Bayesian inference for word alignment. Different from the EM approach, Bayesian inference makes use of all possible parameter values rather than estimating a single parameter value, from which we expect a more robust inference. After inferring the word alignment, current SMT systems usually train the phrase table from the Viterbi word alignment, which is prone to learning incorrect phrases due to word alignment mistakes. To overcome this drawback, a new phrase extraction method is proposed based on multiple Gibbs samples from Bayesian inference for word alignment. Empirical results show promising improvements over baselines in alignment quality as well as translation performance.
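A drastically simplified sketch of the sampling idea, using an IBM-Model-1-style collapsed Gibbs sampler with a Dirichlet prior (our stand-in for the paper's model; hyperparameters and corpus are toy assumptions), aggregating link statistics over samples rather than trusting a single Viterbi alignment:

```python
# Collapsed Gibbs sampling for Model-1-style alignment: each target word
# links to one source word; links are resampled from the posterior implied
# by a Dirichlet prior on the translation table.
import random
from collections import defaultdict

def gibbs_align(corpus, iters=200, alpha=0.01, seed=0):
    rng = random.Random(seed)
    counts = defaultdict(int)                 # (src_word, tgt_word) counts
    totals = defaultdict(int)                 # per-src_word totals
    vocab_t = {t for _, ts in corpus for t in ts}
    align = [[rng.randrange(len(src)) for _ in tgt] for src, tgt in corpus]
    for (src, tgt), a in zip(corpus, align):
        for j, i in enumerate(a):
            counts[(src[i], tgt[j])] += 1
            totals[src[i]] += 1
    link_freq = defaultdict(int)              # aggregated over samples
    for it in range(iters):
        for s, ((src, tgt), a) in enumerate(zip(corpus, align)):
            for j in range(len(tgt)):
                counts[(src[a[j]], tgt[j])] -= 1      # remove current link
                totals[src[a[j]]] -= 1
                probs = [(counts[(src[i], tgt[j])] + alpha)
                         / (totals[src[i]] + alpha * len(vocab_t))
                         for i in range(len(src))]
                r, acc, i_new = rng.random() * sum(probs), 0.0, len(src) - 1
                for i, p in enumerate(probs):
                    acc += p
                    if r < acc:
                        i_new = i
                        break
                a[j] = i_new                          # add resampled link
                counts[(src[i_new], tgt[j])] += 1
                totals[src[i_new]] += 1
                if it >= iters // 2:                  # collect after burn-in
                    link_freq[(s, i_new, j)] += 1
    return link_freq

corpus = [(["the", "house"], ["das", "haus"]),
          (["the", "book"], ["das", "buch"])]
freq = gibbs_align(corpus)
print(sorted(freq.items(), key=lambda kv: -kv[1])[:4])
```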
-
Young-Woong KO, Ho-Min JUNG, Wan-Yeon LEE, Min-Ja KIM, Chuck YOO
Article type: LETTER
Subject area: Computer System
2013 Volume E96.D Issue 7 Pages 1544-1547
Published: July 01, 2013
In this paper, we propose a stride static chunking deduplication algorithm using a hybrid approach that exploits the advantages of the static chunking and byte-shift chunking algorithms. The key contribution of our approach is to reduce the computation time and enhance deduplication performance. We assume that duplicated data blocks are generally gathered into groups; thus, if we find one duplicated data block using byte-shift, then we can find subsequent data blocks with the static chunking approach. Experimental results show that the stride static chunking algorithm gives significant benefits over the static chunking, byte-shift chunking and variable-length chunking algorithms, particularly in reducing processing time and storage space.
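A hedged sketch of the hybrid scan as we read it (block size, hash, and fallback rule are our assumptions, not the paper's exact algorithm): shift byte-by-byte until a block hits the index, then stride block-by-block while the duplicate run lasts.

```python
# Hybrid byte-shift / static-stride duplicate detection.
import hashlib

BLOCK = 4096

def stride_static_scan(data, known):
    """known: set of SHA-1 digests of blocks already in storage.
    Returns the number of bytes detected as duplicates."""
    pos, dup = 0, 0
    while pos + BLOCK <= len(data):
        if hashlib.sha1(data[pos:pos + BLOCK]).digest() in known:
            dup += BLOCK
            pos += BLOCK      # hit: duplicates cluster, use static stride
        else:
            pos += 1          # miss: byte-shift by one to realign
    return dup

stored = b"A" * BLOCK + b"unique-data" + b"B" * BLOCK
known = {hashlib.sha1(stored[i:i + BLOCK]).digest()
         for i in (0, BLOCK + len(b"unique-data"))}
new = b"prefix!" + stored          # same content shifted by 7 bytes
print(stride_static_scan(new, known), "duplicate bytes found")
```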
-
Yuanbin HAN, Shizhan CHEN, Zhiyong FENG
Article type: LETTER
Subject area: Data Engineering, Web Information Systems
2013 Volume E96.D Issue 7 Pages 1548-1551
Published: July 01, 2013
This paper presents a novel topic modeling (TM) approach for discovering meaningful topics for Web APIs, which is a potential dimensionality-reduction approach for efficient and effective classification, retrieval, organization, and management of numerous APIs. We exploit the possibility of conducting TM on multi-labeled APIs by combining a supervised TM (known as Labeled LDA) with an ontology. Experiments conducted on a real-world API data set show that the proposed method outperforms standard Labeled LDA with an average gain of 7.0% in the measured quality of the generated topics. In addition, we also evaluate the similarity matching between topics generated by our method and by standard Labeled LDA, which demonstrates the significance of incorporating the ontology.
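One plausible way ontology information can enter such a pipeline, sketched with illustrative assumptions (the parent map and the expansion rule are ours, not necessarily the paper's integration): expand each API's observed labels with their ontology ancestors before training Labeled LDA, so related labels share topics.

```python
# Toy ontology-based label expansion for a Labeled-LDA-style pipeline.
PARENT = {"SMS": "Messaging", "Email": "Messaging",
          "Messaging": "Communication", "Maps": "Location"}

def expand_labels(labels):
    expanded = set(labels)
    for lab in labels:
        while lab in PARENT:               # walk up the ontology hierarchy
            lab = PARENT[lab]
            expanded.add(lab)
    return sorted(expanded)

print(expand_labels({"SMS", "Maps"}))
# ['Communication', 'Location', 'Maps', 'Messaging', 'SMS']
```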
-
Zhangjun FAN, Daoxing GUO, Bangning ZHANG, Youyun XU
Article type: LETTER
Subject area: Information Network
2013 Volume E96.D Issue 7 Pages 1552-1556
Published: July 01, 2013
This letter investigates the outage performance of a joint transmit and receive antenna selection scheme in an amplify-and-forward two-way relaying system with channel estimation error. A closed-form approximate outage probability expression is derived, from which an asymptotic outage probability expression is then obtained to gain insight into the system's outage performance in the high signal-to-noise ratio (SNR) region. Monte Carlo simulation results are presented to verify the analytical results.
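The kind of Monte Carlo verification mentioned can be sketched for a drastically simplified two-hop AF link with max-SNR antenna-pair selection over Rayleigh fading (channel estimation error and the full two-way protocol are omitted; all parameters are illustrative):

```python
# Monte Carlo outage estimation: exponentially distributed channel powers
# (Rayleigh fading), AF end-to-end SNR, best antenna pair selected.
import random

def outage_prob(snr_db, nt=2, nr=2, gamma_th=1.0, trials=100000, seed=1):
    rng = random.Random(seed)
    snr = 10 ** (snr_db / 10)
    outages = 0
    for _ in range(trials):
        best = 0.0
        for _tx in range(nt):
            g1 = snr * rng.expovariate(1.0)        # first-hop SNR
            for _rx in range(nr):
                g2 = snr * rng.expovariate(1.0)    # second-hop SNR
                e2e = g1 * g2 / (g1 + g2 + 1.0)    # AF end-to-end SNR
                best = max(best, e2e)
        outages += best < gamma_th
    return outages / trials

for db in (5, 10, 15, 20):
    print(f"{db:2d} dB: P_out ~ {outage_prob(db):.4f}")
```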
-
Hua FAN, Quanyuan WU, Jianfeng ZHANG
Article type: LETTER
Subject area: Artificial Intelligence, Data Mining
2013 Volume E96.D Issue 7 Pages 1557-1560
Published: July 01, 2013
Despite improvements in the accuracy of RFID readers, there are still erroneous readings such as missed reads and ghost reads. In this letter, we propose two effective models, a Bayesian inference-based decision model and a path-based detection model, to increase the accuracy of RFID data cleaning in RFID-based supply chain management. In addition, a maximum entropy model is introduced for determining the sliding window size. Experimental results validate the performance of the proposed method and show that it is able to clean raw RFID data with higher accuracy.
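The role the window size plays can be seen in a generic sliding-window smoother for missed reads (our stand-in; the paper tunes the window with a maximum entropy model and adds the Bayesian and path-based checks on top):

```python
# Sliding-window cleaning of missed RFID reads: a tag is reported present
# in an epoch if it was observed at least once within the window around
# that epoch. Too small a window misses reads; too large masks departures.
def smooth_reads(observed_epochs, horizon, window):
    """observed_epochs: epochs in which the tag was actually read."""
    observed = set(observed_epochs)
    present = []
    for t in range(horizon):
        lo, hi = t - window // 2, t + window // 2
        if any(e in observed for e in range(lo, hi + 1)):
            present.append(t)
    return present

raw = [0, 1, 2, 5, 6, 9]           # epochs 3, 4, 7, 8 were missed reads
print(smooth_reads(raw, horizon=10, window=4))   # fills the short gaps
```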
-
Tae-Young KIM
Article type: LETTER
Subject area: Educational Technology
2013 Volume E96.D Issue 7 Pages 1561-1564
Published: July 01, 2013
Nowadays, many interface devices and training systems have been developed alongside recent advances in IT, but only a few training systems for developmentally disabled people have been introduced. In this paper, we present a real-time, interactional and situational training system based on augmented reality, designed to improve the cognitive capability and adaptive ability of developmentally disabled people in their daily lives. Our system is specifically based on serving food in restaurants. It allows disabled people, wearing an HMD with an attached camera, to safely train to cope with a series of situations that arise while serving customers food and drinks, and to repeat the training sessions as often as they want. After experimenting with the presented system for three months, we found that the participants actively engaged in the training and became progressively faster through repetition, resulting in improvements in their cognitive ability and their ability to deal with situations.
-
Xianhua SONG, Shen WANG, Siuming YIU, Lin JIANG, Xiamu NIU
Article type: LETTER
Subject area: Image Processing and Video Processing
2013 Volume E96.D Issue 7 Pages 1565-1568
Published: July 01, 2013
Passive-blind image forensics is a technique that judges whether an image is forged in the absence of watermarking. In image forgery, region duplication is a simple and widely used method. In this paper, we propose a novel method to detect image region duplication using the spin image, an intensity-based, rotation-invariant descriptor. The method detects region duplication exactly and is robust to geometric transformations. Furthermore, it is superior to the popular SIFT-based detection method when the copied patch comes from a smooth background. Experiments demonstrate the method's effectiveness.
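A hedged sketch of an intensity spin image (bin counts and normalization are our assumptions; the paper's matching pipeline is not reproduced): a 2D histogram over (distance from the patch center, intensity), which is rotation invariant by construction.

```python
# Intensity spin-image descriptor: histogram of (radius, intensity) pairs
# over a patch. Rotating the patch permutes pixels without changing their
# radii or intensities, so the descriptor is unchanged.
import numpy as np

def spin_image(patch, d_bins=5, i_bins=8):
    h, w = patch.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(ys - cy, xs - cx)
    hist, _, _ = np.histogram2d(
        dist.ravel(), patch.ravel(),
        bins=(d_bins, i_bins),
        range=((0, dist.max()), (0, 256)))
    return hist / hist.sum()                 # normalized descriptor

patch = np.random.default_rng(0).integers(0, 256, (16, 16)).astype(float)
rotated = np.rot90(patch)                    # 90-degree rotation
print(np.abs(spin_image(patch) - spin_image(rotated)).max())  # ~0
```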
-
Zongliang GAN
Article type: LETTER
Subject area: Image Processing and Video Processing
2013 Volume E96.D Issue 7 Pages 1569-1572
Published: July 01, 2013
In this letter, we present a fast image/video super-resolution framework using edge and nonlocal constraints. The proposed method has three steps. First, we improve the initial estimate using content-adaptive bilateral filtering to strengthen edges. Second, the high-resolution image is estimated by the classical back-projection method. Third, we use joint content-adaptive nonlocal means filtering to obtain the final result, where the self-similarity structures are obtained from the low-resolution image. Furthermore, content-adaptive filtering and a fast self-similarity search strategy effectively reduce the computational complexity. The experimental results show that the proposed method performs well with low complexity and can be used in real-time environments.
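The back-projection stage (step two) is classical and can be sketched as follows (the edge-strengthening and nonlocal-means stages are elided; the 2x2 box-average degradation model is an assumption for illustration):

```python
# Iterative back-projection: repeatedly push the low-resolution residual
# back into the high-resolution estimate until re-downsampling matches
# the observed low-resolution image.
import numpy as np

def downsample(hr, s=2):
    h, w = hr.shape
    return hr.reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def upsample(lr, s=2):
    return np.kron(lr, np.ones((s, s)))

def back_project(lr, iters=20, s=2):
    hr = upsample(lr, s)                       # initial estimate
    for _ in range(iters):
        err = lr - downsample(hr, s)           # residual in LR domain
        hr += upsample(err, s)                 # back-project the residual
    return hr

lr = np.random.rand(8, 8)
hr = back_project(lr)
print(np.abs(downsample(hr) - lr).max())       # residual shrinks toward 0
```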
-
Jin Soo SEO
Article type: LETTER
Subject area: Music Information Processing
2013 Volume E96.D Issue 7 Pages 1573-1576
Published: July 01, 2013
Music-similarity computation is an essential building block for the browsing, retrieval, and indexing of digital music archives. This paper proposes a music similarity function based on the centroid model, which divides the feature space into non-overlapping clusters for the efficient computation of the timbre distance between two songs. We place particular emphasis on the centroid deviation as a feature for music-similarity computation. Experiments show that the centroid-model representation of the auditory features is promising for music-similarity computation.
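A hedged sketch of a centroid-model distance (our simplification: the paper additionally uses a shared non-overlapping partition of the feature space and the centroid deviations): cluster each song's frame features with k-means and compare matched centroids.

```python
# Centroid-model song distance: summarize each song's frames (e.g. MFCCs)
# by k centroids, then average the nearest-centroid distances both ways.
import numpy as np

def centroids(frames, k=4, iters=25, seed=0):
    rng = np.random.default_rng(seed)
    c = frames[rng.choice(len(frames), k, replace=False)]
    for _ in range(iters):                      # plain k-means
        labels = np.argmin(((frames[:, None] - c) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                c[j] = frames[labels == j].mean(axis=0)
    return c

def song_distance(ca, cb):
    d = np.sqrt(((ca[:, None] - cb) ** 2).sum(-1))  # pairwise distances
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

a = np.random.default_rng(1).normal(0.0, 1, (500, 12))   # toy "MFCC" frames
b = np.random.default_rng(2).normal(0.5, 1, (500, 12))
print(song_distance(centroids(a), centroids(b)))
```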