IEICE Transactions on Information and Systems
Online ISSN : 1745-1361
Print ISSN : 0916-8532
Volume E93.D , Issue 6
Showing 1-37 articles out of 37 articles from the selected issue
Special Section on Info-Plosion
  • Masaru KITSUREGAWA
    2010 Volume E93.D Issue 6 Pages 1329
    Published: June 01, 2010
    Released: June 01, 2010
    JOURNALS FREE ACCESS
    Download PDF (55K)
  • Lei LI, Bin FU, Christos FALOUTSOS
    Type: INVITED PAPER
    2010 Volume E93.D Issue 6 Pages 1330-1342
    Published: June 01, 2010
    Released: June 01, 2010
    JOURNALS FREE ACCESS
    Quad-core CPUs have become a common desktop configuration in today's offices. The increasing number of processors on a single chip opens new opportunities for parallel computing. Our goal is to make use of multi-core as well as multi-processor architectures to speed up large-scale data mining algorithms. In this paper, we present a general parallel learning framework, Cut-And-Stitch, for training hidden Markov chain models. In particular, we propose two model-specific variants, CAS-LDS for learning linear dynamical systems (LDS) and CAS-HMM for learning hidden Markov models (HMM). Our main contribution is a novel method of handling the data dependencies arising from the chain structure of hidden variables, so as to parallelize the EM-based parameter learning algorithm. We implement CAS-LDS and CAS-HMM using OpenMP on two supercomputers and a quad-core commercial desktop. The experimental results show that parallel algorithms using Cut-And-Stitch achieve comparable accuracy and almost linear speedups over the traditional serial version.
    Download PDF (772K)
  • Ting WANG, Ling LIU
    Type: INVITED PAPER
    2010 Volume E93.D Issue 6 Pages 1343-1351
    Published: June 01, 2010
    Released: June 01, 2010
    JOURNALS FREE ACCESS
    Consider a client who intends to perform a massive computing task comprising a number of sub-tasks, while both storage and computation are outsourced to a third-party service provider. How can the client ensure the integrity and completeness of the computation result? Meanwhile, how can the assurance mechanism avoid creating disincentives, e.g., excessive communication cost, for any service provider or client to participate in such a scheme? We detail this problem and present a general model of execution assurance for massive computing tasks. A series of key features distinguishes our work from existing ones: a) we consider a context wherein both storage and computation are provided by untrusted third parties and the client has no data possession; b) we propose a simple yet effective assurance model based on a novel integration of the machineries of data authentication and computational private information retrieval (cPIR); c) we conduct an analytical study of the inherent trade-offs among verification accuracy and the computation, storage, and communication costs.
    Download PDF (332K)
  • Matthias RAMBOW, Florian ROHRMÜLLER, Omiros KOURAKOS, Drazen BR&S ...
    Type: INVITED PAPER
    2010 Volume E93.D Issue 6 Pages 1352-1360
    Published: June 01, 2010
    Released: June 01, 2010
    JOURNALS FREE ACCESS
    Robotic systems operating in the real world have to cope with unforeseen events by making appropriate decisions based on noisy or partial knowledge. In this respect, highly functional robots are equipped with many sensors and actuators and run multiple processing modules in parallel. The resulting complexity increases even further in cooperative multi-robot systems, since mechanisms for joint operation are needed. In this paper, a complete and modular framework that handles this complexity in multi-robot systems is presented. It provides efficient exchange of generated data as well as a generic scheme for task execution and robot coordination.
    Download PDF (1228K)
  • Ryohei SASANO, Daisuke KAWAHARA, Sadao KUROHASHI
    Type: PAPER
    Subject area: Natural Language Processing
    2010 Volume E93.D Issue 6 Pages 1361-1368
    Published: June 01, 2010
    Released: June 01, 2010
    JOURNALS FREE ACCESS
    This paper reports the effect of corpus size on case frame acquisition for predicate-argument structure analysis in Japanese. For this study, we collect a Japanese corpus consisting of up to 100 billion words, and construct case frames from corpora of six different sizes. Then, we apply these case frames to syntactic and case structure analysis, and zero anaphora resolution, in order to investigate the relationship between the corpus size for case frame acquisition and the performance of predicate-argument structure analysis. We obtained better analyses by using case frames constructed from larger corpora; the performance was not saturated even with a corpus size of 100 billion words.
    Download PDF (274K)
  • Atsushi FUJII, Seiji TAKEGATA
    Type: PAPER
    Subject area: Natural Language Processing
    2010 Volume E93.D Issue 6 Pages 1369-1377
    Published: June 01, 2010
    Released: June 01, 2010
    JOURNALS FREE ACCESS
    Reflecting the rapid growth of information technology, the configuration of software applications such as word processors and spreadsheets is both sophisticated and complicated. It is often difficult for users to identify relevant functions in the online manual for a target application. In this paper, we propose a method for question answering that finds functions related to the user's request. To enhance our method, we addressed two “mismatch” problems. The first problem is associated with a mismatch in vocabulary, where the same concept is represented by different words in the manual and in the user's question. The second problem is associated with a mismatch in function. Although the user may have a hypothetical function for a purpose in mind, this purpose can sometimes be accomplished by other functions. To resolve these mismatch problems, we extract terms related to software functions from the Web, so that the user's question can be matched to the relevant function with high accuracy. We demonstrate the effectiveness of our method experimentally.
    Download PDF (311K)
  • Nobuyuki SHIMIZU, Masashi SUGIYAMA, Hiroshi NAKAGAWA
    Type: PAPER
    Subject area: Natural Language Processing
    2010 Volume E93.D Issue 6 Pages 1378-1385
    Published: June 01, 2010
    Released: June 01, 2010
    JOURNALS FREE ACCESS
    Traditionally, popular synonym acquisition methods are based on the distributional hypothesis, and a metric such as the Jaccard coefficient is used to evaluate the similarity between the contexts of words to obtain synonyms for a query. On the other hand, when one tries to compile and clean a thesaurus, one often already has a modest number of synonym relations at hand. Could something be done with a half-built thesaurus alone? We propose the use of spectral methods and discuss their relation to other network-based algorithms in natural language processing (NLP), such as PageRank and Bootstrapping. Since compiling a thesaurus is very laborious, we believe that adding the proposed method to the toolkit of thesaurus constructors would significantly ease the pain in accomplishing this task.
    Download PDF (259K)
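The distributional-hypothesis baseline mentioned in the abstract above can be sketched in a few lines: rank synonym candidates by the Jaccard coefficient of their context sets. This is an illustrative sketch only; the words and context features below are invented, and the paper's spectral method itself is not shown.

```python
def jaccard(context_a, context_b):
    """Jaccard coefficient between two words' context sets:
    |A intersect B| / |A union B|."""
    a, b = set(context_a), set(context_b)
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def synonym_candidates(query, contexts, top=5):
    """Rank words by context similarity to the query word.
    contexts: dict mapping word -> set of context features."""
    scores = [(w, jaccard(contexts[query], c))
              for w, c in contexts.items() if w != query]
    return sorted(scores, key=lambda x: -x[1])[:top]

# Toy example: "automobile" shares more context features with "car"
# than "banana" does, so it ranks first as a synonym candidate.
contexts = {
    "car": {"drive", "road", "fast"},
    "automobile": {"drive", "road", "engine"},
    "banana": {"eat", "yellow"},
}
```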
  • Xiao SUN, Degen HUANG, Fuji REN
    Type: PAPER
    Subject area: Natural Language Processing
    2010 Volume E93.D Issue 6 Pages 1386-1393
    Published: June 01, 2010
    Released: June 01, 2010
    JOURNALS FREE ACCESS
    Chinese new words and their parts-of-speech (POS) are particularly problematic in Chinese natural language processing. With the rapid development of the Internet and information technology, it is impossible to maintain a complete system dictionary for Chinese natural language processing, as new words outside the basic system dictionary are constantly being created. A latent semi-CRF model, which combines the strengths of LDCRF (Latent-Dynamic Conditional Random Field) and semi-CRF, is proposed to detect new words together with their POS synchronously, regardless of the types of the new words, from Chinese text that has not been pre-segmented. Unlike the original semi-CRF, the LDCRF is applied to generate the candidate entities for training and testing the latent semi-CRF, which accelerates training and decreases the computational cost. The complexity of the latent semi-CRF can be further adjusted by tuning the number of hidden variables in the LDCRF and the number of candidate entities from the N-best outputs of the LDCRF. A new-words-generating framework is proposed for model training and testing, under which the definitions and distributions of the new words conform to those found in real text. Specific features called “Global Fragment Information” for new word detection and POS tagging are adopted in model training and testing. The experimental results show that the proposed method is capable of detecting even low-frequency new words together with their POS tags, and the proposed model performs competitively with state-of-the-art models.
    Download PDF (223K)
  • Zhenglu YANG, Lin LI, Masaru KITSUREGAWA
    Type: PAPER
    Subject area: Information Retrieval
    2010 Volume E93.D Issue 6 Pages 1394-1402
    Published: June 01, 2010
    Released: June 01, 2010
    JOURNALS FREE ACCESS
    The skyline query is important because it is the basis of many applications, e.g., decision making and user-preference queries. Given an N-dimensional dataset D, a point p is said to dominate another point q if p is better than q in at least one dimension and equal to or better than q in the remaining dimensions. In this paper, we study a generalization of the skyline query in which users are interested in the details of the dominance relationship in a dataset, i.e., how many other points a point p dominates and which points they are. We show that the existing framework proposed in [17] cannot solve this problem efficiently. We identify the close connection between partial orders and the dominance relationship. Based on this discovery, we propose a new data structure, ParCube, which concisely represents the dominance relationship. We propose several effective strategies to construct ParCube. Extensive experiments illustrate the efficiency of our methods.
    Download PDF (737K)
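The dominance test defined in the abstract above translates directly into code. A minimal sketch, assuming larger values are better in every dimension (the abstract does not fix a direction); `skyline` is the naive baseline query, not the ParCube structure itself:

```python
def dominates(p, q):
    """Return True if point p dominates point q: p is at least as good
    as q in every dimension and strictly better in at least one."""
    assert len(p) == len(q)
    better_somewhere = False
    for a, b in zip(p, q):
        if a < b:              # worse in some dimension: cannot dominate
            return False
        if a > b:
            better_somewhere = True
    return better_somewhere

def skyline(points):
    """Naive skyline: the points not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points)]
```

The generalized problem the paper studies asks, for each point, *which* points it dominates; the quadratic scan above is exactly the inefficiency that motivates a dedicated structure.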
  • Tsubasa TAKAHASHI, Hiroyuki KITAGAWA, Keita WATANABE
    Type: PAPER
    Subject area: Information Retrieval
    2010 Volume E93.D Issue 6 Pages 1403-1413
    Published: June 01, 2010
    Released: June 01, 2010
    JOURNALS FREE ACCESS
    Social bookmarking services, which let us register and share our own bookmarks on the web, have recently been attracting attention. The services give us structured data of the form (URL, Username, Timestamp, Tag Set), and these data represent user interest in web pages. The number of bookmarks is a barometer of a web page's value. However, even if a web page has many bookmarks, its value is not guaranteed: if most of the bookmarks were posted far in the past, the page may be obsolete. In this paper, by focusing on the timestamp sequence of social bookmarkings on web pages, we model their activation levels, which represent their current value. Further, we improve our previously proposed ranking method for web search by introducing the activation level concept. Finally, through experiments, we show the effectiveness of the proposed ranking method.
    Download PDF (1611K)
  • Young-joo CHUNG, Masashi TOYODA, Masaru KITSUREGAWA
    Type: PAPER
    Subject area: Information Retrieval
    2010 Volume E93.D Issue 6 Pages 1414-1421
    Published: June 01, 2010
    Released: June 01, 2010
    JOURNALS FREE ACCESS
    In this paper, we propose a method for finding web sites whose links are hijacked by web spammers. A hijacked site is a trustworthy site that points to untrustworthy sites. To detect hijacked sites, we evaluate the trustworthiness of web sites and examine how trustworthy sites are hijacked by untrustworthy sites among their out-neighbors. The trustworthiness is evaluated based on the difference between white and spam scores calculated by two modified versions of PageRank. We define two hijacked scores that measure how likely a trustworthy site is to be hijacked, based on the distribution of trustworthiness among its out-neighbors. The performance of these hijacked scores is compared using our large-scale Japanese Web archive. The results show that better performance is obtained by the score that considers both trustworthy and untrustworthy out-neighbors than by the one that considers only untrustworthy out-neighbors.
    Download PDF (210K)
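The scoring idea in the abstract above can be pictured with a toy sketch: trustworthiness as the difference of the two modified-PageRank scores, and a hijacked score driven by how many out-neighbors are untrustworthy. The formulas below are my own illustrative stand-ins, not the paper's definitions, and the white/spam scores are assumed to be precomputed.

```python
def trust(white, spam, node):
    """Trustworthiness as the difference of the two score maps
    (stand-ins for the two modified PageRank computations)."""
    return white[node] - spam[node]

def hijacked_score(graph, white, spam, node):
    """Illustrative hijacked score: a trustworthy node pointing mostly
    at untrustworthy out-neighbors gets a high score. graph maps a
    node to its list of out-neighbors."""
    if trust(white, spam, node) <= 0:
        return 0.0          # only trustworthy sites can be "hijacked"
    outs = graph.get(node, [])
    if not outs:
        return 0.0
    bad = sum(1 for v in outs if trust(white, spam, v) < 0)
    return bad / len(outs)
```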
  • Hisashi KURASAWA, Daiji FUKAGAWA, Atsuhiro TAKASU, Jun ADACHI
    Type: PAPER
    Subject area: Multimedia Databases
    2010 Volume E93.D Issue 6 Pages 1422-1432
    Published: June 01, 2010
    Released: June 01, 2010
    JOURNALS FREE ACCESS
    When developing an index for a similarity search in metric spaces, how to divide the space for effective search pruning is a fundamental issue. We present Maximal Metric Margin Partitioning (MMMP), a partitioning scheme for similarity search indexes. MMMP divides the data based on its distribution pattern, especially for the boundaries of clusters. A partitioning boundary created by MMMP is likely to be located in a sparse area between clusters. Moreover, the partitioning boundary is at maximum distances from the two cluster edges. We also present an indexing scheme, named the MMMP-Index, which uses MMMP and pivot filtering. The MMMP-Index can prune many objects that are not relevant to a query, and it reduces the query execution cost. Our experimental results show that MMMP effectively indexes clustered data and reduces the search cost. For clustered data in a vector space, the MMMP-Index reduces the computational cost to less than two thirds that of comparable schemes.
    Download PDF (1695K)
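The pivot filtering mentioned in the abstract rests on a standard triangle-inequality bound in metric spaces: if |d(q, pivot) − d(o, pivot)| > r, then d(q, o) > r, so object o can be discarded without computing d(q, o). A minimal sketch of that pruning rule (the MMMP partitioning itself is not reproduced here):

```python
def can_prune(d_q_pivot, d_o_pivot, radius):
    """Triangle-inequality pruning: if the pivot distances of the query
    and the object differ by more than the search radius, the object
    cannot be within the radius of the query."""
    return abs(d_q_pivot - d_o_pivot) > radius

def range_search(points, pivot, dist, q, r):
    """Range query that skips the (expensive) distance computation for
    objects pruned by their pivot distance. In a real index, the
    object-to-pivot distances are precomputed and stored."""
    d_q_p = dist(q, pivot)
    results = []
    for o in points:
        if can_prune(d_q_p, dist(o, pivot), r):
            continue                 # pruned without computing dist(q, o)
        if dist(q, o) <= r:
            results.append(o)
    return results
```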
  • Fengrong LI, Yoshiharu ISHIKAWA
    Type: PAPER
    Subject area: Parallel and Distributed Databases
    2010 Volume E93.D Issue 6 Pages 1433-1446
    Published: June 01, 2010
    Released: June 01, 2010
    JOURNALS FREE ACCESS
    With the spread of high-speed networks and the development of network technologies, P2P technologies are now actively used for information exchange on the network. While information exchange in a P2P network is quite flexible, it has an important problem: lack of reliability. Since we cannot know the details of how data was obtained, it is hard to fully rely on it. To ensure the reliability of exchanged data, we have proposed a framework for traceable P2P record exchange based on database technologies. In this framework, records are exchanged among autonomous peers, and each peer stores its exchange and modification histories. The framework supports tracing queries, which ask for the details of how data was obtained. A tracing query is described in Datalog and executed as a recursive query in the P2P network. In this paper, we focus on query processing strategies for the framework. We consider two types of queries, ad hoc queries and continual queries, and present query processing strategies for their execution.
    Download PDF (825K)
  • Min Soo KIM, Jin Hyun SON, Ju Wan KIM, Myoung Ho KIM
    Type: PAPER
    Subject area: Spatial Databases
    2010 Volume E93.D Issue 6 Pages 1447-1458
    Published: June 01, 2010
    Released: June 01, 2010
    JOURNALS FREE ACCESS
    In the area of wireless sensor networks, efficient spatial query processing based on the locations of sensor nodes is required. In particular, spatial queries over two sensor networks require distributed spatial join processing between the networks. Because distributed spatial join processing incurs many wireless transmissions when accessing sensor nodes in the two networks, our goal in this paper is to reduce the wireless transmissions for the energy efficiency of sensor nodes. We propose an energy-efficient distributed spatial join algorithm for two heterogeneous sensor networks, which performs in-network spatial join processing. To optimize the in-network processing, we also propose a Grid-based Rectangle tree (GR-tree) and a grid-based approximation function. The GR-tree reduces the wireless transmissions by supporting a distributed spatial search for sensor nodes. The grid-based approximation function reduces the wireless transmissions by reducing the volume of spatial query objects that must be pushed down to sensor nodes. Finally, we compare naïve and existing approaches through extensive experiments and clarify the distinguishing features of our approach.
    Download PDF (1667K)
  • Takashi HISAMORI, Toru ARIKAWA, Gosuke OHASHI
    Type: PAPER
    Subject area: Image Retrieval
    2010 Volume E93.D Issue 6 Pages 1459-1469
    Published: June 01, 2010
    Released: June 01, 2010
    JOURNALS FREE ACCESS
    In previous studies, the retrieval accuracy of large image databases has been improved by reducing the semantic gap through combining the input sketch with relevance feedback. A further improvement in retrieval accuracy is expected from combining each stroke of the input sketch, and its order, with the relevance feedback. However, this leaves the problem that the effect of the relevance feedback depends substantially on the stroke order of the input sketch. Although it is theoretically possible to consider all possible stroke orders, that would create an enormous amount of data in practice. Consequently, the technique introduced in this paper aims to improve retrieval efficiency by using the relevance feedback effectively, by mining the sketches with consideration of the similarity in stroke order. To ascertain the effectiveness of this technique, a retrieval experiment was conducted using 20,000 images from the Corel Photo Gallery collection, and the experiment confirmed an improvement in retrieval efficiency.
    Download PDF (976K)
  • Takatsugu HIRAYAMA, Jean-Baptiste DODANE, Hiroaki KAWASHIMA, Takashi M ...
    Type: PAPER
    Subject area: Human-computer Interaction
    2010 Volume E93.D Issue 6 Pages 1470-1478
    Published: June 01, 2010
    Released: June 01, 2010
    JOURNALS FREE ACCESS
    People are inundated with enormous volumes of information and often have difficulty making the right choices from it. Interactive user support by information service systems, such as concierge services, can effectively assist such people. However, human-machine interaction still lacks naturalness and thoughtfulness despite the widespread use of intelligent systems. The system needs to estimate the user's interest to improve the interaction and support the user's choices. We propose a novel approach to estimating interest, based on the relationship between the dynamics of the user's eye movements, i.e., the endogenous control mode of saccades, and the machine's proactive presentation of visual contents. Using a specially designed presentation phase to elicit endogenous saccades, we analyzed the timing structures between the saccades and the presentation events. We defined resistance as a novel time-delay feature representing the duration a user's gaze remains fixed on the previously presented content regardless of the next event. In experimental results obtained from 10 subjects, we confirmed that resistance is a good indicator for estimating the interest of most subjects (75% success in 28 experiments on 7 subjects). This demonstrated a higher accuracy than conventional estimates of interest based on gaze duration or frequency.
    Download PDF (1069K)
  • Yuma MUNEKAWA, Fumihiko INO, Kenichi HAGIHARA
    Type: PAPER
    Subject area: Parallel and Distributed Architecture
    2010 Volume E93.D Issue 6 Pages 1479-1488
    Published: June 01, 2010
    Released: June 01, 2010
    JOURNALS FREE ACCESS
    This paper presents a fast method capable of accelerating the Smith-Waterman algorithm for biological database search on a cluster of graphics processing units (GPUs). Our method is implemented using compute unified device architecture (CUDA), which is available on NVIDIA GPUs. As compared with previous methods, our method has four major contributions. (1) The method efficiently uses on-chip shared memory to reduce the amount of data transferred between off-chip video memory and processing elements in the GPU. (2) It also reduces the number of data fetches by applying a data reuse technique to query and database sequences. (3) A pipelined method is also implemented to overlap GPU execution with database access. (4) Finally, a master/worker paradigm is employed to accelerate hundreds of database searches on a cluster system. In experiments, the peak performance on a GeForce GTX 280 card reaches 8.32 giga cell updates per second (GCUPS). We also find that our method reduces the amount of data fetches to 1/140, achieving approximately three times higher performance than a previous CUDA-based method. Our 32-node cluster version is approximately 28 times faster than a single GPU version. Furthermore, the effective performance reaches 75.6 giga instructions per second (GIPS) using 32 GeForce 8800 GTX cards.
    Download PDF (816K)
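For reference, the serial Smith-Waterman recurrence that the paper accelerates can be sketched in a few lines of Python. The scoring parameters below are arbitrary placeholders, and none of the CUDA-specific optimizations (shared memory, data reuse, pipelining) are shown:

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Serial Smith-Waterman local-alignment score. H[i][j] is the best
    score of any local alignment ending at a[i-1], b[j-1]; the 0 term
    lets alignments restart anywhere, which makes the search local."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,
                          H[i - 1][j - 1] + s,   # match / mismatch
                          H[i - 1][j] + gap,     # gap in b
                          H[i][j - 1] + gap)     # gap in a
            best = max(best, H[i][j])
    return best
```

Each cell depends on its left, upper, and upper-left neighbors, which is why GPU implementations typically process anti-diagonals in parallel.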
Regular Section
  • Shigeru NINAGAWA
    Type: PAPER
    Subject area: Fundamentals of Information Systems
    2010 Volume E93.D Issue 6 Pages 1489-1496
    Published: June 01, 2010
    Released: June 01, 2010
    JOURNALS FREE ACCESS
    There is evidence in favor of a relationship between the presence of 1/f noise and computational universality in cellular automata. To confirm the relationship, we search for two-dimensional cellular automata with a 1/f power spectrum by means of genetic algorithms. The power spectrum is calculated from the evolution of the state of the cell, starting from a random initial configuration. The fitness is estimated by the power spectrum with consideration of the spectral similarity to the 1/f spectrum. The result shows that the rule with the highest fitness over the most runs exhibits a 1/f-type spectrum, and its transition function and behavior are quite similar to those of the Game of Life, which is known to be a computationally universal cellular automaton. These results support the relationship between the presence of 1/f noise and computational universality.
    Download PDF (450K)
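Since the evolved rule is compared to the Game of Life, a minimal toroidal Life step may make the reference concrete. This is only the standard Life update; the paper's genetic algorithm and power-spectrum computation are omitted.

```python
def life_step(grid):
    """One synchronous update of Conway's Game of Life on a toroidal
    grid (list of lists of 0/1). A live cell survives with 2 or 3 live
    neighbours; a dead cell becomes alive with exactly 3."""
    h, w = len(grid), len(grid[0])
    nxt = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            n = sum(grid[(i + di) % h][(j + dj) % w]
                    for di in (-1, 0, 1) for dj in (-1, 0, 1)
                    if (di, dj) != (0, 0))
            nxt[i][j] = 1 if (n == 3 or (n == 2 and grid[i][j])) else 0
    return nxt
```

The 1/f analysis in the paper is performed on exactly this kind of state evolution, recorded cell-by-cell over time from a random start.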
  • Yong-Eun KIM, Kyung-Ju CHO, Jin-Gyun CHUNG, Xinming HUANG
    Type: PAPER
    Subject area: Fundamentals of Information Systems
    2010 Volume E93.D Issue 6 Pages 1497-1503
    Published: June 01, 2010
    Released: June 01, 2010
    JOURNALS FREE ACCESS
    This paper presents an error compensation method for fixed-width group canonic signed digit (GCSD) multipliers that receive a W-bit input and generate a W-bit product. To efficiently compensate for the truncation error, the encoded signals from the GCSD multiplier are used to generate the error compensation bias. Synopsys simulations show that the proposed method leads to up to 84% reduction in power consumption and up to 78% reduction in area compared with fixed-width modified Booth multipliers.
    Download PDF (583K)
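To make the fixed-width setting concrete: a W×W multiplier naturally produces a 2W-bit product, and a fixed-width design keeps only W output bits, introducing a truncation error that compensation biases aim to cancel. A small illustrative sketch for unsigned plain truncation (the GCSD encoding and the paper's bias generation are not modeled):

```python
def truncated_product(a, b, w):
    """W x W unsigned multiply keeping only the W most significant bits
    of the 2W-bit product (plain truncation, no compensation bias)."""
    return ((a * b) >> w) << w    # drop the low W bits

def mean_truncation_error(w):
    """Average error introduced by dropping the low W bits,
    exhaustively over all 2^W x 2^W input pairs."""
    n = 1 << w
    total = sum(a * b - truncated_product(a, b, w)
                for a in range(n) for b in range(n))
    return total / (n * n)
```

A compensation method adds an estimate of this error back before truncating; the paper derives that estimate from the multiplier's internal encoded signals.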
  • Cheng-Min LIN
    Type: PAPER
    Subject area: Software System
    2010 Volume E93.D Issue 6 Pages 1504-1511
    Published: June 01, 2010
    Released: June 01, 2010
    JOURNALS FREE ACCESS
    Controller Area Network (CAN) development began in 1983 and continues today. The forecast for annual world production in 2008 was approximately 65-67 million vehicles, with 10-15 CAN nodes per vehicle on average [1]. Although the CAN network has been successful in automobile and industrial control because it provides low cost, high reliability, and priority messages, a starvation problem exists because the network uses a fixed-priority mechanism. This paper presents a priority inversion scheme, a dynamic priority mechanism, to prevent the starvation problem. The proposed scheme uses one bit to separate all messages into two categories, with and without inverted priority. An analysis model is also constructed in this paper. From the model, a message with inverted priority has a higher priority to be processed than messages without inverted priority, so its mean waiting time is shorter. Two cases, with and without inversion, are implemented in our experiments using a probabilistic model checking tool based on an automatic formal verification technique. Numerical results demonstrate that low-priority messages with priority inversion fare better, in terms of the probability of reaching a full-queue state, than those without inversion. Moreover, our scheme is simple and efficient and can easily be implemented at the chip level.
    Download PDF (662K)
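In standard CAN arbitration, the message with the lowest identifier wins the bus, which is exactly the fixed-priority behavior that starves high-ID messages. One way to picture the one-bit scheme described above is to treat inverted-priority messages as a class that is arbitrated first. This is only an illustrative sketch of the idea, not the paper's exact protocol:

```python
def next_message(queue):
    """Pick the next CAN message to transmit from a queue of
    (can_id, inverted) pairs. Standard CAN arbitration: the lowest
    identifier wins. With the one-bit inversion scheme sketched here,
    messages flagged as inverted form a higher-priority class and are
    arbitrated among themselves first."""
    inverted = [m for m in queue if m[1]]
    pool = inverted if inverted else queue
    return min(pool, key=lambda m: m[0])
```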
  • Lung-Pin CHEN, I-Chen WU, William CHU, Jhen-You HONG, Meng-Yuan HO
    Type: PAPER
    Subject area: Data Engineering, Web Information Systems
    2010 Volume E93.D Issue 6 Pages 1512-1520
    Published: June 01, 2010
    Released: June 01, 2010
    JOURNALS FREE ACCESS
    Deploying and managing content objects efficiently is critical for building a scalable and transparent content delivery system. This paper investigates the incremental deployment problem, in which objects are delivered in a successive manner. Recent research has shown that the minimum-cost content deployment can be obtained by reducing the problem to the well-known network flow problem. In this paper, the maximum-flow algorithm for a single graph is extended to an incrementally growing graph. Based on this extension, an efficient incremental content deployment algorithm is developed.
    Download PDF (383K)
  • Junbo WANG, Zixue CHENG, Lei JING, Kaoru OTA, Mizuo KANSEN
    Type: PAPER
    Subject area: Information Network
    2010 Volume E93.D Issue 6 Pages 1521-1539
    Published: June 01, 2010
    Released: June 01, 2010
    JOURNALS FREE ACCESS
    Context-aware systems detect a user's physical and social contexts based on sensor networks and provide services that adapt to the user accordingly. Representing, detecting, and managing contexts are important issues in context-aware systems. Composition of contexts is a useful method for these tasks, since it can detect a context by automatically composing small pieces of information for service discovery. Danger-aware services are a kind of context-aware service that needs descriptions of the relations between a user and his/her surrounding objects and between users. However, when existing composition methods are applied to danger-aware services, they show the following shortcomings: (1) they provide no explicit method for representing the composition of multiple users' contexts, and (2) they have no flexible reasoning mechanism based on the similarity of contexts, so they can only provide services that exactly follow predefined context-reasoning rules. Therefore, in this paper, we propose a two-stage composition method based on context similarity to solve the above problems. The first stage is composition of the useful information to represent the context of a single user. The second stage is composition of multiple users' contexts to provide services by considering the relations between users. Finally, the danger degree of the detected context is computed using the context similarity between the detected context and a predefined context. Context is dynamically represented based on two-stage composition rules and a Situation-theory-based ontology, which combines the advantages of ontology and Situation theory. We implement the system in an indoor ubiquitous environment and evaluate it through two experiments with the support of subjects. The experimental results show that the method is effective and that the accuracy of danger detection is acceptable for a danger-aware system.
    Download PDF (1661K)
  • Fuminori MAKIKAWA, Tatsuhiro TSUCHIYA, Tohru KIKUNO
    Type: PAPER
    Subject area: Information Network
    2010 Volume E93.D Issue 6 Pages 1540-1548
    Published: June 01, 2010
    Released: June 01, 2010
    JOURNALS FREE ACCESS
    A Peer-To-Peer (P2P) application uses an overlay network, a virtual network constructed over the physical network. Traditional overlay construction methods do not take the physical locations of nodes into consideration, resulting in a large amount of redundant traffic. Some proximity-aware construction methods have been proposed to address this problem. These methods typically connect nearby nodes in the physical network. However, as the number of nodes increases, the path length of a route between two distant nodes rapidly increases. To alleviate this problem, we propose a technique that can be incorporated into existing overlay construction methods. The idea behind this technique is to employ long links that directly connect distant nodes. Through simulation experiments, we show that, using our proposed technique, networks can achieve short path lengths and low communication cost while maintaining high resiliency to failures.
    Download PDF (545K)
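The effect of long links can be illustrated on a toy proximity-style overlay: a ring where each node knows only its neighbors, plus a few links between distant nodes. The node and link counts below are arbitrary, and BFS hop count stands in for routing path length; the paper's actual construction method is not reproduced.

```python
import random
from collections import deque

def ring(n):
    """Proximity-style overlay: each node links to its ring neighbours."""
    return {v: {(v - 1) % n, (v + 1) % n} for v in range(n)}

def add_long_links(g, k, rng):
    """Add k random long-distance links between node pairs."""
    nodes = list(g)
    for _ in range(k):
        a, b = rng.sample(nodes, 2)
        g[a].add(b)
        g[b].add(a)

def path_length(g, src, dst):
    """Hop count between src and dst via breadth-first search."""
    dist = {src: 0}
    q = deque([src])
    while q:
        v = q.popleft()
        if v == dst:
            return dist[v]
        for w in g[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                q.append(w)
    return None
```

On the plain ring, opposite nodes are n/2 hops apart; a single long link between them collapses that route to one hop, which is the intuition behind the proposed technique.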
  • Tomokazu YONEDA, Akiko SHUTO, Hideyuki ICHIHARA, Tomoo INOUE, Hideo FU ...
    Type: PAPER
    Subject area: Information Network
    2010 Volume E93.D Issue 6 Pages 1549-1559
    Published: June 01, 2010
    Released: June 01, 2010
    JOURNALS FREE ACCESS
    We present a graph model and an ILP model for TAM design for transparency-based SoC testing. The proposed method is an extension of a previous work proposed by Chakrabarty with respect to the following three points: (1) constraint relaxation by considering test data flow for each core separately, (2) optimization of the cost for transparency as well as the cost for additional interconnect area simultaneously and (3) consideration of additional bypass paths. Therefore, the proposed ILP model can represent various problems including the same problem as the previous work and produce better results. Experimental results show the effectiveness and flexibility of the proposed method compared to the previous work.
    Download PDF (563K)
  • Gicheol WANG, Kang-Suk SONG, Gihwan CHO
    Type: PAPER
    Subject area: Information Network
    2010 Volume E93.D Issue 6 Pages 1560-1571
    Published: June 01, 2010
    Released: June 01, 2010
    JOURNALS FREE ACCESS
    In modern sensor networks, key management is essential for transmitting data from sensors to the sink securely. Since sensors are likely to be compromised by attackers, a key management scheme should renew the communication keys as frequently as possible. In clustered sensor networks, CHs (Cluster Heads) tend to become targets of compromise attacks because they collect data from sensors and deliver the aggregated data to the sink. However, existing key renewal schemes do not change which nodes play the CH role, and thus they are vulnerable to the compromise of CHs. Our scheme is called DIRECT (DynamIc key REnewal using Cluster head elecTion) because it realizes dynamic key renewal through secure CH elections. In the scheme, the network is divided into sectors to separate the CH election in each sector from the others. Sensors then establish pairwise keys with other sensors in their sector for intra-sector communication. In every CH election round, all sensors securely elect a CH in their sector, defeating the malicious actions of attackers. Therefore, the probability that a compromised node is elected as a CH decreases significantly. The simulation results show that our approach significantly improves data integrity, energy efficiency, and network longevity.
    Download PDF (1819K)
  • Jianmei GUO, Yinglin WANG, Jian CAO
    Type: PAPER
    Subject area: Office Information Systems, e-Business Modeling
    2010 Volume E93.D Issue 6 Pages 1572-1579
    Published: June 01, 2010
    Released: June 01, 2010
    JOURNALS FREE ACCESS
    Adaptable workflow participant assignment (WPA) is crucial to the efficiency and quality of workflow execution. This paper proposes an ontology-based approach to adaptable WPA (OWPA). OWPA introduces domain ontology to organize enterprise data and uses a well-defined OWPA rule to express an authorization constraint. By flexibly combining enterprise data, workflow data, user-input data, and built-in functions, OWPA can represent more complex authorization constraints. Through a highly usable interactive interface, OWPA allows users to define and modify OWPA rules easily without any programming work. Moreover, OWPA is bound to both the workflow modeling tool and the workflow monitor, so that it adapts to dynamic modification of workflow definitions and workflow instances. OWPA has been applied in three enterprises in China.
    Download PDF (1257K)
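The idea of an authorization-constraint rule for participant assignment can be illustrated with a minimal sketch. The paper's OWPA rules are ontology-backed and far richer; here the ontology is reduced to flat attribute dictionaries, and the rule form, attribute names, and user data are all made up for illustration.

```python
def eval_wpa_rule(rule, users, context):
    """Evaluate a toy participant-assignment rule: keep users for whom
    every (attribute, value) pair in `rule` matches either the user's
    profile or the workflow context."""
    def matches(user):
        for attr, expected in rule.items():
            value = user.get(attr, context.get(attr))
            if value != expected:
                return False
        return True
    return [u["name"] for u in users if matches(u)]

users = [
    {"name": "alice", "role": "reviewer", "dept": "finance"},
    {"name": "bob", "role": "reviewer", "dept": "sales"},
    {"name": "carol", "role": "manager", "dept": "finance"},
]
# "Assign a finance reviewer" expressed as attribute constraints.
rule = {"role": "reviewer", "dept": "finance"}
assigned = eval_wpa_rule(rule, users, context={})
```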
  • Shuoyan LIU, De XU, Songhe FENG
    Type: PAPER
    Subject area: Pattern Recognition
    2010 Volume E93.D Issue 6 Pages 1580-1588
    Published: June 01, 2010
    Released: June 01, 2010
    JOURNALS FREE ACCESS
    The Bag-of-Visual-Words representation has recently become popular for scene classification. However, learning the visual words in an unsupervised manner is problematic when patches with similar appearances correspond to distinct semantic concepts. This paper proposes a novel supervised learning framework that takes full advantage of label information to address this problem. Specifically, Gaussian Mixture Modeling (GMM) is first applied to obtain a “semantic interpretation” of patches using scene labels: each scene induces a probability density on the low-level visual feature space, and patches are represented as vectors of posterior probabilities of the scene semantic concepts. The Information Bottleneck (IB) algorithm is then introduced to cluster the patches into “visual words” in a supervised manner, from the perspective of these semantic interpretations. This operation maximizes the semantic information carried by the visual words. Once the visual words are obtained, the frequencies of the visual words appearing in a given image form a histogram, which is subsequently used in the scene categorization task with a Support Vector Machine (SVM) classifier. Experiments on a challenging dataset show that the proposed visual words perform the scene classification task better than most existing methods.
    Download PDF (448K)
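The pipeline in the abstract above (patches → scene posteriors → clustered visual words → histogram) can be sketched on toy 1-D data. Assumptions are loud here: each scene is modelled as a single Gaussian rather than a full GMM, and plain k-means stands in for the Information Bottleneck clustering, which this sketch does not implement.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy patches from two "scenes": 1-D features from scene-specific Gaussians.
mu, sigma = np.array([0.0, 3.0]), 1.0
patches = np.concatenate([rng.normal(m, sigma, 50) for m in mu])

def scene_posteriors(x, mu, sigma):
    # Represent each patch by its posterior over the scene classes
    # (the paper fits a GMM per scene; a single Gaussian per scene here).
    lik = np.exp(-0.5 * ((x[:, None] - mu[None, :]) / sigma) ** 2)
    return lik / lik.sum(axis=1, keepdims=True)

post = scene_posteriors(patches, mu, sigma)

def kmeans(X, k, iters=20):
    # Plain k-means as a simplified stand-in for IB clustering.
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

words = kmeans(post, 2)

# Bag-of-visual-words histogram for an "image" made of the first 50 patches;
# this histogram would then feed an SVM classifier.
hist = np.bincount(words[:50], minlength=2) / 50.0
```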
  • Yamato OHTANI, Tomoki TODA, Hiroshi SARUWATARI, Kiyohiro SHIKANO
    Type: PAPER
    Subject area: Speech and Hearing
    2010 Volume E93.D Issue 6 Pages 1589-1598
    Published: June 01, 2010
    Released: June 01, 2010
    JOURNALS FREE ACCESS
    In this paper, we describe a novel model training method for one-to-many eigenvoice conversion (EVC). One-to-many EVC is a technique for converting a specific source speaker's voice into an arbitrary target speaker's voice. An eigenvoice Gaussian mixture model (EV-GMM) is trained in advance using multiple parallel data sets consisting of utterance pairs of the source speaker and many pre-stored target speakers. The EV-GMM can be adapted to a new target speaker using only a few arbitrary utterances from that speaker by estimating a small number of adaptive parameters. In the adaptation process, the EV-GMM parameters that are kept fixed across different target speakers strongly affect the conversion performance of the adapted model. To improve the conversion performance of one-to-many EVC, we propose an adaptive training method for the EV-GMM in which both the fixed parameters and the adaptive parameters are optimized by maximizing the total likelihood of the EV-GMMs adapted to the individual pre-stored target speakers. We conducted objective and subjective evaluations to demonstrate the effectiveness of the proposed training method. The experimental results show that the proposed adaptive training yields significant quality improvements in the converted speech.
    Download PDF (438K)
  • Roghayeh DOOST, Abolghasem SAYADIAN, Hossein SHAMSI
    Type: PAPER
    Subject area: Speech and Hearing
    2010 Volume E93.D Issue 6 Pages 1599-1607
    Published: June 01, 2010
    Released: June 01, 2010
    JOURNALS FREE ACCESS
    In this paper, SNR estimation is performed frame by frame during speech activity. For this purpose, the fourth-order moments of the real and imaginary parts of the frequency components are extracted separately for both speech and noise. For each noisy frame, the same fourth-order moments are also estimated. Using the proposed formulas, the signal-to-noise ratio is estimated at each frequency index of the noisy frame; the formulas also predict the overall signal-to-noise ratio of each noisy frame. What distinguishes our method from conventional approaches is that it treats speech and noise identically, and it estimates negative SNRs almost as well as positive SNRs.
    Download PDF (511K)
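The paper's exact formulas are not reproduced in the abstract, but the flavor of a fourth-order-moment SNR estimate can be sketched from a standard identity: for independent zero-mean speech S and noise N, E[X⁴] = E[S⁴] + 6 E[S²]E[N²] + E[N⁴]. The code below is an illustrative estimator built on that identity alone, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(1)

def snr_from_fourth_moments(m4_x, m4_s, m4_n, m2_n):
    """Estimate SNR from fourth-order moments via
    E[X^4] = E[S^4] + 6 E[S^2] E[N^2] + E[N^4]
    for independent zero-mean speech S and noise N."""
    m2_s = (m4_x - m4_s - m4_n) / (6.0 * m2_n)  # solve for speech power
    return m2_s / m2_n

# Synthetic frame components: "speech" with power 4, noise with power 1,
# so the true SNR is 4 (about 6 dB).
s = rng.normal(0.0, 2.0, 200_000)
n = rng.normal(0.0, 1.0, 200_000)
x = s + n

est = snr_from_fourth_moments(
    np.mean(x**4), np.mean(s**4), np.mean(n**4), np.mean(n**2)
)
```

Note the identity is symmetric in how it treats the speech and noise moments, echoing the abstract's point that both are handled identically, and nothing in it breaks down when the speech power is below the noise power (negative SNR in dB).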
  • Nasharuddin ZAINAL, Toshihisa TANAKA, Yukihiko YAMASHITA
    Type: PAPER
    Subject area: Image Processing and Video Processing
    2010 Volume E93.D Issue 6 Pages 1608-1617
    Published: June 01, 2010
    Released: June 01, 2010
    JOURNALS FREE ACCESS
    We propose a moving picture coding method based on the lapped transform, together with an edge-adaptive deblocking filter that reduces blocking distortion. We apply subband coding (SBC) with the lapped transform (LT) and zero-pruning set partitioning in hierarchical trees (zpSPIHT) to encode the difference picture; effective coding with zpSPIHT is achieved by quantizing and then pruning the quantized zeros. The blocking distortion caused by block motion-compensated prediction is reduced by the edge-adaptive deblocking filter. Since the original edges can be detected precisely in the reference picture, applying the edge-adaptive deblocking filter to the predicted picture is very effective. Experimental results show that blocking distortion is visually reduced at very low bit rates and that PSNR improvements of about 1.0 dB are achieved.
    Download PDF (916K)
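The core idea of an edge-adaptive deblocking filter, smooth only where the reference picture says there is no true edge, can be sketched in one dimension. This is a single-row toy with a made-up 3-tap smoother, not the paper's filter.

```python
def deblock_row(pred_row, edge_mask, strength=0.25):
    """Edge-adaptive smoothing: average a pixel with its neighbours only
    where edge_mask (derived from the reference picture) reports no true
    edge, so real edges survive while blocking artefacts are blurred."""
    out = list(pred_row)
    for i in range(1, len(pred_row) - 1):
        if not edge_mask[i]:
            out[i] = (1 - 2 * strength) * pred_row[i] + strength * (
                pred_row[i - 1] + pred_row[i + 1]
            )
    return out

row = [10, 10, 18, 20, 20]                 # small step at a block boundary
mask = [False, False, False, True, False]  # index 3 is a detected true edge
smoothed = deblock_row(row, mask)
```

The artefact step around index 2 is softened, while the pixel flagged as a true edge (index 3) is left untouched.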
  • Kazu MISHIBA, Masaaki IKEHARA
    Type: PAPER
    Subject area: Image Processing and Video Processing
    2010 Volume E93.D Issue 6 Pages 1618-1624
    Published: June 01, 2010
    Released: June 01, 2010
    JOURNALS FREE ACCESS
    This paper proposes a novel adaptive image interpolation method using an edge-directed smoothness filter. Adaptive image interpolation methods tend to produce images of higher visual quality than traditional interpolation methods such as bicubic interpolation; however, they often suffer from high computational costs and the production of inadequate interpolated pixels. We propose a novel method to overcome these problems. Our approach is to estimate the enlarged image from the original image based on an observation model. To obtain an image with edge-directed smoothness, we constrain the estimated image to have many edge-directed smooth pixels, as measured by the edge-directed smoothness filter introduced in this paper. Additionally, we propose a simplification of our algorithm that runs with lower computational complexity and a smaller memory footprint. Simulation results show that the proposed method produces images of high visual quality and performs well in terms of both PSNR and computation time.
    Download PDF (953K)
  • Sopon PHUMEECHANYA, Charnchai PLUEMPITIWIRIYAWEJ, Saowapak THONGVIGITM ...
    Type: PAPER
    Subject area: Image Processing and Video Processing
    2010 Volume E93.D Issue 6 Pages 1625-1635
    Published: June 01, 2010
    Released: June 01, 2010
    JOURNALS FREE ACCESS
    In this paper, we propose a novel active contour method for image segmentation that uses local regional information on extendable search lines; we call it the LRES active contour. Our active contour uses the intensity values along a set of search lines that are perpendicular to the contour front. These search lines tell the contour front in which direction to move in order to find the object's boundary. Unlike in other methods, none of these search lines has a predetermined length; instead, each line grows gradually until a boundary of the object is found. We compare the performance of our LRES active contour to that of existing active contours, both edge-based and region-based. The results show that our method provides more desirable segmentation outcomes, particularly on images where other methods may fail. Not only is our method robust to noise and able to reach into deep concave shapes, it also has a large capture range and performs well in segmenting heterogeneous textured objects.
    Download PDF (1322K)
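The "extendable search line" idea above can be sketched on a 1-D intensity profile: walk outward from the contour front with no fixed length until an intensity jump marks a candidate boundary. Real search lines are perpendicular to a 2-D contour and the boundary test is richer; this collapse to one dimension and a simple jump threshold is only illustrative.

```python
def extend_search_line(profile, start, threshold):
    """Grow a search line from the contour front until an intensity jump
    (candidate object boundary) is met; return the boundary index, or
    None if no boundary exists within the image."""
    for i in range(start, len(profile) - 1):
        if abs(profile[i + 1] - profile[i]) >= threshold:
            return i + 1
    return None

# Flat background (~10) with an object edge at index 6 (jump to ~200).
profile = [10, 11, 10, 9, 10, 10, 200, 201, 199]
edge = extend_search_line(profile, 0, threshold=50)
```

Because the line keeps extending until it hits the jump, the same routine finds boundaries that sit far from the contour front, which is the mechanism behind the large capture range claimed in the abstract.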
  • Tomohiko MUKAI, Ken-ichi WAKISAKA, Shigeru KURIYAMA
    Type: PAPER
    Subject area: Computer Graphics
    2010 Volume E93.D Issue 6 Pages 1636-1643
    Published: June 01, 2010
    Released: June 01, 2010
    JOURNALS FREE ACCESS
    This paper proposes a method for retrieving human motion data with concise retrieval rules based on the spatio-temporal features of motion appearance. Our method first converts each motion clip into a clausal language that represents geometrical relations between body parts and their temporal relationships. A retrieval rule is then learned from a set of manually classified examples using inductive logic programming (ILP), which automatically discovers the essential rule in the same clausal form through a user-defined hypothesis-testing procedure. All motions are indexed in this clausal language, and the desired clips are retrieved by subsequence matching using the learned rule. Such rule-based retrieval offers reasonable performance, and the rules can be edited intuitively in the same language. Consequently, our method enables efficient and flexible search over a large dataset with a simple query language.
    Download PDF (364K)
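What "converting a motion clip into clausal facts" might look like can be sketched for a single frame. The predicate name `above`, the joint names, and the relation chosen (vertical ordering of joints) are invented for illustration; the paper's clausal language covers richer geometric and temporal relations.

```python
def pose_clauses(frame_id, joints):
    """Encode one motion frame as clausal facts describing a geometric
    relation between body parts.  `joints` maps a joint name to its
    (x, y, z) position; a fact above(a, b, t) is emitted when joint a
    is vertically above joint b at frame t."""
    clauses = []
    for a in joints:
        for b in joints:
            if a != b and joints[a][1] > joints[b][1]:
                clauses.append(f"above({a}, {b}, t{frame_id})")
    return sorted(clauses)

frame = {"hand": (0.2, 1.6, 0.0), "head": (0.0, 1.7, 0.0), "hip": (0.0, 1.0, 0.0)}
facts = pose_clauses(0, frame)
```

A sequence of such per-frame fact sets is what an ILP learner could generalize over, and a learned rule (e.g. "hand above head for several consecutive frames") would be matched against the same indexed facts at retrieval time.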
  • Jaegeuk KIM, Jinho SEOL, Seungryoul MAENG
    Type: LETTER
    Subject area: Computer System
    2010 Volume E93.D Issue 6 Pages 1644-1647
    Published: June 01, 2010
    Released: June 01, 2010
    JOURNALS FREE ACCESS
    This letter examines a buffer management issue in designing SSDs for log-structured file systems (LFSs). We implemented a novel trace-driven SSD simulator in the SystemC language and simulated several SSD architectures with an NILFS2 trace. From the results, we draw two major conclusions about buffer management: (1) the write buffer acts as a buffer rather than a cache, since all write requests in NILFS2 are sequential; and (2) although bus bandwidth is the main architectural factor for performance, 332 MHz is sufficient, and it is instead the read buffer that plays the key role in performance improvement by caching data. Accordingly, an effective way to enhance SSDs is to design efficient read buffer management policies; one example is tracking the valid data zone in NILFS2, which can significantly increase the data hit ratio of read buffers.
    Download PDF (225K)
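The letter's point that read-buffer policy drives the hit ratio can be illustrated with a minimal page cache. This sketch uses plain LRU on made-up page numbers; the valid-data-zone tracking proposed in the letter is not modelled.

```python
from collections import OrderedDict

class ReadBuffer:
    """Minimal LRU read buffer with hit/miss accounting."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()
        self.hits = self.misses = 0

    def read(self, page):
        if page in self.pages:
            self.pages.move_to_end(page)  # refresh recency on a hit
            self.hits += 1
        else:
            self.misses += 1
            self.pages[page] = True
            if len(self.pages) > self.capacity:
                self.pages.popitem(last=False)  # evict least recently used

buf = ReadBuffer(capacity=4)
for page in [1, 2, 3, 1, 2, 5, 1, 2, 3, 4]:  # arbitrary read trace
    buf.read(page)
hit_ratio = buf.hits / (buf.hits + buf.misses)
```

A smarter policy that prefers pages inside the file system's valid data zone would, per the letter, raise this hit ratio for NILFS2 workloads.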
  • Hea-Suk KIM, Yang-Sae MOON
    Type: LETTER
    Subject area: Artificial Intelligence, Data Mining
    2010 Volume E93.D Issue 6 Pages 1648-1651
    Published: June 01, 2010
    Released: June 01, 2010
    JOURNALS FREE ACCESS
    Privacy-preserving clustering (PPC for short) is important in publishing sensitive time-series data. Previous PPC solutions, however, either fail to preserve distance orders or incur privacy breaches. To solve this problem, we propose a new PPC approach that exploits the Fourier magnitudes of time series. Our magnitude-based method does not cause a privacy breach even if its techniques or related parameters are publicly revealed. Using magnitudes only, however, raises the distance order problem, so we also present magnitude selection strategies that preserve as many Euclidean distance orders as possible. Through extensive experiments, we demonstrate the superiority of our magnitude-based approach.
    Download PDF (289K)
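The magnitude-publishing idea can be sketched directly: release only Fourier magnitudes (phase is discarded, so the series cannot be reconstructed), then cluster in the magnitude space and hope distance orders survive. The number of magnitudes kept, `k` below, plays the role of a selection knob; the paper's actual selection strategies are more refined than simply taking the first k.

```python
import numpy as np

def magnitude_features(series, k):
    """Publish only the first k Fourier magnitudes of a time series;
    the phase spectrum is withheld for privacy."""
    return np.abs(np.fft.rfft(series))[:k]

rng = np.random.default_rng(2)
a = rng.normal(size=64)
b = a + rng.normal(scale=0.1, size=64)   # near neighbour of a
c = rng.normal(size=64)                  # unrelated series

fa, fb, fc = (magnitude_features(x, 16) for x in (a, b, c))

# Distances in the published magnitude space:
d_ab = float(np.linalg.norm(fa - fb))
d_ac = float(np.linalg.norm(fa - fc))
```

In this toy case the distance order is preserved (the near neighbour stays nearer in magnitude space); the paper's contribution is choosing magnitudes so that this holds for as many pairs as possible.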
  • Jong-Mo KUM, Joon-Hyuk CHANG
    Type: LETTER
    Subject area: Speech and Hearing
    2010 Volume E93.D Issue 6 Pages 1652-1655
    Published: June 01, 2010
    Released: June 01, 2010
    JOURNALS FREE ACCESS
    In this paper, we propose a novel method based on the second-order conditional maximum a posteriori (CMAP) criterion to improve the performance of global soft decision in speech enhancement. Our investigation found that the conventional global soft decision scheme has a disadvantage: its global speech absence probability (GSAP) is adjusted by a fixed parameter, which is a restrictive assumption when speech frames occur consecutively. To address this problem, we devise a method that incorporates the second-order CMAP in determining the GSAP; it clearly differs from the previous approach in that not only the current observation but also the speech activity decisions of the previous two frames are exploited. The performance of the proposed method is evaluated through a number of tests in various environments and shows better results than the previous approach.
    Download PDF (250K)
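The structural idea, a speech-absence prior conditioned on the two previous frame decisions rather than fixed, can be sketched as follows. The prior table and the Bayes form below are illustrative assumptions, not the paper's trained values or exact formulas.

```python
def gsap(likelihood_ratio, prev1, prev2, priors):
    """Global speech absence probability under a second-order CMAP
    sketch: the prior of speech absence depends on the speech-activity
    decisions of the two previous frames (prev1, prev2).
    `priors[(prev1, prev2)]` is an assumed P(absence | history) table."""
    p_absent = priors[(prev1, prev2)]
    # Bayes' rule with a speech-vs-absence likelihood ratio for the
    # current observation.
    return p_absent / (p_absent + (1.0 - p_absent) * likelihood_ratio)

# Hypothetical history-conditioned priors: speech in both previous
# frames makes absence less likely a priori.
priors = {(1, 1): 0.2, (1, 0): 0.4, (0, 1): 0.4, (0, 0): 0.8}

after_speech = gsap(2.0, 1, 1, priors)   # same evidence, speechy history
after_silence = gsap(2.0, 0, 0, priors)  # same evidence, silent history
```

With identical current evidence, the GSAP is lower after two speech frames, which is exactly the consecutive-speech behaviour a fixed prior cannot express.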
  • Yuhwai TSENG, Chauchin SU, Chien-Nan Jimmy LIU
    Type: LETTER
    Subject area: Biological Engineering
    2010 Volume E93.D Issue 6 Pages 1656-1660
    Published: June 01, 2010
    Released: June 01, 2010
    JOURNALS FREE ACCESS
    In this study, we use the deconvolution of a square test stimulus to replace a series of sinusoidal test waveforms with different frequencies to simplify the measurement of human body impedance. The average biological impedance of body parts is evaluated by constructing a frequency response of the equivalent human body system. Only two stainless-steel electrodes are employed in the measurement and evaluation.
    Download PDF (759K)