IEICE Transactions on Information and Systems
Online ISSN : 1745-1361
Print ISSN : 0916-8532
Volume E92.D , Issue 12
Showing 37 of 37 articles from the selected issue
Special Section on Natural Language Processing and its Applications
  • Naomi INOUE
    2009 Volume E92.D Issue 12 Pages 2297
    Published: December 01, 2009
    Released: December 01, 2009
    JOURNALS FREE ACCESS
    Download PDF (50K)
  • Canasai KRUENGKRAI, Kiyotaka UCHIMOTO, Jun'ichi KAZAMA, Yiou WANG, Ken ...
    Type: PAPER
    Subject area: Morphological/Syntactic Analysis
    2009 Volume E92.D Issue 12 Pages 2298-2305
    Published: December 01, 2009
    Released: December 01, 2009
    JOURNALS FREE ACCESS
    In this paper, we present a discriminative word-character hybrid model for joint Chinese word segmentation and POS tagging. Our word-character hybrid model offers high performance since it can handle both known and unknown words. We describe strategies that strike a good balance between learning the characteristics of known and unknown words, and propose an error-driven policy that delivers this balance by acquiring examples of unknown words from particular errors in a training corpus. We describe an efficient framework for training our model based on the Margin Infused Relaxed Algorithm (MIRA), evaluate our approach on the Penn Chinese Treebank, and show that it achieves superior performance compared to the state-of-the-art approaches reported in the literature.
    Download PDF (338K)
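The MIRA training framework mentioned in the abstract above centers on a margin-based online update: the weight vector is changed as little as possible so that the gold analysis outscores the predicted one by the loss. A minimal single-constraint sketch (the two-feature example and function name are ours, not from the paper):

```python
def mira_update(w, feats_gold, feats_pred, loss):
    """Single-constraint MIRA: smallest change to w so that the gold
    analysis outscores the prediction by a margin of `loss`."""
    diff = [g - p for g, p in zip(feats_gold, feats_pred)]
    margin = sum(wi * di for wi, di in zip(w, diff))
    norm_sq = sum(d * d for d in diff)
    if norm_sq == 0.0:
        return w  # identical feature vectors: nothing to learn from
    # Step size (clipped at 0): margin violation divided by ||diff||^2
    tau = max(0.0, (loss - margin) / norm_sq)
    return [wi + tau * di for wi, di in zip(w, diff)]
```

After the update, the gold analysis scores exactly `loss` higher than the erroneous prediction on this toy example.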
  • Yoshihide KATO, Shigeki MATSUBARA
    Type: PAPER
    Subject area: Morphological/Syntactic Analysis
    2009 Volume E92.D Issue 12 Pages 2306-2312
    Published: December 01, 2009
    Released: December 01, 2009
    JOURNALS FREE ACCESS
    This paper describes an incremental parser based on an adjoining operation. By using this operation, we can avoid the problem of infinite local ambiguity. This paper further proposes a restricted version of the adjoining operation that preserves the lexical dependencies of partial parse trees. Our experimental results show that this restriction enhances the accuracy of incremental parsing.
    Download PDF (379K)
  • Tomohisa SANO, Shiho Hoshi NOBESAWA, Hiroyuki OKAMOTO, Hiroya SUSUKI, ...
    Type: PAPER
    Subject area: Unknown Word Processing
    2009 Volume E92.D Issue 12 Pages 2313-2320
    Published: December 01, 2009
    Released: December 01, 2009
    JOURNALS FREE ACCESS
    Toponyms and other named entities are a central issue in unknown word processing. Our purpose is to salvage unknown toponyms, not only to avoid treating them as noise but also to provide them with candidate areas to which they may belong. Most previous toponym resolution methods target disambiguation among area candidates, which arises when the same toponym names multiple places, and are mostly based on gazetteers and contexts. For documents that may contain toponyms from anywhere in the world, such as newspaper articles, toponym resolution is not just ambiguity resolution but the selection of area candidates from all the areas on Earth. We therefore propose an automatic toponym resolution method that identifies area candidates based only on surface statistics, in place of dictionary-lookup approaches. Our method combines two modules, area candidate reduction and area candidate examination using block-unit data, to obtain high accuracy without reducing the recall rate. Our empirical results showed an 85.54% precision rate, a 91.92% recall rate, and an F-measure of 0.89 on average. The method is a flexible and robust approach to toponym resolution that targets an unrestricted number of areas.
    Download PDF (374K)
  • Jakkrit TECHO, Cholwich NATTEE, Thanaruk THEERAMUNKONG
    Type: PAPER
    Subject area: Unknown Word Processing
    2009 Volume E92.D Issue 12 Pages 2321-2333
    Published: December 01, 2009
    Released: December 01, 2009
    JOURNALS FREE ACCESS
    While classification techniques can be applied to automatic unknown word recognition in a language without word boundaries, they face the problem of unbalanced datasets, where the number of positive unknown word candidates is much smaller than that of negative candidates. To solve this problem, this paper presents a corpus-based approach that introduces a group-based ranking evaluation technique into ensemble learning in order to generate a sequence of classification models that later collaborate to select the most probable unknown word from multiple candidates. Given a classification model, group-based ranking evaluation (GRE) is applied to construct a training dataset for learning the succeeding model by weighting each of its candidates according to rank and correctness, where the candidates of an unknown word are considered as one group. A number of experiments were conducted on a large Thai medical text to evaluate the performance of the proposed approach, named V-GRE, against the conventional naïve Bayes classifier and a vanilla version without ensemble learning. The proposed method achieves an accuracy of 90.93±0.50% when the first rank is selected, and 97.26±0.26% when the top ten candidates are considered; these are improvements of 8.45% and 6.79% over the conventional record-based naïve Bayes classifier and the vanilla version. A further experiment using only the best features shows 93.93±0.22% and up to 98.85±0.15% accuracy for top-1 and top-10, respectively, improvements of 3.97% and 9.78% over naïve Bayes and the vanilla version. Finally, an error analysis is given.
    Download PDF (1935K)
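The abstract above describes weighting each candidate of an unknown word by its rank and correctness within its group. The exact GRE formula is not given in the abstract, so the scheme below is purely a hypothetical illustration of the idea: a wrongly top-ranked candidate and a correct but lowly ranked candidate both receive large weights for the next training round.

```python
def gre_weights(ranked, correct):
    """Hypothetical group-based weighting over one unknown word's
    candidates, listed in rank order (best first)."""
    n = len(ranked)
    weights = []
    for rank, cand in enumerate(ranked, start=1):
        if cand == correct:
            # correct candidate: weight grows the further down it ranked
            weights.append(rank / n)
        else:
            # wrong candidate: weight grows the higher it ranked
            weights.append((n - rank + 1) / n)
    return weights
```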
  • Koichi TAKEUCHI, Hideyuki TAKAHASHI
    Type: PAPER
    Subject area: Linguistic Knowledge Acquisition
    2009 Volume E92.D Issue 12 Pages 2334-2340
    Published: December 01, 2009
    Released: December 01, 2009
    JOURNALS FREE ACCESS
    The extraction of verb synonyms is a key technology for building a verb dictionary as a language resource. This paper presents a co-clustering-based verb synonym extraction approach that increases the number of extracted meanings of polysemous verbs from a large text corpus. In clustering-based verb synonym extraction, polysemous verbs pose a problem: each polysemous verb should be categorized into different clusters depending on its meanings, so there is a high risk of failing to extract some of those meanings. Our proposed approach extracts the different meanings of polysemous verbs by recursively eliminating the extracted clusters from the initial data set. Experimental results show that the proposed approach increases the number of correct verb clusters by about 50%, with a 0.9% increase in precision and a 1.5% increase in recall over the previous approach.
    Download PDF (273K)
  • Hiroyuki SAKAI, Shigeru MASUYAMA
    Type: PAPER
    Subject area: Document Analysis
    2009 Volume E92.D Issue 12 Pages 2341-2350
    Published: December 01, 2009
    Released: December 01, 2009
    JOURNALS FREE ACCESS
    We propose a method of assigning polarity to causal information extracted from Japanese financial articles concerning the business performance of companies. Our method assigns polarity (positive or negative) to causal information in accordance with business performance, e.g., "zidousya no uriage ga koutyou (Sales of cars are good)", to which the positive polarity is assigned. Causal expressions assigned polarity by our method can be used, for example, to analyze the content of articles concerning business performance in detail. First, our method classifies articles concerning business performance into positive and negative articles. Using these, it then assigns polarity to causal information extracted from the set of articles. Although our method needs a training dataset for classifying the articles into positive and negative ones, it needs no training dataset for assigning polarity to causal information. Hence, even for causal information that does not appear in that training dataset, our method can assign polarity by using statistical information from the classified sets of articles. We evaluated our method and confirmed that it attained 74.4% precision and 50.4% recall when assigning positive polarity, and 76.8% precision and 61.5% recall when assigning negative polarity.
    Download PDF (590K)
  • Michiko YASUKAWA, Hui Tian LIM, Hidetoshi YOKOO
    Type: PAPER
    Subject area: Document Analysis
    2009 Volume E92.D Issue 12 Pages 2351-2359
    Published: December 01, 2009
    Released: December 01, 2009
    JOURNALS FREE ACCESS
    The Malay language has no conjugations or declensions, and affixes have important grammatical functions. In Malay, the same word may function as a noun, an adjective, an adverb, or a verb, depending on its position in the sentence. Although simple root words are used extensively in informal conversation, it is essential to use precise words in formal speech or written text; to make sentences clear, derivative words are used. Derivation is achieved mainly through affixes, and a root word has roughly a hundred possible derivative forms in the written language of educated Malay speakers, so the composition of Malay words can be complicated. Stemming is the process of reducing various words to their root forms in order to improve the effectiveness of text processing in information systems, and it is essential to avoid both over-stemming and under-stemming errors. Although several stemming algorithms are available for English and some other languages, they cannot overcome the difficulties of Malay word stemming. We have therefore developed a new Malay stemmer (stemming algorithm) that removes inflectional and derivational affixes. Our stemmer uses a set of affix rules and two types of dictionaries: a root-word dictionary and a derivative-word dictionary. The rules are aimed at reducing under-stemming errors, while the dictionaries are intended to reduce over-stemming errors. We evaluated the stemmer in text mining software, using actual web pages collected from the World Wide Web as test data. The experimental results showed that our stemmer can effectively increase the precision of the extracted Boolean expressions for text categorization.
    Download PDF (320K)
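The rule-plus-dictionary design described above can be sketched as follows. The affix rules and the tiny dictionaries here are illustrative toys, not the stemmer's actual resources: a stripped candidate is accepted only if it appears in the root-word dictionary, and derivative-dictionary entries are kept whole.

```python
# Toy resources (the real rule set and dictionaries are far larger)
PREFIXES = ["mem", "me", "ber", "di"]
SUFFIXES = ["kan", "an", "i"]
ROOTS = {"baca", "ajar", "main"}
DERIVATIVES = {"pelajaran"}  # derivative words kept whole

def stem(word):
    if word in ROOTS or word in DERIVATIVES:
        return word  # dictionary lookup prevents over-stemming
    for p in PREFIXES:
        if not word.startswith(p):
            continue
        for s in [""] + SUFFIXES:
            if s and not word.endswith(s):
                continue
            cand = word[len(p):len(word) - len(s)] if s else word[len(p):]
            if cand in ROOTS:  # accept only dictionary-confirmed roots
                return cand
    return word  # leave unmatched words intact
```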
  • Heum PARK, Hyuk-Chul KWON
    Type: PAPER
    Subject area: Document Analysis
    2009 Volume E92.D Issue 12 Pages 2360-2368
    Published: December 01, 2009
    Released: December 01, 2009
    JOURNALS FREE ACCESS
    This paper presents an extended Relief-F algorithm for nominal attribute estimation, applied to small-document classification. Relief algorithms are general and successful instance-based feature-filtering algorithms for data classification and regression. Many improved Relief algorithms have been introduced to address redundant and irrelevant noisy features and the algorithms' limitations on multiclass datasets. However, these algorithms have rarely been applied to text classification, because the numerous features in multiclass datasets lead to great time complexity. With their application to text feature filtering and classification in mind, we presented an extended Relief-F algorithm for numerical attribute estimation (E-Relief-F) in 2007, but found it had limitations and problems. In this paper, we therefore identify additional problems of Relief algorithms for text feature filtering, including the negative influence on similarity and weight computation caused by the small number of features in an instance, the absence of nearest hits and misses for some instances, and great time complexity. We then propose a new extended Relief-F algorithm for nominal attribute estimation (E-Relief-Fd) that solves these problems, and apply it to small text-document classification. In experiments, we used the algorithm to estimate feature quality for various datasets, applied it to classification, and compared its performance with existing Relief algorithms. The experimental results show that the new E-Relief-Fd algorithm outperforms previous Relief algorithms, including E-Relief-F.
    Download PDF (1094K)
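For background, the basic two-class Relief weight update that the Relief-F family builds on can be sketched as follows: a feature is rewarded when it differs on the nearest miss (other class) and penalized when it differs on the nearest hit (same class). This is a naive O(n²) version with Manhattan distance over all instances; the paper's E-Relief-Fd is considerably more elaborate.

```python
def relief(data, labels):
    """Basic two-class Relief feature weighting.
    Assumes each class has at least two instances."""
    n_feat = len(data[0])
    w = [0.0] * n_feat
    for i, x in enumerate(data):
        def dist(y):
            return sum(abs(a - b) for a, b in zip(x, y))
        hits = [y for j, y in enumerate(data)
                if j != i and labels[j] == labels[i]]
        misses = [y for j, y in enumerate(data) if labels[j] != labels[i]]
        hit, miss = min(hits, key=dist), min(misses, key=dist)
        for f in range(n_feat):
            # differ on the miss: good feature; differ on the hit: bad
            w[f] += abs(x[f] - miss[f]) - abs(x[f] - hit[f])
    return w
```

On a toy dataset where feature 0 determines the class and feature 1 is noise, feature 0 ends up with the larger weight.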
  • Katsuya MASUDA, Jun'ichi TSUJII
    Type: PAPER
    Subject area: Information Retrieval
    2009 Volume E92.D Issue 12 Pages 2369-2377
    Published: December 01, 2009
    Released: December 01, 2009
    JOURNALS FREE ACCESS
    This paper presents algorithms, based on Region Algebra, for searching text regions that match specified annotations in tag-annotated text. The original algebra and its efficient algorithms are extended to handle both nested regions and crossed regions, extensions that are necessary for text search over rich linguistic annotations. We first assign a depth number to every nested tag region to order these regions, and give efficient depth-number-based algorithms for the containment operations that can treat nested tag regions. Next, we introduce variables for the attribute values of tags into the algebra to treat annotations whose attributes point to other tag regions, and propose an efficient method of treating re-entrancy by incrementally determining the values of variables. Our algorithms have been implemented in a text search engine for MEDLINE, a large textbase of abstracts in medical science. Experiments on tag-annotated MEDLINE abstracts demonstrate the effectiveness of specifying annotations and the efficiency of our algorithms. The system is publicly accessible at http://www-tsujii.is.s.u-tokyo.ac.jp/medie/.
    Download PDF (864K)
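The depth-numbering idea for nested same-tag regions can be illustrated on a stream of open/close events for a single tag type (well-nested input assumed; the real system handles many tags, attributes, and crossed regions). Each region gets its nesting depth, so same-tag regions can be ordered and scanned by containment algorithms.

```python
def depth_number(tags):
    """Assign a nesting depth to every region of one tag type.
    `tags` is a sequence of "open"/"close" events; returns
    (start, end, depth) triples sorted by start position."""
    regions, stack = [], []
    for pos, tok in enumerate(tags):
        if tok == "open":
            stack.append(pos)
        else:  # "close": the region just ended; depth = nesting level
            start = stack.pop()
            regions.append((start, pos, len(stack) + 1))
    return sorted(regions)
```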
  • Michael PAUL, Karunesh ARORA, Eiichiro SUMITA
    Type: PAPER
    Subject area: Machine Translation
    2009 Volume E92.D Issue 12 Pages 2378-2385
    Published: December 01, 2009
    Released: December 01, 2009
    JOURNALS FREE ACCESS
    This paper proposes a method for handling out-of-vocabulary (OOV) words that cannot be translated using conventional phrase-based statistical machine translation (SMT) systems. For a given OOV word, lexical approximation techniques are utilized to identify spelling and inflectional word variants that occur in the training data. All OOV words in the source sentence are then replaced with appropriate word variants found in the training corpus, thus reducing the number of OOV words in the input. Moreover, in order to increase the coverage of such word translations, the SMT translation model is extended by adding new phrase translations for all source language words that do not have a single-word entry in the original phrase-table but only appear in the context of larger phrases. The effectiveness of the proposed methods is investigated for the translation of Hindi to English, Chinese, and Japanese.
    Download PDF (286K)
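The lexical approximation step above, replacing an OOV word with a variant observed in the training data, can be sketched with plain edit distance. The threshold and function names are our assumptions; the paper's techniques also cover inflectional variants.

```python
def edit_distance(a, b):
    """Standard Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,        # deletion
                           cur[j - 1] + 1,     # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def approximate(oov, vocab, max_dist=2):
    """Replace an OOV word with its closest known spelling variant,
    if one lies within max_dist edits; otherwise keep it as-is."""
    best = min(vocab, key=lambda w: edit_distance(oov, w))
    return best if edit_distance(oov, best) <= max_dist else oov
```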
  • Kei HASHIMOTO, Hirofumi YAMAMOTO, Hideo OKUMA, Eiichiro SUMITA, Keiich ...
    Type: PAPER
    Subject area: Machine Translation
    2009 Volume E92.D Issue 12 Pages 2386-2393
    Published: December 01, 2009
    Released: December 01, 2009
    JOURNALS FREE ACCESS
    This paper presents a reordering model using a source-side parse-tree for phrase-based statistical machine translation. The proposed model is an extension of IST-ITG (imposing source tree on inversion transduction grammar) constraints. In the proposed method, the target-side word order is obtained by rotating nodes of the source-side parse-tree. We modeled the node rotation, monotone or swap, using word alignments based on a training parallel corpus and source-side parse-trees. The model efficiently suppresses erroneous target word orderings, especially global orderings. Furthermore, the proposed method conducts a probabilistic evaluation of target word reorderings. In English-to-Japanese and English-to-Chinese translation experiments, the proposed method resulted in a 0.49-point improvement (29.31 to 29.80) and a 0.33-point improvement (18.60 to 18.93) in word BLEU-4 compared with IST-ITG constraints, respectively. This indicates the validity of the proposed reordering model.
    Download PDF (255K)
  • Pawel DYBALA, Michal PTASZYNSKI, Rafal RZEPKA, Kenji ARAKI
    Type: PAPER
    Subject area: Spoken Dialogue System
    2009 Volume E92.D Issue 12 Pages 2394-2401
    Published: December 01, 2009
    Released: December 01, 2009
    JOURNALS FREE ACCESS
    The topic of Human Computer Interaction (HCI) has been attracting growing scientific attention of late. A very important but often undervalued area in this field is human engagement, that is, a person's commitment to take part in and continue the interaction. In this paper we describe work on a humor-equipped casual conversational system (chatterbot) and investigate the effect of humor on users' engagement in the conversation. A group of users conversed with two systems, one with and one without humor, and the chat logs were then analyzed with an emotive analysis system to check user reactions and attitudes toward each system. The results were projected onto Russell's two-dimensional emotiveness space to evaluate the positivity/negativity and activation/deactivation of these emotions. This analysis indicated that emotions elicited by the humor-equipped system were more positively active, and less negatively active, than those elicited by the system without humor. The implications of these results and their relation to user engagement in the conversation are discussed. We also propose a distinction between positive and negative engagement.
    Download PDF (854K)
Regular Section
  • Kei OHNISHI, Kaori YOSHIDA, Yuji OIE
    Type: PAPER
    Subject area: Computation and Computational Models
    2009 Volume E92.D Issue 12 Pages 2402-2415
    Published: December 01, 2009
    Released: December 01, 2009
    JOURNALS FREE ACCESS
    We present the concept of folksonomical peer-to-peer (P2P) file sharing networks that allow participants (peers) to freely assign structured search tags to files. These networks resemble folksonomies in the present Web in that users assign search tags to information distributed over a network. As a concrete example, we consider an unstructured P2P network that uses vectorized Kansei (human sensitivity) information as structured search tags for file search. Vectorized Kansei information, assigned by each participant to each of their files, indicates what the participant feels about the file; a search query has the same form and indicates what the participant wants to feel about the files they will eventually obtain. File search over this information uses the Kansei query-forwarding method, which probabilistically propagates a search query to peers that are likely to hold more files whose search tags are similar to the query, with similarity measured by the dot product of the query and the tags. Simulation experiments examine whether the Kansei query-forwarding method can provide equal search performance for all peers in a network in which only the Kansei information and the file-collection tendencies differ among peers. The results show that the Kansei query-forwarding method and a random-walk-based query-forwarding method, used for comparison, work effectively in different situations and are complementary. Furthermore, the Kansei query-forwarding method is shown, through simulations, to be superior or equal to the random-walk-based method in terms of search speed.
    Download PDF (852K)
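The dot-product-weighted, probabilistic query forwarding described above can be sketched as follows. The peer and vector representations are our own simplification: each neighbor is summarized by one tag vector, and a neighbor is drawn with probability proportional to its (non-negative) dot product with the query.

```python
import random

def forward(query, neighbors):
    """Pick a neighbor peer to forward the query to, weighting each by
    the dot product between the query vector and the peer's tag vector."""
    scores = [max(0.0, sum(q * t for q, t in zip(query, tags)))
              for tags in neighbors.values()]
    total = sum(scores)
    if total == 0.0:
        return random.choice(list(neighbors))  # fall back to a random walk
    r = random.uniform(0.0, total)  # roulette-wheel selection
    for peer, s in zip(neighbors, scores):
        r -= s
        if r <= 0.0:
            return peer
    return list(neighbors)[-1]
```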
  • Yasuyuki KAWAMURA, Akira MATSUBAYASHI
    Type: PAPER
    Subject area: Algorithm Theory
    2009 Volume E92.D Issue 12 Pages 2416-2421
    Published: December 01, 2009
    Released: December 01, 2009
    JOURNALS FREE ACCESS
    We study the online file allocation problem on ring networks. In this paper, we present a 7-competitive randomized algorithm against an adaptive online adversary on uniform cactus graphs. The algorithm is deterministic if the file size is 1. Moreover, we obtain lower bounds of 4.25 and 3.833 for a deterministic algorithm and a randomized algorithm against an adaptive online adversary, respectively, on ring networks.
    Download PDF (185K)
  • Youhui ZHANG, Weimin ZHENG
    Type: PAPER
    Subject area: System Programs
    2009 Volume E92.D Issue 12 Pages 2422-2429
    Published: December 01, 2009
    Released: December 01, 2009
    JOURNALS FREE ACCESS
    At work, at home, and in some public places, a desktop PC is usually available nowadays. It is therefore important for users to be able to play various videos smoothly on different PCs, but the diversity of codec types complicates the situation. Although some mainstream media players try to download a needed codec automatically, this may fail for average users because installing the codec usually requires administrator privileges, which a user who does not own the PC may lack. We believe an ideal solution should work without user intervention and require no special privileges. This paper proposes such a user-friendly, program-transparent solution for Windows-based media players. It runs the media player in a user-mode virtualization environment and downloads the needed codec on the fly. Through API (Application Programming Interface) interception, some resource-accessing API calls from the player are redirected to the downloaded codec resources. From the viewpoint of the player, the necessary codec then exists locally and the video can be handled smoothly, although neither the system registry nor system folders are modified in the process. Besides convenience, the principle of least privilege is maintained and the host system is left clean. This paper analyzes the technical issues in full and presents a prototype that works with DirectShow-compatible players. Performance tests show that the overhead is negligible. Moreover, our solution conforms to the Software-as-a-Service (SaaS) model, which is very promising in the Internet era.
    Download PDF (340K)
  • Unil YUN
    Type: PAPER
    Subject area: Data Mining
    2009 Volume E92.D Issue 12 Pages 2430-2438
    Published: December 01, 2009
    Released: December 01, 2009
    JOURNALS FREE ACCESS
    Mining correlated patterns in large transaction databases is one of the essential tasks in data mining, since a huge number of patterns are usually mined but it is hard to find those with real correlation, and the data analysis required depends on the particular application. Previous mining approaches find patterns with weak affinity even under a high minimum support. In this paper, we suggest weighted support affinity pattern mining, in which a new measure, weighted support confidence (ws-confidence), is developed to identify correlated patterns with weighted support affinity. To efficiently prune weak-affinity patterns, we prove that the ws-confidence measure satisfies the anti-monotone and cross weighted support properties, which can be applied to eliminate patterns with dissimilar weighted support levels. Based on these two properties, we develop a weighted support affinity pattern mining algorithm (WSP). Weighted support affinity patterns can be useful for answering comparative analysis queries, such as finding itemsets containing items whose total selling expense levels are similar within an acceptable error range of α%, or detecting item lists with similar levels of total profit. In addition, our performance study shows that WSP is efficient and scalable for mining weighted support affinity patterns.
    Download PDF (816K)
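The abstract does not give the exact form of ws-confidence; assuming it is analogous to all-confidence, i.e. the ratio of the smallest to the largest single-item weighted support in a pattern, the pruning idea can be sketched as:

```python
def ws_confidence(pattern, wsupport):
    """Assumed form (our assumption, analogous to all-confidence):
    ratio of the smallest to the largest single-item weighted support.
    A value near 1 means the items occur at similar weighted levels."""
    vals = [wsupport[item] for item in pattern]
    return min(vals) / max(vals)

def prune(patterns, wsupport, min_conf):
    """Keep only patterns whose ws-confidence meets the threshold.
    Anti-monotonicity would let a real miner stop extending a pattern
    as soon as its ws-confidence falls below min_conf."""
    return [p for p in patterns if ws_confidence(p, wsupport) >= min_conf]
```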
  • Masato KITAKAMI, Teruki KAWASAKI
    Type: PAPER
    Subject area: Dependable Computing
    2009 Volume E92.D Issue 12 Pages 2439-2444
    Published: December 01, 2009
    Released: December 01, 2009
    JOURNALS FREE ACCESS
    Compressed data, which are used widely in computer and communication systems, are very sensitive to errors, so several error recovery methods for data compression have been proposed. An error recovery method exists for LZ77 coding, one of the most popular universal data compression methods, but it cannot be applied to LZSS coding, a variation of LZ77 coding, because LZSS compressed data consist of variable-length codewords. This paper proposes a burst error recovery method for LZSS coding. The error-sensitive part of the compressed data is encoded by unary coding and moved to the beginning of the compressed data, followed by an inserted synchronization sequence. By searching for the synchronization sequence, errors in the error-sensitive part are detected, and they are recovered by using a copy of that part. Computer simulation shows that the compression ratio of the proposed method is almost equal to that of LZ77 coding and that the method has very high error recovery capability.
    Download PDF (495K)
  • Daisuke IWAI, Kosuke SATO
    Type: PAPER
    Subject area: Human-computer Interaction
    2009 Volume E92.D Issue 12 Pages 2445-2453
    Published: December 01, 2009
    Released: December 01, 2009
    JOURNALS FREE ACCESS
    This paper presents an intuitive interaction technique for data exchange between multiple co-located devices. In the proposed system, CrossOverlayDesktop, desktop graphics of the devices are graphically overlaid with each other (i.e., alpha-blended). Users can exchange file data by the usual drag-and-drop manipulation through an overlaid area. The overlaid area is determined by the physical six degrees of freedom (6-DOF) correlation of the devices and thus changes according to users' direct movements of the devices. Because familiar operations such as drag-and-drop can be applied to file exchange between multiple devices, seamless, consistent, and thus intuitive multi-user collaboration is realized. Furthermore, dynamic overlay of desktop graphics allows users to intuitively establish communication, identify connected devices, and perform access control. For access control of the data, users can protect their own data by simply dragging them out of the overlaid area, because only the overlaid area becomes a public space. Several proof-of-concept experiments and evaluations were conducted. Results show the effectiveness of the proposed interaction technique.
    Download PDF (3359K)
  • Yizhong XIN, Xiangshi REN
    Type: PAPER
    Subject area: Human-computer Interaction
    2009 Volume E92.D Issue 12 Pages 2454-2461
    Published: December 01, 2009
    Released: December 01, 2009
    JOURNALS FREE ACCESS
    Adjustment of a certain parameter in the course of performing a trajectory task such as drawing or gesturing is a common manipulation in pen-based interaction. Since pen tip information is confined to x-y coordinate data, such concurrent parameter adjustment is not easily accomplished in devices using only a pen tip. This paper comparatively investigates the performance of inherent pen input modalities (Pressure, Tilt, Azimuth, and Rolling) and Key Pressing with the non-preferred hand used for precision parameter manipulation during pen sliding actions. We elaborate our experimental design framework here and conduct experimentation to evaluate the effect of the five techniques. Results show that Pressure enabled the fastest performance along with the lowest error rate, while Azimuth exhibited the worst performance. Tilt showed slightly faster performance and achieved a lower error rate than Rolling. However, Rolling achieved the most significant learning effect on Selection Time and was favored over Tilt in subjective evaluations. Our experimental results afford a general understanding of the performance of inherent pen input modalities in the course of a trajectory task in HCI (human computer interaction).
    Download PDF (926K)
  • Hongcui WANG, Tatsuya KAWAHARA
    Type: PAPER
    Subject area: Speech and Hearing
    2009 Volume E92.D Issue 12 Pages 2462-2468
    Published: December 01, 2009
    Released: December 01, 2009
    JOURNALS FREE ACCESS
    CALL (Computer Assisted Language Learning) systems that use ASR (Automatic Speech Recognition) for second language learning have received increasing interest recently. However, achieving high speech recognition performance, including accurate detection of erroneous utterances by non-native speakers, remains a challenge. Conventionally, possible error patterns, based on linguistic knowledge, are added to the lexicon and language model, or to the ASR grammar network, but this approach easily falls into a trade-off between error coverage and increased perplexity. To solve the problem, we propose a decision-tree-based method that learns to predict the errors made by non-native speakers. An experimental evaluation with a number of foreign students learning Japanese shows that, given a target sentence, the proposed method can generate an ASR grammar network that achieves both better error coverage and smaller perplexity, resulting in significant improvement in ASR accuracy.
    Download PDF (482K)
  • Andrew FINCH, Eiichiro SUMITA, Satoshi NAKAMURA
    Type: PAPER
    Subject area: Speech and Hearing
    2009 Volume E92.D Issue 12 Pages 2469-2477
    Published: December 01, 2009
    Released: December 01, 2009
    JOURNALS FREE ACCESS
    This paper presents a technique for class-dependent decoding for statistical machine translation (SMT). The approach differs from previous methods of class-dependent translation in that the class-dependent forms of all models are integrated directly into the decoding process. We employ probabilistic mixture weights between models that can change dynamically on a sentence-by-sentence basis depending on the characteristics of the source sentence. The effectiveness of this approach is demonstrated by evaluating its performance on travel conversation data. We used this approach to tackle the translation of questions and declarative sentences using class-dependent models. To achieve this, our system integrated two sets of models specifically built to deal with sentences that fall into one of two classes of dialog sentence: questions and declarations, with a third set of models built with all of the data to handle the general case. The technique was thoroughly evaluated on data from 16 language pairs using 6 machine translation evaluation metrics. We found the results were corpus-dependent, but in most cases our system was able to improve translation performance, and for some languages the improvements were substantial.
    Download PDF (921K)
  • Abdellah KADDAI, Mohammed HALIMI
    Type: PAPER
    Subject area: Speech and Hearing
    2009 Volume E92.D Issue 12 Pages 2478-2486
    Published: December 01, 2009
    Released: December 01, 2009
    JOURNALS FREE ACCESS
    In this paper, an algebraic trellis vector quantization (ATVQ) scheme is presented that introduces algebraic codebooks into the trellis coded vector quantization (TCVQ) structure. The proposed approach achieves low encoding complexity and minimal memory storage requirements, exploiting the advantages of both TCVQ and algebraic codebooks: delayed decision, codebook widening, low computational complexity, and the absence of codebook storage. This novel vector quantization scheme is used to encode wideband speech line spectral frequency (LSF) parameters. Experimental results on wideband speech have shown that ATVQ yields the same performance as traditional split vector quantization (SVQ) and TCVQ in terms of spectral distortion (SD). It achieves transparent quality at 47 bits/frame with a considerable reduction in memory storage and computational complexity compared to SVQ and TCVQ.
    Download PDF (180K)
  • Jinhua WANG, De XU, Bing LI
    Type: PAPER
    Subject area: Image Processing and Video Processing
    2009 Volume E92.D Issue 12 Pages 2487-2497
    Published: December 01, 2009
    Released: December 01, 2009
    JOURNALS FREE ACCESS
    In this paper, we present a Double-Anchoring Based Tone Mapping (DABTM) algorithm for displaying high dynamic range (HDR) images. First, two anchoring values are obtained using the double-anchoring theory. Second, we use these two values to formulate the compressing operator, which performs tone mapping directly. A new method based on accelerated K-means is proposed for decomposing HDR images into groups (frameworks). Most importantly, a group of piecewise-overlapping linear functions is introduced to define the degree to which each pixel belongs to its enclosing framework. Experiments show that our algorithm achieves dynamic range compression while preserving fine details and avoiding common artifacts such as gradient reversals, halos, and loss of local contrast.
    Download PDF (588K)
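For context, a minimal global tone-mapping operator can be sketched as a baseline. This is a generic Reinhard-style photographic operator, not the DABTM algorithm: the double-anchoring values, K-means frameworks, and piecewise-overlapping functions of the paper are not reproduced here, and the key value is an assumed constant.

```python
import numpy as np

def global_tone_map(luminance, key=0.18):
    """Generic global photographic-style tone mapping: scale scene luminance
    to a mid-grey key, then compress into the displayable range [0, 1)."""
    eps = 1e-6
    log_avg = np.exp(np.mean(np.log(luminance + eps)))  # log-average luminance
    scaled = key * luminance / log_avg                  # anchor to mid-grey
    return scaled / (1.0 + scaled)                      # compress to [0, 1)

hdr = np.array([0.01, 1.0, 100.0, 10000.0])  # four orders of magnitude
ldr = global_tone_map(hdr)
```

A global operator like this compresses the range but cannot preserve local contrast, which is what framework-based methods such as DABTM are designed to address.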
  • Shangce GAO, Rong-Long WANG, Hiroki TAMURA, Zheng TANG
    Type: PAPER
    Subject area: Biocybernetics, Neurocomputing
    2009 Volume E92.D Issue 12 Pages 2498-2507
    Published: December 01, 2009
    Released: December 01, 2009
    JOURNALS FREE ACCESS
    This paper presents a new multi-layered artificial immune system architecture, using ideas drawn from the biological immune system, for solving combinatorial optimization problems. The proposed methodology is composed of five layers. After the problem is expressed in a suitable representation in the first layer, the search space and the features of the problem are estimated and extracted in the second and third layers, respectively. Taking advantage of the minimized search space from estimation and the heuristic information from extraction, the antibodies (solutions) are evolved in the fourth layer, and finally the fittest antibody is exported. To demonstrate the efficiency of the proposed system, it is applied to the graph planarization problem. Simulation results on several benchmark instances show that the proposed algorithm outperforms traditional algorithms.
    Download PDF (477K)
  • Youngkyu PARK, Jaeseok PARK, Taewoo HAN, Sungho KANG
    Type: LETTER
    Subject area: Computer Components
    2009 Volume E92.D Issue 12 Pages 2508-2511
    Published: December 01, 2009
    Released: December 01, 2009
    JOURNALS FREE ACCESS
    This paper proposes a micro-code based Programmable Memory BIST (PMBIST) architecture that can support various kinds of test algorithms. The proposed Non-linear PMBIST (NPMBIST) guarantees high flexibility and high fault coverage using not only March algorithms but also non-linear algorithms such as Walking and Galloping. The NPMBIST has optimized hardware overhead, since algorithms can be implemented with the minimum number of bits through optimized instructions. Finally, various complex algorithms can be run thanks to its multi-loop support.
    Download PDF (632K)
  • Hyo J. LEE, In Hwan DOH, Eunsam KIM, Sam H. NOH
    Type: LETTER
    Subject area: System Programs
    2009 Volume E92.D Issue 12 Pages 2512-2515
    Published: December 01, 2009
    Released: December 01, 2009
    JOURNALS FREE ACCESS
    Conventional kernel prefetching schemes have focused on taking advantage of sequential access patterns, which are easy to detect. However, on random and even sequential references, they may cause performance degradation due to inaccurate pattern prediction and overshooting. To address these problems, we propose a novel approach that works with existing kernel prefetching schemes, called Reference Pattern based kernel Prefetching (RPP). RPP reduces the negative effects of existing schemes by identifying one more reference pattern, i.e., looping, in addition to the random and sequential patterns, and by delaying the start of prefetching until a pattern is confirmed to be sequential or looping.
    Download PDF (268K)
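The pattern-confirmation idea can be sketched in a few lines. The classification rules below are invented for illustration; the letter does not specify its detection heuristics.

```python
def classify_pattern(blocks):
    """Classify a window of block accesses as sequential, looping, or random.
    In the RPP spirit, prefetching would start only once a window is
    confirmed sequential or looping; these exact rules are a toy guess."""
    n = len(blocks)
    # Sequential: each access is the immediate successor of the previous one.
    if all(b - a == 1 for a, b in zip(blocks, blocks[1:])):
        return "sequential"
    # Looping: the window consists of at least two full repeats of a period.
    for period in range(1, n // 2 + 1):
        if all(blocks[i] == blocks[i % period] for i in range(n)):
            return "looping"
    return "random"
```

Deferring prefetch until a window classifies as sequential or looping avoids issuing useless prefetches for windows that classify as random.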
  • Min Young JUNG, Sung Han PARK
    Type: LETTER
    Subject area: Database
    2009 Volume E92.D Issue 12 Pages 2516-2519
    Published: December 01, 2009
    Released: December 01, 2009
    JOURNALS FREE ACCESS
    We propose a video ontology system to overcome the semantic gap in video retrieval. The proposed video ontology aims at bridging the gap between the semantic nature of user queries and raw video content. In addition, semantic retrieval returns not only the concept of the topic keyword but also its sub-concepts through semantic query extension, so our method is likely to achieve high recall. Experiments comparing the proposed scene-based indexing with keyframe-based indexing demonstrate better results on several kinds of videos.
    Download PDF (405K)
  • Taek-Young YOUN, Young-Ho PARK, Jongin LIM
    Type: LETTER
    Subject area: Application Information Security
    2009 Volume E92.D Issue 12 Pages 2520-2523
    Published: December 01, 2009
    Released: December 01, 2009
    JOURNALS FREE ACCESS
    Trapdoor commitment schemes are widely used for adding valuable properties to ordinary signatures or for enhancing the security of weakly secure signatures. In this letter, we propose a trapdoor commitment scheme based on the RSA function and prove its security under the hardness of integer factoring. Our scheme is very efficient in computing a commitment; in particular, it requires only three multiplications to evaluate a commitment when e=3 is used as the public exponent of the RSA function. Moreover, our scheme has two useful properties, key exposure freeness and strong trapdoor opening, which are useful for designing secure chameleon signature schemes and for converting a weakly secure signature into a strongly secure one, respectively.
    Download PDF (90K)
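The letter's exact construction is not given in the abstract. The toy sketch below shows the general shape of an RSA-function-based trapdoor commitment, in which knowledge of the e-th-root trapdoor d lets its holder open a commitment to a different message. Every parameter here is tiny and purely illustrative; real schemes use a modulus of 2048 bits or more.

```python
# Toy parameters -- illustrative only, far too small to be secure.
p, q = 11, 17
N = p * q                       # public RSA modulus
e = 3                           # public exponent, gcd(e, phi) = 1
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)             # trapdoor: extracts e-th roots mod N
h = 5                           # public base, coprime to N

def commit(m, r):
    """Commit to message m with randomness r: C = h^m * r^e mod N."""
    return (pow(h, m, N) * pow(r, e, N)) % N

def trapdoor_open(C, m_new):
    """With the trapdoor d, compute r' that opens C to any message m_new."""
    h_inv_m = pow(pow(h, m_new, N), -1, N)
    return pow((C * h_inv_m) % N, d, N)   # e-th root of C * h^(-m_new)

C = commit(7, 2)
r2 = trapdoor_open(C, 4)        # open the same commitment to m = 4
```

Without d, finding such an r' would require extracting e-th roots mod N, which is believed hard without the factorization of N; this is the binding/trapdoor trade-off the letter's properties are built on.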
  • Haechul CHOI, Ho Chul SHIN, Si-Woong LEE, Yun-Ho KO
    Type: LETTER
    Subject area: Pattern Recognition
    2009 Volume E92.D Issue 12 Pages 2524-2526
    Published: December 01, 2009
    Released: December 01, 2009
    JOURNALS FREE ACCESS
    In this paper, we propose a method for extracting an object boundary from a low-quality image such as an infrared image. To take full advantage of a training set, the overall shape is modeled by incorporating statistical characteristics of moments into the point distribution model (PDM). Furthermore, a differential equation for the moment of the overall shape is derived for shape refinement, which leads to accurate and rapid deformation of a boundary template toward the real object boundary. Simulation results show that the proposed method outperforms conventional boundary extraction methods.
    Download PDF (155K)
  • Sheng LI, Yong-fang YAO, Xiao-yuan JING, Heng CHANG, Shi-qiang GAO, Da ...
    Type: LETTER
    Subject area: Pattern Recognition
    2009 Volume E92.D Issue 12 Pages 2527-2530
    Published: December 01, 2009
    Released: December 01, 2009
    JOURNALS FREE ACCESS
    This letter proposes a nonlinear DCT discriminant feature extraction approach for face recognition. The proposed approach first selects appropriate DCT frequency bands according to their levels of nonlinear discrimination. It then extracts nonlinear discriminant features from the selected bands using a new kernel discriminant method, the improved kernel discriminative common vector (KDCV) method. Experiments on the public FERET database show that the new approach is more effective than several related methods.
    Download PDF (240K)
  • Jingjing ZHONG, Siwei LUO, Jiao WANG
    Type: LETTER
    Subject area: Pattern Recognition
    2009 Volume E92.D Issue 12 Pages 2531-2534
    Published: December 01, 2009
    Released: December 01, 2009
    JOURNALS FREE ACCESS
    The key problem in object-based attention is the definition of objects, while contour grouping methods aim at detecting the complete boundaries of objects in images. In this paper, we develop a new contour grouping method with several distinguishing characteristics. First, it is guided by global saliency information: by detecting multiple boundaries in a hierarchical way, we effectively construct an object-based attention model. Second, it is optimized by a grouping cost determined both by Gestalt cues of directed tangents and by region saliency. Third, it gives a new definition of Gestalt cues for tangents that includes image information as well as tangent information, improving the robustness of our model against noise. Experimental results are presented, with comparisons against another grouping model and a space-based attention model.
    Download PDF (668K)
  • Jae-Seong LEE, Chang-Joon LEE, Young-Cheol PARK, Dae-Hee YOUN
    Type: LETTER
    Subject area: Speech and Hearing
    2009 Volume E92.D Issue 12 Pages 2535-2539
    Published: December 01, 2009
    Released: December 01, 2009
    JOURNALS FREE ACCESS
    This paper proposes an efficient FFT algorithm for the Psycho-Acoustic Model (PAM) of MPEG-4 AAC. The proposed algorithm synthesizes FFT coefficients from MDCT and MDST coefficients through circular convolution. The complexity of computing the MDCT and MDST coefficients is approximately half that of the original FFT. We also design a new PAM based on the proposed FFT algorithm, which has 15% lower computational complexity than the original PAM without degradation of sound quality. Subjective as well as objective test results are presented to confirm the efficiency of the proposed FFT computation algorithm and the PAM.
    Download PDF (256K)
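The paper's coefficient-synthesis procedure is not reproduced in the abstract, but the basic relationship it builds on can be sketched: an MDCT/MDST pair computed with a shared phase term combines into one complex (MCLT-like) coefficient per bin, since cos(t) - j sin(t) = exp(-jt). The frame size and transform definitions below are the standard textbook ones, used only for illustration.

```python
import numpy as np

M = 8                                     # half frame length (toy size)
x = np.random.RandomState(0).randn(2 * M) # one windowless input frame
n = np.arange(2 * M)
k = np.arange(M)[:, None]
# Shared phase term of the standard MDCT/MDST definitions.
phase = np.pi / M * (n + 0.5 + M / 2) * (k + 0.5)
mdct = (x * np.cos(phase)).sum(axis=1)
mdst = (x * np.sin(phase)).sum(axis=1)
# Complex combination: one MCLT-like coefficient per frequency bin.
mclt = (x * np.exp(-1j * phase)).sum(axis=1)
```

Since the AAC encoder computes the MDCT anyway for coding, reusing it (plus an MDST) to approximate the FFT magnitudes needed by the PAM is the source of the complexity saving.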
  • Young Han LEE, Deok Su KIM, Hong Kook KIM, Jongmo SUNG, Mi Suk LEE, Hy ...
    Type: LETTER
    Subject area: Speech and Hearing
    2009 Volume E92.D Issue 12 Pages 2540-2544
    Published: December 01, 2009
    Released: December 01, 2009
    JOURNALS FREE ACCESS
    In this paper, we propose a bandwidth-scalable stereo audio coding method based on a layered structure. The proposed method encodes super-wideband (SWB) stereo signals and can decode either wideband (WB) or SWB stereo signals, depending on network congestion. Its performance is compared with that of a conventional stereo coding method that separately decodes WB or SWB stereo signals, in terms of subjective quality, algorithmic delay, and computational complexity. Experimental results show that when stereo audio signals sampled at 32 kHz are compressed to 64 kbit/s, the proposed method provides significantly better audio quality, a 64-sample shorter algorithmic delay, and comparable computational complexity.
    Download PDF (576K)
  • Tae-Kyoung KIM, Jeong-Hwan BOO, Sang Ju PARK
    Type: LETTER
    Subject area: Image Processing and Video Processing
    2009 Volume E92.D Issue 12 Pages 2545-2547
    Published: December 01, 2009
    Released: December 01, 2009
    JOURNALS FREE ACCESS
    Scalable video coding (SVC) was standardized as an extension of H.264/AVC by the JVT (Joint Video Team) in Nov. 2007. The biggest feature of SVC is multi-layered coding, where two or more video sequences are compressed into a single bit-stream. This letter proposes a fast block mode decision algorithm for the spatial enhancement layer of SVC. The proposed algorithm achieves an early decision by limiting the number of candidate modes for blocks with a certain characteristic, called same motion vector blocks (SMVBs). Our method reduces encoding time by up to 66.17%, while incurring a negligible PSNR degradation of at most 0.16 dB and a bit-rate increase of at most 0.64%.
    Download PDF (161K)
  • Hyunjin YOO, Kang Y. KIM, Kwan H. LEE
    Type: LETTER
    Subject area: Image Recognition, Computer Vision
    2009 Volume E92.D Issue 12 Pages 2548-2552
    Published: December 01, 2009
    Released: December 01, 2009
    JOURNALS FREE ACCESS
    High Dynamic Range Imaging (HDRI) refers to a set of techniques that can represent the full dynamic range of real-world luminance. Hence, an HDR image can be used to measure the reflectance properties of materials. In order to reproduce the original color of materials from an HDR image, characterization of the HDR imaging process is needed. In this study, we propose a new HDRI characterization method that operates at the HDR level under a known illumination condition. The proposed method normalizes the HDR image using an HDR image of the light source and balances the tone using a reference color chart. We demonstrate that our method outperforms the previous LDR-level method in terms of average color difference and BRDF rendering results, giving a much better reproduction of the original color of a given material.
    Download PDF (1551K)
  • Gumwon HONG, Jeong-Hoon LEE, Young-In SONG, Do-Gil LEE, Hae-Chang RIM
    Type: LETTER
    Subject area: Natural Language Processing
    2009 Volume E92.D Issue 12 Pages 2553-2556
    Published: December 01, 2009
    Released: December 01, 2009
    JOURNALS FREE ACCESS
    This paper presents a new approach to word spacing problems that mines reliable words from the Web and uses them as additional resources. Conventional approaches to automatic word spacing use noise-free data to train the parameters of word spacing models. However, the insufficiency and irrelevance of training examples are always the main bottleneck in automatic word spacing. To mitigate the data-sparseness problem, this paper proposes an algorithm for discovering reliable words on the Web to expand the vocabulary, and a model that utilizes these words as additional resources. The proposed approach is very simple and practical to adapt to new domains. Experimental results show that the proposed approach achieves better performance than conventional word spacing approaches.
    Download PDF (79K)
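As a toy illustration of how an expanded vocabulary helps word spacing, the sketch below segments an unspaced string by greedy longest match. The vocabulary, the greedy strategy, and the example string are all invented; the paper's model is statistical, not a dictionary lookup.

```python
def space_text(unspaced, vocabulary):
    """Greedy longest-match segmentation of an unspaced string using a
    vocabulary, e.g. one expanded with words mined from the Web."""
    words, i = [], 0
    while i < len(unspaced):
        # Try the longest vocabulary word starting at position i.
        for j in range(len(unspaced), i, -1):
            if unspaced[i:j] in vocabulary:
                words.append(unspaced[i:j])
                i = j
                break
        else:
            words.append(unspaced[i])  # unknown character: emit as-is
            i += 1
    return " ".join(words)

vocab = {"word", "spacing", "is", "hard"}
result = space_text("wordspacingishard", vocab)
```

A richer vocabulary directly improves segmentation coverage here, which is the intuition behind mining reliable words from the Web to fight data sparseness.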