IEICE Transactions on Information and Systems
Online ISSN : 1745-1361
Print ISSN : 0916-8532
Volume E96.D, Issue 1
Displaying 1-24 of the 24 articles in the selected issue
Regular Section
  • Marcos VILLAGRA, Masaki NAKANISHI, Shigeru YAMASHITA, Yasuhiko NAKASHI ...
    Article type: PAPER
    Subject area: Fundamentals of Information Systems
    2013 Volume E96.D Issue 1 Pages 1-8
    Published: 2013/01/01
    Released: 2013/01/01
    JOURNAL FREE ACCESS
    In this paper we study quantum nondeterminism in multiparty communication. There are three (possibly) different types of nondeterminism in quantum computation: i) strong, ii) weak with classical proofs, and iii) weak with quantum proofs. Here we focus on the first one. A strong quantum nondeterministic protocol accepts a correct input with positive probability and rejects an incorrect input with probability 1. In this work we relate strong quantum nondeterministic multiparty communication complexity to the rank of the communication tensor in the Number-On-Forehead and Number-In-Hand models. In particular, by extending the definition proposed by de Wolf to nondeterministic tensor-rank (nrank), we show that for any boolean function f when there is no prior shared entanglement between the players, 1) in the Number-On-Forehead model the cost is upper-bounded by the logarithm of nrank(f); 2) in the Number-In-Hand model the cost is lower-bounded by the logarithm of nrank(f). Furthermore, we show that when the number of players is o(log log n), we have $NQP\nsubseteq BQP$ for Number-On-Forehead communication.
  • Jun GAO, Minxuan ZHANG, Zuocheng XING, Chaochao FENG
    Article type: PAPER
    Subject area: Computer System
    2013 Volume E96.D Issue 1 Pages 9-18
    Published: 2013/01/01
    Released: 2013/01/01
    JOURNAL FREE ACCESS
    This paper proposes a Reduced Explicitly Parallel Instruction Computing Processor (REPICP), an independently designed, 64-bit, general-purpose microprocessor. Based on the EPIC architecture, the REPICP overcomes the disadvantages of hardware-based superscalar and software-based Very Long Instruction Word (VLIW) designs and relies on the cooperation of compiler and hardware to enhance Instruction-Level Parallelism (ILP). In the REPICP, we propose the Optimized Lock-Step execution Model (OLSM) and an instruction control pipeline method. We also propose several innovative reduction methods to optimize the design. The REPICP is fabricated in an Artisan 0.13 µm Nominal 1P8M process with 57M transistors. Its die size is 100 mm² (10 mm × 10 mm), and it consumes only 12 W when running at 300 MHz.
  • Sung Hoon BAEK
    Article type: PAPER
    Subject area: Software System
    2013 Volume E96.D Issue 1 Pages 19-27
    Published: 2013/01/01
    Released: 2013/01/01
    JOURNAL FREE ACCESS
    Disk arrays and prefetching schemes are used to mitigate the performance gap between main memory and disks. This paper presents a new problem that arises when prefetching schemes widely used in operating systems are applied to disk arrays. The key point of the problem is that the block address space is contiguous from the viewpoint of the host but discontiguous from that of the disk array, so more disk accesses than expected are required. This paper presents two ways to resolve this problem as it arises in the Linux readahead framework. The proposed scheme prevents a readahead window from being split into multiple requests from the viewpoint of the disk array, but not from the viewpoint of the host, thereby reducing disk head movements. In addition, it outperforms the prior work by adopting an asynchronous solution, improving performance for fragmented files, eliminating the readahead size restriction, and improving disk parallelism. We implemented the proposed scheme and integrated it with Linux. Our experiments show that the solution significantly improves the original Linux readahead framework when a storage server processes multiple concurrent requests. The splitting effect is easy to see with a toy address mapping, as shown in the sketch after this abstract.
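    The following sketch is an illustrative assumption, not the paper's scheme: it stripes a host-contiguous readahead window across a small array and shows how it decomposes into per-disk requests. The stripe unit and disk count are made-up parameters.

```python
# Illustrative only: map a host-contiguous readahead window onto a simple
# striped disk array to show how one window splits into per-disk requests.
STRIPE_UNIT = 64      # blocks per stripe unit (assumed)
NUM_DISKS = 4         # disks in the array (assumed)

def split_readahead(start_block: int, length: int):
    """Return {disk_id: [block ranges]} touched by one readahead window."""
    per_disk = {}
    block, remaining = start_block, length
    while remaining > 0:
        stripe_unit = block // STRIPE_UNIT
        disk = stripe_unit % NUM_DISKS
        offset_in_unit = block % STRIPE_UNIT
        run = min(STRIPE_UNIT - offset_in_unit, remaining)
        per_disk.setdefault(disk, []).append((block, block + run - 1))
        block += run
        remaining -= run
    return per_disk

# A 256-block window that looks contiguous to the host hits all four disks,
# so naively splitting it into several requests multiplies disk accesses.
print(split_readahead(start_block=100, length=256))
```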
  • Ziying DAI, Xiaoguang MAO, Yan LEI, Xiaomin WAN, Kerong BEN
    Article type: PAPER
    Subject area: Software Engineering
    2013 Volume E96.D Issue 1 Pages 28-39
    Published: 2013/01/01
    Released: 2013/01/01
    JOURNAL FREE ACCESS
    A garbage collector relieves programmers from manual memory management and improves productivity and program reliability. However, there are many other finite system resources that programmers must manage by themselves, such as sockets and database connections. Growing resource leaks can lead to performance degradation and even program crashes. This paper presents the automatic resource collection approach called Resco (RESource COllector) to tolerate non-memory resource leaks. Resco prevents performance degradation and crashes due to resource leaks in two steps. First, it utilizes monitors to count resource consumption and to request resource collections, independently of memory usage, when resource limits are about to be violated. Second, it responds to a resource collection request by safely releasing leaked resources. We implement Resco on top of a Java Virtual Machine for Java programs. The performance evaluation against standard benchmarks shows that Resco has a very low overhead, around 1% to 3%. Experiments on resource leak bugs show that Resco successfully prevents most of these programs from crashing with little increase in execution time.
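    A minimal sketch of the two-step idea described above, written in Python rather than inside a JVM for brevity; the ResourceMonitor class, its trigger threshold, and the leak check are hypothetical names used only for illustration.

```python
# Hypothetical sketch, not Resco's implementation: count live resource handles
# against a limit (step 1) and release leaked handles on a collection request (step 2).
import weakref

class ResourceMonitor:
    def __init__(self, limit, trigger_ratio=0.9):
        self.limit = limit                  # hard limit on live resources (assumed)
        self.trigger_ratio = trigger_ratio  # request collection before the limit is hit
        self._live = weakref.WeakSet()      # handles still reachable by the program

    def register(self, handle):
        self._live.add(handle)
        # Step 1: count consumption and request a collection near the limit.
        if len(self._live) >= self.trigger_ratio * self.limit:
            self.collect()

    def collect(self):
        # Step 2: safely release handles the program no longer uses but never closed.
        for handle in list(self._live):
            if getattr(handle, "leaked", False):   # stand-in for a real liveness check
                handle.close()
                self._live.discard(handle)
```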
  • Lihua ZHAO, Ryutaro ICHISE
    Article type: PAPER
    Subject area: Data Engineering, Web Information Systems
    2013 Volume E96.D Issue 1 Pages 40-50
    Published: 2013/01/01
    Released: 2013/01/01
    JOURNAL FREE ACCESS
    The Linking Open Data (LOD) cloud is a collection of linked Resource Description Framework (RDF) data with over 31 billion RDF triples. Accessing linked data is a challenging task because each data set in the LOD cloud has a specific ontology schema, and familiarity with the ontology schema used is required in order to query various linked data sets. However, manually checking each data set is time-consuming, especially when many data sets from various domains are used. This difficulty can be overcome without user interaction by using an automatic method that integrates different ontology schemas. In this paper, we propose a Mid-Ontology learning approach that can automatically construct a simple ontology, linking related ontology predicates (classes or properties) in different data sets. Our Mid-Ontology learning approach consists of three main phases: data collection, predicate grouping, and Mid-Ontology construction. Experiments show that our Mid-Ontology learning approach successfully integrates diverse ontology schemas with high quality and effectively retrieves related information with the constructed Mid-Ontology.
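    As a toy illustration of the predicate grouping phase only, the sketch below clusters predicates from different data sets whose local names look similar; the string-similarity measure and threshold are assumptions, not the grouping method described in the paper.

```python
# Toy predicate grouping: cluster predicate URIs whose local names are similar.
from difflib import SequenceMatcher

def local_name(uri: str) -> str:
    return uri.rstrip("/").split("/")[-1].split("#")[-1].lower()

def group_predicates(predicates, threshold=0.8):
    groups = []
    for p in predicates:
        for g in groups:
            if SequenceMatcher(None, local_name(p), local_name(g[0])).ratio() >= threshold:
                g.append(p)
                break
        else:
            groups.append([p])
    return groups

preds = [
    "http://dbpedia.org/ontology/birthPlace",
    "http://example.org/schema#birth_place",   # hypothetical second data set
    "http://xmlns.com/foaf/0.1/name",
]
# The two birth-place predicates end up in one group; foaf:name stays separate.
print(group_predicates(preds))
```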
  • Yukihiko SHIGESADA, Shinsuke KOBAYASHI, Noboru KOSHIZUKA, Ken SAKAMURA
    Article type: PAPER
    Subject area: Information Network
    2013 Volume E96.D Issue 1 Pages 51-63
    Published: 2013/01/01
    Released: 2013/01/01
    JOURNAL FREE ACCESS
    Context awareness is one of the ultimate goals of ubiquitous computing, and spatial information plays an important role in building context awareness. In this paper, we propose a new interoperable spatial information model, based on the ucode relation (ucR) and Place Identifier (PI), for realizing a ubiquitous spatial infrastructure. In addition, we propose a design environment for spatial information databases using our model. Our model is based on ucode and its relations. A ucode is a 128-bit number, and the number itself has no meaning; hence, it is difficult to manage the relations between ucodes without a tool. Our design environment lets users describe the connections between ucodes visually and manipulate the data interactively on a map of the target space. To evaluate the proposed model and environment, we designed three spaces using our tool. In addition, we developed a web application using our spatial model. The evaluation showed that our model is effective and that our design environment is useful for building spatial information databases based on our model.
  • Kobkrit VIRIYAYUDHAKORN, Susumu KUNIFUJI
    Article type: PAPER
    Subject area: Artificial Intelligence, Data Mining
    2013 Volume E96.D Issue 1 Pages 64-72
    Published: 2013/01/01
    Released: 2013/01/01
    JOURNAL FREE ACCESS
    Recent idea visualization programs still lack automatic idea summarization capabilities. This paper presents a knowledge-based method for automatically providing a short piece of English text about a topic to each idea group in idea charts. This automatic topic identification makes use of Yet Another General Ontology (YAGO) and WordNet as its knowledge bases. We propose a novel topic selection method and compare its performance with three existing methods using two experimental datasets constructed with two idea visualization programs, i.e., the KJ Method (Kawakita Jiro Method) and mind-mapping programs. Our proposed topic identification method outperformed the baseline method in terms of both performance and consistency.
  • Chen-Shu WANG
    Article type: PAPER
    Subject area: Artificial Intelligence, Data Mining
    2013 Volume E96.D Issue 1 Pages 73-80
    Published: 2013/01/01
    Released: 2013/01/01
    JOURNAL FREE ACCESS
    Product return is a critical but controversial issue. To deal with this vague return problem, businesses must improve their information transparency in order to administer the product return behaviour of their end users. This study proposes an intelligent return administration expert system (iRAES) to provide product return forecasting and decision support for returned product administration. The iRAES consists of two intelligent agents that adopt a hybrid data mining algorithm. The return diagnosis agent generates different alarms for certain types of product return, based on forecasts of the return possibility. The return recommender agent is implemented on the basis of case-based reasoning and provides the return centre clerk with a recommendation for returned product administration. We present a 3C-iShop scenario to demonstrate the feasibility and efficiency of the iRAES architecture. Our experiments identify a particularly interesting type of return, for which iRAES generates a recommendation for returned product administration. On average, iRAES decreases the effort required to generate a recommendation by 70% compared to previous return administration systems, and improves the performance of return decision support by 37%. iRAES is designed to accelerate product return administration and improve the performance of product return knowledge management.
  • Senya POLIKOVSKY, Yoshinari KAMEDA, Yuichi OHTA
    Article type: PAPER
    Subject area: Pattern Recognition
    2013 Volume E96.D Issue 1 Pages 81-92
    Published: 2013/01/01
    Released: 2013/01/01
    JOURNAL FREE ACCESS
    Facial micro-expressions are fast and subtle facial motions that are considered one of the most useful external signs for detecting hidden emotional changes in a person. However, they are not easy to detect and measure, as they appear only for a short time, with small muscle contractions in facial areas where salient features are not available. We propose a new computer vision method for detecting and measuring the timing characteristics of facial micro-expressions. The core of this method is a descriptor that combines pre-processing masks, histograms, and concatenation of spatial-temporal gradient vectors. The presented 3D gradient histogram descriptor is able to detect and measure the timing characteristics of fast and subtle changes of the facial skin surface. The method is specifically designed for analyzing videos recorded with a high-speed 200 fps camera. Final classification of micro-expressions is done using a k-means classifier and a voting procedure. The Facial Action Coding System was used to annotate the appearance and dynamics of the expressions in our new high-speed micro-expression video database, and the efficiency of the proposed approach was validated on this database.
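    A rough sketch of a 3D (spatio-temporal) gradient orientation histogram for one facial region is shown below; the bin count, block size, and weighting are illustrative choices and not the exact descriptor from the paper.

```python
# Illustrative spatio-temporal gradient orientation histogram for a video block.
import numpy as np

def st_gradient_histogram(block, n_bins=8):
    """block: (T, H, W) array of gray-scale frames for one facial region."""
    gt, gy, gx = np.gradient(block.astype(np.float64))   # temporal and spatial gradients
    angle = np.arctan2(gy, gx)                            # in-plane gradient orientation
    magnitude = np.sqrt(gx**2 + gy**2 + gt**2)            # weight by full 3D magnitude
    hist, _ = np.histogram(angle, bins=n_bins, range=(-np.pi, np.pi), weights=magnitude)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

# Example: a descriptor for a random 16-frame, 24x24-pixel region.
region = np.random.rand(16, 24, 24)
print(st_gradient_histogram(region))
```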
  • Meixun JIN, Yong-Hun LEE, Jong-Hyeok LEE
    Article type: PAPER
    Subject area: Natural Language Processing
    2013 Volume E96.D Issue 1 Pages 93-101
    Published: 2013/01/01
    Released: 2013/01/01
    JOURNAL FREE ACCESS
    This paper presents a new span-based dependency chart parsing algorithm that models the relations between the left and right dependents of a head. Such relations cannot be modeled in existing span-based algorithms, despite their popularity in dependency corpora. We address this problem through ternary-span combination during the subtree derivation. By modeling the relations between the left and right dependents of a head, our proposed algorithm provides a better capability of coordination disambiguation when the conjunction is annotated as the head of the left and right conjuncts. This eventually leads to state-of-the-art performance of dependency parsing on the Chinese data of the CoNLL shared task.
  • Chih-Chiang LIN, Pi-Chung WANG
    Article type: PAPER
    Subject area: Biocybernetics, Neurocomputing
    2013 Volume E96.D Issue 1 Pages 102-110
    Published: 2013/01/01
    Released: 2013/01/01
    JOURNAL FREE ACCESS
    The broadcast scheduling problem (BSP) in wireless ad-hoc networks is a well-known NP-complete combinatorial optimization problem. The BSP aims at finding a transmission schedule whose time slots are collision free in a wireless ad-hoc network with time-division multiple access (TDMA). The transmission schedule is optimized to minimize the frame length of the node transmissions and to maximize the utilization of the shared channel. Recently, many metaheuristics have been shown to optimally solve small problem instances of the BSP. However, for complex problem instances, the computation of metaheuristics can be quite time- and memory-consuming. In this work, we propose a greedy genetic algorithm for solving the BSP with a large number of nodes. We present three heuristic genetic operators, including a greedy crossover and two greedy mutation operators, to optimize both objectives of the BSP. These heuristic genetic operators can generate good solutions. Our experiments use both benchmark data sets and randomly generated problem instances. The experimental results show that our genetic algorithm is effective in solving BSP instances of large-scale networks with 2,500 nodes.
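    For reference, the feasibility constraint that any BSP schedule must satisfy can be checked in a few lines; the sketch below covers only this collision-free test, not the greedy genetic operators proposed in the paper.

```python
# Collision-free test for a TDMA broadcast schedule: a node must not share a
# slot with any neighbour within two hops.
def is_collision_free(schedule, adjacency):
    """schedule: {node: slot}; adjacency: {node: set of 1-hop neighbours}."""
    for node, slot in schedule.items():
        two_hop = set(adjacency[node])
        for n in adjacency[node]:
            two_hop |= adjacency[n]
        two_hop.discard(node)
        if any(schedule[other] == slot for other in two_hop):
            return False
    return True

# 4-node chain 0-1-2-3: nodes 0 and 2 are two hops apart, so sharing a slot fails.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(is_collision_free({0: 0, 1: 1, 2: 0, 3: 1}, adj))  # False
print(is_collision_free({0: 0, 1: 1, 2: 2, 3: 0}, adj))  # True
```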
  • Florencio Rusty PUNZALAN, Tetsuo SATO, Tomohisa OKADA, Shigehide KUHAR ...
    Article type: PAPER
    Subject area: Biological Engineering
    2013 Volume E96.D Issue 1 Pages 111-119
    Published: 2013/01/01
    Released: 2013/01/01
    JOURNAL FREE ACCESS
    This paper describes a simulation platform for use in the quantitative assessment of different respiratory motion correction techniques in coronary MR angiography (CMRA). The simulator incorporates the acquisition of motion parameters from heart motion tracking and applies them to a deformable heart model. To simulate respiratory motion, a high-resolution 3-D coronary heart reference image is deformed using the linear transformation estimated from a series of volunteer coronal scout scans. The deformed and motion-affected 3-D coronary images are used to generate segmented k-space data representing MR data acquisition affected by respiratory motion. The acquired k-space data are then corrected using different respiratory motion correction methods and converted back to image data. The resulting images are quantitatively compared with each other using image-quality measures. Simulation results are validated by acquiring CMRA scans using the correction methods used in the simulation.
  • Biao SUN, Qian CHEN, Xinxin XU, Li ZHANG, Jianjun JIANG
    Article type: LETTER
    Subject area: Fundamentals of Information Systems
    2013 Volume E96.D Issue 1 Pages 120-123
    Published: 2013/01/01
    Released: 2013/01/01
    JOURNAL FREE ACCESS
    Compressive sensing (CS) shows that a sparse or compressible signal can be exactly recovered from its linear measurements at a rate significantly lower than the Nyquist rate. As an extreme case, 1-bit compressive sensing (1-bit CS) states that an original sparse signal can be recovered from its 1-bit measurements. In this paper, we introduce a Fast and Accurate Two-Stage (FATS) algorithm for 1-bit compressive sensing. Simulations show that FATS not only significantly increases the signal reconstruction speed but also improves the reconstruction accuracy.
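    The measurement model behind 1-bit CS can be illustrated in a few lines; the snippet below only sets up y = sign(Φx) with example dimensions and does not implement the FATS reconstruction algorithm.

```python
# 1-bit CS measurement model: only the signs of the linear measurements are kept.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 128, 5                 # signal length, measurements, sparsity (example sizes)

x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)   # k-sparse signal
x /= np.linalg.norm(x)                # 1-bit CS recovers x only up to scale

Phi = rng.standard_normal((m, n))     # random Gaussian measurement matrix
y = np.sign(Phi @ x)                  # 1-bit measurements: only signs survive

print(y[:10])
```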
  • Wenbing JIN, Xuanya LI, Yanyong YU, Yongzhi WANG
    Article type: LETTER
    Subject area: Computer System
    2013 Volume E96.D Issue 1 Pages 124-128
    Published: 2013/01/01
    Released: 2013/01/01
    JOURNAL FREE ACCESS
    To improve Last-Level Cache (LLC) management, numerous approaches have been proposed that require additional hardware budget and increased overhead. A number of these approaches even change the organization of the existing cache design. In this letter, we propose Adaptive Insertion and Promotion (AIP) policies based on Least Recently Used (LRU) replacement. AIP dynamically inserts a missed line in the middle of the cache list and promotes a reused line several positions toward the MRU end, deliberately combining LRU and LFU policies under a single unified scheme. As a result, it benefits workloads with high locality as well as those with many frequently reused lines. Most importantly, AIP requires no additional hardware other than a typical LRU list, so it can be easily integrated into existing hardware with minimal changes. Other issues around the LLC, such as scans, thrashing, and dead lines, are all explored in our study. Experimental results on the gem5 simulator with the SPEC CPU2006 benchmarks indicate that AIP outperforms the LRU replacement policy by an average of 5.8% on the misses per thousand instructions metric.
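    A toy model of the insertion/promotion idea on a single cache set is sketched below; the fixed insertion point and promotion step are simplifying assumptions, whereas AIP chooses them adaptively.

```python
# Toy insertion/promotion policy on one cache set (index 0 = MRU end).
class InsertionPromotionList:
    def __init__(self, ways=8, promote_step=2):
        self.ways = ways
        self.promote_step = promote_step
        self.lines = []                       # ordered recency list for one set

    def access(self, tag):
        if tag in self.lines:                 # hit: promote a few positions toward MRU
            i = self.lines.index(tag)
            j = max(0, i - self.promote_step)
            self.lines.insert(j, self.lines.pop(i))
            return True
        if len(self.lines) >= self.ways:      # miss in a full set: evict the LRU line
            self.lines.pop()
        mid = len(self.lines) // 2            # miss: insert in the middle, not at MRU
        self.lines.insert(mid, tag)
        return False

cache = InsertionPromotionList()
for t in ["a", "b", "c", "a", "d", "a"]:
    cache.access(t)
print(cache.lines)   # the frequently reused "a" has migrated to the MRU end
```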
  • Chillo GA, Jeongho LEE, Won Hee LEE, Kiyun YU
    Article type: LETTER
    Subject area: Data Engineering, Web Information Systems
    2013 Volume E96.D Issue 1 Pages 129-133
    Published: 2013/01/01
    Released: 2013/01/01
    JOURNAL FREE ACCESS
    We present a novel point of interest (POI) construction approach based on street-level imagery (SLI) such as Google StreetView. Our method consists of: (1) the creation of a conflation map between an SLI trace and a vector map; (2) the detection of the corresponding buildings between the SLI scene and the conflation map; and (3) POI name extraction from a signboard in the SLI scene by user-interactive text recognition. Finally, a POI is generated through a combination of the POI name and attributes of the building object on a vector map. The proposed method showed recall of 92.99% and precision of 97.10% for real-world POIs.
  • Myung-Ho PARK, Ki-Gon NAM, Jin Seok KIM, Dae Hyun YUM, Pil Joong LEE
    Article type: LETTER
    Subject area: Information Network
    2013 Volume E96.D Issue 1 Pages 134-137
    Published: 2013/01/01
    Released: 2013/01/01
    JOURNAL FREE ACCESS
    A distance bounding protocol provides an upper bound on the distance between communicating parties by measuring the round-trip time between challenges and responses. It is an effective countermeasure against mafia fraud attacks (a.k.a. relay attacks). The adversary success probability of previous distance bounding protocols without a final confirmation message such as digital signature or message authentication code is at least $\left(\frac{3}{8}\right)^n = \left(\frac{1}{2.67}\right)^n$. We propose a unilateral distance bounding protocol without a final confirmation message, which reduces the adversary success probability to $\left(\frac{5}{16}\right)^n = \left(\frac{1}{3.2}\right)^n$.
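    The improvement is easy to quantify numerically; the short check below simply evaluates the two bounds quoted in the abstract for a few values of n.

```python
# Adversary success probability after n challenge-response rounds:
# previous protocols (3/8)^n versus the proposed protocol (5/16)^n.
for n in (8, 16, 32):
    previous = (3 / 8) ** n
    proposed = (5 / 16) ** n
    print(f"n={n:2d}  previous={previous:.3e}  proposed={proposed:.3e}")
```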
  • Debiao HE, Hao HU
    Article type: LETTER
    Subject area: Information Network
    2013 Volume E96.D Issue 1 Pages 138-140
    Published: 2013/01/01
    Released: 2013/01/01
    JOURNAL FREE ACCESS
    Recently, Shao et al. [M. Shao and Y. Chin, A privacy-preserving dynamic id-based remote user authentication scheme with access control for multi-server environment, IEICE Transactions on Information and Systems, vol.E95-D, no.1, pp.161-168, 2012] proposed a dynamic ID-based remote user authentication scheme with access control for multi-server environments. They claimed that their scheme could withstand various attacks and provide anonymity. However, in this letter, we will point out that Shao et al.'s scheme has practical pitfalls and is not feasible for real-life implementation. We identify that their scheme is vulnerable to two kinds of attacks and cannot provide anonymity.
  • Ryo SUZUKI, Mamoru OHARA, Masayuki ARAI, Satoshi FUKUMOTO, Kazuhiko IW ...
    Article type: LETTER
    Subject area: Dependable Computing
    2013 Volume E96.D Issue 1 Pages 141-145
    Published: 2013/01/01
    Released: 2013/01/01
    JOURNAL FREE ACCESS
    This paper discusses hybrid state saving for applications in which processes should create checkpoints at constant intervals and can hold only a finite number of checkpoints. We propose a reclamation technique for checkpoint space that provides effective checkpoint time arrangements for a given rollback distance distribution. Numerical examples show that when the optimal checkpoint interval cannot be used due to system requirements, the proposed technique achieves lower expected overhead than the conventional technique, which does not consider the form of the rollback distance distribution.
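    The quantity such an arrangement tries to reduce can be illustrated with a toy computation of the expected re-execution distance for a given set of retained checkpoints and a rollback-distance distribution; all numbers below are made up for illustration and the sketch is not the paper's reclamation policy.

```python
# Expected re-execution distance for retained checkpoints and a rollback PMF.
def expected_reexecution(retained, rollback_pmf, now=100):
    """retained: sorted checkpoint times; rollback_pmf: {distance: probability}."""
    total = 0.0
    for distance, prob in rollback_pmf.items():
        target = now - distance                             # state to return to
        usable = [t for t in retained if t <= target]
        cost = target - max(usable) if usable else target   # replay from nearest older checkpoint
        total += prob * cost
    return total

checkpoints_dense_recent = [40, 80, 90, 95]     # keep more of the recent checkpoints
checkpoints_uniform = [25, 50, 75, 95]          # evenly spaced checkpoints
pmf = {3: 0.5, 18: 0.3, 55: 0.2}                # short rollbacks most likely (assumed)
print(expected_reexecution(checkpoints_dense_recent, pmf))  # lower expected overhead
print(expected_reexecution(checkpoints_uniform, pmf))
```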
  • Hideki YOSHIKAWA, Masahiro KAMINAGA, Arimitsu SHIKODA
    Article type: LETTER
    Subject area: Dependable Computing
    2013 Volume E96.D Issue 1 Pages 146-150
    Published: 2013/01/01
    Released: 2013/01/01
    JOURNAL FREE ACCESS
    This article presents a differential fault analysis (DFA) technique using round addition for a generalized Feistel network (GFN), including CLEFIA and RC6. Here the term “round addition” means that the round operation is executed twice using the same round key. The proposed DFA requires bypassing an operation that counts the number of rounds, such as an increment or decrement. To verify the feasibility of our proposal, we implement several operations, including increment and decrement, on a microcontroller and experimentally confirm the operation bypassing. The proposed round addition technique works effectively for generalized Feistel networks with a partial whitening operation after the last round. In the case of 128-bit CLEFIA, we show a procedure to reconstruct the round keys or the secret key using one correct ciphertext and two faulty ciphertexts. Our DFA also works for DES and RC6.
  • Chi-Jung HUANG, Shaw-Hwa HWANG, Cheng-Yu YEH
    Article type: LETTER
    Subject area: Pattern Recognition
    2013 Volume E96.D Issue 1 Pages 151-154
    Published: 2013/01/01
    Released: 2013/01/01
    JOURNAL FREE ACCESS
    This study proposes an improvement to the Triangular Inequality Elimination (TIE) algorithm for vector quantization (VQ). The proposed approach uses recursive and intersection (RI) rules to complement and enhance the TIE algorithm. The recursive rule changes reference codewords dynamically and produces the smallest candidate group. The intersection rule removes redundant codewords from these candidate groups. The RI-TIE approach avoids over-reliance on the continuity of the input signal. This study tests the contribution of the RI rules using the VQ-based G.729 standard LSP encoder and some classic test images. The results show that the RI rules perform excellently within the TIE algorithm.
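    The basic elimination test that the RI rules build on can be sketched briefly; the code below shows only the classic TIE pruning rule with a single reference codeword, not the proposed recursive and intersection rules.

```python
# Basic TIE search: skip codeword c whenever |d(x, r) - d(r, c)| already
# exceeds the best distance found so far (the d(r, c) table is precomputed).
import numpy as np

def tie_search(x, codebook, ref_idx=0):
    d_ref = np.linalg.norm(codebook - codebook[ref_idx], axis=1)   # offline table d(r, c)
    d_x_ref = np.linalg.norm(x - codebook[ref_idx])
    best_idx, best_dist = ref_idx, d_x_ref
    for i, c in enumerate(codebook):
        if abs(d_x_ref - d_ref[i]) >= best_dist:   # triangle inequality: cannot beat best
            continue
        d = np.linalg.norm(x - c)
        if d < best_dist:
            best_idx, best_dist = i, d
    return best_idx, best_dist

codebook = np.random.rand(256, 10)    # toy 256-codeword, 10-dimensional codebook
x = np.random.rand(10)
print(tie_search(x, codebook))
```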
  • Biao WANG, Weifeng LI, Zhimin LI, Qingmin LIAO
    Article type: LETTER
    Subject area: Image Recognition, Computer Vision
    2013 Volume E96.D Issue 1 Pages 155-158
    Published: 2013/01/01
    Released: 2013/01/01
    JOURNAL FREE ACCESS
    In this letter, we propose an extension to the classical logarithmic total variation (LTV) model for face recognition under variant illumination conditions. LTV treats all facial areas with the same regularization parameters, which inevitably results in the loss of useful facial details and is harmful for recognition tasks. To address this problem, we propose to assign the regularization parameters which balance the large-scale (illumination) and small-scale (reflectance) components in a spatially adaptive scheme. Face recognition experiments on both Extended Yale B and the large-scale FERET databases demonstrate the effectiveness of the proposed method.
  • Quan MIAO, Guijin WANG, Xinggang LIN
    Article type: LETTER
    Subject area: Image Recognition, Computer Vision
    2013 Volume E96.D Issue 1 Pages 159-162
    Published: 2013/01/01
    Released: 2013/01/01
    JOURNAL FREE ACCESS
    This paper proposes a novel method for object tracking by combining local feature and global template-based methods. The proposed algorithm consists of two stages from coarse to fine. The first stage applies on-line classifiers to match the corresponding keypoints between the input frame and the reference frame. Thus a rough motion parameter can be estimated using RANSAC. The second stage employs kernel-based global representation in successive frames to refine the motion parameter. In addition, we use the kernel weight obtained during the second stage to guide the on-line learning process of the keypoints' description. Experimental results demonstrate the effectiveness of the proposed technique.
  • Junsan ZHANG, Youli QU, Shu GONG, Shengfeng TIAN, Haoliang SUN
    Article type: LETTER
    Subject area: Natural Language Processing
    2013 Volume E96.D Issue 1 Pages 163-167
    Published: 2013/01/01
    Released: 2013/01/01
    JOURNAL FREE ACCESS
    Entities are important information carriers in Web pages, and users would often like to get a list of relevant entities directly, instead of a list of documents, when they submit a query to a search engine. Research on related entity finding (REF) is therefore meaningful work. In this paper we investigate the most important task of REF: entity ranking. Wrong-type entities, which do not belong to the target-entity type, pollute the ranking results. We propose a novel method to filter out wrong-type entities. We focus on the acquisition of seed entities and on automatically extracting the common Wikipedia categories of the target-entity type. We also demonstrate how to filter wrong-type entities using the proposed model. The experimental results show that our method can filter wrong-type entities effectively and improve the results of entity ranking.
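    A toy sketch of the filtering idea is given below: derive categories shared by a few seed entities of the target type and drop candidates that share none of them. The entity names and category strings are simplified stand-ins for real Wikipedia data, not the paper's actual model.

```python
# Filter candidate entities by overlap with the target type's common categories.
def common_categories(seed_categories):
    sets = list(seed_categories.values())
    return set.intersection(*sets) if sets else set()

def filter_wrong_type(candidates, candidate_categories, type_categories):
    return [e for e in candidates
            if candidate_categories.get(e, set()) & type_categories]

seeds = {
    "KLM":       {"Airlines", "Companies of the Netherlands"},
    "Lufthansa": {"Airlines", "Companies of Germany"},
}
target_cats = common_categories(seeds)          # {"Airlines"}
candidates = ["Air France", "Boeing 747", "Paris"]
cand_cats = {
    "Air France": {"Airlines", "Companies of France"},
    "Boeing 747": {"Wide-body aircraft"},
    "Paris":      {"Capitals in Europe"},
}
print(filter_wrong_type(candidates, cand_cats, target_cats))   # ['Air France']
```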
Errata