IEICE Transactions on Information and Systems
Online ISSN : 1745-1361
Print ISSN : 0916-8532
Volume E92.D , Issue 11
14 articles in this issue
Regular Section
  • Marta R. COSTA-JUSSÀ, José A. R. FONOLLOSA
    Type: SURVEY PAPER
    Subject area: Natural Language Processing
    2009 Volume E92.D Issue 11 Pages 2179-2185
    Published: November 01, 2009
    Released: November 01, 2009
    JOURNALS FREE ACCESS
This paper surveys several state-of-the-art reordering techniques employed in Statistical Machine Translation systems. Reordering is understood as the word-order redistribution of the translated words. In original SMT systems, this different order is only modeled within the limits of translation units. Relying only on the reordering provided by translation units may not be sufficient for most language pairs, which may require longer-range reorderings. Therefore, additional techniques may be deployed to address the reordering challenge. The Statistical Machine Translation community has been very active recently in developing reordering techniques. This paper gives a brief survey and classification of several well-known reordering approaches.
  • Kazunaga HYODO, Kengo IWAMOTO, Hideki ANDO
    Type: PAPER
    Subject area: Computer Systems
    2009 Volume E92.D Issue 11 Pages 2186-2195
    Published: November 01, 2009
    Released: November 01, 2009
    JOURNALS FREE ACCESS
Instruction pre-execution is an effective way to prefetch data. We previously proposed an instruction pre-execution scheme, which we call two-step physical register deallocation (TSD). The TSD realizes pre-execution by exploiting the difference between the amount of instruction-level parallelism available with an unlimited number of physical registers and that available with the actual number of physical registers. Although our previous TSD study successfully improved performance, its energy consumption is still inefficient. This is because as many instructions as possible are pre-executed, regardless of whether or not they contribute significantly to load latency reduction, in order to allow maximal performance improvement. This paper presents a scheme that improves the energy efficiency of the TSD by pre-executing only those instructions that yield a large benefit. Our evaluation results using the SPECfp2000 benchmarks show that our scheme reduces the dynamic pre-executed instruction count by 76% compared with the original scheme. This reduction saves 7% of the energy consumption of the execution core, with 2% overhead. Performance degrades by 2% compared with the original scheme, but is still 15% higher than that of a normal processor without the TSD.
  • Toshihiro YOKOYAMA, Miyuki HANAOKA, Makoto SHIMAMURA, Kenji KONO, Taka ...
    Type: PAPER
    Subject area: System Programs
    2009 Volume E92.D Issue 11 Pages 2196-2206
    Published: November 01, 2009
    Released: November 01, 2009
    JOURNALS FREE ACCESS
Secure operating systems (secure OSes) are widely used to limit the damage caused by unauthorized access to Internet servers. However, writing a security policy based on the principle of least privilege for a secure OS is a challenge for an administrator. Considering that remote attackers can never attack a server before establishing a connection to it, we propose a novel scheme that exploits execution phases to simplify security policy descriptions for Internet servers. In our scheme, the entire system has two execution phases: an initialization phase and a protocol processing phase. The initialization phase is defined as the phase before the server establishes connections to its clients, and the protocol processing phase as the phase after it establishes them. The key observation is that access control needs to be enforced by the secure OS only in the protocol processing phase to defend against remote attacks. Since remote attacks cannot be launched in the initialization phase, the secure OS is not required to enforce access control in that phase. Thus, we can omit the access-control policy for the initialization phase, which effectively reduces the number of policy rules. To prove the effectiveness of our scheme, we wrote security policies for three kinds of Internet servers (HTTP, SMTP, and POP servers). Our experimental results demonstrate that our scheme effectively reduces the number of policy descriptions; it eliminates 47.2%, 27.5%, and 24.0% of the policy rules for the HTTP, SMTP, and POP servers, respectively, compared with an existing SELinux policy that includes the initialization of the server.
  • Seungmin LEE, Tae-Jun PARK, Donghyeok LEE, Taekyong NAM, Sehun KIM
    Type: PAPER
    Subject area: Database
    2009 Volume E92.D Issue 11 Pages 2207-2217
    Published: November 01, 2009
    Released: November 01, 2009
    JOURNALS FREE ACCESS
The need for data encryption that protects sensitive data in a database has increased rapidly. However, encrypted data can no longer be queried efficiently, because nearly all of the data must be decrypted. Several order-preserving encryption schemes that enable indexes to be built over encrypted data have been suggested to solve this problem. They allow any comparison operation to be applied directly to encrypted data. However, one of the main disadvantages of these schemes is that they expose sensitive data to inference attacks using order information, especially when the data are used together with unencrypted columns in the database. In this study, a new order-preserving encryption scheme that provides secure queries by hiding the order is introduced. It nevertheless supports efficient queries, because any user who holds the encryption key knows the order. The proposed scheme is designed to be efficient and secure in such a mixed environment; thus, it is possible to encrypt only sensitive data while leaving other data unencrypted. The encryption is not only robust against order exposure, but also shows high performance for any query over encrypted data. In addition, the proposed scheme supports strong updates without assumptions about the plaintext distribution, which allows it to be integrated easily with existing database systems.
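The order-preserving property that such schemes rely on can be illustrated with a toy, deliberately insecure monotone mapping; this is a generic illustration, not the authors' scheme, and the function `ope_toy` and its key layout are hypothetical.

```python
import hashlib

def ope_toy(key: bytes, x: int, a: int = 1000) -> int:
    """Toy order-preserving 'encryption': strictly monotone in x.

    ciphertext = a*x + keyed pseudorandom offset in [0, a), so
    x1 < x2 always implies ope_toy(key, x1) < ope_toy(key, x2).
    Illustration only -- it leaks order by design and is NOT secure.
    """
    h = hashlib.sha256(key + x.to_bytes(8, "big", signed=True)).digest()
    offset = int.from_bytes(h[:4], "big") % a  # bounded keyed noise
    return a * x + offset

key = b"demo-key"
values = [5, -3, 12, 0]
ciphertexts = [ope_toy(key, v) for v in values]
# comparisons on ciphertexts mirror comparisons on plaintexts,
# so an index built over the ciphertexts answers range queries
assert sorted(ciphertexts) == [ope_toy(key, v) for v in sorted(values)]
```

Because the keyed offset is strictly smaller than the step `a`, sorting ciphertexts always reproduces the plaintext order, which is exactly the property that makes both indexing and order-based inference attacks possible.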
  • Yi-Reun KIM, Kyu-Young WHANG, Min-Soo KIM, Il-Yeol SONG
    Type: PAPER
    Subject area: Database
    2009 Volume E92.D Issue 11 Pages 2218-2234
    Published: November 01, 2009
    Released: November 01, 2009
    JOURNALS FREE ACCESS
MEMS storage devices are new non-volatile secondary storage devices with outstanding advantages over magnetic disks. However, they differ greatly from magnetic disks in structure and access characteristics. They have thousands of heads, called probe tips, and provide two major access facilities: (1) flexibility: a set of probe tips can be freely selected for accessing data; and (2) parallelism: data can be read and written simultaneously with the selected set of probe tips. Because of these characteristics, it is nontrivial to find data placements that fully utilize the capability of MEMS storage devices. In this paper, we propose a simple logical model called the Region-Sector (RS) model, which abstracts the major characteristics affecting data retrieval performance, such as flexibility and parallelism, from the physical MEMS storage model. We also suggest heuristic data placement strategies based on the RS model. To show the usability of the RS model, we derive new data placements for relational data and two-dimensional spatial data by using these strategies. Experimental results show that the proposed data placements improve data retrieval performance by up to 4.7 times for relational data and by up to 18.7 times for two-dimensional spatial data of approximately 320 Mbytes, compared with existing data placements. Furthermore, these improvements are expected to become more marked as the database size grows.
  • Walaa ALY, Seiichi UCHIDA, Masakazu SUZUKI
    Type: PAPER
    Subject area: Pattern Recognition
    2009 Volume E92.D Issue 11 Pages 2235-2243
    Published: November 01, 2009
    Released: November 01, 2009
    JOURNALS FREE ACCESS
Machine recognition of mathematical expressions in printed documents is not trivial, even when all the individual characters and symbols in an expression can be recognized correctly. In this paper, an automatic method for classifying the spatial relationship between the adjacent symbols in a pair is presented. This classification is important for realizing an accurate structure analysis module in math OCR. Experimental results on very large databases showed that the classification achieved an accuracy of 99.525% by using distribution maps defined by two geometric features, relative size and relative position, with careful treatment of document-dependent characteristics.
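The two geometric features mentioned in the abstract can be sketched from symbol bounding boxes; the concrete definitions below (height ratio and normalized vertical center offset) are illustrative assumptions, not necessarily the exact definitions used in the paper.

```python
def pair_features(box1, box2):
    """Geometric features of an adjacent symbol pair (illustrative).

    Each box is (xmin, ymin, xmax, ymax), with y increasing upward.
    Returns (relative_size, relative_position): the height ratio of
    the second symbol to the first, and the offset of its vertical
    center normalized by the first symbol's height.
    """
    _, y1a, _, y1b = box1
    _, y2a, _, y2b = box2
    h1, h2 = y1b - y1a, y2b - y2a
    relative_size = h2 / h1
    c1 = (y1a + y1b) / 2.0
    c2 = (y2a + y2b) / 2.0
    relative_position = (c2 - c1) / h1
    return relative_size, relative_position

# a base symbol followed by a smaller, raised symbol, as in "x" with
# a superscript: small size ratio and positive vertical offset
base = (0, 0, 10, 10)
sup = (11, 6, 16, 12)
print(pair_features(base, sup))
```

A point in this two-dimensional feature space can then be looked up in a distribution map to decide whether the pair is, for example, horizontally adjacent or in a superscript/subscript relationship.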
  • Tetsuji OGAWA, Tetsunori KOBAYASHI
    Type: PAPER
    Subject area: Speech and Hearing
    2009 Volume E92.D Issue 11 Pages 2244-2252
    Published: November 01, 2009
    Released: November 01, 2009
    JOURNALS FREE ACCESS
The accuracy of simulation-based assessments of speech recognition systems under noisy conditions is investigated, with a focus on the influence of the Lombard effect on recognition performance. The investigation was carried out under various recognition conditions: different sound pressure levels of ambient noise; different recognition tasks, such as continuous speech recognition and spoken word recognition; and different recognition systems, i.e., systems with and without adaptation of the acoustic models to ambient noise. Experimental results showed that accurate simulation was not always achieved when dry sources with a neutral talking style were used, but it could be achieved with dry sources that include the influence of the Lombard effect; in the latter case, the simulation is accurate irrespective of the recognition conditions.
  • Mahdieh KHANMOHAMMADI, Reza AGHAIEZADEH ZOROOFI, Takashi NISHII, Hisas ...
    Type: PAPER
    Subject area: Biological Engineering
    2009 Volume E92.D Issue 11 Pages 2253-2263
    Published: November 01, 2009
    Released: November 01, 2009
    JOURNALS FREE ACCESS
Quantification of the hip cartilages is clinically important. In this study, we propose an automatic technique for segmentation and visualization of the acetabular and femoral head cartilages, based on clinically obtained multi-slice T1-weighted MR data and a hybrid approach. We follow a knowledge-based approach, employing several features such as the anatomical shapes of the femoral and acetabular cartilages and the corresponding image intensities. We estimate the center of the femoral head by a Hough transform and then automatically select the volume of interest. We then automatically segment the hip bones by a self-adaptive vector quantization technique. Next, we localize the articular central line by a modified Canny edge detector based on first- and second-derivative filters along radial lines originating from the femoral head center, together with an anatomical constraint. We then roughly segment the acetabular and femoral head cartilages using the derivative images obtained in the previous step and a top-hat filter. Final masks of the acetabular and femoral head cartilages are generated automatically from the rough results, the estimated articular central line, and anatomical knowledge. Next, we generate a thickness map for each cartilage in the radial direction based on Euclidean distance. The three-dimensional pelvic bones, the acetabular and femoral cartilages, and the corresponding thicknesses are overlaid and visualized. The techniques have been implemented in C++ and MATLAB. We have evaluated and clarified the usefulness of the proposed techniques on 40 clinical multi-slice MR images of the hip.
  • Akara SOPHARAK, Bunyarit UYYANONVARA, Sarah BARMAN, Thomas WILLIAMSON
    Type: PAPER
    Subject area: Biological Engineering
    2009 Volume E92.D Issue 11 Pages 2264-2271
    Published: November 01, 2009
    Released: November 01, 2009
    JOURNALS FREE ACCESS
To prevent blindness from diabetic retinopathy, periodic screening and early diagnosis are necessary. Due to the lack of expert ophthalmologists in rural areas, automated early detection of exudates (one of the visible signs of diabetic retinopathy) could help reduce the incidence of blindness in diabetic patients. Traditional automatic exudate detection methods are based on a specific parameter configuration, while machine learning approaches, which seem more flexible, may be computationally expensive. A comparative analysis of traditional and machine learning methods for exudate detection, namely mathematical morphology, fuzzy c-means clustering, a naive Bayes classifier, a Support Vector Machine, and a Nearest Neighbor classifier, is presented. Detected exudates are validated against expert ophthalmologists' hand-drawn ground truths. The sensitivity, specificity, precision, accuracy, and time complexity of each method are also compared.
  • Depeng JIN, Shijun LIN, Li SU, Lieguang ZENG
    Type: LETTER
    Subject area: VLSI Systems
    2009 Volume E92.D Issue 11 Pages 2272-2274
    Published: November 01, 2009
    Released: November 01, 2009
    JOURNALS FREE ACCESS
Motivated by the different error characteristics of each path, we propose a study-based error recovery scheme for Networks-on-Chip (NoC). In this scheme, two study processes are first executed to obtain the error characteristics of every link; then, according to the study results and the selection rule we derive, the scheme selects the better error recovery method for every path. Simulation results show that, compared with the traditional simple retransmission scheme and the hybrid single-error-correction, multi-error-retransmission scheme, our scheme greatly improves throughput and cuts down energy consumption with little area increase.
  • Ning LI, De XU
    Type: LETTER
    Subject area: Image Recognition, Computer Vision
    2009 Volume E92.D Issue 11 Pages 2275-2278
    Published: November 01, 2009
    Released: November 01, 2009
    JOURNALS FREE ACCESS
The frequency response of the log-Gabor function matches well with the frequency response of primate visual neurons. In this letter, motion-salient regions are extracted based on the 2D log-Gabor wavelet transform of the spatio-temporal form of actions. A supervised classification technique is then used to classify the actions. The proposed method is robust to irregular segmentation of actors. Moreover, the 2D log-Gabor wavelet permits a more compact representation of actions than recent neurobiological models using Gabor wavelets.
  • Ning WANG, De XU, Bing LI
    Type: LETTER
    Subject area: Image Recognition, Computer Vision
    2009 Volume E92.D Issue 11 Pages 2279-2282
    Published: November 01, 2009
    Released: November 01, 2009
    JOURNALS FREE ACCESS
Color constancy is the ability to measure the colors of objects independently of the color of the light source. Various methods have been proposed to handle this problem, most of which depend on the statistical distributions of pixel values. Recent studies show that incorporating image derivatives is more effective than the direct use of pixel values. Based on this idea, a novel edge-based color constancy algorithm using support vector regression (SVR) is proposed. In contrast to the existing SVR color constancy algorithm, which is computed from the zero-order structure of images, our method is based on their higher-order structure. The experimental results show that our algorithm is more effective than the zero-order SVR color constancy methods.
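The derivative-based image structure that such edge-based methods build on can be illustrated with the well-known gray-edge statistic, which takes the per-channel average derivative magnitude as the illuminant estimate; this sketch shows only that generic statistic, not the authors' SVR method.

```python
import numpy as np

def gray_edge_illuminant(img: np.ndarray) -> np.ndarray:
    """First-order gray-edge illuminant estimate (generic sketch).

    img: H x W x 3 float array. For each channel, average the
    magnitude of the x/y derivatives; the resulting 3-vector,
    normalized to unit length, is the illuminant color estimate.
    """
    est = np.empty(3)
    for c in range(3):
        dy, dx = np.gradient(img[:, :, c])  # per-axis derivatives
        est[c] = np.mean(np.hypot(dx, dy))
    n = np.linalg.norm(est)
    # fall back to a neutral (gray) estimate for edge-free images
    return est / n if n > 0 else np.full(3, 1.0 / np.sqrt(3))

# an image with strong red edges and weak green edges yields an
# estimate dominated by the red channel
ramp = np.linspace(0.0, 1.0, 32)
img = np.zeros((32, 32, 3))
img[:, :, 0] = ramp[None, :]        # strong red ramp
img[:, :, 1] = ramp[None, :] * 0.2  # weak green ramp
est = gray_edge_illuminant(img)
assert est[0] > est[1] > est[2]
```

An edge-based (higher-order) method differs from a zero-order one precisely in feeding such derivative statistics, rather than raw pixel statistics, to the estimator.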
  • Won SEONG, June-Sik CHO, Seung-Moo NOH, Jong-Won PARK
    Type: LETTER
    Subject area: Biological Engineering
    2009 Volume E92.D Issue 11 Pages 2283-2286
    Published: November 01, 2009
    Released: November 01, 2009
    JOURNALS FREE ACCESS
In general, the spleen is hypertrophied when the abdomen is abnormal. However, if the spleen is originally small, it is hard to detect splenic enlargement due to an abnormal abdomen simply by measuring its size. Conversely, the spleen of a person with a normal abdomen may be large by nature. Therefore, measuring the size of the spleen is not a reliable diagnostic measure of its enlargement or of abdominal abnormality. This paper proposes an automatic method for diagnosing splenic enlargement due to abnormality by examining the boundary pattern of the spleen in abdominal CT images.
  • Febriliyan SAMOPA, Akira ASANO, Akira TAGUCHI
    Type: LETTER
    Subject area: Biological Engineering
    2009 Volume E92.D Issue 11 Pages 2287-2290
    Published: November 01, 2009
    Released: November 01, 2009
    JOURNALS FREE ACCESS
Measurement of an individual molar provides rich information for forensic personal identification. We propose a computer-based system for extracting an individual molar from dental panoramic radiographs. A molar is obtained by extracting the region of interest, separating the maxilla and mandible, and extracting the boundaries between teeth. The proposed system is almost fully automatic; all the user has to do is click three points on the boundary between the maxilla and the mandible.