IEICE Transactions on Information and Systems
Online ISSN : 1745-1361
Print ISSN : 0916-8532
Volume E94.D, Issue 9
Regular Section
  • Truong Vinh Truong DUY, Yukinori SATO, Yasushi INOGUCHI
    Article type: PAPER
    Subject area: Fundamentals of Information Systems
    2011 Volume E94.D Issue 9 Pages 1731-1741
    Published: September 01, 2011
    Released on J-STAGE: September 01, 2011
    JOURNAL FREE ACCESS
    With energy shortages and global climate change among today's leading concerns, the energy consumption of datacenters has become a key issue. A substantial reduction in energy consumption can clearly be achieved by powering down servers when they are not in use. This paper aims at designing, implementing and evaluating a Green Scheduler for reducing the energy consumption of datacenters in Cloud computing platforms. It is composed of four algorithms: prediction, ON/OFF, task scheduling, and evaluation. The prediction algorithm employs a neural predictor to forecast future load demand from historical demand. According to the prediction, the ON/OFF algorithm dynamically adjusts server allocations to minimize the number of servers running, thus minimizing energy use at the points of consumption to benefit all other levels. The task scheduling algorithm directs request traffic away from powered-down servers and toward active servers. The performance is monitored by the evaluation algorithm to balance the system's adaptability against stability. For evaluation, we perform simulations with two load traces. The results show that the prediction mode, with a combination of dynamic training and dynamic provisioning of 20% additional servers, can reduce energy consumption by 49.8% with a drop rate of 0.02% on one load trace, and by 55.4% with a drop rate of 0.16% on the other. Our method is also shown to have a distinct advantage over its counterparts. (A sketch of the provisioning rule follows this entry.)
    Download PDF (1945K)
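    A minimal sketch of the ON/OFF provisioning rule described above, assuming a uniform per-server capacity; the function name, the requests-per-second units and the example numbers are illustrative, not from the paper:

    ```python
    import math

    def servers_to_power_on(predicted_load, server_capacity, headroom=0.20):
        """Provision enough servers for the predicted load plus a safety
        margin (the paper's 'dynamic provisioning of 20% additional servers').
        predicted_load and server_capacity are in requests per second."""
        return math.ceil(predicted_load * (1.0 + headroom) / server_capacity)

    # Example: a predicted load of 950 req/s on servers that each handle
    # 100 req/s keeps 12 servers on and powers down the rest of the pool.
    print(servers_to_power_on(950, 100))  # -> 12
    ```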
  • Takeshi KUMAKI, Tetsushi KOIDE, Hans Jürgen MATTAUSCH, Masaharu T ...
    Article type: PAPER
    Subject area: Fundamentals of Information Systems
    2011 Volume E94.D Issue 9 Pages 1742-1754
    Published: September 01, 2011
    Released on J-STAGE: September 01, 2011
    JOURNAL FREE ACCESS
    This paper presents a software-based parallel cryptographic solution with a massive-parallel memory-embedded SIMD matrix (MTX) for data-storage systems. MTX can have up to 2,048 2-bit processing elements, which are connected by a flexible switching network, and supports 2-bit 2,048-way bit-serial and word-parallel operations with a single command. Furthermore, a next-generation SIMD matrix called MX-2 has been developed by expanding the processing-element capability of MTX from 2-bit to 4-bit processing. These SIMD matrix architectures are verified to be a better alternative for processing repeated arithmetic and logical operations in multimedia applications with low power consumption. Moreover, we have proposed combining Content Addressable Memory (CAM) technology with the massive-parallel memory-embedded SIMD matrix architecture to enable fast pipelined table-lookup coding. Since both arithmetic-logical operations and table-lookup coding execute extremely fast on these architectures, efficient execution of encryption and decryption algorithms can be realized. Evaluation results of the CAM-less and CAM-enhanced massive-parallel SIMD matrix processor for the example of the Advanced Encryption Standard (AES), a widely used cryptographic algorithm, show that a throughput of up to 2.19 Gbps becomes possible. This means that several standard data-storage transfer specifications, such as SD, CF (Compact Flash), USB (Universal Serial Bus) and SATA (Serial Advanced Technology Attachment), can be covered. Consequently, the massive-parallel SIMD matrix architecture is very suitable for private information protection in several data-storage media. A further advantage of the software-based solution is the flexibility to update the implemented cryptographic algorithm to a safer future algorithm. The massive-parallel memory-embedded SIMD matrix architecture (MTX and MX-2) is therefore a promising solution for the integrated realization of real-time cryptographic algorithms with low power dissipation and small Si-area consumption. (A sketch of table-lookup coding follows this entry.)
    Download PDF (1163K)
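    A minimal sketch of the table-lookup coding idea above, using a byte-substitution step (as in AES's SubBytes) as the example; NumPy fancy indexing merely stands in for the CAM-based, single-command parallel lookup of the MTX/MX-2 hardware, and the random table is illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Byte-substitution table; in AES this would be the fixed 256-entry S-box.
    TABLE = rng.permutation(256).astype(np.uint8)

    def sub_bytes(state: np.ndarray) -> np.ndarray:
        # One vectorized lookup substitutes every byte of the state at once,
        # mimicking the word-parallel table lookup issued by a single command
        # on the SIMD matrix.
        return TABLE[state]

    block = np.frombuffer(b"0123456789abcdef", dtype=np.uint8)
    print(sub_bytes(block))  # 16 substituted bytes
    ```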
  • Junbo WANG, Zixue CHENG, Yongping CHEN, Lei JING
    Article type: PAPER
    Subject area: Information Network
    2011 Volume E94.D Issue 9 Pages 1755-1767
    Published: September 01, 2011
    Released on J-STAGE: September 01, 2011
    JOURNAL FREE ACCESS
    Context awareness is viewed as one of the most important goals in the pervasive computing paradigm. As one kind of context awareness, danger awareness describes and detects dangerous situations around a user and provides services, such as warnings, to protect the user from danger. One important problem arising in danger-aware systems is that the description/definition of dangerous situations becomes more and more complex, since many factors have to be considered; this places a heavy burden on developers/users and thereby reduces the reliability of the system. It is necessary to develop a flexible reasoning method that can ease the description/definition of dangerous situations by reasoning about dangers from a limited set of specified/predefined contexts/rules, and increase system reliability by detecting unspecified dangerous situations. Some reasoning mechanisms based on context similarity have been proposed to address these problems. However, the current mechanisms are not very accurate in some cases, since the similarity is computed from only basic knowledge, e.g., natural properties such as material and size, and category information; i.e., they may cause false-positive and false-negative problems. To solve these problems, in this paper we propose a new flexible and accurate method from the feature point of view. First, a new ontology explicitly integrating basic knowledge and danger features is designed for computing similarity in danger-aware systems. Then a new method is proposed to compute object similarity from both the basic-knowledge and danger-feature points of view when calculating context similarity (a sketch follows this entry). The method is implemented in an indoor ubiquitous test bed and evaluated through experiments. The experimental results, based on comparing the system's decisions with the judgments of human observers, show that the proposed method effectively increases the system's accuracy compared with existing methods, and that the burden of defining dangerous situations can be decreased, as shown by evaluating the trade-off between the system's accuracy and that burden.
    Download PDF (2122K)
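    A minimal sketch of scoring object similarity from the two views described above, basic knowledge and danger features; the Jaccard measure, the weight, and the feature names are illustrative assumptions, not the paper's ontology or formula:

    ```python
    def jaccard(a: set, b: set) -> float:
        return len(a & b) / len(a | b) if a | b else 1.0

    def object_similarity(obj_a: dict, obj_b: dict, w_danger: float = 0.6) -> float:
        """Combine basic-knowledge similarity with danger-feature similarity;
        weighting the danger view higher is an illustrative choice."""
        basic = jaccard(set(obj_a["basic"]), set(obj_b["basic"]))
        danger = jaccard(set(obj_a["danger"]), set(obj_b["danger"]))
        return (1 - w_danger) * basic + w_danger * danger

    knife = {"basic": {"metal", "small"}, "danger": {"sharp_edge"}}
    scissors = {"basic": {"metal", "small"}, "danger": {"sharp_edge", "pointed"}}
    print(object_similarity(knife, scissors))  # high: shared danger features
    ```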
  • Osama OUDA, Norimichi TSUMURA, Toshiya NAKAGUCHI
    Article type: PAPER
    Subject area: Information Network
    2011 Volume E94.D Issue 9 Pages 1768-1777
    Published: September 01, 2011
    Released on J-STAGE: September 01, 2011
    JOURNAL FREE ACCESS
    Proving the security of cancelable biometrics and other template protection techniques is a key prerequisite for the widespread deployment of biometric technologies. BioEncoding is a cancelable biometrics scheme that has recently been proposed to protect biometric templates represented as binary strings, such as iris codes. Unlike other template protection schemes, BioEncoding does not require user-specific keys or tokens. Moreover, it satisfies the requirements of untraceable biometrics without sacrificing matching accuracy. However, the security of BioEncoding against smart attacks, such as correlation and optimization-based attacks, has to be proved before recommending it for practical deployment. In this paper, the security of BioEncoding, in terms of both non-invertibility and privacy protection, is analyzed. First, the resistance of protected templates generated using BioEncoding against brute-force search attacks is revisited rigorously. Then, vulnerabilities of BioEncoding with respect to correlation attacks and optimization-based attacks are identified and explained. Furthermore, an important modification to the BioEncoding algorithm is proposed to enhance its security against correlation attacks. The effect of integrating this modification into BioEncoding is validated, and its impact on matching accuracy is investigated empirically using the CASIA-IrisV3-Interval dataset. Experimental results confirm the efficacy of the proposed modification and show that it has no negative impact on matching accuracy. (A sketch of the scheme's bit mapping follows this entry.)
    Download PDF (4233K)
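    A minimal sketch of a BioEncoding-style many-to-one bit mapping, assuming the scheme's published outline: the binary template is split into n-bit words, and each word addresses one bit of a public random seed string. The parameters and naming here are illustrative, not the authors' reference code:

    ```python
    import random

    def bio_encode(template_bits: str, n: int = 4, seed: int = 42) -> str:
        rng = random.Random(seed)
        # Public random string of 2^n bits; shared, not user-specific.
        lookup = [rng.randint(0, 1) for _ in range(2 ** n)]
        words = [template_bits[i:i + n] for i in range(0, len(template_bits), n)]
        # Each n-bit word maps to a single bit, so 2^(n-1) different inputs
        # yield each output value; this many-to-one mapping is what makes
        # inverting a protected bit ambiguous (non-invertibility).
        return "".join(str(lookup[int(w, 2)]) for w in words if len(w) == n)

    print(bio_encode("1010011100101101"))  # 16 input bits -> 4 protected bits
    ```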
  • Hyung Chan KIM, Tatsunori ORII, Katsunari YOSHIOKA, Daisuke INOUE, Jun ...
    Article type: PAPER
    Subject area: Information Network
    2011 Volume E94.D Issue 9 Pages 1778-1791
    Published: September 01, 2011
    Released on J-STAGE: September 01, 2011
    JOURNAL FREE ACCESS
    Many malicious programs we encounter these days are armed with their own custom encoding methods (i.e., they are packed) to deter static binary analysis. Thus, dealing with unknown (possibly malicious) binary samples obtained from malware-collecting systems ordinarily begins with an unpacking step. In this paper, we focus on an empirical experimental evaluation of a generic unpacking method built on a dynamic binary instrumentation (DBI) framework, to assess the applicability of the DBI-based approach. First, we present yet another method of generic binary unpacking that extends a conventional unpacking heuristic. Our architecture manages shadow states to measure code exposure according to a simple byte state model (a sketch follows this entry). Among the available platforms, we built an unpacking implementation on the PIN DBI framework. Second, we describe evaluation experiments, conducted on wild malware collections, and discuss the workability as well as the limitations of our tool. Without prior knowledge of the 6,029 samples in the collections, we identified that around 64% of them were analyzable with our DBI-based generic unpacking tool configured to operate in fully automatic batch processing. After purging samples that were corrupted or unworkable on native systems, the figure was 72%.
    Download PDF (953K)
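    A minimal sketch of the write-then-execute heuristic with a per-byte shadow state, assuming the simple byte state model the paper builds on; in practice the callbacks come from a DBI framework such as PIN, while here they are plain functions over a dict:

    ```python
    CLEAN, WRITTEN, EXPOSED = 0, 1, 2
    shadow = {}  # address -> byte state

    def on_memory_write(addr: int) -> None:
        shadow[addr] = WRITTEN  # runtime-written byte: candidate unpacked code

    def on_instruction_fetch(addr: int) -> bool:
        """Return True when execution reaches a byte written at runtime,
        i.e., a layer of unpacked code has just been exposed."""
        if shadow.get(addr, CLEAN) == WRITTEN:
            shadow[addr] = EXPOSED
            return True
        return False

    on_memory_write(0x401000)
    print(on_instruction_fetch(0x401000))  # True: dump the exposed region here
    ```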
  • Yukun LIU, Dongju LI, Tsuyoshi ISSHIKI, Hiroaki KUNIEDA
    Article type: PAPER
    Subject area: Pattern Recognition
    2011 Volume E94.D Issue 9 Pages 1792-1799
    Published: September 01, 2011
    Released on J-STAGE: September 01, 2011
    JOURNAL FREE ACCESS
    As a global feature of fingerprint patterns, the Orientation Field (OF) plays an important role in fingerprint recognition systems. This paper proposes a fast binary-pattern-based orientation estimation with nearest-neighbor search, which greatly reduces the computational complexity. We also propose classified post-processing with an adaptive averaging strategy to increase the accuracy of the estimated OF. Experimental results confirm that, unlike conventional approaches, the proposed method can satisfy the strict requirements of embedded applications. (A sketch of the conventional gradient-based baseline follows this entry.)
    Download PDF (1034K)
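    For context, a minimal sketch of the conventional squared-gradient orientation-field estimation that fast methods like the one above aim to replace; this is the standard block-wise baseline, not the authors' binary-pattern estimator:

    ```python
    import numpy as np

    def orientation_field(img: np.ndarray, block: int = 16) -> np.ndarray:
        gy, gx = np.gradient(img.astype(float))
        # Working with doubled angles via (gxx - gyy, 2*gxy) makes ridge
        # directions 0 and 180 degrees agree when averaged.
        gxx, gyy, gxy = gx * gx, gy * gy, gx * gy
        h, w = img.shape
        of = np.zeros((h // block, w // block))
        for i in range(h // block):
            for j in range(w // block):
                sl = np.s_[i*block:(i+1)*block, j*block:(j+1)*block]
                num = 2.0 * gxy[sl].sum()
                den = (gxx[sl] - gyy[sl]).sum()
                of[i, j] = 0.5 * np.arctan2(num, den)  # ridge angle per block
        return of
    ```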
  • Dan-ni AI, Xian-hua HAN, Guifang DUAN, Xiang RUAN, Yen-wei CHEN
    Article type: PAPER
    Subject area: Pattern Recognition
    2011 Volume E94.D Issue 9 Pages 1800-1808
    Published: September 01, 2011
    Released on J-STAGE: September 01, 2011
    JOURNAL FREE ACCESS
    This paper addresses the problem of ordering color SIFT descriptors in independent component analysis for image classification. Component ordering is of great importance for image classification, since it is the foundation of feature selection. To select distinctive and compact independent components (ICs) of the color SIFT descriptors, we propose two ordering approaches based on local variation, termed localization-based IC ordering and sparseness-based IC ordering (a sketch follows this entry). We evaluate the performance of the proposed methods, the conventional IC selection method (global-variation-based component selection), and the original color SIFT descriptors on object and scene databases, and obtain the following two main results. First, the proposed methods obtain acceptable classification results in comparison with the original color SIFT descriptors. Second, the highest classification rate on the scene database is obtained with the global selection method, while the local ordering methods give the best performance on the object database.
    Download PDF (2639K)
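    A minimal sketch of ordering independent components by a sparseness statistic, in the spirit of the entry above; using kurtosis as the measure, random stand-in data, and a fixed k are illustrative assumptions, not the paper's exact criteria:

    ```python
    import numpy as np
    from scipy.stats import kurtosis
    from sklearn.decomposition import FastICA

    X = np.random.rand(500, 128)         # stand-in for color SIFT descriptors
    ica = FastICA(n_components=32, random_state=0)
    S = ica.fit_transform(X)             # independent-component responses

    # Rank components so the sparsest (most kurtotic) responses come first,
    # then keep the top k as the distinctive, compact feature subset.
    order = np.argsort(-kurtosis(S, axis=0))
    k = 16
    features = S[:, order[:k]]
    print(features.shape)                # (500, 16)
    ```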
  • Yousun KANG, Hiroshi NAGAHASHI, Akihiro SUGIMOTO
    Article type: PAPER
    Subject area: Image Recognition, Computer Vision
    2011 Volume E94.D Issue 9 Pages 1809-1816
    Published: September 01, 2011
    Released on J-STAGE: September 01, 2011
    JOURNAL FREE ACCESS
    Scene context plays an important role in scene analysis and object recognition. Among various sources of scene context, we focus on the scene-context scale, i.e., the effective scale of local context for classifying an image pixel in a scene. This paper presents random-forest-based image categorization using the scene-context scale. The proposed method uses random forests, which are ensembles of randomized decision trees. Since random forests are extremely fast in both training and testing, classification, clustering and regression can be performed in real time. We train multi-scale texton forests, which efficiently provide both a hierarchical clustering into semantic textons and local classification at various scale levels. The scene-context scale can be estimated from the entropy of the leaf nodes in the multi-scale texton forests (a sketch follows this entry). For image categorization, we combine the classified category distributions at each scale with the estimated scene-context scale. We evaluate our method on the MSRC21 segmentation dataset and find that using the scene-context scale improves image categorization performance. Our results outperform the state of the art in image categorization accuracy.
    Download PDF (1918K)
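    A minimal sketch of estimating the scene-context scale from leaf-node entropy, assuming each tree leaf stores a class histogram; selecting the scale whose leaf distribution is least ambiguous (lowest entropy), and the toy histograms, are illustrative readings of the paper's criterion:

    ```python
    import numpy as np

    def entropy(class_hist: np.ndarray) -> float:
        p = class_hist / class_hist.sum()
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    # Leaf class histograms reached by the same pixel at three scale levels.
    leaves = {1: np.array([8, 1, 1]),
              2: np.array([4, 4, 2]),
              3: np.array([3, 3, 4])}
    scores = {scale: entropy(h) for scale, h in leaves.items()}
    best_scale = min(scores, key=scores.get)
    print(best_scale)  # 1: the most confident (lowest-entropy) context scale
    ```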
  • Takashi NAGAMATSU, Ryuichi SUGANO, Yukina IWAMOTO, Junzo KAMAHARA, Nao ...
    Article type: PAPER
    Subject area: Multimedia Pattern Processing
    2011 Volume E94.D Issue 9 Pages 1817-1829
    Published: September 01, 2011
    Released on J-STAGE: September 01, 2011
    JOURNAL FREE ACCESS
    This paper presents a user-calibration-free method for estimating the point of gaze (POG). The method provides a fast and stable solution that realizes user-calibration-free gaze estimation more accurately than the conventional method, which uses the optical axis of the eye as an approximation of the visual axis. The optical axis of the eye can be estimated by using two cameras and two light sources; this estimation is carried out using a spherical model of the cornea. The point of intersection of the optical axis of the eye with the object that the user gazes at is termed the POA. On the basis of the assumption that the visual axes of both eyes intersect on the object, the POG is approximately estimated, using the binocular 3D eye model, as the midpoint of the line joining the POAs of both eyes (a sketch follows this entry). Based on this method, we developed a prototype system comprising a 19′′ display with two pairs of stereo cameras. We evaluated the system experimentally with 20 subjects seated at a distance of 600 mm from the display. The root-mean-square error (RMSE) of the POG measurement in the display screen coordinate system is 1.58°.
    Download PDF (1378K)
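    A minimal sketch of the binocular POG estimate described above: intersect each eye's optical axis with the display plane to get a POA, then take the midpoint. Placing the display at z = 0 and the example eye positions and directions are illustrative assumptions:

    ```python
    import numpy as np

    def poa(center: np.ndarray, direction: np.ndarray) -> np.ndarray:
        """Intersect the optical-axis ray with the display plane z = 0."""
        t = -center[2] / direction[2]
        return center + t * direction

    # Eye centers ~600 mm in front of the screen; directions roughly toward it.
    left = poa(np.array([-30.0, 0.0, 600.0]), np.array([0.02, 0.01, -1.0]))
    right = poa(np.array([30.0, 0.0, 600.0]), np.array([-0.01, 0.01, -1.0]))
    pog = (left + right) / 2.0  # midpoint of the two POAs approximates the POG
    print(pog)                  # x, y on the screen, z = 0
    ```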
  • Kyungbaek KIM
    Article type: LETTER
    Subject area: Information Network
    2011 Volume E94.D Issue 9 Pages 1830-1833
    Published: September 01, 2011
    Released on J-STAGE: September 01, 2011
    JOURNAL FREE ACCESS
    Distributed systems seek to construct a random overlay graph for robustness, efficient information dissemination and load balancing. Random walk-based overlay construction is a promising way to generate an ideal random scale-free overlay in distributed systems. However, simple random walk-based overlay construction can be affected by node churn: in particular, the number of edges increases and the degree distribution becomes skewed. This distortion can be exploited by malicious nodes. In this paper, we propose a modified random walk-based overlay construction supported by a logistic/trial-based decision function to compensate for the impact of node churn (a sketch follows this entry). Through event-driven simulations, we show that the decision function helps an overlay maintain the proper degree distribution, low diameter and low clustering coefficient with shorter random walks.
    Download PDF (113K)
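    A minimal sketch of a logistic decision function gating edge creation at the end of a random walk: the fuller a node's neighbor table, the less likely it accepts another edge. The target degree and steepness are illustrative parameters, not the letter's tuned values:

    ```python
    import math
    import random

    def accept_edge(current_degree: int, target_degree: int = 10,
                    steepness: float = 1.0) -> bool:
        # Logistic acceptance probability: ~1 well below the target degree,
        # ~0 well above it, damping the churn-induced edge growth.
        p = 1.0 / (1.0 + math.exp(steepness * (current_degree - target_degree)))
        return random.random() < p

    # A node far below the target almost always accepts; far above, it rarely does.
    print(accept_edge(3))   # usually True
    print(accept_edge(17))  # usually False
    ```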
  • Hanhoon PARK, Hideki MITSUMINE, Mahito FUJII
    Article type: LETTER
    Subject area: Image Recognition, Computer Vision
    2011 Volume E94.D Issue 9 Pages 1834-1838
    Published: September 01, 2011
    Released on J-STAGE: September 01, 2011
    JOURNAL FREE ACCESS
    This letter presents a novel edge-based blur metric that averages the ratios between the slopes and heights of edges. The metric computes the edge slopes more carefully than previous metrics, i.e., by averaging the edge gradients. The effectiveness of the proposed metric is confirmed by experiments with motion-blurred and Gaussian-blurred real images and by comparison with existing edge-based blur metrics. (A sketch of the slope/height idea follows this entry.)
    Download PDF (1030K)
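    A minimal sketch of the slope/height idea behind edge-based blur metrics like the one above, on a 1-D edge profile; the half-peak support rule and expressing the ratio as an edge width (height over averaged gradient, so larger means blurrier) are illustrative assumptions, not the letter's exact metric:

    ```python
    import numpy as np

    def edge_width(profile: np.ndarray) -> float:
        """Effective width of the strongest edge in a 1-D profile:
        edge height divided by the averaged edge gradient."""
        g = np.abs(np.gradient(profile.astype(float)))
        support = g >= 0.5 * g.max()   # pixels belonging to the edge slope
        slope = g[support].mean()      # averaged edge gradient
        height = profile.max() - profile.min()
        return height / slope if slope > 0 else 0.0

    sharp = np.array([0, 0, 0, 100, 100, 100], dtype=float)
    blurry = np.array([0, 20, 40, 60, 80, 100], dtype=float)
    print(edge_width(sharp), edge_width(blurry))  # 2.0 vs 5.0: blur widens edges
    ```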
  • Tadanori FUKAMI, Takamasa SHIMADA, Fumito ISHIKAWA, Bunnoshin ISHIKAWA ...
    Article type: LETTER
    Subject area: Biological Engineering
    2011 Volume E94.D Issue 9 Pages 1839-1842
    Published: September 01, 2011
    Released on J-STAGE: September 01, 2011
    JOURNAL FREE ACCESS
    The present study examined the evaluation of aging using the photic driving response, a measure used in routine EEG examinations. We examined 60 normal participants without EEG abnormalities, classified into three age groups (20∼29, 30∼59 and over 60 years; 20 participants per group). EEG was measured at rest and during photic stimulation (PS). We calculated Z-scores as a measure of enhancement and suppression due to visual stimulation at rest and during PS (a sketch follows this entry) and tested for between-group and intraindividual differences. We examined responses in the alpha frequency and harmonic frequency ranges separately, because alpha suppression can affect harmonic frequency responses that overlap the alpha frequency band. By fitting the data to a linear function, we found a negative correlation between Z-scores for the harmonics and age (CC: -0.740). In contrast, Z-scores for the alpha frequency range were positively correlated with age (CC: 0.590).
    Download PDF (270K)
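    A minimal sketch of the Z-score used above to quantify enhancement or suppression under photic stimulation: band power during PS expressed in units of the resting-state distribution. The array contents are illustrative, not study data:

    ```python
    import numpy as np

    def photic_z(rest_power: np.ndarray, ps_power: float) -> float:
        """Z > 0: enhancement during PS; Z < 0: suppression (e.g., alpha block)."""
        return (ps_power - rest_power.mean()) / rest_power.std(ddof=1)

    rest = np.array([4.8, 5.2, 5.0, 4.9, 5.1])  # band power over resting epochs
    print(photic_z(rest, 6.2))   # positive: driving-response enhancement
    print(photic_z(rest, 3.9))   # negative: suppression
    ```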